Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Counting OCR errors in typeset text
Sandberg, Jonathan S.
1995-03-01
Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
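The abstract's point about unstated string-matching details can be made concrete with a small example. The sketch below is not from the paper; the function name and the unit edit weights are our own assumptions. It counts OCR errors as a Levenshtein edit distance with unit insertion, deletion, and substitution costs; changing those weights changes the count, which is exactly the comparability problem the paper raises.

```python
# Minimal sketch of OCR error counting via Levenshtein (edit) distance.
# Unit weights for insertions, deletions, and substitutions are assumed;
# different weight choices yield different error counts.

def levenshtein_errors(reference: str, ocr_output: str) -> int:
    """Return the minimum number of single-character edits (insert,
    delete, substitute) turning ocr_output into reference."""
    m, n = len(reference), len(ocr_output)
    prev = list(range(n + 1))  # distances between reference[:0] and ocr_output[:j]
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == ocr_output[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

if __name__ == "__main__":
    ref = "The quick brown fox"
    ocr = "The qu1ck brown f0x."
    errs = levenshtein_errors(ref, ocr)
    print(f"errors = {errs}, accuracy = {1 - errs / len(ref):.3f}")
```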
Influences of Gate Operation Errors in the Quantum Counting Algorithm
Qing Ai; Yan-Song Li; Gui-Lu Long
2006-01-01
In this article, the error analysis in the quantum counting algorithm is investigated. It has been found that the random error plays as important a role as the systematic error does in the phase inversion operations. Both systematic and random errors are important in the Hadamard transformation. This is quite different from the Grover algorithm and the Shor algorithm.
Estimating the Count Error in the Australian Census
Chipperfield James
2017-03-01
In many countries, counts of people are a key factor in the allocation of government resources. However, it is well known that errors arise in Census counting of people (e.g., undercoverage due to missing people). Therefore, it is common for national statistical agencies to conduct one or more “audit” surveys that are designed to estimate and remove systematic errors in Census counting. For example, the Australian Bureau of Statistics (ABS) conducts a single audit sample, called the Post Enumeration Survey (PES), shortly after each Australian Population Census. This article describes the estimator used by the ABS to estimate the count of people in Australia. Key features of this estimator are that it is unbiased when there is systematic measurement error in Census counting and when nonresponse to the PES is nonignorable.
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID tag cardinality estimation; maximum likelihood; detection error.
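As a rough illustration of the ML idea only (not the paper's framed-slotted ALOHA / Rayleigh-fading likelihood), the sketch below assumes a simplified model in which each present tag is detected in each of m independent reader sessions with probability p; the function name and the model itself are our own assumptions.

```python
# Hedged sketch: ML estimate of RFID tag set cardinality when each present
# tag is detected in each of m independent reader sessions with probability
# p_detect. This simplified model stands in for the paper's full likelihood.
import math

def ml_tag_count(k_distinct: int, p_detect: float, m_sessions: int,
                 n_max: int = 10_000) -> int:
    """Maximize P(k | N) = C(N, k) q^k (1 - q)^(N - k) over N, where
    q = 1 - (1 - p_detect)**m_sessions is the chance a tag is seen at
    least once across the sessions."""
    q = 1.0 - (1.0 - p_detect) ** m_sessions
    best_n, best_logl = k_distinct, -math.inf
    for n in range(k_distinct, n_max + 1):
        logl = (math.lgamma(n + 1) - math.lgamma(k_distinct + 1)
                - math.lgamma(n - k_distinct + 1)
                + k_distinct * math.log(q)
                + (n - k_distinct) * math.log1p(-q))
        if logl > best_logl:
            best_n, best_logl = n, logl
    return best_n

if __name__ == "__main__":
    # e.g. 180 distinct tags observed over 3 sessions, per-session detection 0.6
    print(ml_tag_count(k_distinct=180, p_detect=0.6, m_sessions=3))
```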
Estimation of bias errors in measured airplane responses using maximum likelihood method
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam [Dept. of Nuclear Medicine, Severance Hospital, Yonsei University, Seoul (Korea, Republic of); Park, Hoon Hee [Dept. of Radiological Technology, Shingu college, Sungnam (Korea, Republic of)
2013-12-15
This study is aimed at evaluating the effect of T1/2 upon count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and at suggesting a new quality control method based on this effect. We produced a point source with 99mTcO4− of 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images with 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame with 10-second intervals in the first experiment (p>0.05), but there were significantly low average count rates for the sources over 111 MBq activity at 60 seconds/frame (p<0.01). According to the second analysis, based on linear regression of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We could not find abnormal fluctuation in the χ2 test of count rates (p>0.02), and we found homogeneity of variance among the gamma cameras in Levene's F-test (p>0.05). In the correlation analysis, the only significant correlation was a negative correlation between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T1/2 error from a change of gradient of -0.25% to +0.25%, if T1/2 is relatively long, or the gradient is high, the error increases accordingly. When estimating the value for the fourth camera, which has the highest gradient, from the above results, we could not see a T1/2 error within 60 minutes at that value. In conclusion, it is necessary for scintillation gamma cameras in the medical field to be rigorously managed for the quality of radiation
[Author not listed]
2009-01-01
In order to restrain the mid-spatial frequency error in the magnetorheological finishing (MRF) process, a novel part-random path is designed based on the theory of the maximum entropy method (MEM). Using a KDMRF-1000F polishing machine, one flat workpiece (98 mm in diameter) was polished. The mid-spatial frequency error in the region polished using the part-random path is much lower than that obtained using a common raster path. After one MRF iteration (7.46 min), the peak-to-valley (PV) is 0.062 wave (1 wave = 632.8 nm), the root-mean-square (RMS) is 0.010 wave, and no obvious mid-spatial frequency error is found. The result shows that the part-random path is a novel path which yields high form accuracy and low mid-spatial frequency error in the MRF process.
Maximum likelihood estimation for the double-count method with independent observers
Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.
1996-01-01
Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bear (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models was considered, varying from (1) the simplest, where the probability of detection was the same for both observers and was not affected by either distance from the flight line or group size, to (2) models where the probability of detection is different for the two observers and depends on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey and some recommendations are given for design of a survey over the larger Chukchi Sea between Russia and the United States.
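For readers unfamiliar with double-observer estimation, a minimal sketch of the simplest case in the abstract's series of models (constant, observer-specific detection probabilities and no distance or group-size covariates) is given below; it reduces to the classic two-sample Lincoln-Petersen estimator, and the function name is hypothetical. Extending the detection probabilities to logistic functions of covariates, as in the paper, requires a likelihood maximized numerically.

```python
# Hedged sketch of the simplest double-observer model: constant detection
# probabilities per observer, abundance via the Lincoln-Petersen estimator.

def double_observer_estimate(n_only_1: int, n_only_2: int, n_both: int):
    """Return (p1, p2, N_hat) for two independent observers."""
    seen_by_1 = n_only_1 + n_both
    seen_by_2 = n_only_2 + n_both
    p1 = n_both / seen_by_2           # P(observer 1 detects), estimated from
    p2 = n_both / seen_by_1           # groups the other observer saw
    n_hat = seen_by_1 * seen_by_2 / n_both   # Lincoln-Petersen abundance
    return p1, p2, n_hat

if __name__ == "__main__":
    # e.g. 12 groups seen only by observer 1, 9 only by observer 2, 30 by both
    print(double_observer_estimate(n_only_1=12, n_only_2=9, n_both=30))
```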
Representation of layer-counted proxy records as probability densities on error-free time axes
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2016-04-01
Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the
许有国
2005-01-01
Most people began to count in tens because they had ten fingers on their hands. But in some countries, people counted on one hand and used the three parts of their four fingers. So they counted in twelves, not in tens.
Reducing sampling error in faecal egg counts from black rhinoceros (Diceros bicornis).
Stringer, Andrew P; Smith, Diane; Kerley, Graham I H; Linklater, Wayne L
2014-04-01
Faecal egg counts (FECs) are commonly used for the non-invasive assessment of parasite load within hosts. Sources of error, however, have been identified in laboratory techniques and sample storage. Here we focus on sampling error. We test whether a delay in sample collection can affect FECs, and estimate the number of samples needed to reliably assess mean parasite abundance within a host population. Two commonly found parasite eggs in black rhinoceros (Diceros bicornis) dung, strongyle-type nematodes and Anoplocephala gigantea, were used. We find that collection of dung from the centre of faecal boluses up to six hours after defecation does not affect FECs. More than nine samples were needed to greatly improve confidence intervals of the estimated mean parasite abundance within a host population. These results should improve the cost-effectiveness and efficiency of sampling regimes, and support the usefulness of FECs when used for the non-invasive assessment of parasite abundance in black rhinoceros populations.
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, K S; Ranade, Kedar S.; Alber, Gernot
2005-01-01
The general conditions are discussed which quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and they finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result a necessary and a sufficient condition on asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
RCT: Module 2.03, Counting Errors and Statistics, Course 8768
Hillmer, Kurt T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-01
Radiological sample analysis involves the observation of a random process that may or may not occur and an estimation of the amount of radioactive material present based on that observation. Across the country, radiological control personnel are using the activity measurements to make decisions that may affect the health and safety of workers at those facilities and their surrounding environments. This course will present an overview of measurement processes, a statistical evaluation of both measurements and equipment performance, and some actions to take to minimize the sources of error in count room operations. This course will prepare the student with the skills necessary for radiological control technician (RCT) qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and by providing in-the-field skills.
Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan
2016-03-01
Typical nuclear decay constants are measured at an accuracy level of 10^-2. There are numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, which require decay constant accuracy at a level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead time and pile-up corrections. An approach to overcome these issues by continuously recording the detector current is presented. Other systematic corrections discussed include the time-dependent dead time due to background radiation, control of target motion and radiation flight path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make the measurement independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.
Veberic, Darko
2011-01-01
We present a novel method for combining the analog and photon-counting measurements of lidar transient recorders into reconstructed photon returns. The method takes into account the statistical properties of the two measurement modes and estimates the most likely number of arriving photons and the most likely values of acquisition parameters describing the two measurement modes. It extends and improves the standard combining ("gluing") methods and does not rely on any ad hoc definitions of the overlap region nor on any background subtraction methods.
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest.
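The core of the REML-MVN idea can be sketched in a few lines: draw parameter vectors from a multivariate normal centred on the REML estimate with covariance taken from the inverse information matrix, then push each draw through whatever function of G is of interest. The toy numbers below are illustrative assumptions only, not the Drosophila wing estimates, and the code is not the WOMBAT implementation.

```python
# Hedged sketch of REML-MVN resampling for a 2-trait G matrix.
import numpy as np

rng = np.random.default_rng(0)

# Assumed REML point estimate of the unique elements [g11, g12, g22] of G
# and the sampling covariance of those elements (inverse information).
g_hat = np.array([1.0, 0.3, 0.8])
cov_hat = np.diag([0.02, 0.01, 0.02])

def to_matrix(vech):
    g11, g12, g22 = vech
    return np.array([[g11, g12], [g12, g22]])

draws = rng.multivariate_normal(g_hat, cov_hat, size=5000)
# Function of G whose uncertainty we want: here, mean eigenvalue of G
# (an average-evolvability-style statistic).
stat = np.array([np.linalg.eigvalsh(to_matrix(d)).mean() for d in draws])
print(f"mean = {stat.mean():.3f}, approximate SE = {stat.std(ddof=1):.3f}")
```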
Curtis, Tyler E; Roeder, Ryan K
2017-07-06
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
Maximum error-bounded Piecewise Linear Representation for online stream approximation
Xie, Qing
2014-04-04
Given a time series data stream, the generation of error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms to generate the minimal number of segments. In the literature, optimal approximation algorithms have been effectively designed based on a transformed space rather than the time-value space, while desirable optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We prove the theoretical equivalence between the time-value space and the transformed space, and also demonstrate the superiority of OptimalPLR in processing efficiency in practice. The extensive results of empirical evaluation support and demonstrate the effectiveness and efficiency of our proposed algorithms.
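A hedged sketch of a greedy error-bounded PLR in the spirit of GreedyPLR (not necessarily the paper's exact algorithm) is shown below: each segment is anchored at its first point and extended while some slope keeps every point within the L∞ bound eps, a so-called swing-filter construction. Function and variable names are our own.

```python
# Hedged sketch: greedy error-bounded PLR with segments anchored at their
# first point; a segment grows while the feasible slope window is non-empty.
from typing import List, Tuple

def greedy_plr(points: List[Tuple[float, float]], eps: float):
    """Return segments as (t_start, v_start, slope, t_end); every point in a
    segment lies within eps of the segment's line."""
    segments = []
    i, n = 0, len(points)
    while i < n:
        t0, v0 = points[i]
        lo, hi = float("-inf"), float("inf")   # feasible slope window
        j = i + 1
        while j < n:
            t, v = points[j]
            lo = max(lo, (v - eps - v0) / (t - t0))
            hi = min(hi, (v + eps - v0) / (t - t0))
            if lo > hi:                         # next point not representable
                break
            j += 1
        slope = 0.0 if j == i + 1 else (lo + hi) / 2.0
        segments.append((t0, v0, slope, points[j - 1][0]))
        i = j
    return segments

if __name__ == "__main__":
    stream = [(t, 0.1 * t + (0.3 if t % 4 == 0 else -0.2)) for t in range(1, 21)]
    for seg in greedy_plr(stream, eps=0.5):
        print(seg)
```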
Böhmer, L; Hildebrandt, G
1998-01-01
In contrast to the prevailing automated chemical analytical methods, classical microbiological techniques are subject to considerable material- and human-dependent sources of error. These effects must be objectively considered when assessing the reliability and representativeness of a test result. As an example of error analysis, the deviation of bacterial counts and the influence of the time of testing, the bacterial species involved (total bacterial count, coliform count), and the detection method used (pour-/spread-plate) were determined by repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of the deviation components, namely the unavoidable random sampling error as well as the methodical error and the variation between parallel samples, was made possible by means of a test design in which variance analysis was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the Poisson-distribution model and allowed a reliable hygiene evaluation of lots even with few samples. Consequently, regardless of the examination procedure used, the setting up of parallel dilution series can be disregarded. 2. With increasing storage period, bacterial multiplication, especially of psychrotrophs, leads to unpredictable changes in the bacterial profile and density. With the increase in error between samples, it is common to find packages which have acceptable microbiological quality but are already spoiled by the labeled expiry date. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are expected to be found in milk directly after filling. Since no bacterial multiplication takes place
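The Poisson argument in conclusion 1 can be illustrated with a one-line calculation: for a Poisson-distributed plate count, the relative sampling deviation is 1/sqrt(expected count), so it shrinks quickly as counts grow and a few samples suffice for freshly filled lots. The numbers below are purely illustrative.

```python
# Illustration (not from the paper): relative sampling deviation of a
# Poisson-distributed colony count as a function of its expected value.
import math

for expected_colonies in (10, 50, 200, 1000):
    cv = 1.0 / math.sqrt(expected_colonies)   # Poisson: sd = sqrt(mean)
    print(f"expected colonies {expected_colonies:5d}: "
          f"relative standard deviation ≈ {100 * cv:.1f}%")
```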
Qing-ping Deng; Xue-jun Xu; Shu-min Shen
2000-01-01
This paper deals with the Crouzeix-Raviart nonconforming finite element approximation of the Navier-Stokes equations in a plane bounded domain, using the so-called velocity-pressure mixed formulation. Quasi-optimal maximum norm error estimates of the velocity and its first derivatives and of the pressure are derived for the nonconforming C-R scheme of the stationary Navier-Stokes problem. The analysis is based on the weighted inf-sup condition and the technique of weighted Sobolev norms. As a by-product, the optimal L2-error estimate for the nonconforming finite element approximation is obtained.
Lee, C.-H.; Herget, C. J.
1976-01-01
This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Effect of Measurement vs. Counting Errors on Parameters' Covariance in Neutron Tomography Analysis
Odyniec, Michał [National Security Technologies, LLC. (NSTec), Mercury, NV (United States); Blair, Jerome J. [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)
2013-06-13
We present here a method that estimates the relative effect of the counting uncertainty and of the instrument uncertainty on that of the parameters in a parametric model for neutron time of flight. The final result, obtained independently of calculation of the parameter values from measured data, presents explicitly the ratio of the two uncertainties in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum
Orus Perez, Raul
2017-04-01
For single-frequency users of the global satellite navigation system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well-known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of the real-time precise point position (PPP) globally. Therefore, testing of the ionospheric models is a key issue for code-based single-frequency users, which constitute the main user segment. Therefore, the testing proposed in this paper is straightforward and uses the PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify—for dual-frequency users—the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.
Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum
Orus Perez, Raul
2016-11-01
For single-frequency users of the global satellite navigation system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well-known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of the real-time precise point position (PPP) globally. Therefore, testing of the ionospheric models is a key issue for code-based single-frequency users, which constitute the main user segment. Therefore, the testing proposed in this paper is straightforward and uses the PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify—for dual-frequency users—the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.
Error estimates for the analysis of differential expression from RNA-seq count data
Conrad J. Burden
2014-09-01
Background. A number of algorithms exist for analysing RNA-sequencing data to infer profiles of differential gene expression. Problems inherent in building algorithms around statistical models of overdispersed count data are formidable and frequently lead to non-uniform p-value distributions for null-hypothesis data and to inaccurate estimates of false discovery rates (FDRs). This can lead to an inaccurate measure of significance and loss of power to detect differential expression. Results. We use synthetic and real biological data to assess the ability of several available R packages to accurately estimate FDRs. The packages surveyed are based on statistical models of overdispersed Poisson data and include edgeR, DESeq, DESeq2, PoissonSeq and QuasiSeq. Also tested is an add-on package to edgeR and DESeq which we introduce called Polyfit. Polyfit aims to address the problem of a non-uniform null p-value distribution for two-class datasets by adapting the Storey–Tibshirani procedure. Conclusions. We find that the best performing package, in the sense that it achieves a low FDR which is accurately estimated over the full range of p-values, albeit with a very slow run time, is the QLSpline implementation of QuasiSeq. This finding holds provided the number of biological replicates in each condition is at least 4. The next best performing packages are edgeR and DESeq2. When the number of biological replicates is sufficiently high, and within a range accessible to multiplexed experimental designs, the Polyfit extension improves the performance of DESeq (for approximately 6 or more replicates per condition), making its performance comparable with that of edgeR and DESeq2 in our tests with synthetic data.
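The abstract's central problem, non-uniform null p-values when overdispersion is ignored, is easy to reproduce. The toy Python check below (not one of the surveyed R packages) simulates null negative-binomial counts and applies a Poisson-based conditional test; the fraction of null p-values below 0.05 comes out well above 5%, which is the anti-conservative behaviour the paper is concerned with.

```python
# Toy illustration: overdispersed (negative binomial) null counts tested
# with a Poisson-based conditional test give too many small p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes, n_reps = 2000, 4
mean, dispersion = 100.0, 0.2          # NB: var = mu + dispersion * mu^2

# NB parameterisation used by numpy: n = 1/dispersion, p = n / (n + mu)
nb_n = 1.0 / dispersion
nb_p = nb_n / (nb_n + mean)

pvals = []
for _ in range(n_genes):
    a = int(rng.negative_binomial(nb_n, nb_p, n_reps).sum())
    b = int(rng.negative_binomial(nb_n, nb_p, n_reps).sum())
    # Poisson assumption: conditional on the total, a ~ Binomial(total, 0.5)
    pvals.append(stats.binomtest(a, a + b, 0.5).pvalue)

pvals = np.array(pvals)
print("fraction of null p-values < 0.05:", (pvals < 0.05).mean())  # >> 0.05
```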
Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul
2016-07-01
Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics are critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes higher contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever expanding assortment of optical geometries, such as planos, spheres, on and off axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5-2,000mm in diameter. MRF can be used for form corrections; turning a sphere into an asphere or free form, but more commonly for figure corrections achieving figure errors as low as 1nm RMS while using careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce `perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1mm simultaneously with form error correction. Efficient midspatial frequency corrections make use of optimized process conditions including raster polishing in combination with a small tool size. Furthermore, a novel MRF
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W
2016-03-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
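A minimal sketch of maximum-likelihood lifetime fitting for sparse photon-counting data is given below: a single-exponential decay with a per-bin Poisson likelihood. Convolution with the instrument response function, which the study always includes, is omitted here for brevity, and all parameter values and names are our own assumptions rather than the rose bengal data.

```python
# Hedged sketch: ML fit of a single-exponential fluorescence decay with a
# Poisson likelihood per time bin (no IRF convolution).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
tau_true, n_bins, dt = 0.53, 256, 0.05          # ns, bins, ns per bin
t = (np.arange(n_bins) + 0.5) * dt
expected = 200 * np.exp(-t / tau_true) * dt / tau_true   # ~200 total counts
counts = rng.poisson(expected)

def neg_log_likelihood(params):
    log_amp, log_tau = params
    mu = np.exp(log_amp) * np.exp(-t / np.exp(log_tau))
    mu = np.maximum(mu, 1e-12)
    return np.sum(mu - counts * np.log(mu))     # Poisson NLL up to a constant

res = minimize(neg_log_likelihood,
               x0=[np.log(counts.max() + 1.0), np.log(0.3)],
               method="Nelder-Mead")
print(f"ML lifetime estimate: {np.exp(res.x[1]):.3f} ns (true {tau_true} ns)")
```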
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Kukush, Alexander; Schneeweiss, Hans
2004-01-01
We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al. (2002). For small measurement error variances they are equal up to the order of the measurement error variance and thus nearly equally efficient.
A Maximum-error Specification Oriented Gross Error Identification Method
普仕凡; 韩旭; 李智生; 李钊
2014-01-01
A maximum-error specification oriented gross error identification method based on a generalized Pauta criterion is proposed, which provides a reference for gross error identification under a maximum-error specification. It is assumed that the target observation sequence follows an IID normal distribution. Then, through a risk analysis of mistaking the maximum observed value for gross-error data, some modifications are made to the classic Pauta criterion, and the generalized Pauta criterion is introduced. The calculation method for the gross error identification threshold is also given. Practical application test results show that the method is feasible.
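A minimal sketch of a Pauta-style (3-sigma) screen applied to the maximum of an observation sequence is given below; the paper's generalized threshold comes from a risk analysis that is not reproduced here, so the multiplier k is simply left as a parameter, and the function name is hypothetical.

```python
# Hedged sketch: flag the sample maximum as a gross error when it lies more
# than k standard deviations above the mean of the remaining observations.
import numpy as np

def flag_gross_maximum(x: np.ndarray, k: float = 3.0) -> bool:
    """Classic Pauta-style screen; the paper's generalized criterion would
    replace the fixed k with a risk-derived threshold."""
    i_max = int(np.argmax(x))
    rest = np.delete(x, i_max)
    return bool(x[i_max] > rest.mean() + k * rest.std(ddof=1))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    data = rng.normal(10.0, 1.0, size=200)
    data[42] = 18.0                       # injected gross error
    print(flag_gross_maximum(data))       # True
```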
Kaganovich, Igor D., E-mail: ikaganov@pppl.gov [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Vay, Jean-Luc [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Friedman, Alex [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550 (United States)
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
Vilmos Simon
2013-01-01
The aim of this study is to define optimal tooth modifications, introduced by appropriately chosen head-cutter geometry and machine tool settings, to simultaneously minimize tooth contact pressure and angular displacement error of the driven gear (transmission error) of face-hobbed spiral bevel gears. As a result of these modifications, the gear pair becomes mismatched, and a point contact replaces the theoretical line contact. In the applied loaded tooth contact analysis it is assumed that the point contact under load spreads over a surface along the whole or part of the "potential" contact line. A computer program was developed to implement the formulation provided above. By using this program, the influence of tooth modifications introduced by the variation in machine tool settings and in head-cutter data on load and pressure distributions, transmission errors, and fillet stresses is investigated and discussed. The correlation between the ease-off obtained by pinion tooth modifications and the corresponding tooth contact pressure distribution is investigated and the obtained results are presented.
An Exact Maximum Likelihood Error Registration Algorithm for Radar Network
丰昌政; 薛强
2012-01-01
To address the error registration problems of the least squares method and the Kalman filter method in radar network systems, an exact maximum likelihood error registration algorithm for radar networks is proposed. Using a maximum likelihood registration algorithm based on circular polar projection and the geometric relationships among the radar stations, the systematic errors of the radar network are estimated by a maximum likelihood mixed Gauss-Newton iterative method, and simulations are carried out. The simulation results show that the registration method has good consistency and can be used for error registration in multi-radar networks.
T. Gnanasekaran
2008-01-01
Problem statement: In this study we propose a method to improve the performance of the Maximum A-Posteriori Probability (MAP) algorithm, which is used in turbo decoders. Previously, the performance of turbo decoders was improved by means of scaling the channel reliability value. Approach: A modification of the MAP algorithm is proposed in this study, which achieves further improvement in forward error correction by scaling the extrinsic information in both decoders without introducing any complexity. The encoder is modified with a new puncturing matrix, which yields Unequal Error Protection (UEP). This modified MAP algorithm is analyzed with the traditional turbo code system with Equal Error Protection (EEP) and also with Unequal Error Protection (UEP), both in AWGN and fading channels. Results: MAP and modified MAP achieve a coding gain of 0.6 dB over EEP in the AWGN channel. MAP and modified MAP achieve coding gains of 0.4 dB and 0.9 dB over EEP, respectively, in the Rayleigh fading channel. Modified MAP in UEP classes 1 and 2 gained 0.8 dB and 0.6 dB, respectively, in the AWGN channel, whereas in the fading channel classes 1 and 2 gained 0.4 dB and 0.6 dB, respectively. Conclusion/Recommendations: The modified MAP algorithm improves the Bit Error Rate (BER) performance in EEP as well as UEP, both in AWGN and fading channels. We propose the modified MAP error correction algorithm with UEP for broadband communication.
Martin eBouda
2016-02-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterise plant anatomical complexity or space-filling characteristics for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantisation error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterise the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitised in 3D and subjected to box-counts. A pattern search algorithm was used to minimise QE by optimising grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE due to both grid position and orientation was a significant source of error in FD estimates, but pattern search provided an efficient means of minimising it. Pattern search had a higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitisations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did
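A minimal 2D Python sketch of box-counting with grid-offset optimisation may make the quantisation-error point concrete; the coarse scan over offsets below is only a stand-in for the pattern search used in the study, and all parameter values are illustrative.

```python
import numpy as np

def box_count(points, size, offset):
    """Number of occupied boxes of side `size` for a given grid offset."""
    idx = np.floor((points - offset) / size).astype(int)
    return len({tuple(row) for row in idx})

def min_count(points, size, n_offsets=8):
    """Reduce quantisation error by taking the minimum count over offsets
    (a coarse stand-in for the paper's pattern-search optimisation)."""
    shifts = np.linspace(0.0, size, n_offsets, endpoint=False)
    return min(box_count(points, size, np.array([dx, dy]))
               for dx in shifts for dy in shifts)

def fractal_dimension(points, sizes):
    counts = [min_count(points, s) for s in sizes]
    # FD is the negative slope of log(count) versus log(box size)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Example: points along a straight line should give FD close to 1
pts = np.column_stack((np.linspace(0, 1, 2000), np.linspace(0, 1, 2000)))
print(fractal_dimension(pts, sizes=np.array([0.2, 0.1, 0.05, 0.025])))
```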
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is always a demand for reliability, improved range, and speed. Many wireless networks such as OFDM, CDMA2000, WCDMA, etc., provide a solution to this problem when incorporated with Multiple-Input Multiple-Output (MIMO) technology. Due to the complexity of the signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a method of MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing, and a solution for area reduction in the MIMO Maximum Likelihood Receiver (MLE) using Sorted QR Decomposition and a unitary transformation method is analyzed. It provides a unified approach, reduces ISI, and offers better performance at low cost. The receiver pre-processor architecture based on the Minimum Mean Square Error (MMSE) criterion is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations are transformations of matrices which maintain the Hermitian nature of the matrix, and the multiplication and addition relationships between the operators. This helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bound and the algorithm is well suited for fixed-point arithmetic.
Optimal allocation of point-count sampling effort
Barker, R.J.; Sauer, J.R.; Link, W.A.
1993-01-01
Both unlimited- and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between the proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of the average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data on the cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, the optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
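For the third criterion above (maximum expected total count), the optimisation reduces to a one-dimensional search. The following Python sketch assumes an exponential detection function and illustrative parameter values (total survey time, travel time, detection rate, birds per point); none of these values come from the paper.

```python
import numpy as np

def expected_total_count(t_count, total_time=180.0, travel_time=5.0,
                         rate=0.4, n_per_point=10.0):
    """Expected total number of detections for a given count duration.

    Assumes an exponential detection function p(t) = 1 - exp(-rate * t)
    and that total_time is split between counting and travel (minutes).
    All parameter values here are purely illustrative.
    """
    n_points = total_time / (t_count + travel_time)
    return n_points * n_per_point * (1.0 - np.exp(-rate * t_count))

durations = np.linspace(0.5, 30.0, 500)
best = durations[np.argmax([expected_total_count(t) for t in durations])]
print(f"count duration maximizing expected total count: {best:.1f} min")
```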
Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex
2012-06-01
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔΕb. In the presence of large voltage errors, δU≫ΔEb, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
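The scaling reported in the abstract can be written, as a hedged sketch (the proportionality constant and exact definitions are not given here), as

$$ C_{\max} \;\propto\; \left(\frac{\delta U}{U}\cdot\frac{\Delta E_b}{E_b}\right)^{-1/2}, \qquad \delta U \gg \Delta E_b, $$

where $\delta U/U$ denotes the relative error in the velocity modulation and $\Delta E_b/E_b$ the relative intrinsic energy spread of the beam ions, i.e., the maximum compression ratio is inversely proportional to the geometric mean of the two relative errors.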
何洋; 纪昌明; 田开华; 张验科; 李传刚
2016-01-01
To better study the distribution law of runoff forecast errors, the maximum entropy principle is applied and a maximum entropy model for the distribution of runoff forecast errors is established. Using the runoff forecast series of the Guandi Reservoir as an example, the probability density functions and distribution curves of the runoff forecast error are calculated for different forecast lead times. The distribution curves are compared with the theoretical normal distribution curves and the sample histograms; the results show that the error distribution obtained by the maximum entropy method describes the distribution characteristics of the runoff forecast error better. Considering the wet-dry variation of runoff within the year, the runoff series is divided into dry, flood, and transition seasons, the error distribution of each period is analyzed separately, and the confidence levels of the forecast error at different confidence intervals are given. This gives a better grasp of the distribution law of runoff forecast errors and provides a new way to improve the accuracy of runoff forecasting.
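As a hedged sketch of the method (the number of moment constraints actually used is not stated in the abstract), the maximum entropy density for the forecast error x subject to the first m sample moments has the exponential-polynomial form

$$ p(x) = \exp\!\Big(-\lambda_0 - \sum_{i=1}^{m} \lambda_i\, x^i\Big), \qquad \int x^i\, p(x)\,dx = \mu_i,\quad i = 1,\dots,m, $$

with the Lagrange multipliers $\lambda_i$ determined numerically from the sample moments $\mu_i$ of the forecast-error series.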
Croft, Stephen [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States); Burr, Tom [International Atomic Energy Agency (IAEA), Vienna (Austria); Favalli, Andrea [Los Alamos National Laboratory (LANL), MS E540, Los Alamos, NM 87545 (United States); Nicholson, Andrew [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States)
2016-03-01
The declared linear density of ^{238}U and ^{235}U in fresh low-enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active-mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of ^{235}U (the response) in order to estimate the model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
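A minimal Python sketch of the nonlinear fit may be useful; the Padé-type functional form, the parameter names, and the data below are assumptions for illustration, not the calibration model or data used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade(rate, a, b):
    """Padé-type calibration curve: response (235U linear density) as a
    function of the measured coincidence rate (functional form assumed)."""
    return a * rate / (1.0 + b * rate)

# x: measured coincidence rate (1/s), y: declared 235U linear density (g/cm)
x = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # made-up example data
y = np.array([24.0, 45.0, 83.0, 143.0, 222.0])

popt, pcov = curve_fit(pade, x, y, p0=(0.5, 0.001))
print("fitted a, b:", popt)
```

Because the measured rate sits on the predictor side, its counting noise is what drives the paper's comparison of nonlinear fitting against the linearized alternative (rewriting y = a·x/(1 + b·x) as 1/y = (1/a)(1/x) + b/a).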
Carb counting; Carbohydrate-controlled diet; Diabetic diet; Diabetes-counting carbohydrates ... Many foods contain carbohydrates (carbs), including: Fruit and fruit juice Cereal, bread, pasta, and rice Milk and milk products, soy milk Beans, legumes, ...
National Oceanic and Atmospheric Administration, Department of Commerce — Database of seal counts from aerial photography. Counts by image, site, species, and date are stored in the database along with information on entanglements and...
... their spleen removed surgically Use of birth control pills (oral contraceptives) Some conditions may cause a temporary (transitory) increased ... increased platelet counts include estrogen and birth control pills (oral contraceptives). Mildly decreased platelet counts may be seen in ...
Geist, William H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-12-01
This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: ^{240}Pu_{eff} mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
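For orientation only (and not taken from these slides), the simplest point-model relation quoted in the multiplicity-counting literature for the singles rate is

$$ S = F\,\varepsilon\,M\,\nu_{s1}\,(1+\alpha), $$

where $F$ is the spontaneous fission rate associated with the ^{240}Pu_{eff} mass, $\varepsilon$ the detection efficiency, $M$ the leakage multiplication, $\nu_{s1}$ the first factorial moment of the spontaneous-fission neutron multiplicity distribution, and $\alpha$ the ratio of (α,n) to spontaneous-fission neutrons; the doubles and triples expressions involve the higher factorial moments together with die-away and gate-fraction factors.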
杜金华; 王莎
2013-01-01
The authors first introduce three typical word posterior probability (WPP) features used for translation error detection and classification, namely fixed-position WPP, sliding-window WPP, and alignment-based WPP, and analyze their impact on detection performance. Each WPP feature is then combined with three linguistic features (word, POS, and LG parsing knowledge) in a maximum entropy classifier to predict translation errors, and experimental verification and comparison are carried out on Chinese-to-English NIST datasets. The results show that the influence of the different WPP features on the classification error rate (CER) is significant, and that combining WPP with linguistic features can significantly reduce the CER and improve the error prediction capability of the classifier.
... radiation therapy, or infection) Cirrhosis of the liver Anemia caused by low iron levels, or low levels of vitamin B12 or folate Chronic kidney disease Reticulocyte count may be higher during pregnancy.
Understanding Blood Counts: Blood cell counts give ... your blood that's occupied by red cells. Normal blood counts fall within a range ...
White Blood Cell Count. Also known as: ... Count; Leukocyte Count; White Count. Formal name: White Blood Cell Count. Related tests: Complete Blood Count, Blood Smear, ...
Perry, Mike; Kader, Gary
1998-01-01
Presents an activity on the simplification of penguin counting by employing the basic ideas and principles of sampling to teach students to understand and recognize its role in statistical claims. Emphasizes estimation, data analysis and interpretation, and central limit theorem. Includes a list of items for classroom discussion. (ASK)
Damonte, Kathleen
2004-01-01
Scientists use sampling to get an estimate of things they cannot easily count. A population is made up of all the organisms of one species living together in one place at the same time. All of the people living together in one town are considered a population. All of the grasshoppers living in a field are a population. Scientists keep track of the…
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
White blood cell counting system
1972-01-01
The design, fabrication, and tests of a prototype white blood cell counting system for use in the Skylab IMSS are presented. The counting system consists of a sample collection subsystem, sample dilution and fluid containment subsystem, and a cell counter. Preliminary test results show the sample collection and the dilution subsystems are functional and fulfill design goals. Results for the fluid containment subsystem show the handling bags cause counting errors due to: (1) adsorption of cells to the walls of the container, and (2) inadequate cleaning of the plastic bag material before fabrication. It was recommended that another bag material be selected.
Alfredo Tomasetta
2010-06-01
Timothy Williamson supports the thesis that every possible entity necessarily exists, and so he needs to explain how a possible son of Wittgenstein’s, for example, exists in our world: he exists as a merely possible object (MPO), a pure locus of potential. Williamson presents a short argument for the existence of MPOs: how many knives can be made by fitting together two blades and two handles? Four: at most two are concrete objects, the others being merely possible knives and merely possible objects. This paper defends the idea that one can avoid reference and ontological commitment to MPOs. My proposal is that MPOs can be dispensed with by using the notion of rules of knife-making. I first present a solution according to which we count lists of instructions - selected by the rules - describing physical combinations between components. This account, however, has its own difficulties, and I eventually suggest that one can find a way out by admitting possible worlds, entities which are more commonly accepted - at least by philosophers - than MPOs. I maintain that, in answering Williamson’s questions, we count classes of physically possible worlds in which the same instance of a general rule is applied.
Zhang, Min-juan; Wang, Zhi-bin; Li, Xiao; Li, Jin-hua; Wang, Yan-chao
2015-05-01
In order to improve the accuracy and stability of the reconstructed spectra, stability analysis and precise measurement of the maximum optical path difference of the interferograms in photo-elastic modulator Fourier transform spectrometers (PEM-FTS) are necessary. The maximum optical path difference of the interferograms is an uncertain parameter, related to the resonant state, the frequency-thermal drift characteristic, and the driving voltage of the PEM. Therefore, based on the principle of the photo-elastic modulator Fourier transform interferometer, a model of the frequency-thermal drift is built and the variation of the maximum optical path difference is analyzed. A measuring method for the maximum optical path difference is put forward, namely zero-crossing counting of the laser's interference signal with the PEM driving signal as the reference. In this method, a dual-channel high-speed comparator and an FPGA are used to transform the sine wave into a square wave, and to realize zero-crossing trigger counting and error compensation. With a 670.8 nm laser as the source producing the reference interferograms in the PEM interferometer, a maximum optical path difference of 77.471 µm could be measured by zero-crossing counting; the measurement error is less than 0.167 nm, and the reconstructed spectral peak wavelength error for the infrared blackbody is less than 2 nm. The results meet the requirements of the PEM-FTS.
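A toy Python sketch of the zero-crossing idea follows; the relation used (one zero crossing of the fringe signal per λ/2 of optical path change over a monotonic sweep) is a simplification of the hardware scheme described, and the synthetic signal and numbers are illustrative only.

```python
import numpy as np

LAMBDA_REF = 670.8e-9  # reference laser wavelength (m)

def max_opd_from_fringes(signal):
    """Estimate the optical path difference swept during a monotonic segment
    of the PEM modulation from the reference-laser interference signal.

    Each zero crossing of the AC-coupled fringe signal corresponds to an OPD
    change of lambda/2 (simplified: the real instrument gates the count on
    the PEM drive signal and compensates residual errors in hardware).
    """
    ac = signal - signal.mean()
    positive = ac > 0
    crossings = np.count_nonzero(positive[1:] != positive[:-1])
    return crossings * LAMBDA_REF / 2.0

# Synthetic example: a sweep covering 77.47 um of optical path difference
opd = np.linspace(0.0, 77.47e-6, 200000)
fringes = np.cos(2 * np.pi * opd / LAMBDA_REF)
print(max_opd_from_fringes(fringes) * 1e6, "um")
```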
An annual layer counted EDML time scale covering the past 16700 years
Vinther, B. M.; Clausen, H. B.; Kipfstuhl, S.; Fischer, H.; Bigler, M.; Oerter, H.; Wegner, A.; Wilhelms, F.; Severi, M.; Udisti, R.; Beer, J.; Steinhilber, F.; Muscheler, R.; Rasmussen, S. O.; Svensson, A.
2012-04-01
Using high-resolution chemical impurity and dielectric profiling data, annual layers have been counted on the EPICA ice core from Dronning Maud Land (EDML), Antarctica, spanning the past 16,700 years. The methodology used for counting Greenland ice cores and creating the Greenland Ice Core Chronology 2005 (GICC05) [Rasmussen et al., 2006] has also been implemented for the EDML counting. The estimated maximum counting error for the EDML counting is approximately 5%, but a preliminary volcanic matching with Greenland ice core records suggests differences of 1% or less during the Holocene between the EDML counting and GICC05. A comparison of cosmogenic isotope records from EDML and Greenland also suggests differences of less than 1% between the two annual-layer-counted chronologies. Reference: Rasmussen, S.O., Andersen, K.K., Svensson, A., Steffensen, J.P., Vinther, B.M., Clausen, H.B., Andersen, M.L.S., Johnsen, S.J., Larsen, L.B., Dahl-Jensen, D., Bigler, M., Röthlisberger, R., Fischer, H., Goto-Azuma, K., Hansson, M.E., Ruth, U., A new Greenland ice core chronology for the last glacial termination, Journal of Geophysical Research, Vol. 111, D06102, doi:10.1029/2005JD006079, 2006.
Manual and automated reticulocyte counts.
Simionatto, Mackelly; de Paula, Josiane Padilha; Chaves, Michele Ana Flores; Bortoloso, Márcia; Cicchetti, Domenic; Leonart, Maria Suely Soares; do Nascimento, Aguinaldo José
2010-12-01
Manual reticulocyte counts were examined under light microscopy, using the property whereby a supravital stain precipitates residual ribosomal RNA, versus the automated flow methods, for which greater precision and the ability to determine both mature and immature reticulocyte fractions have been suggested. Three hundred and forty-one venous blood samples from patients were analyzed, of which 224 were from newborns and the rest from adults (51 males and 66 females), with ages between 0 and 89 years, as part of the routine hematological examinations at the Clinical Laboratory of the Hospital Universitário do Oeste do Paraná. This work aimed to compare manual and automated methodologies for reticulocyte counting and to evaluate random and systematic errors. The results obtained showed that the difference between the two methods was very small, with an estimated 0.4% systematic error and 3.9% random error. Thus, it has been confirmed that both methods, when well conducted, can precisely reflect the reticulocyte counts for adequate clinical use.
VersaCount: customizable manual tally software for cell counting
DeRisi Joseph L
2010-01-01
Abstract. Background: The manual counting of cells by microscopy is a commonly used technique across biological disciplines. Traditionally, hand tally counters have been used to track event counts. Although this method is adequate, a number of inefficiencies arise when managing large numbers of samples or large sample sizes. Results: We describe software that mimics a traditional multi-register tally counter. Full customizability allows operation on any computer with minimal hardware requirements. The efficiency of counting large numbers of samples and/or large sample sizes is improved through the use of a "multi-count" register that allows single keystrokes to correspond to multiple events. Automatically updated multi-parameter values are implemented as user-specified equations, reducing errors and the time required for manual calculations. The user interface was optimized for use with a touch screen and numeric keypad, eliminating the need for a full keyboard and mouse. Conclusions: Our software provides an inexpensive, flexible, and productivity-enhancing alternative to manual hand tally counters.
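The "multi-count" register idea can be sketched in a few lines of Python; this toy class is not the published software, and the key bindings and derived quantity are illustrative.

```python
class TallyCounter:
    """Toy multi-register tally counter with 'multi-count' keys.

    A key is bound to (register, increment) so that one keystroke can record
    several events at once; derived quantities are recomputed after every
    keystroke.  Illustrative only -- not the VersaCount code.
    """

    def __init__(self, keymap):
        self.keymap = keymap                      # e.g. {'a': ('live', 1)}
        self.counts = {reg: 0 for reg, _ in keymap.values()}

    def press(self, key):
        register, increment = self.keymap[key]
        self.counts[register] += increment
        return self.counts

    def fraction(self, register):
        total = sum(self.counts.values())
        return self.counts[register] / total if total else 0.0

counter = TallyCounter({'a': ('live', 1), 's': ('dead', 1), 'd': ('live', 5)})
for key in "aaasd":
    counter.press(key)
print(counter.counts, counter.fraction('live'))
```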
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s equation.
Coplestone-Loomis, Lenny
1981-01-01
Pumpkin seeds are counted after students convert pumpkins to jack-o-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and to construct a visual representation of 1,000. (MP)
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
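A hedged Python sketch of the central step, designing a prediction-error (maximum entropy) filter from Toeplitz autocorrelation equations, is given below; scipy's Toeplitz solver stands in for the Levinson recursion, and the synthetic traces only illustrate how such a filter would be applied, not the full receiver-function procedure.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def prediction_error_filter(x, order):
    """Design a prediction-error filter from the autocorrelation of x.

    Solves the Toeplitz normal equations R a = r (a Levinson-type problem);
    the returned filter [1, -a1, ..., -ap] whitens x in the maximum-entropy
    sense.  Illustrative sketch only.
    """
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return np.concatenate(([1.0], -a))

# Toy use: whiten a synthetic "vertical component" and apply the same filter
# to the "radial component" (one step of time-domain deconvolution).
rng = np.random.default_rng(0)
vertical = np.convolve(rng.standard_normal(2000), [1.0, 0.7, 0.3], "same")
radial = np.convolve(vertical, [0.0, 0.0, 0.5, 0.0, 0.2], "same")
f = prediction_error_filter(vertical, order=20)
whitened_radial = np.convolve(radial, f, "same")
```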
MA. Lendita Kryeziu
2015-06-01
“Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human and that people make mistakes all the time. However, what counts is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they are made, improve, and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.
Noise Equivalent Counts Based Emission Image Reconstruction Algorithm of Tomographic Gamma Scanning
Wang, Ke; Feng, Wei; Han, Dong
2014-01-01
Tomographic Gamma Scanning (TGS) is a technique used to assay the nuclide distribution and radioactivity in nuclear waste drums. Both transmission and emission scans are performed in TGS, and the transmission image is used for the attenuation correction in emission reconstructions. The error of the transmission image, which is not considered by the existing reconstruction algorithms, negatively affects the final results. An emission reconstruction method based on Noise Equivalent Counts (NEC) is presented. Noise from the attenuation image is concentrated onto the projection data so that the NEC Maximum-Likelihood Expectation-Maximization algorithm can be applied. Experiments are performed to verify the effectiveness of the proposed method.
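For reference, a plain ML-EM emission update (the algorithm into which the NEC-weighted projections are fed) can be sketched as follows; the toy system matrix and data are illustrative, and the NEC weighting step itself is not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Standard ML-EM update: x <- x * A^T(y / Ax) / A^T 1.

    A: system (attenuation-corrected) matrix, shape (n_proj, n_vox)
    y: measured (noise-equivalent) projection counts, shape (n_proj,)
    """
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)
        x *= (A.T @ ratio) / (norm + eps)
    return x

# Toy example with a random system matrix and Poisson-distributed counts
rng = np.random.default_rng(1)
A = rng.random((40, 10))
x_true = rng.random(10) * 100
y = rng.poisson(A @ x_true)
print(mlem(A, y).round(1))
```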
王鼎; 潘苗; 吴瑛
2011-01-01
Aiming at the self-calibration of direction-dependent gain-phase errors in the case of a deterministic signal model, a maximum likelihood method (MLM) for calibrating the direction-dependent gain-phase errors with carry-on (auxiliary) instrumental sensors is presented. In order to maximize the high-dimensional nonlinear cost function appearing in the MLM, an improved alternating projection iteration algorithm is proposed, which optimizes the azimuths and the direction-dependent gain-phase errors. Closed-form expressions of the Cramér-Rao bound (CRB) for the azimuths and gain-phase errors are derived. Simulation experiments show the effectiveness and advantages of the new method.
1970-01-01
The Health Physics counting room, where the quantity of induced radioactivity in materials is determined. This information is used to evaluate possible radiation hazards from the material investigated.
Generalized Entropy Concentration for Counts
Oikonomou, Kostas N
2016-01-01
We consider the phenomenon of entropy concentration under linear constraints in a discrete setting, using the "balls and bins" paradigm, but without the assumption that the number of balls allocated to the bins is known. Therefore, instead of frequency vectors and ordinary entropy, we have count vectors with unknown sum, and a certain generalized entropy. We show that if the constraints bound the allowable sums, this suffices for concentration to occur even in this setting. The concentration can be either in terms of deviation from the maximum generalized entropy value, or in terms of the norm of the difference from the maximum generalized entropy vector. Without any asymptotic considerations, we quantify the concentration in terms of various parameters, notably a tolerance on the constraints which ensures that they are always satisfied by an integral vector. Generalized entropy maximization is not only compatible with ordinary MaxEnt, but can also be considered an extension of it, as it allows us to address...
Count rate performance of a silicon-strip detector for photon-counting spectral CT
Liu, X.; Grönberg, F.; Sjölin, M.; Karlsson, S.; Danielsson, M.
2016-08-01
A silicon-strip detector is developed for spectral computed tomography. The detector operates in photon-counting mode and allows pulse-height discrimination with 8 adjustable energy bins. In this work, we evaluate the count-rate performance of the detector in a clinical CT environment. The output counts of the detector are measured for x-ray tube currents up to 500 mA at 120 kV tube voltage, which produces a maximum photon flux of 485 Mphotons/s/mm2 for the unattenuated beam. The corresponding maximum count-rate loss of the detector is around 30% and there are no saturation effects. A near linear relationship between the input and output count rates can be observed up to 90 Mcps/mm2, at which point only 3% of the input counts are lost. This means that the loss in the diagnostically relevant count-rate region is negligible. A semi-nonparalyzable dead-time model is used to describe the count-rate performance of the detector, which shows a good agreement with the measured data. The nonparalyzable dead time τn for 150 evaluated detector elements is estimated to be 20.2±5.2 ns.
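The nonparalyzable part of such a dead-time model can be sketched as below; the rate used in the example is a hypothetical per-element rate, not a value from the measurements.

```python
TAU_N = 20.2e-9  # nonparalyzable dead time from the abstract (s)

def observed_rate(true_rate, tau=TAU_N):
    """Nonparalyzable dead-time model: m = n / (1 + n * tau)."""
    return true_rate / (1.0 + true_rate * tau)

def corrected_rate(measured_rate, tau=TAU_N):
    """Invert the model to recover the true rate: n = m / (1 - m * tau)."""
    return measured_rate / (1.0 - measured_rate * tau)

n = 1.0e6                      # hypothetical true rate per detector element (cps)
m = observed_rate(n)
print(m, corrected_rate(m))    # the round trip recovers the true rate
```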
Monitoring Milk Somatic Cell Counts
Gheorghe Şteţca
2014-11-01
The presence of somatic cells in milk is a widely disputed issue in the milk production sector. The somatic cell count in raw milk is a marker for specific cow diseases such as mastitis (swollen udder). A high level of somatic cells causes physical and chemical changes to milk composition and nutritional value, as well as to milk products. Also, mastitic milk is not fit for human consumption due to its contribution to the spreading of certain diseases and food poisoning. Accordingly, EU regulations established the maximum threshold of admitted somatic cells in raw milk at 400,000 cells/mL starting with 2014. This study was carried out in order to examine raw milk samples provided from small farms, industrial-type farms, and milk processing units. There are several ways to count somatic cells in milk, but the accepted reference method is the microscopic method described by SR EN ISO 13366-1/2008. Generally, the samples registered values in accordance with the admissible limit. By periodically monitoring the somatic cell count, certain technological process issues can be avoided and consumers' health ensured.
Anarthria impairs subvocal counting.
Cubelli, R; Nichelli, P; Pentore, R
1993-12-01
We studied subvocal counting in two pure anarthric patients. Analysis showed that they performed definitively worse than normal subjects free to articulate subvocally and their scores were in the lower bounds of the performances of subjects suppressing articulation. These results suggest that subvocal counting is impaired after anarthria.
Phillip P. Allen
2014-05-01
Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet, identifying, counting, and recording biological evidence such as pollen grains remain a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice activated fully customizable digital count sheet that allows the investigator to continuously interact with a field of view during the data recording. Continuous viewing allows the palynologist the opportunity to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed onto different computer platforms, making the adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to be competent and functional in a very short time.
... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...
Sublattice Counting and Orbifolds
Hanany, Amihay; Reffert, Susanne
2010-01-01
Abelian orbifolds of C^3 are known to be encoded by hexagonal brane tilings. To date it is not known how to count all such orbifolds. We fill this gap by employing number theoretic techniques from crystallography, and by making use of Polya's Enumeration Theorem. The results turn out to be beautifully encoded in terms of partition functions and Dirichlet Series. The same methods apply to counting orbifolds of any toric non-compact Calabi-Yau singularity. As additional examples, we count the orbifolds of the conifold, of the L^{aba} theories, and of C^4.
US Fish and Wildlife Service, Department of the Interior — The goal of St. Vincent National Wildlife Refuge's Track Count Protocol is to provide an index to the population size of game animals inhabiting St. Vincent Island.
Your blood contains red blood cells (RBC), white blood cells (WBC), and platelets. Blood count tests measure the number and types of cells in your blood. This helps doctors check on your overall health. ...
Kersting, Kristian; Natarajan, Sriraam
2012-01-01
A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such additional symmetries. Starting from a given factor graph, counting BP first constructs a compressed factor graph of clusternodes and clusterfactors, corresponding to sets of nodes and factors that are indistinguishable given the evidence. Then it runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and boolean model counting, and that significant efficiency gains are obtainable, often by orders of magnitude.
Analog multivariate counting analyzers
Nikitin, A V; Armstrong, T P
2003-01-01
Characterizing rates of occurrence of various features of a signal is of great importance in numerous types of physical measurements. Such signal features can be defined as certain discrete coincidence events, e.g. crossings of a signal with a given threshold, or occurrence of extrema of a certain amplitude. We describe measuring rates of such events by means of analog multivariate counting analyzers. Given a continuous scalar or multicomponent (vector) input signal, an analog counting analyzer outputs a continuous signal with the instantaneous magnitude equal to the rate of occurrence of certain coincidence events. The analog nature of the proposed analyzers allows us to reformulate many problems of the traditional counting measurements, and cast them in a form which is readily addressed by methods of differential calculus rather than by algebraic or logical means of digital signal processing. Analog counting analyzers can be easily implemented in discrete or integrated electronic circuits, do not suffer fro...
Department of Housing and Urban Development — This report displays the data communities reported to HUD about the nature of their dedicated homeless inventory, referred to as their Housing Inventory Count (HIC)....
Allegheny County Traffic Counts
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Traffic sensors at over 1,200 locations in Allegheny County collect vehicle counts for the Pennsylvania Department of Transportation. Data included in the Health...
Carlsson, Sten
1993-01-01
In liquid scintillation counting (LSC) we use the process of luminescence to detect ionising radiation emitted from a radionuclide. Luminescence is emission of visible light of nonthermal origin. It was early found that certain organic molecules have luminescent properties, and such molecules are used in LSC. Today LSC is the most widespread method to detect pure beta-emitters like tritium and carbon-14. It has unique properties in its efficient counting geometry, detectability and the lack of...
2015-01-01
In this paper we consider an elementary, and largely unexplored, combinatorial problem in low-dimensional topology. Consider a real 2-dimensional compact surface $S$, and fix a number of points $F$ on its boundary. We ask: how many configurations of disjoint arcs are there on $S$ whose boundary is $F$? We find that this enumerative problem, counting curves on surfaces, has a rich structure. For instance, we show that the curve counts obey an effective recursion, in the general framework of to...
Gukov, Sergei
2016-01-01
Interpreting renormalization group flows as solitons interpolating between different fixed points, we ask various questions that are normally asked in soliton physics but not in renormalization theory. Can one count RG flows? Are there different "topological sectors" for RG flows? What is the moduli space of an RG flow, and how does it compare to familiar moduli spaces of (supersymmetric) domain walls? Analyzing these questions in a wide variety of contexts --- from counting RG walls to AdS/C...
Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo [Institut Laue Langevin, Grenoble (France)
2015-07-01
A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras, as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology, which have resulted in increased quantum efficiency, lower noise, and increased frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, thus allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras does not allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)
Automated vehicle counting using image processing and machine learning
Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae
2017-04-01
Vehicle counting is used by governments to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed with a camera at the count site recording video of the traffic, with counting being performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure utilizes a Raspberry Pi micro-computer to detect when a car is in a lane and generate an accurate count of vehicle movements. The method utilized in this paper uses background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
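A minimal single-lane sketch of the background-subtraction step is shown below; the OpenCV calls are standard (OpenCV 4 assumed), but the input file name, blob-size threshold, and the edge-triggered counting logic are illustrative assumptions, and a real counter (including the paper's machine-learning stage) needs per-vehicle tracking to avoid double counts.

```python
import cv2

# Minimal sketch: background subtraction + blob detection on one lane.
cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=50)
count, vehicle_present = 0, False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)             # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) > 2000]  # assumed size
    if big and not vehicle_present:            # rising edge -> one new vehicle
        count += 1
    vehicle_present = bool(big)

print("vehicles counted:", count)
```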
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Tamura, Takako
2015-12-01
The circulating blood volume accounts for 8% of the body weight, of which 45% comprises cellular components (blood cells) and 55% liquid components. In a complete blood count (CBC) we can measure the number and morphological features of blood cells (leukocytes, red blood cells, platelets) and the amount of hemoglobin. Blood counts are often used to detect inflammatory diseases such as infection, to assess anemia and bleeding tendencies, and to screen for abnormal cells in blood diseases. The count is widely used as a basic item in health examinations. In recent years, clinical testing before consultation has become common in outpatient clinics, and the influence of laboratory values on the consultation has grown. The CBC, which is intended to count the number of raw cells and to check morphological features, is easily influenced by the environment, techniques, etc., during specimen collection and transportation. Therefore, special attention is necessary when reading laboratory data. Providing the clinical side with correct test values that accurately reflect the patient's condition is crucial, and inappropriate medical treatment caused by erroneous values resulting from altered specimens should be avoided. In order to provide correct test values, the daily management of devices is a matter of course, and understanding sources of variation in the data and actively providing information to the clinical side are important. In this chapter, I discuss the effects of sample collection, blood collection tubes, specimen handling, transportation, and storage on the CBC, along with management and handling methods.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, presented clearly with tables that are easy to understand.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
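As a hedged sketch (the symbols and the specific loss are illustrative, not taken from the paper), the regularized learning problem has the form

$$ \min_{w}\;\; \sum_{i=1}^{n} \ell\big(y_i, f_w(x_i)\big) \;+\; \frac{\lambda}{2}\lVert w\rVert^2 \;-\; \gamma\, I\big(f_w(X); Y\big), \qquad I\big(f_w(X); Y\big) = H(Y) - H\big(Y \mid f_w(X)\big), $$

where the mutual information term is evaluated with an entropy estimator over the training responses and the whole objective is minimized by gradient descent.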
Photon counting digital holography
Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario; Pavičić, Mladen
2016-05-01
Digital holography uses electronic sensors for hologram recording and numerical method for hologram reconstruction enabling thus the development of advanced holography applications. However, in some cases, the useful information is concealed in a very wide dynamic range of illumination intensities and successful recording requires an appropriate dynamic range of the sensor. An effective solution to this problem is the use of a photon-counting detector. Such detectors possess counting rates of the order of tens to hundreds of millions counts per second, but conditions of recording holograms have to be investigated in greater detail. Here, we summarize our main findings on this problem. First, conditions for optimum recording of digital holograms for detecting a signal significantly below detector's noise are analyzed in terms of the most important holographic measures. Second, for time-averaged digital holograms, optimum recordings were investigated for exposures shorter than the vibration cycle. In both cases, these conditions are studied by simulations and experiments.
Measurement Error Models in Astronomy
Kelly, Brandon C
2011-01-01
I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
Soeker, H. [Deutsches Windenergie-Institut (Germany)]
1996-09-01
As a state-of-the-art method, the rainflow counting technique is presently applied everywhere in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used, most of the time, as a mere data reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)
Lee, D.; Lim, K.; Park, K.; Lee, C.; Alexander, S.; Cho, G.
2017-03-01
In this study, an innovative fast X-ray photon-counting pixel for high X-ray flux applications is proposed. A computed tomography system typically uses X-ray fluxes up to 10^8 photons/mm2/sec at the detector, and thus a fast read-out is required in order to process individual X-ray photons. Otherwise, pulse pile-up can occur at the output of the signal processing unit. These superimposed signals can distort the number of incident X-ray photons, leading to count loss. To minimize such losses, a cross detection method was implemented in the photon-counting pixel. Maximum count rates under an X-ray tube voltage of 90 kV were acquired, reflecting the electrical test results of the proposed photon-counting pixel. A maximum count of 780 kcps was achieved with a conventional photon-counting pixel at a pulse processing time of 500 ns, which is the time for a pulse to return to the baseline from the initial rise. In contrast, a maximum count of about 8.1 Mcps was achieved with the proposed photon-counting pixel. From these results, it is clear that the maximum count rate was increased by approximately a factor of 10 by adopting the cross detection method. Therefore, it is an effective method to reduce count loss from pulse pile-up in a photon-counting pixel while maintaining the pulse processing time.
Optical People Counting for Demand Controlled Ventilation: A Pilot Study of Counter Performance
Fisk, William J.; Sullivan, Douglas
2009-12-26
This pilot-scale study evaluated the counting accuracy of two people-counting systems that could be used in demand controlled ventilation systems to provide control signals for modulating outdoor air ventilation rates. The evaluations included controlled challenges of the people-counting systems using pre-planned movements of occupants through doorways, and evaluations of counting accuracies when naive occupants (i.e., occupants unaware of the counting systems) passed through the entrance doors of the building or room. The two people-counting systems had high counting accuracies, with errors typically less than 10 percent, for typical non-demanding counting events. However, counting errors were high in some highly challenging situations, such as multiple people passing simultaneously through a door. Counting errors, for at least one system, can be very high if people stand in the field of view of the sensor. Both counting systems have limitations and would need to be used only at appropriate sites and where the demanding situations that led to counting errors are rare.
Dougherty Stahl, Katherine A.
2014-01-01
Each disciplinary community has its own criteria for determining what counts as evidence of knowledge in their academic field. The criteria influence the ways that a community's knowledge is created, communicated, and evaluated. Situating reading, writing, and language instruction within the content areas enables teachers to explicitly…
... may be ordered when: CBC results show a decreased RBC count and/or a decreased hemoglobin and hematocrit A healthcare practitioner wants to ... and hematocrit, to help determine the degree and rate of overproduction of RBCs ... during pregnancy . Newborns have a higher percentage of reticulocytes, but ...
Stuart P. Green
2016-08-01
What counts, or should count, as prostitution? In the criminal law today, prostitution is understood to involve the provision of sexual services in exchange for money or other benefits. But what exactly is a ‘sexual service’? And what exactly is the nature of the required ‘exchange’? The key to answering these questions is to recognize that how we choose to define prostitution will inevitably depend on why we believe one or more aspects of prostitution are wrong or harmful, or should be criminalized or otherwise deterred, in the first place. These judgements, in turn, will often depend on an assessment of the contested empirical evidence on which they rest. This article describes a variety of real-world contexts in which the ‘what counts as prostitution’ question has arisen, surveys a range of leading rationales for deterring prostitution, and demonstrates how the answer to the definition question depends on the answer to the normative question. The article concludes with some preliminary thoughts on how analogous questions about what should count as sexual conduct arise in the context of consensual offences such as adultery and incest, as well as non-consensual offences such as sexual assault.
Preset time count rate meter using adaptive digital signal processing
Žigić Aleksandar D.
2005-01-01
Full Text Available Two presented methods were developed to improve classical preset time count rate meters by using adaptable signal processing tools. An optimized detection algorithm that senses the change of the mean count rate was implemented in both methods. Three low-pass filters of various structures, with adaptable parameters that control the mean count rate error by suppressing fluctuations in a controllable way, were considered, and one of them was implemented in both methods. An adaptation algorithm for preset time interval calculation executed after the low-pass filter was devised and implemented in the first method. This adaptation algorithm makes it possible to obtain shorter preset time intervals for higher stationary mean count rates. An adaptation algorithm for preset time interval calculation executed before the low-pass filter was devised and implemented in the second method. That adaptation algorithm enables sensing of a rapid change of the mean count rate before fluctuation suppression is carried out. Some parameters were fixed to their optimum values after an appropriate optimization procedure. The low-pass filters have a variable number of stationary coefficients depending on the specified error and the mean count rate, and they control the mean count rate error by suppressing fluctuations in a controllable way. The simulated and realized methods, using the developed algorithms, guarantee that the response time does not exceed 2 s for mean count rates higher than 2 s-1 and that the controllable mean count rate error remains within the range of ±4% to ±10%.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
When Errors Count: An EEG Study on Numerical Error Monitoring under Performance Pressure
Schillinger, Frieder L.; De Smedt, Bert; Grabner, Roland H.
2016-01-01
In high-stake tests, students often display lower achievements than expected based on their skill level--a phenomenon known as choking under pressure. This imposes a serious problem for many students, especially for test-anxious individuals. Among school subjects, mathematics has been shown to be particularly vulnerable to choking. To succeed in a…
The SCUBA 8-mJy survey - I Sub-millimetre maps, sources and number counts
Scott, S; Dunlop, J; Serjeant, S; Peacock, J; Ivison, R J; Oliver, S; Mann, R; Lawrence, A; Efstathiou, A; Rowan-Robinson, M; Hughes, D; Archibald, E; Blain, A W; Longair, M
2002-01-01
We present maps, source lists, and number counts from the largest, unbiassed, extragalactic sub-mm survey so far undertaken with the SCUBA camera on the JCMT. Our maps cover 260 sq. arcmin, to a noise level S(850)=2.5 mJy/beam. We have reduced the data using both SURF, and our own pipeline which produces zero-footprint maps and noise images. The uncorrelated noise maps produced by the latter approach have allowed application of a maximum-likelihood method to measure the statistical significance of each peak, leading to properly quantified flux-density errors for all potential sources. We detect 19 sources with S/N > 4, 38 with S/N > 3.5, and 72 with S/N > 3. To assess completeness and the impact of source confusion we have applied our source extraction algorithm to a series of simulated images. The result is a new estimate of the sub-mm source counts in the flux-density range S(850)=5-15mJy, which we compare with other estimates, and with model predictions. Our estimate of the cumulative source count at S(850...
Stride Counting in Human Walking and Walking Distance Estimation Using Insole Sensors
Truong, Phuc Huu; Lee, Jinwook; Kwon, Ae-Ran; Jeong, Gu-Min
2016-01-01
This paper proposes a novel method of estimating walking distance based on a precise counting of walking strides using insole sensors. We use an inertial triaxial accelerometer and eight pressure sensors installed in the insole of a shoe to record walkers’ movement data. The data is then transmitted to a smartphone to filter out noise and determine stance and swing phases. Based on phase information, we count the number of strides traveled and estimate the movement distance. To evaluate the accuracy of the proposed method, we created two walking databases on seven healthy participants and tested the proposed method. The first database, which is called the short distance database, consists of collected data from all seven healthy subjects walking on a 16 m distance. The second one, named the long distance database, is constructed from walking data of three healthy subjects who have participated in the short database for an 89 m distance. The experimental results show that the proposed method performs walking distance estimation accurately with the mean error rates of 4.8% and 3.1% for the short and long distance databases, respectively. Moreover, the maximum difference of the swing phase determination with respect to time is 0.08 s and 0.06 s for starting and stopping points of swing phases, respectively. Therefore, the stride counting method provides a highly precise result when subjects walk. PMID:27271634
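A much-simplified sketch of the pipeline described above: a single threshold on the summed insole pressure separates stance from swing, swing-to-stance transitions are counted as strides, and a fixed stride length converts the count to distance. The threshold, sampling rate and stride length are assumptions for illustration; the paper's noise filtering and phase-detection rules are more elaborate.

```python
import numpy as np

# Toy stride counting from a summed pressure signal (hypothetical threshold and
# stride length; not the authors' filtering or phase-detection logic).
def count_strides(pressure_sum, threshold=5.0):
    stance = np.asarray(pressure_sum) > threshold        # foot on the ground
    # count swing-to-stance onsets, i.e. one per gait cycle
    return int(np.sum(~stance[:-1] & stance[1:]))

def estimate_distance(pressure_sum, stride_length_m=1.4, threshold=5.0):
    return count_strides(pressure_sum, threshold) * stride_length_m

# toy signal: 20 s sampled at 50 Hz with a 1 Hz gait cycle
t = np.arange(0, 20, 1 / 50)
pressure_sum = 10.0 * (np.sin(2 * np.pi * 1.0 * t) > 0)
print(count_strides(pressure_sum), "strides,", estimate_distance(pressure_sum), "m")
```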
The right to count does not always count
Sodemann, Morten
2013-01-01
The best prescription against illness is learning to read and to count. People who are unable to count have a harder time learning to read. People who have difficulty counting make poorer decisions, are less able to combine information and are less likely to have a strategy for life...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...
Symptoms High red blood cell count By Mayo Clinic Staff A high red blood cell count is an increase in oxygen-carrying cells in your bloodstream. Red blood cells transport oxygen from your lungs to tissues throughout ...
Counting and Topological Order
陈阳军
1997-01-01
The counting method is a simple and efficient method for processing linear recursive datalog queries. Its time complexity is bounded by O(n, e), where n and e denote the numbers of nodes and edges, respectively, in the graph representing the input relations. In this paper, the concepts of heritage appearance function and heritage selection function are introduced, and an evaluation algorithm based on the computation of such functions in topological order is developed. This new algorithm requires only linear time in the case of non-cyclic data.
The Acquisition of Counting Skill in Preschooler
Kadir Çakır
2013-03-01
Full Text Available Abstract: The aim of this study was to find out more about the acquisition of counting skill in preschool children. For this purpose, children's judgments of the acceptability of a counting activity were used to investigate whether children's counting skills are governed by implicit knowledge of a number of counting principles. The data showed that children easily recognized the violation of one or more counting principles in another's application of counting principles to sequences of English and Turkish count words, implying that children have an understanding of counting principles. The sessions on counting in Turkish make it very likely that the children were responding to violations of rules rather than merely to deviations from well-learned count-word sequences. These results give additional support to the assumption that there are innate counting principles that govern young children's counting. Keywords: counting principles, error-detection task, mathematical development. Özet (Turkish abstract): Counting Principles in Preschool Children. This study used children's judgments about whether a counting activity was valid to examine whether counting skill is guided by a set of innate implicit principles. To this end, a group of preschool children were asked to indicate whether counting series containing various errors, performed by a child actor they watched on video both in their native language (English) and in a foreign language they did not know (Turkish), were correct or incorrect. The results support the view that children possess innate implicit "counting principles" that guide their counting activities. For example, in both the English and the Turkish counting series, the "standard (correct) counting" series was rated as "correct" significantly more often than all the other series, while...
Effect of a biological activated carbon filter on particle counts
Su-hua WU; Bing-zhi DONG; Tie-jun QIAO; Jin-song ZHANG
2008-01-01
Due to the importance of biological safety in drinking water quality and the disadvantages of traditional methods for detecting typical microorganisms such as Cryptosporidium and Giardia, it is necessary to develop an alternative. Particle counting is a qualitative measurement of the amount of dissolved solids in water. The removal rate of particle counts was previously used as an indicator of the effectiveness of a biological activated carbon (BAC) filter in removing Cryptosporidium and Giardia. The particle counts in BAC filter effluent over one operational period, and the effects of BAC filter construction and operational parameters, were investigated with a 10 m3/h pilot plant. The results indicated that the maximum particle count in backwash remnant water was as high as 1296 counts/ml, and about 1.5 h was needed for it to fall from that maximum to less than 50 counts/ml. During the standard filtration period, particle counts stayed constant at less than 50 counts/ml for 5 d, except when influenced by sand filter backwash remnant water. The removal rates of particle counts in the BAC filter are related to the characteristics of the carbon. For example, a columned carbon and a sand bed removed 33.3% and 8.5% of particles, respectively, while the particle counts in the effluent from a cracked BAC filter were higher than those in the influent. There is no significant difference among particle removal rates at different filtration rates. A high post-ozone dosage (>2 mg/L) plays an important role in particle count removal; when the dosage was 3 mg/L, the removal rates by the carbon layers and sand beds decreased by 17.5% and increased by 9.5%, respectively, compared with a 2 mg/L dosage.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
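As an illustration of the underlying approach (not the CORA implementation itself), the sketch below maximizes a Poisson likelihood for a Gaussian emission line on a constant background using scipy; the wavelength grid, true parameters and starting values are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Poisson maximum-likelihood fit of a Gaussian line plus constant background
# to binned counts (illustrative only; parameters below are hypothetical).
def model_counts(params, x):
    bkg, amp, center, sigma = params
    return bkg + amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def neg_log_likelihood(params, x, counts):
    mu = model_counts(params, x)
    if np.any(mu <= 0):
        return np.inf
    # Poisson log-likelihood up to the constant log(counts!) term
    return -np.sum(counts * np.log(mu) - mu)

x = np.linspace(20.0, 21.0, 50)                  # wavelength grid (arbitrary units)
rng = np.random.default_rng(1)
counts = rng.poisson(model_counts((0.5, 6.0, 20.5, 0.02), x))

fit = minimize(neg_log_likelihood, x0=(1.0, 3.0, 20.4, 0.05),
               args=(x, counts), method="Nelder-Mead")
print("background, amplitude, center, sigma:", fit.x)
```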
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
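For the Mean Energy Model mentioned above, the textbook maximum-entropy solution under a single moment ("energy") constraint is the Gibbs distribution; the following LaTeX states this standard background result and is not a derivation taken from the paper.

```latex
% Maximizing H(p) = -\sum_x p(x)\ln p(x) subject to \sum_x p(x) = 1 and the
% mean-energy constraint \sum_x p(x)E(x) = \bar{E} yields the Gibbs form
\[
  p^{*}(x) = \frac{e^{-\beta E(x)}}{Z(\beta)}, \qquad
  Z(\beta) = \sum_x e^{-\beta E(x)}, \qquad
  -\frac{\partial}{\partial\beta}\ln Z(\beta) = \bar{E},
\]
% where the Lagrange multiplier \beta is fixed by the mean-energy constraint.
```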
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because those loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
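A gradient-ascent sketch of a linear label predictor trained under a regularized correntropy objective is given below as an illustration of the criterion; it is not the authors' alternating optimization, and the kernel width, regularization weight and synthetic data are assumptions.

```python
import numpy as np

# Maximize sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2)) - lam * ||w||^2 by gradient ascent
# (illustrative hyperparameters; a sketch of the regularized MCC idea, not the paper's solver).
def fit_mcc(X, y, sigma=1.0, lam=0.01, lr=0.1, iters=500):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        e = y - X @ w                                   # prediction errors
        kernel = np.exp(-e**2 / (2 * sigma**2))         # correntropy weights: outliers -> ~0
        grad = (kernel * e / sigma**2) @ X - 2 * lam * w
        w += lr * grad / len(y)
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
y = np.sign(X @ w_true)
y[:10] = -y[:10]                                        # a few grossly wrong labels
w = fit_mcc(X, y)
print("agreement with clean labels:", np.mean(np.sign(X @ w) == np.sign(X @ w_true)))
```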
High quantum efficiency S-20 photocathodes for photon counting applications
Orlov, Dmitry A; Pinto, Serge Duarte; Glazenborg, Rene; Kernen, Emilie
2016-01-01
Based on conventional S-20 processes, a new series of high quantum efficiency (QE) photocathodes has been developed that can be specifically tuned for use in the ultraviolet, blue or green regions of the spectrum. The QE values exceed 30% at maximum response, and the dark count rate is found to be as low as 30 Hz/cm2 at room temperature. This combination of properties along with a fast temporal response makes these photocathodes ideal for application in photon counting detectors.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
无
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
[Survey in hospitals. Nursing errors, error culture and error management].
Habermann, Monika; Cramer, Henning
2010-09-01
Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.
Counting coalescent histories.
Rosenberg, Noah A
2007-04-01
Given a species tree and a gene tree, a valid coalescent history is a list of the branches of the species tree on which coalescences in the gene tree take place. I develop a recursion for the number of valid coalescent histories that exist for an arbitrary gene tree/species tree pair, when one gene lineage is studied per species. The result is obtained by defining a concept of m-extended coalescent histories, enumerating and counting these histories, and taking the special case of m = 1. As a sum over valid coalescent histories appears in a formula for the probability that a random gene tree evolving along the branches of a fixed species tree has a specified labeled topology, the enumeration of valid coalescent histories can considerably reduce the effort required for evaluating this formula.
Oscillations in counting statistics
Wilk, Grzegorz
2016-01-01
The very large transverse momenta and large multiplicities available in present LHC experiments on pp collisions allow a much closer look at the corresponding distributions. Some time ago we discussed a possible physical meaning of apparent log-periodic oscillations showing up in p_T distributions (suggesting that the exponent of the observed power-like behavior is complex). In this talk we concentrate on another example of oscillations, this time connected with multiplicity distributions P(N). We argue that some combinations of the experimentally measured values of P(N) (satisfying the recurrence relations used in the description of cascade-stochastic processes in quantum optics) exhibit distinct oscillatory behavior, not observed in the usual Negative Binomial Distributions used to fit data. These oscillations provide yet another example of oscillations seen in counting statistics in many different, apparently very disparate branches of physics further demonstrating the universality of this phenomenon.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Galaxy Counts at 24 Microns in the SWIRE Fields
Shupe, David L; Lonsdale, Carol J; Masci, Frank; Evans, Tracey; Fang, Fan; Oliver, Sebastian; Vaccari, Mattia; Rodighiero, Giulia; Padgett, Deborah; Surace, Jason A; Xu, C Kevin; Berta, Stefano; Pozzi, Francesca; Franceschini, Alberto; Babbedge, Thomas; Gonzales-Solares, Eduardo; Siana, Brian D; Farrah, Duncan; Frayer, David T; Smith, H E; Polletta, Maria; Owen, Frazer; Perez-Fournon, Ismael
2007-01-01
This paper presents galaxy source counts at 24 microns in the six Spitzer Wide-field InfraRed Extragalactic (SWIRE) fields. The source counts are compared to counts in other fields, and to model predictions that have been updated since the launch of Spitzer. This analysis confirms a very steep rise in the Euclidean-normalized differential number counts between 2 mJy and 0.3 mJy. Variations in the counts between fields show the effects of sample variance in the flux range 0.5-10 mJy, up to 100% larger than Poisson errors. Nonetheless, a "shoulder" in the normalized counts persists at around 3 mJy. The peak of the normalized counts at 0.3 mJy is higher and narrower than most models predict. In the ELAIS N1 field, the 24 micron data are combined with Spitzer-IRAC data and five-band optical imaging, and these bandmerged data are fit with photometric redshift templates. Above 1 mJy the counts are dominated by galaxies at z less than 0.3. By 300 microJy, about 25% are between z ~ 0.3-0.8, and a significant fraction...
Mixture Models for the Analysis of Repeated Count Data.
van Duijn, M.A.J.; Böckenholt, U
1995-01-01
Repeated count data showing overdispersion are commonly analysed by using a Poisson model with a varying intensity parameter, resulting in a mixed model. A mixed model with a gamma distribution for the Poisson parameter does not adequately fit a data set on 721 children's spelling errors. An
Beam positioning error budget in ICF driver
Shi Zhi Quan; Su Jing Qin
2002-01-01
The author presents a linear weighted-sum method for the beam-positioning error budget, based on the ICF targeting requirement, together with an approach of equal or unequal probability for allocating errors to each optical element. Based on the relationship between the motion of the optical components and the beam position on target, the position error of each optical component was evaluated, expressed as its maximum range. A large number of ray traces were performed, and the position error budget was modified according to the law of the normal distribution. An overview of the position error budget of the components is provided.
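As an illustration of how such a budget can be rolled up (all sensitivities and tolerances below are invented, not values from the paper), a linear worst-case sum over the optical elements can be compared with a root-sum-square combination under the normal-distribution assumption.

```python
import numpy as np

# Hypothetical beam-positioning error budget roll-up for four optical elements.
sens = np.array([2.0, 1.5, 0.8, 3.0])   # beam motion on target per unit component motion
tol  = np.array([5.0, 4.0, 10.0, 2.0])  # allocated motion tolerances (microns)

contrib = sens * tol                     # each element's maximum contribution on target
worst_case = contrib.sum()               # linear (weighted) sum of maxima
rss = np.sqrt(np.sum(contrib**2))        # statistical combination for independent normal errors
print(f"worst case: {worst_case:.1f} um, root-sum-square: {rss:.1f} um")
```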
Application of neutron multiplicity counting to waste assay
Pickrell, M.M.; Ensslin, N. [Los Alamos National Lab., NM (United States); Sharpe, T.J. [North Carolina State Univ., Raleigh, NC (United States)
1997-11-01
This paper describes the use of a new figure of merit code that calculates both bias and precision for coincidence and multiplicity counting, and determines the optimum regions for each in waste assay applications. A "tunable multiplicity" approach is developed that uses a combination of coincidence and multiplicity counting to minimize the total assay error. An example is shown where multiplicity analysis is used to solve for mass, alpha, and multiplication, and tunable multiplicity is shown to work well. The approach provides a method for selecting coincidence, multiplicity, or tunable multiplicity counting to give the best assay with the lowest total error over a broad spectrum of assay conditions. 9 refs., 6 figs.
Jia-Yu Tang; Zu-Hui Fan
2003-01-01
We study the counts of resolved SZE (Sunyaev-Zel'dovich effect) clusters expected from an interferometric survey in different cosmological models under different conditions. The self-similar universal gas model and the Press-Schechter mass function are used. We take the observing frequency to be 90 GHz, and consider two dish diameters, 1.2 m and 2.5 m. We calculate the number density of galaxy clusters dN/(dΩdz) at a high flux limit S_ν^lim = 100 mJy and at a relatively low S_ν^lim = 10 mJy. The total numbers of SZE clusters N in two low-Ω0 models are compared. The results show that the influence of the resolved effect depends not only on D, but also on S_ν^lim: at a given D, the effect is more significant for a high than for a low S_ν^lim. Also, the resolved effect for a flat universe is more pronounced than that for an open universe. For D = 1.2 m and S_ν^lim = 10 mJy, the resolved effect is very weak. Considering the designed interferometers which will be used to survey SZE clusters, we find that the resolved effect is insignificant when estimating the expected yield of the SZE cluster surveys.
Multivariate ultrametric root counting
Avendano, Martin
2011-01-01
Let $K$ be a field, complete with respect to a discrete non-archimedian valuation, and let $k$ be the residue field. Consider a system $F$ of $n$ polynomial equations in $n$ variables over $K$. Our first result is a reformulation of the classical Hensel's Lemma in the language of tropical geometry: we show sufficient conditions (semiregularity at $w$) that guarantee that the first digit map $\delta:(K^\ast)^n\to(k^\ast)^n$ is a one to one correspondence between the solutions of $F$ in $(K^\ast)^n$ with valuation $w$ and the solutions in $(k^\ast)^n$ of the initial form system ${\rm in}_w(F)$. Using this result, we provide an explicit formula for the number of solutions in $(K^\ast)^n$ of a certain class of systems of polynomial equations (called regular), characterized by having finite tropical prevariety, by having initial forms consisting only of binomials, and by being semiregular at any point in the tropical prevariety. Finally, as a consequence of the root counting formula, we obtain the expected number of roots in $(K...
Making environmental DNA count.
Kelly, Ryan P
2016-01-01
The arc of reception for a new technology or method--like the reception of new information itself--can pass through predictable stages, with audiences' responses evolving from 'I don't believe it', through 'well, maybe' to 'yes, everyone knows that' to, finally, 'old news'. The idea that one can sample a volume of water, sequence DNA out of it, and report what species are living nearby has experienced roughly this series of responses among biologists, beginning with the microbial biologists who developed genetic techniques to reveal the unseen microbiome. 'Macrobial' biologists and ecologists--those accustomed to dealing with species they can see and count--have been slower to adopt such molecular survey techniques, in part because of the uncertain relationship between the number of recovered DNA sequences and the abundance of whole organisms in the sampled environment. In this issue of Molecular Ecology Resources, Evans et al. (2015) quantify this relationship for a suite of nine vertebrate species consisting of eight fish and one amphibian. Having detected all of the species present with a molecular toolbox of six primer sets, they consistently find DNA abundances are associated with species' biomasses. The strength and slope of this association vary for each species and each primer set--further evidence that there is no universal parameter linking recovered DNA to species abundance--but Evans and colleagues take a significant step towards being able to answer the next question audiences tend to ask: 'Yes, but how many are there?'
LAWRENCE RADIATION LABORATORY COUNTING HANDBOOK
Group, Nuclear Instrumentation
1966-10-01
The Counting Handbook is a compilation of operational techniques and performance specifications on counting equipment in use at the Lawrence Radiation Laboratory, Berkeley. Counting notes have been written from the viewpoint of the user rather than that of the designer or maintenance man. The only maintenance instructions that have been included are those that can easily be performed by the experimenter to assure that the equipment is operating properly.
Counting Frequencies from Zotero Items
Spencer Roberts
2013-04-01
Full Text Available In Counting Frequencies you learned how to count the frequency of specific words in a list using python. In this lesson, we will expand on that topic by showing you how to get information from Zotero HTML items, save the content from those items, and count the frequencies of words. It may be beneficial to look over the previous lesson before we begin.
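A minimal sketch of the counting step only is shown below; the Zotero item retrieval covered in the lesson is omitted, and the sample text is invented.

```python
from collections import Counter
import re

# Count word frequencies in a string of text saved from a Zotero item (hypothetical text).
text = "Counting words is easy: split the text, normalise it, and count the words."
words = re.findall(r"[a-z']+", text.lower())
frequencies = Counter(words)
for word, count in frequencies.most_common(5):
    print(word, count)
```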
Social Security Administration — Staging Instance for all SUMs Counts related projects including: Redeterminations/Limited Issue, Continuing Disability Resolution, CDR Performance Measures, Initial...
Optimized tomography of continuous variable systems using excitation counting
Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang
2016-11-01
We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
The Space Complexity of 2-Dimensional Approximate Range Counting
Wei, Zhewei; Yi, Ke
2013-01-01
We study the problem of 2-dimensional orthogonal range counting with additive error. Given a set P of n points drawn from an n × n grid and an error parameter ε, the goal is to build a data structure, such that for any orthogonal range R, the data structure can return the number of points in P ∩ R with additive error εn. A well-known solution for this problem is the ε-approximation. Informally speaking, an ε-approximation of P is a subset A ⊆ P that allows us to estimate the number of points in P ∩ R by counting the number of points in A ∩ R. It is known that an ε-approximation of size … exists for any P...
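The ε-approximation idea can be illustrated with a uniform random sample standing in for A; the grid size, sample size and query rectangle below are arbitrary, and the paper itself is concerned with the space needed by much more carefully built structures.

```python
import numpy as np

# Estimate |P ∩ R| from a random subset A, scaled by |P| / |A| (illustrative only).
rng = np.random.default_rng(0)
n = 10_000
P = rng.integers(0, n, size=(n, 2))                # n points on an n x n grid
A = P[rng.choice(n, size=500, replace=False)]      # sample playing the role of A ⊆ P

def count_in_range(points, lo, hi):
    inside = np.all((points >= lo) & (points < hi), axis=1)
    return int(inside.sum())

lo, hi = np.array([2000, 1000]), np.array([7000, 6000])
exact = count_in_range(P, lo, hi)
estimate = count_in_range(A, lo, hi) * (len(P) / len(A))
print(exact, estimate, "additive error:", abs(exact - estimate))
```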
Counting Closed String States in a Box
Meana, M L; Peñalba, J P; Meana, Marco Laucelli; Peñalba, Jesús Puente
1997-01-01
The computation of the microcanonical density of states for a string gas in a finite volume needs a one by one count because of the discrete nature of the spectrum. We present a way to do it using geometrical arguments in phase space. We take advantage of this result in order to obtain the thermodynamical magnitudes of the system. We show that the results for an open universe exactly coincide with the infinite volume limit of the expression obtained for the gas in a box. For any finite volume the Hagedorn temperature is a maximum one, and the specific heat is always positive. We also present a definition of pressure compatible with R-duality seen as an exact symmetry, which allows us to make a study on the physical phase space of the system. Besides a maximum temperature the gas presents an asymptotic pressure.
Murray, Mayan; Hill, Melissa L.; Liu, Kela; Mainprize, James G.; Yaffe, Martin J.
2016-03-01
Whole-mount pathology imaging has the potential to revolutionize clinical practice by preserving context lost when tissue is cut to fit onto conventional slides. Whole-mount digital images are very large, ranging from 4 GB to greater than 50 GB, making concurrent processing infeasible. Block-processing is a method commonly used to divide the image into smaller blocks and process them individually. This approach is useful for certain tasks, but leads to over-counting of objects located on the seams between blocks. This issue is exacerbated as the block size decreases. In this work we apply a novel technique to enumerate vessels, a clinical task that would benefit from automation, in whole-mount images. Whole-mount sections of rabbit VX2 tumors were digitized. Color thresholding was used to segment the brown CD31-DAB stained vessels. This vessel enumeration was applied to the entire whole-mount image in two distinct phases of block-processing. The first (whole-processing) phase used a basic grid and only counted objects that did not intersect the block's borders. The second (seam-processing) phase used a shifted grid to ensure all blocks captured the block-seam regions from the original grid. Only objects touching this seam-intersection were counted. For validation, segmented vessels were randomly embedded into a whole-mount image. The technique was tested on the image using 24 different block-widths. Results indicated that the error reaches a minimum at a block-width equal to the maximum vessel length, with no improvement as the block-width increases further. Object-density maps showed very good correlation between the vessel-dense regions and the pathologist-outlined tumor regions.
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
Counting independent sets using the Bethe approximation
Chertkov, Michael [Los Alamos National Laboratory]; Chandrasekaran, V [MIT]; Gamarnik, D [MIT]; Shah, D [MIT]; Shin, J [MIT]
2009-01-01
The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^-4 log^3(n ε^-1)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error to the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded graph with large enough girth, the error is O(n^-γ) for some γ > 0. As an application, they find that for the random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
System Measures Errors Between Time-Code Signals
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
Reference counting for reversible languages
Mogensen, Torben Ægidius
2014-01-01
deallocation. This requires the language to be linear: A pointer can not be copied and it can only be eliminated by deallocating the node to which it points. We overcome this limitation by adding reference counts to nodes: Copying a pointer to a node increases the reference count of the node and eliminating...
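As a generic illustration of the mechanism (plain Python rather than the reversible language the paper targets), the sketch below increments a node's count when a pointer is copied and reclaims the node when the last pointer is dropped.

```python
# Minimal reference-counting sketch: heap nodes are freed when their count reaches zero.
class Node:
    def __init__(self, value):
        self.value = value
        self.refcount = 1          # the allocating pointer

heap = {}                          # address -> Node, a stand-in for managed memory

def alloc(addr, value):
    heap[addr] = Node(value)

def copy_pointer(addr):
    heap[addr].refcount += 1       # another pointer to the node now exists

def drop_pointer(addr):
    node = heap[addr]
    node.refcount -= 1
    if node.refcount == 0:         # last pointer eliminated: deallocate the node
        del heap[addr]

alloc("a", 42)
copy_pointer("a")                  # two pointers to the node
drop_pointer("a")
drop_pointer("a")
print("a" in heap)                 # False: node reclaimed after the last pointer was dropped
```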
Coinductive counting with weighted automata
Rutten, J.J.M.M.
2002-01-01
A general methodology is developed to compute the solution of a wide variety of basic counting problems in a uniform way: (1) the objects to be counted are enumerated by means of an infinite weighted automaton; (2) the automaton is reduced by means of the quantitative notion of stream bisimulation;
Symptoms Low white blood cell count By Mayo Clinic Staff A low white blood cell count (leukopenia) is a decrease in disease-fighting cells ( ... a decrease in a certain type of white blood cell (neutrophil). The definition of low white blood cell ...
Classification of Spreadsheet Errors
Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian
2008-01-01
This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...
Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model
无
2006-01-01
In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first component computes the error statistics by using the National Meteorological Center (NMC) method, which is a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawindsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Model and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC-method and MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
Hanford whole body counting manual
Palmer, H.E.; Brim, C.P.; Rieksts, G.A.; Rhoads, M.C.
1987-05-01
This document, a reprint of the Whole Body Counting Manual, was compiled to train personnel, document operation procedures, and outline quality assurance procedures. The current manual contains information on: the location, availability, and scope of services of Hanford's whole body counting facilities; the administrative aspect of the whole body counting operation; Hanford's whole body counting facilities; the step-by-step procedure involved in the different types of in vivo measurements; the detectors, preamplifiers and amplifiers, and spectroscopy equipment; the quality assurance aspect of equipment calibration and recordkeeping; data processing, record storage, results verification, report preparation, count summaries, and unit cost accounting; and the topics of minimum detectable amount and measurement accuracy and precision. 12 refs., 13 tabs.
Live-timer method of automatic dead-time correction for precision counting
Porges, K. G.; Rudnick, S. J.
1969-01-01
Automatic correction for dead-time losses in nuclear counting experiments is implemented by a simple live-timer arrangement in which each counting interval is extended to compensate for the dead time incurred during that interval. This method eliminates repetitious manual calculations, a source of error, and dependence upon paralysis shifts.
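For comparison, the classical non-paralyzable dead-time correction that such live-timer hardware makes unnecessary can be written in a couple of lines; the measured rate and dead time below are hypothetical.

```python
# Non-paralyzable dead-time correction: true rate n = m / (1 - m * tau),
# valid while m * tau < 1 (illustrative values only).
def true_rate(m, tau):
    return m / (1.0 - m * tau)

m = 9.0e4        # measured rate, counts per second
tau = 2.0e-6     # dead time per registered event, seconds
print(f"corrected rate: {true_rate(m, tau):.0f} counts/s")   # about 1.1e5 counts/s
```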
Barnaby Young
2014-11-01
Full Text Available Introduction: The clinical value of monitoring CD4 counts in immune-reconstituted, virologically suppressed HIV-infected patients is limited. We investigated whether absolute lymphocyte counts (ALC) from an automated blood counting machine could accurately estimate CD4 counts. Materials and Methods: CD4 counts, ALC and HIV viral load (VL) were extracted from an electronic laboratory database for all patients in HIV care at the Communicable Diseases Centre, Tan Tock Seng Hospital, Singapore (2008–13). Virologic suppression was defined as consecutive HIV VLs … 300 cells/mm3. CD4 counts were estimated using the CD4% from the first value >300 and an ALC measured 181–540 days later. Results: A total of 1215 periods of virologic suppression were identified from 1183 patients, with 2227 paired CD4-ALC values available for analysis. 98.3% of CD4 estimates were within 50% of the actual value, 83.3% within 25%, and 40.5% within 10%. The error pattern was approximately symmetrically distributed around a mean of −6.5%, but significantly peaked and with mild positive skew (kurtosis 4.45, skewness 1.07). Causes of these errors were explored. Variability between lymphocyte counts measured by the ALC and by flow cytometry did not follow an apparent pattern, and contributed 32% of the total error (median absolute error 5.5%, IQR 2.6–9.3). The CD4% estimate was significantly lower than the actual value (t-test, p<0.0001). The magnitude of this difference was greater for lower values, and above 25% there was no significant difference. Precision of the CD4 estimate was similar as baseline CD4% increased; however, accuracy improved significantly: from a median 16% underestimation to 0% as baseline CD4% increased from 12 to 30. Above a baseline CD4% of 25, estimates of CD4 were within 25% of the actual value 90.2% of the time, with a median 2% underestimation. A robust (bisquare) linear regression model was developed to correct for the rise in CD4% with time, when baseline was 14–24
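The estimation idea reduces to multiplying a current absolute lymphocyte count by a previously measured CD4 percentage; the sketch below uses illustrative numbers and does not include the study's bisquare regression correction for drift in CD4% over time.

```python
# CD4 estimate = ALC x baseline CD4% (illustrative values; not the study's corrected model).
def estimate_cd4(alc_cells_per_mm3, baseline_cd4_percent):
    return alc_cells_per_mm3 * baseline_cd4_percent / 100.0

print(estimate_cd4(1800, 28))   # -> 504.0 cells/mm3
```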
James O Lloyd-Smith
Full Text Available BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
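A minimal sketch of the estimation step the study examines is given below, fitting the mean and dispersion k by maximizing the negative binomial likelihood with scipy; the true values, sample size and starting point are hypothetical, and the paper's simulations additionally cover under-counting and outbreak selection bias.

```python
import numpy as np
from scipy import stats, optimize

# Maximum-likelihood fit of (mu, k) in the mean/dispersion parameterisation,
# where scipy's nbinom takes n = k and p = k / (k + mu).
def nb_negloglik(log_params, data):
    mu, k = np.exp(log_params)              # log-parameterisation keeps mu, k > 0
    p = k / (k + mu)
    return -np.sum(stats.nbinom.logpmf(data, k, p))

rng = np.random.default_rng(1)
true_mu, true_k = 3.0, 0.3                  # k < 1: highly overdispersed
data = rng.negative_binomial(true_k, true_k / (true_k + true_mu), size=100)

fit = optimize.minimize(nb_negloglik, x0=np.log([data.mean() + 0.1, 1.0]),
                        args=(data,), method="Nelder-Mead")
mu_hat, k_hat = np.exp(fit.x)
print(f"mu_hat = {mu_hat:.2f}, k_hat = {k_hat:.2f}")
```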
The origins of counting algorithms.
Cantlon, Jessica F; Piantadosi, Steven T; Ferrigno, Stephen; Hughes, Kelly D; Barnard, Allison M
2015-06-01
Humans' ability to count by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that nonhuman primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. First, they saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set was approximately equal to the first set, the monkeys spontaneously moved to choose the second set even before that cache was completely baited. Using a novel Bayesian analysis, we show that the monkeys used an approximate counting algorithm for comparing quantities in sequence that is incremental, iterative, and condition controlled. This proto-counting algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. © The Author(s) 2015.
High Count Rate Single Photon Counting Detector Array Project
National Aeronautics and Space Administration — An optical communications receiver requires efficient and high-rate photon-counting capability so that the information from every photon, received at the aperture,...
Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator
Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl
2011-01-01
In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this model, the maximum positioning error was estimated for a U-shape PPR planar manipulator, and the results were compared with the experimental measurements. It is found that the error distribution from the simulation approximates that of the measurements.
Vote Counting as Mathematical Proof
Schürmann, Carsten; Pattinson, Dirk
2015-01-01
Trust in the correctness of an election outcome requires proof of the correctness of vote counting. By formalising particular voting protocols as rules, correctness of vote counting amounts to verifying that all rules have been applied correctly. A proof of the outcome of any particular election … Using a rule-based formalisation of voting protocols inside a theorem prover, we synthesise vote counting programs that are not only provably correct, but also produce independently verifiable certificates. These programs are generated from a (formal) proof that every initial set of ballots allows deciding the election winner...
A New Method of Error Compensation for Numerical Control System
夏蔚军; 吴智铭; 李济顺; 张洛平
2003-01-01
This paper presents a method of rapid machine tool error modeling, separation, and compensation using a grating ruler. A robust modeling procedure for geometric errors is developed, and a fast data processing algorithm is designed by using the error separation technique. After compensation with the new method, the maximum position error of the experimental workbench can be reduced from 400 μm to 15 μm. The experimental results show the effectiveness and accuracy of this method.
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady state error with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the other. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lesser steady state error than the conventional MCC based adaptive filters.
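A minimal sketch of an MCC-based adaptive filter with a variable step size; the step-size schedule here is an illustrative choice, not the MSD-minimizing rule derived in the paper, and the example system is invented.

```python
import numpy as np

def vss_mcc_filter(x, d, num_taps=8, sigma=1.0, mu_max=0.05):
    """LMS-style adaptive FIR filter trained with the maximum correntropy
    criterion (MCC).  The Gaussian kernel exp(-e^2 / (2*sigma^2)) down-weights
    impulsive errors, which is what makes MCC robust to impulsive interference.
    The step-size schedule is a simple illustrative choice, not the paper's rule."""
    w = np.zeros(num_taps)
    mu = mu_max
    y = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]         # regressor [x[n], ..., x[n-M+1]]
        y[n] = w @ u
        e = d[n] - y[n]                             # a priori error
        kernel = np.exp(-e * e / (2.0 * sigma ** 2))
        # Step size shrinks with the kernel-weighted error and ignores impulses.
        mu = min(mu_max, 0.97 * mu + 0.03 * mu_max * kernel * min(e * e, 1.0))
        w += mu * kernel * e * u                    # MCC stochastic-gradient update
    return w, y

# Example: identify a 4-tap system under occasional impulsive interference.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
d[rng.integers(0, len(x), 20)] += 50.0 * rng.standard_normal(20)   # impulses
w_hat, _ = vss_mcc_filter(x, d, num_taps=4)
print(np.round(w_hat, 2))
```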
Quality control methods in accelerometer data processing: identifying extreme counts.
Carly Rich
Full Text Available BACKGROUND: Accelerometers are designed to measure plausible human activity; however, extremely high count values (EHCV) have been recorded in large-scale studies. Using population data, we develop methodological principles for establishing an EHCV threshold, propose a threshold to define EHCV in the ActiGraph GT1M, determine occurrences of EHCV in a large-scale study, identify device-specific error values, and investigate the influence of varying EHCV thresholds on daily vigorous PA (VPA). METHODS: We estimated quantiles to analyse the distribution of all accelerometer positive count values obtained from 9005 seven-year-old children participating in the UK Millennium Cohort Study. A threshold to identify EHCV was derived by differentiating the quantile function. Data were screened for device-specific error count values and EHCV, and a sensitivity analysis was conducted to compare daily VPA estimates using three approaches to accounting for EHCV. RESULTS: Using our proposed threshold of ≥11,715 counts/minute to identify EHCV, we found that only 0.7% of all non-zero counts measured in MCS children were EHCV; in 99.7% of these children, EHCV comprised <1% of total non-zero counts. Only 11 MCS children (0.12% of the sample) returned accelerometers that contained negative counts; out of 237 such values, 211 counts were equal to -32,768 in one child. The medians of daily minutes spent in VPA obtained without excluding EHCV, and when using a higher threshold (≥19,442 counts/minute), were, respectively, 6.2% and 4.6% higher than when using our threshold (6.5 minutes; p<0.0001). CONCLUSIONS: Quality control processes should be undertaken during accelerometer fieldwork and prior to analysing data to identify monitors recording error values and EHCV. The proposed threshold will improve the validity of VPA estimates in children's studies using the ActiGraph GT1M by ensuring only plausible data are analysed. These methods can be applied to define appropriate EHCV
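A minimal screening sketch based on the figures quoted above (the ≥11,715 counts/minute EHCV threshold and negative device-error values such as -32,768); the data-frame column name is hypothetical.

```python
import pandas as pd

EHCV_THRESHOLD = 11_715   # counts/minute; threshold proposed in the abstract
# Negative counts (e.g. -32,768) are treated as device-specific error values.

def screen_counts(df, count_col="counts_per_min"):
    """Flag device error values and extremely high count values (EHCV) in
    epoch-level accelerometer data.  The column name is hypothetical."""
    flags = pd.DataFrame({
        "device_error": df[count_col] < 0,
        "ehcv": df[count_col] >= EHCV_THRESHOLD,
    }, index=df.index)
    cleaned = df[~(flags["device_error"] | flags["ehcv"])]
    return cleaned, flags

# Tiny illustrative example.
epochs = pd.DataFrame({"counts_per_min": [0, 532, 4200, 12000, -32768, 800]})
cleaned, flags = screen_counts(epochs)
print(flags.sum().to_dict(), len(cleaned))
```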
Relationship of blood and milk cell counts with mastitic pathogens in Murrah buffaloes
C. Singh
2010-02-01
Full Text Available The present study was undertaken to see the effect of mastitic pathogens on the blood and milk counts of Murrah buffaloes. Milk and blood samples were collected from 9 mastitic Murrah buffaloes. The total leucocyte counts (TLC) and differential leucocyte counts (DLC) in blood were within the normal range, and there was a non-significant change in blood counts irrespective of the different mastitic pathogens. Normal milk quarter samples had significantly (P<0.01) lower somatic cell counts (SCC). Lymphocytes were significantly higher in normal milk samples, whereas infected samples had a significant increase (P<0.01) in milk neutrophils. S. aureus infected buffaloes had the maximum milk SCC, followed by E. coli and S. agalactiae. The influx of neutrophils into the buffalo mammary gland was maximum for S. agalactiae, followed by E. coli and S. aureus. The study indicated that the level of mastitis had no effect on blood counts but it influenced the milk SCC of normal quarters.
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The Make My Trip Count (MMTC) commuter survey, conducted in September and October 2015 by GBA, the Pittsburgh 2030 District, and 10 other regional transportation...
Nute, Christine
2014-11-25
Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.
Comparison of Prediction-Error-Modelling Criteria
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control.
Chanier, Thomas
2015-01-01
The Maya were known for their astronomical proficiency. This is demonstrated in the Mayan codices where ritual practices were related to astronomical events/predictions. Whereas Mayan mathematics were based on a vigesimal system, they used a different base when dealing with long periods of time, the Long Count Calendar (LCC), composed of different Long Count Periods: the Tun of 360 days, the Katun of 7200 days and the Baktun of 144000 days. There were two other calendars used in addition to t...
Counting Word Frequencies with Python
William J. Turkel
2012-07-01
Full Text Available Your list is now clean enough that you can begin analyzing its contents in meaningful ways. Counting the frequency of specific words in the list can provide illustrative data. Python has an easy way to count frequencies, but it requires the use of a new type of variable: the dictionary. Before you begin working with a dictionary, consider the processes used to calculate frequencies in a list.
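The dictionary-based counting described above can be sketched in a few lines; the word list here is illustrative, standing in for the lesson's cleaned data.

```python
def count_frequencies(wordlist):
    """Count how often each word occurs, using a plain Python dictionary."""
    frequencies = {}
    for word in wordlist:
        frequencies[word] = frequencies.get(word, 0) + 1
    return frequencies

# Illustrative, already-cleaned word list.
words = ["it", "was", "the", "best", "of", "times", "it", "was", "the", "worst"]
freq = count_frequencies(words)
print(sorted(freq.items(), key=lambda pair: pair[1], reverse=True))
```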
Mackie, Peter; Nellthorp, John; Laird, James
2005-01-01
Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...
Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.
2009-01-01
For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br
Glosup, J.G.; Axelrod, M.C.
1996-08-05
The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods of quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach, and we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.
Predicting energy expenditure from accelerometry counts in adolescent girls.
Schmitz, Kathryn H; Treuth, Margarita; Hannan, Peter; McMurray, Robert; Ring, Kimberly B; Catellier, Diane; Pate, Russ
2005-01-01
Calibration of accelerometer counts against oxygen consumption to predict energy expenditure has not been conducted in middle school girls. We concurrently assessed energy expenditure and accelerometer counts during physical activities on adolescent girls to develop an equation to predict energy expenditure. Seventy-four girls aged 13-14 yr performed 10 activities while wearing an Actigraph accelerometer and a portable metabolic measurement unit (Cosmed K4b2). The activities were resting, watching television, playing a computer game, sweeping, walking 2.5 and 3.5 mph, performing step aerobics, shooting a basketball, climbing stairs, and running 5 mph. Height and weight were also assessed. Mixed-model regression was used to develop an equation to predict energy expenditure (EE) (kJ·min⁻¹) from accelerometer counts. Age (mean [SD] = 14 yr [0.34]) and body-weight-adjusted correlations of accelerometer counts with EE (kJ·min⁻¹) for individual activities ranged from -0.14 to 0.59. Higher intensity activities with vertical motion were best correlated. A regression model that explained 85% of the variance of EE was developed: EE (kJ·min⁻¹) = 7.6628 + 0.1462 × [(Actigraph counts per minute - 3000)/100] + 0.2371 × (body weight in kilograms) - 0.00216 × [(Actigraph counts per minute - 3000)/100]² + 0.004077 × [(Actigraph counts per minute - 3000)/100] × (body weight in kilograms). The MCCC = 0.85, with a standard error of estimate of 5.61 kJ·min⁻¹. We developed a prediction equation for kilojoules per minute of energy expenditure from Actigraph accelerometer counts. This equation may be most useful for predicting energy expenditure in groups of adolescent girls over a period of time that will include activities of broad-ranging intensity, and may be useful to intervention researchers interested in objective measures of physical activity.
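For illustration, the regression equation above can be transcribed directly into Python. The coefficients are exactly those reported in the abstract, but the function should be treated as a sketch rather than a validated implementation, and only within the activity and age range described above.

```python
def predicted_ee_kj_per_min(counts_per_min, body_mass_kg):
    """Energy expenditure (kJ/min) from ActiGraph counts and body mass,
    transcribed from the regression equation quoted in the abstract."""
    c = (counts_per_min - 3000.0) / 100.0      # centred, rescaled counts term
    return (7.6628
            + 0.1462 * c
            + 0.2371 * body_mass_kg
            - 0.00216 * c ** 2
            + 0.004077 * c * body_mass_kg)

# Example: 4500 counts/min for a 50 kg adolescent girl.
print(round(predicted_ee_kj_per_min(4500, 50), 1), "kJ/min")
```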
Analyzing Rotor Rotating Error by Using Fractal Theory
WANG Kai; LI Yan
2004-01-01
Based on the judgement of fractional Brownian motion, this paper analyzes the radial rotating error of a precision rotor. The results indicate that the rotating error motion of the precision rotor is characterized by basic fractional Brownian motions, i.e., randomness, non-sequentiality, and, to some extent, self-similarity. Also, this paper calculates the fractal box-counting dimension of the radial rotating error and judges that the rotor error motion is stable, indicating that the motion range of the future track of the axes is relatively stable.
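As a rough illustration of the box-counting idea mentioned above, the following sketch estimates the box-counting dimension of a sampled 1-D error signal; the gridding scheme is simplified and the example data are synthetic, not the rotor measurements.

```python
import numpy as np

def box_counting_dimension(y, scales=(4, 8, 16, 32, 64, 128)):
    """Estimate the box-counting (fractal) dimension of the graph of a 1-D
    signal.  The graph is normalised to the unit square; for each scale s the
    square is split into s x s boxes and the occupied boxes are counted.  The
    dimension is the slope of log N(s) versus log s (a simplified scheme that
    only counts boxes touched by the sample points)."""
    y = np.asarray(y, dtype=float)
    t = np.linspace(0.0, 1.0, len(y))
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)
    counts = []
    for s in scales:
        ix = np.minimum((t * s).astype(int), s - 1)
        iy = np.minimum((y * s).astype(int), s - 1)
        counts.append(len(set(zip(ix, iy))))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Example: a Brownian-like error track (dimension expected near 1.5).
track = np.cumsum(np.random.default_rng(0).standard_normal(8192))
print(round(box_counting_dimension(track), 2))
```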
High quantum efficiency S-20 photocathodes in photon counting detectors
Orlov, D. A.; DeFazio, J.; Duarte Pinto, S.; Glazenborg, R.; Kernen, E.
2016-04-01
Based on conventional S-20 processes, a new series of high quantum efficiency (QE) photocathodes has been developed that can be specifically tuned for use in the ultraviolet, blue or green regions of the spectrum. The QE values exceed 30% at maximum response, and the dark count rate is found to be as low as 30 Hz/cm² at room temperature. This combination of properties along with a fast temporal response makes these photocathodes ideal for application in photon counting detectors, which is demonstrated with an MCP photomultiplier tube for single and multi-photoelectron detection.
Power Counting and Wilsonian Renormalization in Nuclear Effective Field Theory
Valderrama, Manuel Pavon
2016-01-01
Effective field theories are the most general tool for the description of low energy phenomena. They are universal and systematic: they can be formulated for any low energy system we can think of and offer a clear guide on how to calculate predictions with reliable error estimates, a feature that is called power counting. These properties can be easily understood in Wilsonian renormalization, in which effective field theories are the low energy renormalization group evolution of a more fundamental (perhaps unknown or unsolvable) high energy theory. In nuclear physics they provide the possibility of a theoretically sound derivation of nuclear forces without having to solve quantum chromodynamics explicitly. However, there is the problem of how to organize calculations within nuclear effective field theory: the traditional knowledge about power counting is perturbative but nuclear physics is not. Yet power counting can be derived in Wilsonian renormalization and there is already a fairly good understanding ...
Asymmetry in the effect of magnetic field on photon detection and dark counts in bended nanostrips
Semenov, A; Lusche, R; Ilin, K; Siegel, M; Hubers, H -W; Bralovic, N; Dopf, K; Vodolazov, D Yu
2015-01-01
Current crowding in the bends of superconducting nano-structures not only restricts the measurable critical current in such structures but also redistributes the local probabilities for dark and light counts to appear. Using structures made from strips in the form of a square spiral, which contain bends with the very same curvature with respect to the directions of the bias current and the external magnetic field, we have shown that dark counts as well as light counts at small photon energies originate from areas around the bends. The minimum in the rate of dark counts reproduces the asymmetry of the maximum critical current density as a function of the magnetic field. On the contrary, the minimum in the rate of light counts demonstrates the opposite asymmetry. The rate of light counts becomes symmetric at large currents and fields. Comparing locally computed absorption probabilities for photons and the simulated threshold detection current, we found the approximate location of the areas near the bends which deliver asymmetric light counts. Any asymmetry is a...
Submillimeter Number Counts From Statistical Analysis of BLAST Maps
Patanchon, Guillaume; Bock, James J; Chapin, Edward L; Devlin, Mark J; Dicker, Simon R; Griffin, Matthew; Gundersen, Joshua O; Halpern, Mark; Hargrave, Peter C; Hughes, David H; Klein, Jeff; Marsden, Gaelen; Mauskopf, Philip; Moncelsi, Lorenzo; Netterfield, Calvin B; Olmi, Luca; Pascale, Enzo; Rex, Marie; Scott, Douglas; Semisch, Christopher; Thomas, Nicholas; Truch, Matthew D P; Tucker, Carole; Tucker, Gregory S; Viero, Marco P; Wiebe, Donald V
2009-01-01
We describe the application of a statistical method to estimate submillimeter galaxy number counts from the confusion limited observations of the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Our method is based on a maximum likelihood fit to the pixel histogram, sometimes called 'P(D)', an approach which has been used before to probe faint counts, the difference being that here we advocate its use even for sources with relatively high signal-to-noise ratios. This method has an advantage over standard techniques of source extraction in providing an unbiased estimate of the counts from the bright end down to flux densities well below the confusion limit. We specifically analyse BLAST observations of a roughly 10 sq. deg map centered on the Great Observatories Origins Deep Survey South field. We provide estimates of number counts at the three BLAST wavelengths, 250, 350, and 500 microns, instead of counting sources in flux bins we estimate the counts at several flux density nodes connected with ...
Hanford whole body counting manual
Palmer, H.E.; Rieksts, G.A.; Lynch, T.P.
1990-06-01
This document describes the Hanford Whole Body Counting Program as it is administered by Pacific Northwest Laboratory (PNL) in support of the US Department of Energy--Richland Operations Office (DOE-RL) and its Hanford contractors. Program services include providing in vivo measurements of internally deposited radioactivity in Hanford employees (or visitors). Specific chapters of this manual deal with the following subjects: program operational charter, authority, administration, and practices, including interpreting applicable DOE Orders, regulations, and guidance into criteria for in vivo measurement frequency, etc., for the plant-wide whole body counting services; state-of-the-art facilities and equipment used to provide the best in vivo measurement results possible for the approximately 11,000 measurements made annually; procedures for performing the various in vivo measurements at the Whole Body Counter (WBC) and related facilities including whole body counts; operation and maintenance of counting equipment, quality assurance provisions of the program, WBC data processing functions, statistical aspects of in vivo measurements, and whole body counting records and associated guidance documents. 16 refs., 48 figs., 22 tabs.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Assessing Rotation-Invariant Feature Classification for Automated Wildebeest Population Counts.
Colin J Torney
Full Text Available Accurate and on-demand animal population counts are the holy grail for wildlife conservation organizations throughout the world because they enable fast and responsive adaptive management policies. While the collection of image data from camera traps, satellites, and manned or unmanned aircraft has advanced significantly, the detection and identification of animals within images remains a major bottleneck since counting is primarily conducted by dedicated enumerators or citizen scientists. Recent developments in the field of computer vision suggest a potential resolution to this issue through the use of rotation-invariant object descriptors combined with machine learning algorithms. Here we implement an algorithm to detect and count wildebeest from aerial images collected in the Serengeti National Park in 2009 as part of the biennial wildebeest count. We find that the per image error rates are greater than, but comparable to, two separate human counts. For the total count, the algorithm is more accurate than both manual counts, suggesting that human counters have a tendency to systematically over or under count images. While the accuracy of the algorithm is not yet at an acceptable level for fully automatic counts, our results show this method is a promising avenue for further research and we highlight specific areas where future research should focus in order to develop fast and accurate enumeration of aerial count data. If combined with a bespoke image collection protocol, this approach may yield a fully automated wildebeest count in the near future.
The Origins of Counting Algorithms
Cantlon, Jessica F.; Piantadosi, Steven T.; Ferrigno, Stephen; Hughes, Kelly D.; Allison M Barnard
2015-01-01
Humans’ ability to ‘count’ by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that non-human primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. Monkeys saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a...
Tree modules and counting polynomials
Kinser, Ryan
2011-01-01
We give a formula for counting tree modules for the quiver S_g with g loops and one vertex in terms of tree modules on its universal cover. This formula, along with work of Helleloid and Rodriguez-Villegas, is used to show that the number of d-dimensional tree modules for S_g is polynomial in g with the same degree and leading coefficient as the counting polynomial A_{S_g}(d, q) for absolutely indecomposables over F_q, evaluated at q=1.
Probabilistic quantum error correction
Fern, J; Fern, Jesse; Terilla, John
2002-01-01
There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
Koop, G.; Dik, N; Nielen, M; Lipman, L. J. A.
2010-01-01
The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC ...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
KIDS COUNT New Hampshire, 2000.
Shemitz, Elllen, Ed.
This Kids Count report presents statewide trends in the well-being of New Hampshire's children. The statistical report is based on 22 indicators of child well-being in 5 interrelated areas: (1) children and families (including child population, births, children living with single parent, and children experiencing parental divorce); (2) economic…
Counting a Culture of Mealworms
Ashbrook, Peggy
2007-01-01
Math is not the only topic that will be discussed when young children are asked to care for and count "mealworms," a type of insect larvae (just as caterpillars are the babies of butterflies, these larvae are babies of beetles). The following activity can take place over two months as the beetles undergo metamorphosis from larvae to adults. As the…
Verbal Counting in Bilingual Contexts
Donevska-Todorova, Ana
2015-01-01
Informal experiences in mathematics often include playful competitions among young children in counting numbers in as many as possible different languages. Can these enjoyable experiences result with excellence in the formal processes of education? This article discusses connections between mathematical achievements and natural languages within…
Shakespeare Live! and Character Counts.
Brookshire, Cathy A.
This paper discusses a live production of Shakespeare's "Macbeth" (in full costume but with no sets) for all public middle school and high school students in Harrisonburg and Rockingham, Virginia. The paper states that the "Character Counts" issues that are covered in the play are: decision making, responsibility and…
On Counting the Rational Numbers
Almada, Carlos
2010-01-01
In this study, we show how to construct a function from the set N of natural numbers that explicitly counts the set Q[superscript +] of all positive rational numbers using a very intuitive approach. The function has the appeal of Cantor's function and it has the advantage that any high school student can understand the main idea at a glance…
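The article's construction is not reproduced here; for comparison, one well-known explicit enumeration of the positive rationals is the Calkin-Wilf sequence, sketched below.

```python
from fractions import Fraction
from math import floor

def calkin_wilf(n_terms):
    """Yield the first n_terms of the Calkin-Wilf sequence, an explicit
    enumeration of the positive rationals without repeats:
    1, 1/2, 2, 1/3, 3/2, 2/3, 3, ..."""
    q = Fraction(1, 1)
    for _ in range(n_terms):
        yield q
        q = 1 / (2 * floor(q) - q + 1)   # successor rule of the sequence

print([str(q) for q in calkin_wilf(8)])
```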
Counting problems for number rings
Brakenhoff, Johannes Franciscus
2009-01-01
In this thesis we look at three counting problems connected to orders in number fields. First we study the probability that for a random polynomial f in Z[X] the ring Z[X]/f is the maximal order in Q[X]/f. Connected to this is the probability that a random polynomial has a squarefree discriminant. T
Teaching Emotionally Disturbed Students to Count Feelings.
Bartels, Cynthia S.; Calkin, Abigail B.
The paper describes a program to teach high school students with emotional and behavior problems to count their feelings, thereby improving their self concept. To aid in instruction, a hierarchy was developed which involved four phases: counting tasks completed and tasks not completed, counting independent actions in class, counting perceptions of…
Characterization of the count rate performance of modern gamma cameras
Silosky, M.; Johnson, V.; Beasley, C.; Cheenu Kappadath, S.
2013-01-01
between the estimates of τ using the decay or dual source methods under identical experimental conditions (p = 0.13). Estimates of τ increased as a power-law function with decreasing ratio of counts in the photopeak to the total counts. Also, estimates of τ increased linearly as spectral effective energy decreased. No significant difference was observed between the dependences of τ on energy window definition or incident spectrum between the decay and dual source methods. Estimates of τ using the dual source method varied as a quadratic on the ratio of the single source to combined source activities and linearly with total activity. Conclusions: The CRP curves for three modern gamma camera models have been characterized, demonstrating unexpected behavior that necessitates the determination of both τ and maximum count rate to fully characterize the CRP curve. τ was estimated under a variety of experimental conditions, based on which guidelines for the performance of CRP testing in a clinical setting have been proposed. PMID:23464339
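The abstract does not state the dead-time model explicitly; the sketch below assumes the standard paralyzable model, observed = true * exp(-true * tau), and shows how tau and the maximum observed count rate could be obtained from decay-method style data. The numbers are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def observed_rate(true_rate, tau):
    """Paralyzable dead-time model: observed = true * exp(-true * tau)."""
    return true_rate * np.exp(-true_rate * tau)

# Hypothetical decay-method data: true input rates (inferred from the decay
# curve) and the corresponding observed count rates, in counts per second.
true_rates = np.array([2e4, 5e4, 1e5, 2e5, 4e5])
measured = observed_rate(true_rates, tau=1.2e-6)      # simulated measurements

tau_fit, _ = curve_fit(observed_rate, true_rates, measured, p0=[1e-6])
print(f"fitted dead time tau = {tau_fit[0]:.2e} s; "
      f"maximum observed rate = {1.0 / (np.e * tau_fit[0]):.3g} counts/s")
```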
Correction for quadrature errors
Netterstrøm, A.; Christensen, Erik Lintz
1994-01-01
In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...
1998-01-01
To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.
ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL
1994-01-01
Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)
Errors on errors - Estimating cosmological parameter covariance
Joachimi, Benjamin
2014-01-01
Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif
2012-04-01
Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.
Graphite Nodule and Cell Count in Cast Iron
E Fraś
2007-07-01
Full Text Available In this work, a model is proposed for heterogeneous nucleation on substrates whose size distribution can be described by the Weibull statistics. It is found that the nuclei density, Nnuc, can be given in terms of the maximum undercooling, ΔTm, by Nnuc = Ns exp(-b/ΔTm), where Ns is the density of nucleation sites in the melt and b is the nucleation coefficient (b > 0). When nucleation occurs on all the possible substrates, the graphite nodule density, NV,n, or eutectic cell density, NV, after solidification equals Ns. In this work, measurements of NV,n and NV values were carried out on experimental nodular and flake graphite iron castings processed under various inoculation conditions. The volumetric nodule count NV,n or graphite eutectic cell count NV was estimated from the area nodule count NA,n or eutectic cell count NA on polished cast iron surface sections by stereological means. In addition, maximum undercoolings ΔTm were measured using thermal analysis. The experimental outcome indicates that the volumetric nodule count NV,n or graphite eutectic cell count NV can be properly described by the proposed expression NV,n = NV = Ns exp(-b/ΔTm). Moreover, the Ns and b values were experimentally determined. In particular, the proposed model suggests that the size distribution of nucleation sites is exponential in nature.
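A minimal numerical transcription of the nucleation relation quoted above, N_V = N_s exp(-b/ΔTm); the parameter values are illustrative, not the fitted values from the paper.

```python
import numpy as np

def nuclei_density(delta_t_max, n_s, b):
    """Nodule/eutectic-cell density from maximum undercooling,
    N_V = N_s * exp(-b / delta_T_max), as stated in the abstract."""
    return n_s * np.exp(-b / np.asarray(delta_t_max, dtype=float))

# Illustrative parameter values (not the fitted values from the paper).
n_s, b = 1.0e6, 20.0                      # nucleation sites per mm^3, K
for dT in (2.0, 5.0, 10.0):               # maximum undercooling, K
    print(f"dT = {dT:>4} K  ->  N_V = {nuclei_density(dT, n_s, b):.3g} per mm^3")
```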
Cosmic error and the statistics of large scale structure
Szapudi, I; Szapudi, Istvan; Colombi, Stephane
1995-01-01
We examine the errors on counts in cells extracted from galaxy surveys. The measurement error, related to the finite number of sampling cells, is disentangled from the "cosmic error", due to the finiteness of the survey. Using the hierarchical model and assuming locally Poisson behavior, we identified three contributions to the cosmic error. The finite volume effect is proportional to the average of the two-point correlation function over the whole survey; it accounts for possible fluctuations of the density field at scales larger than the sample size. The edge effect is related to the geometry of the survey; it accounts for the fact that objects near the boundary carry less statistical weight than those further away from it. The discreteness effect is due to the fact that the underlying smooth random field is sampled with a finite number of objects. This is the "shot noise" error. Measurements of errors in artificial hierarchical samples showed excellent agreement with our predictions. The probability dist...
Experimental reconstruction of photon statistics without photon counting.
Zambra, Guido; Andreoni, Alessandra; Bondani, Maria; Gramegna, Marco; Genovese, Marco; Brida, Giorgio; Rossi, Andrea; Paris, Matteo G A
2005-08-05
Experimental reconstructions of photon number distributions of both continuous-wave and pulsed light beams are reported. Our scheme is based on on/off avalanche photo-detection assisted by maximum-likelihood estimation and does not involve photon counting. Reconstructions of the distribution for both semiclassical and quantum states of light are reported for single-mode as well as for multi-mode beams.
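A generic maximum-likelihood sketch of the on/off reconstruction idea described above: the no-click probability at efficiency eta is P_off(eta) = sum_n p_n (1 - eta)^n, so click statistics recorded at several eta values constrain the photon-number distribution p_n. This is not the estimation scheme of the paper; it only illustrates the principle on simulated data.

```python
import numpy as np
from math import factorial
from scipy.optimize import minimize

def neg_log_lik(theta, etas, clicks, trials, n_max):
    p = np.exp(theta - theta.max())
    p /= p.sum()                                   # softmax -> valid distribution p_n
    n = np.arange(n_max + 1)
    p_on = 1.0 - (p[None, :] * (1.0 - etas[:, None]) ** n).sum(axis=1)
    p_on = np.clip(p_on, 1e-12, 1 - 1e-12)
    return -np.sum(clicks * np.log(p_on) + (trials - clicks) * np.log(1.0 - p_on))

n_max, trials = 8, 20000
etas = np.linspace(0.1, 0.9, 9)                    # detector efficiencies probed
p_true = np.array([np.exp(-1.5) * 1.5 ** k / factorial(k) for k in range(n_max + 1)])
p_on_true = 1.0 - ((1.0 - etas[:, None]) ** np.arange(n_max + 1) * p_true).sum(axis=1)
clicks = np.random.default_rng(1).binomial(trials, p_on_true)   # simulated clicks

res = minimize(neg_log_lik, np.zeros(n_max + 1),
               args=(etas, clicks, trials, n_max), method="Powell")
p_hat = np.exp(res.x - res.x.max())
p_hat /= p_hat.sum()
print(np.round(p_hat[:5], 3))                      # reconstructed p_0 ... p_4
```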
Predictive Model Assessment for Count Data
2007-09-05
critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and ... [Recoverable report fragments: Figure 5, boxplots of various scores for the patent-data count regressions; Table 1, four predictive models for larynx cancer counts in Germany, 1998-2002.]
PV Maximum Power-Point Tracking by Using Artificial Neural Network
Farzad Sedaghati; Ali Nahavandi; Mohammad Ali Badamchizadeh; Sehraneh Ghaemi; Mehdi Abedinpour Fallah
2012-01-01
In this paper, the use of an artificial neural network (ANN) for tracking the maximum power point is discussed. The error back-propagation method is used to train the neural network. The neural network has the advantages of fast and precise tracking of the maximum power point. In this method, the neural network is used to specify the reference voltage of the maximum power point under different atmospheric conditions. By properly controlling the dc-dc boost converter, tracking of the maximum power point is feasible. To verify...
Optics Technologies for LUVOIR & HabEx: Polarization & Mirror Count
Breckinridge, James B.
2017-01-01
We show that polarization aberrations and mirror count will limit the optical system performance of LUVOIR and HabEx and thus both their exoplanet science yield and their UV science. In addition, we show how increased mirror count reduces optical system transmittance and increases cost in large aperture telescopes. We make the observation that orthogonally polarized light does not interfere to form an intensity image. We show how the two polarization aberrations (diattenuation and retardance) distort the system PSF, decrease transmittance, and increase the unwanted background above that predicted using scalar models. An optical system corrected for geometric path difference errors is a necessary but not sufficient condition for the perfect image formation needed to directly image terrestrial exoplanets. Geometric (trigonometric) path difference errors are controlled using adaptive optics (tip-tilt & wavefront), active metrology and precision pointing. However, image quality is also determined by several physical optics factors: diffraction, polarization, partial coherence, and chromatism, all of which degrade image quality and are not corrected through the control of geometric path difference. The source of physical optics errors lies in the opto-mechanical packaging of optical elements, masks, stops and the thin film coatings needed to obtain high transmittance. Adaptive optics corrects wavefront errors described by geometric or optical path length errors but not those wavefront errors introduced by physical optics. We show that for large telescopes each reflection costs over $100 million to increase the collecting area in order to recover lost SNR. Examples will be shown. The LUVOIR and HabEx systems will need fewer optical surfaces than current systems.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find the series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
Full Text Available We find the series of example theories for which the relativistic limit of maximum tension Fmax = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Study of thin-film resistor resistance error
Spirin V. G.
2009-10-01
Full Text Available A relationship between a thin-film resistor resistance error and mask misalignment with a substrate conductive layer at the second photolithography stage for a thin-film resistor design in which the resistive element does not overlap conductor pads is studied. The error value is at a maximum when the resistor aspect ratio is equal to 1.0.
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, results in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Errors in Radiologic Reporting
Esmaeel Shokrollahi
2010-05-01
Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologists' professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways: universal vs. individual; human related vs. system related; perceptive vs. cognitive errors; 1. descriptive, 2. interpretative, 3. decision related. Perceptive errors: 1. false positive, 2. false negative (non-identification, erroneous identification). Cognitive errors: knowledge-based, psychological.
Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca
2015-09-01
Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies are estimated to occur in 2-20% of reports. Fortunately, most of them are minor errors, or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation rates rise in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly with the treatment team.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....
Chanier, Thomas
2013-01-01
The Maya had a very elaborate and accurate calendar. First, the Mayan Long Count Calendar (LCC) was used to pinpoint historical events from a selected "beginning of time". It is also characterized by the existence of a religious month, the Tzolk'in of 260 days, and a civic year, the Haab' of 365 days. The LCC is supposed to begin on 11 August 3114 BC, known as the Goodman-Martinez-Thompson (GMT) correlation to the Gregorian calendar based on historical facts, and to end on 21 December 2012, corresponding to a period of approximately 5125 years or 13 Baktun. We propose here to explain the origin of the 13 Baktun cycle, the Long Count Periods, and the religious month Tzolk'in.
Hopkins, Sarah; Bayliss, Donna
2017-01-01
In this research, we examined how 200 students in seventh grade (around 12 years old) solved simple addition problems. A cluster approach revealed that less than half of the cohort displayed proficiency with simple addition: 35% predominantly used min-counting and were accurate, and 16% frequently made min-counting errors. Students who frequently…
Counting Irreducible Double Occurrence Words
Burns, Jonathan
2011-01-01
A double occurrence word $w$ over a finite alphabet $\Sigma$ is a word in which each alphabet letter appears exactly twice. Such words arise naturally in the study of topology, graph theory, and combinatorics. Recently, double occurrence words have been used for studying DNA recombination events. We develop formulas for counting and enumerating several elementary classes of double occurrence words such as palindromic, irreducible, and strongly-irreducible words.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
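In the same spirit, a minimal Poisson maximum-likelihood fit of a single Gaussian emission line over a flat background can be sketched as follows; this is not the CORA implementation, and the spectrum is simulated.

```python
import numpy as np
from scipy.optimize import minimize

def model_counts(params, grid):
    """Expected counts per bin: flat background plus one Gaussian emission line."""
    amp, center, sigma, background = params
    return background + amp * np.exp(-0.5 * ((grid - center) / sigma) ** 2)

def neg_log_likelihood(params, grid, counts):
    mu = np.clip(model_counts(params, grid), 1e-12, None)
    # Poisson log-likelihood, dropping the parameter-independent log(counts!) term.
    return np.sum(mu - counts * np.log(mu))

# Simulated low-count spectrum around a 13.5 Angstrom line (values illustrative).
rng = np.random.default_rng(2)
grid = np.linspace(13.3, 13.7, 80)
counts = rng.poisson(model_counts((12.0, 13.50, 0.02, 0.5), grid))

fit = minimize(neg_log_likelihood, x0=(5.0, 13.49, 0.03, 1.0),
               args=(grid, counts), method="Nelder-Mead")
print("amplitude, center, sigma, background:", np.round(fit.x, 3))
```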
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations.
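The elementary Markov-chain move underlying samplers of this kind is the degree-preserving double edge swap; the sketch below shows that move only (with a naive forbidden-edge check) and makes no claim about the mixing or approximation guarantees established in the paper.

```python
import random

def double_edge_swap(edges, forbidden=frozenset(), tries=100):
    """Attempt one degree-preserving double edge swap on a simple graph.

    `edges` is a set of frozensets {u, v}.  The move replaces {a, b}, {c, d}
    with {a, d}, {c, b} whenever the result stays simple and avoids the
    forbidden edges; chains of such swaps are the usual Markov-chain moves
    for sampling realizations of a fixed degree sequence."""
    for _ in range(tries):
        e1, e2 = random.sample([tuple(e) for e in edges], 2)
        a, b = e1 if random.random() < 0.5 else e1[::-1]     # random orientation
        c, d = e2 if random.random() < 0.5 else e2[::-1]
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if (len(new1) == 2 and len(new2) == 2
                and not ({new1, new2} & (edges | set(forbidden)))):
            edges -= {frozenset(e1), frozenset(e2)}
            edges |= {new1, new2}
            return True
    return False

# Example: rewire a labeled 4-cycle into another realization of degrees (2, 2, 2, 2).
g = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(double_edge_swap(g), sorted(tuple(sorted(e)) for e in g))
```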
Optimal experimental design for nano-particle atom-counting from high-resolution STEM images
De Backer, A.; De wael, A.; Gonnissen, J.; Van Aert, S., E-mail: sandra.vanaert@uantwerpen.be
2015-04-15
In the present paper, the principles of detection theory are used to quantify the probability of error for atom-counting from high resolution scanning transmission electron microscopy (HR STEM) images. Binary and multiple hypothesis testing have been investigated in order to determine the limits to the precision with which the number of atoms in a projected atomic column can be estimated. The probability of error has been calculated when using STEM images, scattering cross-sections or peak intensities as a criterion to count atoms. Based on this analysis, we conclude that scattering cross-sections perform almost equally well as images and perform better than peak intensities. Furthermore, the optimal STEM detector design can be derived for atom-counting using the expression for the probability of error. We show that for very thin objects LAADF is optimal and that for thicker objects the optimal inner detector angle increases.
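As a hedged illustration of the binary-hypothesis-testing viewpoint: if the scattering cross-sections measured for columns of n and n+1 atoms are modelled as Gaussians with equal standard deviation and equal priors, the minimum probability of error follows from the textbook result below. This is not the paper's full multiple-hypothesis analysis, and the numbers are invented.

```python
from math import erfc, sqrt

def prob_error_binary(mu_n, mu_n_plus_1, sigma):
    """Minimum probability of error when deciding between n and n+1 atoms,
    assuming equal priors and Gaussian-distributed scattering cross-sections
    with equal standard deviation sigma (textbook binary hypothesis test)."""
    return 0.5 * erfc(abs(mu_n_plus_1 - mu_n) / (2.0 * sqrt(2.0) * sigma))

# Invented numbers: cross-section means 1.00 and 1.08 (arbitrary units),
# measurement spread 0.03.
print(f"P_error = {prob_error_binary(1.00, 1.08, 0.03):.3f}")
```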
Siman, W.; Mawlawi, O. R.; Mikell, J. K.; Mourtada, F.; Kappadath, S. C.
2017-01-01
The aims of this study were to evaluate the effects of noise, motion blur, and motion compensation using quiescent-period gating (QPG) on the activity concentration (AC) distribution—quantified using the cumulative AC volume histogram (ACVH)—in count-limited studies such as 90Y-PET/CT. An International Electrotechnical Commission phantom filled with low 18F activity was used to simulate clinical 90Y-PET images. PET data were acquired using a GE-D690 when the phantom was static and subject to 1-4 cm periodic 1D motion. The static data were down-sampled into shorter durations to determine the effect of noise on ACVH. Motion-degraded PET data were sorted into multiple gates to assess the effect of motion and QPG on ACVH. Errors in ACVH at AC90 (minimum AC that covers 90% of the volume of interest (VOI)), AC80, and ACmean (average AC in the VOI) were characterized as a function of noise and amplitude before and after QPG. Scan-time reduction increased the apparent non-uniformity of sphere doses and the dispersion of ACVH. These effects were more pronounced in smaller spheres. Noise-related errors in ACVH at AC20 to AC70 were smaller (15%). The accuracy of ACmean was largely independent of the total count. Motion decreased the observed AC and skewed the ACVH toward lower values; the severity of this effect depended on motion amplitude and tumor diameter. The errors in AC20 to AC80 for the 17 mm sphere were -25% and -55% for motion amplitudes of 2 cm and 4 cm, respectively. With QPG, the errors in AC20 to AC80 of the 17 mm sphere were reduced to -15% for motion amplitudes 0.5, QPG was effective at reducing errors in ACVH despite increases in image non-uniformity due to increased noise. ACVH is believed to be more relevant than mean or maximum AC to calculate tumor control and normal tissue complication probability. However, caution needs to be exercised when using ACVH in post-therapy 90Y imaging because of its susceptibility to image
Photon counting spectroscopic CT with dynamic beam attenuator
Atak, Haluk
2016-01-01
Purpose: Photon counting (PC) computed tomography (CT) can provide material-selective CT imaging at the lowest patient dose, but it suffers from a suboptimal count rate. A dynamic beam attenuator (DBA) can help with count rate by modulating the x-ray beam intensity such that the low-attenuating areas of the patient receive lower exposure and the detector behind these areas is not overexposed. However, the DBA may harden the beam and cause artifacts and errors. This work investigates the positive and negative effects of using a DBA in PCCT. Methods: A simple PCCT with a single energy bin, spectroscopic PCCT with 2 and 5 energy bins, and conventional energy-integrating CT, with and without the DBA, were simulated and investigated using a 120 kVp tube voltage and 14 mGy air dose. The DBAs were modeled as made from soft-tissue (ST) equivalent material, iron (Fe), and a holmium (Ho) K-edge material. A cylindrical CT phantom and a chest phantom with iodine and CaCO3 contrast elements were used. Image artifacts and quantification errors in general and mat...
Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently
2013-01-01
Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for a 3-regular graph the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we give a representation of the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Effects of lek count protocols on greater sage-grouse population trend estimates
Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.
2016-01-01
Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without considering the equal margin posteriors from the two views; in the second step, we impose the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
QuorUM: An Error Corrector for Illumina Reads
Guillaume Marçais; Yorke, James A.; Aleksey Zimin
2015-01-01
Motivation: Illumina sequencing data can provide high coverage of a genome by relatively short (most often 100 bp to 150 bp) reads at a low cost. Even with a low (advertised 1%) error rate, 100× coverage Illumina data has, on average, an error in some read at every base of the genome. These errors make handling the data more complicated because they result in a large number of low-count erroneous k-mers in the reads. However, there is enough information in the reads to correct most of the sequenc...
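The role of low-count k-mers mentioned above can be illustrated with a toy counter; this is a generic coverage heuristic with assumed parameters (k and the count threshold), not QuorUM's correction algorithm.

from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer occurring in a collection of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def likely_erroneous(counts, min_count=3):
    """Flag k-mers below a coverage threshold as probable error k-mers
    (a simple heuristic: true genomic k-mers recur at high coverage)."""
    return {kmer for kmer, c in counts.items() if c < min_count}

reads = ["ACGTACGTTA", "ACGTACGTTT", "ACGTACGTTA"]   # toy reads
print(likely_erroneous(kmer_counts(reads, k=5)))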
Inpatients’ medical prescription errors
Aline Melo Santos Silva
2009-09-01
Objective: To identify and quantify the most frequent prescription errors in inpatients' medical prescriptions. Methods: A survey of prescription errors was performed in the inpatients' medical prescriptions, from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, which involved the healthcare team as a whole. Among the 16 types of errors detected in prescriptions, the most frequent occurrences were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcriptions to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions with indication but not specifying allergy. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analyses before preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves the inpatient's safety and the success of prescribed therapy.
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in a multi-platform multisensor tracking system. The unobservability problem of bearings-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bound.
Photon counting arrays for AO wavefront sensors
Vallerga, J; McPhate, J; Mikulec, Bettina; Clark, Allan G; Siegmund, O; CERN. Geneva
2005-01-01
Future wavefront sensors for AO on large telescopes will require a large number of pixels and must operate at high frame rates. Unfortunately for CCDs, there is a readout noise penalty for operating faster, and this noise can add up rather quickly when considering the number of pixels required for the extended shape of a sodium laser guide star observed with a large telescope. Imaging photon counting detectors have zero readout noise and many pixels, but have suffered in the past from low QE at the longer wavelengths (>500 nm). Recent developments in GaAs photocathode technology, CMOS ASIC readouts and FPGA processing electronics have resulted in noiseless WFS detector designs that are competitive with silicon array detectors, though at ~40% the QE of CCDs. We review noiseless array detectors and compare their centroiding performance with CCDs using the best available characteristics of each. We show that for sub-aperture binning of 6x6 and greater, noiseless detectors have a smaller centroid error at flu...
Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.
2010-01-01
The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms,
Energy intake estimation from counts of chews and swallows.
Fontana, Juan M; Higgins, Janine A; Schuckers, Stephanie C; Bellisle, France; Pan, Zhaoxing; Melanson, Edward L; Neuman, Michael R; Sazonov, Edward
2015-02-01
Current, validated methods for dietary assessment rely on self-report, which tends to be inaccurate, time-consuming, and burdensome. The objective of this work was to demonstrate the suitability of estimating energy intake using individually-calibrated models based on Counts of Chews and Swallows (CCS models). In a laboratory setting, subjects consumed three identical meals (training meals) and a fourth meal with different content (validation meal). Energy intake was estimated by four different methods: weighed food records (gold standard), diet diaries, photographic food records, and CCS models. Counts of chews and swallows were measured using wearable sensors and video analysis. Results for the training meals demonstrated that CCS models presented the lowest reporting bias and a lower error as compared to diet diaries. For the validation meal, CCS models showed reporting errors that were not different from the diary or the photographic method. The increase in error for the validation meal may be attributed to differences in the physical properties of foods consumed during training and validation meals. However, this may be potentially compensated for by including correction factors into the models. This study suggests that estimation of energy intake from CCS may offer a promising alternative to overcome limitations of self-report. Copyright © 2014 Elsevier Ltd. All rights reserved.
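A minimal sketch of what an individually calibrated CCS-style model could look like, assuming a simple linear form (the study's actual model specification is not given in the abstract) and hypothetical per-subject training data from three meals:

import numpy as np

# Hypothetical counts of chews, swallows and weighed intake (kcal) for one subject
chews    = np.array([310.0, 295.0, 322.0])
swallows = np.array([ 45.0,  41.0,  48.0])
intake   = np.array([560.0, 540.0, 585.0])

# Least-squares fit of intake ~ a*chews + b*swallows + c (illustrative only)
X = np.column_stack([chews, swallows, np.ones_like(chews)])
coef, *_ = np.linalg.lstsq(X, intake, rcond=None)

# Predict energy intake for a validation meal with new counts
new_meal = np.array([300.0, 44.0, 1.0])
print(float(new_meal @ coef))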
Constructing biological pathways by a two-step counting approach.
Hsiuying Wang
Networks are widely used in biology to represent the relationships between genes and gene functions. In Boolean biological models, it is mainly assumed that there are two states to represent a gene: on-state and off-state. It is typically assumed that the relationship between two genes can be characterized by two kinds of pairwise relationships: similarity and prerequisite. Many approaches have been proposed in the literature to reconstruct biological relationships. In this article, we propose a two-step method to reconstruct the biological pathway when the binary array data have measurement error. For a pair of genes in a sample, the first step of this approach is to assign counting numbers for every relationship and select the relationship with counting number greater than a threshold. The second step is to calculate the asymptotic p-values for hypotheses of possible relationships and select relationships with a large p-value. This new method has the advantages of easy calculation for the counting numbers and simple closed forms for the p-value. The simulation study and real data example show that the two-step counting method can accurately reconstruct the biological pathway and outperform the existing methods. Compared with the other existing methods, this two-step method can provide a more accurate and efficient alternative approach for reconstructing the biological network.
Automatic counting of microglial cell activation and its applications
Beatriz I Gallego Collado; Pablo de Gracia
2016-01-01
Glaucoma is a multifactorial optic neuropathy characterized by the damage and death of the retinal ganglion cells. This disease results in vision loss and blindness. Any vision loss resulting from the disease cannot be restored and nowadays there is no available cure for glaucoma; however, early detection and treatment could offer neuronal protection and avoid later serious damage to the visual function. A full understanding of the etiology of the disease will still require the contribution of many scientific efforts. Glial activation has been observed in glaucoma, with microglial proliferation a hallmark of this neurodegenerative disease. A typical project studying these cellular changes involved in glaucoma often needs thousands of images, from several animals, covering different layers and regions of the retina. The gold standard to evaluate them is the manual count. This method requires a large amount of time from specialized personnel. It is a tedious process and prone to human error. We present here a new method to count microglial cells by using a computer algorithm. It counts in one hour the same number of images that a researcher counts in four weeks, with no loss of reliability.
Power counting and Wilsonian renormalization in nuclear effective field theory
Valderrama, Manuel Pavón
2016-05-01
Effective field theories are the most general tool for the description of low energy phenomena. They are universal and systematic: they can be formulated for any low energy systems we can think of and offer a clear guide on how to calculate predictions with reliable error estimates, a feature that is called power counting. These properties can be easily understood in Wilsonian renormalization, in which effective field theories are the low energy renormalization group evolution of a more fundamental — perhaps unknown or unsolvable — high energy theory. In nuclear physics they provide the possibility of a theoretically sound derivation of nuclear forces without having to solve quantum chromodynamics explicitly. However there is the problem of how to organize calculations within nuclear effective field theory: the traditional knowledge about power counting is perturbative but nuclear physics is not. Yet power counting can be derived in Wilsonian renormalization and there is already a fairly good understanding of how to apply these ideas to non-perturbative phenomena and in particular to nuclear physics. Here we review a few of these ideas, explain power counting in two-nucleon scattering and reactions with external probes and hint at how to extend the present analysis beyond the two-body problem.
Automatic counting of microglial cell activation and its applications
Beatriz I Gallego
2016-01-01
Glaucoma is a multifactorial optic neuropathy characterized by the damage and death of the retinal ganglion cells. This disease results in vision loss and blindness. Any vision loss resulting from the disease cannot be restored and nowadays there is no available cure for glaucoma; however, early detection and treatment could offer neuronal protection and avoid later serious damage to the visual function. A full understanding of the etiology of the disease will still require the contribution of many scientific efforts. Glial activation has been observed in glaucoma, with microglial proliferation a hallmark of this neurodegenerative disease. A typical project studying these cellular changes involved in glaucoma often needs thousands of images, from several animals, covering different layers and regions of the retina. The gold standard to evaluate them is the manual count. This method requires a large amount of time from specialized personnel. It is a tedious process and prone to human error. We present here a new method to count microglial cells by using a computer algorithm. It counts in one hour the same number of images that a researcher counts in four weeks, with no loss of reliability.
Clemens Maidhof
2013-07-01
To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.
An error assessment of the kriging based approximation model using a mean square error
Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
A kriging model is a kind of approximation model used as a deterministic surrogate for a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximated model. It is generally known that the Mean Square Error (MSE) obtained from a kriging model cannot provide statistically exact error bounds, in contrast to a response surface method, so cross validation is mainly used instead. But cross validation also has many uncertainties, and it cannot be used when a maximum error is required over a given region. To solve this problem, we first proposed a modified mean square error which can take relative errors into account. Using the modified mean square error, we developed a strategy of adding a new sample at the location where the MSE is largest when the MSE is used for the assessment of the kriging model. Finally, we offer guidelines for the use of the MSE obtained from the kriging model. Four test problems show that the proposed strategy is a proper method for assessing the accuracy of the kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for an exact function approximation.
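The sample-addition idea can be sketched with an off-the-shelf Gaussian-process (kriging) surrogate: refit, locate the point where the predictive standard deviation is largest, and sample there. This sketch uses the plain predictive uncertainty rather than the authors' modified MSE, assumes scikit-learn is available, and uses an arbitrary test function and kernel.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                   # stand-in for an expensive simulation
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.0], [0.4], [1.0]])         # initial samples
y = f(X).ravel()
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

for _ in range(5):                          # adaptive sampling loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]      # where the predicted error is largest
    X = np.vstack([X, [x_new]])
    y = np.append(y, f(x_new[0]))

print(X.ravel())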
Chen, Jincan; Yan, Zijun; Wu, Liqing
1996-06-01
Considering a thermoelectric generator as a heat engine cycle, the general differential equations of the temperature field inside thermoelectric elements are established by means of nonequilibrium thermodynamics. These equations are used to study the influence of heat leak, Joule's heat, and Thomson heat on the performance of the thermoelectric generator. New expressions are derived for the power output and the efficiency of the thermoelectric generator. The maximum power output is calculated and the optimal matching condition of load is determined. The maximum efficiency is discussed by a representative numerical example. The aim of this research is to provide some novel conclusions and redress some errors existing in a related investigation.
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. In contrast to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.
Alaska Steller Sea Lion Pup Count Database
National Oceanic and Atmospheric Administration, Department of Commerce — This database contains counts of Steller sea lion pups on rookeries in Alaska made between 1961 and 2015. Pup counts are conducted in late June-July. Pups are...
CalCOFI Egg Counts Positive Tows
National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...
CalCOFI Larvae Counts Positive Tows
National Oceanic and Atmospheric Administration, Department of Commerce — Fish larvae counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets],...
von Clarmann, T.
2014-09-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0 \leq t \leq \lfloor \frac{n-1}{2} \rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Maximum Likelihood Position Location with a Limited Number of References
D. Munoz-Rodriguez
2011-04-01
A Position Location (PL) scheme for mobile users on the outskirts of coverage areas is presented. The proposed methodology makes it possible to obtain location information with only two land-fixed references. We introduce a general formulation and show that maximum-likelihood estimation can provide adequate PL information in this scenario. The Root Mean Square (RMS) error and error-distribution characterization are obtained for different propagation scenarios. In addition, simulation results and comparisons to another method are provided, showing the accuracy and the robustness of the method proposed. We study accuracy limits of the proposed methodology for different propagation environments and show that even in the case of mismatch in the error variances, good PL estimation is feasible.
Maximum likelihood identification of aircraft stability and control derivatives
Mehra, R. K.; Stepner, D. E.; Tyler, J. S.
1974-01-01
Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.
Galaxy evolution at low redshift?; 1, optical counts
Dennefeld, M
1996-01-01
We present bright galaxy number counts in the blue (16
Pharmaceutical Pill Counting and Inspection Using a Capacitive Sensor
Ganesan LETCHUMANAN
2008-01-01
A capacitive sensor for high-speed counting and inspection of pharmaceutical products is proposed and evaluated. The sensor is based on a patented Electrostatic Field Sensor (EFS) device, previously developed by Sparc Systems Limited. However, the sensor head proposed in this work has a significantly different geometry and has been designed with a rectangular inspection aperture of 160 mm × 21 mm, which best meets applications where a larger count throughput is required with a single sensor. Finite element modelling has been used to simulate the electrostatic fields generated within the sensor, and as a design tool for optimising the sensor head configuration. The actual and simulated performance of the sensor is compared and analysed in terms of its ability to discriminate damaged products and to detect miscount errors.
Distinct counting with a self-learning bitmap
Chen, Aiyou; Shepp, Larry; Nguyen, Tuan
2011-01-01
Counting the number of distinct elements (cardinality) in a dataset is a fundamental problem in database management. In recent years, due to many of its modern applications, there has been significant interest to address the distinct counting problem in a data stream setting, where each incoming data can be seen only once and cannot be stored for long periods of time. Many probabilistic approaches based on either sampling or sketching have been proposed in the computer science literature, that only require limited computing and memory resources. However, the performances of these methods are not scale-invariant, in the sense that their relative root mean square estimation errors (RRMSE) depend on the unknown cardinalities. This is not desirable in many applications where cardinalities can be very dynamic or inhomogeneous and many cardinalities need to be estimated. In this paper, we develop a novel approach, called self-learning bitmap (S-bitmap) that is scale-invariant for cardinalities in a specified range....
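As background on bitmap-style estimators, classical linear counting hashes each element into an m-bit map and estimates the number of distinct elements from the fraction of bits still zero; it is shown here only to illustrate the family of methods and is not the self-learning S-bitmap proposed in the paper.

import hashlib
import math

def linear_count(stream, m=4096):
    """Classical linear-counting estimate of the number of distinct elements:
    n_hat = -m * ln(V), where V is the fraction of zero bits in the bitmap."""
    bitmap = [0] * m
    for item in stream:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) % m
        bitmap[h] = 1
    zeros = bitmap.count(0)
    if zeros == 0:
        return float("inf")      # bitmap saturated: m was chosen too small
    return -m * math.log(zeros / m)

data = (i % 1500 for i in range(100_000))    # stream with 1500 distinct values
print(round(linear_count(data)))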
Dr. Grace Zhang
2000-01-01
Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either so long as the correction gets done. Most students didn't mind peer correcting provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correcting, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in Chinese language classrooms and that it may also have wider implications for other languages.
Development of transmission error tester for face gears
Shi, Zhao-yao; Lu, Xiao-ning; Chen, Chang-he; Lin, Jia-chun
2013-10-01
A tester for measuring the transmission error of face gears was developed based on the single-flank rolling principle. The mechanical host combines vertical and horizontal structures. The tester mainly consists of a base, precision spindles, a grating measurement system and a control unit. The structure of the precision spindles was designed and their rotation accuracy was improved. Key techniques, such as clamping, positioning and adjustment of the gears, were investigated. In order to collect transmission error data, a high-frequency clock-pulse subdivision counting method with higher measurement resolution was proposed. The developed tester can inspect errors such as the transmission error of the gear pair, the tangential composite deviation of the measured face gear, the pitch deviation and the eccentricity error. The measurement results can be analyzed by the tester, which meets face gear quality testing requirements for accuracy grade 5.
Conically scanning lidar error in complex terrain
Ferhat Bingöl
2009-05-01
Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid, implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10%. In order to predict the error for various wind directions, the flows at both sites are simulated with the linearized flow model WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
Efficient Image Transmission Through Analog Error Correction
Liu, Yang; Li,; Xie, Kai
2011-01-01
This paper presents a new paradigm for image transmission through analog error correction codes. Conventional schemes rely on digitizing images through quantization (which inevitably causes significant bandwidth expansion) and transmitting binary bit-streams through digital error correction codes (which do not automatically differentiate the different levels of significance among the bits). To strike a better overall performance in terms of transmission efficiency and quality, we propose to use a single analog error correction code in lieu of digital quantization, digital code and digital modulation. The key is to get analog coding right. We show that this can be achieved by cleverly exploiting an elegant "butterfly" property of chaotic systems. Specifically, we demonstrate a tail-biting triple-branch baker's map code and its maximum-likelihood decoding algorithm. Simulations show that the proposed analog code can actually outperform digital turbo code, one of the best codes known to date. The results and fin...
DC KIDS COUNT e-Databook Indicators
DC Action for Children, 2012
2012-01-01
This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…
Monte Carlo Simulation of Counting Experiments.
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
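The construction summarized above can be reproduced numerically: hold the expected number of counts in the full interval fixed, increase the number of subintervals (each holding at most one count), and the binomial count distribution approaches the Poisson limit. The parameters below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
mean_counts = 5.0          # expected counts in the full interval (arbitrary)
trials = 100_000

for n_sub in (10, 100, 1000):                    # number of subintervals
    p = mean_counts / n_sub                      # success probability per subinterval
    counts = rng.binomial(n_sub, p, size=trials)
    # mean stays near 5; variance approaches 5, the Poisson signature
    print(n_sub, counts.mean(), counts.var())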
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Digital coincidence counting - initial results
Butcher, K. S. A.; Watt, G. C.; Alexiev, D.; van der Gaast, H.; Davies, J.; Mo, Li; Wyllie, H. A.; Keightley, J. D.; Smith, D.; Woods, M. J.
2000-08-01
Digital Coincidence Counting (DCC) is a new technique in radiation metrology, based on the older method of analogue coincidence counting. It has been developed by the Australian Nuclear Science and Technology Organisation (ANSTO), in collaboration with the National Physical Laboratory (NPL) of the United Kingdom, as a faster, more reliable means of determining the activity of ionising radiation samples. The technique employs a dual channel analogue-to-digital converter acquisition system for collecting pulse information from a 4π beta detector and an NaI(Tl) gamma detector. The digitised pulse information is stored on a high-speed hard disk and timing information for both channels is also stored. The data may subsequently be recalled and analysed using software-based algorithms. In this letter we describe some recent results obtained with the new acquisition hardware being tested at ANSTO. The system is fully operational and is now in routine use. Results for 60Co and 22Na radiation activity calibrations are presented; initial results with 153Sm are also briefly mentioned.
Antonio Boldrini
2013-06-01
Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Carlson, Per J; Carlson, Per; Wannemark, Conny
2005-01-01
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy E as E^-2.7, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy and distortions with a sinusoidal form appear starting at an energy that depends significantly on the error distribution but at an energy lower than that corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, often having different error distributions.
Influence of Ephemeris Error on GPS Single Point Positioning Accuracy
Lihua, Ma; Wang, Meng
2013-09-01
A Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to determine its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded by the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error is along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometry between the observer and the satellite constellation over the time period considered.
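The line-of-sight dependence can be made concrete: to first order, the pseudorange error contributed by a satellite ephemeris error is the projection of that error vector onto the receiver-to-satellite unit vector. The positions and error vector below are made up for illustration.

import numpy as np

def los_range_error(sat_pos, rcv_pos, ephemeris_error):
    """First-order pseudorange error: projection of the satellite position
    error onto the receiver-to-satellite line of sight (ECEF metres)."""
    los = sat_pos - rcv_pos
    u = los / np.linalg.norm(los)
    return float(ephemeris_error @ u)

sat = np.array([15_600e3, 7_540e3, 20_140e3])   # hypothetical satellite position (m)
rcv = np.array([ 1_917e3, 6_029e3,   -80e3])    # hypothetical receiver position (m)
err = np.array([2.0, -1.0, 3.0])                # hypothetical ephemeris error (m)
print(los_range_error(sat, rcv, err))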
1985-01-01
A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.
LIBERTARISMO & ERROR CATEGORIAL
Carlos G. Patarroyo G.
2009-01-01
This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.
How much do women count if they are not counted?
Federica Taddia
2006-01-01
The condition of women throughout the world is marked by countless injustices and violations of the most fundamental rights established by the Universal Declaration of Human Rights, and every culture is potentially prone to commit discrimination against women in various forms. Women are worse fed, more exposed to physical violence, more exposed to diseases and less educated; they have less access to, or are excluded from, vocational training paths; they are the most vulnerable among prisoners of conscience, refugees and immigrants and the least considered within ethnic minorities; from their very childhood, women are humiliated, undernourished, sold, raped and killed; their work is generally less paid compared to men's work and in some countries they are victims of forced marriages. This condition is the result of old traditions that implicit gender-differentiated education has long promoted through cultural models based on theories, practices and policies marked by discrimination and structured differentially for men and women. Within these cultural models, the basic educational institutions have played and still play a major role in perpetuating such traditions. Nevertheless, if we want to overcome inequalities and provide women with empowerment, we have to start right from the educational institutions and in particular from school, through the adoption of an intercultural approach to education: an approach based on active pedagogy and on methods of analysis, exchange and enhancement typical of socio-educational animation. The intercultural approach to education is attentive to promoting the realisation of each individual and the dignity and right of everyone to express himself/herself in his/her own way. Such an approach will give women the opportunity to become actual agents of collective change and to get the strength and wellbeing necessary to count and be counted as human beings entitled to freedom and equality, and to have access to all
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Neutron multiplication error in TRU waste measurements
Veilleux, John [Los Alamos National Laboratory; Stanfield, Sean B [CCP; Wachter, Joe [CCP; Ceo, Bob [CCP
2009-01-01
Total Measurement Uncertainty (TMU) in neutron assays of transuranic waste (TRU) are comprised of several components including counting statistics, matrix and source distribution, calibration inaccuracy, background effects, and neutron multiplication error. While a minor component for low plutonium masses, neutron multiplication error is often the major contributor to the TMU for items containing more than 140 g of weapons grade plutonium. Neutron multiplication arises when neutrons from spontaneous fission and other nuclear events induce fissions in other fissile isotopes in the waste, thereby multiplying the overall coincidence neutron response in passive neutron measurements. Since passive neutron counters cannot differentiate between spontaneous and induced fission neutrons, multiplication can lead to positive bias in the measurements. Although neutron multiplication can only result in a positive bias, it has, for the purpose of mathematical simplicity, generally been treated as an error that can lead to either a positive or negative result in the TMU. While the factors that contribute to neutron multiplication include the total mass of fissile nuclides, the presence of moderating material in the matrix, the concentration and geometry of the fissile sources, and other factors; measurement uncertainty is generally determined as a function of the fissile mass in most TMU software calculations because this is the only quantity determined by the passive neutron measurement. Neutron multiplication error has a particularly pernicious consequence for TRU waste analysis because the measured Fissile Gram Equivalent (FGE) plus twice the TMU error must be less than 200 for TRU waste packaged in 55-gal drums and less than 325 for boxed waste. For this reason, large errors due to neutron multiplication can lead to increased rejections of TRU waste containers. This report will attempt to better define the error term due to neutron multiplication and arrive at values that are
Automatic cell counting with ImageJ.
Grishagin, Ivan V
2015-03-15
Cell counting is an important routine procedure. However, to date there is no comprehensive, easy to use, and inexpensive solution for routine cell counting, and this procedure usually needs to be performed manually. Here, we report a complete solution for automatic cell counting in which a conventional light microscope is equipped with a web camera to obtain images of a suspension of mammalian cells in a hemocytometer assembly. Based on the ImageJ toolbox, we devised two algorithms to automatically count these cells. This approach is approximately 10 times faster and yields more reliable and consistent results compared with manual counting.
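The ImageJ algorithms themselves are not reproduced here, but the generic pipeline behind automated counting (threshold the image, label connected components, reject objects below a minimum area) can be sketched as a Python analogue with hypothetical parameters.

import numpy as np
from scipy import ndimage

def count_cells(image, threshold=None, min_area=20):
    """Rough automated cell count: global threshold, connected-component
    labelling, and a minimum-area filter to reject debris."""
    if threshold is None:
        threshold = image.mean() + 2 * image.std()   # crude global threshold
    mask = image > threshold
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))

img = np.zeros((200, 200))
img[40:50, 40:50] = 1.0       # two synthetic "cells"
img[120:132, 80:92] = 1.0
print(count_cells(img))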
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
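The maximum entropy reading mentioned above follows a standard pattern (stated here generically, not as the paper's specific hierarchical formulation): maximizing entropy subject to linear constraints yields exponential choice probabilities whose Lagrange multipliers play the role of the model parameters,

\max_{p}\; -\sum_i p_i \ln p_i
\quad \text{s.t.} \quad \sum_i p_i = 1, \;\; \sum_i p_i x_{ik} = \bar{x}_k
\qquad \Longrightarrow \qquad
p_i = \frac{\exp\big(\sum_k \lambda_k x_{ik}\big)}{\sum_j \exp\big(\sum_k \lambda_k x_{jk}\big)},

so that estimating the multipliers \lambda_k is equivalent to fitting the logit parameters.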
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Julian, Liam
2009-01-01
In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…
Challenge and Error: Critical Events and Attention-Related Errors
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Discrete calculus methods for counting
Mariconda, Carlo
2016-01-01
This book provides an introduction to combinatorics, finite calculus, formal series, recurrences, and approximations of sums. Readers will find not only coverage of the basic elements of the subjects but also deep insights into a range of less common topics rarely considered within a single book, such as counting with occupancy constraints, a clear distinction between algebraic and analytical properties of formal power series, an introduction to discrete dynamical systems with a thorough description of Sarkovskii’s theorem, symbolic calculus, and a complete description of the Euler-Maclaurin formulas and their applications. Although several books touch on one or more of these aspects, precious few cover all of them. The authors, both pure mathematicians, have attempted to develop methods that will allow the student to formulate a given problem in a precise mathematical framework. The aim is to equip readers with a sound strategy for classifying and solving problems by pursuing a mathematically rigorous yet ...
Photon counting compressive depth mapping
Howland, Gregory A; Ware, Matthew R; Howell, John C
2013-01-01
We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 x 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 x 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second.
Count rate performance study of the Lausanne ClearPET scanner demonstrator
Rey, M. [LPHE, Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland)]. E-mail: martin.rey@epfl.ch; Jan, S. [Service Hospitalier Frederic Joliot, CEA, F-91401 Orsay (France); Vieira, J.-M. [LPHE, Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Mosset, J.-B. [LPHE, Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Krieguer, M. [IIHE, Vrije Universiteit Brussel, B-1050 Brussels (Belgium); Comtat, C. [Service Hospitalier Frederic Joliot, CEA, F-91401 Orsay (France); Morel, C. [CPPM, CNRS-IN2P3, Universite de la Mediterranee Aix-Marseille II, F-13288 Marseille (France)
2007-02-01
This paper presents the count rate measurements obtained with the Lausanne partial ring ClearPET scanner demonstrator and compares them against GATE Monte Carlo simulations. For the present detector setup, a maximum single event count rate of 1.1 Mcps is measured for a 250-750 keV energy window. This corresponds to a coincidence count rate of approximately 22 kcps. Good agreement is observed between measured and simulated data. Count rate performance, including Noise Equivalent Count (NEC) curves, is determined and extrapolated for a full ring ClearPET design using GATE Monte Carlo simulations. For a full ring design with three rings of detector modules, the NEC peaks at about 70 kcps for 20 MBq.
Expectation Maximization for Hard X-ray Count Modulation Profiles
Benvenuto, Federico; Piana, Michele; Massone, Anna Maria
2013-01-01
This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized for the analysis of count modulation profiles in solar hard X-ray imaging based on Rotating Modulation Collimators. The algorithm described in this paper solves the maximum likelihood problem iteratively, encoding a positivity constraint into the iterative optimization scheme. The result is therefore a classical Expectation Maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, ...
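For readers unfamiliar with the method, the following minimal sketch (illustrative names, not the RHESSI pipeline) shows the generic maximum-likelihood EM iteration for Poisson count data with a linear instrument response A; the multiplicative update automatically preserves positivity.

```python
import numpy as np

def ml_em(A, counts, n_iter=50):
    """Generic ML-EM iteration for Poisson data: counts ~ Poisson(A @ x), x >= 0."""
    x = np.ones(A.shape[1])           # strictly positive start enforces the positivity constraint
    sens = A.sum(axis=0)              # column sums (sensitivity of each image element)
    for _ in range(n_iter):           # in practice a stopping rule would truncate this loop
        model = A @ x
        ratio = counts / np.maximum(model, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```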
Undercooling and nodule count in thin walled ductile iron castings
Pedersen, Karl Martin; Tiedje, Niels Skat
2007-01-01
Casting experiments have been performed with eutectic and hypereutectic castings with plate thicknesses from 2 to 8 mm involving both temperature measurements during solidification and microstructural examination afterwards. The nodule count was the same for the eutectic and hypereutectic castings...... in the thin plates (≤4.3 mm) while in the 8 mm plate the nodule count was higher in the hypereutectic than in the eutectic castings. The minimum temperature before the eutectic recalescence (Tmin) was 15 to 20ºC lower for the eutectic than for the hypereutectic castings. This is due to nucleation of graphite...... nodules which begins at a lower temperature in the eutectic than in the hypereutectic castings. The recalescence ∆Trec was however also larger for the eutectic casting and in the thin plates the maximum temperature after recalescence (Tmax) was the same in the eutectic and hypereutectic plates...
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
Refractive error sensing from wavefront slopes.
Navarro, Rafael
2010-01-01
The problem of measuring the objective refractive error with an aberrometer has proven to be more elusive than expected. Here, the formalism of differential geometry is applied to develop a theoretical framework of refractive error sensing. At each point of the pupil, the local refractive error is given by the wavefront curvature, which is a 2 × 2 symmetric matrix, whose elements are directly related to sphere, cylinder, and axis. Aberrometers usually measure the local gradient of the wavefront. Then refractive error sensing consists of differentiating the gradient, instead of integrating as in wavefront sensing. A statistical approach is proposed to pass from the local to the global (clinically meaningful) refractive error, in which the best correction is assumed to be the maximum likelihood estimate. In the practical implementation, this corresponds to the mode of the joint histogram of the 3 different elements of the curvature matrix. Results obtained both in computer simulations and with real data show close agreement and consistency with the main optical image quality metrics such as the Strehl ratio.
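A minimal sketch of the local step described above, written under one common power-vector sign convention (function and variable names are illustrative, not the author's implementation): the 2 × 2 curvature matrix obtained by differentiating the measured gradient is converted to sphere, cylinder and axis.

```python
import numpy as np

def curvature_to_sph_cyl(cxx, cyy, cxy):
    """Convert a local wavefront-curvature matrix [[cxx, cxy], [cxy, cyy]] (in diopters)
    to sphere, cylinder and axis, using the power-vector (M, J0, J45) decomposition."""
    M   = 0.5 * (cxx + cyy)                 # mean (spherical-equivalent) power
    J0  = 0.5 * (cxx - cyy)                 # Jackson cross-cylinder components
    J45 = cxy
    C = -2.0 * np.hypot(J0, J45)            # negative-cylinder convention
    S = M - C / 2.0
    axis = 0.5 * np.degrees(np.arctan2(J45, J0)) % 180.0
    return S, C, axis

print(curvature_to_sph_cyl(cxx=-2.0, cyy=-1.0, cxy=0.25))
```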
Patient error: a preliminary taxonomy.
Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.
2009-01-01
PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary ca
Automatic Error Analysis Using Intervals
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
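A toy sketch of the idea (this is not the INTLAB toolbox, whose API is not reproduced here): representing each measured quantity as an interval and propagating it through a formula bounds the worst-case error without computing any derivatives.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

# Quantities x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2 propagated through f = x*y + x:
x, y = Interval(1.9, 2.1), Interval(2.8, 3.2)
f = x * y + x
print(f)   # the width of f bounds the worst-case error of the result
```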
Rieger, Martina; Martinez, Fanny; Wenke, Dorit
2011-01-01
Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
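The following toy sampler is illustrative only; it is the textbook heat-bath Glauber dynamics for the monomer-dimer model, not the paper's faster algorithm. It shows the kind of chain referred to above: an edge is switched on with probability proportional to the fugacity λ whenever both of its endpoints are free.

```python
import random

def glauber_matching(edges, n_steps, lam=1.0, seed=0):
    """Heat-bath Glauber dynamics on matchings of a graph.
    Stationary distribution: pi(M) proportional to lam**|M| (monomer-dimer model)."""
    rng = random.Random(seed)
    matching = set()
    matched = set()                              # vertices currently covered by the matching
    for _ in range(n_steps):
        e = rng.choice(edges)
        u, v = e
        in_m = e in matching
        addable = in_m or (u not in matched and v not in matched)
        if not addable:
            continue                             # edge is blocked; leave it absent
        want_present = rng.random() < lam / (1.0 + lam)
        if want_present and not in_m:
            matching.add(e); matched.update(e)
        elif not want_present and in_m:
            matching.remove(e); matched.difference_update(e)
    return matching

# Toy usage: sample a matching of a 4-cycle with a large fugacity
print(glauber_matching([(0, 1), (1, 2), (2, 3), (3, 0)], n_steps=1000, lam=5.0))
```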
Error bars in experimental biology.
Cumming, Geoff; Fidler, Fiona; Vaux, David L
2007-04-09
Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
Combining cluster number counts and galaxy clustering
Lacasa, Fabien; Rosenfeld, Rogerio
2016-08-01
The abundance of clusters and the clustering of galaxies are two of the important cosmological probes for current and future large scale surveys of galaxies, such as the Dark Energy Survey. In order to combine them one has to account for the fact that they are not independent quantities, since they probe the same density field. It is important to develop a good understanding of their correlation in order to extract parameter constraints. We present a detailed modelling of the joint covariance matrix between cluster number counts and the galaxy angular power spectrum. We employ the framework of the halo model complemented by a Halo Occupation Distribution model (HOD). We demonstrate the importance of accounting for non-Gaussianity to produce accurate covariance predictions. Indeed, we show that the non-Gaussian covariance becomes dominant at small scales, low redshifts or high cluster masses. We discuss in particular the case of the super-sample covariance (SSC), including the effects of galaxy shot-noise, halo second order bias and non-local bias. We demonstrate that the SSC obeys mathematical inequalities and positivity. Using the joint covariance matrix and a Fisher matrix methodology, we examine the prospects of combining these two probes to constrain cosmological and HOD parameters. We find that the combination indeed results in noticeably better constraints, with improvements of order 20% on cosmological parameters compared to the best single probe, and even greater improvement on HOD parameters, with reduction of error bars by a factor 1.4-4.8. This happens in particular because the cross-covariance introduces a synergy between the probes on small scales. We conclude that accounting for non-Gaussian effects is required for the joint analysis of these observables in galaxy surveys.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Video Error Correction Using Steganography
Robie David L
2002-01-01
Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
A Characterization of Prediction Errors
Meek, Christopher
2016-01-01
Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove a prediction error.
Error Analysis and Its Implication
崔蕾
2007-01-01
Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of Error Analysis as both theory and approach, but also offer implications for second language learning.
Bard, D; Chang, C; May, M; Kahn, S M; AlSayyad, Y; Ahmad, Z; Bankert, J; Connolly, A; Gibson, R R; Gilmore, K; Grace, E; Haiman, Z; Hannel, M; Huffenberger, K M; Jernigan, J G; Jones, L; Krughoff, S; Lorenz, S; Marshall, S; Meert, A; Nagarajan, S; Peng, E; Peterson, J; Rasmussen, A P; Shmakova, M; Sylvestre, N; Todd, N; Young, M
2013-01-01
The statistics of peak counts in reconstructed shear maps contain information beyond the power spectrum, and can improve cosmological constraints from measurements of the power spectrum alone if systematic errors can be controlled. We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST image simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.
Hallas, Gary; Monis, Paul
2015-01-01
The enumeration of bacteria using plate-based counts is a core technique used by food and water microbiology testing laboratories. However, manual counting of bacterial colonies is both time- and labour-intensive, can vary between operators and also requires manual entry of results into laboratory information management systems, which can be a source of data entry error. An alternative is to use automated digital colony counters, but there is a lack of peer-reviewed validation data to allow incorporation into standards. We compared the performance of digital counting technology (ProtoCOL3) against manual counting using criteria defined in internationally recognized standard methods. Digital colony counting provided a robust, standardized system suitable for adoption in a commercial testing environment. The digital technology has several advantages: improved measurement of uncertainty by using a standard and consistent counting methodology with less operator error; efficiency in labour and time (reduced cost); elimination of manual entry of data onto LIMS; and faster result reporting to customers.
Atom-counting in High Resolution Electron Microscopy:TEM or STEM - That's the question.
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2016-10-27
In this work, a recently developed quantitative approach based on the principles of detection theory is used in order to determine the possibilities and limitations of High Resolution Scanning Transmission Electron Microscopy (HR STEM) and HR TEM for atom-counting. So far, HR STEM has been shown to be an appropriate imaging mode to count the number of atoms in a projected atomic column. Recently, it has been demonstrated that HR TEM, when using negative spherical aberration imaging, is suitable for atom-counting as well. The capabilities of both imaging techniques are investigated and compared using the probability of error as a criterion. It is shown that for the same incoming electron dose, HR STEM outperforms HR TEM under common practice standards, i.e. when the decision is based on the probability function of the peak intensities in HR TEM and of the scattering cross-sections in HR STEM. If the atom-counting decision is based on the joint probability function of the image pixel values, the dependence of all image pixel intensities as a function of thickness should be known accurately. Under this assumption, the probability of error may decrease significantly for atom-counting in HR TEM and may, in theory, become lower as compared to HR STEM under the predicted optimal experimental settings. However, the commonly used standard for atom-counting in HR STEM leads to a high performance and has been shown to work in practice.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Counting pairs of faint galaxies
Woods, D; Richer, H B; Woods, David; Fahlman, Gregory G; Richer, Harvey B
1995-01-01
The number of close pairs of galaxies observed to faint magnitude limits, when compared to nearby samples, determines the interaction or merger rate as a function of redshift. The prevalence of mergers at intermediate redshifts is fundamental to understanding how galaxies evolve and the relative population of galaxy types. Mergers have been used to explain the excess of galaxies in faint blue counts above the numbers expected from no-evolution models. Using deep CFHT (I ≤ 24) imaging of a "blank" field we find a pair fraction which is consistent with the galaxies in our sample being randomly distributed with no significant excess of "physical" close pairs. This is contrary to the pair fraction of 34% ± 9% found by Burkey et al. for similar magnitude limits and using an identical approach to the pair analysis. Various reasons for this discrepancy are discussed. Colors and morphologies of our close pairs are consistent with the bulk of them being random superpositions although, as indicators of int...
Complete Blood Count and Retinal Vessel Calibers
Gerald Liew; Jie Jin Wang; Elena Rochtchina; Tien Yin Wong; Paul Mitchell
2014-01-01
OBJECTIVE: The influence of hematological indices such as complete blood count on microcirculation is poorly understood. Retinal microvasculature can be directly visualized and vessel calibers are associated with a range of ocular and systemic diseases. We examined the association of complete blood count with retinal vessel calibers. METHODS: Cross-sectional population-based Blue Mountains Eye Study, n = 3009, aged 49+ years. Complete blood count was measured from fasting blood samples taken ...
Improvement of Delayed Neutron Counting System
YUAN; Guo-jun; XIAO; Cai-jin; YANG; Wei; ZHANG; Gui-ying; JIN; Xiang-chun; WANG; Ping-sheng; NI; Bang-fa
2012-01-01
A new delayed neutron counting system, which is suited to qualitative and quantitative analysis of fissionable nuclide mixtures, will be established at the China Advanced Research Reactor (CARR). We use 3He proportional counters to count the delayed neutrons after the samples are irradiated by reactor neutrons; the samples include a U3O8 standard, uranium ore and enriched uranium. The counting efficiency and detection limit of this system were then calculated.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
Vector perturbations of galaxy number counts
Durrer, Ruth; Tansella, Vittorio
2016-07-01
We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, where the Bardeen potentials are replaced with line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
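A minimal mean-field sketch of the dynamics described above (parameters are illustrative; the paper's NGLE model and its network topologies are not reproduced): with probability p_err the transmitted word is mislearned as a brand-new word, which inflates each agent's lexicon.

```python
import random

def naming_game(n_agents=50, n_steps=20000, p_err=0.01, seed=1):
    """Mean-field naming game with a per-interaction learning-error probability p_err."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for _ in range(n_steps):
        s, h = rng.sample(range(n_agents), 2)       # speaker and hearer
        if not inventories[s]:
            inventories[s].add(next_word); next_word += 1
        word = rng.choice(tuple(inventories[s]))
        if rng.random() < p_err:                    # learning error: the word arrives corrupted
            word = next_word; next_word += 1
        if word in inventories[h]:                  # success: both agents keep only this word
            inventories[s] = {word}; inventories[h] = {word}
        else:
            inventories[h].add(word)
    return inventories

inv = naming_game()
print(len({w for inv_i in inv for w in inv_i}), "distinct words remain")
```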
An alternative calibration method for counting P-32 reactor monitors
Quirk, T.J. [Applied Nuclear Technologies, Sandia National Laboratories, MS 1143, PO Box 5800, Albuquerque, NM 87185-1143 (United States); Vehar, D.W. [Sandia National Laboratories, Albuquerque, NM 87185-1143 (United States)
2011-07-01
Radioactivation of sulfur is a common technique used to measure fast neutron fluences in test and research reactors. Elemental sulfur can be pressed into pellets and used as monitors. The 32S(n, p)32P reaction has a practical threshold of about 3 MeV and its cross section and associated uncertainties are well characterized [1]. The product 32P emits a beta particle with a maximum energy of 1710 keV [2]. This energetic beta particle allows pellets to be counted intact. ASTM Standard Test Method for Measuring Reaction Rates and Fast-Neutron Fluences by Radioactivation of Sulfur-32 (E265) [3] details a method of calibration for counting systems and subsequent analysis of results. This method requires irradiation of sulfur monitors in a fast-neutron field whose spectrum and intensity are well known. The resultant decay-corrected count rate is then correlated to the known fast neutron fluence. The Radiation Metrology Laboratory (RML) at Sandia has traditionally performed calibration irradiations of sulfur pellets using the 252Cf spontaneous fission neutron source at the National Inst. of Standards and Technology (NIST) [4] as a transfer standard. However, decay has reduced the intensity of NIST's source, thus lowering the practical upper limits of available fluence. As of May 2010, neutron emission rates have decayed to approximately 3 × 10^8 n/s. In practice, this degradation of capabilities precludes calibrations at the highest fluence levels produced at test reactors and limits the useful range of count rates that can be measured. Furthermore, the reduced availability of replacement 252Cf threatens the long-term viability of the NIST 252Cf facility for sulfur pellet calibrations. In lieu of correlating count rate to neutron fluence in a reference field, the total quantity of 32P produced in a pellet can be determined by absolute counting methods. This offers an attractive alternative to extended 252Cf exposures because
Diagnostic errors in pediatric radiology
Taylor, George A.; Voss, Stephan D. [Children' s Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)
2011-03-15
Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)
Common errors in disease mapping
Ricardo Ocaña-Riola
2010-05-01
Full Text Available Many morbidity-mortality atlases and small-area studies have been carried out over the last decade. However, the methods used to draw up such research, the interpretation of results and the conclusions published are often inaccurate. Often, the proliferation of this practice has led to inefficient decision-making, implementation of inappropriate health policies and negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average with a score between 8 and 15 points, and low with a score of 7 or below. A systematic evaluation of scientific papers, together with an enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.
Applications of non-standard maximum likelihood techniques in energy and resource economics
Moeltner, Klaus
Two important types of non-standard maximum likelihood techniques, Simulated Maximum Likelihood (SML) and Pseudo-Maximum Likelihood (PML), have only recently found consideration in the applied economic literature. The objective of this thesis is to demonstrate how these methods can be successfully employed in the analysis of energy and resource models. Chapter I focuses on SML. It constitutes the first application of this technique in the field of energy economics. The framework is as follows: Surveys on the cost of power outages to commercial and industrial customers usually capture multiple observations on the dependent variable for a given firm. The resulting pooled data set is censored and exhibits cross-sectional heterogeneity. We propose a model that addresses these issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order cumulative distribution terms. This adjustment requires the use of SML in the estimation process. Our framework allows for a more comprehensive analysis of outage costs than existing models, which rely on the assumptions of parameter constancy and cross-sectional homogeneity. Our results strongly reject both of these restrictions. The central topic of the second Chapter is the use of PML, a robust estimation technique, in count data analysis of visitor demand for a system of recreation sites. PML has been popular with researchers in this context, since it guards against many types of mis-specification errors. We demonstrate, however, that estimation results will generally be biased even if derived through PML if the recreation model is based on aggregate, or zonal data. To countervail this problem, we propose a zonal model of recreation that captures some of the underlying heterogeneity of individual visitors by incorporating distributional information on per-capita income into the aggregate demand function. This adjustment
Note: A high count rate real-time digital processing method for PGNAA data acquisition system
Liu, Yuzhe; Chen, Lian; Li, Feng; Liang, Futian; Jin, Ge
2017-07-01
The prompt gamma neutron activation analysis (PGNAA) technique is a real-time online method to analyze the composition of industrial materials. This paper presents a data acquisition system with a high count rate and real-time digital processing method for PGNAA. Limited by the decay time of the detector, the ORTEC multi-channel analyzer (MCA) can normally achieve an average count rate of 100 kcps. However, this system uses an electrical technique to increase the average count rate and reduce dead time, and guarantees good accuracy. Since the measuring time is usually limited to about 120 s, in order to accelerate the accumulation rate of the spectrum and reduce the statistical error, the average count rate is expected to reach more than 500 kcps.
Aerial measurement error with a dot planimeter: Some experimental estimates
Yuill, R. S.
1971-01-01
A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
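A small numerical illustration of the same idea (not the author's simulation; the parameter values are arbitrary): a square dot grid with a random offset is laid over a disc, the area is estimated as the number of interior dots times the cell area, and repetition gives the average error and its range.

```python
import numpy as np

def dot_planimeter(radius=1.0, spacing=0.1, n_trials=500, seed=0):
    """Estimate a disc's area by counting grid dots inside it, over many random grid offsets."""
    rng = np.random.default_rng(seed)
    xs = np.arange(-radius - spacing, radius + spacing, spacing)
    estimates = []
    for _ in range(n_trials):
        ox, oy = rng.uniform(0, spacing, size=2)          # random placement of the dot grid
        gx, gy = np.meshgrid(xs + ox, xs + oy)
        inside = gx**2 + gy**2 <= radius**2
        estimates.append(inside.sum() * spacing**2)       # dots inside x cell area
    err = np.array(estimates) - np.pi * radius**2
    return err.mean(), err.std(), err.max() - err.min()

print(dot_planimeter())   # average error, spread, and range of the measurement error
```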
Transient Error Data Analysis.
1979-05-01
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
Menlove, Howard Olsen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzlova, Daniela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-25
This informal report presents the measurement data and information to document the performance of the advanced Precision Data Technology, Inc. (PDT) sealed cell boron-10 plate neutron detector that makes use of the advanced coating materials and procedures. In 2015, PDT changed the boron coating materials and application procedures to significantly increase the efficiency of their basic corrugated plate detector performance. A prototype sealed cell unit was supplied to LANL for testing and comparison with prior detector cells. Also, LANL had reference detector slabs from the original neutron collar (UNCL) and the new Antech UNCL with the removable 3He tubes. The comparison data is presented in this report.
Errors Due to Counting Statistics in the Triaxial Strain (Stress) Tensor Determined by Diffraction.
2014-09-26
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case where specific intensities are observed and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak field regime and using a maximum likelihood approach. The errors are recovered by means of the Hermitian matrix. The biases of the estimators are analysed in depth.
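As a concrete illustration, the sketch below gives the textbook weak-field least-squares (Gaussian maximum-likelihood) estimator of the line-of-sight field and its error. It is written from the standard weak-field relation rather than copied from the paper, and the function and variable names are illustrative.

```python
import numpy as np

C = 4.6686e-13   # weak-field constant for wavelengths in Angstrom and fields in gauss

def blos_weak_field(wave, stokes_i, stokes_v, lambda0, geff, sigma_noise):
    """Least-squares estimate of the line-of-sight field under the weak-field relation
    V = -C * lambda0**2 * geff * B_los * dI/dlambda, with Gaussian noise of rms sigma_noise."""
    didl = np.gradient(stokes_i, wave)        # numerical derivative of Stokes I
    k = C * lambda0**2 * geff * didl
    blos = -np.sum(k * stokes_v) / np.sum(k**2)
    sigma_b = sigma_noise / np.sqrt(np.sum(k**2))
    return blos, sigma_b
```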
Trilisky, Igor; Ward, Emily; Dachman, Abraham H
2015-10-01
CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.
Bias Expansion of Spatial Statistics and Approximation of Differenced Lattice Point Counts
Daniel J Nordman; Soumendra N Lahiri
2011-05-01
Investigations of spatial statistics, computed from lattice data in the plane, can lead to a special lattice point counting problem. The statistical goal is to expand the asymptotic expectation or large-sample bias of certain spatial covariance estimators, where this bias typically depends on the shape of a spatial sampling region. In particular, such bias expansions often require approximating a difference between two lattice point counts, where the counts correspond to a set of increasing domain (i.e., the sampling region) and an intersection of this set with a vector translate of itself. Non-trivially, the approximation error needs to be of smaller order than the spatial region’s perimeter length. For all convex regions in 2-dimensional Euclidean space and certain unions of convex sets, we show that a difference in areas can approximate a difference in lattice point counts to this required accuracy, even though area can poorly measure the lattice point count of any single set involved in the difference. When investigating large-sample properties of spatial estimators, this approximation result facilitates direct calculation of limiting bias, because, unlike counts, differences in areas are often tractable to compute even with non-rectangular regions. We illustrate the counting approximations with two statistical examples.
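A quick numerical check of the statement for a disc (illustrative code, not from the paper): the difference between the lattice-point count of the region and of its intersection with a translate of itself is compared with the corresponding difference of areas, computed from the exact lens area of two overlapping discs.

```python
import numpy as np

def disc_counts(radius, h):
    """Lattice-point count difference count(R) - count(R intersect (R + h)) for a disc R,
    together with the corresponding area difference."""
    r = int(np.ceil(radius + max(abs(h[0]), abs(h[1])))) + 1
    xs = np.arange(-r, r + 1)
    gx, gy = np.meshgrid(xs, xs)
    in_R = gx**2 + gy**2 <= radius**2
    in_Rh = (gx - h[0])**2 + (gy - h[1])**2 <= radius**2
    count_diff = int(in_R.sum() - (in_R & in_Rh).sum())
    # exact lens area of two equal discs whose centers are a distance d apart
    d = float(np.hypot(*h))
    if d >= 2 * radius:
        lens = 0.0
    else:
        lens = 2 * radius**2 * np.arccos(d / (2 * radius)) - 0.5 * d * np.sqrt(4 * radius**2 - d**2)
    area_diff = np.pi * radius**2 - lens
    return count_diff, area_diff

print(disc_counts(50.0, (3, 4)))   # the two differences agree up to lower-order boundary terms
```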
Error correction method and apparatus for electronic timepieces
Davidson, J. R.; Heyman, J. S. (Inventor)
1983-01-01
A method and apparatus for correcting errors in an electronic digital timepiece that includes an oscillator which has a 2^n frequency output, an n-stage frequency divider for reducing the oscillator output frequency to a time keeping frequency, and means for displaying the count of the time keeping frequency. In first and second embodiments of the invention the timepiece is synchronized with a time standard at the beginning of the period of time T. In the first embodiment of the invention the timepiece user observes E (the difference between the time standard and the timepiece time at the end of the period T) and then operates a switch to correct the time of the timepiece and to obtain a count for E. In the second embodiment of the invention, the user operates a switch at the beginning of T and at the end of T and a count for E is obtained electronically.
Counting losses due to saturation effects of scintillation counters at high count rates
Hashimoto, K
1999-01-01
The counting statistics of a scintillation counter, with a preamplifier saturated by an overloading input, are investigated. First, the formulae for the variance and the mean number of counts, accumulated within a given gating time, are derived by considering counting-loss effects originating from the saturation and a finite resolving time of the electronic circuit. Numerical examples based on the formulae indicate that the saturation makes a positive contribution to the variance-to-mean ratio and that the contribution increases with count rate. Next the ratios are measured under high count rates when the preamplifier saturation can be observed. By fitting the present formula to the measured data, the counting-loss parameters can be evaluated. Corrections based on the parameters are made for various count rates measured in a nuclear reactor. As a result of the corrections, the linearity between count rate and reactor power can be restored.
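For orientation, the snippet below shows the textbook nonparalyzable dead-time correction, n = m/(1 - m*tau); it is not the authors' saturation model, but it illustrates how counting losses grow with count rate and why the linearity between measured count rate and reactor power degrades before correction.

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Textbook nonparalyzable dead-time correction: n = m / (1 - m * tau)."""
    loss_fraction = measured_rate * dead_time
    if loss_fraction >= 1.0:
        raise ValueError("measured rate exceeds the saturation limit 1/tau")
    return measured_rate / (1.0 - loss_fraction)

# e.g. 800 kcps measured with a 200 ns resolving time -> roughly 952 kcps true rate
print(true_rate_nonparalyzable(8.0e5, 200e-9))
```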
Novel Photon-Counting Detectors for Free-Space Communication
Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Lu, Wei; Merritt, Scott; Beck, Jeff
2016-01-01
We present performance data for novel photon counting detectors for free space optical communication. NASA GSFC is testing the performance of three novel photon counting detectors: 1) a 2x8 mercury cadmium telluride avalanche array made by DRS Inc., 2) a commercial 2880 silicon avalanche photodiode array and 3) a prototype resonant cavity silicon avalanche photodiode array. We will present and compare dark count, photon detection efficiency, wavelength response and communication performance data for these detectors. We discuss system wavelength trades and architectures for optimizing overall communication link sensitivity, data rate and cost performance. For the HgCdTe APD array, photon detection efficiencies of greater than 50% were routinely demonstrated across 5 arrays, with one array reaching a maximum PDE of 70%. High resolution pixel-surface spot scans were performed and the junction diameters of the diodes were measured. The junction diameter was decreased from 31 μm to 25 μm, resulting in a 2x increase in e-APD gain from 470 on the 2010 array to 1100 on the array delivered to NASA GSFC. Mean single photon SNRs of over 12 were demonstrated at excess noise factors of 1.2-1.3. The commercial silicon APD array has a fast output with rise times of 300 ps and pulse widths of 600 ps. Received and filtered signals from the entire array are multiplexed onto this single fast output. The prototype resonant cavity silicon APD array is being developed for use at 1 micron wavelength.
Genetic Regulatory Networks that count to 3
Lehmann, Martin; Sneppen, K.
2013-01-01
that contain repressive links, which we model by Michaelis-Menten terms. Interestingly, we find that counting to 3 does not require a hierarchy in Hill coefficients, in contrast to counting to 2, which is known from lambda phage. Furthermore, we find two main circuit architectures: one design also found...
Correcting Finger Counting to Snellen Acuity.
Karanjia, Rustum; Hwang, Tiffany Jean; Chen, Alexander Francis; Pouw, Andrew; Tian, Jack J; Chu, Edward R; Wang, Michelle Y; Tran, Jeffrey Show; Sadun, Alfredo A
2016-10-01
In this paper, the authors describe an online tool with which to convert and thus quantify count finger measurements of visual acuity into Snellen equivalents. It is hoped that this tool allows for the re-interpretation of retrospectively collected data that provide visual acuity in terms of qualitative count finger measurements.
Is It Counting, or Is It Adding?
Eisenhardt, Sara; Fisher, Molly H.; Thomas, Jonathan; Schack, Edna O.; Tassell, Janet; Yoder, Margaret
2014-01-01
The Common Core State Standards for Mathematics (CCSSI 2010) expect second grade students to "fluently add and subtract within 20 using mental strategies" (2.OA.B.2). Most children begin with number word sequences and counting approximations and then develop greater skill with counting. But do all teachers really understand how this…
It Is Time to Count Learning Communities
Henscheid, Jean M.
2015-01-01
As the modern learning community movement turns 30, it is time to determine just how many, and what type, of these programs exist at America's colleges and universities. This article first offers a rationale for counting learning communities followed by a description of how disparate counts and unclear definitions hamper efforts to embed these…
Leiva Lopez, Josue Nahun
In general, the nursery industry lacks an automated inventory control system. Object-based image analysis (OBIA) software and aerial images could be used to count plants in nurseries. The objectives of this research were: 1) to evaluate the effect of an unmanned aerial vehicle (UAV) flight altitude and plant canopy separation of container-grown plants on count accuracy using aerial images and 2) to evaluate the effect of plant canopy shape, presence of flowers, and plant status (living and dead) on counting accuracy of container-grown plants using remote sensing images. Images were analyzed using Feature Analyst (FA) and an algorithm trained using MATLAB. Total count error, false positives and unidentified plants were recorded from output images using FA; only total count error was reported for the MATLAB algorithm. For objective 1, images were taken at 6, 12 and 22 m above the ground using a UAV. Plants were placed on black fabric and gravel, and spaced as follows: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. In general, when both methods were considered, total count error was smaller [ranging from -5 (undercount) to 4 (over count)] when plants were fully separated with the exception of images taken at 22 m. FA showed a smaller total count error (-2) than MATLAB (-5) when plants were placed on black fabric than those placed on gravel. For objective 2, the plan was to continue using the UAV, however, due to the unexpected disruption of the GPS-based navigation by heightened solar flare activity in 2013, a boom lift that could provide images on a more reliable basis was used. When images obtained using a boom lift were analyzed using FA there was no difference between variables measured when an algorithm trained with an image displaying regular or irregular plant canopy shape was applied to images displaying both plant canopy shapes even though the canopy shape of 'Sea Green' juniper is less compact than 'Plumosa Compacta
Error Analysis in Mathematics Education.
Rittner, Max
1982-01-01
The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)
Payment Error Rate Measurement (PERM)
U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...
PV Maximum Power-Point Tracking by Using Artificial Neural Network
Farzad Sedaghati
2012-01-01
Full Text Available In this paper, the use of an artificial neural network (ANN) for tracking the maximum power point is discussed. The error back-propagation method is used to train the neural network. The neural network has the advantage of fast and precise tracking of the maximum power point. In this method, the neural network is used to specify the reference voltage of the maximum power point under different atmospheric conditions. By properly controlling the dc-dc boost converter, tracking of the maximum power point is feasible. To verify the theoretical analysis, simulation results are obtained using MATLAB/SIMULINK.
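As a concrete illustration of the idea (a network trained by error back-propagation to map atmospheric conditions to the maximum-power-point reference voltage), here is a minimal sketch of a one-hidden-layer network trained on synthetic (irradiance, temperature) to Vref pairs. The synthetic target function, network size, and learning rate are all hypothetical; the paper's own network and MATLAB/SIMULINK setup are not reproduced.

```python
# Minimal sketch: one-hidden-layer network trained by back-propagation to map
# (irradiance, temperature) to a maximum-power-point reference voltage.
# The synthetic target function and all sizes/learning rates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: irradiance G [W/m^2], temperature T [degC].
G = rng.uniform(200, 1000, size=(500, 1))
T = rng.uniform(10, 60, size=(500, 1))
X = np.hstack([G / 1000.0, T / 60.0])                               # normalized inputs
V_ref = 17.0 + 0.8 * np.log(G / 1000.0 + 0.1) - 0.07 * (T - 25.0)   # toy target
y = V_ref / 20.0                                                    # normalized output

# Network: 2 -> 8 -> 1 with tanh hidden units, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y                           # back-propagate squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def predict_vref(g, t):
    x = np.array([[g / 1000.0, t / 60.0]])
    return float((np.tanh(x @ W1 + b1) @ W2 + b2) * 20.0)

print("Vref at 800 W/m^2, 25 degC:", round(predict_vref(800, 25), 2), "V")
```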
Bayesian and maximum likelihood estimation of genetic maps
York, Thomas L.; Durrett, Richard T.; Tanksley, Steven;
2005-01-01
There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods...... that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...
Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion
Zongze Wu
2015-10-01
Full Text Available The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply the MCC criterion to develop a robust Hammerstein adaptive filter. Compared with the traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance especially in the presence of impulsive non-Gaussian (e.g., α-stable) noises. Additionally, some theoretical results concerning the convergence behavior are also obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
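To make the contrast with the MSE criterion concrete, the sketch below shows the weight update of a simple linear adaptive filter under the maximum correntropy criterion: the instantaneous error is weighted by a Gaussian kernel exp(-e²/2σ²), so large impulsive errors contribute little to the update. This illustrates the MCC idea for a plain FIR filter, not the Hammerstein structure of the paper; the plant coefficients, step size, kernel width, and noise model are hypothetical.

```python
# Minimal sketch: LMS-style adaptive filter under the maximum correntropy
# criterion (MCC). The Gaussian kernel down-weights impulsive outliers, unlike
# the plain MSE/LMS update. Plant coefficients, step size, and kernel width
# sigma are hypothetical; this is an FIR example, not a Hammerstein model.
import numpy as np

rng = np.random.default_rng(1)
n, order = 5000, 4
w_true = np.array([0.6, -0.3, 0.2, 0.1])         # unknown plant to identify

x = rng.normal(size=n)
noise = rng.normal(scale=0.05, size=n)
impulses = rng.random(n) < 0.02                   # 2% large outliers
noise[impulses] += rng.normal(scale=5.0, size=impulses.sum())

w_mcc = np.zeros(order)
mu, sigma = 0.05, 1.0                             # step size and kernel width
for k in range(order, n):
    u = x[k - order:k][::-1]                      # regressor (most recent first)
    d = w_true @ u + noise[k]                     # noisy desired signal
    e = d - w_mcc @ u
    kernel = np.exp(-e ** 2 / (2 * sigma ** 2))   # correntropy weighting
    w_mcc += mu * kernel * e * u                  # MCC gradient-ascent update

print("estimated weights:", np.round(w_mcc, 3))
print("true weights:     ", w_true)
```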
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Full Text Available Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
Maximum Power Point Tracking Based on Sliding Mode Control
Nimrod Vázquez
2015-01-01
Full Text Available Solar panels, which have become a good choice, are used to generate and supply electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering just the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper, the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results show a good dynamic response, in contrast to traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to assure a low steady-state error.
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
We develop statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop ..., and probabilistic forecasts result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1–2 m/s. We show the results of their predictive accuracy for different lead times and different training methodologies.
Error bounds for set inclusions
ZHENG; Xiyin(郑喜印)
2003-01-01
A variant of Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and in particular a positive answer to Li and Singer's conjecture is given under weaker assumption than the assumption required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.
Uncertainty quantification and error analysis
Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Feature Referenced Error Correction Apparatus.
A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)
PRESAGE: Protecting Structured Address Generation against Soft Errors
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
2016-12-28
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Periodic sequences with stable $k$-error linear complexity
Zhou, Jianqin
2011-01-01
The linear complexity of a sequence has been used as an important measure of keystream strength, hence designing a sequence which possesses high linear complexity and $k$-error linear complexity is a hot topic in cryptography and communication. Niederreiter first noticed many periodic sequences with high $k$-error linear complexity over GF(q). In this paper, the concept of stable $k$-error linear complexity is presented to study sequences with high $k$-error linear complexity. By studying linear complexity of binary sequences with period $2^n$, the method using cube theory to construct sequences with maximum stable $k$-error linear complexity is presented. It is proved that a binary sequence with period $2^n$ can be decomposed into some disjoint cubes. The cube theory is a new tool to study $k$-error linear complexity. Finally, it is proved that the maximum $k$-error linear complexity is $2^n-(2^l-1)$ over all $2^n$-periodic binary sequences, where $2^{l-1}\\le k<2^{l}$.
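For readers who want to experiment with the quantities discussed above, the sketch below computes the ordinary linear complexity of a binary periodic sequence with the standard Berlekamp-Massey algorithm and obtains a brute-force $k$-error linear complexity for small periods by trying all error patterns of weight at most $k$. The cube-theory construction of the paper is not implemented; this is only the textbook definition made executable, and the brute-force part is exponential in the period.

```python
# Minimal sketch: linear complexity via Berlekamp-Massey over GF(2), plus a
# brute-force k-error linear complexity for small periods. This is not the
# cube-theory construction of the paper, just the definitions made executable.
from itertools import combinations

def linear_complexity(s):
    """Berlekamp-Massey over GF(2); returns the linear complexity of s."""
    n = len(s)
    c = [0] * n; b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                              # discrepancy
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 1:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def periodic_linear_complexity(period_bits):
    """Linear complexity of the periodic extension (BM on two periods)."""
    return linear_complexity(list(period_bits) * 2)

def k_error_linear_complexity(period_bits, k):
    """Brute force: minimum complexity over all changes of at most k bits per period."""
    n = len(period_bits)
    best = periodic_linear_complexity(period_bits)
    for w in range(1, k + 1):
        for pos in combinations(range(n), w):
            t = list(period_bits)
            for p in pos:
                t[p] ^= 1
            best = min(best, periodic_linear_complexity(t))
    return best

seq = [1, 0, 1, 1, 0, 1, 0, 0]    # one period of a 2^3-periodic binary sequence
print("L(s)   =", periodic_linear_complexity(seq))
print("L_2(s) =", k_error_linear_complexity(seq, 2))
```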
Firewall Configuration Errors Revisited
Wool, Avishai
2009-01-01
The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general that survey indicated that corporate firewalls were often enforcing poorly written rule-sets, containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).
1984-01-01
The atmospheric backscatter coefficient, beta, measured with an airborne CO2 Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.
Catalytic quantum error correction
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-01-01
We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum of the power equation using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
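A small numerical version of the same exercise: with a single-diode model for the panel current, the power P(V) = V·I(V) is maximized by finding where dP/dV = 0. The diode parameters below are hypothetical round numbers, not data from the article.

```python
# Minimal sketch: locate the maximum power point of a single-diode PV model by
# solving dP/dV = 0. Parameter values (Iph, I0, Vt) are hypothetical.
import numpy as np
from scipy.optimize import brentq

Iph = 5.0      # photo-generated current [A]
I0 = 1e-9      # diode saturation current [A]
Vt = 1.2       # thermal voltage times ideality factor and cell count [V]

def current(v):
    return Iph - I0 * (np.exp(v / Vt) - 1.0)

def power(v):
    return v * current(v)

def dP_dV(v, h=1e-6):                     # numerical derivative of P(V)
    return (power(v + h) - power(v - h)) / (2 * h)

v_oc = Vt * np.log(Iph / I0 + 1.0)        # open-circuit voltage, where P = 0
v_mp = brentq(dP_dV, 1e-3, v_oc - 1e-3)   # root of dP/dV inside (0, Voc)
print(f"V_mp = {v_mp:.2f} V, I_mp = {current(v_mp):.2f} A, "
      f"P_max = {power(v_mp):.2f} W")
```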
Nuclear counting filter based on a centered Skellam test and a double exponential smoothing
Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan; Rohee, Emmanuel; Normand Stephane [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette, (France)
2015-07-01
Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST) giving a local maximum likelihood estimation of the signal based on a Poisson distribution assumption. This nonlinear approach allows the counting signal to be smoothed while maintaining a fast response when abrupt changes in activity occur. The filter has been improved by the implementation of Brown's double exponential smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement compared to all tested smoothing filters. (authors)
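The BES stage mentioned above is standard Brown's double exponential smoothing; a bare-bones version applied to a simulated Poisson count stream with an abrupt activity change is sketched below. The smoothing constant and simulated rates are hypothetical, and the centered Skellam-test switching logic of the actual CST-BES filter is not reproduced.

```python
# Minimal sketch: Brown's double exponential smoothing (BES) applied to a
# simulated Poisson count stream with an abrupt rate change. The smoothing
# constant alpha and the simulated count rates are hypothetical; the centered
# Skellam test used by the actual CST-BES filter is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(20, 300), rng.poisson(80, 300)])

alpha = 0.1
s1 = s2 = float(counts[0])       # first and second smoothing stages
estimates = []
for y in counts:
    s1 = alpha * y + (1 - alpha) * s1
    s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2                          # Brown's level estimate
    trend = alpha / (1 - alpha) * (s1 - s2)      # Brown's trend estimate
    estimates.append(level + trend)              # one-step-ahead estimate

print("estimate before step change:", round(estimates[299], 1))
print("estimate after settling:    ", round(estimates[-1], 1))
```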
Haitao Zhao; Yuning Dong; Hui Zhang; Nanjie Liu; Hongbo Zhu
2013-01-01
This paper proposes an environment-aware best-retransmission count selected optimization control scheme over IEEE 802.11 multi-hop wireless networks. The proposed scheme predicts the wireless resources by using statistical channel state and provides maximum retransmission count optimization based on the wireless channel environment state to improve the packet delivery success ratio. The media access control (MAC) layer selects the best retransmission count by perceiving the types of packet loss in the wireless link and using the wireless channel characteristics and environment information, and adjusts the packet forwarding adaptively, aiming at improving the packet retransmission probability. Simulation results show that the best-retransmission count selected scheme achieves a higher packet successful delivery percentage and a lower packet collision probability than the corresponding traditional MAC transmission control protocols.
Digital Counts of Maize Plants by Unmanned Aerial Vehicles (UAVs
Friederike Gnädinger
2017-05-01
Full Text Available Precision phenotyping, especially the use of image analysis, allows researchers to gain information on plant properties and plant health. Aerial image detection with unmanned aerial vehicles (UAVs) provides new opportunities in precision farming and precision phenotyping. Precision farming has created a critical need for spatial data on plant density. The plant number reflects not only the final field emergence but also allows a more precise assessment of the final yield parameters. The aim of this work is to advance UAV use and image analysis as a possible high-throughput phenotyping technique. In this study, four different maize cultivars were planted in plots with different seeding systems (in rows and equidistantly spaced) and different nitrogen fertilization levels (applied at 50, 150 and 250 kg N/ha). The experimental field, encompassing 96 plots, was overflown at a 50-m height with an octocopter equipped with a 10-megapixel camera taking a picture every 5 s. Images were recorded between BBCH 13–15 (a scale identifying the phenological development stage of a plant; here the 3- to 5-leaf stage), when the color of young leaves differs from that of older leaves. Close correlations up to R² = 0.89 were found between in situ and image-based plant counts when a decorrelation stretch contrast enhancement procedure, which enhanced color differences in the images, was applied. On average, the error between visually and digitally counted plants was ≤5%. Ground cover, as determined by analyzing green pixels, ranged between 76% and 83% at these stages. However, the correlation between ground cover and digitally counted plants was very low. The presence of weeds and blurry effects on the images represent possible sources of error in counting plants. In conclusion, the final field emergence of maize can rapidly be assessed and allows a more precise assessment of the final yield parameters. The use of UAVs and image processing has the potential to
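The "green pixel" ground-cover figure quoted above can be reproduced in spirit with a simple excess-green index: classify a pixel as vegetation when 2G - R - B exceeds a threshold and report the vegetated fraction. A minimal sketch follows; the file name and threshold are hypothetical, and this is not the decorrelation-stretch procedure used for the plant counts in the study.

```python
# Minimal sketch: ground cover as the fraction of "green" pixels, using the
# excess-green index 2G - R - B on a chromaticity-normalized RGB image. File
# name and threshold are hypothetical; this is not the decorrelation-stretch
# counting procedure used in the study.
import numpy as np
import imageio.v3 as iio

img = iio.imread("maize_plot.png").astype(float)[..., :3]
rgb = img / np.clip(img.sum(axis=-1, keepdims=True), 1e-6, None)   # chromaticity
exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]                   # excess green
ground_cover = float((exg > 0.05).mean())                           # vegetated fraction
print(f"ground cover: {100 * ground_cover:.1f}%")
```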
Ingham, S C; Hu, Y; Ané, C
2011-08-01
The objective of this study was to evaluate possible claims by advocates of small-scale dairy farming that milk from smaller Wisconsin farms is of higher quality than milk from larger Wisconsin farms. Reported bulk tank standard plate count (SPC) and somatic cell count (SCC) test results for Wisconsin dairy farms were obtained for February to December, 2008. Farms were sorted into 3 size categories using available size-tracking criteria: small (≤118 cows; 12,866 farms), large (119-713 cattle; 1,565 farms), and confined animal feeding operations (≥714 cattle; 160 farms). Group means were calculated (group=farm size category) for the farms' minimum, median, mean, 90th percentile, and maximum SPC and SCC. Statistical analysis showed that group means for median, mean, 90th percentile, and maximum SPC and SCC were almost always significantly higher for the small farm category than for the large farm and confined animal feeding operations farm categories. With SPC and SCC as quality criteria and the 3 farm size categories of ≤118, 119 to 713, and ≥714 cattle, the claim of Wisconsin smaller farms producing higher quality milk than Wisconsin larger farms cannot be supported.
Experimental repetitive quantum error correction.
Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer
2011-05-27
The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Controlling errors in unidosis carts
Inmaculada Díaz Fernández
2010-01-01
Full Text Available Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts that had previously been revised. In carts not revised, 70.83% of the errors are mainly caused when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: The results show the need to revise unidosis carts and to introduce a computerized prescription system to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error diminishes to 0.3%.
Prediction of discretization error using the error transport equation
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren
2016-11-01
The performance of digital image correlation (DIC) is significantly influenced by the quality of the speckle patterns. Thus, it is crucial to present a valid and practical method to assess the quality of speckle patterns. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, it is proposed to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squares of the systematic error and the random error. Two performance evaluation parameters, respectively the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and the correctness of this algorithm is verified by numerical experiments for both one-dimensional signals and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid due to the consideration of both measurement accuracy and precision.
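The total-error measure described here (RMSE as the square root of the squared systematic error plus the squared random error) and the two summary parameters (maximum and quadratic mean over the field) can be written down directly; a small sketch on simulated per-point displacement measurements follows. The simulated data are placeholders, not DIC results.

```python
# Minimal sketch: per-point RMSE combining systematic and random error,
# summarized by its maximum and quadratic mean. The simulated displacement
# "measurements" are placeholders, not actual DIC output.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_repeats = 50, 200
true_disp = 0.3                                     # imposed displacement [px]
bias = rng.normal(0.0, 0.01, n_points)              # per-point systematic error
meas = true_disp + bias[:, None] + rng.normal(0.0, 0.02, (n_points, n_repeats))

systematic = meas.mean(axis=1) - true_disp          # per-point bias estimate
random_err = meas.std(axis=1, ddof=1)               # per-point random error
rmse = np.sqrt(systematic ** 2 + random_err ** 2)   # total error per point

print("max RMSE:           ", rmse.max())
print("quadratic mean RMSE:", np.sqrt((rmse ** 2).mean()))
```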
Prioritising interventions against medication errors
Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard
2011-01-01
Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi process in 2008. The Delphi panel consisted of 25 interdisciplinary...
Sex-Specific Equations to Estimate Maximum Oxygen Uptake in Cycle Ergometry
Christina G. de Souza e Silva; Araújo,Claudio Gil S.
2015-01-01
Abstract Background: Aerobic fitness, assessed by measuring VO2max in maximum cardiopulmonary exercise testing (CPX) or by estimating VO2max through the use of equations in exercise testing, is a predictor of mortality. However, the error resulting from this estimate in a given individual can be high, affecting clinical decisions. Objective: To determine the error of estimate of VO2max in cycle ergometry in a population attending clinical exercise testing laboratories, and to propose sex-spec...
Noun Countability; Count Nouns and Non-count Nouns, What are the Syntactic Differences Between them?
Azhar A. Alkazwini
2016-11-01
Full Text Available Words that function as the subjects of verbs, objects of verbs or prepositions, and which can have a plural form and possessive ending are known as nouns. They are described as referring to persons, places, things, states, or qualities and might also be used as attributive modifiers. In this paper, classes and subclasses of nouns shall be presented; then, noun countability, branching into count and non-count nouns, shall be discussed. A number of examples illustrating differences between count and non-count nouns shall be presented, including determiner-head co-occurrence restrictions of number and subject-verb agreement, in addition to some exceptions to this agreement rule. Also, the lexically inherent number in nouns and how inherently plural nouns are classified in terms of (±count) are illustrated. This research will discuss the partitive construction of count and non-count nouns and nouns as attributive modifiers and, finally, conclude with the fact that there are syntactic differences between count and non-count nouns in the English language. Keywords: English Language, Nouns, Count, Non-count, Syntactic Differences, Proper Nouns
Count response model for the CMB spots
Giovannini, Massimo
2010-01-01
The statistics of the curvature quanta generated during a stage of inflationary expansion is used to derive a count response model for the large-scale phonons determining, in the concordance lore, the warmer and the cooler spots of the large-scale temperature inhomogeneities. The multiplicity distributions for the counting statistics are shown to be generically overdispersed in comparison with conventional Poissonian regressions. The generalized count response model deduced hereunder accommodates an excess of correlations in the regime of high multiplicities and prompts dedicated analyses with forthcoming data collected by instruments of high angular resolution and high sensitivity to temperature variations per pixel.
The theory and practice of scintillation counting
Birks, John Bettely
1964-01-01
The Theory and Practice of Scintillation Counting is a comprehensive account of the theory and practice of scintillation counting. This text covers the study of the scintillation process, which is concerned with the interactions of radiation and matter; the design of the scintillation counter; and the wide range of applications of scintillation counters in pure and applied science. The book is easy to read despite the complex nature of the subject it attempts to discuss. It is organized such that the first five chapters illustrate the fundamental concepts of scintillation counting. Chapters 6
McGregor, Grant Duncan
2008-12-16
In this thesis we examine the method of counting $B\bar{B}$ events produced in the BABAR experiment. The original method was proposed in 2000, but improvements to track reconstruction and our understanding of the detector since that date make it appropriate to revisit the B Counting method. We propose a new set of cuts designed to minimize the sensitivity to time-varying backgrounds. We find the new method counts $B\bar{B}$ events with an associated systematic uncertainty of ±0.6%.
Photon counting for quantum key distribution with Peltier cooled InGaAs/InP APD's
Stucki, Damien; Ribordy, Grégoire; Stefanov, André; Zbinden, Hugo; Rarity, John G.; Wall, Tom
2001-01-01
The performance of three types of InGaAs/InP avalanche photodiodes is investigated for photon counting at 1550 nm in the temperature range of thermoelectric cooling. The best one yields a dark count probability of $2.8\cdot 10^{-5}$ per gate (2.4 ns) at a detection efficiency of 10% and a temperature of -60 °C. The afterpulse probability and the timing jitter are also studied. The results obtained are compared with those of other papers and applied to the simulation of a quantum key distribution system. An error rate of 10% would be obtained after 54 kilometers.
Suppression of dark-count effects in practical quantum key-distribution
Khalique, Aeysha; Nikolopoulos, Georgios M.; Alber, Gernot
2006-01-01
The influence of imperfections on achievable secret-key generation rates of quantum key distribution protocols is investigated. As examples of relevant imperfections, we consider tagging of Alice's qubits and dark counts at Bob's detectors. It is demonstrated that error correction and privacy amplification based on a combination of a two-way classical communication protocol and asymmetric Calderbank-Shor-Steane codes may significantly suppress the disastrous influence of dark counts. As a result, the distances are increased considerably over which a secret key can be distributed in optical fibres reliably. Results are presented for the four-state, the six-state, and the decoy-state protocols.
Garment Counting in a Textile Warehouse by Means of a Laser Imaging System
Alejandro Santos Martínez-Sala
2013-04-01
Full Text Available Textile logistic warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost, small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%.
Mini-corpus Based Analysis of Errors in Higher Vocational College Students ’Writing
查静
2014-01-01
Errors are of significance to language learners in that they are an unavoidable and necessary part of learning. We collect 120 HVC students' in-class compositions. Writing errors are identified, marked and annotated in line with the error tagging system used by Gui in CLEC. A mini-corpus is created and tokens are counted and analyzed with SPSS. A factor analysis, together with follow-up interviews, is conducted to determine whether common factors can account for certain types of errors.
Real-time detection and elimination of nonorthogonality error in interference fringe processing.
Hu, Haijiang; Zhang, Fengdeng
2011-05-20
In the measurement system of interference fringes, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system. The detection and elimination of this error has been an important goal. A novel method that uses only zero-crossing detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. This method can be simply realized by means of a digital logic device, because it does not invoke trigonometric functions or inverse trigonometric functions. It can be widely used in the bidirectional subdivision systems of Moiré fringes and other optical instruments.
Improved Error Thresholds for Measurement-Free Error Correction
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of $10^{-3}$ to $10^{-4}$, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
Cylindricity Error Measuring and Evaluating for Engine Cylinder Bore in Manufacturing Procedure
Qiang Chen
2016-01-01
Full Text Available An on-line measuring device for cylindricity error is designed based on the two-point method error separation technique (EST), which can separate spindle rotation error from measuring error. According to the principle of the measuring device, the mathematical model of the minimum zone method for cylindricity error evaluation is established. The number of optimized parameters of the objective function decreases from six to four by assuming that c is equal to zero and h is equal to one. Initial values of the optimized parameters are obtained from the least squares method and final values are acquired by the genetic algorithm. The ideal axis of the cylinder is fitted in MATLAB. Compared to the error results of the least squares method, the minimum circumscribed cylinder method, and the maximum inscribed cylinder method, the error result of the minimum zone method conforms to the theory of error evaluation. The results indicate that the method can meet the requirements of engine cylinder bore cylindricity error measuring and evaluating.
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle as its variation reflects the temporal evolution of the dynamic process of solar magnetic activities from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum ($R_{max}$) and the rising rate ($\beta_a$) at $\Delta m$ months after the solar minimum ($R_{min}$) is studied and shown to increase as the cycle progresses, with an inflection point ($r = 0.83$) at about $\Delta m = 20$ months. The prediction error of $R_{max}$ based on $\beta_a$ is found to be within estimation at the 90% level of confidence, and the relative prediction error will be less than 20% when $\Delta m \geq 20$. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of $R_{max} = 84 \pm 33$ at the 90% level of confidence.
Error Immune Logic for Low-Power Probabilistic Computing
Bo Marr
2010-01-01
design for the maximum amount of energy savings per a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.
Application of Joint Error Maximal Mutual Compensation to hexapod robots
Veryha, Yauheni; Petersen, Henrik Gordon
2008-01-01
A good practice to ensure high-positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC for positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...
PREVENTABLE ERRORS: NEVER EVENTS
Narra Gopal
2014-07-01
Full Text Available An operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable. These are avoidable and preventable events. The people affected by the consequences of surgical mistakes range from temporary injury in 60%, permanent injury in 33% and death in 7%. The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer from preventable surgical injuries every year, a million of them even dying during or immediately after the surgery. The UN body quantified the number of surgeries taking place every year globally at 234 million. It said surgeries had become common, with one in every 25 people undergoing one at any given time. 50% of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident. This incident rate is not acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and have to build a reliable system of working. With this system, even though complete prevention may not be possible, we can reduce the error percentage [2]. To change the present concept towards the patient, we first have to replace the word patient with medical customer. Then our outlook also changes, and we will be more careful towards our customers.
Comparison of analytical error and sampling error for contaminated soil.
Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders
2006-11-16
Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.
Low white blood cell count and cancer
Mourning Dove Call-count Survey
US Fish and Wildlife Service, Department of the Interior — The Mourning Dove (Zenaida macroura) Call-Count Survey was developed to provide an index to population size and to detect annual changes in mourning dove breeding...
Furbearer track count index testing and development
US Fish and Wildlife Service, Department of the Interior — Indices of abundance can be useful in monitoring furbearer populations where actual counts of individual animals are difficult. I sampled marten and snowshoe hare...
Calorie count - sodas and energy drinks
VSRR Provisional Drug Overdose Death Counts
U.S. Department of Health & Human Services — This data contains provisional counts for drug overdose deaths based on a current flow of mortality data in the National Vital Statistics System. National...
Uranium Determination by Delayed Neutron Counting
2008-01-01
Uranium is a very important resource in the nuclear industry, especially in the exploitation of nuclear energy. Determination of uranium using delayed neutron counting (DNC) is simple, non-destructive, and
Four square mile survey pair count instructions
US Fish and Wildlife Service, Department of the Interior — This standard operating procedure (SOP) provides guidance for conducting bird pair count measurements on wetlands for the HAPETs Four-Square-Mile survey. This set of...
CoC Housing Inventory Count Reports
Department of Housing and Urban Development — Continuum of Care (CoC) Homeless Assistance Programs Housing Inventory Count Reports are a snapshot of a CoC’s housing inventory, available at the national and state...
2012 bobwhite whistle count : performance report
US Fish and Wildlife Service, Department of the Interior — Performance report for the 2012 spring whistle count to monitor northern bobwhite abundance in Kansas state. This survey was initiated in 1998, and is performed on...
2013 bobwhite whistle count : performance report
US Fish and Wildlife Service, Department of the Interior — Performance report for the 2013 spring whistle count to monitor northern bobwhite abundance in Kansas state. This survey was initiated in 1998, and is performed on...
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Variance estimation in neutron coincidence counting using the bootstrap method
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The outline of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
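The resampling idea is generic and easy to sketch: draw pseudo-samples with replacement from the measured events, recompute the statistic of interest for each, and take the spread of the resulting bootstrap distribution as the variance estimate. The toy statistic below is a simple mean count rate, not the NMC multiplicity analysis of the study, and all numbers are simulated placeholders.

```python
# Minimal sketch of the bootstrap variance estimate: resample the recorded
# per-interval counts with replacement, recompute the statistic for each
# pseudo-sample, and use the spread of the bootstrap distribution. The toy
# statistic here is a mean count rate, not an NMC multiplicity quantity.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(lam=12.0, size=400)     # counts per 1-s interval (simulated)

def statistic(sample):
    return sample.mean()                     # counts per second

n_boot = 2000
boot = np.array([statistic(rng.choice(counts, size=counts.size, replace=True))
                 for _ in range(n_boot)])

print("measured rate:            ", statistic(counts))
print("bootstrap variance:       ", boot.var(ddof=1))
print("analytic Poisson variance:", counts.mean() / counts.size)
```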
Statistical modelling for falls count data.
Ullah, Shahid; Finch, Caroline F; Day, Lesley
2010-03-01
Falls and their injury outcomes have count distributions that are highly skewed toward the right with clumping at zero, posing analytical challenges. Different modelling approaches have been used in the published literature to describe falls count distributions, often without consideration of the underlying statistical and modelling assumptions. This paper compares the use of modified Poisson and negative binomial (NB) models as alternatives to Poisson (P) regression, for the analysis of fall outcome counts. Four different count-based regression models (P, NB, zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB)) were each individually fitted to four separate fall count datasets from Australia, New Zealand and United States. The finite mixtures of P and NB regression models were also compared to the standard NB model. Both analytical (F, Vuong and bootstrap tests) and graphical approaches were used to select and compare models. Simulation studies assessed the size and power of each model fit. This study confirms that falls count distributions are over-dispersed, but not dispersed due to excess zero counts or heterogeneous population. Accordingly, the P model generally provided the poorest fit to all datasets. The fit improved significantly with NB and both zero-inflated models. The fit was also improved with the NB model, compared to finite mixtures of both P and NB regression models. Although there was little difference in fit between NB and ZINB models, in the interests of parsimony it is recommended that future studies involving modelling of falls count data routinely use the NB models in preference to the P or ZINB or finite mixture distribution. The fact that these conclusions apply across four separate datasets from four different samples of older people participating in studies of different methodology, adds strength to this general guiding principle.
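A compact way to see the over-dispersion argument is to fit both a Poisson and a negative binomial model to the same simulated falls counts and compare their log-likelihoods; a sketch using statsmodels follows. The simulated data and the single covariate are hypothetical stand-ins for the four real datasets analyzed in the study.

```python
# Minimal sketch: fit Poisson and negative binomial regressions to simulated,
# over-dispersed falls counts and compare fits. Data and the single covariate
# are hypothetical stand-ins for the study's datasets.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(65, 90, n)
mu = np.exp(-3.0 + 0.04 * age)                 # mean falls per person-year
# Negative binomial draw via a gamma-mixed Poisson (over-dispersed).
falls = rng.poisson(mu * rng.gamma(shape=1.5, scale=1 / 1.5, size=n))

X = sm.add_constant(age)
poisson_fit = sm.GLM(falls, X, family=sm.families.Poisson()).fit()
nb_fit = sm.NegativeBinomial(falls, X).fit(disp=False)

print("Poisson log-likelihood:          ", round(poisson_fit.llf, 1))
print("Negative binomial log-likelihood:", round(nb_fit.llf, 1))
print("estimated NB dispersion alpha:   ", round(nb_fit.params[-1], 2))
```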
How to count an introduction to combinatorics
Allenby, RBJT
2010-01-01
What's It All About? What Is Combinatorics? Classic Problems What You Need to Know Are You Sitting Comfortably? Permutations and Combinations The Combinatorial Approach Permutations Combinations Applications to Probability Problems The Multinomial Theorem Permutations and Cycles Occupancy Problems Counting the Solutions of Equations New Problems from Old A "Reduction" Theorem for the Stirling Numbers The Inclusion-Exclusion Principle Double Counting Derangements A Formula for the Stirling Numbers Stirling and Catalan Numbers Stirling Numbers Permutations and Stirling Numbers Catalan Numbers Pa
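Two of the counting sequences listed in the contents above (derangements and Stirling numbers of the second kind) are easy to compute from their standard recurrences; a short sketch is given below as a companion to the listing. The recurrences are textbook facts, not material taken from the book itself.

```python
# Minimal sketch: two counting sequences mentioned in the contents, computed
# from their standard recurrences.
from functools import lru_cache

def derangements(n):
    """D(n) = (n - 1) * (D(n-1) + D(n-2)), with D(0) = 1 and D(1) = 0."""
    if n == 0:
        return 1
    d_prev, d = 1, 0                       # D(0), D(1)
    for k in range(2, n + 1):
        d_prev, d = d, (k - 1) * (d + d_prev)
    return d

@lru_cache(maxsize=None)
def stirling2(n, k):
    """S(n, k): ways to partition an n-set into k nonempty blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print([derangements(n) for n in range(7)])      # 1, 0, 1, 2, 9, 44, 265
print([stirling2(5, k) for k in range(6)])      # 0, 1, 15, 25, 10, 1
```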
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradations under repetitive quenching where tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CCs used in the design of a SFCL can be determined.
Azam Zaka
2014-10-01
Full Text Available This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
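The kind of comparison reported above can be reproduced in miniature: simulate from a power function distribution with known shape parameter (scale fixed to 1), compute the plain maximum likelihood and moment estimators, and tabulate bias and mean square error over many replications. The paper's modified estimators are not implemented here; parameter values and replication counts are arbitrary.

```python
# Minimal sketch: Monte Carlo bias and MSE of the plain ML and moment
# estimators of the shape parameter of a power function distribution on (0, 1)
# (density a * x**(a-1)); the scale is fixed to 1 and the paper's modified
# estimators are not implemented. Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
a_true, n, reps = 2.5, 30, 10_000

mle, mom = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.random(n) ** (1.0 / a_true)     # inverse-CDF sampling, F(x) = x**a
    mle[r] = -n / np.log(x).sum()           # maximum likelihood estimator
    xbar = x.mean()
    mom[r] = xbar / (1.0 - xbar)            # moment estimator from E[X] = a/(a+1)

for name, est in [("MLE", mle), ("moments", mom)]:
    bias = est.mean() - a_true
    mse = ((est - a_true) ** 2).mean()
    print(f"{name:8s} bias={bias:+.3f}  MSE={mse:.3f}")
```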
Data Analysis & Statistical Methods for Command File Errors
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained with these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
The Complexity of Approximately Counting Stable Matchings
Chebolu, Prasad; Martin, Russell
2010-01-01
We investigate the complexity of approximately counting stable matchings in the $k$-attribute model, where the preference lists are determined by dot products of "preference vectors" with "attribute vectors", or by Euclidean distances between "preference points" and "attribute points". Irving and Leather proved that counting the number of stable matchings in the general case is #P-complete. Counting the number of stable matchings is reducible to counting the number of downsets in a (related) partial order and is interreducible, in an approximation-preserving sense, to a class of problems that includes counting the number of independent sets in a bipartite graph (#BIS). It is conjectured that no FPRAS exists for this class of problems. We show this approximation-preserving interreducibility remains even in the restricted $k$-attribute setting when $k \geq 3$ (dot products) or $k \geq 2$ (Euclidean distances). Finally, we show it is easy to count the number of stable matchings in the 1-attribute dot-product ...
SXDF-ALMA 2 arcmin$^2$ Deep Survey: 1.1-mm Number Counts
Hatsukade, Bunyo; Umehata, Hideki; Aretxaga, Itziar; Caputi, Karina I; Dunlop, James S; Ikarashi, Soh; Iono, Daisuke; Ivison, Rob J; Lee, Minju; Makiya, Ryu; Matsuda, Yuichi; Motohara, Kentaro; Nakanishi, Kouichiro; Ohta, Kouji; Tadaki, Ken-ich; Tamura, Yoichi; Wang, Wei-Hao; Wilson, Grant W; Yamaguchi, Yuki; Yun, Min S
2016-01-01
We report 1.1 mm number counts revealed with the Atacama Large Millimeter/submillimeter Array (ALMA) in the Subaru/XMM-Newton Deep Survey Field (SXDF). The advent of ALMA enables us to reveal millimeter-wavelength number counts down to the faint end without source confusion. However, previous studies are based on the ensemble of serendipitously-detected sources in fields originally targeting different sources and could be biased due to the clustering of sources around the targets. We derive number counts in the flux range of 0.2-2 mJy by using 23 ($\geq 4\sigma$) sources detected in a continuous 2.0 arcmin$^2$ area of the SXDF. The number counts are consistent with previous results within errors, suggesting that the counts derived from serendipitously-detected sources are not significantly biased, although there could be field-to-field variation due to the small survey area. By using the best-fit function of the number counts, we find that ~40% of the extragalactic background light at 1.1 mm is resolved at S(1.1mm)...
Development of an automated asbestos counting software based on fluorescence microscopy.
Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio
2015-01-01
An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
2013-01-01
... in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.
Nested Quantum Error Correction Codes
Wang, Zhuo; Fan, Hen; Vedral, Vlatko
2009-01-01
The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short-length quantum codes with certain properties. Our method works for codes of all lengths and distances, and is quite efficient for constructing optimal or near-optimal codes. Two main known methods for constructing new codes from old codes in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.
Density Reconstructions with Errors in the Data
Erika Gomes-Gonçalves
2014-06-01
Full Text Available The maximum entropy method was originally proposed as a variational technique to determine probability densities from the knowledge of a few expected values. The applications of the method beyond its original role in statistical physics are manifold. An interesting feature of the method is its potential to incorporate errors in the data. Here, we examine two possible ways of doing that. The two approaches have different intuitive interpretations, and one of them allows for error estimation. Our motivating example comes from the field of risk analysis, but the statement of the problem might as well come from any branch of applied sciences. We apply the methodology to a problem consisting of the determination of a probability density from a few values of its numerically-determined Laplace transform. This problem can be mapped onto a problem consisting of the determination of a probability density on [0, 1] from the knowledge of a few of its fractional moments up to some measurement errors stemming from insufficient data.
SYSTEMATIC ERROR FROM Th/U RATIO IN LUMINESCENCE AND ESR DATING
Li Shenghua; Man-Yin; et al.
1995-01-01
In luminescence and ESR dating methods, the total count rate from thick source alpha counting is commonly used for estimating the annual dose, with the assumption of equal activities for both uranium and thorium decay chains. This corresponds to a Th/U weight ratio of 3.2. The systematic error in the total dose rate due to uncertainty in this ratio is calculated. It is found that the error is insignificant for uniformly distributed samples such as sediments, but can be significant in some extreme circumstances.
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
Bouchard, Jean-Pierre; Veilleux, Israël; Jedidi, Rym; Noiseux, Isabelle; Fortin, Michel; Mermut, Ozzy
2010-05-24
Development, production quality control and calibration of optical tissue-mimicking phantoms require a convenient and robust characterization method with known absolute accuracy. We present a solid phantom characterization technique based on time resolved transmittance measurement of light through a relatively small phantom sample. The small size of the sample enables characterization of every material batch produced in routine phantom production. Time resolved transmittance data are pre-processed to correct for dark noise, sample thickness and instrument response function. Pre-processed data are then compared to a forward model based on the radiative transfer equation solved through Monte Carlo simulations, accurately taking into account the finite geometry of the sample. The computational burden of the Monte Carlo technique was alleviated by building a lookup table of pre-computed results and using interpolation to obtain modeled transmittance traces at intermediate values of the optical properties. Near perfect fit residuals are obtained with a fit window using all data above 1% of the maximum value of the time resolved transmittance trace. Absolute accuracy of the method is estimated through a thorough error analysis which takes into account the following contributions: measurement noise, system repeatability, instrument response function stability, sample thickness variation, refractive index inaccuracy, time-correlated single photon counting system time-base inaccuracy and forward model inaccuracy. Two sigma absolute error estimates of 0.01 cm(-1) (11.3%) and 0.67 cm(-1) (6.8%) are obtained for the absorption coefficient and reduced scattering coefficient respectively.
Cosmological constraints from Subaru weak lensing cluster counts
Hamana, Takashi; Koike, Michitaro; Miller, Lance
2015-01-01
We present results of weak lensing cluster counts obtained from 11 sq.deg SuprimeCam data. Although the area is much smaller than in previous work dealing with weak lensing peak statistics, the number density of galaxies usable for weak lensing analysis is about twice as large as in those studies. The higher galaxy number density reduces the noise in the weak lensing mass maps, and thus increases the signal-to-noise ratio of peaks of the lensing signal due to massive clusters. This enables us to construct a weak lensing selected cluster sample by adopting a high threshold S/N, such that the contamination rate due to false signals is small. We find 6 peaks with S/N>5. For all the peaks, previously identified clusters of galaxies are matched within a separation of 1 arcmin, demonstrating good correspondence between the peaks and clusters of galaxies. We evaluate the statistical error using mock weak lensing data, and find Npeak=6+/-3.1 in an effective area of 9.0 sq.deg. We compare the measured weak lensing cluster counts wi...
Stephen J. Stanislav
2010-06-01
Full Text Available In this study, we review various methods of estimating detection probabilities for avian point counts: distance sampling, multiple observer methods, and recently proposed time-of-detection methods. Both distance and multiple observer methods require the sometimes unrealistic assumption that all birds in the population sing during the count interval. We provide a general model of detection where the total probability of detection is made up of the probability of a bird singing, i.e., availability, and the probability of detecting a bird, conditional on its having sung. We show that the time-of-detection method provides an estimate of the total probability, whereas combining the time-of-detection method with a multiple observer method enables estimation of the two components of the detection process separately. Our approach is shown to be a special case of Pollock's robust capture-recapture design where the probability that a bird does not sing is equivalent to the probability that an animal is a temporary emigrant. We estimate Hooded Warbler and Ovenbird population size, through maximum likelihood estimation, using experimentally simulated field data for which the true population sizes were known. The method performs well when singing rates and detection probabilities are high, and when observers are able to accurately localize individual birds. Population sizes are underestimated when there is heterogeneity of singing rates among individual birds, especially when singing rates are close to zero. Despite the additional expense and the potential for counting and matching errors, we encourage field ornithologists to consider using this combined method in their field studies to better understand the detection process, and to obtain better abundance estimates.
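A minimal numerical sketch of the decomposition described above, with illustrative numbers only: the total detection probability is availability (the probability of singing) times the conditional detection probability, and a raw count can be corrected by dividing by that total.

```python
# Sketch of the detection decomposition described above: total detection
# probability = P(bird sings during the count) * P(detected | it sang).
# All numbers are illustrative, not from the study.
p_sing = 0.7                     # availability during the count interval
p_detect_given_sing = 0.8        # conditional detection probability
p_total = p_sing * p_detect_given_sing

count = 42                       # birds actually detected at a point count
n_hat = count / p_total          # count-based abundance estimate C / p
print(f"total detection probability: {p_total:.2f}")
print(f"estimated abundance: {n_hat:.1f}")
```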
Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?
Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.
2007-01-01
This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…
Measurement Error and Equating Error in Power Analysis
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Platelet counting with the BD Accuri(TM) C6 flow cytometer.
Masters, Andrew; Harrison, Paul
2014-01-01
The Accuri™ C6 is a compact flow cytometer that uses a peristaltic pump with a laminar flow fluidic system and can measure absolute cell counts. In this study we have evaluated this method with the International Reference Method (IRM) simultaneously measured on both the Accuri™ C6 and a reference flow cytometer. After optimisation of sample labelling conditions, final dilutions and flow cytometer settings, a comparison of the absolute fluorescent platelet count with the RBC/platelet ratio on the C6 and the IRM was then performed in 144 patient samples with a full range of platelet counts (range 2-650 × 10(9)/l). The platelet/RBC ratio method determined on the Accuri™ agreed well with the IRM (R(2)=0.99, bias=2.3 (Bland Altman) and R(2)=0.96, bias=1.02 at counts <50 × 10(9)/l). The absolute platelet count also agreed well with the IRM (R(2)=0.97, bias=-0.16 and R(2)=0.91, bias=3.7 at <50 × 10(9)/l). The C6 absolute platelet count and RBC/platelet ratio methods also agreed well (R(2)=0.99, bias=-2.5 and R(2)=0.95, bias=2.71 at counts <50 × 10(9)/l). Reproducibility studies on the C6 gave CVs of <5% for the RBC/platelet ratio and <12% for the absolute cell counts. The C6 also demonstrated excellent linearity on diluted samples with both volume and ratio methods (R(2)=0.99). As one might expect, the absolute platelet count is therefore slightly more inaccurate than the RBC/platelet ratio particularly at platelet counts <50 × 10(9)/l as it is likely to be more sensitive to pipetting error. The Accuri™ C6 provides a simple, rapid and reliable method for measuring platelet counts by either the RBC/platelet or direct volume methods. The direct volume method can also be used to determine platelet counts within purified platelet preparations or concentrates in the absence of RBC.
Evaluation of DAPI direct count, computer assisted and plate count methods
Chivu, Bogdan
2010-01-01
The feasibility of using automatic counting of bacteria stained with the highly specific and sensitive fluorescing DNA stain DAPI (4',6-diamidino-2-phenylindole), together with direct manual counting, to enumerate both a pure overnight culture of Pseudomonas putida and a sea water enrichment culture was tested in correlation with direct plate counting, turbidity and absorbance at 600 nm, to obtain cross-validation. Six diluted samples from the overnight pure culture of Pseudomonas putida and the sea water culture ...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for arbitrarily giving a statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Weigel, Ralf; Hermann, Markus; Curtius, Joachim; Voigt, Christiane; Walter, Saskia; Böttger, Thomas; Lepukhov, Boris; Belyaev, Gennady; Borrmann, Stephan
2008-01-01
This study aims at a detailed characterization of an ultra-fine aerosol particle counting system for operation on board the Russian high altitude research aircraft M-55 "Geophysica" (maximum ceiling of 21 km). The COndensation PArticle counting Systems (COPAS) consists of an aerosol inlet and two dual-channel continuous flow Condensation Particle Counters (CPCs). The aerosol inlet, adapted for COPAS measurements on board the M-55 "Geophysica", is described concerning aspiration, transmissio...
Acconcia, G.; Labanca, I.; Rech, I.; Gulinatti, A.; Ghioni, M.
2017-02-01
The minimization of Single Photon Avalanche Diodes (SPADs) dead time is a key factor to speed up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom technology SPAD detectors. The AQC can also operate the new red enhanced SPAD and provide the timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.
Spatial frequency domain error budget
Hauschildt, H; Krulewich, D
1998-08-27
The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine
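The following sketch illustrates, under assumed band edges and error values (not the paper's numbers or procedure), how an error budget can be kept per spatial-frequency band and combined in quadrature rather than collapsed into a single net RMS figure.

```python
# Minimal sketch (not the LLNL procedure): combine independent error sources
# in quadrature within spatial-frequency bands instead of as one net RMS,
# so low-frequency form error and high-frequency finish error stay separate.
import numpy as np

# Hypothetical per-band RMS contributions (nm) for three error sources.
bands = ["form (<1 /mm)", "waviness (1-10 /mm)", "finish (>10 /mm)"]
sources = {
    "spindle motion": np.array([8.0, 3.0, 0.5]),
    "tool vibration": np.array([1.0, 4.0, 2.0]),
    "thermal drift":  np.array([6.0, 0.5, 0.1]),
}

# Root-sum-square across sources, kept per band.
net_per_band = np.sqrt(sum(rms**2 for rms in sources.values()))
for band, net in zip(bands, net_per_band):
    print(f"{band:22s} net RMS = {net:.2f} nm")
print(f"overall RMS = {np.sqrt(np.sum(net_per_band**2)):.2f} nm")
```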
Reducing errors in emergency surgery.
Watters, David A K; Truskett, Philip G
2013-06-01
Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.
Benjamin Thompson, Count Rumford Count Rumford on the nature of heat
Brown, Sanborn C
1967-01-01
Men of Physics: Benjamin Thompson - Count Rumford: Count Rumford on the Nature of Heat covers the significant contributions of Count Rumford in the fields of physics. Count Rumford was born with the name Benjamin Thompson on March 23, 1753, in Woburn, Massachusetts. This book is composed of two parts encompassing 11 chapters, and begins with a presentation of Benjamin Thompson's biography and his interest in physics, particularly as an advocate of an "anti-caloric" theory of heat. The subsequent chapters are devoted to his many discoveries that profoundly affected the physical thought
General expression of double ellipsoidal heat source model and its error analysis
Anonymous
2008-01-01
In order to analyze the maximum power density error for different heat flux distribution parameter values in the double ellipsoidal heat source model, a general expression of the double ellipsoidal heat source model was derived from the Goldak double ellipsoidal heat source model, and the error of the maximum power density was analyzed on this basis. The calculation error of thermal cycling parameters caused by the maximum power density error was compared quantitatively by numerical simulation. The results show that, to guarantee the accuracy of welding numerical simulation, it is better to introduce an error correction coefficient into the Goldak double ellipsoidal heat source model expression. The heat flux distribution parameter should take a higher value for higher power density welding methods.
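For reference, the widely used Goldak double-ellipsoidal expression (front quadrant) on which this kind of derivation is based can be sketched as below; the rear quadrant uses the rear fraction and rear semi-axis in place of the front ones. The parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch of the standard Goldak double-ellipsoidal heat source (front quadrant).
# Parameter values below are illustrative, not from the paper.
import math

def goldak_front(x, y, z, Q, f_f, a_f, b, c):
    """Power density (W/m^3) of the front ellipsoid at point (x, y, z)."""
    coeff = 6.0 * math.sqrt(3.0) * f_f * Q / (a_f * b * c * math.pi * math.sqrt(math.pi))
    return coeff * math.exp(-3.0 * x**2 / a_f**2
                            - 3.0 * y**2 / b**2
                            - 3.0 * z**2 / c**2)

# Example: 2 kW arc, front fraction 0.6, semi-axes in metres (assumed values).
q0 = goldak_front(0.0, 0.0, 0.0, Q=2000.0, f_f=0.6, a_f=0.004, b=0.003, c=0.002)
print(f"maximum power density (front): {q0:.3e} W/m^3")
```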
Khavane Karna
2012-11-01
Full Text Available Medication error can increase the cost, prolong hospital stay and increase the risk of death almost two fold. Several studies have already demonstrated that pharmacists can play a major role in the detection and prevention of medication errors. The present study was aimed to detect and evaluate the incidence and types of medication errors and to assess their severity in the medicine wards of Basaveshwar teaching and general hospital, Gulbarga. A prospective study was carried out from July 2011 to January 2012. Inpatient records of patients from six units of the medicine department were reviewed during their stay in hospital. Detected medication errors were documented and evaluated. A total of 500 patient cases were selected; among them 77.2% were male and 25.8% were female. 37.5% of them were in the age group of 40 to 60 years. 118 medication errors were detected in 72 patients. The maximum number of medication errors (27) was detected in the month of December 2011. The overall incidence of medication error was found to be 23.6%. Of the 118 medication errors observed, 29.6% were errors in medication ordering and transcription, 24.5% were errors in medication dispensing and 45.7% were nursing errors in medication administration. Regarding the causes of medication error, 61.1% were due to nurses, 17.7% were due to pharmacists and 16.1% were due to physicians. The majority of medication errors belonged to the CVS drug class (20.3%). On evaluation of severity, the majority of medication errors (85.5%) were classified as category Error, No harm, followed by 14.4% in category No Error. This study concluded that 23.6% medication errors were detected during the study period and revealed that pharmacists can play a major role in preventing these errors by early detection.
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several log-magnitude higher number of cells compared to the microscopy-based approaches. Rotational positioning of cells can occur, leading to discordance between spot counts. As a solution to the counting error caused by overlapping spots, a Gaussian Mixture Model (GMM) based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image segmentation based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
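A minimal sketch of the described pipeline, assuming synthetic 2-D spot data in place of FISH-IS images: Gaussian mixtures of increasing order are fitted, their AIC/BIC values serve as global features, and a random forest separates overlapping from well-separated spot patterns. The feature design and labels here are illustrative only.

```python
# Sketch, with synthetic data in place of FISH-IS images: GMM AIC/BIC values
# as global features, classified with a random forest.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def gmm_features(points, max_components=3):
    """AIC/BIC of 1..max_components-component GMMs fitted to 2-D spot samples."""
    feats = []
    for k in range(1, max_components + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(points)
        feats += [gm.aic(points), gm.bic(points)]
    return feats

def separated_spots(n_spots=2):
    centers = rng.uniform(0, 10, size=(n_spots, 2))
    return np.vstack([c + rng.normal(0, 0.3, size=(50, 2)) for c in centers])

# Label 0: two well-separated spots; label 1: closely overlapping spots (one blob).
X = [gmm_features(separated_spots()) for _ in range(20)]
y = [0] * 20
X += [gmm_features(rng.normal(5, 0.4, size=(100, 2))) for _ in range(20)]
y += [1] * 20

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```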
Scheike, Thomas Harder
2002-01-01
We use the additive risk model of Aalen (Aalen, 1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form we show that the usual Aalen estimator can be used and gives almost unbiased estimates. The usual martingale based variance estimator is incorrect and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994) and show that the standard errors that are computed based on an assumption of intensities are incorrect and give a different...
Error Analysis in English Language Learning
杜文婷
2009-01-01
Errors in English language learning are usually classified into interlingual errors and intralingual errors. Having a clear knowledge of the causes of these errors will help students learn English better.
Error Analysis And Second Language Acquisition
王惠丽
2016-01-01
Based on the theories of error and error analysis, this article explores the effect of error and error analysis on SLA, and thus gives some advice to language teachers and language learners.
Sunde, Peter; Jessen, Lonnie
2013-01-01
Spotlight surveys conducted by volunteers are a promising method to assess the abundance of nocturnally active mammals, but estimates are subject to bias if different observer groups differ in their ability to detect animals in the dark. We quantified the variation amongst volunteer spotlight...... with non-hunters and decreased as a function of age but were independent of sex or educational background. If observer-specific detection probabilities were applied to real counting routes, point count estimates from inexperienced observers without a hunting background would only be 43 % (95 % CI, 39
Quantifying error distributions in crowding.
Hanus, Deborah; Vul, Edward
2013-03-22
When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.
Discretization error of Stochastic Integrals
Fukasawa, Masaaki
2010-01-01
Asymptotic error distribution for approximation of a stochastic integral with respect to continuous semimartingale by Riemann sum with general stochastic partition is studied. Effective discretization schemes of which asymptotic conditional mean-squared error attains a lower bound are constructed. Two applications are given; efficient delta hedging strategies with transaction costs and effective discretization schemes for the Euler-Maruyama approximation are constructed.
Dual Processing and Diagnostic Errors
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
Barriers to medical error reporting
Jalal Poorolajal
2015-01-01
Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staffs of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the age group of 40-50 years (67.6%), in less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.
Tightness of the recentered maximum of the two-dimensional discrete Gaussian Free Field
Bramson, Maury
2010-01-01
We consider the maximum of the discrete two dimensional Gaussian free field (GFF) in a box, and prove that its maximum, centered at its mean, is tight, settling a long-standing conjecture. The proof combines a recent observation of Bolthausen, Deuschel and Zeitouni with elements from (Bramson 1978) and comparison theorems for Gaussian fields. An essential part of the argument is the precise evaluation, up to an error of order 1, of the expected value of the maximum of the GFF in a box. Related Gaussian fields, such as the GFF on a two-dimensional torus, are also discussed.
Application of LIDAR to forest inventory for tree count in stands of Eucalyptus sp
Fausto Weimar Acerbi Junior
2012-06-01
Full Text Available Light Detection and Ranging, or LIDAR, has become an effective ancillary tool for extracting forest inventory data and for use in other forest studies. This work was aimed at establishing an effective methodology for using LIDAR for tree counting in a stand of Eucalyptus sp. located in southern Bahia state. Information provided ranges from in-flight raw data processing to the final tree count. Intermediate processing steps are of critical importance to the quality of results and include the following stages: organizing point clouds, creating a canopy surface model (CSM) through TIN and IDW interpolation, and final automated tree counting with a local maximum algorithm using 5 x 5 and 3 x 3 windows. Results were checked against manual tree counts using Quickbird images for verification of accuracy. Tree counting using IDW interpolation with a 5x5 window for the count algorithm was found to be accurate to 97.36%. This result demonstrates the effectiveness of the methodology and its potential for future applications.
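A minimal sketch of the local-maximum counting step, assuming a synthetic canopy surface model raster and hypothetical window sizes and height threshold rather than the study's actual CSM.

```python
# Minimal sketch of local-maximum tree detection on a canopy surface model
# (CSM) raster, in the spirit of the 3x3 / 5x5 window counts described above.
# The CSM here is synthetic; real data would come from interpolated LIDAR returns.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(2)
csm = rng.uniform(0, 1, size=(200, 200))          # stand-in canopy surface model

def count_local_maxima(csm, window=5, min_height=0.95):
    """Count cells that are the maximum of their window and above a height floor."""
    is_peak = (csm == maximum_filter(csm, size=window)) & (csm >= min_height)
    return int(is_peak.sum())

print("trees (5x5 window):", count_local_maxima(csm, window=5))
print("trees (3x3 window):", count_local_maxima(csm, window=3))
```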
Photon-counting spaceborne altimeter simulator
Blazej, Josef
2004-11-01
We present a photon counting laser altimeter simulator. The simulator is designed to be a theoretical and numerical complement to a Technology Demonstrator of the spaceborne laser altimeter for planetary studies built at our university. The European Space Agency has nominated the photon counting altimeter as one of the attractive devices for planetary research. The device should provide altimetry in the range 400 to 1400 km with one meter range resolution under rough conditions - Sun illumination, radiation, etc. The general altimeter concept assumes a laser radar operating on the photon counting principle. According to this concept, the simulator is based on photon counting radar simulation, which has been enhanced to handle planetary surface roughness, vertical terrain profile and its reflectivity. The simulator is a useful complement for any photon counting altimeter, both for altimeter design and for measured data analysis. Our simulator enables modelling of the orbital motion, range, terrain profile, reflectivity, and their influence on the overall energy budget and the ultimate signal to noise ratio acceptable for the altimetry. The simulator can be adapted for various airborne or spaceborne applications.
Energy harvesting using AC machines with high effective pole count
Geiger, Richard Theodore
In this thesis, ways to improve the power conversion of rotating generators at low rotor speeds in energy harvesting applications were investigated. One method is to increase the pole count, which increases the generator back-emf without also increasing the I²R losses, thereby increasing both torque density and conversion efficiency. One machine topology that has a high effective pole count is a hybrid "stepper" machine. However, the large self inductance of these machines decreases their power factor and hence the maximum power that can be delivered to a load. This effect can be cancelled by the addition of capacitors in series with the stepper windings. A circuit was designed and implemented to automatically vary the series capacitance over the entire speed range investigated. The addition of the series capacitors improved the power output of the stepper machine by up to 700%. At low rotor speeds, with the addition of series capacitance, the power output of the hybrid "stepper" was more than 200% that of a similarly sized PMDC brushed motor. Finally, in this thesis a hybrid lumped parameter / finite element model was used to investigate the impact of number, shape and size of the rotor and stator teeth on machine performance. A typical off-the-shelf hybrid stepper machine has significant cogging torque by design. This cogging torque is a major problem in most small energy harvesting applications. In this thesis it was shown that the cogging and ripple torque can be dramatically reduced. These findings confirm that high-pole-count topologies, and specifically the hybrid stepper configuration, are an attractive choice for energy harvesting applications.
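A small sketch of the series-resonance idea described in this abstract: the capacitance that cancels the winding self-inductance at a given electrical frequency is C = 1/(omega^2 L), which is why the capacitance must be varied with rotor speed. The inductance and rotor tooth count below are assumed values, not those of the thesis machine.

```python
# Sketch of series capacitance needed to cancel winding inductance at the
# electrical frequency set by rotor speed; machine parameters are assumed.
import math

L_winding = 20e-3          # winding self-inductance (H), assumed
rotor_teeth = 50           # hybrid stepper rotor tooth count, assumed

def series_capacitance(speed_rpm, L=L_winding, teeth=rotor_teeth):
    f_elec = (speed_rpm / 60.0) * teeth       # electrical frequency (Hz)
    omega = 2.0 * math.pi * f_elec
    return 1.0 / (omega**2 * L)               # C = 1 / (omega^2 * L)

for rpm in (60, 120, 300, 600):
    print(f"{rpm:4d} rpm -> C = {series_capacitance(rpm) * 1e6:.1f} uF")
```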
Imaging by photon counting with 256x256 pixel matrix
Tlustos, Lukas; Campbell, Michael; Heijne, Erik H. M.; Llopart, Xavier
2004-09-01
Using 0.25µm standard CMOS we have developed 2-D semiconductor matrix detectors with sophisticated functionality integrated inside each pixel of a hybrid sensor module. One of these sensor modules is a matrix of 256x256 square 55µm pixels intended for X-ray imaging. This device is called 'Medipix2' and features a fast amplifier and two-level discrimination for signals between 1000 and 100000 equivalent electrons, with overall signal noise ~150 e- rms. Signal polarity and comparator thresholds are programmable. A maximum count rate of nearly 1 MHz per pixel can be achieved, which corresponds to an average flux of 3×10^10 photons per cm². The selected signals can be accumulated in each pixel in a 13-bit register. The serial readout takes 5-10 ms. A parallel readout of ~300 µs could also be used. Housekeeping functions such as local dark current compensation, test pulse generation, silencing of noisy pixels and threshold tuning in each pixel contribute to the homogeneous response over a large sensor area. The sensor material can be adapted to the energy of the X-rays. Best results have been obtained with high-resistivity silicon detectors, but also CdTe and GaAs detectors have been used. The lowest detectable X-ray energy was about 4 keV. Background measurements have been made, as well as measurements of the uniformity of imaging by photon counting. Very low photon count rates are feasible and noise-free at room temperature. The readout matrix can be used also with visible photons if an energy or charge intensifier structure is interposed such as a gaseous amplification layer or a microchannel plate or acceleration field in vacuum.
Onorbit IMU alignment error budget
Corson, R. W.
1980-01-01
The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources, which were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
Binary Error Correcting Network Codes
Wang, Qiwen; Li, Shuo-Yen Robert
2011-01-01
We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.
Error Propagation in the Hypercycle
Campos, P R A; Stadler, P F
1999-01-01
We study analytically the steady-state regime of a network of n error-prone self-replicating templates forming an asymmetric hypercycle and its error tail. We show that the existence of a master template with a higher non-catalyzed self-replicative productivity, a, than the error tail ensures the stability of chains in which m
Somatic cell and factors which affect their count in milk
Zrinka Čačić
2003-01-01
Full Text Available Milk quality is determined by chemical composition, physical characteristics and hygienic parameters. The main indicators of the hygienic quality of milk are the total number of microorganisms and the somatic cell count (SCC). Environmental factors have the greatest influence on increasing SCC. The most important environmental parameters are status of udder infection, age of cow, stage of lactation, number of lactation, breed, housing, geographical area and season, herd size, stress, heavy physical activity, and milking. A farmer (milk producer) himself can control a great number of environmental factors using good management practice and permanent education. Since SCC participates in creating the price of milk, it is necessary to inform milk producers how to organise their production so that they produce the maximum quantity of milk of good hygienic quality.
Photon-Counting Arrays for Time-Resolved Imaging
I. Michel Antolovic
2016-06-01
Full Text Available The paper presents a camera comprising 512 × 128 pixels capable of single-photon detection and gating with a maximum frame rate of 156 kfps. The photon capture is performed through a gated single-photon avalanche diode that generates a digital pulse upon photon detection and through a digital one-bit counter. Gray levels are obtained through multiple counting and accumulation, while time-resolved imaging is achieved through a 4-ns gating window controlled with subnanosecond accuracy by a field-programmable gate array. The sensor, which is equipped with microlenses to enhance its effective fill factor, was electro-optically characterized in terms of sensitivity and uniformity. Several examples of capture of fast events are shown to demonstrate the suitability of the approach.
FPU-Supported Running Error Analysis
T. Zahradnický; R. Lórencz
2010-01-01
A-posteriori forward rounding error analyses tend to give sharper error estimates than a-priori ones, as they use actual data quantities. One such a-posteriori analysis – running error analysis – uses expressions consisting of two parts; one generates the error and the other propagates input errors to the output. This paper suggests replacing the error generating term with an FPU-extracted rounding error estimate, which produces a sharper error bound.
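As a simple illustration of running error analysis in general (not the paper's FPU-based estimator), the sketch below accumulates a first-order rounding-error bound alongside a recursive summation, using the actual intermediate quantities as an a-posteriori analysis does.

```python
# Running error analysis sketch for recursive summation: alongside the sum we
# accumulate |partial sum| at every addition, so that u * accumulated value
# bounds the rounding error to first order (u = unit roundoff).
import numpy as np

def sum_with_running_error(xs):
    u = np.finfo(np.float64).eps / 2.0   # unit roundoff for double precision
    s = 0.0
    eta = 0.0
    for x in xs:
        s = s + x
        eta += abs(s)                    # each rounded addition contributes at most u*|s|
    return s, u * eta

xs = (np.linspace(0.0, 1.0, 100_001) * 1e-7).tolist()
s, bound = sum_with_running_error(xs)
exact = float(np.sum(np.array(xs, dtype=np.longdouble)))   # higher-precision reference
print("computed sum       :", s)
print("running error bound:", bound)
print("actual error       :", abs(s - exact))
```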
Whales from space: counting southern right whales by satellite.
Fretwell, Peter T; Staniland, Iain J; Forcada, Jaume
2014-01-01
We describe a method of identifying and counting whales using very high resolution satellite imagery through the example of southern right whales breeding in part of the Golfo Nuevo, Península Valdés in Argentina. Southern right whales have been extensively hunted over the last 300 years and although numbers have recovered from near extinction in the early 20(th) century, current populations are fragmented and are estimated at only a small fraction of pre-hunting total. Recent extreme right whale calf mortality events at Península Valdés, which constitutes the largest single population, have raised fresh concern for the future of the species. The WorldView2 satellite has a maximum 50 cm resolution and a water penetrating coastal band in the far-blue part of the spectrum that allows it to see deeper into the water column. Using an image covering 113 km², we identified 55 probable whales and 23 other features that are possibly whales, with a further 13 objects that are only detected by the coastal band. Comparison of a number of classification techniques, to automatically detect whale-like objects, showed that a simple thresholding technique of the panchromatic and coastal band delivered the best results. This is the first successful study using satellite imagery to count whales; a pragmatic, transferable method using this rapidly advancing technology that has major implications for future surveys of cetacean populations.
An Empirical Investigation of Predicting Fault Count, Fix Cost and Effort Using Software Metrics
Raed Shatnawi
2016-02-01
Full Text Available Software fault prediction is important in software engineering field. Fault prediction helps engineers manage their efforts by identifying the most complex parts of the software where errors concentrate. Researchers usually study the fault-proneness in modules because most modules have zero faults, and a minority have the most faults in a system. In this study, we present methods and models for the prediction of fault-count, fault-fix cost, and fault-fix effort and compare the effectiveness of different prediction models. This research proposes using a set of procedural metrics to predict three fault measures: fault count, fix cost and fix effort. Five regression models are used to predict the three fault measures. The study reports on three data sets published by NASA. The models for each fault are evaluated using the Root Mean Square Error. A comparison amongst fault measures is conducted using the Relative Absolute Error. The models show promising results to provide a practical guide to help software engineers in allocating resources during software testing and maintenance. The cost fix models show equal or better performance than fault count and effort models.
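A small sketch of the two evaluation measures named in this abstract, computed on synthetic fault counts; the relative absolute error shown compares model error to a mean-only baseline, which is one common definition and may differ in detail from the study's.

```python
# Sketch of the evaluation measures named above: root mean square error for
# each model, and relative absolute error for comparing across fault measures.
# Data are synthetic, not from the NASA data sets.
import numpy as np

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def relative_absolute_error(actual, predicted):
    baseline = np.abs(actual - actual.mean()).sum()   # mean-only predictor
    return float(np.abs(actual - predicted).sum() / baseline)

actual = np.array([0, 2, 1, 5, 3, 0, 8], dtype=float)       # e.g. fault counts per module
predicted = np.array([0.4, 1.5, 1.2, 4.1, 3.3, 0.2, 6.9])

print("RMSE:", round(rmse(actual, predicted), 3))
print("RAE :", round(relative_absolute_error(actual, predicted), 3))
```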
Moraes, D; Nygård, E
2008-01-01
This ASIC is a counting mode front-end electronic optimized for the readout of CdZnTe/CdTe and silicon sensors, for possible use in applications where the flux of ionizing radiation is high. The chip is implemented in 0.25 μm CMOS technology. The circuit comprises 128 channels equipped with a transimpedance amplifier followed by a gain shaper stage with 21 ns peaking time, two discriminators and two 18-bit counters. The channel architecture is optimized for the detector characteristics in order to achieve the best energy resolution at counting rates of up to 5 M counts/second. The amplifier shows a linear sensitivity of 118 mV/fC and an equivalent noise charge of about 711 e−, for a detector capacitance of 5 pF. Complete evaluation of the circuit is presented using electronic pulses and pixel detectors.
Eccentric error and compensation in rotationally symmetric laser triangulation
Wang Lei; Gao Jun; Wang Xiaojia; Johannes Eckstein; Peter Ott
2007-01-01
Rotationally symmetric triangulation (RST) sensors have more flexibility and fewer uncertainty limits because of the abaxial rotationally symmetric optical system. But if the incident laser is eccentric, the symmetry of the image degrades, and this results in an eccentric error, especially when some part of the imaged ring is blocked. The model of rotationally symmetric triangulation that meets the Scheimpflug condition is presented in this paper. The error from an eccentric incident laser is analysed. It is pointed out that the eccentric error is composed of two parts: one is a cosine in circumference and proportional to the eccentric departure factor, and the other is a much smaller quadratic factor of the departure. When the ring is complete, the first error factor is zero because it is integrated over the whole ring, but if some part of the ring is blocked, the first factor becomes the main error. Simulation verifies the result of the analysis. Finally, a compensation method for the error when some part of the ring is lost is presented based on a neural network. The experimental results show that the compensation reduces the absolute maximum error to half, and the standard deviation of the error to 1/3.
Measuring worst-case errors in a robot workcell
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center]
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors.
High Count Rate Electron Probe Microanalysis
Geller, Joseph D.; Herrington, Charles
2002-01-01
Reducing the measurement uncertainty of quantitative analyses made using electron probe microanalyzers (EPMA) requires a careful study of the individual uncertainties from each definable step of the measurement. Those steps include measuring the incident electron beam current and voltage, knowing the angle between the electron beam and the sample (takeoff angle), collecting the emitted x rays from the sample, comparing the emitted x-ray flux to known standards (to determine the k-ratio) and transformation of the k-ratio to concentration using algorithms which include, as a minimum, the atomic number, absorption, and fluorescence corrections. This paper discusses the collection and counting of the emitted x rays, which are diffracted into the gas flow or sealed proportional x-ray detectors. The relative uncertainty in the number of x rays collected decreases as the number of counts increases. The uncertainty of the collected signal is fully described by Poisson statistics. Increasing the number of x rays collected involves either counting longer or at a higher counting rate. Counting longer means the analysis time increases and may become excessive to reach the desired uncertainty. Instrument drift also becomes an issue. Counting at higher rates has its limitations, which are a function of the detector physics and the detecting electronics. Since the beginning of EPMA analysis, analog electronics have been used to amplify and discriminate the x-ray induced ionizations within the proportional counter. This paper will discuss the use of digital electronics for this purpose. These electronics are similar to those used for energy dispersive analysis of x rays with either Si(Li) or Ge(Li) detectors except that the shaping time constants are much smaller. PMID:27446749
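A minimal illustration of the Poisson counting statistics referred to above: the relative uncertainty of a count N is 1/sqrt(N), so halving the relative uncertainty requires four times as many counts, by counting longer or at a higher rate.

```python
# Relative uncertainty of a Poisson-distributed x-ray count N falls as 1/sqrt(N).
import math

for counts in (1_000, 10_000, 100_000, 1_000_000):
    rel = 1.0 / math.sqrt(counts)
    print(f"N = {counts:>9,d}  relative uncertainty = {rel:.4%}")
```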
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Counting dyons in N = 4 string theory
Dijkgraaf, Robbert; Verlinde, Erik; Verlinde, Herman
1997-02-01
We present a microscopic index formula for the degeneracy of dyons in four-dimensional N = 4 string theory. This counting formula is manifestly symmetric under the duality group, and its asymptotic growth reproduces the macroscopic Bekenstein-Hawking entropy. We give a derivation of this result in terms of the type II five-brane compactified on K3, by assuming that its fluctuations are described by a closed string theory on its world-volume. We find that the degeneracies are given in terms of the denominator of a generalized super Kac-Moody algebra. We also discuss the correspondence of this result with the counting of D-brane states.
Counting dyons in N=4 string theory
Dijkgraaf, R; Verlinde, Herman L
1997-01-01
We present a microscopic index formula for the degeneracy of dyons in four-dimensional N=4 string theory. This counting formula is manifestly symmetric under the duality group, and its asymptotic growth reproduces the macroscopic Bekenstein-Hawking entropy. We give a derivation of this result in terms of the type II five-brane compactified on K3, by assuming that its fluctuations are described by a closed string theory on its world-volume. We find that the degeneracies are given in terms of the denominator of a generalized super Kac-Moody algebra. We also discuss the correspondence of this result with the counting of D-brane states.
Vernstrom, Tessa; Wall, Jasper; Scott, Douglas
2014-05-01
We describe an analysis of 3-GHz confusion-limited data from the Karl G. Jansky Very Large Array (VLA). We show that with minimal model assumptions, P(D), Bayesian, and Markov chain Monte Carlo (MCMC) methods can define the source count to levels some 10 times fainter than the conventional confusion limit. Our verification process includes a full realistic simulation that considers known information on source angular extent and clustering. It appears that careful analysis of the statistical properties of an image is more effective than counting individual objects.
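A toy version of the P(D) idea can be simulated directly: draw a Poisson number of sources per beam from an assumed power-law count, sum their fluxes, add noise, and histogram the resulting deflections. The sketch below is purely illustrative (all counts, flux limits, and noise levels are made up) and ignores the beam shape and clustering that the quoted analysis accounts for.

```python
import numpy as np

rng = np.random.default_rng(2)

# Power-law differential counts dN/dS ∝ S^(-gamma) between s_min and s_max (Jy);
# all values below are purely illustrative.
gamma, s_min, s_max = 2.0, 1e-7, 1e-3
mean_sources_per_beam = 20.0
n_beams = 50_000

def draw_flux(size):
    """Inverse-transform sampling of the power-law flux distribution."""
    u = rng.random(size)
    a = 1.0 - gamma
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

# Deflection in each beam = sum of the fluxes of the Poisson number of sources in it.
n_src = rng.poisson(mean_sources_per_beam, n_beams)
deflection = np.array([draw_flux(n).sum() for n in n_src])
deflection += rng.normal(0.0, 1e-6, n_beams)           # instrumental noise

hist, edges = np.histogram(deflection, bins=200)        # empirical P(D)
print("P(D) peak near", edges[np.argmax(hist)], "Jy/beam")
```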
The Borda count and agenda manipulation
Michael Dummett
1998-01-01
A standard objection to the Borda count, as an actual voting procedure, is that it is subject to agenda manipulation. The classical example is the introduction, in order to favour a candidate or option y, of a new option z ranked on every voter's preference scale immediately below y; y may as a result obtain the highest Borda count, although, if z had not been introduced, a different option would have done so. Strategic use of this device is not greatly to be feared, but it does point to a de...
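The agenda manipulation described here is easy to reproduce numerically: compute Borda scores before and after inserting a new option z immediately below y on every ballot. The profile below is a small made-up example, not one taken from the article.

```python
from itertools import chain

def borda_winner(profiles):
    """profiles: list of preference orders (best first); returns (winner, scores)."""
    options = set(chain.from_iterable(profiles))
    m = len(options)
    scores = {o: 0 for o in options}
    for ranking in profiles:
        for pos, option in enumerate(ranking):
            scores[option] += m - 1 - pos        # Borda points: m-1 for top, 0 for bottom
    return max(scores, key=scores.get), scores

# Six voters, three options: x wins the original election.
original = 3 * [["x", "w", "y"]] + 2 * [["y", "x", "w"]] + 1 * [["y", "w", "x"]]
print(borda_winner(original))       # x: 8, y: 6, w: 4

# Insert z immediately below y on every ballot: y now overtakes x.
manipulated = 3 * [["x", "w", "y", "z"]] + 2 * [["y", "z", "x", "w"]] + 1 * [["y", "z", "w", "x"]]
print(borda_winner(manipulated))    # y: 12, x: 11, w: 7, z: 6
```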
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied for finding the distribution functions of physical quantities. MENT naturally incorporates the maximum entropy requirement, the characteristics of the system, and the constraint conditions, and it can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
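A minimal sketch of the MENT idea, under the simplest possible constraint set: find the maximum-entropy distribution on a finite support subject to a prescribed mean. The solution is an exponential family whose Lagrange multiplier can be found by root finding; the support and target mean below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on {0,...,10} subject to a prescribed mean:
# p_i ∝ exp(-lam * x_i); the Lagrange multiplier lam is fixed by the constraint.
x = np.arange(0, 11)          # support (illustrative)
target_mean = 2.5             # constraint <x> = 2.5 (illustrative)

def mean_of(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return np.sum(p * x)

lam = brentq(lambda l: mean_of(l) - target_mean, -5.0, 5.0)
p = np.exp(-lam * x)
p /= p.sum()
print("lambda =", round(lam, 4), " entropy =", round(-(p * np.log(p)).sum(), 4))
```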
19 CFR 114.23 - Maximum period.
2010-04-01
19 CFR Customs Duties, Carnets, Processing of Carnets, § 114.23 Maximum period (2010-04-01). (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Szöcs, Eduard; Schäfer, Ralf B
2015-09-01
Ecotoxicologists often encounter count and proportion data that are rarely normally distributed. To meet the assumptions of the linear model, such data are usually transformed, or non-parametric methods are used if the transformed data still violate the assumptions. Generalized linear models (GLMs) allow such data to be modeled directly, without the need for transformation. Here, we compare the performance of (1) the linear model (assuming normality of transformed data), (2) GLMs (assuming a Poisson, negative binomial, or binomially distributed response), and (3) non-parametric methods. We simulated typical data mimicking low-replicated ecotoxicological experiments of two common data types (counts and proportions from counts). We compared the performance of the different methods in terms of statistical power and Type I error for detecting a general treatment effect and determining the lowest observed effect concentration (LOEC). In addition, we outlined differences on a real-world mesocosm data set. For count data, we found that the quasi-Poisson model yielded the highest power. The negative binomial GLM resulted in increased Type I errors, which could be fixed using the parametric bootstrap. For proportions, binomial GLMs performed better than the linear model, except to determine LOEC at extremely low sample sizes. The compared non-parametric methods had generally lower power. We recommend that counts in one-factorial experiments should be analyzed using quasi-Poisson models and proportions from counts by binomial GLMs. These methods should become standard in ecotoxicology.
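A minimal sketch of the quasi-Poisson approach recommended above, using statsmodels: fit an ordinary Poisson GLM and rescale the standard errors by the Pearson dispersion estimate. The simulated one-factorial design and the negative-binomial data generator are illustrative assumptions, not the authors' simulation setup.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical one-factorial layout: control + 3 concentrations, 5 replicates each,
# with overdispersed counts generated from a negative binomial response.
groups = np.repeat([0, 1, 2, 3], 5)
mu = np.array([40, 35, 20, 8])[groups]               # decreasing abundance with concentration
y = rng.negative_binomial(n=5, p=5.0 / (5.0 + mu))   # mean mu, variance > mu

# Design matrix: intercept plus treatment dummies.
X = np.column_stack([np.ones_like(groups), groups == 1, groups == 2, groups == 3]).astype(float)

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Quasi-Poisson adjustment: scale standard errors by the Pearson dispersion estimate.
dispersion = (poisson_fit.resid_pearson ** 2).sum() / poisson_fit.df_resid
quasi_se = poisson_fit.bse * np.sqrt(dispersion)
print("dispersion:", round(dispersion, 2))
print("Poisson SEs:      ", np.round(poisson_fit.bse, 3))
print("quasi-Poisson SEs:", np.round(quasi_se, 3))
```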
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from the skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of femur was considered as the maximum vertical distance between the upper end of the head of femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Reyes, Mayra I; Pérez, Cynthia M; Negrón, Edna L
2008-03-01
Consumers increasingly use bottled water and home water treatment systems to avoid direct tap water. According to the International Bottled Water Association (IBWA), an industry trade group, 5 billion gallons of bottled water were consumed by North Americans in 2001. The principal aim of this study was to assess the microbial quality of in-house and imported bottled water for human consumption, by measurement and comparison of the concentration of bacterial endotoxin and standard cultivable methods of indicator microorganisms, specifically, heterotrophic and fecal coliform plate counts. A total of 21 brands of commercial bottled water, consisting of 10 imported and 11 in-house brands, selected at random from 96 brands that are consumed in Puerto Rico, were tested at three different time intervals. The standard Limulus Amebocyte Lysate test, gel clot method, was used to measure the endotoxin concentrations. The minimum endotoxin concentration in 63 water samples was less than 0.0625 EU/mL, while the maximum was 32 EU/mL. The minimum bacterial count showed no growth, while the maximum was 7,500 CFU/mL. Bacterial isolates such as P. fluorescens, Corynebacterium sp. J-K, S. paucimobilis, P. versicularis, A. baumannii, P. chlororaphis, F. indologenes, A. faecalis and P. cepacia were identified. Repeated measures analysis of variance demonstrated that endotoxin concentration did not change over time, while there was a statistically significant change in bacterial count over time. In addition, multiple linear regression analysis demonstrated that a unit change in the concentration of endotoxin across time was associated with a significant change in the bacterial count. This analysis evidenced a significant time effect in the average log bacteriological cell count. Although bacterial growth was not detected in some water samples, endotoxin was present. Measurement of Gram-negative bacterial endotoxins is one of the methods that have been suggested as a rapid way of determining bacteriological water quality.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Jesser J. Marulanda-Durango
2012-12-01
In this paper, we present a methodology for estimating the parameters of a model for an electric arc furnace by using maximum likelihood estimation. Maximum likelihood estimation is one of the most widely employed methods for parameter estimation in practical settings. The model for the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, for solving a set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken at the furnace's most critical operating point. We show how the model for the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% in the current's root mean square value.
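Maximum likelihood estimation of model parameters from measured waveforms can be sketched generically: write down the negative log-likelihood of the residuals and minimize it numerically. The model below is a simple saturating exponential with Gaussian noise, chosen only to keep the example short; it is not the arc-furnace model discussed in the paper, and all parameter values are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Illustrative nonlinear model y = a * (1 - exp(-b * t)) + Gaussian noise.
t = np.linspace(0, 5, 200)
a_true, b_true, sigma_true = 3.0, 1.2, 0.15
y = a_true * (1 - np.exp(-b_true * t)) + rng.normal(0, sigma_true, t.size)

def neg_log_likelihood(theta):
    a, b, log_sigma = theta
    sigma = np.exp(log_sigma)                 # keeps sigma positive
    resid = y - a * (1 - np.exp(-b * t))
    return 0.5 * np.sum(resid**2 / sigma**2) + t.size * log_sigma

result = minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat, sigma_hat = result.x[0], result.x[1], np.exp(result.x[2])
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, sigma = {sigma_hat:.3f}")
```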
Domire, Zachary J; Challis, John H
2010-12-01
The maximum velocity of shortening of a muscle is an important parameter in musculoskeletal models. The most commonly used values are derived from animal studies; however, these values are well above those that have been reported for human muscle. The purpose of this study was to examine the sensitivity of simulations of maximum vertical jumping performance to the parameters describing the force-velocity properties of muscle. Simulations performed with parameters derived from animal studies were similar to measured jump heights from previous experimental studies, while simulations performed with parameters derived from human muscle produced jump heights much lower than previously measured. If current measurements of maximum shortening velocity in human muscle are correct, a compensating error must exist. Of the possible compensating errors that could produce this discrepancy, it was concluded that reduced muscle fibre excursion is the most likely candidate.
On the Performance of Maximum Likelihood Inverse Reinforcement Learning
Ratia, Héctor; Martinez-Cantin, Ruben
2012-01-01
Inverse reinforcement learning (IRL) addresses the problem of recovering a task description given a demonstration of the optimal policy used to solve such a task. The optimal policy is usually provided by an expert or teacher, making IRL specially suitable for the problem of apprenticeship learning. The task description is encoded in the form of a reward function of a Markov decision process (MDP). Several algorithms have been proposed to find the reward function corresponding to a set of demonstrations. One of the algorithms that has provided best results in different applications is a gradient method to optimize a policy squared error criterion. On a parallel line of research, other authors have presented recently a gradient approximation of the maximum likelihood estimate of the reward signal. In general, both approaches approximate the gradient estimate and the criteria at different stages to make the algorithm tractable and efficient. In this work, we provide a detailed description of the different metho...
Cosmology with velocity dispersion counts: an alternative to measuring cluster halo masses
Caldwell, C. E.; McCarthy, I. G.; Baldry, I. K.; Collins, C. A.; Schaye, J.; Bird, S.
2016-11-01
The evolution of galaxy cluster counts is a powerful probe of several fundamental cosmological parameters. A number of recent studies using this probe have claimed tension with the cosmology preferred by the analysis of the Planck primary cosmic microwave background (CMB) data, in the sense that there are fewer clusters observed than predicted based on the primary CMB cosmology. One possible resolution to this problem is systematic errors in the absolute halo mass calibration in cluster studies, which is required to convert the standard theoretical prediction (the halo mass function) into counts as a function of the observable (e.g. X-ray luminosity, Sunyaev-Zel'dovich flux, and optical richness). Here we propose an alternative strategy, which is to directly compare predicted and observed cluster counts as a function of the one-dimensional velocity dispersion of the cluster galaxies. We argue that the velocity dispersion of groups/clusters can be theoretically predicted as robustly as mass but, unlike mass, it can also be directly observed, thus circumventing the main systematic bias in traditional cluster counts studies. With the aid of the BAHAMAS suite of cosmological hydrodynamical simulations, we demonstrate the potential of the velocity dispersion counts for discriminating even similar Λ cold dark matter models. These predictions can be compared with the results from existing redshift surveys such as the highly complete Galaxy And Mass Assembly survey, and upcoming wide-field spectroscopic surveys such as the Wide Area VISTA Extragalactic Survey and the Dark Energy Spectroscopic Instrument.
Flexible models for spike count data with both over- and under- dispersion.
Stevenson, Ian H
2016-08-01
A key observation in systems neuroscience is that neural responses vary, even in controlled settings where stimuli are held constant. Many statistical models assume that trial-to-trial spike count variability is Poisson, but there is considerable evidence that neurons can be substantially more or less variable than Poisson depending on the stimuli, attentional state, and brain area. Here we examine a set of spike count models based on the Conway-Maxwell-Poisson (COM-Poisson) distribution that can flexibly account for both over- and under-dispersion in spike count data. We illustrate applications of this noise model for Bayesian estimation of tuning curves and peri-stimulus time histograms. We find that COM-Poisson models with group/observation-level dispersion, where spike count variability is a function of time or stimulus, produce more accurate descriptions of spike counts compared to Poisson models as well as negative-binomial models often used as alternatives. Since dispersion is one determinant of parameter standard errors, COM-Poisson models are also likely to yield more accurate model comparison. More generally, these methods provide a useful, model-based framework for inferring both the mean and variability of neural responses.
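The COM-Poisson distribution used here has pmf P(k) ∝ λ^k / (k!)^ν, so its dispersion is controlled by ν. A short sketch (truncated normalization, illustrative parameters) shows the variance-to-mean ratio moving above and below 1 as ν varies:

```python
import numpy as np

def com_poisson_pmf(lam, nu, k_max=200):
    """Conway-Maxwell-Poisson pmf P(k) ∝ lam^k / (k!)^nu, normalized by truncation at k_max."""
    k = np.arange(k_max + 1)
    log_factorial = np.cumsum(np.concatenate(([0.0], np.log(np.arange(1, k_max + 1)))))
    log_w = k * np.log(lam) - nu * log_factorial
    w = np.exp(log_w - log_w.max())          # numerically stabilized weights
    return k, w / w.sum()

for nu in (0.5, 1.0, 2.0):                   # nu < 1: over-dispersed, nu = 1: Poisson, nu > 1: under-dispersed
    k, p = com_poisson_pmf(lam=5.0, nu=nu)
    mean = np.sum(k * p)
    var = np.sum((k - mean) ** 2 * p)
    print(f"nu = {nu}: mean = {mean:.2f}, var/mean = {var / mean:.2f}")
```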
Artificial Intelligence as a Business Forecasting and Error Handling Tool
Md. Tabrez Quasim
2015-10-01
Any business enterprise relies heavily on how well it can predict future events. To cope with modern global customer demand, technological challenges, market competition, etc., any organization is compelled to foresee future developments with maximum impact and the least chance of error. The traditional forecasting approaches have some limitations, which is why the business world is adopting modern Artificial Intelligence based forecasting techniques. This paper presents different types of forecasting and AI techniques that are useful in business forecasting, and then discusses forecasting errors and the steps involved in planning an AI support system.
An analytical model of crater count equilibrium
Hirabayashi, Masatoshi; Minton, David A.; Fassett, Caleb I.
2017-06-01
Crater count equilibrium occurs when new craters form at the same rate that old craters are erased, such that the total number of observable impacts remains constant. Despite substantial efforts to understand this process, there remain many unsolved problems. Here, we propose an analytical model that describes how a heavily cratered surface reaches a state of crater count equilibrium. The proposed model formulates three physical processes contributing to crater count equilibrium: cookie-cutting (simple, geometric overlap), ejecta-blanketing, and sandblasting (diffusive erosion). These three processes are modeled using a degradation parameter that describes the efficiency for a new crater to erase old craters. The flexibility of our newly developed model allows us to represent the processes that underlie crater count equilibrium problems. The results show that when the slope of the production function is steeper than that of the equilibrium state, the power law of the equilibrium slope is independent of that of the production function slope. We apply our model to the cratering conditions in the Sinus Medii region and at the Apollo 15 landing site on the Moon and demonstrate that a consistent degradation parameterization can successfully be determined based on the empirical results of these regions. Further developments of this model will enable us to better understand the surface evolution of airless bodies due to impact bombardment.
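The cookie-cutting ingredient of the model can be mimicked with a crude Monte Carlo: emplace craters with power-law radii and erase any older crater whose center is covered by a new one, then watch the visible count saturate. This simplified erasure rule and all numerical values are illustrative assumptions, not the authors' degradation-parameter formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_radius(size, r_min=0.01, r_max=0.1, slope=3.0):
    """Power-law crater radii (illustrative values), drawn by inverse transform."""
    u = rng.random(size)
    a = 1.0 - slope
    return (r_min**a + u * (r_max**a - r_min**a)) ** (1.0 / a)

n_impacts = 20_000
centers = rng.random((n_impacts, 2))          # unit-square target surface
radii = sample_radius(n_impacts)

visible = np.empty((0, 2))                    # centers of still-visible craters
history = []
for c, r in zip(centers, radii):
    if len(visible):
        d2 = np.sum((visible - c) ** 2, axis=1)
        visible = visible[d2 > r**2]          # cookie-cutting: erase covered older craters
    visible = np.vstack([visible, c])
    history.append(len(visible))

for n in (2_000, 10_000, 20_000):             # the count flattens out: equilibrium
    print(f"visible craters after {n:6d} impacts: {history[n - 1]}")
```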
A multilevel analysis of intercompany claim counts
Antonio, K.; Frees, E.W.; Valdez, E.A.
2009-01-01
In this paper, we use multilevel models to analyze data on claim counts provided by the General Insurance Association of Singapore, an organization consisting of most of the general insurers in Singapore. Our data comes from the financial records of automobile insurance policies followed over a peri
Asynchronous ASCII Event Count Status Code
2012-03-01
IRIG Standard 215-12, Telecommunications and Timing Group: Asynchronous ASCII Event Count Status Codes. Provides systems engineers and equipment vendors with an Inter-range Instrumentation Group (IRIG) standard for American Standard Code for Information Interchange (ASCII)-formatted event count (EC) status transfer, which can be used over circuits and Ethernet networks.
Approximately Counting Embeddings into Random Graphs
Furer, Martin
2008-01-01
Let H be a graph, and let C(H,G) be the number of (subgraph isomorphic) copies of H contained in a graph G. We investigate the fundamental problem of estimating C(H,G). Previous results cover only a few specific instances of this general problem, for example, the case when H has degree at most one (monomer-dimer problem). In this paper, we present the first general subcase of the subgraph isomorphism counting problem which is almost always efficiently approximable. The results rely on a new graph decomposition technique. Informally, the decomposition is a labeling of the vertices generating a sequence of bipartite graphs. The decomposition permits us to break the problem of counting embeddings of large subgraphs into that of counting embeddings of small subgraphs. Using this method, we present a simple randomized algorithm for the counting problem. For all decomposable graphs H and all graphs G, the algorithm is an unbiased estimator. Furthermore, for all graphs H having a decomposition where each of the bipa...
Differential white cell count by centrifugal microfluidics.
Sommer, Gregory Jon; Tentori, Augusto M.; Schaff, Ulrich Y.
2010-07-01
We present a method for counting white blood cells that is uniquely compatible with centrifugation based microfluidics. Blood is deposited on top of one or more layers of density media within a microfluidic disk. Spinning the disk causes the cell populations within whole blood to settle through the media, reaching an equilibrium based on the density of each cell type. Separation and fluorescence measurement of cell types stained with a DNA dye is demonstrated using this technique. The integrated signal from bands of fluorescent microspheres is shown to be proportional to their initial concentration in suspension. Among the current generation of medical diagnostics are devices based on the principle of centrifuging a CD-sized disk functionalized with microfluidics. These portable 'lab on a disk' devices are capable of conducting multiple assays directly from a blood sample, embodied by platforms developed by Gyros, Samsung, and Abaxis. [1,2] However, no centrifugal platform to date includes a differential white blood cell count, which is an important metric complementary to diagnostic assays. Measuring the differential white blood cell count (the relative fraction of granulocytes, lymphocytes, and monocytes) is a standard medical diagnostic technique useful for identifying sepsis, leukemia, AIDS, radiation exposure, and a host of other conditions that affect the immune system. Several methods exist for measuring the relative white blood cell count including flow cytometry, electrical impedance, and visual identification from a stained drop of blood under a microscope. However, none of these methods is easily incorporated into a centrifugal microfluidic diagnostic platform.
Kids Count in Delaware: Fact Book, 1995.
Delaware Univ., Newark. Kids Count in Delaware.
This Kids Count fact book examines statewide trends in the well-being of Delaware's children. The statistical portrait is based on key indicators in four areas: single-parent families, births to teenage mothers, juvenile crime and violence, and education. Following brief sections on the state's demographics and economic status, the fact book…
ESL Proficiency and a Word Frequency Count.
Harlech-Jones, Brian
1983-01-01
In a study of the vocabulary proficiency of some South African ESL teacher trainees, the General Service List of English Words' validity was evaluated. It was found that mastery of this list would meet most of the vocabulary needs of the test group. Recommendations are made for practical uses of word counts. (MSE)
KIDS COUNT in Virginia, 2001 [Data Book].
Action Alliance for Virginia's Children and Youth, Richmond.
This Kids Count data book details statewide trends in the well-being of Virginia's children. The statistical portrait is based on the following four areas of children's well-being: health and safety; education; family; and economy. Key indicators examined are: (1) prenatal care; (2) low birth weight babies; (3) infant mortality; (4) child abuse or…
Renormalization of singular potentials and power counting
Long, B.; van Kolck, U.
2008-01-01
We use a toy model to illustrate how to build effective theories for singular potentials. We consider a central attractive 1/r^2 potential perturbed by a 1/r^4 correction. The power-counting rule, an important ingredient of effective theory, is established by seeking the minimum set of short-range
Reduced Component Count RGB LED Driver
De Pedro, I.; Ackermann, B.
2008-01-01
The goal of this master thesis is to develop new drive and control solutions for creating white light by mixing the light of different-color LEDs, aiming at a reduced component count resulting in less space required by the electronics and lower cost. It evaluates the LED driver concept proposed in
Single Entity Electrochemistry Progresses to Cell Counting.
Gooding, J Justin
2016-10-10
Red blood cells have been counted in an electrochemical collision experiment recently described by Compton and co-workers. As a cell collides with the electrode it lyses and a current is observed from the reduction of oxygen from within the cell.
Stalking the count. Dracula, Fandom and Tourism
S.L. Reijnders (Stijn)
2011-01-01
Large numbers of tourists travel to Transylvania every year, looking for traces of Count Dracula. This article investigates why people feel the need to connect fictional stories, such as Dracula, with identifiable physical locations, and why they subsequently want to visit these
Statistical tests to compare motif count exceptionalities
Vandewalle Vincent
2007-03-01
Background: Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculations for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one; a statistical test is required. Results: We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion: The exact binomial test is particularly adapted for small counts. For large counts, we advise using the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use.
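The exact binomial test mentioned in the Results can be sketched as follows: if the motif counts in the two sequences are Poisson with expected counts exp1 and exp2 under their respective composition models, then conditionally on the total count the count in sequence 1 is binomial, and "equal exceptionality" corresponds to success probability exp1/(exp1+exp2). The numbers below are made up, overlapping-motif corrections are ignored, and SciPy ≥ 1.7 is assumed for binomtest.

```python
from scipy.stats import binomtest

# Observed occurrences of a motif in two sequences, and the expected counts under
# each sequence's composition model (illustrative numbers).
obs1, obs2 = 43, 21
exp1, exp2 = 30.0, 25.0

# Conditional on the total, the count in sequence 1 is binomial with success
# probability exp1 / (exp1 + exp2) if the motif is equally exceptional in both
# sequences; a small p-value suggests different exceptionality.
p_null = exp1 / (exp1 + exp2)
result = binomtest(obs1, n=obs1 + obs2, p=p_null, alternative="two-sided")
print(f"p-value = {result.pvalue:.4f}")
```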
Adaptive and Approximate Orthogonal Range Counting
Chan, Timothy M.; Wilkinson, Bryan Thomas
2013-01-01
, we consider the 1-D range selection problem, where a query in an array involves finding the kth least element in a given subarray. This problem is closely related to 2-D 3-sided orthogonal range counting. Recently, Jørgensen and Larsen [SODA 2011] presented a linear-space adaptive data structure...
Health Advocacy--Counting the Costs
Dyall, Lorna; Marama, Maria
2010-01-01
Access to, and delivery of, safe and culturally appropriate health services is increasingly important in New Zealand. This paper will focus on counting the costs of health advocacy through the experience of a small non government charitable organisation, the Health Advocates Trust, (HAT) which aimed to provide advocacy services for a wide range of…
Fast box-counting algorithm on GPU.
Jiménez, J; Ruiz de Miras, J
2012-12-01
The box-counting algorithm is one of the most widely used methods for calculating the fractal dimension (FD). The FD has many image analysis applications in the biomedical field, where it has been used extensively to characterize a wide range of medical signals. However, computing the FD for large images, especially in 3D, is a time consuming process. In this paper we present a fast parallel version of the box-counting algorithm, which has been coded in CUDA for execution on the Graphic Processing Unit (GPU). The optimized GPU implementation achieved an average speedup of 28 times (28×) compared to a mono-threaded CPU implementation, and an average speedup of 7 times (7×) compared to a multi-threaded CPU implementation. The performance of our improved box-counting algorithm has been tested with 3D models with different complexity, features and sizes. The validity and accuracy of the algorithm has been confirmed using models with well-known FD values. As a case study, a 3D FD analysis of several brain tissues has been performed using our GPU box-counting algorithm.
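A serial (CPU, NumPy) version of the box-counting estimate is only a few lines, which also makes clear why large 3D volumes motivate the GPU implementation described above. The example image and box sizes below are illustrative only.

```python
import numpy as np

def box_count_dimension(image, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2-D binary image by box counting:
    count boxes of side s containing any foreground pixel, then fit
    log(count) against log(1/s)."""
    counts = []
    for s in sizes:
        h = image.shape[0] // s * s
        w = image.shape[1] // s * s
        blocks = image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a filled square (expected dimension ~2).
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
print("estimated FD:", round(box_count_dimension(img), 2))
```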