WorldWideScience

Sample records for count robust estimates

  1. A robust method for estimating motorbike count based on visual information learning

    Science.gov (United States)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, the problem of separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem has gradually shifted toward drawing statistical inferences about target object density from shape [4], local features [5], etc. These studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations, and that achieves high accuracy in the presence of occlusions. First, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from those local features. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. Experimental results show that our proposed method achieves better accuracy than comparable methods.
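
    The pipeline this abstract outlines (local descriptors, visual vocabulary, global histogram, learned regressor) can be sketched end-to-end. The sketch below substitutes synthetic 64-d local descriptors for SURF keypoints (SURF lives in OpenCV's non-free contrib module) and uses scikit-learn's KMeans and SVR; every name and parameter here is illustrative, not the authors'.

```python
# Sketch of a bag-of-words counting pipeline: synthetic local descriptors
# stand in for SURF keypoints; KMeans builds the visual vocabulary and
# SVR learns the mapping from global histograms to counts.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def make_image_descriptors(n_bikes):
    # Each "motorbike" contributes a handful of local descriptors drawn
    # from a bike-like cluster; the background adds a few random ones.
    parts = [rng.normal(loc=1.0, scale=0.1, size=(8, 64)) for _ in range(n_bikes)]
    parts.append(rng.normal(loc=0.0, scale=0.1, size=(4, 64)))
    return np.vstack(parts)

counts = rng.integers(0, 10, size=60)
images = [make_image_descriptors(c) for c in counts]

# 1) Build a visual vocabulary from all local descriptors.
vocab = KMeans(n_clusters=16, n_init=5, random_state=0).fit(np.vstack(images))

# 2) One global feature per image: normalized histogram of visual words.
def bow_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in images])

# 3) Learn the mapping from global features to the motorbike count.
model = SVR(kernel="rbf", C=10.0).fit(X, counts)
pred = model.predict(X)
```

    With real images, `make_image_descriptors` would be replaced by SURF extraction on each frame; the rest of the pipeline is unchanged.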

  2. Robust small area prediction for counts.

    Science.gov (United States)

    Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray

    2015-06-01

    A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches. © The Author(s) 2014.

  3. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important, as well as interesting, to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, along with its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and tries to remove commonly held misunderstandings about the relation between the influence function and qualitative robustness, citing examples from the literature and providing a new counter-example. Finally, it introduces a useful finite-sample and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.
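
    A concrete toy version of one concept above: the empirical sensitivity curve (a finite-sample stand-in for the influence function) of the mean grows without bound in the contaminating value x, while that of the median stays bounded. This is a generic illustration, not the paper's QRI.

```python
# Sensitivity curve: scaled change in an estimator T when one extra
# observation x is appended to a fixed sample of size n-1.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(size=99)

def sensitivity_curve(estimator, sample, x):
    n = len(sample) + 1
    return n * (estimator(np.append(sample, x)) - estimator(sample))

xs = np.array([1.0, 10.0, 100.0, 1000.0])
sc_mean = np.array([sensitivity_curve(np.mean, sample, x) for x in xs])
sc_median = np.array([sensitivity_curve(np.median, sample, x) for x in xs])
# sc_mean grows roughly like x itself; sc_median saturates once x clears
# the central order statistics -- the bounded-influence behaviour that
# qualitative robustness formalizes.
```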

  4. Robust Optical Flow Estimation

    Directory of Open Access Journals (Sweden)

    Javier Sánchez Pérez

    2013-10-01

    Full Text Available In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields, and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.

  5. Robust Wave Resource Estimation

    DEFF Research Database (Denmark)

    Lavelle, John; Kofoed, Jens Peter

    2013-01-01

    … density estimates of the PDF as a function both of Hm0 and Tp, and of Hm0 and T0;2, together with the mean wave power per unit crest length, Pw, as a function of Hm0 and T0;2. The wave elevation parameters, from which the wave parameters are calculated, are filtered to correct or remove spurious data. An overview is given of the methods used to do this, and a method for identifying outliers in the wave elevation data, based on the joint distribution of wave elevations and accelerations, is presented. The limitations of using a JONSWAP spectrum to model the measured wave spectra as a function of Hm0 and T0;2 or Hm0 and Tp for the Hanstholm site data are demonstrated. As an alternative, the non-parametric loess method, which does not rely on any assumptions about the shape of the wave elevation spectra, is used to accurately estimate Pw as a function of Hm0 and T0;2. …
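
    For reference, the mean wave power per unit crest length Pw mentioned above is commonly computed in deep water from the significant wave height and the energy period. The snippet uses the textbook deep-water expression, which is not necessarily the exact estimator applied to the Hanstholm data.

```python
# Deep-water wave power per unit crest length:
#   Pw = rho * g^2 * Hm0^2 * Te / (64 * pi)
# with rho the water density, g gravity, Hm0 the significant wave height
# and Te the energy period.
import math

def wave_power(hm0, te, rho=1025.0, g=9.81):
    """Wave power per unit crest length in W/m (deep water)."""
    return rho * g**2 * hm0**2 * te / (64.0 * math.pi)

# Example: Hm0 = 2 m, Te = 7 s gives on the order of 14 kW per metre of
# wave crest.
p = wave_power(2.0, 7.0)
```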

  6. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  7. Robust AIC with High Breakdown Scale Estimate

    Directory of Open Access Journals (Sweden)

    Shokrya Saleh

    2014-01-01

    Full Text Available The Akaike Information Criterion (AIC) based on least squares (LS) regression minimizes the sum of squared residuals, but LS is sensitive to outlier observations. Alternative criteria that are less sensitive to outlying observations have been proposed; examples are robust AIC (RAIC), robust Mallows Cp (RCp), and robust Bayesian information criterion (RBIC). In this paper, we propose a robust AIC by replacing the scale estimate with a high breakdown point estimate of scale. The robustness of the proposed method is studied through its influence function. We show, through simulated and real data examples, that the proposed robust AIC is effective in selecting accurate models in the presence of outliers and high leverage points.
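
    The substitution the abstract describes can be sketched directly: compute a Gaussian-likelihood AIC with the LS residual scale replaced by a robust scale estimate. The paper's high-breakdown scale estimator may differ; here the normalized MAD of residuals from a Huber regression stands in, and all data are synthetic.

```python
# AIC with a plugged-in scale: n*log(scale^2) + 2k. Replacing the LS
# scale with a robust one keeps model scores meaningful under outliers.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(2)
n = 100
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=n)
y[:5] += 20.0  # a few gross outliers

def aic_from_scale(scale, n, k):
    # Gaussian log-likelihood based AIC with a plugged-in scale estimate.
    return n * np.log(scale**2) + 2 * k

k = 3  # intercept + 2 slopes
res_ls = y - LinearRegression().fit(X, y).predict(X)
aic_ls = aic_from_scale(np.sqrt(np.mean(res_ls**2)), n, k)

res_rob = y - HuberRegressor().fit(X, y).predict(X)
mad_scale = 1.4826 * np.median(np.abs(res_rob - np.median(res_rob)))
aic_rob = aic_from_scale(mad_scale, n, k)
```

    The LS scale is inflated by the outliers, so `aic_ls` penalizes this (correct) model heavily, while the MAD-based scale stays near the true noise level.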

  8. Robust motion estimation using connected operators

    OpenAIRE

    Salembier Clairon, Philippe Jean; Sanson, H

    1997-01-01

    This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion, and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied to the entire frame or to individual connected c...

  9. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
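
    The paper's two nested optimization levels are involved; the benefit of robustifying ODE parameter estimation can be seen with a much smaller stand-in, which is not the authors' penalized-smoothing method: fit the decay rate of y' = -θy to outlier-contaminated data with SciPy's soft_l1 robust loss versus plain least squares.

```python
# Estimate theta in y' = -theta * y from noisy observations of y, with
# and without a robust loss on the residuals.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = np.linspace(0, 4, 40)
theta_true = 0.8
y_obs = np.exp(-theta_true * t) + rng.normal(scale=0.02, size=t.size)
y_obs[[5, 20]] += 1.5  # two gross outliers

def residuals(params):
    # Integrate the ODE for the candidate parameter and compare to data.
    sol = solve_ivp(lambda s, y: -params[0] * y, (t[0], t[-1]), [1.0], t_eval=t)
    return sol.y[0] - y_obs

fit_ls = least_squares(residuals, x0=[0.3])                       # plain LS
fit_rob = least_squares(residuals, x0=[0.3], loss="soft_l1", f_scale=0.05)
```

    The robust fit recovers θ close to 0.8, while the plain least-squares fit is visibly dragged by the two outliers.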

  10. Robust power spectral estimation for EEG data.

    Science.gov (United States)

    Melman, Tamar; Victor, Jonathan D

    2016-08-01

    Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. Copyright © 2016 Elsevier B.V. All rights reserved.
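
    The core substitution described above, a quantile-based combination in place of the final mean across eigenspectra, can be sketched with SciPy's DPSS tapers. The Bayesian confidence intervals and the bias correction of the actual method are omitted; this is only the flavor of the approach, not the Chronux implementation.

```python
# Multitaper spectrum of a 10 Hz sinusoid in noise: per-taper
# eigenspectra combined by the usual mean and by a median (a simple
# quantile-based estimator).
import numpy as np
from scipy.signal.windows import dpss

fs = 250.0
n = 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(4).normal(size=n)

tapers = dpss(n, NW=3, Kmax=5)           # 5 orthogonal DPSS tapers
eigen = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / (fs * n)

spec_mean = eigen.mean(axis=0)           # standard multitaper estimate
spec_median = np.median(eigen, axis=0)   # quantile-based combination
freqs = np.fft.rfftfreq(n, d=1 / fs)
```

    On clean data the two estimates agree near the spectral peak; the point of the quantile version is that a taper (or, in the paper, a data segment) hit by a large artifact no longer dominates the combined estimate.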

  11. Robust bearing estimation for 3-component stations

    International Nuclear Information System (INIS)

    CLAASSEN, JOHN P.

    2000-01-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.

  12. Robust median estimator in logistic regression

    Czech Academy of Sciences Publication Activity Database

    Hobza, T.; Pardo, L.; Vajda, Igor

    2008-01-01

    Roč. 138, č. 12 (2008), s. 3822-3840 ISSN 0378-3758 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MPO FI - IM3/136; GA MŠk(CZ) MTM 2006-06872 Institutional research plan: CEZ:AV0Z10750506 Keywords : Logistic regression * Median * Robustness * Consistency and asymptotic normality * Morgenthaler * Bianco and Yohai * Croux and Haesbroeck Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.679, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/vajda-robust%20median%20estimator%20in%20logistic%20regression.pdf

  13. Heteroscedasticity resistant robust covariance matrix estimator

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2010-01-01

    Roč. 17, č. 27 (2010), s. 33-49 ISSN 1212-074X Grant - others:GA UK(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10750506 Keywords : Regression * Covariance matrix * Heteroscedasticity * Resistant Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/SI/visek-heteroscedasticity resistant robust covariance matrix estimator.pdf

  14. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness of the human visual system in recovering motion estimates in almost any visual situation is enviable: it performs enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows templates from nature in the design of its architecture and makes use of a specific model of human visual motion perception: the Multichannel Gradient Model (McGM). This novel customizable architecture for neuromorphic robust optical flow can be constructed with an FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes resource usage and performance data, and a comparison with existing systems. The hardware has many application fields, such as object recognition, navigation, or tracking in difficult environments, owing to its bioinspired nature and robustness properties.

  15. Robust Optical Richness Estimation with Reduced Scatter

    Energy Technology Data Exchange (ETDEWEB)

    Rykoff, E.S.; /LBL, Berkeley; Koester, B.P.; /Chicago U. /Chicago U., KICP; Rozo, E.; /Chicago U. /Chicago U., KICP; Annis, J.; /Fermilab; Evrard, A.E.; /Michigan U. /Michigan U., MCTP; Hansen, S.M.; /Lick Observ.; Hao, J.; /Fermilab; Johnston, D.E.; /Fermilab; McKay, T.A.; /Michigan U. /Michigan U., MCTP; Wechsler, R.H.; /KIPAC, Menlo Park /SLAC

    2012-06-07

    Reducing the scatter between cluster mass and optical richness is a key goal for cluster cosmology from photometric catalogs. We consider various modifications to the red-sequence matched filter richness estimator of Rozo et al. (2009b), and evaluate their impact on the scatter in X-ray luminosity at fixed richness. Most significantly, we find that deeper luminosity cuts can reduce the recovered scatter, finding that σ_{ln Lx|λ} = 0.63 ± 0.02 for clusters with M_500c ≳ 1.6 × 10^14 h_70^-1 M_⊙. The corresponding scatter in mass at fixed richness is σ_{ln M|λ} ≈ 0.2-0.3 depending on the richness, comparable to that for total X-ray luminosity. We find that including blue galaxies in the richness estimate increases the scatter, as does weighting galaxies by their optical luminosity. We further demonstrate that our richness estimator is very robust. Specifically, the filter employed when estimating richness can be calibrated directly from the data, without requiring a priori calibrations of the red sequence. We also demonstrate that the recovered richness is robust to up to 50% uncertainties in the galaxy background, as well as to the choice of photometric filter employed, so long as the filters span the 4000 Å break of red-sequence galaxies. Consequently, our richness estimator can be used to compare richness estimates of different clusters, even if they do not share the same photometric data. Appendix A includes 'easy-bake' instructions for implementing our optimal richness estimator, and we are releasing an implementation of the code that works with SDSS data, as well as an augmented maxBCG catalog with the λ richness measured for each cluster.

  16. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    … of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs under the constraint of integer harmonics. The MVDR filter is designed based on noise statistics, making it robust …
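
    The MVDR design mentioned above solves min wᴴRw subject to wᴴa = 1, where R is the noise covariance and a the steering vector, with the closed form w = R⁻¹a / (aᴴR⁻¹a). Below is a generic single-sinusoid sketch, not the paper's full harmonic setup; the toy covariance is an assumption.

```python
# Closed-form MVDR weights: minimum output variance subject to a
# distortionless response in the target direction/frequency.
import numpy as np

def mvdr_weights(R, a):
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

m = 8                                    # filter length
omega = 2 * np.pi * 0.1                  # target frequency (rad/sample)
a = np.exp(1j * omega * np.arange(m))    # steering vector
R = np.eye(m) + 0.5 * np.ones((m, m))    # toy noise covariance (assumed)
w = mvdr_weights(R, a)

distortionless = w.conj() @ a            # equals 1 by construction
```

    By optimality, the MVDR weights never have larger output noise variance wᴴRw than the plain averaging weights a/m, which satisfy the same unit-gain constraint.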

  17. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
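
    Tukey's half-space depth, used above to pick robust parameter vectors, can be approximated by random projections: the depth of a point x in a cloud is the minimum, over directions, of the fraction of cloud points on one side of the hyperplane through x. The sketch below is a generic approximation on synthetic 2-d "parameter vectors", not the study's implementation.

```python
# Approximate Tukey (half-space) depth via random projection directions.
import numpy as np

rng = np.random.default_rng(5)

def tukey_depth(x, cloud, n_dirs=1000):
    u = rng.normal(size=(n_dirs, cloud.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # random unit directions
    proj = (cloud - x) @ u.T          # signed side of each hyperplane
    return (proj >= 0).mean(axis=0).min()

params = rng.normal(size=(200, 2))    # stand-in "parameter vectors"
deep = tukey_depth(params.mean(axis=0), params)      # central point
shallow = tukey_depth(np.array([10.0, 10.0]), params)  # extreme point
```

    Ranking candidate parameter vectors by depth and keeping the deepest ones is the geometric selection idea the abstract describes: a deep vector is well surrounded by other well-performing vectors, hence less sensitive.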

  18. a comparative study of some robust ridge and liu estimators

    African Journals Online (AJOL)

    Dr A.B.Ahmed

    … estimation techniques such as Ridge and Liu estimators are preferable to ordinary least squares. On the other hand, when outliers exist in the data, robust estimators like M, MM, LTS and S estimators are preferred. To handle these two problems jointly, the study combines the Ridge and Liu estimators with robust …

  19. Robust position estimation of a mobile vehicle

    International Nuclear Information System (INIS)

    Conan, V.

    1994-01-01

    The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometer information and viewpoint, robust to noisy data and spurious points and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial; it runs in O(m^4 n^4), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behaviour even when the scene is very occluded. (authors). 16 refs., 3 figs., 1 tab

  20. Robust estimation of seismic coda shape

    Science.gov (United States)

    Nikkilä, Mikko; Polishchuk, Valentin; Krasnoshchekov, Dmitry

    2014-04-01

    We present a new method for estimation of seismic coda shape. It falls into the same class of methods as non-parametric shape reconstruction with the use of neural network techniques, where data are split into training and validation sets. We particularly pursue the well-known problem of image reconstruction, formulated in this case as shape isolation in the presence of broadly defined noise. This combined approach is enabled by an intrinsic feature of a seismogram, which can be divided objectively into pre-signal seismic noise lacking the target shape, and the remainder, which contains scattered waveforms compounding the coda shape. In short, we separately apply the shape restoration procedure to the pre-signal seismic noise and to the event record, which provides successful delineation of the coda shape in the form of a smooth, almost non-oscillating function of time. The new algorithm uses a recently developed generalization of the classical computational-geometry tool of the α-shape. The generalization essentially yields robust shape estimation by locally ignoring a number of points treated as extreme values, noise or non-relevant data. Our algorithm is conceptually simple and enables a desired or pre-determined level of shape detail, constrainable by arbitrary data-fit criteria. The proposed tool for coda shape delineation provides an alternative to moving averaging and other smoothing techniques frequently used for this purpose. The new algorithm is illustrated with an application to the problem of estimating the coda duration after a local event. The obtained relation coefficient between coda duration and epicentral distance is consistent with earlier findings in the region of interest.

  1. Reproducibility of Manual Platelet Estimation Following Automated Low Platelet Counts

    Directory of Open Access Journals (Sweden)

    Zainab S Al-Hosni

    2016-11-01

    Full Text Available Objectives: Manual platelet estimation is one of the methods used when automated platelet estimates are very low. However, the reproducibility of manual platelet estimation has not been adequately studied. We sought to assess the reproducibility of manual platelet estimation following automated low platelet counts and to evaluate the impact of the level of experience of the person counting on the reproducibility of manual platelet estimates. Methods: In this cross-sectional study, peripheral blood films of patients with platelet counts less than 100 × 10^9/L were retrieved and given to four raters to perform manual platelet estimation independently using a predefined method (average of platelet counts in 10 fields using a 100× objective, multiplied by 20). Data were analyzed using the intraclass correlation coefficient (ICC) as a method of reproducibility assessment. Results: The ICC across the four raters was 0.840, indicating excellent agreement. The median difference between the two most experienced raters was 0 (range: -64 to 78). The level of the platelet estimate by the least-experienced rater predicted the disagreement (p = 0.037). When assessing the difference between pairs of raters, there was no significant difference in the ICC (p = 0.420). Conclusions: The agreement between different raters using manual platelet estimation was excellent. Further confirmation is necessary, with a prospective study using a gold standard method of platelet counts.
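
    The estimation rule and the agreement statistic above are both easy to make concrete. The estimate is the mean platelet count over 10 oil-immersion fields times 20; for the ICC, the one-way random-effects form ICC(1,1) is shown as one common variant (the abstract does not state which ICC form was used).

```python
# Manual platelet estimate and a one-way random-effects ICC(1,1).
import numpy as np

def manual_platelet_estimate(field_counts):
    """field_counts: platelets seen in each of 10 fields at 100x.
    Returns the estimate in x10^9/L."""
    return np.mean(field_counts) * 20

def icc_oneway(ratings):
    """ratings: (n_subjects, k_raters) array -> ICC(1,1)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    msb = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# 10 field counts averaging 3 platelets per field -> estimate of 60.
est = manual_platelet_estimate([3, 2, 4, 3, 2, 3, 4, 2, 3, 4])
```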

  2. It counts who counts: an experimental evaluation of the importance of observer effects on spotlight count estimates

    DEFF Research Database (Denmark)

    Sunde, Peter; Jessen, Lonnie

    2013-01-01

    … observers with respect to their ability to detect and estimate distance to realistic animal silhouettes at different distances. Detection probabilities were higher for observers experienced in spotlighting mammals than for inexperienced observers, higher for observers with a hunting background compared with non-hunters, and decreased as a function of age, but were independent of sex or educational background. If observer-specific detection probabilities were applied to real counting routes, point count estimates from inexperienced observers without a hunting background would only be 43 % (95 % CI, 39 …

  3. Pedestrian count estimation using texture feature with spatial distribution

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2016-12-01

    Full Text Available We present a novel pedestrian count estimation approach based on global image descriptors formed from multi-scale texture features that consider spatial distribution. For regions of interest, local texture features are represented by histograms of multi-scale block local binary patterns, which jointly constitute the feature vector of the whole image. To achieve effective estimation of the pedestrian count, principal component analysis is used to reduce the dimension of the global representation features, and a fitting model between global image features and pedestrian count is constructed via support vector regression. Experimental results show that the proposed method exhibits high accuracy on pedestrian count estimation and applies well in real-world settings.
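
    The texture-histogram → PCA → SVR chain above can be sketched on synthetic scenes. The sketch uses a minimal single-scale, non-interpolated 8-neighbour LBP in place of the paper's multi-scale block LBP, and textured patches as stand-in "pedestrians"; everything here is illustrative.

```python
# LBP histogram features -> PCA -> SVR count regression on toy scenes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(6)

def lbp_histogram(img):
    # Minimal 8-neighbour local binary pattern (no interpolation),
    # returning a normalized 256-bin histogram of the codes.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def synth_scene(count):
    # Noise background plus `count` small textured patches.
    img = rng.normal(size=(64, 64)) * 0.05
    for _ in range(count):
        y, x = rng.integers(0, 56, size=2)
        img[y:y + 8, x:x + 8] += np.add.outer(np.arange(8) % 2, np.arange(8) % 2) * 1.0
    return img

counts = rng.integers(0, 15, size=80)
X = np.array([lbp_histogram(synth_scene(c)) for c in counts])

feats = PCA(n_components=10, random_state=0).fit_transform(X)
model = SVR(C=50.0).fit(feats, counts)
pred = model.predict(feats)
```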

  4. Introduction to Robust Estimation and Hypothesis Testing

    CERN Document Server

    Wilcox, Rand R

    2012-01-01

    This revised book provides a thorough explanation of the foundation of robust methods, incorporating the latest updates on R and S-Plus, robust ANOVA (Analysis of Variance) and regression. It guides advanced students and other professionals through the basic strategies used for developing practical solutions to problems, and provides a brief background on the foundations of modern methods, placing the new methods in historical context. Author Rand Wilcox includes chapter exercises and many real-world examples that illustrate how various methods perform in different situations.Introduction to R

  5. Second order statistics of bilinear forms of robust scatter estimators

    KAUST Repository

    Kammoun, Abla

    2015-08-12

    This paper lies in the lineage of recent works studying the asymptotic behaviour of robust-scatter estimators in the case where the number of observations and the dimension of the population covariance matrix grow at infinity with the same pace. In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved generalized likelihood ratio based detector which combines robustness to impulsive observations and optimality across the shrinkage parameter, the optimality being considered for the false alarm regulation.
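
    The "robust shrinkage estimator" analyzed above is, in this literature, typically a regularized Tyler fixed point (of the Pascal / Chen-Wiesel-Hero type). A real-valued sketch follows; ρ ∈ (0, 1] is the shrinkage parameter, and the data and tolerances are illustrative.

```python
# Regularized Tyler fixed-point iteration with trace normalization.
# Because each sample enters only through its direction, per-sample
# impulsive scalings do not corrupt the shape estimate.
import numpy as np

def regularized_tyler(X, rho, n_iter=50):
    """X: (n_samples, p). Returns a trace-normalized shrunk Tyler estimate."""
    n, p = X.shape
    R = np.eye(p)
    for _ in range(n_iter):
        q = np.einsum("ij,jk,ik->i", X, np.linalg.inv(R), X)  # x_i^T R^-1 x_i
        S = (X.T / q) @ X * (p / n)
        R = (1 - rho) * S + rho * np.eye(p)
        R *= p / np.trace(R)     # fix the scale ambiguity
    return R

rng = np.random.default_rng(7)
C = np.array([[2.0, 0.8], [0.8, 1.0]])
Z = rng.multivariate_normal(np.zeros(2), C, size=500)
Z[:25] *= 50.0                   # impulsive observations (heavy tails)

R_hat = regularized_tyler(Z, rho=0.1)
```

    The sample covariance of `Z` would be dominated by the 25 scaled samples; the Tyler iteration recovers the shape of C (up to trace normalization) regardless.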

  6. Robust Covariance Estimators Based on Information Divergences and Riemannian Manifold

    Directory of Open Access Journals (Sweden)

    Xiaoqiang Hua

    2018-03-01

    Full Text Available This paper proposes a class of covariance estimators based on information divergences in heterogeneous environments. In particular, the problem of covariance estimation is reformulated on the Riemannian manifold of Hermitian positive-definite (HPD matrices. The means associated with information divergences are derived and used as the estimators. Without resorting to the complete knowledge of the probability distribution of the sample data, the geometry of the Riemannian manifold of HPD matrices is considered in mean estimators. Moreover, the robustness of mean estimators is analyzed using the influence function. Simulation results indicate the robustness and superiority of an adaptive normalized matched filter with our proposed estimators compared with the existing alternatives.
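
    The divergence-based means above live on the manifold of HPD matrices. The log-Euclidean mean, one common Riemannian-flavored choice (not necessarily the paper's exact divergence mean), is easy to sketch:

```python
# Log-Euclidean mean of positive-definite matrices: average in the
# matrix-logarithm domain, then map back with the matrix exponential.
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(mats):
    return expm(sum(logm(m) for m in mats) / len(mats))

A = np.diag([1.0, 4.0])
B = np.diag([4.0, 1.0])
M = log_euclidean_mean([A, B])   # diagonal entries are geometric means
```

    Unlike the arithmetic mean diag(2.5, 2.5), the log-Euclidean mean of these two matrices is diag(2, 2): eigenvalues are averaged geometrically, which is what makes such means natural for covariance/scatter matrices.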

  7. Trainable estimators for indirect people counting : a comparative study

    NARCIS (Netherlands)

    Acampora, G.; Loia, V.; Percannella, G.; Vento, M.

    2011-01-01

    Estimating the number of people in a scene is a very relevant issue due to the possibility of using it in a large number of contexts where it is necessary to automatically monitor an area for security/safety reasons, for economic purposes, etc. The large number of people counting approaches

  8. Second order statistics of bilinear forms of robust scatter estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frédéric

    2015-01-01

    … In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved …

  9. A comparative study of some robust ridge and liu estimators ...

    African Journals Online (AJOL)

    In multiple linear regression analysis, multicollinearity and outliers are two main problems. When multicollinearity exists, biased estimation techniques such as Ridge and Liu Estimators are preferable to Ordinary Least Square. On the other hand, when outliers exist in the data, robust estimators like M, MM, LTS and S ...
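
    One readily available instance of the robust-plus-ridge combination the abstract describes is scikit-learn's HuberRegressor, whose `alpha` parameter is an L2 (ridge) penalty on the coefficients, so a single fit addresses outliers and multicollinearity at once. The data below are synthetic and the setup illustrative, not the study's estimators.

```python
# Huber loss (robust to vertical outliers) + L2 penalty (stabilizes
# nearly collinear predictors) in one estimator.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(10)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # nearly collinear predictor
X = np.column_stack([x1, x2])
y = 3 * x1 + 3 * x2 + rng.normal(scale=0.5, size=n)
y[:10] += 30.0                                 # vertical outliers

model = HuberRegressor(alpha=1.0).fit(X, y)    # ridge-penalized Huber fit
```

    The individual coefficients are not identifiable under near-collinearity, but their sum (the identifiable combination) stays close to the true value of 6 despite the outliers.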

  10. Robust Parameter and Signal Estimation in Induction Motors

    DEFF Research Database (Denmark)

    Børsting, H.

    This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters in nonlinear systems have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...

  11. On robust parameter estimation in brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

    2017-12-01

    Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.

  12. Evaluation of Robust Estimators Applied to Fluorescence Assays

    Directory of Open Access Journals (Sweden)

    U. Ruotsalainen

    2007-12-01

    Full Text Available We evaluated standard robust methods for estimating the fluorescence signal in novel assays used for determining biomolecule concentrations. The objective was to obtain an accurate and reliable estimate using as few observations as possible by decreasing the influence of outliers. We assumed the true signals to have a Gaussian distribution, while no assumptions about the outliers were made. The experimental results showed that the arithmetic mean performs poorly even with modest deviations. Furthermore, the robust methods, especially the M-estimators, performed extremely well. The results prove that the use of robust methods is advantageous in estimation problems where noise and deviations are significant, such as in biological and medical applications.
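
The contrast the abstract draws between the arithmetic mean and M-estimators can be sketched in a few lines. This is an illustration of the general idea (a Huber M-estimate of location), not the authors' code; the data values are made up.

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted averaging."""
    x = np.asarray(x, float)
    mu = np.median(x)                                   # robust starting point
    scale = 1.4826 * np.median(np.abs(x - mu)) + 1e-12  # MAD scale estimate
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.ones_like(r)
        big = np.abs(r) > k
        w[big] = k / np.abs(r[big])                     # downweight large residuals
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Hypothetical fluorescence readings: true signal ~ N(100, 1) plus two gross outliers
rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(100.0, 1.0, 20), [500.0, 650.0]])
```

Here `np.mean(obs)` is dragged far above 100 by the two outliers, while the Huber estimate stays close to the true signal level.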

  13. Experimental estimation of snare detectability for robust threat monitoring.

    Science.gov (United States)

    O'Kelly, Hannah J; Rowcliffe, J Marcus; Durant, Sarah; Milner-Gulland, E J

    2018-02-01

    Hunting with wire snares is rife within many tropical forest systems, and constitutes one of the severest threats to a wide range of vertebrate taxa. As for all threats, reliable monitoring of snaring levels is critical for assessing the relative effectiveness of management interventions. However, snares pose a particular challenge in terms of tracking spatial or temporal trends in their prevalence because they are extremely difficult to detect, and are typically spread across large, inaccessible areas. As with cryptic animal targets, any approach used to monitor snaring levels must address the issue of imperfect detection, but no standard method exists to do so. We carried out a field experiment in Keo Seima Wildlife Reserve in eastern Cambodia with the following objectives: (1) To estimate the detection probability of wire snares within a tropical forest context, and to investigate how detectability might be affected by habitat type, snare type, or observer. (2) To trial two sets of sampling protocols feasible to implement in a range of challenging field conditions. (3) To conduct a preliminary assessment of two potential analytical approaches to dealing with the resulting snare encounter data. We found that although different observers had no discernible effect on detection probability, detectability did vary between habitat types and snare types. We contend that simple repeated counts carried out at multiple sites and analyzed using binomial mixture models could represent a practical yet robust solution to the problem of monitoring snaring levels both inside and outside of protected areas. This experiment represents an important first step in developing improved methods of threat monitoring, and such methods are greatly needed in Southeast Asia, as well as in many other regions.

  14. Generalized estimators of avian abundance from count survey data

    Directory of Open Access Journals (Sweden)

    Royle, J. A.

    2004-01-01

    Full Text Available I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
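
The hierarchical view described above can be illustrated with a basic N-mixture likelihood: latent local abundance N_i ~ Poisson(λ), observed point count y_i ~ Binomial(N_i, p), with the latent N_i summed out. This is our illustration of the general idea, not the paper's full framework, and the parameter values (λ = 5, p = 0.6, 200 sites) are made up.

```python
import numpy as np
from math import comb, exp, log

def nmix_loglik(lam, p, counts, K=100):
    """Marginal log-likelihood of an N-mixture model: N_i ~ Poisson(lam),
    y_i ~ Binomial(N_i, p), with latent abundance summed out up to bound K."""
    pois = [exp(-lam)]                     # Poisson pmf over 0..K, built iteratively
    for n in range(1, K + 1):
        pois.append(pois[-1] * lam / n)
    ll = 0.0
    for y in counts:
        s = sum(pois[n] * comb(n, y) * p**y * (1 - p)**(n - y)
                for n in range(y, K + 1))  # sum out the latent abundance N
        ll += log(s)
    return ll

rng = np.random.default_rng(4)
N = rng.poisson(5.0, size=200)   # latent local abundances at 200 sites
y = rng.binomial(N, 0.6)         # point counts with detection probability 0.6
```

Evaluating `nmix_loglik` at the true parameters yields a higher likelihood than at grossly wrong values of λ or p, which is the basis for estimating both from the counts alone.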

  15. Robust estimation of track parameters in wire chambers

    International Nuclear Information System (INIS)

    Bogdanova, N.B.; Bourilkov, D.T.

    1988-01-01

    The aim of this paper is to compare numerically the performance of the least-squares fit (LSF) and robust methods on modelled and real track data for determining the linear regression parameters of charged particles in wire chambers. It is shown that the Tukey robust estimate is superior to the more standard LSF variants. The efficiency of the method is illustrated by tables and figures for some important physical characteristics
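
A Tukey-type robust straight-line fit can be sketched as iteratively reweighted least squares with the biweight function, which gives zero weight to hits far from the current fit. This is a generic illustration under assumed data, not the paper's exact procedure.

```python
import numpy as np

def tukey_line_fit(x, y, c=4.685, n_iter=20):
    """Fit y = a + b*x by iteratively reweighted least squares with Tukey's biweight."""
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)            # ordinary LSF start
    for _ in range(n_iter):
        r = y - A @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
        u = r / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)   # biweight: far hits get w = 0
        W = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * W[:, None], y * W, rcond=None)
    return beta

# Hypothetical straight track with one badly mismeasured wire hit
x = np.arange(10, dtype=float)
y = 2.0 + 0.5 * x
y[7] += 30.0          # outlier hit
a, b = tukey_line_fit(x, y)
```

The ordinary LSF slope is pulled strongly by the single bad hit, while the biweight fit recovers the true intercept 2 and slope 0.5, because the outlier's weight drops to zero after the first reweighting.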

  16. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  17. Estimating open population site occupancy from presence-absence data lacking the robust design.

    Science.gov (United States)

    Dail, D; Madsen, L

    2013-03-01

    Many animal monitoring studies seek to estimate the proportion of a study area occupied by a target population. The study area is divided into spatially distinct sites where the detected presence or absence of the population is recorded, and this is repeated in time for multiple seasons. However, occupied sites are detected with probability p < 1. MacKenzie et al. (2003, Ecology 84, 2200-2207) developed a multiseason model for estimating seasonal site occupancy (ψt) while accounting for unknown p. Their model performs well when observations are collected according to the robust design, where multiple sampling occasions occur during each season; the repeated sampling aids in the estimation of p. However, their model does not perform as well when the robust design is lacking. In this paper, we propose an alternative likelihood model that yields improved seasonal estimates of p and ψt in the absence of the robust design. We construct the marginal likelihood of the observed data by conditioning on, and summing out, the latent number of occupied sites during each season. A simulation study shows that in cases without the robust design, the proposed model estimates p with less bias than the MacKenzie et al. model and hence improves the estimates of ψt. We apply both models to a data set consisting of repeated presence-absence observations of American robins (Turdus migratorius) with yearly survey periods. The two models are compared to a third estimator available when the repeated counts (from the same study) are considered, with the proposed model yielding estimates of ψt closest to estimates from the point count model. Copyright © 2013, The International Biometric Society.

  18. Doubly Robust Estimation of Optimal Dynamic Treatment Regimes

    DEFF Research Database (Denmark)

    Barrett, Jessica K; Henderson, Robin; Rosthøj, Susanne

    2014-01-01

    We compare methods for estimating optimal dynamic decision rules from observational data, with particular focus on estimating the regret functions defined by Murphy (in J. R. Stat. Soc., Ser. B, Stat. Methodol. 65:331-355, 2003). We formulate a doubly robust version of the regret-regression approach of Almirall et al. (in Biometrics 66:131-139, 2010) and Henderson et al. (in Biometrics 66:1192-1201, 2010) and demonstrate that it is equivalent to a reduced form of Robins' efficient g-estimation procedure (Robins, in Proceedings of the Second Symposium on Biostatistics. Springer, New York, pp. 189-326, 2004). Simulation studies suggest that while the regret-regression approach is most efficient when there is no model misspecification, in the presence of misspecification the efficient g-estimation procedure is more robust. The g-estimation method can be difficult to apply in complex...

  19. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting.

    Directory of Open Access Journals (Sweden)

    Dominic Waithe

    Full Text Available We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. Given the average time per vial for manual counting is approximately 40 seconds, our software introduces a timesaving advantage for

  20. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum

  1. Robust cylinder pressure estimation in heavy-duty diesel engines

    NARCIS (Netherlands)

    Kulah, S.; Forrai, A.; Rentmeester, F.; Donkers, T.; Willems, F.P.T.

    2017-01-01

    The robustness of a new single-cylinder pressure sensor concept is experimentally demonstrated on a six-cylinder heavy-duty diesel engine. Using a single-cylinder pressure sensor and a crank angle sensor, this single-cylinder pressure sensor concept estimates the in-cylinder pressure traces in the

  2. On the robust nonparametric regression estimation for a functional regressor

    OpenAIRE

    Azzedine, Nadjia; Laksaci, Ali; Ould-Saïd, Elias

    2009-01-01

    On the robust nonparametric regression estimation for a functional regressor. Corresponding author: Elias Ould-Saïd. Affiliation: Département de Mathématiques, Univ. Djillali Liabès, BP 89, 22000 Sidi Bel Abbès, Algeria.

  3. Robust efficient estimation of heart rate pulse from video

    Science.gov (United States)

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices in both accuracy and sensitivity. PMID:24761294
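
The advantage of working in log space is that a multiplicative illumination model becomes additive, so a slow illumination drift separates cleanly from the small pulsatile modulation. The following synthetic sketch (our construction, with made-up frame rate and pulse frequency, not the authors' pipeline) recovers the pulse frequency from a simulated mean skin-pixel intensity.

```python
import numpy as np

fs = 30.0                         # assumed camera frame rate (Hz)
t = np.arange(0, 10, 1 / fs)
pulse_hz = 1.2                    # simulated pulse: 72 bpm

# Mean skin-pixel intensity: slow illumination drift times a small pulsatile modulation
intensity = (1.0 + 0.3 * np.sin(2 * np.pi * 0.1 * t)) \
            * (1.0 + 0.01 * np.sin(2 * np.pi * pulse_hz * t))

signal = np.log(intensity)        # log space turns the product into a sum
signal -= signal.mean()
spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
band = (freqs > 0.7) & (freqs < 4.0)          # plausible heart-rate band
est_hz = freqs[band][np.argmax(spec[band])]   # dominant in-band frequency
```

The spectral peak inside the heart-rate band lands on the simulated 1.2 Hz pulse even though the illumination drift is thirty times larger in amplitude.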

  4. Robust Visual Tracking Using the Bidirectional Scale Estimation

    Directory of Open Access Journals (Sweden)

    An Zhiyong

    2017-01-01

    Full Text Available Object tracking with robust scale estimation is a challenging task in computer vision. This paper presents a novel tracking algorithm that learns the translation and scale filters with a complementary scheme. The translation filter is constructed using ridge regression and multidimensional features. A robust scale filter is constructed by bidirectional scale estimation, combining a forward scale and a backward scale. Firstly, we learn the scale filter using the forward tracking information. Then the forward and backward scales can be estimated using the respective scale filters. Secondly, a conservative strategy is adopted to compromise between the forward and backward scales. Finally, the scale filter is updated based on the final scale estimate. Updating the scale filter in this way is effective, since a stable scale estimate improves the filter's performance. To reveal the effectiveness of our tracker, experiments are performed on 32 sequences with significant scale variation and on the benchmark dataset with 50 challenging videos. Our results show that the proposed tracker outperforms several state-of-the-art trackers in terms of robustness and accuracy.

  5. The effect of volume and quenching on estimation of counting efficiencies in liquid scintillation counting

    International Nuclear Information System (INIS)

    Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.

    1979-01-01

    The effect of volume on the liquid scintillation counting performance of ¹⁴C samples has been investigated. A decrease in counting efficiency was observed for samples with volumes below about 6 ml and those above about 18 ml when unquenched samples were assayed. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters were used in an investigation to determine the magnitude of the error in predicting counting efficiencies when small volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicates that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss in counting precision. (author)

  6. A robust bayesian estimate of the concordance correlation coefficient.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
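
For reference, the classical (non-robust) form of Lin's CCC that the abstract's Bayesian method builds on can be computed directly from sample moments; the robust t-distribution version itself is beyond a short sketch. The data values below are made up.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (classical, non-robust form)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()               # population variances
    sxy = ((x - mx) * (y - my)).mean()      # covariance
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

x = [1.0, 2.0, 3.0, 4.0]
print(ccc(x, x))                    # perfect agreement
print(ccc(x, [v + 1 for v in x]))   # same shape but shifted: penalized below 1
```

Unlike plain correlation, CCC penalizes the constant shift in the second call, which is why it is preferred for agreement assessment.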

  7. Robust linear discriminant analysis with distance based estimators

    Science.gov (United States)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is a supervised classification technique concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate this sensitivity, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to replace the classical sample mean and sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real data studies were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with the existing robust LDR.
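
The classical two-group LDR that the robust version modifies can be written down compactly: estimate the group means and pooled covariance, then classify by which side of the midpoint hyperplane an observation falls. The sketch below shows only this classical rule on synthetic data (the paper's MVV estimators would simply replace the mean and covariance computations).

```python
import numpy as np

def lda_rule(X1, X2):
    """Classical two-group linear discriminant rule from sample moments."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    Sp = ((n1 - 1) * np.cov(X1, rowvar=False)
          + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)  # pooled covariance
    w = np.linalg.solve(Sp, m1 - m2)          # discriminant direction
    c = w @ (m1 + m2) / 2                     # midpoint threshold
    return lambda x: 1 if x @ w > c else 2

rng = np.random.default_rng(5)
X1 = rng.multivariate_normal([0.0, 0.0], np.eye(2), 100)   # group 1 training sample
X2 = rng.multivariate_normal([3.0, 3.0], np.eye(2), 100)   # group 2 training sample
classify = lda_rule(X1, X2)
```

Points near each population mean are allocated to the corresponding group; sensitivity to outliers enters exactly through `mean` and `np.cov`, which is what motivates the robust substitution.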

  8. Robust estimation of adaptive tensors of curvature by tensor voting.

    Science.gov (United States)

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, and rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been validated through detailed quantitative experiments, performing better on a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  9. Robust estimation of the correlation matrix of longitudinal data

    KAUST Repository

    Maadooliat, Mehdi

    2011-09-23

    We propose a double-robust procedure for modeling the correlation matrix of a longitudinal dataset. It is based on an alternative Cholesky decomposition of the form Σ = DLL⊤D where D is a diagonal matrix proportional to the square roots of the diagonal entries of Σ and L is a unit lower-triangular matrix determining solely the correlation matrix. The first robustness is with respect to model misspecification for the innovation variances in D, and the second is robustness to outliers in the data. The latter is handled using heavy-tailed multivariate t-distributions with unknown degrees of freedom. We develop a Fisher scoring algorithm for computing the maximum likelihood estimator of the parameters when the nonredundant and unconstrained entries of (L,D) are modeled parsimoniously using covariates. We compare our results with those based on the modified Cholesky decomposition of the form LD²L⊤ using simulations and a real dataset. © 2011 Springer Science+Business Media, LLC.
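
The decomposition Σ = DLL⊤D can be computed numerically by pulling out the standard deviations into D and factoring the remaining correlation matrix. The 3×3 covariance matrix below is hypothetical, and we take L as the ordinary Cholesky factor of the correlation matrix (its rows then have unit norm) rather than the paper's unit-diagonal parameterization.

```python
import numpy as np

# Hypothetical covariance matrix of a short longitudinal series
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 9.0, 2.1],
                  [0.8, 2.1, 1.0]])

d = np.sqrt(np.diag(Sigma))
D = np.diag(d)                      # diagonal matrix of standard deviations
R = Sigma / np.outer(d, d)          # correlation matrix: solely determined by L below
L = np.linalg.cholesky(R)           # lower triangular with L @ L.T == R
# Sigma factors as D L L^T D: D carries the variances, L the correlations.
```

Robustness to variance misspecification and to outliers can then be addressed separately, because D and L model disjoint aspects of Σ.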

  10. A robust methodology for modal parameters estimation applied to SHM

    Science.gov (United States)

    Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio

    2017-10-01

    The subject of structural health monitoring is drawing more and more attention over the last years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced through successive studies. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, the first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed. The second step consists in the data mining of information gathered from the first step. To attest to the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in laboratory and from a railway bridge are utilized. The results proved more robust and accurate than those obtained from methods based on one-step clustering analysis.

  11. Robust k-mer frequency estimation using gapped k-mers.

    Science.gov (United States)

    Ghandi, Mahmoud; Mohammad-Noori, Morteza; Beer, Michael A

    2014-08-01

    Oligomers of fixed length, k, commonly known as k-mers, are often used as fundamental elements in the description of DNA sequence features of diverse biological function, or as intermediate elements in the construction of more complex descriptors of sequence features such as position weight matrices. k-mers are very useful as general sequence features because they constitute a complete and unbiased feature set, and do not require parameterization based on incomplete knowledge of biological mechanisms. However, a fundamental limitation in the use of k-mers as sequence features is that as k is increased, larger spatial correlations in DNA sequence elements can be described, but the frequency of observing any specific k-mer becomes very small, and the count matrix rapidly approaches a sparse matrix of binary counts. Thus any statistical learning approach using k-mers will be susceptible to noisy estimation of k-mer frequencies once k becomes large. Because all molecular DNA interactions have limited spatial extent, gapped k-mers often carry the relevant biological signal. Here we use gapped k-mer counts to more robustly estimate the ungapped k-mer frequencies, by deriving an equation for the minimum norm estimate of k-mer frequencies given an observed set of gapped k-mer frequencies. We demonstrate that this approach provides a more accurate estimate of the k-mer frequencies in real biological sequences using a sample of CTCF binding sites in the human genome.
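
To make the gapped k-mer idea concrete: a gapped k-mer keeps k informative positions inside a length-l window and masks the rest. The counting sketch below is our illustration of the feature itself; the paper's minimum-norm frequency estimator built on these counts is more involved.

```python
from collections import Counter
from itertools import combinations

def gapped_kmer_counts(seq, l=4, k=3):
    """Count gapped k-mers: length-l windows with l-k positions masked by '.'."""
    counts = Counter()
    for i in range(len(seq) - l + 1):
        window = seq[i:i + l]
        for keep in combinations(range(l), k):   # choose the k informative positions
            gapped = ''.join(window[j] if j in keep else '.' for j in range(l))
            counts[gapped] += 1
    return counts

c = gapped_kmer_counts("GATTACA", l=4, k=3)
```

Each length-4 window contributes C(4,3) = 4 gapped patterns, so many more counts accumulate per pattern than for ungapped 4-mers, which is what makes the downstream frequency estimates less noisy.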

  12. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    The outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and its principle of robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion for geomagnetic information, a mathematical expression of the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is then converted to the solution of nonlinear equations, and Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is reduced to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400 nT.

  13. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
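
The regularized least-squares building block the abstract relies on can be sketched generically: adding a ridge term stabilizes the normal equations when the system matrix is ill-conditioned. This is a plain RLS illustration on a synthetic near-collinear system, not the paper's beamformer or its regularization-parameter selection rule.

```python
import numpy as np

def rls(A, b, lam):
    """Regularized least squares: argmin_x ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
A[:, 4] = A[:, 3] + 1e-6 * rng.standard_normal(20)   # near-collinear columns -> A^T A ill-conditioned
x_true = np.array([1.0, -2.0, 0.5, 3.0, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = rls(A, b, lam=0.1)
```

Without the `lam * np.eye(n)` term the solve would be numerically unstable here; with it, the estimate stays finite and still reproduces the observations closely.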

  14. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    Science.gov (United States)

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

    The histogram equalization approach is an efficient feature normalization technique for noise robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
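
The core histogram-equalization step can be sketched as a rank-based CDF mapping: each test feature is replaced by the reference-distribution quantile at its empirical CDF value. This is a generic sketch of the classical technique on synthetic features, not the paper's class-based or Bayesian variant.

```python
import numpy as np

def histogram_equalize(test_feat, ref_feat):
    """Map test features so their empirical CDF matches the reference CDF."""
    ranks = np.argsort(np.argsort(test_feat))   # rank of each test feature, 0..n-1
    cdf = (ranks + 0.5) / len(test_feat)        # empirical CDF values in (0, 1)
    return np.quantile(ref_feat, cdf)           # inverse reference CDF

ref = np.random.default_rng(2).standard_normal(1000)   # clean-condition features
noisy = 3.0 * ref[:200] + 5.0                          # shifted and scaled test features
eq = histogram_equalize(noisy, ref)
```

After equalization the test features match the reference distribution's location and spread, removing the mismatch a noisy test environment introduces.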

  15. Robust w-Estimators for Cryo-EM Class Means

    Science.gov (United States)

    Huang, Chenxi; Tagare, Hemant D.

    2016-01-01

    A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
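
The threshold-free weighting idea can be illustrated with a simple iteratively reweighted class mean, where each image's weight decays smoothly with its distance to the current average instead of being cut off by a hard cross-correlation threshold. The weight function and data below are our own simplified stand-ins, not the paper's w-estimator.

```python
import numpy as np

def robust_class_mean(images, n_iter=10, c=2.0):
    """Threshold-free weighted average: weights decay smoothly with distance to the mean."""
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(images - mean, axis=1)   # distance of each image to the mean
        s = np.median(d) + 1e-12                    # robust scale
        w = 1.0 / (1.0 + (d / (c * s)) ** 2)        # Cauchy-type weights, no hard cutoff
        mean = (w[:, None] * images).sum(axis=0) / w.sum()
    return mean

rng = np.random.default_rng(3)
particles = rng.normal(1.0, 0.1, size=(50, 16))   # aligned particle images (flattened)
outliers = rng.normal(8.0, 0.1, size=(5, 16))     # contaminant images (e.g., ice)
data = np.vstack([particles, outliers])
est = robust_class_mean(data)
```

The plain average of `data` is pulled far from the particle signal by the five contaminants, while the reweighted mean converges back to it without any threshold having been chosen.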

  16. Robust head pose estimation via supervised manifold learning.

    Science.gov (United States)

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, with the assumption that, with pose as the only variable, face images should lie on a smooth, low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation, and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction as well as the graph weight by constraining them with the pose angle information. For the third stage, we present a supervised neighborhood-based linear feature transformation algorithm that keeps data points with similar pose angles close together and data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Influence of binary mask estimation errors on robust speaker identification

    DEFF Research Database (Denmark)

    May, Tobias

    2017-01-01

    Missing-data strategies have been developed to improve the noise-robustness of automatic speech recognition systems in adverse acoustic conditions. This is achieved by classifying time-frequency (T-F) units into reliable and unreliable components, as indicated by a so-called binary mask. Different approaches have been proposed to handle unreliable feature components, each with distinct advantages. The direct masking (DM) approach attenuates unreliable T-F units in the spectral domain, which allows the extraction of conventionally used mel-frequency cepstral coefficients (MFCCs). Instead of attenuating ... Since each of these approaches utilizes the knowledge about reliable and unreliable feature components in a different way, they will respond differently to estimation errors in the binary mask. The goal of this study was to identify the most effective strategy to exploit knowledge about reliable...

  18. Fast and Robust Nanocellulose Width Estimation Using Turbidimetry.

    Science.gov (United States)

    Shimizu, Michiko; Saito, Tsuguyuki; Nishiyama, Yoshiharu; Iwamoto, Shinichiro; Yano, Hiroyuki; Isogai, Akira; Endo, Takashi

    2016-10-01

    The dimensions of nanocelluloses are important factors in controlling their material properties. The present study reports a fast and robust method for estimating the widths of individual nanocellulose particles based on the turbidities of their water dispersions. Seven types of nanocellulose, including short and rigid cellulose nanocrystals and long and flexible cellulose nanofibers, are prepared via different processes. Their widths are calculated from the respective turbidity plots of their water dispersions, based on the theory of light scattering by thin and long particles. The turbidity-derived widths of the seven nanocelluloses range from 2 to 10 nm, and show good correlations with the thicknesses of nanocellulose particles spread on flat mica surfaces determined using atomic force microscopy. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Effects of lek count protocols on greater sage-grouse population trend estimates

    Science.gov (United States)

    Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.

    2016-01-01

    Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count

  20. Estimation of atomic interaction parameters by photon counting

    DEFF Research Database (Denmark)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    2014-01-01

    Detection of radiation signals is at the heart of precision metrology and sensing. In this article we show how the fluctuations in photon counting signals can be exploited to optimally extract information about the physical parameters that govern the dynamics of the emitter. For a simple two-level emitter subject to photon counting, we show that the Fisher information and the Cramér-Rao sensitivity bound based on the full detection record can be evaluated from the waiting time distribution in the fluorescence signal which can, in turn, be calculated for both perfect and imperfect detectors...

  1. Estimating nonrigid motion from inconsistent intensity with robust shape features

    International Nuclear Information System (INIS)

    Liu, Wenyang; Ruan, Dan

    2013-01-01

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided

  2. Fish growth parameters are commonly estimated from the counts of ...

    African Journals Online (AJOL)

    spamer

    >300 mm TL were tagged with external plastic dart tags, 89 mm long ... lithognathus were compared to predictions from growth models based on otolith ring counts. Galjoen ..... On the other hand, a new type of bio-compatible tag, which does ...

  3. Robust estimation of event-related potentials via particle filter.

    Science.gov (United States)

    Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito

    2016-03-01

    In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
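
    A minimal bootstrap particle filter conveys the idea: particles propagate through a state model, are reweighted by the observation likelihood, and are resampled when the effective sample size drops. The sketch below uses a simple random-walk trend observed in Gaussian noise, with 400 particles as in the study; it does not reproduce the paper's three-component EEG model (trend + autoregressive background + noise), and all noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 200, 400                               # time steps, particles
true = np.cumsum(0.05 * rng.normal(size=T))   # slowly varying "ERP" trend
obs = true + 0.5 * rng.normal(size=T)         # noisy single-trial signal

particles = np.zeros(N)
weights = np.full(N, 1.0 / N)
est = np.empty(T)
for t in range(T):
    particles += 0.05 * rng.normal(size=N)    # random-walk (trend) model
    # Reweight by the Gaussian observation likelihood
    weights *= np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)
    weights /= weights.sum()
    est[t] = weights @ particles              # posterior-mean estimate
    if 1.0 / np.sum(weights ** 2) < N / 2:    # resample when ESS drops
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights[:] = 1.0 / N
# Filtering reduces the MSE relative to the raw observations
print(np.mean((est - true) ** 2), np.mean((obs - true) ** 2))
```

    The reduction in per-sample error is what lets a particle filter reach a target ERP quality with fewer averaged trials, i.e., a shorter measurement time.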

  4. Counting Parasites: Using Shrimp to Teach Students about Estimation

    Science.gov (United States)

    Gunzburger, Lindsay; Curran, Mary Carla

    2013-01-01

    Estimation is an important skill that we rely on every day for simple tasks, such as providing food for a dinner party or arriving at an appointment on time. Despite its importance, most people have never been formally taught how to estimate. Estimation can also be a vital tool for scientific inquiry. We have created an activity designed to teach…

  5. Impact of microbial count distributions on human health risk estimates

    DEFF Research Database (Denmark)

    Ribeiro Duarte, Ana Sofia; Nauta, Maarten

    2015-01-01

    Quantitative microbiological risk assessment (QMRA) is influenced by the choice of the probability distribution used to describe pathogen concentrations, as this may eventually have a large effect on the distribution of doses at exposure. When fitting a probability distribution to microbial enumeration data, several factors may have an impact on the accuracy of that fit. Analysis of the best statistical fits of different distributions alone does not provide a clear indication of the impact in terms of risk estimates. Thus, in this study we focus on the impact of fitting microbial distributions on risk estimates, at two different concentration scenarios and at a range of prevalence levels. By using five different parametric distributions, we investigate whether different characteristics of a good fit are crucial for an accurate risk estimate. Among the factors studied are the importance...

  6. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2017-01-01

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill

  7. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    Science.gov (United States)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  8. Counting and confusion: Bayesian rate estimation with multiple populations

    Science.gov (United States)

    Farr, Will M.; Gair, Jonathan R.; Mandel, Ilya; Cutler, Curt

    2015-01-01

    We show how to obtain a Bayesian estimate of the rates or numbers of signal and background events from a set of events when the shapes of the signal and background distributions are known, can be estimated, or approximated; our method works well even if the foreground and background event distributions overlap significantly and the nature of any individual event cannot be determined with any certainty. We give examples of determining the rates of gravitational-wave events in the presence of background triggers from a template bank when noise parameters are known and/or can be fit from the trigger data. We also give an example of determining globular-cluster shape, location, and density from an observation of a stellar field that contains a nonuniform background density of stars superimposed on the cluster stars.
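
    The core computation can be sketched for a one-dimensional toy problem: given known foreground and background distributions of an event statistic, the likelihood of the foreground fraction is a product of two-component mixture densities, and a grid posterior assigns rates without ever classifying any individual event. The distributions, sample sizes, and flat prior below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
# Events carry a 1-D statistic x; foreground ~ N(2,1), background ~ N(0,1),
# with heavy overlap so no single event can be classified with certainty.
n_fg, n_bg = 30, 300
x = np.concatenate([rng.normal(2, 1, n_fg), rng.normal(0, 1, n_bg)])

def logpdf(x, mu):  # unit-variance Gaussian log density centered at mu
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

# Posterior over the foreground fraction f on a grid (flat prior):
#   L(f) = prod_i [ f * p_fg(x_i) + (1 - f) * p_bg(x_i) ]
f = np.linspace(1e-4, 1 - 1e-4, 999)
loglik = np.sum(np.logaddexp(np.log(f[:, None]) + logpdf(x, 2.0),
                             np.log(1 - f[:, None]) + logpdf(x, 0.0)), axis=1)
post = np.exp(loglik - loglik.max())
post /= post.sum()
f_mean = (f * post).sum()
print(f_mean)   # near the true fraction 30/330 ≈ 0.09
```

    The same structure extends to multiple populations and to fitting the shape parameters of each component jointly with the rates, as the abstract describes for gravitational-wave triggers.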

  9. Weak Properties and Robustness of t-Hill Estimators

    Czech Academy of Sciences Publication Activity Database

    Jordanova, P.; Fabián, Zdeněk; Hermann, P.; Střelec, L.; Rivera, A.; Girard, S.; Torres, S.; Stehlík, M.

    2016-01-01

    Roč. 19, č. 4 (2016), s. 591-626 ISSN 1386-1999 Institutional support: RVO:67985807 Keywords : asymptotic properties of estimators * point estimation * t-Hill estimator * t-lgHill estimator Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.679, year: 2016

  10. Robust Parametric Fault Estimation in a Hopper System

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2012-01-01

    The ability of diagnosis of the possible faults is a necessity for satellite launch vehicles during their mission. In this paper, a structural analysis method is employed to divide the complex propulsion system into simpler subsystems for fault diagnosis filter design. A robust fault diagnosis me...

  11. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  12. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    ‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November of 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish...

  13. Determinants of long-term growth : New results applying robust estimation and extreme bounds analysis

    NARCIS (Netherlands)

    Sturm, J.-E.; de Haan, J.

    2005-01-01

    Two important problems exist in cross-country growth studies: outliers and model uncertainty. Employing Sala-i-Martin's (1997a,b) data set, we first use robust estimation and analyze to what extent outliers influence OLS regressions. We then use both OLS and robust estimation techniques in applying

  14. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2013-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  15. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  16. A Robust Threshold for Iterative Channel Estimation in OFDM Systems

    Directory of Open Access Journals (Sweden)

    A. Kalaycioglu

    2010-04-01

    Full Text Available A novel threshold computation method for pilot symbol assisted iterative channel estimation in OFDM systems is considered. As the bits are transmitted in packets, the proposed technique calculates a particular threshold for each data packet in order to select the reliable decoder output symbols and improve the channel estimation performance. Iteratively, additional pilot symbols are established according to the threshold, and the channel is re-estimated with the new pilots added to the known channel estimation pilot set. The proposed threshold calculation method for selecting additional pilots performs better than non-iterative channel estimation and than no-threshold and fixed-threshold techniques in simulations of poor HF channels.

  17. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low-radioactivity samples for background. (orig.)
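
    The essential point — that conditioning on positivity avoids anomalous negative net-count estimates — can be sketched with a flat prior on the non-negative source contribution. This is a generic Bayesian background-correction toy, not necessarily the estimator derived in the paper; the grid, prior, and parameter names are assumptions.

```python
import numpy as np
from math import lgamma

def posterior_mean_source(n, b, s_max=50.0, ngrid=5000):
    """Posterior mean of a non-negative source mean s, given a gross
    Poisson count n and a known background mean b (flat prior on s >= 0):
        p(s | n)  ∝  (s + b)^n * exp(-(s + b))"""
    s = np.linspace(0.0, s_max, ngrid)
    logpost = n * np.log(s + b) - (s + b) - lgamma(n + 1)
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    return float((s * post).sum())

# The conventional estimate n - b goes negative when n < b;
# the Bayesian posterior mean stays positive by construction.
print(posterior_mean_source(n=3, b=5.0) > 0.0)   # True; n - b would be -2
```

    The full grid `post` is the posterior probability distribution of the measured quantity, which is exactly what the abstract says both of its methods deliver.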

  18. Estimated allele substitution effects underlying genomic evaluation models depend on the scaling of allele counts

    NARCIS (Netherlands)

    Bouwman, Aniek C.; Hayes, Ben J.; Calus, Mario P.L.

    2017-01-01

    Background: Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because scaling of

  19. Image Analytical Approach for Needle-Shaped Crystal Counting and Length Estimation

    DEFF Research Database (Denmark)

    Wu, Jian X.; Kucheryavskiy, Sergey V.; Jensen, Linda G.

    2015-01-01

    Estimation of nucleation and crystal growth rates from microscopic information is of critical importance. This can be an especially challenging task if needle growth of crystals is observed. To address this challenge, an image analytical method for counting of needle-shaped crystals and estimating...

  20. Reducing Inventory System Costs by Using Robust Demand Estimators

    OpenAIRE

    Raymond A. Jacobs; Harvey M. Wagner

    1989-01-01

    Applications of inventory theory typically use historical data to estimate demand distribution parameters. Imprecise knowledge of the demand distribution adds to the usual replenishment costs associated with stochastic demands. Only limited research has been directed at the problem of choosing cost effective statistical procedures for estimating these parameters. Available theoretical findings on estimating the demand parameters for (s, S) inventory replenishment policies are limited by their...

  1. The comparison between several robust ridge regression estimators in the presence of multicollinearity and multiple outliers

    Science.gov (United States)

    Zahari, Siti Meriam; Ramli, Norazan Mohamed; Moktar, Balkiah; Zainol, Mohammad Said

    2014-09-01

    In the presence of multicollinearity and multiple outliers, statistical inference for the linear regression model using ordinary least squares (OLS) estimators is severely affected and produces misleading results. To overcome this, many approaches have been investigated, including robust methods, which are reported to be less sensitive to the presence of outliers. In addition, the ridge regression technique has been employed to tackle the multicollinearity problem. In order to mitigate both problems, a combination of ridge regression and robust methods was discussed in this study. The superiority of this approach was examined under the simultaneous presence of multicollinearity and multiple outliers in multiple linear regression. This study examined the performance of several well-known robust estimators (M, MM, RIDGE) and robust ridge regression estimators, namely the Weighted Ridge M-estimator (WRM), Weighted Ridge MM (WRMM), and Ridge MM (RMM), in such a situation. Results showed that in the presence of simultaneous multicollinearity and multiple outliers (in both the x- and y-directions), RMM and RIDGE perform similarly and outperform the other estimators, regardless of the number of observations, the level of collinearity, and the percentage of outliers used. However, when outliers occurred in only a single direction (the y-direction), the WRMM estimator was the most superior among the robust ridge regression estimators, producing the least variance. In conclusion, robust ridge regression is the best alternative to robust and conventional least squares estimators when dealing with the simultaneous presence of multicollinearity and outliers.
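
    The ridge/robust combination can be sketched as iteratively reweighted least squares with Huber-type weights plus an L2 penalty, in the spirit of the weighted ridge M-estimators compared in the study. The exact WRM/WRMM weighting schemes are not reproduced here; the tuning constants and simulated data are assumptions.

```python
import numpy as np

def huber_ridge(X, y, lam=1.0, c=1.345, iters=50):
    """M-type ridge regression: each iteration solves a weighted ridge problem
        beta = (X'WX + lam*I)^{-1} X'Wy,
    with Huber weights w_i = min(1, c*s / |r_i|) and a MAD-based scale s."""
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)  # ridge start
    for _ in range(iters):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = np.minimum(1.0, c * s / (np.abs(r) + 1e-12))
        XW = X * w[:, None]
        beta = np.linalg.solve(XW.T @ X + lam * np.eye(p), XW.T @ y)
    return beta

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)              # severe collinearity
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + 0.1 * rng.normal(size=n)
idx = np.argsort(x1)[-10:]                       # y-direction outliers
y[idx] += 20.0                                   # at high-leverage points
ols = np.linalg.solve(X.T @ X, X.T @ y)
rob = huber_ridge(X, y, lam=1.0)
# Only beta1 + beta2 (= 2) is identifiable under collinearity; the robust
# ridge fit recovers it despite the outliers, plain OLS does not.
print(ols.sum(), rob.sum())
```

    The ridge penalty stabilizes the near-singular collinear direction while the Huber weights suppress the outliers, which is the division of labor behind all the combined estimators in the study.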

  2. National South African HIV prevalence estimates robust despite ...

    African Journals Online (AJOL)

    Approximately 18% of all people living with HIV in 2013 were estimated to live in South Africa (SA),[1] which ...

  3. HOTELLING'S T2 CONTROL CHARTS BASED ON ROBUST ESTIMATORS

    Directory of Open Access Journals (Sweden)

    SERGIO YÁÑEZ

    2010-01-01

    Full Text Available Under the presence of multivariate outliers in a Phase I analysis of a historical data set, the T2 control chart based on the usual sample mean vector and sample variance-covariance matrix performs poorly. Several alternative estimators have been proposed. Among them, estimators based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) are powerful in detecting a reasonable number of outliers. In this paper we propose a T2 control chart using the biweight S estimators for the location and dispersion parameters when monitoring multivariate individual observations. Simulation studies show that this method outperforms the T2 control chart based on MVE estimators for a small number of observations.
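
    A crude stand-in for the MCD estimator can be built from "concentration" steps: repeatedly re-fit the mean and covariance on the fraction of observations with the smallest Mahalanobis distances, then compute T2 statistics from the resulting robust estimates. This sketch only illustrates why robust location/dispersion estimates unmask outliers in a Phase I analysis; it is not FAST-MCD, not the biweight S estimator of the paper, and not a calibrated chart (no control limits or consistency factors are included).

```python
import numpy as np

def mcd_estimates(X, frac=0.75, iters=25):
    """MCD-style estimator via concentration steps: starting from the
    coordinate-wise median, repeatedly re-fit mean/covariance on the
    floor(frac*n) points with the smallest Mahalanobis distances."""
    n, p = X.shape
    h = int(frac * n)
    mu = np.median(X, axis=0)
    cov = np.cov(X, rowvar=False)
    for _ in range(iters):
        d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(cov), X - mu)
        keep = np.argsort(d2)[:h]
        mu = X[keep].mean(axis=0)
        cov = np.cov(X[keep], rowvar=False)
    return mu, cov

def t2(X, mu, cov):
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

rng = np.random.default_rng(5)
clean = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], 95)
outl = rng.multivariate_normal([6, 6], [[1, 0], [0, 1]], 5)
X = np.vstack([clean, outl])
mu_r, cov_r = mcd_estimates(X)
t2_classic = t2(X, X.mean(axis=0), np.cov(X, rowvar=False))
t2_robust = t2(X, mu_r, cov_r)
# Classical estimates are dragged toward the outliers (masking), so the
# planted outliers score much higher under the robust T2 statistic.
print(t2_classic[95:].mean(), t2_robust[95:].mean())
```

    This masking effect — the outliers inflating the very covariance used to judge them — is why the usual sample-based T2 chart performs poorly in contaminated Phase I data.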

  4. Robust Estimation of Productivity Changes in Japanese Shinkin Banks

    Directory of Open Access Journals (Sweden)

    Jianzhong DAI

    2014-05-01

    Full Text Available This paper estimates productivity changes in Japanese shinkin banks during the fiscal years 2001 to 2008 using the Malmquist index as the measure of productivity change. Data envelopment analysis (DEA is used to estimate the index. We also apply a smoothed bootstrapping approach to set up confidence intervals for estimates and study their statistical characteristics. By analyzing estimated scores, we identify trends in productivity changes in Japanese shinkin banks during the study period and investigate the sources of these trends. We find that in the latter half of the study period, productivity has significantly declined, primarily because of deterioration in technical efficiency, but scale efficiency has been significantly improved. Grouping the total sample according to the levels of competition reveals more details of productivity changes in shinkin banks.

  5. Robust-BD Estimation and Inference for General Partially Linear Models

    Directory of Open Access Journals (Sweden)

    Chunming Zhang

    2017-11-01

    Full Text Available The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of “robust-Bregman divergence (BD)” estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally-efficient procedure for obtaining “robust-BD” estimators and establish the consistency and asymptotic normality of the “robust-BD” estimator of the parametric component βₒ. For inference procedures of βₒ in the GPLM, we show that the Wald-type test statistic Wₙ constructed from the “robust-BD” estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λₙ is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang, 2005) between Wₙ and Λₙ in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed “robust-BD” estimators and the robust Wald-type test in the appearance of outlying observations.

  6. ROBUST ALGORITHMS OF PARAMETRIC ESTIMATION IN SOME STABILIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A.A. Vedyakov

    2016-07-01

    Full Text Available Subject of Research. We consider the task of keeping dynamic systems in a stable state by ensuring the stability of the trivial solution of various dynamic systems in the learning regime through tuning of their parameters. Method. The problems are solved by applying the ideology of constructing robust, finitely convergent algorithms. Main Results. The concepts of parametric algorithmization of stability and of steady asymptotic stability are introduced, and results on the synthesis of coarse gradient algorithms that solve the posed tasks in a finite number of iterations are presented. Practical Relevance. The results may be used to solve practical stabilization tasks in the operation of various engineering constructions and devices.

  7. Robust stability and ℋ∞-estimation for uncertain discrete systems with state-delay

    Directory of Open Access Journals (Sweden)

    Mahmoud Magdi S.

    2001-01-01

    Full Text Available In this paper, we investigate the problems of robust stability and ℋ∞-estimation for a class of linear discrete-time systems with time-varying norm-bounded parameter uncertainty and unknown state delay. We provide complete results for robust stability with a prescribed performance measure and establish a version of the discrete Bounded Real Lemma. Then, we design a linear estimator such that the estimation error dynamics is robustly stable with a guaranteed ℋ∞-performance irrespective of the parametric uncertainties and unknown state delays. A numerical example is worked out to illustrate the developed theory.

  8. Computationally Efficient and Noise Robust DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor...... a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower...

  9. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  10. Robust estimators based on generalization of trimmed mean

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Bejda, P.

    (2018) ISSN 0361-0918 Institutional support: RVO:67985556 Keywords : Breakdown point * Estimators * Geometric median * Location * Trimmed mean Subject RIV: BA - General Mathematics Impact factor: 0.457, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/adam-0481224.pdf
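
    This record lists only keywords, among them the trimmed mean and the geometric median. As a minimal illustration of why trimming yields robustness (a textbook sketch, not the generalization studied in the paper), a symmetric trimmed mean discards a fraction of the smallest and largest observations before averaging:

    ```python
    def trimmed_mean(data, trim=0.1):
        """Symmetric trimmed mean: drop a fraction `trim` of the
        smallest and the largest observations, then average the rest."""
        xs = sorted(data)
        k = int(len(xs) * trim)  # number of points cut from each tail
        kept = xs[k:len(xs) - k] if k > 0 else xs
        return sum(kept) / len(kept)

    # A single gross outlier barely moves the trimmed mean, while it
    # drags the ordinary mean far away from the bulk of the data.
    sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 500.0]
    print(trimmed_mean(sample, trim=0.1))  # close to 10
    print(sum(sample) / len(sample))       # pulled far above 10
    ```

    The breakdown point named in the keywords is the fraction of contaminated observations such an estimator tolerates before it can be made arbitrarily bad; for the 10%-trimmed mean it is 10%.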

  11. Perception-oriented methodology for robust motion estimation design

    NARCIS (Netherlands)

    Heinrich, A.; Vleuten, van der R.J.; Haan, de G.

    2014-01-01

    Optimizing a motion estimator (ME) for picture rate conversion is challenging. This is because there are many types of MEs and, within each type, many parameters, which makes subjective assessment of all the alternatives impractical. To solve this problem, we propose an automatic design methodology

  12. Reconstruction of financial networks for robust estimation of systemic risk

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-01-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimates of systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimates of risk can be achieved. Finally, we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.

  13. Reconstruction of financial networks for robust estimation of systemic risk

    Science.gov (United States)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-03-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimates of systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimates of risk can be achieved. Finally, we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.

  14. Robust Estimation and Moment Selection in Dynamic Fixed-effects Panel Data Models

    NARCIS (Netherlands)

    Cizek, P.; Aquaro, M.

    2015-01-01

    This paper extends an existing outlier-robust estimator of linear dynamic panel data models with fixed effects, which is based on the median ratio of two consecutive pairs of first-differenced data. To improve its precision and robust properties, a general procedure based on many pairwise

  15. Improved stove programs need robust methods to estimate carbon offsets

    OpenAIRE

    Johnson, Michael; Edwards, Rufus; Masera, Omar

    2010-01-01

    Current standard methods result in significant discrepancies in carbon offset accounting compared to approaches based on representative community-based subsamples, which provide more realistic assessments at reasonable cost. Perhaps more critically, neither of the currently approved methods incorporates the uncertainties inherent in estimates of emission factors or non-renewable fuel usage (fNRB). Since emission factors and fNRB contribute 25% and 47%, respectively, to the overall uncertainty in ...

  16. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    Directory of Open Access Journals (Sweden)

    Justine S Alexander

    Full Text Available When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards (Panthera uncia). It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.
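
    The capture-success figure quoted in the abstract follows directly from the reported counts; the arithmetic can be checked as follows (only the numbers stated above are used):

    ```python
    captures = 76      # separate snow leopard captures
    trap_days = 2906   # total camera-trap effort
    area_km2 = 480     # study area

    capture_success = captures / trap_days * 100   # captures per 100 trap-days
    print(round(capture_success, 2))               # 2.62, as reported

    # A naive density of *identified* individuals over the raw grid area:
    identified = 20
    naive_density = identified / area_km2 * 100    # individuals per 100 km2
    print(round(naive_density, 2))
    ```

    The spatially explicit capture-recapture estimate reported above (3.31 per 100 km2) differs from this naive ratio because it models detectability and the effective sampled area rather than dividing by the raw grid area.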

  17. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    Science.gov (United States)

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…

  18. Estimation of U content in coffee samples by fission-track counting

    International Nuclear Information System (INIS)

    Sharma, P.K.; Lal, N.; Nagpaul, K.K.

    1985-01-01

    Because coffee is consumed in large quantities by humans, the authors undertook the study of the uranium content of coffee as a continuation of earlier work to estimate the U content of foodstuffs. Since literature on this subject is scarce, they decided to use the well-established fission-track-counting technique to determine the U content of coffee

  19. Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Feten Gannouni

    2017-01-01

    Full Text Available We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using a transformation of the original system, a new robust proportional integral filter (RPIF), having an error variance with an optimized guaranteed upper bound for any allowed uncertainty, is proposed to improve the estimation of unknown time-varying faults and the robustness against uncertainties. In this study, the minimization problem for the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMIs) for all admissible uncertainties. The proportional and integral gains are optimally chosen by solving the convex optimization problem. Simulation results are given to illustrate the performance of the proposed filter, in particular for solving the problem of joint fault and state estimation.

  20. Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation

    International Nuclear Information System (INIS)

    Gao, Y; Guo, Y; Chi, Y L; Qin, S R

    2006-01-01

    Order tracking plays an important role in the non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order tracking method for rotating machinery is introduced, in which a peak-search algorithm applied to the spectrogram from time-frequency analysis is employed to obtain the IFE of the vibrations. An improvement to the peak search is proposed which prevents strong non-order components or noise from disturbing the search. Compared with traditional order-tracking methods, IFE-based order tracking is simpler to apply and depends only on software. Tests verify the validity of the method. The method is an effective supplement to traditional methods, and its application in the condition monitoring and diagnosis of rotating machinery is conceivable.

  1. Robust Homography Estimation Based on Nonlinear Least Squares Optimization

    Directory of Open Access Journals (Sweden)

    Wei Mou

    2014-01-01

    Full Text Available The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography-computation step. There have been numerous attempts to filter out these erroneous correspondences, but it is unlikely that perfect matching can always be achieved. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have little or no effect on the computed homography. Unlike normal homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be simultaneously identified while the homography is computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.
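
    As a rough sketch of how false matches can be stripped of influence during homography estimation, the following implements a generic iteratively reweighted DLT with Huber-style weights; it is not the authors' descriptor-aware cost function, which additionally folds descriptor similarity into the optimization:

    ```python
    import numpy as np

    def fit_homography(src, dst, weights):
        """Weighted direct linear transform (DLT): each correspondence
        contributes two equations, scaled by the square root of its weight;
        H is the smallest right singular vector of the stacked system."""
        rows = []
        for (x, y), (u, v), w in zip(src, dst, weights):
            s = np.sqrt(w)
            rows.append(s * np.array([x, y, 1, 0, 0, 0, -u * x, -u * y, -u]))
            rows.append(s * np.array([0, 0, 0, x, y, 1, -v * x, -v * y, -v]))
        _, _, vt = np.linalg.svd(np.asarray(rows))
        return vt[-1].reshape(3, 3)

    def robust_homography(src, dst, iters=20, c=1.0):
        """Iteratively reweighted DLT: large reprojection residuals get
        Huber-style weights c/r, and a final refit keeps only pairs whose
        residual is below c, so gross outliers end up with no influence."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        w = np.ones(len(src))
        for _ in range(iters):
            H = fit_homography(src, dst, w)
            p = np.c_[src, np.ones(len(src))] @ H.T
            r = np.linalg.norm(p[:, :2] / p[:, 2:3] - dst, axis=1)
            w = np.where(r < c, 1.0, c / np.maximum(r, 1e-12))
        inliers = r < c
        if inliers.sum() >= 4:          # 4 point pairs determine H
            H = fit_homography(src, dst, inliers.astype(float))
        return H / H[2, 2]
    ```

    With mostly correct matches, the weights of outlying correspondences shrink toward zero within a few iterations, mimicking the "little or no effect" behavior described above.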

  2. Uranium mass and neutron multiplication factor estimates from time-correlation coincidence counts

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Wenxiong [China Academy of Engineering Physics, Center for Strategic Studies, Beijing 100088 (China); Li, Jiansheng [China Academy of Engineering Physics, Institute of Nuclear Physics and Chemistry, Mianyang 621900 (China); Zhu, Jianyu [China Academy of Engineering Physics, Center for Strategic Studies, Beijing 100088 (China)

    2015-10-11

    Time-correlation coincidence counts of neutrons are an important means to measure attributes of nuclear material. The main deficiency in the analysis is that an attribute of an unknown component can only be assessed by comparing it with similar known components. There is a lack of a universal method of measurement suitable for the different attributes of the components. This paper presents a new method that uses universal relations to estimate the mass and neutron multiplication factor of any uranium component with known enrichment. Based on numerical simulations and analyses of 64 highly enriched uranium components with different thicknesses and average radii, the relations between mass, multiplication and coincidence spectral features have been obtained by linear regression analysis. To examine the validity of the method in estimating the mass of uranium components with different sizes, shapes, enrichment, and shielding, the features of time-correlation coincidence-count spectra for other objects with similar attributes are simulated. Most of the masses and multiplications for these objects could also be derived by the formulation. Experimental measurements of highly enriched uranium castings have also been used to verify the formulation. The results show that for a well-designed time-dependent coincidence-count measuring system of a uranium attribute, there are a set of relations dependent on the uranium enrichment by which the mass and multiplication of the measured uranium components of any shape and size can be estimated from the features of the source-detector coincidence-count spectrum.
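
    The final step described above, mapping coincidence-spectrum features to mass through a linear regression, can be sketched generically; the feature values and coefficients below are synthetic placeholders, not the paper's data:

    ```python
    import numpy as np

    # Synthetic stand-in data: each row is one simulated uranium component,
    # the columns are coincidence-spectrum features; `mass` holds the
    # corresponding known masses. All values are invented purely to show
    # the regression step, not taken from the paper.
    rng = np.random.default_rng(0)
    features = rng.uniform(0.5, 2.0, size=(64, 2))
    mass = features @ np.array([3.0, 1.5]) + 0.8 + rng.normal(0, 0.05, size=64)

    # Ordinary least squares with an intercept, as in a linear regression
    # of mass on spectral features.
    X = np.c_[features, np.ones(len(features))]
    coef, *_ = np.linalg.lstsq(X, mass, rcond=None)

    # Predict the mass of a new component from its measured features.
    new_features = np.array([1.2, 0.9, 1.0])
    predicted_mass = new_features @ coef
    ```

    The paper fits one such relation per enrichment level, so that mass and multiplication of a measured component can be read off from its coincidence-spectrum features.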

  3. Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.

    Science.gov (United States)

    Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, the effects of the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in cases where both multicollinearity and outliers are present in the analysis of mixture experiments. For the selection of the biasing parameter, we use fraction of design space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set.
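
    For context, the plain ridge estimator that these ridge-type robust variants build on shrinks the OLS solution through a biasing parameter k (a textbook sketch; the paper's estimators combine this shrinkage with outlier-resistant loss functions):

    ```python
    import numpy as np

    def ridge(X, y, k):
        """Ridge estimator: beta = (X'X + kI)^{-1} X'y.
        k = 0 recovers ordinary least squares; a larger k trades bias
        for variance, which helps under multicollinearity."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

    # Two nearly collinear predictors make X'X ill-conditioned;
    # a small k stabilizes the solution.
    rng = np.random.default_rng(1)
    x1 = rng.normal(size=50)
    x2 = x1 + rng.normal(scale=1e-3, size=50)   # almost a copy of x1
    X = np.c_[x1, x2]
    y = x1 + rng.normal(scale=0.1, size=50)
    beta_ols = ridge(X, y, 0.0)
    beta_ridge = ridge(X, y, 1.0)
    ```

    The OLS coefficients of the two near-duplicate columns are unstable, while the ridge solution splits the effect roughly evenly between them.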

  4. Robust estimation for partially linear models with large-dimensional covariates.

    Science.gov (United States)

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures for the parameters in partially linear models with large-dimensional covariates. To enhance interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  5. Methodology for estimation of 32P in bioassay samples by Cerenkov counting

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Sawant, Pramilla D.; Yadav, R.K.B.; Rao, D.D.

    2016-01-01

    Radioactive phosphorus (32P) as phosphate is used to effectively reduce bone pain in terminal cancer patients. Several hospitals in India carry out this palliative care procedure on a regular basis. Thus, production as well as synthesis of 32P compounds has increased over the years to meet this requirement. Monitoring of radiation workers handling 32P compounds is important for further strengthening the radiological protection program at the processing facility. 32P being a pure beta emitter (βmax = 1.71 MeV, t1/2 = 14.3 d), bioassay is the preferred individual monitoring technique. The method standardized at the Bioassay Lab, Trombay, includes estimation of 32P in urine by co-precipitation with ammonium phosphomolybdate (AMP) followed by gross beta counting. In the present study, the feasibility of Cerenkov counting for detection of 32P in bioassay samples was explored and the results obtained were compared with the gross beta counting technique

  6. Body protein losses estimated by nitrogen balance and potassium-40 counting

    International Nuclear Information System (INIS)

    Belyea, R.L.; Babbitt, C.L.; Sedgwick, H.T.; Zinn, G.M.

    1986-01-01

    Body protein losses estimated from N balance were compared with those estimated by 40K counting. Six nonlactating dairy cows were fed an adequate N diet for 7 wk, a low N diet for 9 wk, and a replete N diet for 3 wk. The low N diet contained high cell wall grass hay plus ground corn, starch, and molasses. Soybean meal was added to the low N diet to increase N in the adequate N and replete N diets. Intake was measured daily. Digestibilities, N balance, and body composition (estimated by 40K counting) were determined during each dietary regimen. During low N treatment, hay dry matter intake declined 2 kg/d, and supplement increased about .5 kg/d. Dry matter digestibility was not altered by N treatment. Protein and acid detergent fiber digestibilities decreased from 40 and 36% during adequate N to 20 and 2%, respectively, during low N. Fecal and urinary N also declined when cows were fed the low N diet. By the end of repletion, total intake, fiber, and protein digestibilities as well as N partition were similar to or exceeded those during adequate N intake. Body protein (N) loss was estimated by N balance to be about 3 kg compared with 8 kg by 40K counting. Body fat losses (32 kg) were large because of low energy digestibility and intake. Seven kilograms of body fat were regained during repletion, but there was no change in body protein

  7. A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2015-12-01

    Full Text Available Estimating the residual capacity or state-of-charge (SoC) of commercial batteries on-line, without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV) designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity in EV batteries or other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature and current). To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV). The charging/discharging efficiencies, used for compensating the Coulombic losses, are calculated by the lossy counting-based method. The measurement drift, resulting from the current sensor, is amended with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.
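
    The core Coulomb-counting recursion that the lossy-counting scheme corrects can be written in a few lines (a generic textbook update; the fixed efficiency below stands in for the on-line efficiency estimation and drift compensation of the paper):

    ```python
    def update_soc(soc, current_a, dt_s, capacity_ah, efficiency=1.0):
        """One Coulomb-counting step. current_a > 0 means discharge;
        `efficiency` is a fixed placeholder for the Coulombic efficiency
        that the lossy-counting method estimates on-line."""
        delta_ah = current_a * dt_s / 3600.0          # charge moved this step
        return soc - efficiency * delta_ah / capacity_ah

    # Discharge a 2 Ah cell at 1 A for one hour, starting from an SoC
    # obtained elsewhere (e.g., from the OCV at rest), here 0.9.
    soc = 0.9
    for _ in range(3600):                             # 1 s steps
        soc = update_soc(soc, current_a=1.0, dt_s=1.0, capacity_ah=2.0)
    print(round(soc, 3))   # 0.9 - (1 A * 1 h) / 2 Ah = 0.4
    ```

    The cumulative error mentioned in the abstract arises precisely because this integration accumulates current-sensor drift and any mismatch in the assumed efficiency.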

  8. Robust and bias-corrected estimation of the coefficient of tail dependence

    DEFF Research Database (Denmark)

    Dutang, C.; Goegebeur, Y.; Guillou, A.

    2014-01-01

    We introduce a robust and asymptotically unbiased estimator for the coefficient of tail dependence in multivariate extreme value statistics. The estimator is obtained by fitting a second order model to the data by means of the minimum density power divergence criterion. The asymptotic properties ...

  9. Robust estimation and moment selection in dynamic fixed-effects panel data models

    NARCIS (Netherlands)

    Cizek, Pavel; Aquaro, Michele

    Considering linear dynamic panel data models with fixed effects, existing outlier–robust estimators based on the median ratio of two consecutive pairs of first-differenced data are extended to higher-order differencing. The estimation procedure is thus based on many pairwise differences and their

  10. Estimator-based multiobjective robust control strategy for an active pantograph in high-speed railways

    DEFF Research Database (Denmark)

    Lu, Xiaobing; Liu, Zhigang; Song, Yang

    2018-01-01

    Active control of the pantograph is one of the promising measures for decreasing fluctuation in the contact force between the pantograph and the catenary. In this paper, an estimator-based multiobjective robust control strategy is proposed for an active pantograph, which consists of a state estim...

  11. Enhanced coulomb counting method for estimating state-of-charge and state-of-health of lithium-ion batteries

    International Nuclear Information System (INIS)

    Ng, Kong Soon; Moo, Chin-Sien; Chen, Yi-Ping; Hsieh, Yao-Ching

    2009-01-01

    The coulomb counting method is expedient for state-of-charge (SOC) estimation of lithium-ion batteries with high charging and discharging efficiencies. The charging and discharging characteristics are investigated and reveal that the coulomb counting method is convenient and accurate for estimating the SOC of lithium-ion batteries. A smart estimation method based on coulomb counting is proposed to improve the estimation accuracy. Corrections are made by considering the charging and operating efficiencies. Furthermore, the state-of-health (SOH) is evaluated from the maximum releasable capacity. Through experiments that emulate practical operation, the SOC estimation method is verified, demonstrating its effectiveness and accuracy.
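
    The capacity-ratio definitions of SOH and SOC used above reduce to simple ratios; the cell values here are illustrative, not from the paper:

    ```python
    def state_of_health(max_releasable_ah, rated_ah):
        """SOH as the fraction of rated capacity that is still releasable."""
        return max_releasable_ah / rated_ah

    def state_of_charge(releasable_now_ah, max_releasable_ah):
        """SOC relative to the *aged* maximum capacity, not the rated
        label value, so SOC stays in [0, 1] as the cell degrades."""
        return releasable_now_ah / max_releasable_ah

    # An aged 2.0 Ah cell that can only release 1.7 Ah when fully charged:
    soh = state_of_health(1.7, 2.0)    # 0.85
    # With 0.85 Ah currently releasable, SOC is 50% of the aged capacity.
    soc = state_of_charge(0.85, 1.7)   # 0.5
    ```

    Referencing SOC to the aged maximum is the design choice that lets a single coulomb counter track both quantities as the battery degrades.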

  12. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    Science.gov (United States)

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  13. Where can pixel counting area estimates meet user-defined accuracy requirements?

    Science.gov (United States)

    Waldner, François; Defourny, Pierre

    2017-08-01

    Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
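
    The pixel-counting estimator and its bias under asymmetric classification errors can be made concrete with a toy confusion matrix (hypothetical counts, not the South African case study):

    ```python
    # Pixel counting: area = (pixels mapped to the class) x (pixel area).
    pixel_area_ha = 0.09          # e.g., a 30 m x 30 m pixel

    # Hypothetical confusion-matrix counts for one class:
    true_positive = 900_000       # cropland mapped as cropland
    false_positive = 150_000      # non-cropland mapped as cropland (commission)
    false_negative = 50_000       # cropland mapped as non-cropland (omission)

    mapped_pixels = true_positive + false_positive
    true_pixels = true_positive + false_negative

    mapped_area = mapped_pixels * pixel_area_ha
    true_area = true_pixels * pixel_area_ha

    # With commission > omission, the pixel count overestimates the area;
    # the estimator is unbiased only when the two error types cancel.
    relative_bias = (mapped_area - true_area) / true_area
    print(f"{relative_bias:+.1%}")   # +10.5%
    ```

    The framework in the abstract asks, in effect, how much of a user's bias budget is consumed by fragmentation at a given resolution before any classifier bias of this kind is added.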

  14. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement.

  15. ESTIMATION OF GRASPING TORQUE USING ROBUST REACTION TORQUE OBSERVER FOR ROBOTIC FORCEPS

    OpenAIRE

    塚本, 祐介

    2015-01-01

    Abstract— In this paper, the estimation of the grasping torque of robotic forceps without the use of a force/torque sensor is discussed. To estimate the grasping torque when the robotic forceps driven by a rotary motor with a reduction gear grasps an object, a novel robust reaction torque observer is proposed. In the case where a conventional reaction force/torque observer is applied, the estimated torque includes not only the grasping torque, namely the reaction torque, but also t...

  16. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA...... estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using...

  17. Simple robust technique using time delay estimation for the control and synchronization of Lorenz systems

    International Nuclear Information System (INIS)

    Jin, Maolin; Chang, Pyung Hun

    2009-01-01

    This work presents two simple and robust techniques based on time delay estimation for the control and synchronization of chaotic systems, respectively. First, one of these techniques is applied to the control of a chaotic Lorenz system with both matched and mismatched uncertainties. The nonlinearities in the Lorenz system are cancelled by time delay estimation and the desired error dynamics are inserted. Second, the other technique is applied to the synchronization of the Lü system and the Lorenz system with uncertainties. The synchronization input consists of three elements that have transparent and clear meanings. Since time delay estimation enables very effective and efficient cancellation of disturbances and nonlinearities, the techniques turn out to be simple and robust. Numerical simulation results show fast, accurate and robust performance of the proposed techniques, thereby demonstrating their effectiveness for the control and synchronization of Lorenz systems.

  18. Power System Real-Time Monitoring by Using PMU-Based Robust State Estimation Method

    DEFF Research Database (Denmark)

    Zhao, Junbo; Zhang, Gexiang; Das, Kaushik

    2016-01-01

    Accurate real-time states provided by the state estimator are critical for reliable power system operation and control. This paper proposes a novel phasor measurement unit (PMU)-based robust state estimation method (PRSEM) to monitor a power system in real time under different operation conditions. A ...-based bad data (BD) detection method, which can handle the smearing effect and critical measurement errors, is presented. We evaluate PRSEM by using IEEE benchmark test systems and a realistic utility system. The numerical results indicate that, in short computation time, PRSEM can effectively track the system real-time states with good robustness and can address several kinds of BD.

  19. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau

    2009-03-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.

  20. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau; Chatterjee, Nilanjan; Carroll, Raymond J.

    2009-01-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
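The shrinkage step described above can be sketched in a few lines. This is an illustrative simplification, not the authors' exact empirical-Bayes estimator: the combined estimate moves from the model-dependent value toward the robust value as the observed disagreement between the two grows relative to its variance, where `var_diff` is an assumed plug-in variance of the difference.

```python
def eb_shrinkage(theta_robust, theta_model, var_diff):
    """Empirical-Bayes style shrinkage of a robust estimator toward a more
    precise, model-dependent one (sketch of the idea in the abstract).
    The weight grows with the observed disagreement relative to its variance,
    so the combined estimate stays near the model-based value when the data
    support the model assumptions, and moves toward the robust value otherwise."""
    d = theta_robust - theta_model
    k = d * d / (d * d + var_diff)   # shrinkage weight in [0, 1)
    return theta_model + k * d
```

For example, a small disagreement relative to `var_diff` leaves the estimate close to the model-based value, while a large one pulls it near the robust value.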

  1. Urban birds in the Sonoran Desert: estimating population density from point counts

    Directory of Open Access Journals (Sweden)

    Karina Johnston López

    2015-01-01

    We conducted bird surveys in Hermosillo, Sonora, using distance sampling to characterize detection functions at point-transects for native and non-native urban birds in a desert environment. From March to August 2013 we sampled 240 plots in the city and its surroundings; each plot was visited three times. Our purpose was to provide information for a rapid assessment of bird density in this region by using point counts. We identified 72 species, including six non-native species. Sixteen species had sufficient detections to accurately estimate the parameters of the detection functions. To illustrate the estimation of density from bird count data using our inferred detection functions, we estimated the density of the Eurasian Collared-Dove (Streptopelia decaocto) under two different levels of urbanization: highly urbanized (90-100% urban impact) and moderately urbanized (39-50% urban impact) zones. Density of S. decaocto in the highly-urbanized and moderately-urbanized zones was 3.97±0.52 and 2.92±0.52 individuals/ha, respectively. By using our detection functions, avian ecologists can efficiently reallocate the time and effort regularly spent estimating detection distances to increasing the number of sites surveyed and collecting other relevant ecological information.
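To illustrate how density follows from point counts once a detection function is known, here is a minimal sketch using a half-normal detection function, the standard form in distance sampling. The truncation radius `w` and scale `sigma` in the usage below are assumed values, not those fitted in the study.

```python
import math

def avg_detection_prob(w, sigma):
    """Average detection probability within truncation radius w for a point
    transect with half-normal detection g(r) = exp(-r^2 / (2 sigma^2)):
    P = (2/w^2) * Int_0^w r g(r) dr = (2 sigma^2 / w^2)(1 - exp(-w^2/(2 sigma^2)))."""
    s2 = sigma * sigma
    return 2.0 * s2 / (w * w) * (1.0 - math.exp(-w * w / (2.0 * s2)))

def density_per_ha(n_detections, n_points, n_visits, w, sigma):
    """Individuals per hectare: detections divided by the effective area
    surveyed (points x visits x circle area x detection probability)."""
    p = avg_detection_prob(w, sigma)
    effective_ha = n_points * n_visits * math.pi * w * w * p / 10000.0
    return n_detections / effective_ha
```

With a very large `sigma` (detection near-certain out to `w`), the effective area reduces to the plain circle area, which is a quick sanity check on the formula.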

  2. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.

    Science.gov (United States)

    Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing

    2018-03-07

    The unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distributions assumed a priori by the user and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimations using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current process and measurement noise covariances, respectively. By utilizing a weighting factor, the filter combines the previous noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimations are corrected according to the new noise covariance matrices and previous state estimations. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, as demonstrated by the simulation results.
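The innovation-based covariance adaptation described above can be sketched on a scalar random-walk Kalman filter (a full UKF is omitted for brevity; the chi-square gate and weighting factor `alpha` are assumed values, not those of the paper):

```python
def adaptive_kf(zs, q0=1.0, r0=1.0, alpha=0.3, gate=9.0):
    """Scalar random-walk Kalman filter with innovation-based adaptation of
    the measurement-noise variance r (sketch of the RAUKF idea).
    alpha: weighting factor blending the previous r with the new estimate.
    gate:  threshold on the normalized squared innovation that triggers
           the fault-detection/update step."""
    x, P, q, r = zs[0], 1.0, q0, r0
    xs = []
    for z in zs[1:]:
        P = P + q                          # predict
        v = z - x                          # innovation
        s = P + r
        if v * v / s > gate:               # mismatch detected
            r_hat = max(v * v - P, 1e-12)  # residual-style estimate of r
            r = (1.0 - alpha) * r + alpha * r_hat
            s = P + r
        k = P / s                          # update
        x = x + k * v
        P = (1.0 - k) * P
        xs.append(x)
    return xs, r
```

When a wildly inconsistent measurement arrives, the gate fires, `r` is inflated, and the state update is correspondingly damped rather than diverging toward the outlier.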

  3. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    Science.gov (United States)

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  4. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    Science.gov (United States)

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Robust state estimation for uncertain fuzzy bidirectional associative memory networks with time-varying delays

    Science.gov (United States)

    Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Arunkumar, A.

    2013-09-01

    This paper addresses the issue of robust state estimation for a class of fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays and parameter uncertainties. By constructing the Lyapunov-Krasovskii functional, which contains the triple-integral term and using the free-weighting matrix technique, a set of sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to estimate the neuron states through available output measurements such that the dynamics of the estimation error system is robustly asymptotically stable. In particular, we consider a generalized activation function in which the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. More precisely, the design of the state estimator for such BAM neural networks can be obtained by solving some LMIs, which are dependent on the size of the time derivative of the time-varying delays. Finally, a numerical example with simulation result is given to illustrate the obtained theoretical results.

  6. Robust state estimation for uncertain fuzzy bidirectional associative memory networks with time-varying delays

    International Nuclear Information System (INIS)

    Vadivel, P; Sakthivel, R; Mathiyalagan, K; Arunkumar, A

    2013-01-01

    This paper addresses the issue of robust state estimation for a class of fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays and parameter uncertainties. By constructing the Lyapunov–Krasovskii functional, which contains the triple-integral term and using the free-weighting matrix technique, a set of sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to estimate the neuron states through available output measurements such that the dynamics of the estimation error system is robustly asymptotically stable. In particular, we consider a generalized activation function in which the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. More precisely, the design of the state estimator for such BAM neural networks can be obtained by solving some LMIs, which are dependent on the size of the time derivative of the time-varying delays. Finally, a numerical example with simulation result is given to illustrate the obtained theoretical results. (paper)

  7. A Robust and Multi-Weighted Approach to Estimating Topographically Correlated Tropospheric Delays in Radar Interferograms

    Directory of Open Access Journals (Sweden)

    Bangyan Zhu

    2016-07-01

    Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS.
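The per-block robust correlation estimate can be illustrated with iteratively reweighted least squares using Huber weights. This is a sketch of generic robust least squares on one block, not the paper's expanded multi-weighted formulation; the slope, intercept, and outlier values in the test are assumed:

```python
import statistics

def robust_phase_elevation_ratio(phase, elev, iters=30):
    """Estimate K in phase ~ K*elev + c by iteratively reweighted least
    squares with Huber weights, so pixels affected by deformation or noise
    (outliers) are downweighted rather than biasing the ratio."""
    n = len(phase)
    w = [1.0] * n
    K = c = 0.0
    for _ in range(iters):
        # weighted least-squares solution for (K, c)
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, elev))
        sy = sum(wi * y for wi, y in zip(w, phase))
        sxx = sum(wi * x * x for wi, x in zip(w, elev))
        sxy = sum(wi * x * y for wi, x, y in zip(w, elev, phase))
        det = sw * sxx - sx * sx
        K = (sw * sxy - sx * sy) / det
        c = (sy - K * sx) / sw
        # re-weight by residual size (Huber), scaled by the robust MAD
        resid = [y - K * x - c for x, y in zip(elev, phase)]
        scale = max(1.4826 * statistics.median(abs(r) for r in resid), 1e-9)
        w = [1.0 if abs(r) <= 1.345 * scale else 1.345 * scale / abs(r)
             for r in resid]
    return K, c
```

A handful of grossly contaminated pixels would shift an ordinary least-squares slope noticeably; after a few reweighting passes their influence is nearly eliminated.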

  8. Robust Backlash Estimation for Industrial Drive-Train Systems—Theory and Validation

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2018-01-01

    Backlash compensation is used in modern machine-tool controls to ensure high-accuracy positioning. When wear of a machine causes the deadzone width to increase, high-accuracy control may be maintained if the deadzone is accurately estimated. Deadzone estimation is also an important parameter to indicate wear. ... Experiments were carried out on state-of-the-art Siemens equipment. The experiments validate the theory and show that expected performance and robustness to parameter uncertainties are both achieved.

  9. Robust experiment design for estimating myocardial β adrenergic receptor concentration using PET

    International Nuclear Information System (INIS)

    Salinas, Cristian; Muzic, Raymond F. Jr.; Ernsberger, Paul; Saidel, Gerald M.

    2007-01-01

    Myocardial β adrenergic receptor (β-AR) concentration can substantially decrease in congestive heart failure and significantly increase in chronic volume overload, such as in severe aortic valve regurgitation. Positron emission tomography (PET) with an appropriate ligand-receptor model can be used for noninvasive estimation of myocardial β-AR concentration in vivo. An optimal design of the experiment protocol, however, is needed for sufficiently precise estimates of β-AR concentration in a heterogeneous population. Standard methods of optimal design do not account for a heterogeneous population with a wide range of β-AR concentrations and other physiological parameters and consequently are inadequate. To address this, we have developed a methodology to design a robust two-injection protocol that provides reliable estimates of myocardial β-AR concentration in normal and pathologic states. A two-injection protocol of the high-affinity β-AR antagonist [18F]-(S)-fluorocarazolol was designed based on a computer-generated (or synthetic) population incorporating a wide range of β-AR concentrations. Timing and dosage of the ligand injections were optimally designed with a minimax criterion to provide the least bad β-AR estimates for the worst case in the synthetic population. This robust experiment design for PET was applied to experiments with pigs before and after β-AR upregulation by chemical sympathectomy. Estimates of β-AR concentration were found by minimizing the difference between the model-predicted and experimental PET data. With this robust protocol, estimates of β-AR concentration showed high precision in both normal and pathologic states. The increase in β-AR concentration after sympathectomy predicted noninvasively with PET is consistent with the increase shown by in vitro assays in pig myocardium. A robust experiment protocol was designed for PET that yields reliable estimates of β-AR concentration in a population with normal and pathologic states.

  10. Robust Estimation for a CSTR Using a High Order Sliding Mode Observer and an Observer-Based Estimator

    Directory of Open Access Journals (Sweden)

    Esteban Jiménez-Rodríguez

    2016-12-01

    This paper presents an estimation structure for a continuous stirred-tank reactor, comprising a sliding-mode observer-based estimator coupled with a high-order sliding-mode observer. The whole scheme allows the robust estimation of the state and some parameters, specifically the concentration of the reactive mass, the heat of reaction, and the global coefficient of heat transfer, by measuring the temperature inside the reactor and the temperature inside the jacket. To verify the results, a convergence proof of the proposed structure is given, and numerical simulations with noiseless and noisy measurements are presented, suggesting the applicability of the proposed approach.

  11. Detection of heart beats in multimodal data: a robust beat-to-beat interval estimation approach.

    Science.gov (United States)

    Antink, Christoph Hoog; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    The heart rate and its variability play a vital role in the continuous monitoring of patients, especially in the critical care unit. They are commonly derived automatically from the electrocardiogram as the interval between consecutive heart beats. While beat identification from QRS-complexes is straightforward under ideal conditions, exact localization can be a challenging task if the signal is severely contaminated with noise and artifacts. At the same time, other signals directly related to cardiac activity are often available. In this multi-sensor scenario, methods of multimodal sensor fusion allow the exploitation of redundancies to increase the accuracy and robustness of beat detection. In this paper, an algorithm for the robust detection of heart beats in multimodal data is presented. Classic peak detection is augmented by robust multi-channel, multimodal interval estimation to eliminate false detections and insert missing beats. This approach yielded a score of 90.70 and was thus ranked third in the follow-up analysis of the PhysioNet/Computing in Cardiology Challenge 2014: Robust Detection of Heart Beats in Multimodal Data. In the future, the robust beat-to-beat interval estimator may be used directly for automated processing of multimodal patient data in applications such as diagnosis support and intelligent alarming.
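The interval-based correction idea, eliminating false detections and inserting missing beats around a robust interval estimate, can be sketched for a single channel. The thresholds are assumed values, and the paper's multi-channel fusion is omitted:

```python
import statistics

def clean_beats(beats, tol=0.3):
    """Eliminate false detections and insert missing beats using a robust
    (median) beat-to-beat interval estimate; a simplified, single-channel
    sketch of interval-based beat correction.  beats: sorted times in s."""
    intervals = [b - a for a, b in zip(beats, beats[1:])]
    m = statistics.median(intervals)   # robust interval estimate
    out = [beats[0]]
    for b in beats[1:]:
        gap = b - out[-1]
        if gap < (1.0 - tol) * m:
            continue                   # too close: likely a false detection
        while gap > 1.5 * m:           # too far: fill in missing beats
            out.append(out[-1] + m)
            gap = b - out[-1]
        out.append(b)
    return out
```

On a candidate list with one spurious beat and one missed beat, the cleaned sequence recovers the regular rhythm.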

  12. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
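The ML estimator for Poisson observations can be sketched as a grid search. For an inhomogeneous Poisson process whose integrated rate is (approximately) independent of the shift tau, the log-likelihood reduces to the sum of log-intensities at the photon arrival times. The pulse shape, rates, and arrival times below are assumed example values, not the paper's:

```python
import math

def ml_arrival_time(arrivals, intensity, t_max, grid=400):
    """Grid-search maximum-likelihood estimate of the pulse arrival time tau:
    maximize sum_i log intensity(t_i - tau) over candidate shifts in [0, t_max)."""
    best_tau, best_ll = 0.0, -math.inf
    for k in range(grid):
        tau = t_max * k / grid
        ll = sum(math.log(intensity(t - tau)) for t in arrivals)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# Assumed example intensity: Gaussian pulse (peak 10 counts/s, width 0.3 s)
# on a constant 0.2 counts/s background, so the log is always defined.
def pulse(t):
    return 0.2 + 10.0 * math.exp(-t * t / (2.0 * 0.3 ** 2))
```

The background floor in the intensity is what keeps the likelihood finite for stray background photons far from the pulse; at very low signal power those stray photons are exactly what drives the threshold behavior the abstract describes.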

  13. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
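A minimal version of the modified (robust) Poisson approach, a Poisson GLM with log link fitted to a binary outcome so that exponentiated coefficients estimate relative risks, can be sketched with Newton-Raphson. The robust sandwich variance, which the method also requires in practice, is omitted from this sketch:

```python
import math

def solve(a, b):
    """Solve the small linear system a x = b by Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [v - f * u for v, u in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def modified_poisson(X, y, iters=25):
    """Poisson regression with a log link fitted to a binary outcome
    ("modified Poisson"); exp(beta_j) then estimates a relative risk."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        mu = [math.exp(sum(b * x for b, x in zip(beta, row))) for row in X]
        grad = [sum((yi - mi) * row[j] for yi, mi, row in zip(y, mu, X))
                for j in range(p)]
        hess = [[sum(mi * row[j] * row[k] for mi, row in zip(mu, X))
                 for k in range(p)] for j in range(p)]
        beta = [b + d for b, d in zip(beta, solve(hess, grad))]
    return beta
```

With an assumed two-group example (risk 0.4 exposed vs 0.2 unexposed), the fitted exposure coefficient exponentiates to the relative risk of 2.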

  14. Estimating temporary emigration and breeding proportions using capture-recapture data with Pollock's robust design

    Science.gov (United States)

    Kendall, W.L.; Nichols, J.D.; Hines, J.E.

    1997-01-01

    Statistical inference for capture-recapture studies of open animal populations typically relies on the assumption that all emigration from the studied population is permanent. However, there are many instances in which this assumption is unlikely to be met. We define two general models for the process of temporary emigration, completely random and Markovian. We then consider effects of these two types of temporary emigration on Jolly-Seber (Seber 1982) estimators and on estimators arising from the full-likelihood approach of Kendall et al. (1995) to robust design data. Capture-recapture data arising from Pollock's (1982) robust design provide the basis for obtaining unbiased estimates of demographic parameters in the presence of temporary emigration and for estimating the probability of temporary emigration. We present a likelihood-based approach to dealing with temporary emigration that permits estimation under different models of temporary emigration and yields tests for completely random and Markovian emigration. In addition, we use the relationship between capture probability estimates based on closed and open models under completely random temporary emigration to derive three ad hoc estimators for the probability of temporary emigration, two of which should be especially useful in situations where capture probabilities are heterogeneous among individual animals. Ad hoc and full-likelihood estimators are illustrated for small mammal capture-recapture data sets. We believe that these models and estimators will be useful for testing hypotheses about the process of temporary emigration, for estimating demographic parameters in the presence of temporary emigration, and for estimating probabilities of temporary emigration. These latter estimates are frequently of ecological interest as indicators of animal movement and, in some sampling situations, as direct estimates of breeding probabilities and proportions.
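The relationship behind the ad hoc estimators can be shown directly: under completely random temporary emigration, the open-model capture probability is the closed-model capture probability diluted by the probability gamma of being a temporary emigrant, p_open = (1 - gamma) * p_closed, giving the sketch below (a simplification of the estimators in the abstract, with the capture probabilities in the test assumed):

```python
def temporary_emigration(p_open, p_closed):
    """Ad hoc estimate of the temporary-emigration probability gamma from
    the relation p_open = (1 - gamma) * p_closed, which holds under
    completely random temporary emigration: animals temporarily off the
    study area dilute the open-model capture probability."""
    return 1.0 - p_open / p_closed
```

In sampling situations where absence from the study area means not breeding, 1 - gamma is a direct estimate of the breeding proportion.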

  15. Robust estimation of autoregressive processes using a mixture-based filter-bank

    Czech Academy of Sciences Publication Activity Database

    Šmídl, V.; Anthony, Q.; Kárný, Miroslav; Guy, Tatiana Valentine

    2005-01-01

    Roč. 54, č. 4 (2005), s. 315-323 ISSN 0167-6911 R&D Projects: GA AV ČR IBS1075351; GA ČR GA102/03/0049; GA ČR GP102/03/P010; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian estimation * probabilistic mixtures * recursive estimation Subject RIV: BC - Control Systems Theory Impact factor: 1.239, year: 2005 http://library.utia.cas.cz/separaty/historie/karny-robust estimation of autoregressive processes using a mixture-based filter- bank .pdf

  16. Can genetic estimators provide robust estimates of the effective number of breeders in small populations?

    Directory of Open Access Journals (Sweden)

    Marion Hoehn

    The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to obtain genetically based estimates of the inbreeding effective number of breeders, NbI(gen), and the variance effective population size, NeV(gen), from the genotype data. Two of these methods - a temporal moment-based approach (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two are single-sample estimators - the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetically based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those in which upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.

  17. Estimation non-paramétrique robuste pour données fonctionnelles

    OpenAIRE

    Crambes, Christophe; Delsol, Laurent; Laksaci, Ali

    2009-01-01

    Robust estimation provides an alternative to classical regression methods, for example when the observations are affected by outliers. Recently, such robust estimators have been considered for models with functional data. In this talk, we consider a robust regression model with a real-valued response variable and a functional explanatory variable. We define a nonparametric estimator...

  18. Robust estimation for homoscedastic regression in the secondary analysis of case-control data

    KAUST Repository

    Wei, Jiawei

    2012-12-04

    Primary analysis of case-control studies focuses on the relationship between disease D and a set of covariates of interest (Y, X). A secondary application of the case-control study, which is often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated owing to the case-control sampling, where the regression of Y on X is different from what it is in the population. Previous work has assumed a parametric distribution for Y given X and derived semiparametric efficient estimation and inference without any distributional assumptions about X. We take up the issue of estimation of a regression function when Y given X follows a homoscedastic regression model, but otherwise the distribution of Y is unspecified. The semiparametric efficient approaches can be used to construct semiparametric efficient estimates, but they suffer from a lack of robustness to the assumed model for Y given X. We take an entirely different approach. We show how to estimate the regression parameters consistently even if the assumed model for Y given X is incorrect, and thus the estimates are model robust. For this we make the assumption that the disease rate is known or well estimated. The assumption can be dropped when the disease is rare, which is typically so for most case-control studies, and the estimation algorithm simplifies. Simulations and empirical examples are used to illustrate the approach.

  19. Robust estimation for homoscedastic regression in the secondary analysis of case-control data

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Müller, Ursula U.; Keilegom, Ingrid Van; Chatterjee, Nilanjan

    2012-01-01

    Primary analysis of case-control studies focuses on the relationship between disease D and a set of covariates of interest (Y, X). A secondary application of the case-control study, which is often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated owing to the case-control sampling, where the regression of Y on X is different from what it is in the population. Previous work has assumed a parametric distribution for Y given X and derived semiparametric efficient estimation and inference without any distributional assumptions about X. We take up the issue of estimation of a regression function when Y given X follows a homoscedastic regression model, but otherwise the distribution of Y is unspecified. The semiparametric efficient approaches can be used to construct semiparametric efficient estimates, but they suffer from a lack of robustness to the assumed model for Y given X. We take an entirely different approach. We show how to estimate the regression parameters consistently even if the assumed model for Y given X is incorrect, and thus the estimates are model robust. For this we make the assumption that the disease rate is known or well estimated. The assumption can be dropped when the disease is rare, which is typically so for most case-control studies, and the estimation algorithm simplifies. Simulations and empirical examples are used to illustrate the approach.

  20. Efficient estimation of the robustness region of biological models with oscillatory behavior.

    Directory of Open Access Journals (Sweden)

    Mochamad Apri

    Robustness is an essential feature of biological systems, and any mathematical model that describes such a system should reflect this feature. Especially, persistence of oscillatory behavior is an important issue. A benchmark model for this phenomenon is the Laub-Loomis model, a nonlinear model for cAMP oscillations in Dictyostelium discoideum. This model captures the most important features of biomolecular networks oscillating at constant frequencies. Nevertheless, the robustness of its oscillatory behavior is not yet fully understood. Given a system that exhibits oscillating behavior for some set of parameters, the central question of robustness is how far the parameters may be changed such that the qualitative behavior does not change. The determination of such a "robustness region" in parameter space is an intricate task. If the number of parameters is high, it may also be time-consuming. In the literature, several methods have been proposed that partially tackle this problem. For example, some methods only detect particular bifurcations, or only find a relatively small box-shaped estimate for an irregularly shaped robustness region. Here, we present an approach that is much more general, and is especially designed to be efficient for systems with a large number of parameters. As an illustration, we apply the method first to a well-understood low-dimensional system, the Rosenzweig-MacArthur model. This is a predator-prey model featuring satiation of the predator. It has only two parameters and its bifurcation diagram is available in the literature. We find good agreement with the existing knowledge about this model. When we apply the new method to the high-dimensional Laub-Loomis model, we obtain a much larger robustness region than reported earlier in the literature. This clearly demonstrates the power of our method. From the results, we conclude that the underlying biological system is much more robust than was realized until now.
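The notion of expanding a region until the qualitative behavior breaks can be sketched with a toy property. The corner-checking shortcut below is only valid for properties monotone in each parameter, unlike the paper's general method, and the damped-oscillator criterion c^2 < 4mk is an assumed stand-in for oscillation persistence:

```python
from itertools import product

def robustness_box(nominal, has_property, grow=1.1, max_delta=10.0):
    """Estimate the half-width (as a relative perturbation) of the largest
    symmetric box around the nominal parameters within which a qualitative
    property persists, by expanding the box until a corner violates it.
    Checking only the 2^n corners is a shortcut that is valid when the
    property is monotone in each parameter."""
    delta = 0.01
    while delta < max_delta:
        corners = product(*[(p * (1 - delta), p * (1 + delta)) for p in nominal])
        if not all(has_property(c) for c in corners):
            return delta / grow   # last box whose corners all passed
        delta *= grow
    return delta
```

For m x'' + c x' + k x = 0 with nominal (m, c, k) = (1, 0.5, 1), oscillation (underdamping) persists until the relative perturbation reaches 0.6, where the worst corner (c up, m and k down) hits c^2 = 4mk.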

  1. Improving causal inference with a doubly robust estimator that combines propensity score stratification and weighting.

    Science.gov (United States)

    Linden, Ariel

    2017-08-01

    When a randomized controlled trial is not feasible, health researchers typically use observational data and rely on statistical methods to adjust for confounding when estimating treatment effects. These methods generally fall into 3 categories: (1) estimators based on a model for the outcome using conventional regression adjustment; (2) weighted estimators based on the propensity score (i.e., a model for the treatment assignment); and (3) "doubly robust" (DR) estimators that model both the outcome and propensity score within the same framework. In this paper, we introduce a new DR estimator that utilizes marginal mean weighting through stratification (MMWS) as the basis for weighted adjustment. This estimator may prove more accurate than existing treatment effect estimators because MMWS has been shown to be more accurate than other models when the propensity score is misspecified. We therefore compare the performance of this new estimator to other commonly used treatment effect estimators. Monte Carlo simulation is used to compare the DR-MMWS estimator to regression adjustment, 2 weighted estimators based on the propensity score and 2 other DR methods. To assess performance under varied conditions, we vary the level of misspecification of the propensity score model as well as misspecify the outcome model. Overall, DR estimators generally outperform methods that model only one of the components (e.g., the propensity score or the outcome). The DR-MMWS estimator outperforms all other estimators when both the propensity score and outcome models are misspecified and performs equally as well as other DR estimators when only the propensity score is misspecified. Health researchers should consider using DR-MMWS as the principal evaluation strategy in observational studies, as this estimator appears to outperform other estimators in its class. © 2017 John Wiley & Sons, Ltd.
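The MMWS weighting idea can be sketched numerically. The following is a minimal illustration on simulated data, not the authors' DR-MMWS implementation: units are stratified on the propensity score, and each unit with treatment z in stratum s gets weight n_s * Pr(Z = z) / n_{s,z}. The data-generating model, the true effect of 2, and quintile stratification are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                    # a single confounder
ps = 1 / (1 + np.exp(-x))                 # propensity score (known here for simplicity)
z = rng.binomial(1, ps)                   # treatment assignment
y = 2.0 * z + x + rng.normal(size=n)      # outcome; true treatment effect = 2

# Stratify on the propensity score (quintiles)
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))

# MMWS weight for treatment z in stratum s: n_s * Pr(Z = z) / n_{s,z}
w = np.empty(n)
pz = np.array([np.mean(z == 0), np.mean(z == 1)])
for s in np.unique(strata):
    in_s = strata == s
    for t in (0, 1):
        cell = in_s & (z == t)
        w[cell] = in_s.sum() * pz[t] / cell.sum()

ate = (np.average(y[z == 1], weights=w[z == 1])
       - np.average(y[z == 0], weights=w[z == 0]))
print(round(ate, 2))  # should land near the true effect of 2
```

A full DR version would additionally fit an outcome model and combine it with these weights.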

  2. A new method for robust video watermarking resistant against key estimation attacks

    Science.gov (United States)

    Mitekin, Vitaly

    2015-12-01

    This paper presents a new method for high-capacity robust digital video watermarking, along with algorithms for embedding and extracting a watermark based on this method. The proposed method uses password-based two-dimensional pseudonoise arrays for watermark embedding, making brute-force attacks aimed at steganographic key retrieval mostly impractical. The proposed algorithm for generating two-dimensional "noise-like" watermarking patterns also significantly decreases the watermark collision probability (i.e., the probability of correct watermark detection and extraction using an incorrect steganographic key or password). Experimental research provided in this work also shows that a simple correlation-based watermark detection procedure can be used, providing watermark robustness against lossy compression and watermark estimation attacks. At the same time, without decreasing the robustness of the embedded watermark, the average complexity of a brute-force key retrieval attack can be increased to 10^14 watermark extraction attempts (compared to 10^4-10^6 for known robust watermarking schemes). Experimental results also show that at the lowest embedding intensity the watermark preserves its robustness against lossy compression of the host video while preserving higher video quality (PSNR up to 51 dB) compared to known wavelet-based and DCT-based watermarking algorithms.

  3. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serially correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  4. Estimating the Robustness of Composite CBA and MCDA Assessments by Variation of Criteria Importance Order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    2011-01-01

    This paper discusses the concept of using rank variation concerning the stakeholder prioritising of importance criteria for exploring the sensitivity of criteria weights in multi-criteria analysis (MCA). Thereby the robustness of the MCA-based decision support can be tested. The analysis described is based on the fact that when using MCA as a decision-support tool, questions often arise about the weighting (or prioritising) of the included criteria. This part of the MCA is seen as the most subjective part and could give reasons for discussion among the decision makers or stakeholders. Furthermore, the relative weights can make a large difference in the resulting assessment of alternatives (Hobbs and Meier 2000). Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness...

  5. Estimating the robustness of composite CBA & MCA assessments by variation of criteria importance order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    This paper discusses the concept of using rank variation concerning the stakeholder prioritising of importance criteria for exploring the sensitivity of criteria weights in multi-criteria analysis (MCA). Thereby the robustness of the MCA-based decision support can be tested. The analysis described is based on the fact that when using MCA as a decision-support tool, questions often arise about the weighting (or prioritising) of the included criteria. This part of the MCA is seen as the most subjective part and could give reasons for discussion among the decision makers or stakeholders. Furthermore, the relative weights can make a large difference in the resulting assessment of alternatives [1]. Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness of weights used in additive utility...

  6. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    Science.gov (United States)

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
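The negative binomial's extra-Poisson variance, central to the framework above, can be illustrated with a quick Gamma-Poisson simulation. This is a toy sketch with made-up numbers, not the surveys' mixed-model analysis; the scaling parameter k is recovered from the moment relation Var = mu + mu^2/k.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, k = 5.0, 2.0                        # mean count and NB scaling parameter
n = 200_000

# Negative binomial as a Gamma-Poisson mixture:
# lambda ~ Gamma(shape=k, scale=mu/k), count ~ Poisson(lambda)
lam = rng.gamma(shape=k, scale=mu / k, size=n)
counts = rng.poisson(lam)

# Method-of-moments recovery: Var = mu + mu**2 / k  =>  k = mu**2 / (Var - mu)
m, v = counts.mean(), counts.var()
k_hat = m**2 / (v - m)
print(round(m, 2), round(k_hat, 2))     # near mu = 5 and k = 2
```

In the paper's setting the log-mean would additionally carry site, year, and site-by-year random effects; this sketch only shows where the overdispersion parameter comes from.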

  7. WTA estimates using the method of paired comparison: tests of robustness

    Science.gov (United States)

    Patricia A. Champ; John B. Loomis

    1998-01-01

    The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...

  8. Enumerating the Hidden Homeless: Strategies to Estimate the Homeless Gone Missing From a Point-in-Time Count

    Directory of Open Access Journals (Sweden)

    Agans Robert P.

    2014-06-01

    To receive federal homeless funds, communities are required to produce statistically reliable, unduplicated counts or estimates of homeless persons in sheltered and unsheltered locations during a one-night period (within the last ten days of January), called a point-in-time (PIT) count. In Los Angeles, a general population telephone survey was implemented to estimate the number of unsheltered homeless adults who are hidden from view during the PIT count. Two estimation approaches were investigated: (i) the number of homeless persons identified as living on private property, which employed a conventional household weight for the estimated total (Horvitz-Thompson approach); and (ii) the number of homeless persons identified as living on a neighbor's property, which employed an additional adjustment derived from the size of the neighborhood network to estimate the total (multiplicity-based approach). This article compares the results of these two methods and discusses the implications therein.
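With hypothetical numbers, the two estimators can be contrasted in a few lines. This is a sketch only; the survey's actual inclusion probabilities and weights are more involved.

```python
# Hypothetical mini-survey: 4 responding households sampled from N = 100,
# each with inclusion probability pi = 4/100.
pi = 4 / 100

# (i) Horvitz-Thompson: homeless persons seen on the respondent's own
# property, each weighted by 1/pi.
own_property = [1, 0, 2, 0]
ht_total = sum(y / pi for y in own_property)

# (ii) Multiplicity: persons reported on a neighbor's property are divided
# by m, the number of households in the network that could report them,
# before the same 1/pi weighting.
neighbor_reports = [(1, 5), (2, 4), (0, 6), (1, 5)]  # (count, network size m)
mult_total = sum((y / m) / pi for y, m in neighbor_reports)

print(round(ht_total, 1), round(mult_total, 1))  # 75.0 and 22.5 here
```

The division by the network size m is what prevents a person visible to several households from being counted several times.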

  9. Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System

    Directory of Open Access Journals (Sweden)

    Raulefs Ronald

    2004-01-01

    We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements depend on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading instead of a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate at high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes. The overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently than pure OFDMA. The application of rotated spreading in an MC-CDMA system allows diversity to be exploited even further. We demonstrate that the rotated spreading gain is still present with a robust pilot-aided channel estimator. In a well-designed system, rotated spreading improves performance by about 1 dB when using a maximum likelihood detector with robust channel estimation at the receiver.

  10. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    Science.gov (United States)

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  11. mBEEF-vdW: Robust fitting of error estimation density functionals

    DEFF Research Database (Denmark)

    Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes

    2016-01-01

    The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012); J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014)]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function...... catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show...

  12. Robust subspace estimation using low-rank optimization theory and applications

    CERN Document Server

    Oreifej, Omar

    2014-01-01

    Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. Increasing interest has recently been placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multiplier. In this book, the authors discuss fundame...

  13. A robust methodology for kinetic model parameter estimation for biocatalytic reactions

    DEFF Research Database (Denmark)

    Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson

    2012-01-01

    lead to globally optimized parameter values. In this article, a robust methodology to estimate parameters for biocatalytic reaction kinetic expressions is proposed. The methodology determines the parameters in a systematic manner by exploiting the best features of several of the current approaches...... parameters, which are strongly correlated with each other. State-of-the-art methodologies such as nonlinear regression (using progress curves) or graphical analysis (using initial rate data, for example, the Lineweaver-Burk plot, Hanes plot or Dixon plot) often incorporate errors in the estimates and rarely...

  14. Rapid bioassay method for estimation of 90Sr in urine samples by liquid scintillation counting

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Chaudhary, Seema; Sawant, Pramilla D.

    2018-01-01

    Radiostrontium (Sr) is a by-product of the nuclear fission of uranium and plutonium in nuclear reactors and is an important radionuclide in spent nuclear fuel and radioactive waste. Rapid bioassay methods are required for estimating Sr in urine following internal contamination. Decisions regarding medical intervention, if any, can be based upon the results of urinalysis. The method presently used at the Bioassay Laboratory, Trombay, is a solid extraction chromatography (SEC) technique. The Sr separated from the urine sample is precipitated as SrCO3 and analyzed gravimetrically. However, the gravimetric procedure is time consuming; therefore, in the present study, the feasibility of liquid scintillation counting for direct detection of radiostrontium in the effluent was explored. The results obtained in the present study were compared with those obtained using the gravimetric method.

  15. Robust and efficient parameter estimation in dynamic models of biological systems.

    Science.gov (United States)

    Gábor, Attila; Banga, Julio R

    2015-10-29

    Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.

  16. Evaluation of the robustness of estimating five components from a skin spectral image

    Science.gov (United States)

    Akaho, Rina; Hirose, Misa; Tsumura, Norimichi

    2018-04-01

    We evaluated the robustness of a method used to estimate five components (i.e., melanin, oxy-hemoglobin, deoxy-hemoglobin, shading, and surface reflectance) from the spectral reflectance of skin at five wavelengths against noise and a change in epidermis thickness. We also estimated the five components from recorded images of age spots and circles under the eyes using the method. We found that noise in the image must be no more than 0.1% to accurately estimate the five components and that the thickness of the epidermis affects the estimation. We acquired the distribution of major causes for age spots and circles under the eyes by applying the method to recorded spectral images.

  17. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  18. Robust Non-Local TV-L1 Optical Flow Estimation with Occlusion Detection.

    Science.gov (United States)

    Zhang, Congxuan; Chen, Zhen; Wang, Mingrun; Li, Ming; Jiang, Shaofeng

    2017-06-05

    In this paper, we propose a robust non-local TV-L1 optical flow method with occlusion detection to address the weak robustness of optical flow estimation under motion occlusion. Firstly, a TV-L1 form for flow estimation is defined using a combination of the brightness constancy and gradient constancy assumptions in the data term and by varying the weight under the Charbonnier function in the smoothing term. Secondly, to handle the potential risk of outliers in the flow field, a general non-local term is added to the TV-L1 optical flow model to yield the typical non-local TV-L1 form. Thirdly, an occlusion detection method based on triangulation is presented to detect the occlusion regions of the sequence. The proposed non-local TV-L1 optical flow model is solved in a linearizing iterative scheme using improved median filtering and a coarse-to-fine computing strategy. The results of comprehensive experiments indicate that the proposed method can overcome the significant influence of non-rigid motion, motion occlusion, and large displacement motion. Experiments comparing the proposed method with existing state-of-the-art methods on the Middlebury and MPI Sintel test sequences show that the proposed method has higher accuracy and better robustness.

  19. Estimation of State of Charge of Lithium-Ion Batteries Used in HEV Using Robust Extended Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Suleiman M. Sharkh

    2012-04-01

    A robust extended Kalman filter (EKF) is proposed as a method for estimation of the state of charge (SOC) of lithium-ion batteries used in hybrid electric vehicles (HEVs). An equivalent circuit model of the battery, including its electromotive force (EMF) hysteresis characteristics and polarization characteristics, is used. The effect of the robust EKF gain coefficient on SOC estimation is analyzed, and an optimized gain coefficient is determined to restrain the battery terminal voltage from fluctuating. Experimental and simulation results are presented. SOC estimates using the standard EKF are compared with the proposed robust EKF algorithm to demonstrate the accuracy and precision of the latter for SOC estimation.
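A minimal one-state EKF for SOC estimation can be sketched as follows. The OCV curve, noise levels, and battery parameters below are illustrative assumptions, not the paper's equivalent circuit model (which also includes EMF hysteresis and polarization states).

```python
import numpy as np

# Toy battery model (all parameters are invented for illustration)
Q = 2.0 * 3600          # capacity: 2 Ah expressed in coulombs
R = 0.05                # internal resistance (ohm)
dt, current = 1.0, 1.0  # 1 s steps, 1 A constant discharge

def ocv(s):             # open-circuit voltage vs SOC (toy nonlinear curve)
    return 3.2 + 0.5 * s + 0.2 * s**2

def docv(s):            # its derivative: the EKF measurement Jacobian
    return 0.5 + 0.4 * s

rng = np.random.default_rng(1)
soc_true, soc_est, P = 0.9, 0.5, 1.0    # filter deliberately badly initialized
Qproc, Rmeas = 1e-10, 0.005**2

for _ in range(600):
    soc_true -= current * dt / Q
    v = ocv(soc_true) - R * current + rng.normal(0, 0.005)  # terminal voltage
    # EKF predict (coulomb counting), then update on the voltage measurement
    soc_est -= current * dt / Q
    P += Qproc
    H = docv(soc_est)
    K = P * H / (H * P * H + Rmeas)
    soc_est += K * (v - (ocv(soc_est) - R * current))
    P *= 1 - K * H

print(round(soc_true, 3), round(soc_est, 3))  # estimate converges toward the truth
```

The robust variant in the paper tunes the gain coefficient to keep the estimate from chasing terminal-voltage fluctuations; here the standard EKF gain is shown.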

  20. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Science.gov (United States)

    Casas, R.; Marco, A.; Guerrero, J. J.; Falcó, J.

    2006-12-01

    Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors, which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.
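The least-median-of-squares idea can be sketched generically with a robust line fit on synthetic data containing gross positive outliers (standing in for NLOS-corrupted ranges). This is a textbook LMS random-sampling scheme, not the BLUPS system's code.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40
x = np.linspace(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, n)   # inlier line: slope 2, intercept 1
y[x > 7] += 15.0                            # gross positive "NLOS-like" errors

# Least median of squares: fit candidate lines through random point pairs and
# keep the one minimizing the median of squared residuals over all points.
best = (np.inf, 0.0, 0.0)
for _ in range(500):
    i, j = rng.choice(n, size=2, replace=False)
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    med = np.median((y - (a * x + b)) ** 2)
    if med < best[0]:
        best = (med, a, b)

_, a_lms, b_lms = best
a_ls, b_ls = np.polyfit(x, y, 1)            # ordinary least squares, for contrast
print(round(a_lms, 2), round(a_ls, 2))      # LMS stays near 2; LS is dragged up
```

Because the median ignores up to half the residuals, the fit survives the corrupted measurements that pull ordinary least squares away from the true line.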

  1. BROJA-2PID: A Robust Estimator for Bivariate Partial Information Decomposition

    Directory of Open Access Journals (Sweden)

    Abdullah Makkeh

    2018-04-01

    Makkeh, Theis, and Vicente found that a cone programming model is the most robust way to compute the Bertschinger et al. partial information decomposition (BROJA PID) measure. We developed production-quality robust software that computes the BROJA PID measure based on the cone programming model. In this paper, we prove the important property of strong duality for the cone program and prove an equivalence between the cone program and the original convex problem. Then, we describe our software in detail, explain how to use it, and perform some experiments comparing it to other estimators. Finally, we show that the software can be extended to compute some quantities of a trivariate PID measure.

  2. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.
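The core idea, that an instrument uncorrelated with the surrogate's measurement error recovers the slope that naive regression attenuates, can be sketched with simulated data. The simple Wald/IV ratio shown here is the standard estimator, not the authors' efficient combined estimator, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.normal(size=n)                    # true, unobserved exposure
w = x + rng.normal(0, 1.0, size=n)        # surrogate with classical measurement error
iv = x + rng.normal(0, 1.0, size=n)       # instrument: correlated with x, its error
                                          # independent of the surrogate's error
y = 1.5 * x + rng.normal(size=n)          # outcome; true slope 1.5

naive = np.cov(y, w)[0, 1] / np.var(w)    # attenuated: about 1.5 * Var(x)/Var(w) = 0.75
iv_slope = np.cov(y, iv)[0, 1] / np.cov(w, iv)[0, 1]  # IV ratio recovers ~1.5
print(round(naive, 2), round(iv_slope, 2))
```

The measurement error in w inflates its variance, biasing the naive slope toward zero; dividing covariances taken against the instrument cancels that error term.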

  3. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    Science.gov (United States)

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for the sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the method proposed in this work employs multiple sub-bands, selected from the wideband signals with high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.

  4. Modified generalized method of moments for a robust estimation of polytomous logistic model

    Directory of Open Access Journals (Sweden)

    Xiaoshan Wang

    2014-07-01

    The maximum likelihood estimation (MLE) method, typically used for polytomous logistic regression, is prone to bias due to both misclassification in the outcome and contamination in the design matrix. Hence, robust estimators are needed. In this study, we propose such a method for nominal response data with continuous covariates. A generalized method of weighted moments (GMWM) approach is developed for dealing with contaminated polytomous response data. In this approach, distances are calculated based on individual sample moments, and Huber weights are applied to those observations with large distances. Mallows-type weights are also used to downweight leverage points. We describe the theoretical properties of the proposed approach. Simulations suggest that the GMWM performs very well in correcting contamination-caused biases. An empirical application of the GMWM estimator to data from a survey demonstrates its usefulness.

  5. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data

    Directory of Open Access Journals (Sweden)

    Li Chen

    2018-04-01

    Normalization is the first critical step in microbiome sequencing data analysis used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose geometric mean of pairwise ratios—a simple but effective normalization method—for zero-inflated sequencing data such as microbiome data. Simulation studies and real datasets analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
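The geometric mean of pairwise ratios can be sketched directly from its definition. The following is a simplified reading of the method on a made-up count matrix, not the published GMPR implementation.

```python
import numpy as np

def gmpr_size_factors(counts):
    """Sketch of GMPR: for each pair of samples, take the median of count
    ratios over taxa observed in both; a sample's size factor is the
    geometric mean of its pairwise median ratios."""
    n = counts.shape[0]
    factors = np.zeros(n)
    for i in range(n):
        logs = []
        for j in range(n):
            if i != j:
                shared = (counts[i] > 0) & (counts[j] > 0)
                logs.append(np.log(np.median(counts[i, shared] / counts[j, shared])))
        factors[i] = np.exp(np.mean(logs))
    return factors

# Made-up zero-inflated counts: rows are samples, columns are taxa.
# Sample 1 is sample 0 sequenced at twice the depth; sample 2 at half depth.
counts = np.array([[10.0, 0, 4, 6, 0],
                   [20.0, 2, 8, 12, 0],
                   [5.0, 0, 2, 3, 0]])
sf = gmpr_size_factors(counts)
print(np.round(sf, 2))  # larger factors for deeper samples; product is ~1
```

By restricting each ratio to taxa shared between the two samples, the zeros that break geometric-mean normalizations such as DESeq's median-of-ratios never enter the calculation.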

  6. A simple and robust statistical framework for planning, analysing and interpreting faecal egg count reduction test (FECRT) studies

    DEFF Research Database (Denmark)

    Denwood, M.J.; McKendrick, I.J.; Matthews, L.

    Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has...... been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power...... that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple...
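The headline quantity in any FECRT analysis is the percentage reduction in mean egg count. As a rough sketch with invented numbers, it can be computed from group summary statistics; the delta-method interval below is a textbook approximation, not the authors' proposed hypothesis-testing framework.

```python
import math

# Invented summary statistics of faecal egg counts (eggs per gram):
n_c, mean_c, var_c = 20, 250.0, 9000.0   # control group
n_t, mean_t, var_t = 20, 10.0, 144.0     # treated group

# Observed efficacy: percentage reduction in mean egg count.
ratio = mean_t / mean_c
efficacy = 100.0 * (1.0 - ratio)

# Delta-method variance of the ratio of means -> approximate 95% interval.
var_ratio = ratio**2 * (var_t / (n_t * mean_t**2) + var_c / (n_c * mean_c**2))
half = 1.96 * 100.0 * math.sqrt(var_ratio)
print(f"efficacy {efficacy:.1f}%, approx 95% CI {efficacy - half:.1f} to {efficacy + half:.1f}")
```

The observed efficacy would then be compared against the expected efficacy under the two null hypotheses described in the abstract.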

  7. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    Science.gov (United States)

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose geometric mean of pairwise ratios-a simple but effective normalization method-for zero-inflated sequencing data such as microbiome data. Simulation studies and real datasets analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.

  8. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    Science.gov (United States)

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
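    The core numerical step, minimizing an objective over the six motion parameters with Levenberg-Marquardt, can be sketched with SciPy on a toy linearized flow model. The matrix `J` and the residual form below are illustrative stand-ins, not the paper's actual objective:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical linearized model: observed flow ~ J @ motion, where J stacks
# per-feature derivatives of optical flow w.r.t. the 6 motion parameters.
J = rng.normal(size=(50, 6))
true_motion = np.array([0.1, -0.05, 0.02, 0.01, -0.02, 0.3])
flow = J @ true_motion + rng.normal(scale=1e-3, size=50)

def residuals(motion):
    # Residual between predicted and observed optical flow
    return J @ motion - flow

# Levenberg-Marquardt minimization of the sum of squared residuals
fit = least_squares(residuals, x0=np.zeros(6), method="lm")
```

    With inlier-dominated residuals (which the KLT filtering, circle matching and RANSAC steps are there to ensure), `fit.x` recovers the motion parameters.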

  9. Estimating milk yield and value losses from increased somatic cell count on US dairy farms.

    Science.gov (United States)

    Hadrich, J C; Wolf, C A; Lombard, J; Dolak, T M

    2018-04-01

    Milk loss due to increased somatic cell counts (SCC) results in economic losses for dairy producers. This research uses 10 mo of consecutive dairy herd improvement data from 2013 and 2014 to estimate milk yield loss using SCC as a proxy for clinical and subclinical mastitis. A fixed effects regression was used to examine factors that affected milk yield while controlling for herd-level management. Breed, milking frequency, days in milk, seasonality, SCC, cumulative months with SCC greater than 100,000 cells/mL, lactation, and herd size were variables included in the regression analysis. The cumulative months with SCC above a threshold was included as a proxy for chronic mastitis. Milk yield loss increased as the number of test days with SCC ≥100,000 cells/mL increased. Results from the regression were used to estimate a monetary value of milk loss related to SCC as a function of cow and operation related explanatory variables for a representative dairy cow. The largest losses occurred from increased cumulative test days with a SCC ≥100,000 cells/mL, with daily losses of $1.20/cow per day in the first month to $2.06/cow per day in mo 10. Results demonstrate the importance of including the duration of months above a threshold SCC when estimating milk yield losses. Cows with chronic mastitis, measured by increased consecutive test days with SCC ≥100,000 cells/mL, resulted in higher milk losses than cows with a new infection. This provides farm managers with a method to evaluate the trade-off between treatment and culling decisions as it relates to mastitis control and early detection. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, robust and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    Science.gov (United States)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures
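    A few of the listed estimators are easy to reproduce in numpy. This sketch computes a trimmed mean, the median absolute deviation, and Geary's ratio; it only illustrates the statistics, not the FORTRAN-77 package itself:

```python
import numpy as np

def location_scale_summary(x, trim=0.2):
    """A few of the location/scale/normality statistics the package computes
    (a numpy sketch, not the FORTRAN implementation)."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(trim * len(x))                          # values trimmed per tail
    trimmed_mean = x[k:len(x) - k].mean()           # robust L-estimate of location
    mad = np.median(np.abs(x - np.median(x)))       # robust scale (MAD)
    # Geary's ratio: mean deviation / standard deviation (normality check)
    geary = np.mean(np.abs(x - x.mean())) / x.std(ddof=1)
    return trimmed_mean, mad, geary

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 55.0])  # one gross outlier
tmean, mad, geary = location_scale_summary(data)
```

    Here the trimmed mean stays near 10 and the MAD near 0.1 despite the outlier at 55, whereas the ordinary mean and standard deviation are dragged far away, which is exactly why the package offers both families of estimators.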

  11. Binomial distribution of Poisson statistics and tracks overlapping probability to estimate total tracks count with low uncertainty

    International Nuclear Information System (INIS)

    Khayat, Omid; Afarideh, Hossein; Mohammadnia, Meisam

    2015-01-01

    In solid state nuclear track detectors of the chemically etched type, nuclear tracks whose center-to-center distance is shorter than twice the track radius emerge as overlapping tracks. Track overlapping in this type of detector causes track count losses, which become rather severe at high track densities. Therefore, track counting under these conditions should include a correction factor for the count losses of different track overlapping orders, since a number of overlapping tracks may be counted as one track. Another aspect of the problem is the case where imaging the whole area of the detector and counting all tracks is not possible. Under these conditions, a statistical generalization method is desired that is applicable to counting a segmented area of the detector, such that the results can be generalized to the whole surface of the detector. There is also a challenge in counting densely overlapped tracks, because insufficient geometric or contextual information is available. In this paper we present a statistical counting method which gives the user a relation between the track overlapping probabilities on a segmented area of the detector surface and the total number of tracks. To apply the proposed method, one can estimate the total number of tracks on a solid state detector of arbitrary shape and dimensions by approximating the average track area, the whole detector surface area, and some orders of track overlapping probabilities. It will be shown that this method is applicable to high and ultra-high density track images and that the count loss error can be mitigated using a statistical generalization approach. - Highlights: • A correction factor for count losses of different tracks overlapping orders. • For the cases imaging the whole area of the detector is not possible. • Presenting a statistical generalization method for segmented areas. • Giving a relation between the tracks overlapping probabilities and the total tracks
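    The overlap probabilities the method relies on can be illustrated by simulation. The sketch below drops tracks as random disks on a square segment and estimates the fraction involved in at least one overlap (center distance under twice the radius); it ignores boundary effects and is not the paper's analytical model:

```python
import numpy as np

rng = np.random.default_rng(1)

def overlap_fraction(n_tracks, radius, side, trials=200):
    """Monte Carlo estimate of the fraction of tracks that overlap at least
    one neighbour (centre-to-centre distance < 2*radius) when n_tracks are
    placed uniformly at random on a square segment of the given side."""
    fractions = []
    for _ in range(trials):
        pts = rng.uniform(0, side, size=(n_tracks, 2))
        # pairwise centre distances; ignore self-distances on the diagonal
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        fractions.append(np.mean(d.min(axis=1) < 2 * radius))
    return float(np.mean(fractions))

low = overlap_fraction(20, radius=1.0, side=100.0)    # low track density
high = overlap_fraction(200, radius=1.0, side=100.0)  # high track density
```

    The overlap fraction grows quickly with track density, which is why counted clusters increasingly under-report the true total and a statistical correction becomes necessary.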

  12. Robust Pose Estimation using the SwissRanger SR-3000 Camera

    DEFF Research Database (Denmark)

    Gudmundsson, Sigurjon Arni; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2007-01-01

    In this paper a robust method is presented to classify and estimate an object's pose from a real-time range image and a low-dimensional model. The model is made from a range image training set whose dimensionality is reduced by a nonlinear manifold learning method named Local Linear Embedding (LLE)...... New range images are then projected to this model, giving the low-dimensional coordinates of the object pose in an efficient manner. The range images are acquired by a state-of-the-art SwissRanger SR-3000 camera, making the projection process work in real time....
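    The modeling step can be sketched with scikit-learn's LLE on synthetic stand-in data; the 64-dimensional "range images" varying with a single pose angle are invented for illustration and are not the SR-3000 training set:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Stand-in for range-image training data: 200 "images" (flattened to 64-D)
# that vary smoothly with one pose angle, plus a little sensor noise.
angles = np.linspace(0, np.pi, 200)
basis = rng.normal(size=(2, 64))
X = np.cos(angles)[:, None] * basis[0] + np.sin(angles)[:, None] * basis[1]
X += rng.normal(scale=0.01, size=X.shape)

# Build the low-dimensional pose model from the training set
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
embedded = lle.fit_transform(X)

# Project "new" range images onto the learned model
new_coords = lle.transform(X[:5])
```

    The projection of a new image is cheap (a local linear reconstruction), which is what makes the real-time use described in the abstract plausible.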

  13. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    Science.gov (United States)

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high-dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers than the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other, less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
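    The idea of combining a robust correlation with thresholding can be sketched as follows. The universal threshold sqrt(log p / n) and the rank-based (Spearman) correlation are illustrative simplifications of the paper's adaptive estimator:

```python
import numpy as np

def thresholded_spearman(X, c=1.0):
    """Sketch of a regularized robust correlation estimate: a rank-based
    (Spearman) correlation matrix with entries below a universal threshold
    c * sqrt(log(p)/n) set to zero. The published estimator is adaptive and
    more sophisticated; this only illustrates the idea."""
    n, p = X.shape
    ranks = X.argsort(axis=0).argsort(axis=0).astype(float)
    R = np.corrcoef(ranks, rowvar=False)        # Spearman = Pearson on ranks
    lam = c * np.sqrt(np.log(p) / n)            # universal threshold
    R_thr = np.where(np.abs(R) >= lam, R, 0.0)  # kill small (spurious) entries
    np.fill_diagonal(R_thr, 1.0)
    return R_thr

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))                   # 100 samples, 8 "genes"
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=100)  # one genuine correlation
R = thresholded_spearman(X)
```

    The strong, real correlation between the first two columns survives the threshold, while most chance correlations between independent columns are zeroed out, which is the filtering behaviour the abstract describes.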

  14. Age Estimation Robust to Optical and Motion Blurring by Deep Residual CNN

    Directory of Open Access Journals (Sweden)

    Jeon Seong Kang

    2018-04-01

    Full Text Available Recently, real-time human age estimation based on facial images has been applied in various areas. Underneath this phenomenon lies an awareness that age estimation plays an important role in applying big data to target marketing for age groups, product demand surveys, consumer trend analysis, etc. However, in a real-world environment, various optical and motion blurring effects can occur. Such effects usually cause a problem in fully capturing facial features such as wrinkles, which are essential to age estimation, thereby degrading accuracy. Most of the previous studies on age estimation were conducted on input images almost free from blurring effects. To overcome this limitation, we propose the use of a deep ResNet-152 convolutional neural network for age estimation, which is robust to various optical and motion blurring effects of visible light camera sensors. We performed experiments with various optically and motion-blurred images created from the park aging mind laboratory (PAL) and craniofacial longitudinal morphological face database (MORPH) databases, which are publicly available. According to the results, the proposed method exhibited better age estimation performance than the previous methods.

  15. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and SPSS.

    Science.gov (United States)

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
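    Stripped of the weighting and small-sample corrections, the robust (sandwich) variance idea behind those macros can be sketched in numpy. This unweighted version is only an illustration of clustering effect sizes by study, not the Stata/SPSS implementation:

```python
import numpy as np

def cluster_robust_meta(y, X, cluster):
    """Unweighted meta-regression with a cluster-robust (sandwich) variance.
    A bare-bones illustration of robust variance estimation for dependent
    effect sizes; the published macros add inverse-variance-style weights
    and small-sample corrections."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: sum of within-study residual cross-products
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(cluster):
        Xc, ec = X[cluster == c], resid[cluster == c]
        meat += Xc.T @ np.outer(ec, ec) @ Xc
    V = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(3)
cluster = np.repeat(np.arange(10), 4)      # 10 studies, 4 effect sizes each
X = np.column_stack([np.ones(40), rng.normal(size=40)])
study_effect = rng.normal(scale=0.3, size=10)[cluster]  # within-study dependence
y = 0.5 + 0.2 * X[:, 1] + study_effect + rng.normal(scale=0.1, size=40)
beta, se = cluster_robust_meta(y, X, cluster)
```

    Because residuals are cross-multiplied only within each study, the standard errors remain valid even though the four effect sizes per study are correlated.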

  16. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    Science.gov (United States)

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate these values as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with other methods for both types of data series, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE
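    As a point of comparison, the simplest of the baselines mentioned, KNN imputation, can be written from scratch in a few lines. This is a sketch of the baseline only; CMVE, BPCA and LSImpute are considerably more involved:

```python
import numpy as np

def knn_impute(X, k=3):
    """Minimal K-nearest-neighbour imputation (one of the baselines CMVE is
    compared against). Missing entries are NaN; each is filled with the mean
    of that column over the k rows closest on the shared observed columns."""
    X = X.astype(float).copy()
    missing = np.isnan(X)
    filled = X.copy()
    for i, j in zip(*np.where(missing)):
        dists = []
        for r in range(X.shape[0]):
            if r == i or np.isnan(X[r, j]):
                continue                      # neighbour must observe column j
            shared = ~missing[i] & ~missing[r]
            if shared.any():
                dists.append((np.mean((X[i, shared] - X[r, shared]) ** 2), r))
        neighbours = [r for _, r in sorted(dists)[:k]]
        filled[i, j] = np.mean([X[r, j] for r in neighbours])
    return filled

X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.1],
              [1.0, 2.1, 2.9],
              [5.0, 9.0, 8.0]])
X_hat = knn_impute(X, k=2)
```

    Here the missing entry is estimated from the two nearby expression profiles rather than the distant fourth row, which is the behaviour NRMS-error comparisons of such imputers measure.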

  17. More recent robust methods for the estimation of mean and standard deviation of data

    International Nuclear Information System (INIS)

    Kanisch, G.

    2003-01-01

    Outliers in a data set result in biased values of mean and standard deviation. One way to improve the estimation of a mean is to apply tests to identify outliers and to exclude them from the calculations. Tests according to Grubbs or to Dixon, which are frequently used in practice, especially within laboratory intercomparisons, are not very efficient in identifying outliers. For more than ten years now, so-called robust methods have been used more and more; these determine mean and standard deviation by iteration, down-weighting values far from the mean and thereby diminishing the impact of outliers. In 1989 the Analytical Methods Committee of the Royal Society of Chemistry published such a robust method. Since 1993 the US Environmental Protection Agency has published a more efficient and quite versatile method. Mean and standard deviation are calculated by iteration and application of a special weight function for down-weighting outlier candidates. In 2000, W. Cofino et al. published a very efficient robust method which works quite differently from the others. It applies methods taken from the basics of quantum mechanics, such as ''wave functions'' associated with each laboratory mean value, and matrix algebra (solving eigenvalue problems). In contrast to the other methods, it includes the individual measurement uncertainties. (orig.)
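    The iterate-and-down-weight scheme these methods share can be sketched with Tukey's biweight. The tuning constant and scale estimate below are illustrative choices, not those of the AMC, EPA or Cofino procedures:

```python
import numpy as np

def biweight_mean(x, c=6.0, tol=1e-8, max_iter=50):
    """Iteratively reweighted robust mean with Tukey's biweight: start at the
    median, down-weight values far from the current mean, and repeat until
    the estimate stabilizes."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    for _ in range(max_iter):
        s = np.median(np.abs(x - mu)) or np.std(x)  # MAD scale (std fallback)
        u = (x - mu) / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u ** 2) ** 2, 0.0)  # biweight
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

values = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 25.0])  # one outlier
robust = biweight_mean(values)
plain = values.mean()
```

    The outlier at 25 receives zero weight and the robust mean stays at 10.0, while the ordinary mean is pulled above 12; no explicit Grubbs or Dixon rejection step is needed.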

  18. Clutch pressure estimation for a power-split hybrid transmission using nonlinear robust observer

    Science.gov (United States)

    Zhou, Bin; Zhang, Jianwu; Gao, Ji; Yu, Haisheng; Liu, Dong

    2018-06-01

    For a power-split hybrid transmission, using the brake clutch to realize the transition from electric drive mode to hybrid drive mode is an available strategy. Since the pressure information of the brake clutch is essential for the mode transition control, this research designs a nonlinear robust reduced-order observer to estimate the brake clutch pressure. Model uncertainties or disturbances are considered as additional inputs, and the observer is designed so that the error dynamics are input-to-state stable. The nonlinear characteristics of the system are expressed as lookup tables in the observer. Moreover, the gain matrix of the observer is solved by two optimization procedures under the constraints of linear matrix inequalities. The proposed observer is validated by offline simulation and online testing; the results show that the observer achieves significant performance during the mode transition, as the estimation error is within a reasonable range and, more importantly, is asymptotically stable.

  19. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2015-08-01

    Full Text Available Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring is usually introduced into face images by the movement of the camera sensor and/or the movement of the face during image acquisition. Therefore, the facial features in captured images can be distorted according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ it.

  20. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine-detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine-detail preservation even at high mixed-noise density. It performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from a visual point of view and in terms of Peak Signal-to-Noise Ratio, Mean Square Error and Image Enhancement Factor.

  1. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Full Text Available Abstract Background In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML and MrBayes) reveals seven well-supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover the accuracy is significantly improved as shown by parametric bootstrap.

  2. Estimation and robust control of microalgae culture for optimization of biological fixation of CO2

    International Nuclear Information System (INIS)

    Filali, R.

    2012-01-01

    This thesis deals with the optimization of carbon dioxide consumption by microalgae. Indeed, following several current environmental issues primarily related to large emissions of CO2, it is shown that microalgae represent a very promising solution for CO2 mitigation. From this perspective, we are interested in the optimization strategy of CO2 consumption through the development of a robust control law. The main aim is to ensure optimal operating conditions for a Chlorella vulgaris culture in an instrumented photo-bioreactor. The thesis is based on three major axes. The first one concerns growth modeling of the selected species based on a mathematical model reflecting the influence of light and total inorganic carbon concentration. For the control context, the second axis is related to biomass estimation from the real-time measurement of dissolved carbon dioxide. This step is necessary for the control part due to the lack of affordable real-time sensors for this kind of measurement. Three observer structures have been studied and compared: an extended Kalman filter, an asymptotic observer and an interval observer. The last axis deals with the implementation of a non-linear predictive control law coupled to the estimation strategy for the regulation of the cellular concentration around a value which maximizes the CO2 consumption. Performance and robustness of this control law have been validated in simulation and experimentally on a laboratory-scale instrumented photo-bioreactor. This thesis represents a preliminary study for the optimization of the CO2 mitigation strategy by microalgae. (author)

  3. National South African HIV prevalence estimates robust despite substantial test non-participation

    Directory of Open Access Journals (Sweden)

    Guy Harling

    2017-07-01

    Full Text Available Background. South African (SA) national HIV seroprevalence estimates are of crucial policy relevance in the country, and for the worldwide HIV response. However, the most recent nationally representative HIV test survey in 2012 had 22% test non-participation, leaving the potential for substantial bias in current seroprevalence estimates, even after controlling for selection on observed factors. Objective. To re-estimate national HIV prevalence in SA, controlling for bias due to selection on both observed and unobserved factors in the 2012 SA National HIV Prevalence, Incidence and Behaviour Survey. Methods. We jointly estimated regression models for consent to test and HIV status in a Heckman-type bivariate probit framework. As the selection variable, we used assigned interviewer identity, a variable known to predict consent but highly unlikely to be associated with interviewees’ HIV status. From these models, we estimated the HIV status of interviewed participants who did not test. Results. Of 26 710 interviewed participants who were invited to test for HIV, 21.3% of females and 24.3% of males declined. Interviewer identity was strongly correlated with consent to test for HIV; declining a test was weakly associated with HIV serostatus. Our HIV prevalence estimates were not significantly different from those using standard methods to control for bias due to selection on observed factors: 15.1% (95% confidence interval (CI) 12.1 - 18.6) v. 14.5% (95% CI 12.8 - 16.3) for 15 - 49-year-old males; 23.3% (95% CI 21.7 - 25.8) v. 23.2% (95% CI 21.3 - 25.1) for 15 - 49-year-old females. Conclusion. The most recent SA HIV prevalence estimates are robust under the strongest available test for selection bias due to missing data. Our findings support the reliability of inferences drawn from such data.

  4. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of
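    The jitter / logit-transform / back-transform pipeline, and the quantile equivariance it relies on, can be sketched with numpy. The bounds (0, 4) and the jitter range are illustrative choices for counts in {0,...,3}, and the linear quantile regression step itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

lower, upper = 0.0, 4.0                      # bounds enclosing the jittered counts

def logit_b(y):
    # Logit transform bounding y between lower and upper
    return np.log((y - lower) / (upper - y))

def inv_logit_b(z):
    # Inverse of logit_b, mapping the real line back to (lower, upper)
    return (lower + upper * np.exp(z)) / (1.0 + np.exp(z))

counts = rng.integers(0, 4, size=500)        # fledgling counts in {0,...,3}
# Jitter to a continuous variable strictly inside (lower, upper)
y_jit = counts + rng.uniform(0.01, 0.99, size=500)

z = logit_b(y_jit)                           # fit quantile regression on this scale
idx = int(0.9 * len(z))                      # order statistic near the 0.9 quantile
q90_z = np.sort(z)[idx]
q90 = inv_logit_b(q90_z)                     # back-transform the estimated quantile
q90_direct = np.sort(y_jit)[idx]             # same quantile taken directly
q90_count = int(np.floor(q90))               # back to the discrete count scale
```

    Because the logit transform is strictly monotone, the back-transformed quantile coincides with the quantile of the jittered data, which is the equivariance property the model exploits; in the full method the quantile is modeled as a linear function of covariates and the jitter-and-fit steps are repeated and averaged.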

  5. Robustness of a Neural Network Model for Power Peak Factor Estimation in Protection Systems

    International Nuclear Information System (INIS)

    Souza, Rose Mary G.P.; Moreira, Joao M.L.

    2006-01-01

    This work presents results of a robustness verification of artificial neural network correlations that improve the real-time prediction of the power peak factor for reactor protection systems. The input variables considered in the correlation are those available in the reactor protection systems, namely, the axial power differences obtained from measured ex-core detectors, and the position of control rods. The correlations, based on radial basis function (RBF) and multilayer perceptron (MLP) neural networks, estimate the power peak factor, without faulty signals, with average errors of 0.13%, 0.19% and 0.15%, and a maximum relative error of 2.35%. The robustness verification was performed for three different neural network correlations. The results show that they are robust against signal degradation, producing results with faulty signals with a maximum error of 6.90%. The average error associated with faulty signals for the MLP network is about half that of the RBF network, and the maximum error is about 1% smaller. These results demonstrate that the MLP neural network correlation is more robust than the RBF neural network correlation. The results also show that the input variables present redundant information. The axial power difference signals compensate for the faulty signal for the position of a given control rod, and improve the results by about 10%. The results show that the errors in the power peak factor estimation by these neural network correlations, even in faulty conditions, are smaller than those of current PWR schemes, which may have uncertainties as high as 8%. Considering the maximum relative error of 2.35%, these neural network correlations would allow decreasing the power peak factor safety margin by about 5%. Such a reduction could be used for operating the reactor at a higher power level or with more flexibility. The neural network correlation has to meet requirements of high-integrity software that performs safety grade actions. It is shown that the

  6. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    Directory of Open Access Journals (Sweden)

    Wenyan Ci

    2016-10-01

    Full Text Available Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle-matching step follows to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.

  7. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    Science.gov (United States)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes v_ij = (x_j - x_i)/(t_j - t_i) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
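
The core of the MIDAS recipe described above (slopes from data pairs separated by one year, a first median, trimming of one-sided outliers, and a second median) can be sketched in a few lines. This is a simplified illustration, not the published algorithm: the function name, the one-year tolerance `tol`, and the MAD-based trimming threshold are assumptions, and the gap-relaxation and the uncertainty estimate are omitted.

```python
import statistics

def midas_velocity(times, values, tol=0.01):
    """MIDAS-style trend: median of slopes over data pairs separated by
    about one year, with outlier trimming and a second median pass.
    times are in years; tol is the allowed deviation from a 1-year pair."""
    slopes = []
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            dt = times[j] - times[i]
            if abs(dt - 1.0) <= tol:
                slopes.append((values[j] - values[i]) / dt)
    v0 = statistics.median(slopes)
    # Slopes spanning a step are one-sided outliers; trim and re-median.
    mad = statistics.median(abs(s - v0) for s in slopes)
    kept = [s for s in slopes if abs(s - v0) <= 2.0 * 1.4826 * mad]
    return statistics.median(kept)
```

With weekly sampling, pairs 52 weeks apart fall within the default tolerance, and a mid-series step discontinuity leaves the trimmed median at the underlying rate.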

  8. Global robust stability of delayed neural networks: Estimating upper limit of norm of delayed connection weight matrix

    International Nuclear Information System (INIS)

    Singh, Vimal

    2007-01-01

    The question of estimating the upper limit of ||B||_2, which is a key step in some recently reported global robust stability criteria for delayed neural networks, is revisited (B denotes the delayed connection weight matrix). Recently, Cao, Huang, and Qu have given an estimate of the upper limit of ||B||_2. In the present paper, an alternative estimate of the upper limit of ||B||_2 is highlighted. It is shown that the alternative estimate may yield some new global robust stability results.
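
As a concrete aside, for any given delayed connection weight matrix B the quantity ||B||_2 (its largest singular value) can be computed numerically, e.g. by power iteration on BᵀB. The sketch below is generic numerical linear algebra with assumed names; it is independent of the particular analytic bounds compared in the paper.

```python
import math

def spectral_norm(B, iters=200):
    """Estimate ||B||_2 (largest singular value of B) by power iteration
    on the Gram matrix B^T B. Pure-Python sketch for small matrices;
    B is given as a list of rows."""
    n = len(B[0])
    # Gram matrix G = B^T B
    G = [[sum(B[k][i] * B[k][j] for k in range(len(B))) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(G[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient: largest eigenvalue of B^T B, so take its sqrt.
    lam = sum(v[i] * sum(G[i][j] * v[j] for j in range(n)) for i in range(n))
    return math.sqrt(lam)
```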

  9. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
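
The kind of power analysis quoted above can be illustrated with the standard two-sample normal-approximation sample-size formula. This is a hedged sketch with assumed names and defaults (z-quantiles for a two-sided α=0.05 test at 80% power), not the ADNI pipeline's exact computation.

```python
import math

def n_per_arm(sd_change, mean_change, reduction=0.25,
              z_alpha=1.96, z_power=0.8416):
    """Subjects per arm needed to detect a fractional `reduction` in the
    mean rate of change, using the two-sample normal approximation:
        n = 2 * (sd * (z_alpha + z_power) / delta)^2,
    where delta = reduction * |mean_change|."""
    delta = reduction * abs(mean_change)
    n = 2.0 * (sd_change * (z_alpha + z_power) / delta) ** 2
    return math.ceil(n)
```

For example, a change measure whose standard deviation equals half its mean needs about 63 subjects per arm to detect a 25% slowing; halving the measurement noise cuts that to 16.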

  10. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    Directory of Open Access Journals (Sweden)

    Marco A

    2006-01-01

    Full Text Available Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.
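
The least-median-of-squares idea can be sketched for 2-D multilateration: solve position candidates from minimal subsets of three anchors and keep the candidate whose median squared range residual over all anchors is smallest, so a minority of NLOS-biased ranges is outvoted. All names and the toy geometry below are illustrative, not the BLUPS system.

```python
import itertools
import math

def solve3(anchors, dists):
    """Position from exactly three (anchor, distance) pairs, obtained by
    subtracting the first circle equation from the other two (linearization).
    Returns None for (near-)collinear anchor triples."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        return None
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def lmeds_position(anchors, dists):
    """Least-median-of-squares: candidates from every 3-anchor subset; keep
    the one with the smallest median squared range residual over all anchors."""
    best, best_med = None, float("inf")
    for idx in itertools.combinations(range(len(anchors)), 3):
        p = solve3([anchors[i] for i in idx], [dists[i] for i in idx])
        if p is None:
            continue
        res = sorted((math.hypot(p[0] - ax, p[1] - ay) - d) ** 2
                     for (ax, ay), d in zip(anchors, dists))
        med = res[len(res) // 2]
        if med < best_med:
            best_med, best = med, p
    return best
```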

  11. Robust seismicity forecasting based on Bayesian parameter estimation for epidemiological spatio-temporal aftershock clustering models.

    Science.gov (United States)

    Ebrahimian, Hossein; Jalayer, Fatemeh

    2017-08-29

    In the immediate aftermath of a strong earthquake and in the presence of an ongoing aftershock sequence, scientific advisories in terms of seismicity forecasts play quite a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short term. We propose robust forecasting of seismicity based on the ETAS model, by exploiting the link between Bayesian inference and Markov chain Monte Carlo simulation. The methodology considers the uncertainty not only in the model parameters, conditioned on the available catalogue of events that occurred before the forecasting interval, but also in the sequence of events that are going to happen during the forecasting interval. We demonstrate the methodology by retrospective early forecasting of seismicity associated with the 2016 Amatrice seismic sequence in central Italy. We provide robust spatio-temporal short-term seismicity forecasts with various time intervals in the first few days elapsed after each of the three main events within the sequence, which can predict the seismicity within plus/minus two standard deviations from the mean estimate within the few hours elapsed after the main event.
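
The Bayesian/MCMC ingredient can be illustrated on a deliberately reduced problem: sampling the posterior of a homogeneous Poisson event rate with a Metropolis random walk. This toy (all names, the flat prior, and the proposal width are assumptions) stands in for the paper's sampling of full ETAS parameter vectors and forecast event sequences.

```python
import math
import random

def sample_rate_posterior(n_events, duration, n_samples=5000, seed=1):
    """Metropolis sampler for the rate of a homogeneous Poisson process
    under a flat prior: posterior density ~ lam**n_events * exp(-lam*duration).
    A toy stand-in for MCMC over full ETAS parameter vectors."""
    random.seed(seed)

    def log_post(lam):
        if lam <= 0.0:
            return -math.inf
        return n_events * math.log(lam) - lam * duration

    lam = max(n_events / duration, 0.1)  # start near the MLE
    samples = []
    for _ in range(n_samples):
        prop = lam + random.gauss(0.0, 0.5)  # random-walk proposal
        if math.log(random.random()) < log_post(prop) - log_post(lam):
            lam = prop
        samples.append(lam)
    return samples
```

The spread of the post-burn-in samples directly gives the plus/minus-two-standard-deviation forecast band mentioned above.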

  12. Estimating the standard deviation for 222Rn scintillation counting - a note concerning the paper by Sarmiento et al

    International Nuclear Information System (INIS)

    Key, R.M.

    1977-01-01

    In a recent report, Sarmiento et al. (1976) presented a method for estimating the statistical error associated with 222Rn scintillation counting. Because of certain approximations, the method is less accurate than that of an earlier work by Lucas and Woodward (1964). The Sarmiento method and the Lucas method are compared, and the magnitude of the errors incurred using the approximations is determined. For counting times greater than 300 minutes, the disadvantage of the slight inaccuracies of the Sarmiento method is outweighed by the advantage of easier calculation. (Auth.)
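
For the counting itself, the generic Poisson counting-statistics relations show why longer counting times shrink the statistical error. The sketch below is the textbook gross-minus-background formula with assumed names, not the specific Lucas or Sarmiento expressions compared in the note.

```python
import math

def net_rate_and_sigma(gross_counts, t_gross, bkg_counts, t_bkg):
    """Net count rate and its 1-sigma error from Poisson counting statistics:
        R = Ng/tg - Nb/tb,  sigma_R = sqrt(Ng/tg**2 + Nb/tb**2).
    Counts are Poisson, so each N contributes variance N."""
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return rate, sigma
```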

  13. Robust Manhattan Frame Estimation From a Single RGB-D Image

    KAUST Repository

    Bernard Ghanem; Heilbron, Fabian Caba; Niebles, Juan Carlos; Thabet, Ali Kassem

    2015-01-01

    This paper proposes a new framework for estimating the Manhattan Frame (MF) of an indoor scene from a single RGB-D image. Our technique formulates this problem as the estimation of a rotation matrix that best aligns the normals of the captured scene to the canonical world axes. By introducing sparsity constraints, our method can simultaneously estimate the scene MF, the surfaces in the scene that are best aligned to one of the three coordinate axes, and the outlier surfaces that do not align with any of the axes. To test our approach, we contribute a new set of annotations to determine ground truth MFs in each image of the popular NYUv2 dataset. We use this new benchmark to experimentally demonstrate that our method is more accurate, faster, more reliable and more robust than the methods used in the literature. We further motivate our technique by showing how it can be used to address the RGB-D SLAM problem in indoor scenes by incorporating it into, and improving the performance of, a popular RGB-D SLAM method.

  14. Robust Wavelet Estimation to Eliminate Simultaneously the Effects of Boundary Problems, Outliers, and Correlated Noise

    Directory of Open Access Journals (Sweden)

    Alsaidi M. Altaher

    2012-01-01

    Full Text Available Classical wavelet thresholding methods suffer from boundary problems caused by the application of wavelet transformations to a finite signal. As a result, large bias at the edges and artificial wiggles occur when the classical boundary assumptions are not satisfied. Although polynomial wavelet regression and local polynomial wavelet regression effectively reduce the risk of this problem, the estimates from these two methods can be easily affected by the presence of correlated noise and outliers, giving inaccurate estimates. This paper introduces two robust methods in which the effects of boundary problems, outliers, and correlated noise are simultaneously taken into account. The proposed methods combine a thresholding estimator with either a local polynomial model or a polynomial model, using the generalized least squares method instead of the ordinary one. A preliminary step that removes outlying observations through a statistical function is included as well. The practical performance of the proposed methods has been evaluated through simulation experiments and real data examples. The results are strong evidence that the proposed methods are extremely effective in correcting the boundary bias and eliminating the effects of outliers and correlated noise.

  16. A less field-intensive robust design for estimating demographic parameters with Mark-resight data

    Science.gov (United States)

    McClintock, B.T.; White, Gary C.

    2009-01-01

    The robust design has become popular among animal ecologists as a means for estimating population abundance and related demographic parameters with mark-recapture data. However, two drawbacks of traditional mark-recapture are financial cost and repeated disturbance to animals. Mark-resight methodology may in many circumstances be a less expensive and less invasive alternative to mark-recapture, but the models developed to date for these data have overwhelmingly concentrated only on the estimation of abundance. Here we introduce a mark-resight model analogous to that used in mark-recapture for the simultaneous estimation of abundance, apparent survival, and transition probabilities between observable and unobservable states. The model may be implemented using standard statistical computing software, but it has also been incorporated into the freeware package Program MARK. We illustrate the use of our model with mainland New Zealand Robin (Petroica australis) data collected to ascertain whether this methodology may be a reliable alternative for monitoring endangered populations of a closely related species inhabiting the Chatham Islands. We found this method to be a viable alternative to traditional mark-recapture when cost or disturbance to species is of particular concern in long-term population monitoring programs. © 2009 by the Ecological Society of America.

  17. Development of robust flexible OLED encapsulations using simulated estimations and experimental validations

    International Nuclear Information System (INIS)

    Lee, Chang-Chun; Shih, Yan-Shin; Wu, Chih-Sheng; Tsai, Chia-Hao; Yeh, Shu-Tang; Peng, Yi-Hao; Chen, Kuang-Jung

    2012-01-01

    This work analyses the overall stress/strain characteristics of flexible encapsulations with organic light-emitting diode (OLED) devices. A robust methodology composed of a mechanical model of multi-thin film under bending loads and related stress simulations based on nonlinear finite element analysis (FEA) is proposed, and is validated against related experimental data. With various geometrical combinations of cover plate, stacked thin films and plastic substrate, the position of the neutral axis (NA) plate, which is regarded as a key design parameter to minimize stress impact for the concerned OLED devices, is acquired using the present methodology. The results point out that both the thickness and the mechanical properties of the cover plate help in determining the NA location. In addition, several concave and convex radii are applied to examine the reliable mechanical tolerance and to provide an insight into the estimated reliability of foldable OLED encapsulations. (paper)

  18. Robust time estimation reconciles views of the antiquity of placental mammals.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Kitazoe

    2007-04-01

    Full Text Available Molecular studies have reported divergence times of modern placental orders long before the Cretaceous-Tertiary boundary and far older than paleontological data. However, this discrepancy may not be real, but rather may appear because of the violation of implicit assumptions in the estimation procedures, such as non-gradual change of evolutionary rate and failure to correct for convergent evolution. New procedures for divergence-time estimation robust to abrupt changes in the rate of molecular evolution are described. We used a variant of the multidimensional vector space (MVS) procedure to take account of possible convergent evolution. Numerical simulations of abrupt rate change and convergent evolution showed good performance of the new procedures, in contrast to current methods. Application to complete mitochondrial genomes identified marked rate accelerations and decelerations, which are not obtained with current methods. The root of placental mammals is estimated to be approximately 18 million years more recent than when assuming a log Brownian motion model. Correcting the pairwise distances for convergent evolution using MVS lowers the age of the root by about another 20 million years compared to using standard maximum likelihood tree branch lengths. These two procedures combined revise the root time of placental mammals from around 122 million years ago to close to 84 million years ago. As a result, the estimated distribution of molecular divergence times is broadly consistent with quantitative analysis of the North American fossil record and traditional morphological views. By including the dual effects of abrupt rate change and directly accounting for convergent evolution at the molecular level, these estimates provide congruence between the molecular results, paleontological analyses and morphological expectations. The programs developed here are provided along with sample data that reproduce the results of this study and are especially

  19. Aspartic acid racemization rate in narwhal (Monodon monoceros) eye lens nuclei estimated by counting of growth layers in tusks

    DEFF Research Database (Denmark)

    Garde, Eva; Heide-Jørgensen, Mads Peter; Ditlevsen, Susanne

    2012-01-01

    Ages of marine mammals have traditionally been estimated by counting dentinal growth layers in teeth. However, this method is difficult to use on narwhals (Monodon monoceros) because of their special tooth structures. Alternative methods are therefore needed. The aspartic acid racemization (AAR) technique has been used in age estimation studies of cetaceans, including narwhals. The purpose of this study was to estimate a species-specific racemization rate for narwhals by regressing aspartic acid D/L ratios in eye lens nuclei against growth layer groups in tusks (n=9). Two racemization rates were
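
The regression step described above (D/L ratio against age from tusk growth-layer counts) reduces to fitting a slope, which is the racemization rate. The sketch below is a plain least-squares fit with invented example numbers; the study's actual estimator also treats the intercept (the D/L ratio at age zero) and measurement uncertainty more carefully.

```python
def racemization_rate(ages, dl_ratios):
    """Ordinary least-squares slope and intercept of D/L ratio against age.
    The slope is the racemization rate; the intercept approximates the
    D/L ratio at age zero."""
    n = len(ages)
    mx = sum(ages) / n
    my = sum(dl_ratios) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(ages, dl_ratios))
    sxx = sum((x - mx) ** 2 for x in ages)
    slope = sxy / sxx
    return slope, my - slope * mx
```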

  20. Robust total energy demand estimation with a hybrid Variable Neighborhood Search – Extreme Learning Machine algorithm

    International Nuclear Information System (INIS)

    Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.

    2016-01-01

    Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood algorithm. • Socio-economic variables are used, and a one-year-ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out on real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of previous approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider that the number of macroeconomic variables used for prediction is a parameter of the algorithm (i.e., it is fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and the selection of the best ones. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments in a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.

  1. Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.

    Science.gov (United States)

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2015-05-15

    The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitate this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of the blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performance of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.

  3. Dynamic Output Feedback Robust MPC with Input Saturation Based on Zonotopic Set-Membership Estimation

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2016-01-01

    Full Text Available For quasi-linear parameter varying (quasi-LPV) systems with bounded disturbance, a synthesis approach for dynamic output feedback robust model predictive control (OFRMPC) with the consideration of input saturation is investigated. The saturated dynamic output feedback controller is represented by a convex hull involving the actual dynamic output controller and an introduced auxiliary controller. By taking both the actual output feedback controller and the auxiliary controller in a parameter-dependent form, the main optimization problem can be formulated as convex optimization. The consideration of input saturation in the main optimization problem reduces the conservatism of the dynamic output feedback controller design. The estimation error set and bounded disturbance are represented by zonotopes and refreshed by zonotopic set-membership estimation. Compared with previous results, the proposed algorithm can not only guarantee the recursive feasibility of the optimization problem, but also improve the control performance at the cost of higher computational burden. A nonlinear continuous stirred tank reactor (CSTR) example is given to illustrate the effectiveness of the approach.

  4. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    Science.gov (United States)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Expressions are provided in closed form for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase estimate error. The optimal resolution for the maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons are provided between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL). A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  5. A Robust Approach for Clock Offset Estimation in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Kim Jang-Sub

    2010-01-01

    Full Text Available The maximum likelihood estimators (MLEs) for the clock phase offset, assuming a two-way message exchange mechanism between the nodes of a wireless sensor network, were recently derived assuming Gaussian and exponential network delays. However, the MLE performs poorly in the presence of non-Gaussian or non-exponential network delay distributions. Currently, there is a need to develop clock synchronization algorithms that are robust to the distribution of network delays. This paper proposes a clock offset estimator based on the composite particle filter (CPF) to cope with the possible asymmetries and non-Gaussianity of the network delay distributions. Also, a variant of the CPF approach based on bootstrap sampling (BS) is shown to exhibit good performance in the presence of a reduced number of observations. Computer simulations illustrate that the basic CPF and its BS-based variant perform better than the MLE under general random network delay distributions such as asymmetric Gaussian, exponential, Gamma, and Weibull, as well as various mixtures.
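
For the two-way exchange itself, the classical timestamp-based offset estimate combined with a median across exchanges gives a simple robust baseline. This sketch is not the paper's composite particle filter, and all names are illustrative.

```python
import statistics

def clock_offset(t1, t2, t3, t4):
    """Offset of node B's clock relative to A from one two-way exchange:
    A sends at t1, B receives at t2, B replies at t3, A receives at t4.
    Under symmetric delays, offset = ((t2 - t1) - (t4 - t3)) / 2."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

def robust_offset(exchanges):
    """Median over repeated exchanges: a simple robust estimator that
    down-weights exchanges with asymmetric (e.g. heavy-tailed) delays."""
    return statistics.median(clock_offset(*e) for e in exchanges)
```

In the test below, three exchanges have symmetric delays (each yielding the true offset of 5) and one has a strongly asymmetric delay (yielding 9); the median still recovers 5.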

  6. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    Full Text Available The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced accuracy and precision of population trends by simulating migration count data with a known constant rate of population change and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone.
Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling to reduce the probability that individuals will
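
The compounding effect of longer stopovers on counts can be made concrete with expected values: the season total scales with both the number of migrants and the number of days each is available for detection, so a systematic increase in stopover duration inflates counts even when the population is constant. The numbers below are illustrative, not the paper's simulation settings.

```python
def expected_season_count(n_migrants, stopover_days, p_detect=0.3):
    """Expected season total of daily counts: each migrant is present on
    `stopover_days` days and detected on each with probability p_detect,
    so lengthening stopovers recounts the same individuals."""
    return n_migrants * stopover_days * p_detect

# Constant population of 1000 migrants, stopover lengthening from 2 to 4
# days across a 10-year monitoring period: counts rise with no real trend.
counts = [expected_season_count(1000, 2 + 2 * year / 9) for year in range(10)]
```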

  7. Estimation of single-year-of-age counts of live births, fetal losses, abortions, and pregnant women for counties of Texas.

    Science.gov (United States)

    Singh, Bismark; Meyers, Lauren Ancel

    2017-05-08

    We provide a methodology for estimating single-year-of-age counts of live births, fetal losses, abortions, and pregnant women from aggregated age-group counts. As a case study, we estimate counts for the 254 counties of Texas for the year 2010. We use interpolation to estimate counts of live births, fetal losses, and abortions by women of each single year of age for all Texas counties. We then use these counts to estimate the numbers of pregnant women at each single year of age, which were previously available only in aggregate. To support public health policy and planning, we provide single-year-of-age estimates of live births, fetal losses, abortions, and pregnant women for all Texas counties in the year 2010, as well as the estimation method source code.
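
The disaggregation task above starts from age-group totals and must produce single-year counts that sum back to the group total. The sketch below uses the simplest possible scheme, an even split with the remainder assigned to the earliest years; it is an illustrative baseline with assumed names, whereas the paper interpolates across adjacent age groups.

```python
def split_age_group(count, n_years=5):
    """Split an aggregated age-group count into single-year-of-age counts
    that sum exactly to the original total (remainder to earliest years)."""
    base, rem = divmod(count, n_years)
    return [base + (1 if i < rem else 0) for i in range(n_years)]
```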

  8. A modern robust approach to remotely estimate chlorophyll in coastal and inland zones

    Science.gov (United States)

    Shanmugam, Palanisamy; He, Xianqiang; Singh, Rakesh Kumar; Varunan, Theenathayalan

    2018-05-01

    The chlorophyll concentration of a water body is an important proxy for phytoplankton biomass. Its estimation from multi- or hyper-spectral remote sensing data in natural waters is generally achieved by using (i) waveband ratios of two or more bands in the blue-green region or (ii) a combination of the radiance peak position and magnitude in the red-near-infrared (NIR) spectrum. The blue-green ratio algorithms have been extensively used with satellite ocean color data to investigate chlorophyll distributions in the open ocean and clear waters, whereas the application of red-NIR algorithms is often restricted to turbid productive water bodies. These issues present the greatest obstacles to our ability to formulate a modern robust method suitable for quantitative assessments of the chlorophyll concentration in a diverse range of water types. The present study investigates the normalized water-leaving radiance spectra in the visible and NIR region and proposes a robust algorithm (Generalized ABI, GABI) for chlorophyll concentration retrieval based on the Algal Bloom Index (ABI), which separates phytoplankton signals from other constituents in the water column. The GABI algorithm is validated using independent in-situ data from various regional to global waters, and its performance is further evaluated by comparison with the blue-green waveband ratio and red-NIR algorithms. The results revealed that GABI yields significantly more accurate chlorophyll concentrations (with uncertainties less than 13.5%) and remains more stable across different water types when compared with the blue-green waveband ratio and red-NIR algorithms. The performance of GABI is further demonstrated using HICO images from nearshore turbid productive waters and MERIS and MODIS-Aqua images from coastal and offshore waters of the Arabian Sea, Bay of Bengal and East China Sea.

  9. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noise have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noise. Using a typical battery platform for vehicle applications with sensor-noise and battery-aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in robustness and reliability; and (3) guidelines for requirements on battery system identification and sensor selection.
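
    Of the three estimators compared, the extended Kalman filter is the most widely used; a minimal sketch follows, assuming a hypothetical first-order battery model. The OCV curve, capacity, resistance, and noise covariances below are illustrative values, not parameters from the paper.

```python
import numpy as np

# Hypothetical first-order battery model (illustrative values only)
Q = 2.0 * 3600.0          # capacity in ampere-seconds (2 Ah)
R = 0.05                  # ohmic resistance (ohm)

def ocv(soc):             # toy open-circuit-voltage curve
    return 3.2 + 0.9 * soc

def docv_dsoc(soc):       # its derivative, used as the EKF measurement Jacobian
    return 0.9

def ekf_soc(currents, voltages, dt=1.0, soc0=0.5, p0=0.1,
            q_proc=1e-7, r_meas=1e-3):
    """Scalar extended Kalman filter tracking SOC from current/voltage."""
    soc, p = soc0, p0
    out = []
    for i_k, v_k in zip(currents, voltages):
        # predict: coulomb counting plus process noise
        soc = soc - i_k * dt / Q
        p = p + q_proc
        # update: correct with the terminal-voltage measurement
        h = docv_dsoc(soc)
        k = p * h / (h * p * h + r_meas)       # Kalman gain
        soc = soc + k * (v_k - (ocv(soc) - R * i_k))
        p = (1.0 - k * h) * p
        out.append(soc)
    return np.array(out)

# constant 1 A discharge for 360 s from a true SOC of 0.8
true_soc = 0.8 - np.arange(360) * 1.0 / Q
v_meas = ocv(true_soc) - R * 1.0 + np.random.default_rng(0).normal(0, 0.002, 360)
est = ekf_soc(np.ones(360), v_meas, soc0=0.6)  # deliberately wrong initial SOC
```

    The voltage correction pulls the deliberately mis-initialized SOC back toward the true trajectory within a few samples, which is exactly the robustness property the abstract's comparison probes.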

  10. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    Science.gov (United States)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

    The temperature reached on soils is an important parameter for describing wildfire effects. However, methods for measuring the temperature reached on burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been highlighted as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water contents. Some of these components are modified by heat, and each temperature causes a distinct group of changes, leaving a characteristic fingerprint on the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. The model then makes it possible to estimate the temperature reached on a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy arises from changes in several components and cannot be attributed to changes in a single soil component. Thus, we estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect a uniform distribution of these components, even at small scales; consequently, their proportions can vary spatially across a site. This variation will be present both in the samples used to construct the model and in the samples affected by the wildfire. Therefore, strategies to develop robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches, designed to provide insights into how to distribute the efforts needed for the development of robust...

  11. A Robust Mass Estimator for Dark Matter Subhalo Perturbations in Strong Gravitational Lenses

    Energy Technology Data Exchange (ETDEWEB)

    Minor, Quinn E. [Department of Science, Borough of Manhattan Community College, City University of New York, New York, NY 10007 (United States); Kaplinghat, Manoj [Department of Physics and Astronomy, University of California, Irvine CA 92697 (United States); Li, Nan [Department of Astronomy and Astrophysics, The University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-20

    A few dark matter substructures have recently been detected in strong gravitational lenses through their perturbations of highly magnified images. We derive a characteristic scale for lensing perturbations and show that it is significantly larger than the perturber’s Einstein radius. We show that the perturber’s projected mass enclosed within this radius, scaled by the log-slope of the host galaxy’s density profile, can be robustly inferred even if the inferred density profile and tidal radius of the perturber are biased. We demonstrate the validity of our analytic derivation using several gravitational lens simulations in which the tidal radii and the inner log-slopes of the density profile of the perturbing subhalo are allowed to vary. By modeling these simulated data, we find that our mass estimator, which we call the effective subhalo lensing mass, is accurate to within about 10% in each case, whereas the inferred total subhalo mass can be biased by nearly an order of magnitude. We therefore recommend that the effective subhalo lensing mass be reported in future lensing reconstructions, as this will allow for a more accurate comparison with the results of dark matter simulations.

  12. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    Directory of Open Access Journals (Sweden)

    Annette Mossel

    2015-12-01

    Full Text Available In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments: (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m.

  13. Counting DNA: estimating the complexity of a test tube of DNA.

    Science.gov (United States)

    Faulhammer, D; Lipton, R J; Landweber, L F

    1999-10-01

    We consider the problem of estimating the 'complexity' of a test tube of DNA: the number of different kinds of strands of DNA in the test tube. It is quite easy to estimate the total number of strands in a test tube, especially if the strands are all the same length; estimating the complexity is much less straightforward. We propose a simple kind of DNA computation that can estimate the complexity.

  14. Estimated average annual rate of change of CD4(+) T-cell counts in patients on combination antiretroviral therapy

    DEFF Research Database (Denmark)

    Mocroft, Amanda; Phillips, Andrew N; Ledergerber, Bruno

    2010-01-01

    BACKGROUND: Patients receiving combination antiretroviral therapy (cART) might continue treatment with a virologically failing regimen. We sought to identify annual change in CD4(+) T-cell count according to levels of viraemia in patients on cART. METHODS: A total of 111,371 CD4(+) T-cell counts and viral load measurements in 8,227 patients were analysed. Annual change in CD4(+) T-cell numbers was estimated using mixed models. RESULTS: After adjustment, the estimated average annual change in CD4(+) T-cell count significantly increased when viral load was ... cells/mm(3), 95% confidence interval [CI] 26.6-34.3), was stable when viral load was 500-9,999 copies/ml (3.1 cells/mm(3), 95% CI -5.3-11.5) and decreased when viral load was >/=10,000 copies/ml (-14.8 cells/mm(3), 95% CI -4.5--25.1). Patients taking a boosted protease inhibitor (PI) regimen had more positive annual CD4(+) T-cell...

  15. Tower counts

    Science.gov (United States)

    Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.

    2007-01-01

    Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site-selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting subsistence catch, sport-caught fish, and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6%-10%) of reproductive salmon population size and run timing in clear rivers.
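
    The escapement accounting described above reduces to simple arithmetic; the numbers below are hypothetical, purely to illustrate the bookkeeping, not figures from any survey.

```python
# Hypothetical counts for illustration. The protocol's accounting is:
#   total return = escapement + catch
#   spawners = escapement - subsistence - sport catch - prespawn mortality
escapement = 120_000        # tower-count estimate of fish passing upriver
commercial_catch = 30_000
subsistence = 4_000
sport_catch = 2_500
prespawn_mortality = 1_500

total_return = escapement + commercial_catch
spawners = escapement - subsistence - sport_catch - prespawn_mortality

print(total_return, spawners)   # 150000 112000
```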

  16. A Robust WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2015-04-01

    Full Text Available With the development of modern society, the scale of power systems has increased rapidly, and their structure and operating modes are becoming increasingly complex. It is now much more important for dispatchers to know the state parameters of the power network exactly through state estimation. This paper proposes a robust power system WLS state estimation method integrating a wide-area measurement system (WAMS) and SCADA technology, incorporating phasor measurements and the results of the traditional state estimator in a post-processing estimator, which greatly reduces the scale of the nonlinear estimation problem as well as the number of iterations and the processing time per iteration. The paper first analyzes the wide-area state estimation model in detail; then, because least squares does not account for bad data and outliers, it proposes a robust weighted least squares (WLS) method that combines a robust estimation principle with least squares through equivalent weights. The performance assessment is discussed using mathematical models of the distribution network. The proposed method was shown to be accurate and reliable by simulations and experiments.
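
    The idea of combining a robust estimation principle with least squares via equivalent weights can be sketched as follows, here with a Huber-type weight on a toy linear measurement model. The weight function, threshold, and data are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def robust_wls(H, z, sigma, c=1.5, iters=10):
    """Iteratively reweighted least squares with Huber-type equivalent
    weights -- one common way to make WLS robust to bad data.
    A linear measurement model z = H x + e is assumed for brevity."""
    W = np.diag(1.0 / sigma**2)
    x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # plain WLS start
    for _ in range(iters):
        r = (z - H @ x) / sigma                     # normalized residuals
        # equivalent weight: 1 inside the Huber band, downweighted outside
        w_eq = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))
        Wr = np.diag(w_eq / sigma**2)
        x = np.linalg.solve(H.T @ Wr @ H, H.T @ Wr @ z)
    return x

# toy 2-state example with one gross error in the fourth measurement
rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0])
H = rng.normal(size=(12, 2))
sigma = np.full(12, 0.01)
z = H @ x_true + rng.normal(0, 0.01, 12)
z[3] += 5.0                                         # bad data point
x_hat = robust_wls(H, z, sigma)
```

    The gross error inflates its normalized residual by orders of magnitude, so its equivalent weight collapses and the solution settles near the estimate the good measurements alone would give.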

  17. A Burst-Mode Photon-Counting Receiver with Automatic Channel Estimation and Bit Rate Detection

    Science.gov (United States)

    2016-02-24

    Grein, M.E., Elgin, L.E., Robinson, B.S., Kachelmyer, A.L., Caplan, D.O., Stevens, M.L., Carney, J.J., Hamilton, S.A., and Boroson, D.M., “Demonstration...Robinson, B.S., Kerman, A.J., Dauler, E.A., Barron, R.J., Caplan, D.O., Stevens, M.L., Carney, J.J., Hamilton, S.A., Yang, J.K.W., and Berggren, K.K., “781...Mbit/s photon-counting optical communications using a superconducting nanowire detector,” Optics Letters, v. 31 no. 4, 444-446 (2006). [14] Caplan

  18. Estimation of low level gross alpha activities in the radioactive effluent using liquid scintillation counting technique

    International Nuclear Information System (INIS)

    Bhade, Sonali P.D.; Johnson, Bella E.; Singh, Sanjay; Babu, D.A.R.

    2012-01-01

    A technique has been developed for the simultaneous measurement of gross alpha and gross beta activity concentrations in low-level liquid effluent samples in the presence of higher activity concentrations of tritium. For this purpose, an alpha/beta-discriminating pulse shape analysis (PSA) liquid scintillation counting (LSC) technique was used. The main advantages of this technique are easy sample preparation, rapid measurement and higher sensitivity. The calibration methodology for the Quantulus 1220 LSC based on the PSA technique, using 241Am and 90Sr/90Y as alpha and beta standards respectively, is described in detail. The LSC technique was validated by measuring alpha and beta activity concentrations in test samples with known amounts of 241Am and 90Sr/90Y activities spiked in distilled water. The results obtained by the LSC technique were compared with conventional planchet counting methods such as ZnS(Ag) and end-window GM detectors. The gross alpha and gross beta activity concentrations in spiked samples obtained by the LSC technique were found to be within ±5% of the reference values. (author)

  19. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    ...randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.

  20. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    ...estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.

  1. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Science.gov (United States)

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  2. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are needed for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and improves comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed to strike a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
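
    The core ingredient of the modified likelihood is the Poisson log-normal distribution, whose probabilities have no closed form. A minimal numerical sketch (Gauss-Hermite quadrature, numpy only) is given below; the incidence term P(N >= 1) is the "presence" probability that would enter the likelihood for species recorded only as incidence data. This is a generic illustration of the distribution, not the paper's full modified likelihood.

```python
import numpy as np
from math import lgamma

def pln_pmf(n, mu, sigma, order=60):
    """P(N = n) under the Poisson log-normal model: the Poisson mean is
    lambda = exp(mu + sigma * t) with t ~ N(0, 1), integrated out by
    Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(order)
    lam = np.exp(mu + sigma * np.sqrt(2.0) * x)     # lambda at each node
    log_pois = n * np.log(lam) - lam - lgamma(n + 1)
    return float(np.sum(w * np.exp(log_pois)) / np.sqrt(np.pi))

def pln_incidence(mu, sigma):
    """P(N >= 1): the presence probability used when a species enters
    the likelihood as incidence data rather than as a count."""
    return 1.0 - pln_pmf(0, mu, sigma)
```

    As sigma approaches zero the distribution collapses to an ordinary Poisson, which gives a convenient sanity check on the quadrature.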

  3. A methodology for the optimization of the estimation of tritium in urine by liquid scintillation counting

    International Nuclear Information System (INIS)

    Joseph, S.; Kramer, G.H.

    1982-10-01

    A method has been designed to optimize liquid scintillation (LS) urinalysis with respect to sensitivity and cost. Three related factors, quench, sample composition and counting efficiency, were measured simultaneously and the results plotted in three dimensions to determine the optimum conditions for urinalysis. Picric acid was used to simulate quenching. Subsequent urinalysis experiments showed that quenching by picric acid was analogous to urine quenching. The optimization methodology was applied to ten commercial LS cocktails and a wide divergence in results was obtained. This method can also be used to optimize minimum detectable activities (MDA) but the results show that there is no fixed sample composition that can be used for all the various types of urine samples; however, it is possible to achieve general improvements of at least a factor of 2 in the MDA for Scintiverse (the only one tested for this particular application of the methodology).

  4. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations.

    Directory of Open Access Journals (Sweden)

    Kazutaka Takeshita

    Full Text Available Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has frequently been used as a deer abundance index in mountainous regions. However, despite an inherent risk of observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at high deer densities remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts in all years in comparison to the MR method. The increased overestimation in drive count estimates after the winter mass mortality event may be due to double counting caused by increased deer movement and recovery of body condition following the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are too costly to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer deer abundance needs to be reconsidered.

  5. Counting the cost: estimating the economic benefit of pedophile treatment programs.

    Science.gov (United States)

    Shanahan, M; Donato, R

    2001-04-01

    The principal objective of this paper is to identify the economic costs and benefits of pedophile treatment programs, incorporating both the tangible and intangible costs of sexual abuse to victims. Cost estimates of cognitive behavioral therapy programs in Australian prisons are compared against the tangible and intangible costs to victims of being sexually abused. The estimates take into account a number of problematic issues, including the range of possible recidivism rates for treatment programs, the uncertainty surrounding the number of child sexual molestation offences committed by recidivists, and the methodological problems associated with estimating the intangible costs of sexual abuse on victims. Despite the variation in parameter estimates that impact the cost-benefit analysis of pedophile treatment programs, it is found that the potential range of economic costs from child sexual abuse is substantial and the economic benefits to be derived from appropriate and effective treatment programs are high. Based on a reasonable set of parameter estimates, in-prison cognitive therapy treatment programs for pedophiles are likely to be of net benefit to society. Despite this, a critical area of future research must be further methodological development in estimating the quantitative impact of child sexual abuse in the community.
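
    The structure of such a cost-benefit calculation can be sketched in a few lines; every number below is a hypothetical placeholder, not one of the paper's estimates.

```python
# Back-of-the-envelope structure of a treatment cost-benefit comparison.
# All figures are hypothetical placeholders for illustration only.
program_cost_per_offender = 20_000.0   # in-prison cognitive behavioral therapy
recidivism_untreated = 0.25            # proportion reoffending without treatment
recidivism_treated = 0.15              # proportion reoffending after treatment
offences_per_recidivist = 2.0          # offences committed per reoffender
cost_per_offence = 150_000.0           # tangible + intangible victim costs

offences_averted = (recidivism_untreated - recidivism_treated) * offences_per_recidivist
benefit_per_offender = offences_averted * cost_per_offence
net_benefit = benefit_per_offender - program_cost_per_offender
print(round(net_benefit, 2))   # 10000.0
```

    The paper's sensitivity analysis amounts to sweeping each of these parameters over a plausible range and checking whether the net benefit stays positive.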

  6. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  7. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled data.
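
    A simplified stand-in for the sampling-based approaches can be sketched as follows: subjects (each contributing one paired plasma/tissue sample) are resampled with replacement, mean concentration-time profiles are rebuilt per replicate, and an AUC ratio is formed for each. This is a generic bootstrap illustration on synthetic data, not the paper's 2-phase algorithm.

```python
import numpy as np

def auc_trapz(t, y):
    """Trapezoidal AUC (written out to avoid numpy version differences)."""
    t, y = np.asarray(t), np.asarray(y)
    return float(np.sum((t[1:] - t[:-1]) * (y[1:] + y[:-1]) / 2.0))

def bootstrap_ratio(times, plasma, tissue, n_boot=500, seed=0):
    """Resample subjects, rebuild mean profiles, and form the AUC
    tissue-to-plasma ratio for each bootstrap replicate."""
    rng = np.random.default_rng(seed)
    t_grid = np.unique(times)
    ratios = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(times), len(times))   # resample subjects
        tb, pb, sb = times[idx], plasma[idx], tissue[idx]
        # naive averaging within each time point of the replicate
        p_mean = np.array([pb[tb == t].mean() if np.any(tb == t) else np.nan
                           for t in t_grid])
        s_mean = np.array([sb[tb == t].mean() if np.any(tb == t) else np.nan
                           for t in t_grid])
        ok = ~np.isnan(p_mean) & ~np.isnan(s_mean)
        if ok.sum() < 2:
            continue
        ratios.append(auc_trapz(t_grid[ok], s_mean[ok]) /
                      auc_trapz(t_grid[ok], p_mean[ok]))
    return np.array(ratios)

# synthetic sparse design: 5 subjects per time point, true ratio = 2
times = np.repeat([1.0, 2.0, 4.0, 8.0], 5)
rng = np.random.default_rng(1)
plasma = 10.0 * np.exp(-0.2 * times) * rng.lognormal(0, 0.1, times.size)
tissue = 2.0 * 10.0 * np.exp(-0.2 * times) * rng.lognormal(0, 0.1, times.size)
r = bootstrap_ratio(times, plasma, tissue)
```

    Unlike the naïve data averaging approach, the spread of the replicate ratios gives the ratio a measure of uncertainty.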

  8. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    Directory of Open Access Journals (Sweden)

    Femke Broekhuis

    Full Text Available Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  9. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    Science.gov (United States)

    Broekhuis, Femke; Gopalaswamy, Arjun M

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.
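
    SECR models of this kind typically describe detectability with a half-normal function of the distance between an animal's activity centre and a detector, with a spatial-scale parameter sigma capturing movement range. The sketch below shows that building block with purely illustrative parameter values; it is not the paper's Bayesian model.

```python
import numpy as np

def halfnormal_detection(d, g0, sigma):
    """Half-normal detection function commonly used in SECR models:
    probability of detecting an individual whose activity centre lies a
    distance d from a detector. g0 is the detection probability at d = 0;
    sigma is the movement-range (spatial scale) parameter that the
    abstract's sex-specific models estimate separately by sex."""
    return g0 * np.exp(-d**2 / (2.0 * sigma**2))

# illustrative male-female contrast in movement range
d = np.linspace(0.0, 20.0, 5)                 # distances in km
p_female = halfnormal_detection(d, 0.1, 3.0)
p_male = halfnormal_detection(d, 0.1, 12.0)   # ~4x the female scale
```

    A larger sigma flattens the detection curve, so far-off detectors still register the animal; this is how the model expresses that males range much more widely than females.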

  10. Dynamic Output Feedback Robust Model Predictive Control via Zonotopic Set-Membership Estimation for Constrained Quasi-LPV Systems

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2015-01-01

    Full Text Available For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, the bounds of the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.
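
    The zonotopic set representation used above can be sketched in a few lines. The linear map, Minkowski sum, and interval hull below are the standard zonotope operations that a set-membership prediction step composes; the matrices are illustrative, not the paper's example.

```python
import numpy as np

class Zonotope:
    """Z = {c + G @ xi : ||xi||_inf <= 1}: centre c, generator matrix G."""
    def __init__(self, c, G):
        self.c = np.asarray(c, float)
        self.G = np.asarray(G, float)

    def linear_map(self, A):
        return Zonotope(A @ self.c, A @ self.G)

    def minkowski_sum(self, other):       # concatenate generators
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

    def interval_hull(self):              # componentwise state bounds
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

# prediction step of a set-membership estimator: x+ = A x + w, w bounded
A = np.array([[1.0, 0.1], [0.0, 1.0]])
X = Zonotope([1.0, 0.0], np.diag([0.2, 0.1]))       # current error set
W = Zonotope([0.0, 0.0], np.diag([0.01, 0.01]))     # disturbance set
X_next = X.linear_map(A).minkowski_sum(W)
lo, hi = X_next.interval_hull()
```

    The interval hull is exactly the kind of online state bound the abstract refers to: it bounds the true state at the next sampling time, which is what makes feasibility checkable one step ahead.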

  11. Application of laboratory sourceless object counting for the estimation of the neutron dose

    International Nuclear Information System (INIS)

    Cheng Jie; Ning Jing; Zhang Xiaomin; Qu Decheng; Xie Xiangdong; Nan Hongjie

    2011-01-01

    Objective: To estimate the neutron dose using the 24Na energy spectrum analysis method. Methods: The Genius-2000 GeomComposer software package was used to calibrate the efficiency of the detector. Results: The detection efficiency of the detector for γ photons with an energy of 1.368 MeV was quickly found to be 4.05271×10^-3, while the error of the software was 4.0%. The estimated dose of the neutron-irradiated samples was between 1.94 Gy and 2.82 Gy, with an arithmetic mean of 2.38 Gy. The uncertainty of the dosimetry was about 20.07%. Conclusion: The application of sourceless efficiency calibration to the energy spectrum analysis of the 24Na contained in human blood will accelerate the dose estimation process. (authors)

  12. Feasibility of using single photon counting X-ray for lung tumor position estimation based on 4D-CT

    Energy Technology Data Exchange (ETDEWEB)

    Aschenbrenner, Katharina P.; Hesser, Juergen W. [Heidelberg Univ., Mannheim (Germany). Dept. of Experimental Radiation Oncology; Heidelberg Univ. (Germany). IWR; Guthier, Christian V. [Heidelberg Univ., Mannheim (Germany). Dept. of Experimental Radiation Oncology; Lyatskaya, Yulia [Brigham and Women's Center, Boston, MA (United States); Harvard Medical School, Boston, MA (United States); Boda-Heggemann, Judit; Wenz, Frederik [Heidelberg Univ., Mannheim (Germany). Dept. of Radiation Oncology

    2017-10-01

    In stereotactic body radiation therapy of lung tumors, reliable position estimation of the tumor is necessary in order to minimize the normal tissue complication rate. While kV X-ray imaging is frequently used, continuous application during radiotherapy sessions is often not possible due to concerns about the additional dose. Thus, ultra low-dose (ULD) kV X-ray imaging based on a single photon counting detector is suggested. This paper addresses the lower limit of photons needed to locate the tumor reliably with an accuracy in the range of state-of-the-art methods, i.e. a few millimeters. 18 patient cases with four-dimensional CT (4D-CT), which serves as a priori information, are included in the study. ULD cone beam projections are simulated from the 4D-CTs including Poisson noise. The projections from the breathing phases, which correspond to different tumor positions, are compared to the ULD projection by means of Poisson log-likelihood (PML) and correlation coefficient (CC), and template matching under these metrics. The results indicate that in full thorax imaging five photons per pixel suffice for a standard deviation in tumor position of less than half a breathing phase. Around 50 photons per pixel are needed to achieve this accuracy with the field of view restricted to the tumor region. Compared to CC, PML tends to perform better for low photon counts and shifts in patient setup. Template matching only improves the position estimation at high photon counts. The quality of the reconstruction is independent of the projection angle. The accuracy of the proposed ULD single photon counting system is in the range of a few millimeters and therefore comparable to state-of-the-art tumor tracking methods. At the same time, a reduction in photons per pixel by three to four orders of magnitude relative to commercial systems with flat-panel detectors can be achieved. This enables continuous kV image-based position estimation during all fractions since the additional dose to the

  13. Feasibility of using single photon counting X-ray for lung tumor position estimation based on 4D-CT.

    Science.gov (United States)

    Aschenbrenner, Katharina P; Guthier, Christian V; Lyatskaya, Yulia; Boda-Heggemann, Judit; Wenz, Frederik; Hesser, Jürgen W

    2017-09-01

    In stereotactic body radiation therapy of lung tumors, reliable position estimation of the tumor is necessary in order to minimize the normal tissue complication rate. While kV X-ray imaging is frequently used, continuous application during radiotherapy sessions is often not possible due to concerns about the additional dose. Thus, ultra low-dose (ULD) kV X-ray imaging based on a single photon counting detector is suggested. This paper addresses the lower limit of photons needed to locate the tumor reliably with an accuracy in the range of state-of-the-art methods, i.e. a few millimeters. 18 patient cases with four-dimensional CT (4D-CT), which serves as a priori information, are included in the study. ULD cone beam projections are simulated from the 4D-CTs including Poisson noise. The projections from the breathing phases, which correspond to different tumor positions, are compared to the ULD projection by means of Poisson log-likelihood (PML) and correlation coefficient (CC), and template matching under these metrics. The results indicate that in full thorax imaging five photons per pixel suffice for a standard deviation in tumor position of less than half a breathing phase. Around 50 photons per pixel are needed to achieve this accuracy with the field of view restricted to the tumor region. Compared to CC, PML tends to perform better for low photon counts and shifts in patient setup. Template matching only improves the position estimation at high photon counts. The quality of the reconstruction is independent of the projection angle. The accuracy of the proposed ULD single photon counting system is in the range of a few millimeters and therefore comparable to state-of-the-art tumor tracking methods. At the same time, a reduction in photons per pixel by three to four orders of magnitude relative to commercial systems with flat-panel detectors can be achieved. This enables continuous kV image-based position estimation during all fractions since the additional dose to the

  14. Feasibility of using single photon counting X-ray for lung tumor position estimation based on 4D-CT

    International Nuclear Information System (INIS)

    Aschenbrenner, Katharina P.; Hesser, Juergen W.; Boda-Heggemann, Judit; Wenz, Frederik

    2017-01-01

    In stereotactic body radiation therapy of lung tumors, reliable position estimation of the tumor is necessary in order to minimize the normal tissue complication rate. While kV X-ray imaging is frequently used, continuous application during radiotherapy sessions is often not possible due to concerns about the additional dose. Thus, ultra low-dose (ULD) kV X-ray imaging based on a single photon counting detector is suggested. This paper addresses the lower limit of photons needed to locate the tumor reliably with an accuracy in the range of state-of-the-art methods, i.e. a few millimeters. 18 patient cases with four-dimensional CT (4D-CT), which serves as a priori information, are included in the study. ULD cone beam projections are simulated from the 4D-CTs including Poisson noise. The projections from the breathing phases, which correspond to different tumor positions, are compared to the ULD projection by means of Poisson log-likelihood (PML) and correlation coefficient (CC), and template matching under these metrics. The results indicate that in full thorax imaging five photons per pixel suffice for a standard deviation in tumor position of less than half a breathing phase. Around 50 photons per pixel are needed to achieve this accuracy with the field of view restricted to the tumor region. Compared to CC, PML tends to perform better for low photon counts and shifts in patient setup. Template matching only improves the position estimation at high photon counts. The quality of the reconstruction is independent of the projection angle. The accuracy of the proposed ULD single photon counting system is in the range of a few millimeters and therefore comparable to state-of-the-art tumor tracking methods. At the same time, a reduction in photons per pixel by three to four orders of magnitude relative to commercial systems with flat-panel detectors can be achieved. This enables continuous kV image-based position estimation during all fractions since the additional dose to the

  15. Estimation of deep infiltration in unsaturated limestone environments using cave lidar and drip count data

    OpenAIRE

    Mahmud, K.; Mariethoz, G.; Baker, A.; Treble, P. C.; Markowska, M.; McGuire, E.

    2016-01-01

    Limestone aeolianites constitute karstic aquifers covering much of the western and southern Australian coastal fringe. They are a key groundwater resource for a range of industries such as winery and tourism, and provide important ecosystem services such as habitat for stygofauna. Moreover, recharge estimation is important for understanding the water cycle, for contaminant transport, for water management, and for stalagmite-based paleoclimate reconstructions. Caves offer a n...

  16. Fast, accurate, and robust frequency offset estimation based on modified adaptive Kalman filter in coherent optical communication system

    Science.gov (United States)

    Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong

    2017-09-01

    We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: an extended Kalman filter (EKF), a lock detector, and FO cycle slip recovery. The EKF module estimates the time-varying phase induced by both FO and laser phase noise. The lock detector module decides between acquisition mode and tracking mode and consequently sets the EKF tuning parameter in an adaptive manner. The third module can detect a possible cycle slip in the case of large FO and make the proper correction. Based on the simulation and experimental results, the proposed MAKF shows excellent estimation performance featuring high accuracy, fast convergence, as well as the capability of cycle slip recovery.
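A rough illustration of the tracking core: a plain linear Kalman filter on the phase and its per-symbol increment, not the paper's full MAKF with lock detector and cycle-slip recovery. All parameter values and signals below are hypothetical.

```python
import numpy as np

def track_phase_increment(phase_meas, q=1e-4, r=0.05):
    # Two-state Kalman filter on [phase, per-symbol phase increment].
    # The increment corresponds to 2*pi*FO*T for frequency offset FO and
    # symbol period T; the measured phase is assumed already unwrapped.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # phase advances by the increment
    H = np.array([[1.0, 0.0]])               # only the phase is observed
    Q = q * np.eye(2)
    x = np.array([phase_meas[0], 0.0])
    P = np.eye(2)
    for z in phase_meas[1:]:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        s = (H @ P @ H.T)[0, 0] + r          # innovation variance
        k = (P @ H.T).ravel() / s            # Kalman gain
        x = x + k * (z - (H @ x)[0])         # update
        P = P - np.outer(k, H @ P)
    return x

rng = np.random.default_rng(0)
true_inc = 0.02                              # rad per symbol
phases = true_inc * np.arange(500) + 0.05 * rng.standard_normal(500)
x_hat = track_phase_increment(phases)
print(x_hat)
```

With a constant true increment, the second component of the final state converges to the frequency-offset term despite the measurement noise.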

  17. Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys

    Science.gov (United States)

    Sauer, J.R.; Dolton, D.D.; Droege, S.

    1994-01-01

    The mourning dove (Zenaida macroura) Call-Count Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides the more precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of the 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Call-Count Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Call-Count Survey. Although the Call-Count Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.

  18. New horizontal global solar radiation estimation models for Turkey based on robust coplot supported genetic programming technique

    International Nuclear Information System (INIS)

    Demirhan, Haydar; Kayhan Atilgan, Yasemin

    2015-01-01

    Highlights: • Precise horizontal global solar radiation estimation models are proposed for Turkey. • Genetic programming technique is used to construct the models. • Robust coplot analysis is applied to reduce the impact of outlier observations. • Better estimation and prediction properties are observed for the models. - Abstract: Renewable energy sources have been attracting more and more attention of researchers due to the diminishing and harmful nature of fossil energy sources. Because of the importance of solar energy as a renewable energy source, an accurate determination of significant covariates and their relationships with the amount of global solar radiation reaching the Earth is a critical research problem. There are numerous meteorological and terrestrial covariates that can be used in the analysis of horizontal global solar radiation. Some of these covariates are highly correlated with each other. It is possible to find a large variety of linear or non-linear models to explain the amount of horizontal global solar radiation. However, models that explain the amount of global solar radiation with the smallest set of covariates should be obtained. In this study, use of the robust coplot technique to reduce the number of covariates before going forward with advanced modelling techniques is considered. After reducing the dimensionality of model space, yearly and monthly mean daily horizontal global solar radiation estimation models for Turkey are built by using the genetic programming technique. It is observed that application of robust coplot analysis is helpful for building precise models that explain the amount of global solar radiation with the minimum number of covariates without suffering from outlier observations and the multicollinearity problem. Consequently, over a dataset of Turkey, precise yearly and monthly mean daily global solar radiation estimation models are introduced using the model spaces obtained by robust coplot technique and

  19. Estimation of deep infiltration in unsaturated limestone environments using cave lidar and drip count data

    Science.gov (United States)

    Mahmud, K.; Mariethoz, G.; Baker, A.; Treble, P. C.; Markowska, M.; McGuire, E.

    2016-01-01

    Limestone aeolianites constitute karstic aquifers covering much of the western and southern Australian coastal fringe. They are a key groundwater resource for a range of industries such as winery and tourism, and provide important ecosystem services such as habitat for stygofauna. Moreover, recharge estimation is important for understanding the water cycle, for contaminant transport, for water management, and for stalagmite-based paleoclimate reconstructions. Caves offer a natural inception point to observe both the long-term groundwater recharge and the preferential movement of water through the unsaturated zone of such limestone. With the availability of automated drip rate logging systems and remote sensing techniques, it is now possible to deploy the combination of these methods for larger-scale studies of infiltration processes within a cave. In this study, we utilize a spatial survey of automated cave drip monitoring in two large chambers of Golgotha Cave, south-western Western Australia (SWWA), with the aim of better understanding infiltration water movement and the relationship between infiltration, stalactite morphology, and unsaturated zone recharge. By applying morphological analysis of ceiling features from Terrestrial LiDAR (T-LiDAR) data, coupled with drip time series and climate data from 2012 to 2014, we demonstrate the nature of the relationships between infiltration through fractures in the limestone and unsaturated zone recharge. Similarities between drip rate time series are interpreted in terms of flow patterns, cave chamber morphology, and lithology. Moreover, we develop a new technique to estimate recharge in large-scale caves, engaging flow classification to determine the cave ceiling area covered by each flow category and drip data for the entire observation period, to calculate the total volume of cave discharge. This new technique can be applied to other cave sites to identify highly focussed areas of recharge and can help to better

  20. Habitat use by semi-domesticated reindeer, estimated with pellet-group counts

    Directory of Open Access Journals (Sweden)

    Anna Skarin

    2009-01-01

    Full Text Available Habitat selection theory predicts that herbivores should select for or against different factors at different spatial scales. For instance, quantity of forage is expected to be a strong factor influencing habitat choice at large scales, while forage quality may be important at finer scales. However, during summer, herbivores such as reindeer (Rangifer tarandus tarandus can be limited in their grazing time by insect harassment, and do not always have the possibility to select for high quality forage. Human disturbances from hikers, etc., can also have a limiting effect on the possibility for reindeer to graze in high quality foraging habitats. Reindeer habitat selection at the landscape level was investigated through faecal pellet-group counts during the summers of 2002 and 2003 in two reindeer herding districts in Sweden. Resource utilization functions (RUFs were developed using multiple linear regressions, where the pellet densities were related to vegetation types, topographic features, distances to tourist resorts, and distances to hiking trails. Validations of the models were performed through cross-validation correlations. Results show that high altitudes with high quality forage were important habitats. Areas that offer both snow patches and fresh forage plants for the reindeer were used in relation to their availability. The reindeer also seemed able to habituate to human intervention to a certain extent. The predictive capabilities of the RUF models were high and pellet-group counts seemed well suited to study how abiotic factors affect habitat use at large temporal and spatial scales. Abstract in Swedish / Sammanfattning (translated): Reindeer use of the summer grazing range, estimated from pellet-group counts. Hierarchical habitat selection means that animals select for or against different factors depending on the spatial scale. The quantity of forage can, for example, play a large role in a herbivore's habitat choice at a large scale, while the quality of the forage can

  1. Models to estimate lactation curves of milk yield and somatic cell count in dairy cows at the herd level for the use in simulations and predictive models

    Directory of Open Access Journals (Sweden)

    Kaare Græsbøll

    2016-12-01

    Full Text Available Typically, central milk recording data from dairy herds are recorded less than monthly. Over-fitting early in the lactation period is a challenge, which we explored in different ways by reducing the number of parameters needed to describe the milk yield and somatic cell count of individual cows. Furthermore, we investigated how the parameters of lactation models correlate between parities and from dam to offspring. The aim of the study was to provide simple and robust models of cow-level milk yield and somatic cell count (SCC) for fitting to sparse data, to parameterise herd- and cow-specific simulation of dairy herds. Data from 610 Danish Holstein herds were used to determine parity traits in milk production regarding the milk yield and SCC of individual cows. Parity was stratified into first, second, and third and higher for milk, and first to sixth and higher for SCC. Fitting of herd-level parameters allowed for cow-level lactation curves with three, two or one parameter per lactation. Correlations of milk yield and SCC were estimated between lactations and between dam and offspring. The shape of the lactation curves varied markedly between farms. The correlations between lactations for milk yield and SCC were 0.2-0.6 and significant on more than 95% of farms. The variation in the daily milk yield was observed to be a source of variation in the SCC, and the total SCC was less correlated with the milk production than somatic cells per ml. A positive correlation was found between the relative levels of the total SCC and the milk yield. The variation of lactation and SCC curves between farms highlights the importance of a herd-level approach. The one-parameter-per-cow model using a herd-level curve allows for estimating the cow's production level from the first recording in the parity, while a two-parameter model requires more recordings for a credible estimate but may more precisely predict persistence, and given the independence of parameters, these can be
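The one-parameter-per-cow idea can be sketched as follows. Wood's lactation curve and every number below are hypothetical stand-ins; the paper fits its own herd-level curve shapes to Danish Holstein data.

```python
import numpy as np

# Wood's lactation curve, a common parametric choice (an assumption here;
# the paper's exact herd-level curves may differ): y(t) = a * t^b * exp(-c*t)
b_herd, c_herd = 0.25, 0.003          # hypothetical herd-level shape parameters

days = np.array([30.0, 90.0, 150.0, 210.0, 270.0])      # sparse recording days
yields = np.array([28.0, 31.5, 27.0, 22.5, 18.0])       # one cow's recordings

# One-parameter-per-cow model: b and c are fixed at the herd level, so only
# the scale a is estimated per cow. Least squares then has the closed form
# a = sum(phi * y) / sum(phi^2), with phi the herd-level curve shape.
phi = days**b_herd * np.exp(-c_herd * days)
a_cow = np.sum(phi * yields) / np.sum(phi * phi)
print(round(a_cow, 2))
```

Because the per-cow model is linear in its single parameter, even one recording gives an estimate, which mirrors the abstract's point about estimating the production level from the first recording in the parity.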

  2. Robust Improvement in Estimation of a Covariance Matrix in an Elliptically Contoured Distribution Respect to Quadratic Loss Function

    Directory of Open Access Journals (Sweden)

    Z. Khodadadi

    2008-03-01

    Full Text Available Let S be the matrix of residual sums of squares in the linear model Y = Aβ + e, where the matrix e is distributed as elliptically contoured with unknown scale matrix Σ. In the present work, we consider the problem of estimating Σ with respect to the quadratic loss function L(Σ̂, Σ) = tr(Σ̂Σ⁻¹ − I)². It is shown that the improvements of the estimators obtained by James and Stein [7] and by Dey and Srinivasan [1] under the normality assumption remain robust under an elliptically contoured distribution with respect to this quadratic loss function.
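For concreteness, the quadratic loss above can be evaluated numerically; the 2×2 scale matrix below is an arbitrary example.

```python
import numpy as np

def quadratic_loss(sigma_hat, sigma):
    # L(Sigma_hat, Sigma) = tr[(Sigma_hat @ inv(Sigma) - I)^2]
    m = sigma_hat @ np.linalg.inv(sigma) - np.eye(sigma.shape[0])
    return np.trace(m @ m)

sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

loss_exact = quadratic_loss(sigma, sigma)          # a perfect estimate: loss 0
loss_scaled = quadratic_loss(1.2 * sigma, sigma)   # 20% inflation of Sigma
print(loss_exact, loss_scaled)
```

Inflating Σ by 20% gives the residual matrix 0.2·I, so the loss is tr((0.2·I)²) = 0.08 in two dimensions.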

  3. Model Specifications for Estimating Labor Market Returns to Associate Degrees: How Robust Are Fixed Effects Estimates? A CAPSEE Working Paper

    Science.gov (United States)

    Belfield, Clive; Bailey, Thomas

    2017-01-01

    Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…

  4. Number-counts slope estimation in the presence of Poisson noise

    Science.gov (United States)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The determination of the slope of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with a signal-to-noise ratio of five or greater are considered. However, if sources with a signal-to-noise ratio below five are included, the derived bias corrections depend sensitively on the shape of the error distribution.
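The effect described can be reproduced with a toy simulation. This is a sketch under assumed survey parameters, using a simple Hill-type maximum likelihood slope estimator rather than the paper's bias-correction machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw source fluxes from a power law p(S) ~ S**(-gamma) above S0
# (gamma, S0 and the survey size are hypothetical values)
gamma, S0, n = 2.5, 10.0, 20000
fluxes = S0 * (1.0 - rng.uniform(size=n)) ** (-1.0 / (gamma - 1.0))

# Photon-limited observation: detected counts are Poisson with mean = flux
counts = rng.poisson(fluxes).astype(float)

def pareto_slope_mle(s, s0):
    # Hill-type maximum likelihood estimate of the power-law exponent
    s = s[s > s0]
    return 1.0 + s.size / np.sum(np.log(s / s0))

slope_all = pareto_slope_mle(counts, S0)     # every detected source
slope_snr5 = pareto_slope_mle(counts, 25.0)  # only counts above 25 (S/N about 5)
print(slope_all, slope_snr5, "true:", gamma)
```

Restricting to counts above 25 keeps the naive slope estimate close to the true value, while sources near the detection limit are strongly affected by Poisson scatter, in line with the signal-to-noise-five threshold discussed in the abstract.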

  5. Comparison of Classical and Robust Estimates of Threshold Auto-regression Parameters

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2017-01-01

    Full Text Available The study object is the first-order threshold autoregression model with a single zero-located threshold. The model describes a stochastic time series with discrete time by means of a piecewise linear equation consisting of two classical linear first-order autoregressive equations. One of these equations is used to calculate the running value of the time series; a control variable that determines the choice between the two equations is the sign of the previous value of the same series. The first-order threshold autoregressive model with a single threshold depends on two real parameters, which coincide with the coefficients of the piecewise linear threshold equation. These parameters are assumed to be unknown. The paper studies the least squares estimate, the least modules estimate, and the M-estimates of these parameters. The aim of the paper is a comparative study of the accuracy of these estimates for the main probability distributions of the updating process of the threshold autoregressive equation: the normal, contaminated normal, logistic and double-exponential distributions, Student's distributions with different numbers of degrees of freedom, and the Cauchy distribution. As a measure of the accuracy of each estimate, its variance, which measures the scattering of the estimate around the estimated parameter, was chosen; of two estimates, the one with the smaller variance was considered the better. The variance was estimated by computer simulation. The least modules estimates were computed by an iterative weighted least-squares method, the M-estimates by the deformable polyhedron (Nelder-Mead) method, and the least squares estimate from an explicit analytic expression. It turned out that the least squares estimate is best only for the normal distribution of the updating process.
    For the logistic distribution and the Student's distribution with the
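A Monte Carlo comparison of this kind can be sketched as follows. This is a minimal sketch: contaminated-normal innovations only, and the least modules (least absolute deviations) estimate computed via its weighted-median form rather than iterative weighted least squares; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def lad_slope(x_prev, x_curr):
    # Least modules estimate of a in x_curr ~ a * x_prev: the minimizer of
    # sum |x_curr - a*x_prev| is a weighted median of the ratios
    # x_curr/x_prev with weights |x_prev|.
    r = x_curr / x_prev
    w = np.abs(x_prev)
    order = np.argsort(r)
    cw = np.cumsum(w[order])
    return r[order][np.searchsorted(cw, 0.5 * cw[-1])]

def simulate_tar(n, a_plus=0.5, a_minus=-0.3):
    # First-order threshold autoregression with a zero threshold and
    # contaminated-normal innovations (10% outliers with sd 10)
    e = rng.standard_normal(n)
    e[rng.uniform(size=n) < 0.1] *= 10.0
    x = np.zeros(n)
    for t in range(1, n):
        a = a_plus if x[t - 1] >= 0 else a_minus
        x[t] = a * x[t - 1] + e[t]
    return x

ls_est, lad_est = [], []
for _ in range(300):
    x = simulate_tar(400)
    prev, curr = x[:-1], x[1:]
    pos = prev > 0                       # regime governed by the sign of x_{t-1}
    ls_est.append(np.sum(prev[pos] * curr[pos]) / np.sum(prev[pos] ** 2))
    lad_est.append(lad_slope(prev[pos], curr[pos]))

var_ls, var_lad = np.var(ls_est), np.var(lad_est)
print(var_ls, var_lad)
```

Under contamination the empirical variance of the least squares estimate is several times that of the least modules estimate, consistent with the abstract's conclusion that least squares is best only for normal innovations.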

  6. Variable-structure approaches analysis, simulation, robust control and estimation of uncertain dynamic processes

    CERN Document Server

    Senkel, Luise

    2016-01-01

    This edited book aims at presenting current research activities in the field of robust variable-structure systems. The scope equally comprises highlighting novel methodological aspects as well as presenting the use of variable-structure techniques in industrial applications including their efficient implementation on hardware for real-time control. The target audience primarily comprises research experts in the field of control theory and nonlinear dynamics but the book may also be beneficial for graduate students.

  7. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then re-estimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  8. Pushing for the Extreme: Estimation of Poisson Distribution from Low Count Unreplicated Data—How Close Can We Get?

    Directory of Open Access Journals (Sweden)

    Peter Tiňo

    2013-04-01

    Full Text Available Studies of learning algorithms typically concentrate on situations where a potentially ever-growing training sample is available. Yet, there can be situations (e.g., detection of differentially expressed genes on unreplicated data, or estimation of time delay in non-stationary gravitationally lensed photon streams) where only extremely small samples can be used in order to perform an inference. On unreplicated data, the inference has to be performed on the smallest sample possible—a sample of size 1. We study whether anything useful can be learnt in such extreme situations by concentrating on a Bayesian approach that can account for possible prior information on expected counts. We perform a detailed information-theoretic study of such Bayesian estimation and quantify the effect of Bayesian averaging on its first two moments. Finally, to analyze the potential benefits of the Bayesian approach, we also consider Maximum Likelihood (ML) estimation as a baseline approach. We show both theoretically and empirically that Bayesian model averaging can be potentially beneficial.
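The Bayesian side of this comparison is easy to make concrete: with a conjugate Gamma(α, β) prior on the Poisson rate, a single observed count n yields a Gamma(α + n, β + 1) posterior. The prior values below are hypothetical.

```python
# Gamma(alpha, beta) prior on the Poisson rate lambda; one observed count n.
# The posterior is Gamma(alpha + n, beta + 1) by conjugacy.
alpha, beta = 4.0, 2.0        # hypothetical prior: mean alpha/beta = 2 counts
n = 0                         # the single, unreplicated observation

ml_estimate = float(n)                        # maximum likelihood: the count itself
post_mean = (alpha + n) / (beta + 1.0)        # Bayesian posterior mean
post_var = (alpha + n) / (beta + 1.0) ** 2    # posterior variance

print(ml_estimate, post_mean, post_var)
```

For a zero count the ML estimate collapses to rate 0, whereas the posterior mean stays strictly positive, illustrating how prior information stabilizes inference on a sample of size 1.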

  9. Estimation of equivalent dose and its uncertainty in the OSL SAR protocol when count numbers do not follow a Poisson distribution

    International Nuclear Information System (INIS)

    Bluszcz, Andrzej; Adamiec, Grzegorz; Heer, Aleksandra J.

    2015-01-01

    The current work focuses on the estimation of the equivalent dose and its uncertainty using the single aliquot regenerative protocol in optically stimulated luminescence measurements. The authors show that the count numbers recorded with the use of photomultiplier tubes are well described by negative binomial distributions, different ones for background counts and photon-induced counts. This fact is then exploited in pseudo-random count number generation and in simulations of D_e determination assuming a saturating exponential growth. A least squares fitting procedure is applied using different types of weights to determine whether the obtained D_e's and their error estimates are unbiased and accurate. A weighting procedure is suggested that leads to almost unbiased D_e estimates. It is also shown that the assumption of a Poisson distribution in D_e estimation may lead to severe underestimation of the D_e error. - Highlights: • Detailed analysis of the statistics of count numbers in luminescence readers. • Generation of realistically scattered pseudo-random numbers of counts in luminescence measurements. • A practical guide for stringent analysis of D_e values and error assessment.
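A sketch of the simulation-and-fitting loop: negative binomial counts on a saturating exponential growth curve, with weights derived from the negative binomial variance model. The dose values, I_max, D_0 and dispersion k are hypothetical, and a simple 1-D grid search stands in for the paper's fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def growth(dose, i_max, d0):
    # Saturating exponential growth curve assumed for the dose response
    return i_max * (1.0 - np.exp(-dose / d0))

# Hypothetical regenerative doses and true curve parameters
doses = np.array([0.0, 30.0, 60.0, 120.0, 240.0, 480.0])
i_max_true, d0_true = 5000.0, 120.0
mu = growth(doses, i_max_true, d0_true)

# Negative binomial counts: variance = mu + mu^2/k (overdispersed Poisson)
k = 200.0
counts = rng.negative_binomial(k, k / (k + mu + 1e-9)).astype(float)

# Weighted least squares via a 1-D search over D0; for fixed D0 the best
# I_max is closed-form because the model is linear in I_max.
w = 1.0 / (counts + counts**2 / k + 1.0)   # weights from the NB variance model
best = (np.inf, None, None)
for d0 in np.linspace(40.0, 400.0, 721):
    phi = 1.0 - np.exp(-doses / d0)
    i_max = np.sum(w * phi * counts) / np.sum(w * phi * phi)
    rss = np.sum(w * (counts - i_max * phi) ** 2)
    if rss < best[0]:
        best = (rss, i_max, d0)
_, i_max_hat, d0_hat = best
print(round(i_max_hat), round(d0_hat, 1))
```

Replacing the negative binomial weights with Poisson weights (variance = mu) in `w` would understate the scatter, which is the mechanism behind the D_e error underestimation the abstract warns about.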

  10. Robust driver heartbeat estimation: A q-Hurst exponent based automatic sensor change with interactive multi-model EKF.

    Science.gov (United States)

    Vrazic, Sacha

    2015-08-01

    Preventing car accidents by monitoring the driver's physiological parameters is of high importance. However, existing measurement methods are not robust to the driver's body movements. In this paper, a system is presented that estimates the heartbeat from seat-embedded piezoelectric sensors and is robust to strong body movements. Multifractal q-Hurst exponents are used within a classifier to predict the most probable best sensor signal to be used in an interactive multi-model extended Kalman filter pulsation estimation procedure. The car vibration noise is reduced using an autoregressive exogenous model to predict the noise on the sensors. The performance of the proposed system was evaluated on real driving data at up to 100 km/h and with slaloms at high speed. It is shown that this method improves pulsation estimation under strong body movement by 36.7% compared to static sensor pulsation estimation, and it appears to provide reliable pulsation variability information for top-level analysis of drowsiness or other conditions.

  11. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  12. Robust feature estimation by non-rigid hierarchical image registration and its application in disparity measurement

    Science.gov (United States)

    Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan

    2017-03-01

    Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems are being employed to assist the human operator in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Information is lost in a 2D image; therefore, acquiring accurate 3D data from 2D images is an open issue. FAST, SURF and SIFT are well-known spatial-domain techniques for feature extraction and, subsequently, image registration to find correspondences between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel, fast and robust technique for stereo-image processing is proposed. It is based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in hierarchical fashion using a quad-tree structure. The registration process works from the global to the local level, resulting in robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection; the same can be used in driver assistance systems. Preliminary tests on the Middlebury dataset produced satisfactory results. The execution time for a 413 × 370 stereo pair is approximately 500 ms on a low-cost DSP.
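The core of normalized phase correlation, which the proposed method applies hierarchically over quad-tree blocks, can be sketched for a single global (circular) shift; the 64×64 random image below is a stand-in.

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Normalized phase correlation: recover the integer translation s
    # such that b = np.roll(a, s, axis=(0, 1)).
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image into negative values
    h, w = a.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation_shift(img, shifted))
```

Keeping only the phase of the cross-power spectrum makes the correlation peak sharp and largely insensitive to global illumination changes, which is why phase correlation is attractive for this kind of registration.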

  13. Robust heart rate estimation from multiple asynchronous noisy sources using signal quality indices and a Kalman filter

    International Nuclear Information System (INIS)

    Li, Q; Mark, R G; Clifford, G D

    2008-01-01

    Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often severely corrupted by noise, artifact and missing data, which lead to large errors in the estimation of the heart rate (HR) and ABP. A robust HR estimation method is described that compensates for these problems. The method is based upon the concept of fusing multiple signal quality indices (SQIs) and HR estimates derived from multiple electrocardiogram (ECG) leads and an invasive ABP waveform recorded from ICU patients. Physiological SQIs were obtained by analyzing the statistical characteristics of each waveform and their relationships to each other. HR estimates from the ECG and ABP are tracked with separate Kalman filters, using a modified update sequence based upon the individual SQIs. Data fusion of each HR estimate was then performed by weighting each estimate by the Kalman filters' SQI-modified innovations. This method was evaluated on over 6000 h of simultaneously acquired ECG and ABP from a 437-patient subset of ICU data by adding real ECG and realistic artificial ABP noise. The method provides an accurate HR estimate even in the presence of high levels of persistent noise and artifact, and during episodes of extreme bradycardia and tachycardia.
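The SQI-weighted filtering-and-fusion idea can be sketched with scalar filters and inverse-variance fusion. This is a simplification of the paper's innovation-based weighting, and all signals, SQI values and noise parameters below are made up.

```python
import numpy as np

def sqi_kalman(z, sqi, q=0.1, r0=4.0):
    # Scalar random-walk Kalman filter for HR; the measurement variance is
    # inflated when the signal quality index (sqi in (0, 1]) is low.
    x, p = z[0], 1.0
    xs, ps = [], []
    for zk, sk in zip(z, sqi):
        p += q                              # predict
        r = r0 / max(sk, 1e-3) ** 2         # SQI-modified measurement noise
        k = p / (p + r)                     # update
        x += k * (zk - x)
        p *= 1.0 - k
        xs.append(x)
        ps.append(p)
    return np.array(xs), np.array(ps)

# Hypothetical data: the ECG-derived HR has an artifact burst flagged by
# low SQI, while the ABP-derived HR stays clean.
hr_ecg = np.array([70.0] * 20 + [120.0] * 10 + [70.0] * 20)
sqi_ecg = np.array([0.9] * 20 + [0.05] * 10 + [0.9] * 20)
hr_abp = np.full(50, 71.0)
sqi_abp = np.full(50, 0.8)

x1, p1 = sqi_kalman(hr_ecg, sqi_ecg)
x2, p2 = sqi_kalman(hr_abp, sqi_abp)
fused = (x1 / p1 + x2 / p2) / (1.0 / p1 + 1.0 / p2)   # inverse-variance fusion
print(fused[25], fused[-1])
```

During the artifact burst the ECG filter's gain collapses and its variance grows, so the fused estimate leans on the clean ABP-derived HR and never follows the 120 bpm artifact.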

  14. A Robust Parametric Technique for Multipath Channel Estimation in the Uplink of a DS-CDMA System

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available The problem of estimating the multipath channel parameters of a new user entering the uplink of an asynchronous direct sequence-code division multiple access (DS-CDMA) system is addressed. The problem is described via a least squares (LS) cost function with a rich structure. This cost function, which is nonlinear with respect to the time delays and linear with respect to the gains of the multipath channel, is proved to be approximately decoupled in terms of the path delays. Due to this structure, an iterative procedure of 1D searches is adequate for time-delay estimation. The resulting method is computationally efficient, does not require any specific pilot signal, and performs well for a small number of training symbols. Simulation results show that the proposed technique offers better estimation accuracy compared to existing related methods, and is robust to multiple access interference.

  15. Estimating the robustness of contingent valuation estimates of WTP to survey mode and treatment of protest responses.

    Science.gov (United States)

    John Loomis; Armando Gonzalez-Caban; Joseph Champ

    2011-01-01

    Over the past four decades the contingent valuation method (CVM) has become a technique frequently used by economists to estimate willingness-to-pay (WTP) for improvements in environmental quality and protection of natural resources. The CVM was originally applied to estimate recreation use values (Davis, 1963; Hammack and Brown, 1974) and air quality (Brookshire et al....

  16. Robust Estimation of HDR in fMRI using H-infinity Filters

    DEFF Research Database (Denmark)

    Puthusserypady, Sadasivan; Jue, R.; Ratnarajah, T.

    2010-01-01

    Estimation and detection of the hemodynamic response (HDR) are of great importance in functional MRI (fMRI) data analysis. In this paper, we propose the use of three H-infinity adaptive filters (finite memory, exponentially weighted, and time-varying) for accurate estimation and detection of the HDR... Performances of the proposed techniques are compared to the conventional t-test method as well as the well-known LMS and recursive least squares algorithms. Extensive numerical simulations show that the proposed methods result in better HDR estimations and activation detections.

  17. Variation in pre-treatment count lead time and its effect on baseline estimates of cage-level sea lice abundance.

    Science.gov (United States)

    Gautam, R; Boerlage, A S; Vanderstichel, R; Revie, C W; Hammell, K L

    2016-11-01

    Treatment efficacy studies typically use pre-treatment sea lice abundance as the baseline. However, the pre-treatment counting window often varies from the day of treatment to several days before treatment. We assessed the effect of lead time on baseline estimates, using historical data (2010-14) from a sea lice data management programme (Fish-iTrends). Data were aggregated at the cage level for three life stages: (i) chalimus, (ii) pre-adult and adult male and (iii) adult female. Sea lice counts were log-transformed, and mean counts by lead time relative to treatment day were computed and compared separately for each life stage, using linear mixed models. There were 1,658 observations (treatment events) from 56 sites in 5 Bay Management Areas. Our study showed that lead time had a significant effect on the estimated sea lice abundance, which was moderated by season. During the late summer and autumn periods, counting on the day of treatment gave significantly higher values than other days and would be a more appropriate baseline estimate, while during spring and early summer abundance estimates were comparable among counts within 5 days of treatment. A season-based lead time window may be most appropriate when estimating baseline sea lice levels. © 2016 John Wiley & Sons Ltd.

  18. A Robust Localization, Slip Estimation, and Compensation System for WMR in the Indoor Environments

    Directory of Open Access Journals (Sweden)

    Zakir Ullah

    2018-05-01

    Full Text Available A novel approach is proposed for the path tracking of a Wheeled Mobile Robot (WMR) in the presence of an unknown lateral slip. Much of the existing work has assumed pure rolling conditions between the wheel and ground. Under pure rolling conditions, the wheels of a WMR are supposed to roll without slipping. Complex wheel-ground interactions, acceleration, and steering system noise are the factors which cause WMR wheel slip. A basic research problem in this context is the localization and slip estimation of a WMR from a stream of noisy sensor data when the robot is moving on a slippery surface or moving at high speed. A DecaWave-based ranging system and a Particle Filter (PF) are good candidates for estimating the location of a WMR indoors and outdoors. Unfortunately, wheel slip limits the ultimate performance that can be achieved by a real-world implementation of the PF, because location estimation systems typically rely in part on the robot heading. A small error in the WMR heading leads to a large error in the PF's location estimate because of its cumulative nature. In order to enhance the tracking and localization performance of the PF in environments where the main source of error in the PF location estimate is angular noise, two methods were used for heading estimation of the WMR: (1) Reinforcement Learning (RL) and (2) Location-based Heading Estimation (LHE). Trilateration is applied to the DecaWave-based ranging system to calculate the probable location of the WMR; this noisy location, along with the PF's current mean, is used to estimate the WMR heading using the above two methods. Besides the WMR location calculation, the DecaWave-based ranging system is also used to update the PF weights. The localization and tracking performance of the PF is significantly improved by incorporating the heading error in localization through RL and LHE. Desired trajectory information is then used to develop an algorithm for extracting the lateral slip along
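    The trilateration step mentioned above has a standard closed form for three anchors: subtracting one range equation from the other two linearizes the circles into a 2x2 system. A minimal sketch with hypothetical coordinates (not tied to the DecaWave hardware):

```python
import math

def trilaterate(anchors, dists):
    """2D position from three anchor ranges.
    Subtracting the first circle equation from the other two gives a
    linear 2x2 system in (x, y), solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    A = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
         r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]  # noise-free ranges
print(trilaterate(anchors, dists))                 # -> (3.0, 4.0) up to rounding
```

    In practice the measured ranges are noisy, which is exactly why the record feeds this "probable location" into a particle filter rather than using it directly.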

  19. A PSF-Shape-Based Beamforming Strategy for Robust 2D Motion Estimation in Ultrafast Data

    OpenAIRE

    Anne E. C. M. Saris; Stein Fekkes; Maartje M. Nillesen; Hendrik H. G. Hansen; Chris L. de Korte

    2018-01-01

    This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system’s point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle tracking (ST) algorithm will have circular-shaped peaks, which can be interpolated using a 2D interpolation method to estimate subsample displacements. Carotid artery wall motion and parabolic blood flow...

  20. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    Science.gov (United States)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
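    The core Green-Kubo computation is the time integral of an equilibrium flux autocorrelation function, averaged over an ensemble of replicas. A minimal sketch of that pipeline on synthetic AR(1) "flux" data (the paper's error analysis and stationarity tests are omitted):

```python
import random

def acf(x, max_lag):
    """Sample autocorrelation function (unnormalized) up to max_lag."""
    n = len(x)
    mean = sum(x) / n
    d = [xi - mean for xi in x]
    return [sum(d[t] * d[t + lag] for t in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

def green_kubo(replicas, max_lag):
    """Average the flux ACF across replicas, then integrate it
    (trapezoidal rule, unit time step): the transport coefficient
    up to physical prefactors."""
    acfs = [acf(s, max_lag) for s in replicas]
    mean_acf = [sum(col) / len(col) for col in zip(*acfs)]
    return mean_acf[0] / 2 + sum(mean_acf[1:-1]) + mean_acf[-1] / 2

random.seed(0)

def ar1(n, a=0.8):
    """Synthetic 'flux' signal with a known exponential correlation decay."""
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

replicas = [ar1(10000) for _ in range(6)]
coeff = green_kubo(replicas, max_lag=60)
print(round(coeff, 2))  # analytic value for this AR(1) process is ~12.5
```

    Averaging over replicas before integrating is what lets the method exploit parallel ensembles efficiently, as the record describes for slowly decaying correlation functions.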

  1. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    Ibn-Elhaj E

    2009-01-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.

  2. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    E. M. Ismaili Aalaoui

    2009-02-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.

  3. The Big Pumpkin Count.

    Science.gov (United States)

    Coplestone-Loomis, Lenny

    1981-01-01

    Pumpkin seeds are counted after students convert pumpkins to jack-o'-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and construct a visual representation of 1,000. (MP)

  4. Robust Online State of Charge Estimation of Lithium-Ion Battery Pack Based on Error Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Ting Zhao

    2015-01-01

    Full Text Available Accurate and reliable state of charge (SOC) estimation is a key enabling technique for large-format lithium-ion battery packs due to its vital role in battery safety and effective management. This paper tries to make three contributions to the existing literature through robust algorithms. (1) An observer-based SOC estimation error model is established, in which the parameters crucial to SOC estimation accuracy are determined by quantitative analysis, providing a basis for parameter updates. (2) An estimation method for a battery pack that takes the inconsistency of cells into consideration is proposed, ensuring that every battery's SOC ranges from 0 to 1 and effectively avoiding overcharge/overdischarge. Online estimation of the parameters is also presented in this paper. (3) The SOC estimation accuracy of the battery pack is verified using a hardware-in-the-loop simulation platform. The experimental results at various dynamic test conditions, temperatures, and initial SOC differences between two cells demonstrate the efficacy of the proposed method.

  5. Robust estimation of the proportion of treatment effect explained by surrogate marker information.

    Science.gov (United States)

    Parast, Layla; McDermott, Mary M; Tian, Lu

    2016-05-10

    In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.

  6. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases.
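    The β-weight idea (weights near 1 for typical expressions, near 0 for outliers) can be sketched as follows. Note that the Gaussian-kernel weight form and the MAD-based robust scale used here are illustrative assumptions, not the exact minimum β-divergence estimates of the paper:

```python
import math
import statistics

def beta_weights(xs, beta=0.2):
    """beta-weight for each expression value: approximately 1 for typical
    values, approaching 0 for outliers (illustrative exponential form)."""
    mu = statistics.median(xs)                      # robust centre
    sd = statistics.median([abs(x - mu) for x in xs]) * 1.4826 or 1.0  # MAD scale
    return [math.exp(-0.5 * beta * ((x - mu) / sd) ** 2) for x in xs]

expr = [5.1, 4.9, 5.3, 5.0, 5.2, 12.0]              # last value is an outlier
w = beta_weights(expr)
print([round(v, 2) for v in w])                     # outlier weight drops toward 0
```

    Comparing each weight against a cut-off derived from the weight distribution then flags outlying expressions, as the abstract describes.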

  7. Robust Fault Estimation Design for Discrete-Time Nonlinear Systems via A Modified Fuzzy Fault Estimation Observer.

    Science.gov (United States)

    Xie, Xiang-Peng; Yue, Dong; Park, Ju H

    2018-02-01

    The paper provides relaxed designs of a fault estimation observer for nonlinear dynamical plants in the Takagi-Sugeno form. Compared with previous theoretical achievements, a modified version of the fuzzy fault estimation observer is implemented with the aid of a so-called maximum-priority-based switching law. Given each activated switching status, an appropriate group of designed matrices can be provided so as to exploit certain key properties of the considered plants by means of introducing a set of matrix-valued variables. Because more abundant information about the considered plants can be updated in due course and effectively exploited at each time instant, the obtained result is less conservative than previous theoretical achievements, and thus the main defect of those existing methods can be overcome to some extent in practice. Finally, comparative simulation studies on the classical nonlinear truck-trailer model are given to confirm the benefits of the theoretical results obtained in our study. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  8. A robust estimate of the number and characteristics of persons released from prison in Australia.

    Science.gov (United States)

    Avery, Alex; Kinner, Stuart A

    2015-08-01

    To estimate the number and characteristics of adults released from prison in Australia. We calculated ratios, stratified by age, sex and Indigenous status, by comparing the number of persons released from prison in New South Wales (NSW), with the number in NSW prisons on 30 June of the corresponding year. These stratified ratios were applied to Australia-wide prison data to estimate the number and characteristics of persons released annually. We estimated that in 2013, 38,576 persons were released from prison in Australia - 25.3% more than the daily prison population. Young people, Indigenous people and women were over-represented among those released. We estimated that 3.69 Indigenous women aged 18-24 were released annually for each equivalent person in prison; and 2.75 non-Indigenous women aged 18-24 were released annually for each equivalent person in prison. The annual 'flow' through Australia's prisons is well in excess of the daily number, but information on those moving through prison systems is not yet publicly available. The characteristics of those released from prison differ meaningfully from those of people in prison. Routine, national reporting of prison separations is critical to informing upscaling and targeting of Throughcare services for this profoundly vulnerable population. © 2015 Public Health Association of Australia.

  9. The Robustness of Designs for Trials with Nested Data against Incorrect Initial Intracluster Correlation Coefficient Estimates

    Science.gov (United States)

    Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.

    2010-01-01

    In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…

  10. A PSF-shape-based beamforming strategy for robust 2D motion estimation in ultrafast data

    NARCIS (Netherlands)

    Saris, Anne E.C.M.; Fekkes, Stein; Nillesen, Maartje; Hansen, Hendrik H.G.; de Korte, Chris L.

    2018-01-01

    This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system's point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle

  11. Robustness of Input features from Noisy Silhouettes in Human Pose Estimation

    DEFF Research Database (Denmark)

    Gong, Wenjuan; Fihl, Preben; Gonzàlez, Jordi

    2014-01-01

    ... In this paper, we explore this problem. First, we compare the performances of several image features widely used for human pose estimation against each other and select the one with the best performance. Second, the iterative closest point algorithm is introduced for a new quantitative ... of silhouette samples of different noise levels and compared with the selected feature on a public dataset: the HumanEva dataset.

  12. Optimal probabilistic energy management in a typical micro-grid based-on robust optimization and point estimate method

    International Nuclear Information System (INIS)

    Alavi, Seyed Arash; Ahmadian, Ali; Aliakbar-Golkar, Masoud

    2015-01-01

    Highlights: • Energy management is necessary in the active distribution network to reduce operation costs. • Uncertainty modeling is essential in energy management studies in active distribution networks. • The point estimate method is a suitable method for uncertainty modeling due to its lower computation time and acceptable accuracy. • In the absence of a Probability Distribution Function (PDF), robust optimization has a good ability for uncertainty modeling. - Abstract: Uncertainty can be defined as the probability of a difference between the forecasted value and the real value. The smaller this probability, the lower the operation cost of the power system. This purpose necessitates modeling of the system's random variables (such as the output power of renewable resources and the load demand) with appropriate and practicable methods. In this paper, an adequate procedure is proposed for performing optimal energy management in a typical micro-grid with regard to the relevant uncertainties. The point estimate method is applied for modeling the wind power and solar power uncertainties, and the robust optimization technique is utilized to model load demand uncertainty. Finally, a comparison is made between deterministic and probabilistic management in different scenarios, and their results are analyzed and evaluated.

  13. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    Science.gov (United States)

    Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.

  14. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    Science.gov (United States)

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images, hence to date, require manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.

  15. A PSF-Shape-Based Beamforming Strategy for Robust 2D Motion Estimation in Ultrafast Data

    Directory of Open Access Journals (Sweden)

    Anne E. C. M. Saris

    2018-03-01

    Full Text Available This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system's point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle tracking (ST) algorithm will have circular-shaped peaks, which can be interpolated using a 2D interpolation method to estimate subsample displacements. Carotid artery wall motion and parabolic blood flow simulations together with rotating disk experiments using a Verasonics Vantage 256 are used for performance evaluation. Zero-degree plane wave data were acquired using an ATL L5-12 (fc = 9 MHz) transducer for a range of pulse repetition frequencies (PRFs), resulting in 0–600 µm inter-frame displacements. The proposed methodology was compared to data beamformed on a conventionally spaced grid, combined with the commonly used 1D parabolic interpolation. The PSF-shape-based beamforming grid combined with 2D cubic interpolation showed the most accurate and stable performance over the full range of inter-frame displacements, both for the assessment of blood flow and vessel wall dynamics. The proposed methodology can be used as a protocolled way to beamform ultrafast data and obtain accurate estimates of tissue motion.
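    The subsample-displacement step can be illustrated with the classic peak-interpolation idea: fit a curve through the correlation peak and its neighbours and take the vertex. The sketch below uses a separable parabolic fit on a synthetic circular-shaped peak, as a simplified stand-in for the paper's 2D cubic interpolation:

```python
import math

def subsample_peak_2d(ccf, i, j):
    """Refine an integer cross-correlation peak (i, j) to subsample precision
    using a separable 3-point parabolic fit along each axis."""
    def vertex(m1, m0, p1):
        d = m1 - 2 * m0 + p1
        return 0.0 if d == 0 else 0.5 * (m1 - p1) / d
    di = vertex(ccf[i - 1][j], ccf[i][j], ccf[i + 1][j])
    dj = vertex(ccf[i][j - 1], ccf[i][j], ccf[i][j + 1])
    return i + di, j + dj

# Synthetic CCF with a circular (Gaussian) peak centred at (2.3, 1.6).
ccf = [[math.exp(-((x - 2.3) ** 2 + (y - 1.6) ** 2) / 2.0) for y in range(4)]
       for x in range(5)]
peak = max(((i, j) for i in range(1, 4) for j in range(1, 3)),
           key=lambda p: ccf[p[0]][p[1]])
print(subsample_peak_2d(ccf, *peak))  # close to (2.3, 1.6)
```

    With circular-shaped CCF peaks, such symmetric interpolation works equally well along both axes, which is the motivation behind the PSF-shape-based grid.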

  16. Robust estimates of the impact of broadcasting on match attendance in football

    OpenAIRE

    B Buraimo; D Forrest; R Simmons

    2006-01-01

    The paper employs data from 2,884 matches, of which 158 were televised, in the second tier of English football (currently known as The Football League Championship). It builds a model of the determinants of attendance that is designed to yield estimates of the proportionate changes in the size of crowds resulting from games being shown on either free-to-air or subscription based channels. The model has two innovatory features. First, it controls for the market size of home and away teams very...

  17. Robust estimation and forecasting of the long-term seasonal component of electricity spot prices

    International Nuclear Information System (INIS)

    Nowotarski, Jakub; Tomczyk, Jakub; Weron, Rafał

    2013-01-01

    We present the results of an extensive study on estimation and forecasting of the long-term seasonal component (LTSC) of electricity spot prices. We consider a battery of over 300 models, including monthly dummies and models based on Fourier or wavelet decomposition combined with linear or exponential decay. We find that the considered wavelet-based models are significantly better in terms of forecasting spot prices up to a year ahead than the commonly used monthly dummies and sine-based models. This result questions the validity and usefulness of stochastic models of spot electricity prices built on the latter two types of LTSC models. - Highlights: • First comprehensive study on the forecasting of the long-term seasonal components • Over 300 models examined, including commonly used and new approaches • Wavelet-based models outperform sine-based and monthly dummy models. • Validity of stochastic models built on sines or monthly dummies is questionable
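    A sine-based LTSC model of the kind used as a baseline in this study can be fit by ordinary least squares. A minimal sketch on synthetic spot prices with one annual harmonic (the wavelet models of the paper are not shown; the data here are fabricated for illustration):

```python
import math

def fit_ltsc(prices, period=365.0):
    """Least-squares fit of a sine-based long-term seasonal component:
    y(t) = a + b*sin(2*pi*t/period) + c*cos(2*pi*t/period).
    Solves the 3x3 normal equations by Gauss-Jordan elimination."""
    w = 2 * math.pi / period
    X = [[1.0, math.sin(w * t), math.cos(w * t)] for t in range(len(prices))]
    n = 3
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(X, prices)) for i in range(n)]
    for i in range(n):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        b[i] /= piv
        for k in range(n):
            if k != i:
                f = A[k][i]
                A[k] = [vk - f * vi for vk, vi in zip(A[k], A[i])]
                b[k] -= f * b[i]
    return b  # [level, sine coeff, cosine coeff]

# Two years of synthetic prices: level 50 with an annual cycle of amplitude 10.
prices = [50 + 10 * math.sin(2 * math.pi * t / 365.0) for t in range(730)]
a, bs, bc = fit_ltsc(prices)
print(round(a, 2), round(bs, 2), round(bc, 2))  # level ~50, sine amplitude ~10
```

    Forecasting the LTSC then amounts to evaluating the fitted deterministic component beyond the sample, which is what the study compares against wavelet-based alternatives.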

  18. Robust node estimation and topology discovery for large-scale networks

    KAUST Repository

    Alouini, Mohamed-Slim

    2017-02-23

    Various examples are provided for node estimation and topology discovery for networks. In one example, a method includes receiving a packet having an identifier from a first node; adding the identifier to another transmission packet based on a comparison between the first identifier and existing identifiers associated with the other packet; adjusting a transmit probability based on the comparison; and transmitting the other packet based on a comparison between the transmit probability and a probability distribution. In another example, a system includes a network device that can add an identifier received in a packet to a list including existing identifiers and adjust a transmit probability based on a comparison between the identifiers, and transmit another packet based on a comparison between the transmit probability and a probability distribution. In another example, a method includes determining a quantity of sensor devices based on a plurality of identifiers received in a packet.

  19. Robust node estimation and topology discovery for large-scale networks

    KAUST Repository

    Alouini, Mohamed-Slim; Douik, Ahmed S.; Aly, Salah A.; Al-Naffouri, Tareq Y.

    2017-01-01

    Various examples are provided for node estimation and topology discovery for networks. In one example, a method includes receiving a packet having an identifier from a first node; adding the identifier to another transmission packet based on a comparison between the first identifier and existing identifiers associated with the other packet; adjusting a transmit probability based on the comparison; and transmitting the other packet based on a comparison between the transmit probability and a probability distribution. In another example, a system includes a network device that can add an identifier received in a packet to a list including existing identifiers and adjust a transmit probability based on a comparison between the identifiers, and transmit another packet based on a comparison between the transmit probability and a probability distribution. In another example, a method includes determining a quantity of sensor devices based on a plurality of identifiers received in a packet.

  20. A robust new metric of phenotypic distance to estimate and compare multiple trait differences among populations

    Directory of Open Access Journals (Sweden)

    Rebecca SAFRAN, Samuel FLAXMAN, Michael KOPP, Darren E. IRWIN, Derek BRIGGS, Matthew R. EVANS, W. Chris FUNK, David A. GRAY, Eileen A. HEBE

    2012-06-01

    Full Text Available Whereas a rich literature exists for estimating population genetic divergence, metrics of phenotypic trait divergence are lacking, particularly for comparing multiple traits among three or more populations. Here, we review and analyze via simulation Hedges’ g, a widely used parametric estimate of effect size. Our analyses indicate that g is sensitive to a combination of unequal trait variances and unequal sample sizes among populations and to changes in the scale of measurement. We then go on to derive and explain a new, non-parametric distance measure, “Δp”, which is calculated based upon a joint cumulative distribution function (CDF) from all populations under study. More precisely, distances are measured in terms of the percentiles in this CDF at which each population’s median lies. Δp combines many desirable features of other distance metrics into a single metric; namely, compared to other metrics, Δp is relatively insensitive to unequal variances and sample sizes among the populations sampled. Furthermore, a key feature of Δp, and our main motivation for developing it, is that it easily accommodates simultaneous comparisons of any number of traits across any number of populations. To exemplify its utility, we employ Δp to address a question related to the role of sexual selection in speciation: are sexual signals more divergent than ecological traits in closely related taxa? Using traits of known function in closely related populations, we show that traits predictive of reproductive performance are, indeed, more divergent and more sexually dimorphic than traits related to ecological adaptation [Current Zoology 58 (3): 423–436, 2012].
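A minimal sketch of the Δp computation as the abstract describes it, for a single trait: pool all populations into a joint empirical CDF, locate each population's median as a percentile in it, and take percentile differences as distances. Tie handling and the pairwise-matrix presentation are assumptions beyond the abstract:

```python
def delta_p(*populations):
    """Non-parametric distance Δp: each population's median is located
    as a percentile of the joint (pooled) empirical CDF, and distances
    are absolute differences of those percentiles."""
    pooled = sorted(x for pop in populations for x in pop)
    n = len(pooled)

    def joint_cdf(x):
        # Empirical CDF of the pooled sample.
        return sum(1 for v in pooled if v <= x) / n

    def median(pop):
        s = sorted(pop)
        m = len(s)
        return s[m // 2] if m % 2 else 0.5 * (s[m // 2 - 1] + s[m // 2])

    percentiles = [joint_cdf(median(pop)) for pop in populations]
    # Pairwise Δp matrix of absolute percentile differences.
    return [[abs(a - b) for b in percentiles] for a in percentiles]
```

For two well-separated populations the medians sit at distant percentiles of the pooled CDF, so Δp approaches 1; identical populations give Δp near 0, regardless of the measurement scale.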

  1. A Robust Transform Estimator Based on Residual Analysis and Its Application on UAV Aerial Images

    Directory of Open Access Journals (Sweden)

    Guorong Cai

    2018-02-01

    Full Text Available Estimating the transformation between two images from the same scene is a fundamental step for image registration, image stitching and 3D reconstruction. State-of-the-art methods are mainly based on sorted residuals for generating hypotheses. This scheme has acquired encouraging results in many remote sensing applications. Unfortunately, mainstream residual-based methods may fail in estimating the transform between Unmanned Aerial Vehicle (UAV) low altitude remote sensing images, due to the fact that UAV images always have repetitive patterns and severe viewpoint changes, which produce a lower inlier rate and a higher pseudo-outlier rate than other tasks. We performed extensive experiments and found the main reason is that these methods compute feature pair similarity within a fixed window, making them sensitive to the size of the residual window. To solve this problem, three schemes based on the distribution of residuals are proposed, called Relational Window (RW), Sliding Window (SW), and Reverse Residual Order (RRO), respectively. Specifically, RW employs a relaxation residual window size to evaluate the highest similarity within a relaxation model length. SW fixes the number of overlap models while varying the length of the window size. RRO takes the permutation of residual values into consideration to measure similarity, not only including the number of overlap structures, but also giving a penalty to the reverse number within the overlap structures. Experimental results conducted on our own UAV high resolution remote sensing images show that the proposed three strategies all outperform traditional methods in the presence of severe perspective distortion due to viewpoint change.

  2. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970
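Sample sizes like those quoted come from a standard power calculation for detecting a fractional slowing of the mean rate of change between two arms. A generic version of that calculation (not the paper's exact inputs, which are not reproduced in the abstract) is:

```python
import math
from statistics import NormalDist

def n_per_arm(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
    """Two-arm sample size to detect a fractional reduction in the mean
    annualized change, with a two-sided z-test. Standard formula:
    n = 2 * (sd * (z_{1-alpha/2} + z_power) / effect)^2 per arm."""
    z = NormalDist().inv_cdf
    effect = reduction * mean_change  # treatment slows change by `reduction`
    return math.ceil(2 * (sd_change * (z(1 - alpha / 2) + z(power)) / effect) ** 2)
```

Smaller required n for a fixed effect means a more sensitive change measure, which is how the paper compares analysis techniques by statistical power.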

  3. The use of ATSR active fire counts for estimating relative patterns of biomass burning - A study from the boreal forest region

    NARCIS (Netherlands)

    Kasischke, Eric S.; Hewson, Jennifer H.; Stocks, Brian; van der Werf, Guido; Randerson, James T.

    2003-01-01

    Satellite fire products have the potential to construct inter-annual time series of fire activity, but estimating area burned requires considering biases introduced by orbiting geometry, fire behavior, and the presence of clouds and smoke. Here we evaluated the performance of fire counts from the

  4. Variance-Constrained Robust Estimation for Discrete-Time Systems with Communication Constraints

    Directory of Open Access Journals (Sweden)

    Baofeng Wang

    2014-01-01

    Full Text Available This paper is concerned with a new filtering problem in networked control systems (NCSs) subject to limited communication capacity, which includes measurement quantization, random transmission delay, and packet loss. The measurements are first quantized via a logarithmic quantizer and then transmitted through a digital communication network with random delay and packet loss. The three communication-constraint phenomena, which can be seen as a class of uncertainties, are formulated as a stochastic parameter uncertainty system. The purpose of the paper is to design a linear filter such that, for all the communication constraints, the error state of the filtering process is mean square bounded and the steady-state variance of the estimation error for each state is not more than the individual prescribed upper bound. It is shown that the desired filtering can effectively be solved if there are positive definite solutions to a couple of algebraic Riccati-like inequalities or linear matrix inequalities. Finally, an illustrative numerical example is presented to demonstrate the effectiveness and flexibility of the proposed design approach.

  5. Model reduction and frequency residuals for a robust estimation of nonlinearities in subspace identification

    Science.gov (United States)

    De Filippis, G.; Noël, J. P.; Kerschen, G.; Soria, L.; Stephan, C.

    2017-09-01

    The introduction of the frequency-domain nonlinear subspace identification (FNSI) method in 2013 constitutes one in a series of recent attempts toward developing a realistic, first-generation framework applicable to complex structures. Although this method showed promising capabilities when applied to academic structures, it is still confronted with a number of limitations which need to be addressed. In particular, the removal of nonphysical poles in the identified nonlinear models is a distinct challenge. In the present paper, it is proposed as a first contribution to operate directly on the identified state-space matrices to carry out spurious pole removal. A modal-space decomposition of the state and output matrices is examined to discriminate genuine from numerical poles, prior to estimating the extended input and feedthrough matrices. The final state-space model thus contains physical information only and naturally leads to nonlinear coefficients free of spurious variations. Besides spurious variations due to nonphysical poles, vibration modes lying outside the frequency band of interest may also produce drifts of the nonlinear coefficients. The second contribution of the paper is to include residual terms, accounting for the existence of these modes. The proposed improved FNSI methodology is validated numerically and experimentally using a full-scale structure, the Morane-Saulnier Paris aircraft.

  6. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as how the pollution concentrations are estimated and how spatial autocorrelation is modelled. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
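The model-averaging idea can be sketched in a few lines: each candidate model contributes its pollution-health effect estimate, weighted by an approximate posterior model probability, and the averaged variance carries both within-model and between-model uncertainty. Deriving the weights from BIC is a common approximation and an assumption here, not necessarily the paper's exact scheme:

```python
import math

def bma_effect(estimates, variances, bics):
    """Bayesian model averaging over candidate models: combine effect
    estimates with BIC-based weights (a standard approximation to
    posterior model probabilities). Returns the averaged effect, its
    total variance (within-model + between-model), and the weights."""
    m = min(bics)
    w = [math.exp(-0.5 * (b - m)) for b in bics]
    s = sum(w)
    w = [x / s for x in w]
    mean = sum(wi * e for wi, e in zip(w, estimates))
    # Total variance adds the spread of estimates across models.
    var = sum(wi * (v + (e - mean) ** 2)
              for wi, v, e in zip(w, variances, estimates))
    return mean, var, w
```

Because the between-model term inflates the variance when models disagree, the averaged effect is more honest about model uncertainty than any single model's confidence interval.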

  7. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Shangjie [Tianjin Key Laboratory of Process Measurement and Control, School of Electrical Engineering and Automation, Tianjin University, Tianjin (China); Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Hara, Wendy; Wang, Lei; Buyyounouski, Mark K.; Le, Quynh-Thu; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Li, Ruijiang, E-mail: rli2@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States)

    2017-03-15

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.

  8. Fast and accurate phylogenetic reconstruction from high-resolution whole-genome data and a novel robustness estimator.

    Science.gov (United States)

    Lin, Y; Rajan, V; Moret, B M E

    2011-09-01

    The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results: an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off thresholds between false positives and false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.

  9. Population estimates and geographical distributions of swans and geese in East Asia based on counts during the non-breeding season

    DEFF Research Database (Denmark)

    Jia, Qiang; Koyama, Kazuo; Choi, Chang-Yong

    2016-01-01

    For the first time, we estimated the population sizes of two swan species and four goose species from observations during the non-breeding period in East Asia. Based on combined counts from South Korea, Japan and China, we estimated the total abundance of these species as follows: 42,000–47,000 Whooper Swans Cygnus cygnus; 99,000–141,000 Tundra Swans C. columbianus bewickii; 56,000–98,000 Swan Geese Anser cygnoides; 157,000–194,000 Bean Geese A. fabalis; 231,000–283,000 Greater White-fronted Geese A. albifrons; and 14,000–19,000 Lesser White-fronted Geese A. erythropus. While the count data from Korea and Japan provide a good reflection of numbers present, there remain gaps in the coverage in China, which particularly affect the precision of the estimates for Bean, Greater and Lesser White-fronted Geese as well as Tundra Swans. Lack of subspecies distinction of Bean Geese in China until…

  10. The use of comparative 137Cs body burden estimates from environmental data/models and whole body counting to evaluate diet models for the ingestion pathway

    International Nuclear Information System (INIS)

    Robison, W.L.; Sun, C.

    1997-01-01

    Rongelap and Utirik Atolls were contaminated on 1 March 1954, by a U.S. nuclear test at Bikini Atoll code named BRAVO. The people at both atolls were removed from their atolls in the first few days after the detonation and were returned to their atolls at different times. Detailed studies have been carried out over the years by Lawrence Livermore National Laboratory (LLNL) to determine the radiological conditions at the atolls and estimate the doses to the populations. The contribution of each exposure pathway and radionuclide has been evaluated. All dose assessments show that the major potential contribution to the estimated dose is 137Cs uptake via the terrestrial food chain. Brookhaven National Laboratory (BNL) has carried out an extensive whole body counting program at both atolls over several years to directly measure the 137Cs body burden. Here we compare the estimates of the body burdens from the LLNL environmental method with body burdens measured by the BNL whole body counting method. The combination of the results from both methods is used to evaluate proposed diet models to establish more realistic dose assessments. Very good agreement is achieved between the two methods with a diet model that includes both local and imported foods. Other diet models greatly overestimate the body burdens (i.e., dose) observed by whole body counting. The upper 95% confidence limit of interindividual variability around the population mean value based on the environmental method is similar to that calculated from direct measurement by whole body counting. Moreover, the uncertainty in the population mean value based on the environmental method is in very good agreement with the whole body counting data. This provides additional confidence in extrapolating the estimated doses calculated by the environmental method to other islands and atolls. 46 refs., 8 figs., 5 tabs

  11. RBC count

    Science.gov (United States)

    … by kidney disease); RBC destruction (hemolysis) due to transfusion, blood vessel injury, or other cause; leukemia; malnutrition; bone … (slight risk any time the skin is broken). Alternative names: erythrocyte count; red blood cell count; anemia - RBC count.

  12. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    Science.gov (United States)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

    In a simultaneous source survey, no limitation is required for the shot scheduling of nearby sources and thus a huge acquisition efficiency can be obtained but at the same time making the recorded seismic data contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contains multiple dips, e.g., containing multiple reflections. The multi-dip seislet frame strategy solves the conflicting dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders, then a fairly accurate slope estimation can be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis over both numerical synthetic and field data examples.

  13. Use of the robust design to estimate seasonal abundance and demographic parameters of a coastal bottlenose dolphin (Tursiops aduncus) population.

    Directory of Open Access Journals (Sweden)

    Holly C Smith

    Full Text Available As delphinid populations become increasingly exposed to human activities we rely on our capacity to produce accurate abundance estimates upon which to base management decisions. This study applied mark-recapture methods following the Robust Design to estimate abundance, demographic parameters, and temporary emigration rates of an Indo-Pacific bottlenose dolphin (Tursiops aduncus) population off Bunbury, Western Australia. Boat-based photo-identification surveys were conducted year-round over three consecutive years along pre-determined transect lines to create a consistent sampling effort throughout the study period and area. The best fitting capture-recapture model showed a population with a seasonal Markovian temporary emigration with time-varying survival and capture probabilities. Abundance estimates were seasonally dependent, with consistently lower numbers obtained during winter and higher during summer and autumn across the three-year study period. Specifically, abundance estimates for all adults and juveniles (combined) varied from a low of 63 (95% CI 59 to 73) in winter of 2007 to a high of 139 (95% CI 134 to 148) in autumn of 2009. Temporary emigration rates (γ′) for animals absent in the previous period ranged from 0.34 to 0.97 (mean = 0.54, ±SE 0.11), with a peak during spring. Temporary emigration rates for animals present during the previous period (γ″) were lower, ranging from 0.00 to 0.29, with a mean of 0.16 (±SE 0.04). This model yielded a mean apparent survival estimate for juveniles and adults (combined) of 0.95 (±SE 0.02) and a capture probability from 0.07 to 0.51 with a mean of 0.30 (±SE 0.04). This study demonstrates the importance of incorporating temporary emigration to accurately estimate abundance of coastal delphinids. Temporary emigration rates were high in this study, despite the large area surveyed, indicating the challenges of sampling highly mobile animals which range over large spatial areas.

  14. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  15. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product is finding the best composition

  16. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
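The core of the counting statistics the report describes is the Poisson model: the standard deviation of a count N is √N, and uncertainties propagate in quadrature when counts are combined. A minimal sketch for a background-subtracted net count rate:

```python
import math

def net_rate_and_uncertainty(gross_counts, t_gross, bkg_counts, t_bkg):
    """Poisson counting statistics: sigma(N) = sqrt(N), so the gross and
    background rate uncertainties add in quadrature in the net rate:
    sigma_net = sqrt(N_g / t_g^2 + N_b / t_b^2)."""
    r_net = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return r_net, sigma
```

Counting longer reduces the relative error as 1/√t, which is the basic lever for the counting-error reduction methods the report covers.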

  17. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)

  18. Radiation counting statistics

    International Nuclear Information System (INIS)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs

  19. Robust small area estimation of poverty indicators using M-quantile approach (Case study: Sub-district level in Bogor district)

    Science.gov (United States)

    Girinoto, Sadik, Kusman; Indahwati

    2017-03-01

    The National Socio-Economic Survey samples are designed to produce estimates of parameters of planned domains (provinces and districts). The estimation of unplanned domains (sub-districts and villages) has its limitations for obtaining reliable direct estimates. One of the possible solutions to overcome this problem is employing small area estimation techniques. The popular choice of small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. As an alternative approach for this purpose, the M-quantile regression approach to small area estimation is based on modeling specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It obtains outlier-robust estimation from an influence function of M-estimator type and does not need strong distributional assumptions. In this paper, the aim of the study is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Villages Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor district. We also compare the results with direct estimates. The results showed the framework may be preferable when direct estimates show no incidence of poverty at all in the small area.

  20. Estimation of the degree of hydration of blended cement pastes by a scanning electron microscope point-counting procedure

    International Nuclear Information System (INIS)

    Feng, X.; Garboczi, E.J.; Bentz, D.P.; Stutzman, P.E.; Mason, T.O.

    2004-01-01

    A scanning electron microscope (SEM) point-counting technique was employed to study the hydration of plain portland and blended cement pastes containing fly ash or slag. For plain portland cement pastes, the results for the degree of cement hydration obtained by the SEM point-counting technique were consistent with the results from the traditional loss-on-ignition (LOI) of nonevaporable water-content measurements; agreement was within ±10%. The standard deviation in the determination of the degree of cement hydration via point counting ranged from ±1.5% to ±1.8% (one operator, one sample). For the blended cement pastes, it is the first time that the degree of hydration of cement in blended systems has been studied directly. The standard deviation for the degree of hydration of cement in the blended cement pastes ranged from ±1.4% to ±2.2%. Additionally, the degrees of reaction of the mineral admixtures (MAs) were also measured. The standard deviation for the degree of fly ash reaction was ±4.6% to ±5.0% and ±3.6% to ±4.3% for slag. All of the analyses suggest that the SEM point-counting technique can be a reliable and effective analysis tool for use in studies of the hydration of blended cement pastes
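The point-counting estimate and its quoted ±1.4–2.2% standard deviations follow binomial sampling statistics: the phase fraction is the fraction of grid points landing on the phase, with standard error √(p(1−p)/n). A minimal sketch; expressing the degree of hydration as the relative loss of unreacted cement is a standard relation assumed here, not quoted from the paper:

```python
import math

def point_count_fraction(hits, total_points):
    """Phase (area) fraction from SEM point counting, with its binomial
    standard error - the basis of the ~±1.5-2% uncertainties reported."""
    p = hits / total_points
    return p, math.sqrt(p * (1 - p) / total_points)

def degree_of_hydration(unreacted_fraction, initial_fraction):
    """Degree of hydration as the relative loss of unreacted cement
    (assumed standard relation): 1 - V_unreacted / V_initial."""
    return 1.0 - unreacted_fraction / initial_fraction
```

Counting more grid points shrinks the standard error as 1/√n, which is how the operator trades counting time for precision.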

  1. A Field Evaluation of the Time-of-Detection Method to Estimate Population Size and Density for Aural Avian Point Counts

    Directory of Open Access Journals (Sweden)

    Mathew W. Alldredge

    2007-12-01

    Full Text Available The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture-recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at high and low homogeneous rates per interval with those singing at high and low heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant
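Viewing the subinterval detection histories as closed capture-recapture data, the simplest abundance estimator is the M0 model, which assumes one constant per-interval detection probability. A fixed-point sketch of that estimator, as an illustration of the capture-recapture view rather than the authors' exact analysis:

```python
def estimate_N_m0(histories, iters=200):
    """Closed-population M0 abundance estimator from detection histories
    (one tuple of 0/1 per detected bird, one entry per subinterval).
    Solves N = n / (1 - (1-p)^T) with p = d / (N*T) by fixed-point
    iteration; n = distinct birds detected, d = total detections."""
    n = len(histories)                  # distinct individuals detected
    T = len(histories[0])               # number of subintervals
    d = sum(sum(h) for h in histories)  # total detections over all intervals
    N = float(n)
    p = d / (N * T)
    for _ in range(iters):
        p = d / (N * T)                 # per-interval detection probability
        N = n / (1 - (1 - p) ** T)      # expected distinct individuals
    return N, p
```

Heterogeneous singing rates violate the constant-p assumption and bias such estimators downward, which matches the underestimation the field test reports for heterogeneous species.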

  2. Data-Driven Robust RVFLNs Modeling of a Blast Furnace Iron-Making Process Using Cauchy Distribution Weighted M-Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ping; Lv, Youbin; Wang, Hong; Chai, Tianyou

    2017-09-01

    Optimal operation of a practical blast furnace (BF) ironmaking process depends largely on a good measurement of molten iron quality (MIQ) indices. However, measuring the MIQ online is not feasible using the available techniques. In this paper, a novel data-driven robust modeling method is proposed for online estimation of MIQ using improved random vector functional-link networks (RVFLNs). Since the output weights of traditional RVFLNs are obtained by the least squares approach, a robustness problem may occur when the training dataset is contaminated with outliers. This degrades the modeling accuracy of RVFLNs. To solve this problem, a Cauchy distribution weighted M-estimation based robust RVFLNs is proposed. Since the weights of different outlier data are properly determined by the Cauchy distribution, their corresponding contributions to the model can be properly distinguished, and thus robust and more accurate modeling results can be achieved. Moreover, given that the BF is a complex nonlinear system with numerous coupled variables, data-driven canonical correlation analysis is employed to identify the most influential components from the multitudinous factors that affect the MIQ indices, reducing the model dimension. Finally, experiments using industrial data and comparative studies have demonstrated that the obtained model produces better modeling and estimation accuracy and stronger robustness than other modeling methods.
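
The Cauchy-weighted M-estimation idea can be sketched as iteratively reweighted least squares, in which large residuals are down-weighted by a Cauchy weight function. This is a generic sketch: the tuning constant and the MAD-based scale estimate are conventional choices, not necessarily the paper's exact formulation:

```python
import numpy as np

def cauchy_irls(X, y, c=2.385, n_iter=50):
    """Robust linear fit by iteratively reweighted least squares with
    Cauchy weights w_i = 1 / (1 + (r_i / (c*s))^2), where s is a robust
    scale estimate (normalized MAD of the residuals)."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = 1.0 / (1.0 + (r / (c * s)) ** 2)
    return beta
```

Because the weights decay smoothly with residual size, gross outliers contribute almost nothing to the final fit, which is the property exploited for the contaminated BF training data.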

  3. Counting carbohydrates

    Science.gov (United States)

    Carb counting; Carbohydrate-controlled diet; Diabetic diet; Diabetes-counting carbohydrates ... Many foods contain carbohydrates (carbs), including: Fruit and fruit juice Cereal, bread, pasta, and rice Milk and milk products, soy milk Beans, legumes, ...

  4. Estimation of absolute microglial cell numbers in mouse fascia dentata using unbiased and efficient stereological cell counting principles

    DEFF Research Database (Denmark)

    Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente

    2003-01-01

    Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected...... of microglia, although with this thickness, the intensity of the staining is too high to distinguish single cells. Lectin histochemistry does not visualize microglia throughout the section and, accordingly, is not suited for the optical fractionator. The mean total number of Mac-1+ microglial cells...... in the unilateral dentate gyrus of the normal young adult male C57BL/6 mouse was estimated to be 12,300 (coefficient of variation (CV)=0.13) with a mean coefficient of error (CE) of 0.06. The perspective of estimating microglial cell numbers using stereology is to establish a solid basis for studying the dynamics...

  5. Estimating the impact of somatic cell count on the value of milk utilising parameters obtained from the published literature.

    Science.gov (United States)

    Geary, Una; Lopez-Villalobos, Nicolas; O'Brien, Bernadette; Garrick, Dorian J; Shalloo, Laurence

    2014-05-01

    The impact of mastitis on milk value per litre independent of the effect of mastitis on milk volume, was quantified for Ireland using a meta-analysis and a processing sector model. Changes in raw milk composition, cheese processing and composition associated with increased bulk milk somatic cell count (BMSCC) were incorporated into the model. Processing costs and market values were representative of current industry values. It was assumed that as BMSCC increased (i) milk fat and milk protein increased and milk lactose decreased, (ii) fat and protein recoveries decreased, (iii) cheese protein decreased and cheese moisture increased. Five BMSCC categories were examined from ⩽100 000 to >400 000 cells/ml. The analysis showed that as BMSCC increased the production quantities reduced. An increase in BMSCC from 100 000 to >400 000 cells/ml saw a reduction in net revenue of 3·2% per annum (€51·3 million) which corresponded to a reduction in the value of raw milk of €0·0096 cents/l.

  6. Identification and quantitative grade estimation of Uranium mineralization based on gross-count gamma ray log at Lemajung sector West Kalimantan

    International Nuclear Information System (INIS)

    Adi Gunawan Muhammad

    2014-01-01

    Lemajung sector is one of the uranium potential sectors in the Kalan area, West Kalimantan. Uranium mineralization is found in metasiltstone and schistose metapelite rock, with a general east-west direction of mineralization tilted ±70° to the north, parallel with the schistosity pattern (S1). A drilling evaluation was carried out in 2013 in the Lemajung sector at hole R-05 (LEML-40), logged with gross-count gamma ray. The purpose of this activity is to determine the grade of uranium mineralization in the rocks quantitatively and to determine the geological conditions surrounding the drilling area. The methodology involves determining the value of the k-factor, geological mapping around the drill hole, and determination of the thickness and grade of uranium mineralization from the gross-count gamma ray log. From the quantitative grade estimation using the gross-count gamma ray log, the highest %eU_3O_8 in hole R-05 (LEML-40) reaches 0.7493 (≈6354 ppm eU), found at a depth interval of 30.1 to 34.96 m. Uranium mineralization is present as fracture filling (veins) or tectonic breccia matrix filling in metasiltstone, with thickness from 0.10 to 2.40 m, associated with sulphide (pyrite) and characterized by a high U/Th ratio. (author)

  7. Filling the gaps: Using count survey data to predict bird density distribution patterns and estimate population sizes

    NARCIS (Netherlands)

    Sierdsema, H.; van Loon, E.E.

    2008-01-01

    Birds play an increasingly prominent role in politics, nature conservation and nature management. As a consequence, up-to-date and reliable spatial estimates of bird distributions over large areas are in high demand. The requested bird distribution maps are however not easily obtained. Intensive

  8. Oil Palm Counting and Age Estimation from WorldView-3 Imagery and LiDAR Data Using an Integrated OBIA Height Model and Regression Analysis

    Directory of Open Access Journals (Sweden)

    Hossein Mojaddadi Rizeei

    2018-01-01

    Full Text Available The current study proposes a new method for oil palm age estimation and counting. A support vector machine (SVM) algorithm for object-based image analysis (OBIA) was implemented for oil palm counting. It was integrated with a height model and multiregression methods to accurately estimate the age of trees based on their heights in five different plantation blocks. Multiregression and multi-kernel-size models were examined over five different oil palm plantation blocks to achieve the most optimized model for age estimation. A sensitivity analysis was conducted on four SVM kernel types (sigmoid (SIG), linear (LN), radial basis function (RBF), and polynomial (PL)) with their associated parameters (threshold values, gamma γ, and penalty factor c) to obtain the optimal OBIA classification approach for each plantation block. Very high-resolution imagery from WorldView-3 (WV-3) and light detection and ranging (LiDAR) data were used for oil palm detection and age assessment. The results of oil palm detection had an overall accuracy of 98.27%, 99.48%, 99.28%, 99.49%, and 97.49% for blocks A, B, C, D, and E, respectively. Moreover, the accuracy of the age estimation analysis was 90.1% for 3-year-old, 87.9% for 4-year-old, 88.0% for 6-year-old, 87.6% for 8-year-old, 79.1% for 9-year-old, and 76.8% for 22-year-old trees. Overall, the study revealed that remote sensing techniques can be useful to monitor and detect oil palm plantations for sustainable agricultural management.

  9. Estimation of deep infiltration in unsaturated limestone environments using cave LiDAR and drip count data

    OpenAIRE

    K. Mahmud; G. Mariethoz; A. Baker; P. C. Treble; M. Markowska; E. McGuire

    2015-01-01

    Limestone aeolianites constitute karstic aquifers covering much of the western and southern Australian coastal fringe. They are a key groundwater resource for a range of industries such as winery and tourism, and provide important ecosystem services such as habitat for stygofauna. Moreover, recharge estimation is important for understanding the water cycle, for contaminant transport, for water management and for stalagmite-based paleoclimate reconstructions. Caves offer a na...

  10. Counting cormorants

    DEFF Research Database (Denmark)

    Bregnballe, Thomas; Carss, David N; Lorentsen, Svein-Håkon

    2013-01-01

    This chapter focuses on Cormorant population counts for both summer (i.e. breeding) and winter (i.e. migration, winter roosts) seasons. It also explains differences in the data collected from undertaking ‘day’ versus ‘roost’ counts, gives some definitions of the term ‘numbers’, and presents two...

  11. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
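
In the image domain, multi-energy material decomposition amounts to solving, per voxel, a small linear system built from the calibrated basis matrix. The study uses a maximum a posteriori estimator; the sketch below substitutes plain least squares on a hypothetical 5-bin, 3-material basis matrix (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical basis matrix: rows = 5 energy bins, columns = effective
# attenuation of (gadolinium, calcium, water) per unit concentration.
# The jump in the Gd column mimics leveraging its k-edge.
M = np.array([[ 8.0, 1.2, 0.30],
              [ 5.5, 0.9, 0.25],
              [12.0, 1.5, 0.28],
              [ 9.0, 1.1, 0.22],
              [ 7.0, 0.8, 0.20]])

true_mix = np.array([0.03, 0.20, 1.00])  # Gd, Ca, water amounts
measured = M @ true_mix                  # idealized noiseless measurement

est, *_ = np.linalg.lstsq(M, measured, rcond=None)
```

With noisy data, a prior on the concentrations (the MAP step) or non-negativity constraints stabilizes the solve; accuracy then hinges on how M was calibrated, which is exactly what the study varies.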

  12. Kendall-Theil Robust Line (KTRLine--version 1.0)-A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    Science.gov (United States)

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. 
The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
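
The estimator described above is straightforward to compute directly. A minimal sketch of the single-line model (an O(n²) implementation, adequate for modest sample sizes):

```python
import numpy as np
from itertools import combinations

def kendall_theil_line(x, y):
    """Kendall-Theil (Theil-Sen) robust line: the slope is the median of
    all pairwise slopes, and the intercept is chosen so the line passes
    through the medians of x and y."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = float(np.median(slopes))
    intercept = float(np.median(y)) - slope * float(np.median(x))
    return slope, intercept
```

The resistance to outliers follows directly from the construction: a single wild point perturbs only n-1 of the n(n-1)/2 pairwise slopes, leaving the median unchanged.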

  13. Cell type specific DNA methylation in cord blood: A 450K-reference data set and cell count-based validation of estimated cell type composition.

    Science.gov (United States)

    Gervin, Kristina; Page, Christian Magnus; Aass, Hans Christian D; Jansen, Michelle A; Fjeldstad, Heidi Elisabeth; Andreassen, Bettina Kulle; Duijts, Liesbeth; van Meurs, Joyce B; van Zelm, Menno C; Jaddoe, Vincent W; Nordeng, Hedvig; Knudsen, Gunn Peggy; Magnus, Per; Nystad, Wenche; Staff, Anne Cathrine; Felix, Janine F; Lyle, Robert

    2016-09-01

    Epigenome-wide association studies of prenatal exposure to different environmental factors are becoming increasingly common. These studies are usually performed in umbilical cord blood. Since blood comprises multiple cell types with specific DNA methylation patterns, confounding caused by cellular heterogeneity is a major concern. This can be adjusted for using reference data consisting of DNA methylation signatures in cell types isolated from blood. However, the most commonly used reference data set is based on blood samples from adult males and is not representative of the cell type composition in neonatal cord blood. The aim of this study was to generate a reference data set from cord blood to enable correct adjustment of the cell type composition in samples collected at birth. The purity of the isolated cell types was very high for all samples (>97.1%), and clustering analyses showed distinct grouping of the cell types according to hematopoietic lineage. We explored whether this cord blood and the adult peripheral blood reference data sets impact the estimation of cell type composition in cord blood samples from an independent birth cohort (MoBa, n = 1092). This revealed significant differences for all cell types. Importantly, comparison of the cell type estimates against matched cell counts both in the cord blood reference samples (n = 11) and in another independent birth cohort (Generation R, n = 195), demonstrated moderate to high correlation of the data. This is the first cord blood reference data set with a comprehensive examination of the downstream application of the data through validation of estimated cell types against matched cell counts.
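
Reference-based cell type estimation reduces, in its simplest form, to expressing a mixed sample's methylation profile as a weighted combination of the reference profiles. A toy sketch with made-up profiles (actual methods such as Houseman's add constrained optimization and careful CpG selection):

```python
import numpy as np

# Rows = CpG sites, columns = cell types; entries are methylation betas
# (all values invented for illustration).
ref = np.array([[0.9, 0.1, 0.5],
                [0.2, 0.8, 0.4],
                [0.7, 0.3, 0.9],
                [0.1, 0.6, 0.2],
                [0.5, 0.9, 0.1]])

true_props = np.array([0.5, 0.3, 0.2])
sample = ref @ true_props          # idealized mixed signal

est, *_ = np.linalg.lstsq(ref, sample, rcond=None)
est = np.clip(est, 0, None)        # proportions cannot be negative
est = est / est.sum()              # and should sum to one
```

The study's point is that the estimate is only as good as the reference: an adult-blood `ref` applied to cord blood misattributes the signal, which is why a cord-blood-specific reference matters.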

  14. Robust Preconditioning Estimates for Convection-Dominated Elliptic Problems via a Streamline Poincaré--Friedrichs Inequality

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.; Kovács, B.

    2014-01-01

    Roč. 52, č. 6 (2014), s. 2957-2976 ISSN 0036-1429 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : streamline diffusion finite element method * solving convection-dominated elliptic problems * convergence is robust Subject RIV: BA - General Mathematics Impact factor: 1.788, year: 2014 http://epubs.siam.org/doi/abs/10.1137/130940268

  15. [Corrected count].

    Science.gov (United States)

    1991-11-27

    The data of the 1991 census indicated that the population count of Brazil fell short of a former estimate by 3 million people. The population reached 150 million people with an annual increase of 2%, while projections in the previous decade expected an increase of 2.48% to 153 million people. This reduction indicates more widespread use of family planning (FP) and control of fertility among families of lower social status as more information is being provided to them. However, the Ministry of Health ordered an investigation of foreign family planning organizations because it was suspected that women were forced to undergo tubal ligation during vaccination campaigns. A strange alliance of left wing politicians and the Roman Catholic Church alleges a conspiracy of international FP organizations receiving foreign funds. The FP strategies of Bemfam and Pro-Pater offer women who have little alternative the opportunity to undergo tubal ligation or to receive oral contraceptives to control fertility. The ongoing government program of distributing booklets on FP is feeble and is not backed up by an education campaign. Charges of foreign interference are leveled while the government hypocritically ignores the grave problem of 4 million abortions a year. The population is expected to continue to grow until the year 2040 and then to stabilize at a low growth rate of .4%. In 1980, the number of children per woman was 4.4 whereas the 1991 census figures indicate this has dropped to 3.5. The excess population is associated with poverty and a forsaken caste in the interior. The population actually has decreased in the interior and in cities with 15,000 people. The phenomenon of the drop of fertility associated with rural exodus is contrasted with cities and villages where the population is 20% less than expected.

  16. The Kruskal Count

    OpenAIRE

    Lagarias, Jeffrey C.; Rains, Eric; Vanderbei, Robert J.

    2001-01-01

    The Kruskal Count is a card trick invented by Martin J. Kruskal in which a magician "guesses" a card selected by a subject according to a certain counting procedure. With high probability the magician can correctly "guess" the card. The success of the trick is based on a mathematical principle related to coupling methods for Markov chains. This paper analyzes in detail two simplified variants of the trick and estimates the probability of success. The model predictions are compared with simula...
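
The counting procedure and its coalescence property are easy to simulate: start at any of the first ten cards, repeatedly step forward by the value of the current card, and check how often all starting points end on the same key card. A sketch, with court cards counted as 5 (one common convention; the paper analyzes several):

```python
import random

def key_card(deck, start):
    """Follow the Kruskal procedure from `start`: repeatedly jump ahead
    by the value of the current card; return the last reachable index."""
    i = start
    while i + deck[i] < len(deck):
        i += deck[i]
    return i

def coalescence_rate(n_trials=500, seed=1):
    """Fraction of shuffled decks in which every starting position
    among the first ten cards leads to the same final key card."""
    rng = random.Random(seed)
    values = list(range(1, 11)) + [5, 5, 5]  # ace..10 at face value; J, Q, K -> 5
    hits = 0
    for _ in range(n_trials):
        deck = values * 4
        rng.shuffle(deck)
        keys = {key_card(deck, s) for s in range(10)}
        hits += (len(keys) == 1)
    return hits / n_trials
```

Once two chains land on a common card they remain identical forever afterwards, which is the coupling argument behind the magician's high success probability.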

  17. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Mario Vento

    2010-01-01

    Full Text Available People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem. This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.
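
The feature-to-count mapping idea can be illustrated with a toy stand-in: the paper trains an ϵ-SVR on SURF-based features, but even a least-squares fit from the number of interest points to the people count shows the shape of the approach (all numbers invented):

```python
import numpy as np

# Toy training data: number of SURF-like interest points detected on
# moving regions vs. ground-truth people count (invented for illustration).
n_points = np.array([12.0, 25.0, 40.0, 55.0, 70.0])
n_people = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

A = np.column_stack([n_points, np.ones_like(n_points)])
coef, *_ = np.linalg.lstsq(A, n_people, rcond=None)

def estimate_count(points):
    """Predict a people count from an interest-point count."""
    return coef[0] * points + coef[1]
```

The real method additionally weights each feature's contribution by a perspective map and restricts features to moving regions, which is what lets it cope with occlusions.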

  18. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Conte Donatello

    2010-01-01

    Full Text Available People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem. This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.

  19. A Method for Counting Moving People in Video Surveillance Videos

    Science.gov (United States)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.

  20. Methodology in robust and nonparametric statistics

    CERN Document Server

    Jurecková, Jana; Picek, Jan

    2012-01-01

    Introduction and Synopsis: Introduction; Synopsis. Preliminaries: Introduction; Inference in Linear Models; Robustness Concepts; Robust and Minimax Estimation of Location; Clippings from Probability and Asymptotic Theory; Problems. Robust Estimation of Location and Regression: Introduction; M-Estimators; L-Estimators; R-Estimators; Minimum Distance and Pitman Estimators; Differentiable Statistical Functions; Problems. Asymptotic Representations for L-Estimators
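
Of the estimator families listed in the contents, the M-estimator of location is the easiest to sketch: replace the mean's quadratic loss with Huber's, solved here by iteratively reweighted averaging (the tuning constant 1.345 is the conventional choice giving roughly 95% efficiency at the normal model):

```python
import numpy as np

def huber_location(x, k=1.345, n_iter=30):
    """M-estimator of location with Huber's psi function, computed by
    iteratively reweighted averaging; scale is fixed at the normalized
    MAD of the data."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu)) + 1e-12
    for _ in range(n_iter):
        r = (x - mu) / s
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

Observations within k scale units of the current estimate get full weight; those beyond are down-weighted in proportion to their distance, bounding the influence of any single point.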

  1. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    Science.gov (United States)

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. For this purpose, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq, while exhibiting robustness against parameter uncertainties.

  2. A comparative study of natural 40K content estimated through whole body counting and dietary intake around Narora Atomic Power Station

    International Nuclear Information System (INIS)

    Gautam, Y.P.; Dube, B.; Hegde, A.G.

    2005-01-01

    773 radiation workers at NAPS, aged 20 to 59 years, were monitored using a Shadow Shield Whole Body Counting System with a NaI(Tl) crystal coupled to a NETS-3 1K Multi Channel Analyser (MCA) to determine the 40 K activity in the body and assess the internal dose due to naturally occurring 40 K. The data were segregated into vegetarian and non-vegetarian subjects for analysis. The average annual dose from 40 K for the subjects is evaluated as 156.4 ± 36.1 µSv. Natural 40 K content in 463 environmental samples collected from the Narora environs was estimated using a NaI(Tl) well-type detector coupled to a 1K NETS-3 Multichannel Analyser (MCA). The daily intake of natural 40 K was assessed from the average daily intake of dietary items and the associated 40 K activity; it works out to be 67.17 ± 16.28 Bq/d, and that obtained through analysis of complete meal samples was 73.22 ± 9.78 Bq/d. The average annual dose to a member of the public of this region due to natural 40 K through the ingestion route works out to be 152.12 ± 36.83 µSv/year. (author)
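
The ingestion-pathway dose scales linearly with intake, so a back-of-envelope check is possible using the ICRP adult ingestion dose coefficient for 40K (≈6.2×10⁻⁹ Sv/Bq, a standard tabulated value assumed here, not quoted in the record):

```python
DOSE_COEFF_K40_SV_PER_BQ = 6.2e-9   # ICRP adult ingestion coefficient (assumed)
daily_intake_bq = 73.22             # complete-meal intake from the study

annual_dose_sv = daily_intake_bq * 365 * DOSE_COEFF_K40_SV_PER_BQ
annual_dose_usv = annual_dose_sv * 1e6
```

The result lands in the 150-170 µSv/year range, on the microsievert scale typical of internal 40K exposure.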

  3. Estimating economic thresholds for site-specific weed control using manual weed counts and sensor technology: an example based on three winter wheat trials.

    Science.gov (United States)

    Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland

    2014-02-01

    Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effects of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras; yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha⁻¹ per plant m⁻² in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m⁻², respectively. Spring drought reduced the yield effects of weeds considerably in one trial, since water became yield limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha⁻¹. Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.
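
A yield-loss coefficient of this kind translates into an economic threshold by equating the value of the avoided loss with the cost of control. A sketch with an assumed grain price and herbicide cost (both hypothetical, chosen only to reproduce the order of magnitude of the reported thresholds):

```python
def economic_threshold(loss_kg_ha_per_plant_m2, grain_price_per_kg, control_cost_per_ha):
    """Weed density (plants/m^2) at which the value of the avoided
    yield loss equals the cost of treating one hectare."""
    return control_cost_per_ha / (loss_kg_ha_per_plant_m2 * grain_price_per_kg)

# E.g. with the G. aparine coefficient of 17.5 kg/ha per plant/m^2, an
# assumed wheat price of 0.18 EUR/kg and a 12.6 EUR/ha treatment cost:
threshold = economic_threshold(17.5, 0.18, 12.6)
```

Below the threshold density, spraying that patch costs more than the yield it saves, which is the decision rule applied per management zone in site-specific weed control.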

  4. Validation of a Robust Neural Real-Time Voltage Estimator for Active Distribution Grids on Field Data

    DEFF Research Database (Denmark)

    Pertl, Michael; Douglass, Philip James; Heussen, Kai

    2018-01-01

    The installation of measurements in distribution grids enables the development of data-driven methods for the power system. However, these methods have to be validated in order to understand their limitations and capabilities. This paper presents a systematic validation of a neural network approach for voltage estimation in active distribution grids by means of measured data from two feeders of a real low-voltage distribution grid. The approach enables real-time voltage estimation at locations in the distribution grid where otherwise only non-real-time measurements are available...

  5. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  6. Pan-Antarctic analysis aggregating spatial estimates of Adélie penguin abundance reveals robust dynamics despite stochastic noise.

    Science.gov (United States)

    Che-Castaldo, Christian; Jenouvrier, Stephanie; Youngflesh, Casey; Shoemaker, Kevin T; Humphries, Grant; McDowall, Philip; Landrum, Laura; Holland, Marika M; Li, Yun; Ji, Rubao; Lynch, Heather J

    2017-10-10

    Colonially-breeding seabirds have long served as indicator species for the health of the oceans on which they depend. Abundance and breeding data are repeatedly collected at fixed study sites in the hopes that changes in abundance and productivity may be useful for adaptive management of marine resources, but their suitability for this purpose is often unknown. To address this, we fit a Bayesian population dynamics model that includes process and observation error to all known Adélie penguin abundance data (1982-2015) in the Antarctic, covering >95% of their population globally. We find that process error exceeds observation error in this system, and that continent-wide "year effects" strongly influence population growth rates. Our findings have important implications for the use of Adélie penguins in Southern Ocean feedback management, and suggest that aggregating abundance across space provides the fastest reliable signal of true population change for species whose dynamics are driven by stochastic processes. Adélie penguins are a key Antarctic indicator species, but data patchiness has challenged efforts to link population dynamics to key drivers. Che-Castaldo et al. resolve this issue using a pan-Antarctic Bayesian model to infer missing data, and show that spatial aggregation leads to more robust inference regarding dynamics.

  7. Towards a Robust Solution of the Non-Linear Kinematics for the General Stewart Platform with Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Eusebio Eduardo Hernández Martinez

    2013-01-01

    Full Text Available In robotics, solving the direct kinematics problem (DKP for parallel robots is very often more difficult and time consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables should be computed, namely the pose of the mobile platform. Most of the time, the DKP requires solving a non-linear system of equations. In addition, given that the system could be non-convex, Newton or Quasi-Newton (Dogleg) based solvers get trapped in local minima. The capacity of such solvers to find an adequate solution strongly depends on the starting point. A well-known problem is the selection of such a starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and to generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method's performance. The method successfully avoids getting trapped in local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non-linear system of equations, and of course, to non-linear optimization problems.
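
A minimal estimation-of-distribution algorithm of the kind used for seeding such a solver: fit an independent Gaussian to the elite fraction of each generation and resample from it. Demonstrated here on a plain quadratic objective rather than the Stewart platform kinematics:

```python
import numpy as np

def gaussian_eda(f, dim, pop=200, elite=40, gens=60, seed=0):
    """Minimize f by repeatedly sampling a population from an
    independent-Gaussian model refit to the best `elite` samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    for _ in range(gens):
        X = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([f(x) for x in X])
        best = X[np.argsort(scores)[:elite]]
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-9
    return mu
```

The returned `mu` serves as the starting point handed to a Newton or Dogleg solver: the EDA places it in the basin of attraction of a valid pose, after which the local solver converges quickly.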

  8. Using Length of Stay to Control for Unobserved Heterogeneity When Estimating Treatment Effect on Hospital Costs with Observational Data: Issues of Reliability, Robustness, and Usefulness.

    Science.gov (United States)

    May, Peter; Garrido, Melissa M; Cassel, J Brian; Morrison, R Sean; Normand, Charles

    2016-10-01

    To evaluate the sensitivity of treatment effect estimates when length of stay (LOS) is used to control for unobserved heterogeneity when estimating treatment effect on cost of hospital admission with observational data. We used data from a prospective cohort study on the impact of palliative care consultation teams (PCCTs) on direct cost of hospital care. Adult patients with an advanced cancer diagnosis admitted to five large medical and cancer centers in the United States between 2007 and 2011 were eligible for this study. Costs were modeled using generalized linear models with a gamma distribution and a log link. We compared variability in estimates of PCCT impact on hospitalization costs when LOS was used as a covariate, as a sample parameter, and as an outcome denominator. We used propensity scores to account for patient characteristics associated with both PCCT use and total direct hospitalization costs. We analyzed data from hospital cost databases, medical records, and questionnaires. Our propensity score weighted sample included 969 patients who were discharged alive. In analyses of hospitalization costs, treatment effect estimates are highly sensitive to methods that control for LOS, complicating interpretation. Both the magnitude and significance of results varied widely with the method of controlling for LOS. When we incorporated intervention timing into our analyses, results were robust to LOS-controls. Treatment effect estimates using LOS-controls are not only suboptimal in terms of reliability (given concerns over endogeneity and bias) and usefulness (given the need to validate the cost-effectiveness of an intervention using overall resource use for a sample defined at baseline) but also in terms of robustness (results depend on the approach taken, and there is little evidence to guide this choice). To derive results that minimize endogeneity concerns and maximize external validity, investigators should match and analyze treatment and comparison arms

  9. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part I (Theoretical).

    Science.gov (United States)

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    An approach that is commonly used for calculating the retention time of a compound in GC starts from the thermodynamic properties ΔH, ΔS and ΔCp of the phase change (from mobile to stationary phase). Such properties can be estimated by using experimental retention time data, which results in a non-linear regression problem for non-isothermal temperature programs. As shown in this work, the surface of the objective function (approximation error criterion) over the thermodynamic parameters can be divided into three clearly defined regions, and only in one of them is it possible for the global optimum to be found. The main contribution of this study was the development of an algorithm that distinguishes the different regions of the error surface and its use in the robust initialization of the estimation of the parameters ΔH, ΔS and ΔCp. Copyright © 2015 Elsevier B.V. All rights reserved.
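For context, the standard thermodynamic relations behind this kind of regression (textbook retention thermodynamics, not equations quoted from the paper) link the retention factor k to the phase-change properties, with ΔCp giving the temperature dependence of ΔH and ΔS:

```latex
\Delta H(T) = \Delta H(T_0) + \Delta C_p\,(T - T_0),
\qquad
\Delta S(T) = \Delta S(T_0) + \Delta C_p \ln\!\frac{T}{T_0},
\\[4pt]
\ln k(T) = -\frac{\Delta H(T)}{R\,T} + \frac{\Delta S(T)}{R} - \ln\beta,
\qquad
\int_0^{t_R} \frac{dt}{t_M\!\big(T(t)\big)\,\big[1 + k\big(T(t)\big)\big]} = 1 ,
```

where β is the column phase ratio, t_M is the (temperature-dependent) hold-up time, and the integral condition determines the retention time t_R under a temperature program T(t). The non-linearity of this forward model in ΔH, ΔS and ΔCp is what makes the regression sensitive to its starting point.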

  10. An approximate multitrait model for genetic evaluation in dairy cattle with a robust estimation of genetic trends (Open Access publication)

    Directory of Open Access Journals (Sweden)

    Madsen Per

    2007-07-01

    Full Text Available Abstract In a stochastic simulation study of a dairy cattle population, three multitrait models for estimation of genetic parameters and prediction of breeding values were compared. The first model was an approximate multitrait model using a two-step procedure. The first step was a single-trait model for all traits. The solutions for fixed effects from these analyses were subtracted from the phenotypes. A multitrait model containing only an overall mean, an additive genetic term and a residual term was applied to these pre-adjusted data. The second model was similar to the first, but its multitrait model also contained a year effect. The third model was a full multitrait model. Genetic trends for total merit and for the individual traits in the breeding goal were compared for the three scenarios to rank the models. The full multitrait model gave the highest genetic response, but was not significantly better than the approximate multitrait model including a year effect. The inclusion of a year effect in the second step of the approximate multitrait model significantly improved the genetic trend for total merit. In this study, estimating genetic parameters for breeding value estimation using models corresponding to the ones used for prediction of breeding values increased the accuracy of the breeding values and thereby the genetic progress.

  11. Spatially robust estimates of biological nitrogen (N) fixation imply substantial human alteration of the tropical N cycle.

    Science.gov (United States)

    Sullivan, Benjamin W; Smith, W Kolby; Townsend, Alan R; Nasto, Megan K; Reed, Sasha C; Chazdon, Robin L; Cleveland, Cory C

    2014-06-03

    Biological nitrogen fixation (BNF) is the largest natural source of exogenous nitrogen (N) to unmanaged ecosystems and also the primary baseline against which anthropogenic changes to the N cycle are measured. Rates of BNF in tropical rainforest are thought to be among the highest on Earth, but they are notoriously difficult to quantify and are based on little empirical data. We adapted a sampling strategy from community ecology to generate spatial estimates of symbiotic and free-living BNF in secondary and primary forest sites that span a typical range of tropical forest legume abundance. Although total BNF was higher in secondary than primary forest, overall rates were roughly five times lower than previous estimates for the tropical forest biome. We found strong correlations between symbiotic BNF and legume abundance, but we also show that spatially free-living BNF often exceeds symbiotic inputs. Our results suggest that BNF in tropical forest has been overestimated, and our data are consistent with a recent top-down estimate of global BNF that implied but did not measure low tropical BNF rates. Finally, comparing tropical BNF within the historical area of tropical rainforest with current anthropogenic N inputs indicates that humans have already at least doubled reactive N inputs to the tropical forest biome, a far greater change than previously thought. Because N inputs are increasing faster in the tropics than anywhere on Earth, both the proportion and the effects of human N enrichment are likely to grow in the future.

  12. Spatially robust estimates of biological nitrogen (N) fixation imply substantial human alteration of the tropical N cycle

    Science.gov (United States)

    Sullivan, Benjamin W.; Smith, William K.; Townsend, Alan R.; Nasto, Megan K.; Reed, Sasha C.; Chazdon, Robin L.; Cleveland, Cory C.

    2014-01-01

    Biological nitrogen fixation (BNF) is the largest natural source of exogenous nitrogen (N) to unmanaged ecosystems and also the primary baseline against which anthropogenic changes to the N cycle are measured. Rates of BNF in tropical rainforest are thought to be among the highest on Earth, but they are notoriously difficult to quantify and are based on little empirical data. We adapted a sampling strategy from community ecology to generate spatial estimates of symbiotic and free-living BNF in secondary and primary forest sites that span a typical range of tropical forest legume abundance. Although total BNF was higher in secondary than primary forest, overall rates were roughly five times lower than previous estimates for the tropical forest biome. We found strong correlations between symbiotic BNF and legume abundance, but we also show that spatially free-living BNF often exceeds symbiotic inputs. Our results suggest that BNF in tropical forest has been overestimated, and our data are consistent with a recent top-down estimate of global BNF that implied but did not measure low tropical BNF rates. Finally, comparing tropical BNF within the historical area of tropical rainforest with current anthropogenic N inputs indicates that humans have already at least doubled reactive N inputs to the tropical forest biome, a far greater change than previously thought. Because N inputs are increasing faster in the tropics than anywhere on Earth, both the proportion and the effects of human N enrichment are likely to grow in the future.

  13. Estimation of Radiation Doses in the Marshall Islands Based on Whole Body Counting of Cesium-137 (137Cs) and Plutonium Urinalysis

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, J; Hickman, D; Kehl, S; Hamilton, T

    2007-06-11

    Under the auspices of the U.S. Department of Energy (USDOE), researchers from the Lawrence Livermore National Laboratory (LLNL) have recently implemented a series of initiatives to address long-term radiological surveillance needs at former nuclear test sites in the Republic of the Marshall Islands (RMI). The aim of this radiological surveillance monitoring program (RSMP) is to provide timely radiation protection for individuals in the Marshall Islands with respect to two of the most important internally deposited fallout radionuclides: cesium-137 ({sup 137}Cs) and the long-lived plutonium isotopes 239 and 240 ({sup 239+240}Pu) (Robison et al., 1997 and references therein). Therefore, whole-body counting for {sup 137}Cs and a sensitive bioassay for the presence of {sup 239+240}Pu excreted in urine were adopted as the two most applicable in vivo analytical methods to assess radiation doses for individuals in the RMI from internally deposited fallout radionuclides (see Hamilton et al., 2006a-c; Bell et al., 2002). Through 2005, the USDOE has established three permanent whole-body counting facilities in the Marshall Islands: the Enewetak Radiological Laboratory on Enewetak Atoll, the Utrok Whole-Body Counting Facility on Majuro Atoll, and the Rongelap Whole-Body Counting Facility on Rongelap Atoll. These whole-body counting facilities are operated and maintained by trained Marshallese technicians. Scientists from LLNL provide the technical support and training necessary for maintaining quality assurance for data acquisition and dose reporting. This technical basis document summarizes the methodologies used to calculate the annual total effective dose equivalent (TEDE; or dose for the calendar year of measurement) based on whole-body counting of internally deposited {sup 137}Cs and the measurement of {sup 239+240}Pu excreted in urine. Whole-body counting provides a direct measure of the total amount (or burden) of {sup 137}Cs present in the human body at the time of

  14. A robust observer based on H∞ filtering with parameter uncertainties combined with Neural Networks for estimation of vehicle roll angle

    Science.gov (United States)

    Boada, Beatriz L.; Boada, Maria Jesus L.; Vargas-Melendez, Leandro; Diaz, Vicente

    2018-01-01

    Nowadays, one of the main objectives in road transport is to decrease the number of accident victims. Rollover accidents caused nearly 33% of all deaths from passenger vehicle crashes. Roll Stability Control (RSC) systems prevent vehicles from untripped rollover accidents. The lateral load transfer is the main parameter which is taken into account in the RSC systems. This parameter is related to the roll angle, which can be directly measured from a dual-antenna GPS. Nevertheless, this is a costly technique. For this reason, roll angle has to be estimated. In this paper, a novel observer based on H∞ filtering in combination with a neural network (NN) for the vehicle roll angle estimation is proposed. The design of this observer is based on four main criteria: to use a simplified vehicle model, to use signals of sensors which are installed onboard in current vehicles, to consider the inaccuracy in the system model and to attenuate the effect of the external disturbances. Experimental results show the effectiveness of the proposed observer.

  15. Manifold-Based Visual Object Counting.

    Science.gov (United States)

    Wang, Yi; Zou, Yuexian; Wang, Wenwu

    2018-07-01

    Visual object counting (VOC) is an emerging area in computer vision which aims to estimate the number of objects of interest in a given image or video. Recently, object-density-based estimation methods have been shown to be promising for object counting as well as rough instance localization. However, the performance of these methods tends to degrade when dealing with new objects and scenes. To address this limitation, we propose a manifold-based method for visual object counting (M-VOC), based on the manifold assumption that similar image patches share similar object densities. First, the local geometry of a given image patch is represented linearly by its neighbors using a predefined patch training set, and the object density of this patch is reconstructed by preserving the local geometry using locally linear embedding. To improve the characterization of local geometry, additional constraints such as sparsity and non-negativity are also considered via regularization, nonlinear mapping, and the kernel trick. Compared with state-of-the-art VOC methods, our proposed M-VOC methods achieve competitive performance on seven benchmark datasets. Experiments verify that the proposed M-VOC methods have several favorable properties, such as robustness to variation in the size of the training dataset and image resolution, as often encountered in real-world VOC applications.
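The patch-level reconstruction step can be sketched as follows (a minimal illustration with made-up feature vectors, not the authors' full pipeline with its sparsity/non-negativity constraints and kernel variants): each query patch is written as an affine combination of its nearest training patches in feature space, and the same weights are transferred to those patches' object densities.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    # Affine reconstruction weights of x from its neighbors (locally linear embedding)
    Z = neighbors - x                      # (k, d): neighbors shifted to the query
    C = Z @ Z.T                            # local Gram matrix, (k, k)
    C += reg * np.trace(C) * np.eye(len(C)) / len(C)   # regularize for stability
    w = np.linalg.solve(C, np.ones(len(C)))
    return w / w.sum()                     # enforce the sum-to-one constraint

def estimate_density(x, train_feats, train_dens, k=5):
    # k nearest training patches in feature space
    idx = np.argsort(np.linalg.norm(train_feats - x, axis=1))[:k]
    w = lle_weights(x, train_feats[idx])
    # Transfer the reconstruction weights to the training densities
    return np.tensordot(w, train_dens[idx], axes=1)

# Made-up data: 50 training "patches" with 8-dim features and a scalar count each
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(50, 8))
w_true = rng.normal(size=8)
train_dens = train_feats @ w_true          # density is linear in the features here

query = train_feats[0]                     # a patch identical to a training patch
est = estimate_density(query, train_feats, train_dens)
```

Since the query coincides with a training patch, the learned weights concentrate on that patch and the estimated density essentially reproduces its known count; in the real method the densities are per-patch density maps rather than scalars.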

  16. Counting probe

    International Nuclear Information System (INIS)

    Matsumoto, Haruya; Kaya, Nobuyuki; Yuasa, Kazuhiro; Hayashi, Tomoaki

    1976-01-01

    An electron counting method has been devised and tested for the purpose of measuring electron temperature and density, the most fundamental quantities representing plasma conditions. Electron counting is a method to count the electrons in a plasma directly by equipping a probe with a secondary electron multiplier. It has three advantages: adjustable sensitivity, the high sensitivity of the secondary electron multiplier, and directionality. Sensitivity adjustment is performed by changing the size of the collecting hole (pin hole) on the incident front of the multiplier. The probe is usable as a direct-reading thermometer of electron temperature because it collects only a very small number of electrons, and thus does not disturb the surrounding plasma, and a narrow sweep width of the probe voltage suffices. It can therefore measure anisotropy more sensitively than a Langmuir probe, and it can be used for very low density plasma. Though many problems remain concerning anisotropy, computer simulations have been carried out. It is also planned to install a Helmholtz coil in the vacuum chamber to eliminate the effect of the Earth's magnetic field. In practical experiments, measurement with a Langmuir probe and an emission probe mounted on a movable structure, comparison with results obtained in a reversed magnetic field using the Helmholtz coil, and measurement of ion sound waves are scheduled. (Wakatsuki, Y.)

  17. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  18. A Data-driven Study of RR Lyrae Near-IR Light Curves: Principal Component Analysis, Robust Fits, and Metallicity Estimates

    Science.gov (United States)

    Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna

    2018-04-01

    RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light-curve shapes and metallicities, using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, and hence the average magnitudes, of RR Lyrae variables in the K_S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K_S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2–0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.
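The core idea of fitting a sparse light curve with a constant term (the average magnitude) plus a few PC basis vectors can be sketched on synthetic data (everything below, including the two-parameter shape family and the numbers, is invented for illustration; the real method works on phased K_S/J photometry):

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid = 100
phase = np.linspace(0.0, 1.0, n_grid, endpoint=False)

# Invented two-parameter family standing in for real light-curve shapes
def shape(a, b):
    return a * np.sin(2 * np.pi * phase) + b * np.cos(4 * np.pi * phase)

# Training set: many well-sampled light curves, each with its mean removed
train = np.array([shape(rng.uniform(0.2, 0.5), rng.uniform(0.0, 0.2))
                  for _ in range(80)])
train -= train.mean(axis=1, keepdims=True)

# Principal components of the light-curve shapes
_, _, Vt = np.linalg.svd(train, full_matrices=False)
pcs = Vt[:2]                      # first few PCs as basis vectors

def fit_sparse(obs_phase, obs_mag):
    # Design matrix: constant term (average magnitude) + PCs at the observed phases
    cols = [np.ones_like(obs_phase)]
    cols += [np.interp(obs_phase, phase, pc) for pc in pcs]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), obs_mag, rcond=None)
    return coef[0], coef[1:]      # (average magnitude, PC amplitudes)

# A "new" star: true average magnitude 14.3, observed at only 12 noisy epochs
true_curve = 14.3 + shape(0.35, 0.1)
obs_phase = rng.uniform(0.0, 1.0, 12)
obs_mag = np.interp(obs_phase, phase, true_curve) + rng.normal(0.0, 0.02, 12)
mean_mag, amps = fit_sparse(obs_phase, obs_mag)
```

Because the PC basis constrains the fit to physically plausible shapes, the average magnitude is recovered well even from sparse phase coverage; the fitted PC amplitudes are the quantities that (together with the period) would feed a metallicity relation.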

  19. Robust extrapolation scheme for fast estimation of 3D Ising field partition functions: application to within subject fMRI data

    Energy Technology Data Exchange (ETDEWEB)

    Risser, L.; Vincent, T.; Ciuciu, Ph. [NeuroSpin CEA, F-91191 Gif sur Yvette (France); Risser, L.; Vincent, T. [Laboratoire de Neuroimagerie Assistee par Ordinateur (LNAO) CEA - DSV/I2BM/NEUROSPIN (France); Risser, L. [Institut de mecanique des fluides de Toulouse (IMFT), CNRS: UMR5502 - Universite Paul Sabatier - Toulouse III - Institut National Polytechnique de Toulouse - INPT (France); Idier, J. [Institut de Recherche en Communications et en Cybernetique de Nantes (IRCCyN) CNRS - UMR6597 - Universite de Nantes - ecole Centrale de Nantes - Ecole des Mines de Nantes - Ecole Polytechnique de l' Universite de Nantes (France)

    2009-07-01

    In this paper, we present a first numerical scheme to estimate Partition Functions (PF) of 3D Ising fields. Our strategy is applied to the context of the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated regions and estimate region-dependent hemodynamic filters. For any region, a specific binary Markov random field may embody spatial correlation over the hidden states of the voxels by modeling whether they are activated or not. To make this spatial regularization fully adaptive, our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, the proposed extrapolation method allows us to approximate the PFs associated with the Ising fields defined over the remaining brain regions. In comparison with preexisting approaches, our method is robust to topological inhomogeneities in the definition of the reference regions. As a result, it strongly alleviates the computational burden and makes spatially adaptive regularization of whole brain fMRI datasets feasible. (authors)

  20. Counting Possibilia

    Directory of Open Access Journals (Sweden)

    Alfredo Tomasetta

    2010-06-01

    Full Text Available Timothy Williamson supports the thesis that every possible entity necessarily exists, and so he needs to explain how a possible son of Wittgenstein's, for example, exists in our world: he exists as a merely possible object (MPO), a pure locus of potential. Williamson presents a short argument for the existence of MPOs: how many knives can be made by fitting together two blades and two handles? Four: at most two are concrete objects, the others being merely possible knives and merely possible objects. This paper defends the idea that one can avoid reference and ontological commitment to MPOs. My proposal is that MPOs can be dispensed with by using the notion of rules of knife-making. I first present a solution according to which we count lists of instructions - selected by the rules - describing physical combinations between components. This account, however, has its own difficulties, and I eventually suggest that one can find a way out by admitting possible worlds, entities which are more commonly accepted - at least by philosophers - than MPOs. I maintain that, in answering Williamson's questions, we count classes of physically possible worlds in which the same instance of a general rule is applied.

  1. Robust estimates of environmental effects on population vital rates: an integrated capture–recapture model of seasonal brook trout growth, survival and movement in a stream network

    Science.gov (United States)

    Letcher, Benjamin H.; Schueller, Paul; Bassar, Ronald D.; Nislow, Keith H.; Coombs, Jason A.; Sakrejda, Krzysztof; Morrissey, Michael; Sigourney, Douglas B.; Whiteley, Andrew R.; O'Donnell, Matthew J.; Dubreuil, Todd L.

    2015-01-01

    Modelling the effects of environmental change on populations is a key challenge for ecologists, particularly as the pace of change increases. Currently, modelling efforts are limited by difficulties in establishing robust relationships between environmental drivers and population responses. We developed an integrated capture–recapture state-space model to estimate the effects of two key environmental drivers (stream flow and temperature) on demographic rates (body growth, movement and survival) using a long-term (11 years), high-resolution (individually tagged, sampled seasonally) data set of brook trout (Salvelinus fontinalis) from four sites in a stream network. Our integrated model provides an effective context within which to estimate environmental driver effects because it takes full advantage of data by estimating (latent) state values for missing observations, because it propagates uncertainty among model components and because it accounts for the major demographic rates and interactions that contribute to annual survival. We found that stream flow and temperature had strong effects on brook trout demography. Some effects, such as reduction in survival associated with low stream flow and high temperature during the summer season, were consistent across sites and age classes, suggesting that they may serve as robust indicators of vulnerability to environmental change. Other survival effects varied across ages, sites and seasons, indicating that flow and temperature may not be the primary drivers of survival in those cases. Flow and temperature also affected body growth rates; these responses were consistent across sites but differed dramatically between age classes and seasons. Finally, we found that tributary and mainstem sites responded differently to variation in flow and temperature. Annual survival (combination of survival and body growth across seasons) was insensitive to body growth and was most sensitive to flow (positive) and temperature (negative).

  2. Counting statistics in low level radioactivity measurements with fluctuating counting efficiency

    International Nuclear Information System (INIS)

    Pazdur, M.F.

    1976-01-01

    A divergence between the probability distribution of the number of nuclear disintegrations and the number of observed counts, caused by counting efficiency fluctuation, is discussed. The negative binomial distribution is proposed to describe the probability distribution of the number of counts, instead of the Poisson distribution, which is assumed to hold for the number of nuclear disintegrations only. From actual measurements the r.m.s. amplitude of the counting efficiency fluctuation is estimated. Some consequences of counting efficiency fluctuation are investigated and the corresponding formulae are derived: (1) for the detection limit as a function of the number of partial measurements and the relative amplitude of counting efficiency fluctuation, and (2) for the optimum allocation of the number of partial measurements between sample and background. (author)
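The effect described here is easy to reproduce numerically (a Monte Carlo sketch with hypothetical parameters, not the paper's derivation): when the counting efficiency fluctuates between partial measurements, the spread of the observed counts exceeds the Poisson expectation, approximately as COV² ≈ 1/(ε·μ) + (σ_ε/ε)².

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 50_000
mu = 10_000          # mean number of nuclear disintegrations per measurement
eff = 0.30           # mean counting efficiency
rel_fluct = 0.10     # r.m.s. relative amplitude of the efficiency fluctuation

# Disintegrations are Poisson; each is detected with the (fluctuating) efficiency
decays = rng.poisson(mu, trials)
eff_t = np.clip(rng.normal(eff, rel_fluct * eff, trials), 0.0, 1.0)
counts = rng.binomial(decays, eff_t)

cov_observed = counts.std() / counts.mean()
cov_poisson = 1.0 / np.sqrt(counts.mean())          # pure-Poisson expectation
cov_predicted = np.sqrt(1.0 / (eff * mu) + rel_fluct**2)
```

With these numbers the efficiency term dominates: the observed COV is several times the Poisson value, which is exactly the over-dispersion that motivates replacing the Poisson distribution with a negative binomial for the counts.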

  3. Robust Multivariable Estimation of the Relevant Information Coming from a Wheel Speed Sensor and an Accelerometer Embedded in a Car under Performance Tests

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2005-11-01

    Full Text Available In the present paper, in order to estimate the response of both a wheel speed sensor and an accelerometer placed in a car under performance tests, robust and optimal multivariable estimation techniques are used. In this case, the disturbances and noises corrupting the relevant information coming from the sensors' outputs are so dangerous that their negative influence on the electrical systems impoverishes the general performance of the car. In short, the solution to this problem is a safety-related problem that deserves our full attention. Therefore, in order to diminish the negative effects of the disturbances and noises on the car's electrical and electromechanical systems, an optimum observer is used. The experimental results show a satisfactory improvement in the signal-to-noise ratio of the relevant signals and demonstrate the importance of the fusion of several intelligent sensor design techniques when designing the intelligent sensors that today's cars need.

  4. Categorical counting.

    Science.gov (United States)

    Fetterman, J Gregor; Killeen, P Richard

    2010-09-01

    Pigeons pecked on three keys, responses to one of which could be reinforced after a few pecks, to a second key after a somewhat larger number of pecks, and to a third key after the maximum pecking requirement. The values of the pecking requirements and the proportion of trials ending with reinforcement were varied. Transits among the keys were an orderly function of peck number, and showed approximately proportional changes with changes in the pecking requirements, consistent with Weber's law. Standard deviations of the switch points between successive keys increased more slowly within a condition than across conditions. Changes in reinforcement probability produced changes in the location of the psychometric functions that were consistent with models of timing. Analyses of the number of pecks emitted and the duration of the pecking sequences demonstrated that peck number was the primary determinant of choice, but that passage of time also played some role. We capture the basic results with a standard model of counting, which we qualify to account for the secondary experiments. Copyright 2010 Elsevier B.V. All rights reserved.

  5. Radon counting statistics - a Monte Carlo investigation

    International Nuclear Information System (INIS)

    Scott, A.G.

    1996-01-01

    Radioactive decay is a Poisson process, and so the Coefficient of Variation (COV) of "n" counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide. At longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of 222Rn, plus the progeny 218Po and 214Po, and estimate the 222Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every 222Rn decay must be followed by two other alpha decays. The total number of decays is "3N", where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, a factor of √3 larger than the Poisson total-count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of true COV to Poisson estimate was carried out for a range of count periods from 1 min to 16 h and three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min.
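The long-count limit described above can be checked with a few lines of simulation (a sketch of the limiting case only; the paper's Monte Carlo additionally models the progeny half-lives at intermediate count times): each 222Rn decay is eventually followed by exactly two more alpha decays, so the total count is T = 3N with N Poisson-distributed, and the COV of T is 1/√N̄ rather than the naive 1/√(3N̄).

```python
import math
import random

rng = random.Random(7)

def poisson(lam):
    # Knuth's multiplication algorithm; adequate for moderate lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

lam = 100.0                                   # mean number of radon decays
totals = [3 * poisson(lam) for _ in range(20_000)]   # total alpha counts, 3N
mean = sum(totals) / len(totals)
var = sum((t - mean) ** 2 for t in totals) / (len(totals) - 1)

cov_true = math.sqrt(var) / mean              # ≈ 1/sqrt(lam)
cov_poisson = 1.0 / math.sqrt(mean)           # naive estimate from total counts
ratio = cov_true / cov_poisson                # ≈ sqrt(3)
```

The simulated ratio comes out near √3 ≈ 1.73, matching the factor derived in the abstract.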

  6. Aerial population estimates of wild horses (Equus caballus) in the adobe town and salt wells creek herd management areas using an integrated simultaneous double-count and sightability bias correction technique

    Science.gov (United States)

    Lubow, Bruce C.; Ransom, Jason I.

    2007-01-01

    An aerial survey technique combining simultaneous double-count and sightability bias correction methodologies was used to estimate the population of wild horses inhabiting Adobe Town and Salt Wells Creek Herd Management Areas, Wyoming. Based on 5 surveys over 4 years, we conclude that the technique produced estimates consistent with the known number of horses removed between surveys and an annual population growth rate of 16.2 percent per year. Therefore, evidence from this series of surveys supports the validity of this survey method. Our results also indicate that the ability of aerial observers to see horse groups is very strongly dependent on skill of the individual observer, size of the horse group, and vegetation cover. It is also more modestly dependent on the ruggedness of the terrain and the position of the sun relative to the observer. We further conclude that censuses, or uncorrected raw counts, are inadequate estimates of population size for this herd. Such uncorrected counts were all undercounts in our trials, and varied in magnitude from year to year and observer to observer. As of April 2007, we estimate that the population of the Adobe Town /Salt Wells Creek complex is 906 horses with a 95 percent confidence interval ranging from 857 to 981 horses.
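The core double-count logic can be illustrated with the Chapman bias-corrected Lincoln–Petersen estimator (the tallies below are made up, and the actual survey went further by modeling per-group sighting probability as a function of observer skill, group size, vegetation, terrain and sun position):

```python
import math

def chapman_estimate(n1, n2, m):
    """Bias-corrected two-observer abundance estimate.

    n1, n2: groups (or animals) seen by observer 1 and observer 2
    m:      number seen by both observers (matched sightings)
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    # Seber's variance approximation for the Chapman estimator
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    se = math.sqrt(var)
    return n_hat, (n_hat - 1.96 * se, n_hat + 1.96 * se)

# Hypothetical tallies from one flight with two independent observers
n_hat, ci = chapman_estimate(n1=60, n2=55, m=40)
```

Because both observers miss animals, the estimate exceeds either raw count; this is the sense in which uncorrected censuses are systematic undercounts.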

  7. Robust estimation of space influence model. Part 2. ; Synthesis of urban lattice data analysis for practical use. Kukan eikyo model no antei suiteiho. 2. ; Jikkenteki mesh data kaiseki system kochiku no tameno kukan sokan bunsekiho no taikeika

    Energy Technology Data Exchange (ETDEWEB)

    Aoki, Y.; Osaragi, T. (Tokyo Institute of Technology, Tokyo (Japan). Faculty of Engineering)

    1991-07-30

    In this study, a method for robust estimation of the parameters of the space influence function model, which could become unstable, was investigated by applying a principal component method. In order to estimate the parameters robustly without the effect of multicollinearity, the regression coefficients of principal components with small eigenvalues and small single correlations with the dependent variables were forced to zero in the principal-component estimation method. Through a case study using real urban lattice data, the conventional method was compared with the principal component method. As a result, the latter method achieved excellent stabilization of the spatial distribution patterns of the estimated parameters and a simple interpretation of the parameters. It also improved reliability, since the 95% confidence intervals of the estimated values became smaller. This method was found to be effective as a basic measure for achieving parameter stability. 10 refs., 7 figs.

  8. Replica exchange enveloping distribution sampling (RE-EDS): A robust method to estimate multiple free-energy differences from a single simulation.

    Science.gov (United States)

    Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina

    2016-10-21

    In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculate free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.

  9. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ... knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...

  10. Criterion-Validity of Commercially Available Physical Activity Tracker to Estimate Step Count, Covered Distance and Energy Expenditure during Sports Conditions

    Directory of Open Access Journals (Sweden)

    Yvonne Wahl

    2017-09-01

    Full Text Available Background: In recent years, there has been increasing development of physical activity trackers (Wearables). For recreational users, testing these devices under walking or light jogging conditions might be sufficient. For (elite) athletes, however, scientific trustworthiness must be established across a broad spectrum of velocities, including rapid changes in velocity, reflecting the demands of the sport. The aim was therefore to evaluate the validity of eleven Wearables for monitoring step count, covered distance, and energy expenditure (EE) under laboratory conditions at different constant and varying velocities. Methods: Twenty healthy sport students (10 men, 10 women) performed a running protocol consisting of four 5 min stages at different constant velocities (4.3, 7.2, 10.1, 13.0 km·h−1), a 5 min period of intermittent velocity, and a 2.4 km outdoor run (10.1 km·h−1) while wearing eleven different Wearables (Bodymedia Sensewear, Beurer AS 80, Polar Loop, Garmin Vivofit, Garmin Vivosmart, Garmin Vivoactive, Garmin Forerunner 920XT, Fitbit Charge, Fitbit Charge HR, Xiaomi MiBand, Withings Pulse Ox). Step count, covered distance, and EE were evaluated by comparing each Wearable with a criterion method (the Optogait system and manual counting for step count, the treadmill for covered distance, and indirect calorimetry for EE). Results: All Wearables, except the Bodymedia Sensewear, Polar Loop, and Beurer AS80, showed good validity (small MAPE, good ICC) for monitoring step count at all constant and varying velocities. For covered distance, all Wearables showed a very low ICC (<0.1) and high MAPE (up to 50%), i.e., no good validity. The measurement of EE was acceptable for the Garmin, Fitbit, and Withings Wearables (small to moderate MAPE), while the Bodymedia Sensewear, Polar Loop, and Beurer AS80 showed a high MAPE of up to 56% across all test conditions. Conclusion: In our study, most Wearables provide an acceptable level of validity for step counts at different
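    The MAPE criterion used above to judge validity is straightforward to compute; a minimal sketch with hypothetical step counts (the ICC would additionally be needed to assess agreement):

```python
def mape(criterion, device):
    """Mean absolute percentage error of device readings vs. a criterion method."""
    return 100.0 * sum(abs(d - c) / c for c, d in zip(criterion, device)) / len(criterion)

# Hypothetical step counts: criterion (manual count) vs. one wearable
criterion = [520, 610, 700, 815]
device    = [505, 598, 712, 830]
err = mape(criterion, device)      # roughly 2%, i.e. a "small MAPE"
```

    A small MAPE alone is not sufficient: a device that is consistently biased can still rank subjects poorly, which is why the study pairs MAPE with the ICC.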

  11. A robust standard deviation control chart

    NARCIS (Netherlands)

    Schoonhoven, M.; Does, R.J.M.M.

    2012-01-01

    This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse

  12. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

    Science.gov (United States)

    Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

    2018-05-01

    Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive an independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology and the uncertainties account for both layer counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.
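    Full Bayesian age modelling is far richer than this, but the basic gain from combining a layer count with an independent radiometric date can be sketched with inverse-variance weighting of two independent Gaussian age estimates; the ages and errors below are hypothetical:

```python
def combine(age1, sd1, age2, sd2):
    """Inverse-variance-weighted combination of two independent Gaussian
    age estimates for the same stratigraphic horizon."""
    w1, w2 = 1.0 / sd1 ** 2, 1.0 / sd2 ** 2
    age = (w1 * age1 + w2 * age2) / (w1 + w2)
    sd = (w1 + w2) ** -0.5          # always smaller than either input error
    return age, sd

# Hypothetical horizon: layer count 1510 +/- 90 varve years,
# independent radiometric date 1450 +/- 60 years
age, sd = combine(1510, 90, 1450, 60)
```

    The combined uncertainty is smaller than either input, which is the intuition behind integrating counting precision with independent chronology in a full Bayesian model.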

  13. Investigating Robustness of Item Response Theory Proficiency Estimators to Atypical Response Behaviors under Two-Stage Multistage Testing. ETS GRE® Board Research Report. ETS GRE®-16-03. ETS Research Report No. RR-16-22

    Science.gov (United States)

    Kim, Sooyeon; Moses, Tim

    2016-01-01

    The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the "GRE"® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as much as 10% of the test items…

  14. Estimation of DMFT, Salivary Streptococcus Mutans Count, Flow Rate, Ph, and Salivary Total Calcium Content in Pregnant and Non-Pregnant Women: A Prospective Study.

    Science.gov (United States)

    Kamate, Wasim Ismail; Vibhute, Nupura Aniket; Baad, Rajendra Krishna

    2017-04-01

    Pregnancy, a period from conception till birth, causes changes in the functioning of the human body as a whole and specifically in the oral cavity that may favour the emergence of dental caries. Many studies have shown pregnant women at increased risk for dental caries, however, specific salivary caries risk factors and the particular period of pregnancy at heightened risk for dental caries are yet to be explored and give a scope of further research in this area. The aim of the present study was to assess the severity of dental caries in pregnant women compared to non-pregnant women by evaluating parameters like Decayed, Missing, Filled Teeth (DMFT) index, salivary Streptococcus mutans count, flow rate, pH and total calcium content. A total of 50 first time pregnant women in the first trimester were followed during their second trimester, third trimester and postpartum period for the evaluation of DMFT by World Health Organization (WHO) scoring criteria, salivary flow rate by drooling method, salivary pH by pH meter, salivary total calcium content by bioassay test kit and salivary Streptococcus mutans count by semiautomatic counting of colonies grown on Mitis Salivarius (MS) agar supplemented by 0.2 U/ml of bacitracin and 10% sucrose. The observations of pregnant women were then compared with same parameters evaluated in the 50 non-pregnant women. Paired t-test and Wilcoxon sign rank test were performed to assess the association between the study parameters. Evaluation of different caries risk factors between pregnant and non-pregnant women clearly showed that pregnant women were at a higher risk for dental caries. Comparison of caries risk parameters during the three trimesters and postpartum period showed that the salivary Streptococcus mutans count had significantly increased in the second trimester, third trimester and postpartum period while the mean pH and mean salivary total calcium content decreased in the third trimester and postpartum period. These

  15. Estimation of DMFT, Salivary Streptococcus Mutans Count, Flow Rate, Ph, and Salivary Total Calcium Content in Pregnant and Non-Pregnant Women: A Prospective Study

    Science.gov (United States)

    Vibhute, Nupura Aniket; Baad, Rajendra Krishna

    2017-01-01

    Introduction Pregnancy, a period from conception till birth, causes changes in the functioning of the human body as a whole and specifically in the oral cavity that may favour the emergence of dental caries. Many studies have shown pregnant women at increased risk for dental caries, however, specific salivary caries risk factors and the particular period of pregnancy at heightened risk for dental caries are yet to be explored and give a scope of further research in this area. Aim The aim of the present study was to assess the severity of dental caries in pregnant women compared to non-pregnant women by evaluating parameters like Decayed, Missing, Filled Teeth (DMFT) index, salivary Streptococcus mutans count, flow rate, pH and total calcium content. Materials and Methods A total of 50 first time pregnant women in the first trimester were followed during their second trimester, third trimester and postpartum period for the evaluation of DMFT by World Health Organization (WHO) scoring criteria, salivary flow rate by drooling method, salivary pH by pH meter, salivary total calcium content by bioassay test kit and salivary Streptococcus mutans count by semiautomatic counting of colonies grown on Mitis Salivarius (MS) agar supplemented by 0.2U/ml of bacitracin and 10% sucrose. The observations of pregnant women were then compared with same parameters evaluated in the 50 non-pregnant women. Paired t-test and Wilcoxon sign rank test were performed to assess the association between the study parameters. Results Evaluation of different caries risk factors between pregnant and non-pregnant women clearly showed that pregnant women were at a higher risk for dental caries. 
Comparison of caries risk parameters during the three trimesters and postpartum period showed that the salivary Streptococcus mutans count had significantly increased in the second trimester, third trimester and postpartum period while the mean pH and mean salivary total calcium content decreased in the third

  16. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    Full Text Available The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick Case, a Swedish legal scandal where a patient at a mental institution was wrongfully convicted of eight murders.

  17. Robustifying Bayesian nonparametric mixtures for count data.

    Science.gov (United States)

    Canale, Antonio; Prünster, Igor

    2017-03-01

    Our motivating application stems from surveys of natural populations and is characterized by large spatial heterogeneity in the counts, which makes parametric approaches to modeling local animal abundance too restrictive. We adopt a Bayesian nonparametric approach based on mixture models and innovate with respect to the popular Dirichlet process mixture of Poisson kernels by increasing the model flexibility at the level of both the kernel and the nonparametric mixing measure. This allows us to derive accurate and robust estimates of the distribution of local animal abundance and of the corresponding clusters. The application and a simulation study for different scenarios also yield some general methodological implications. Adding flexibility solely at the level of the mixing measure does not improve inferences, since its impact is severely limited by the rigidity of the Poisson kernel, with considerable consequences in terms of bias. However, once a kernel more flexible than the Poisson is chosen, inferences can be robustified by choosing a prior more general than the Dirichlet process. Therefore, to improve the performance of Bayesian nonparametric mixtures for count data one has to enrich the model simultaneously at both levels, the kernel and the mixing measure. © 2016, The International Biometric Society.

  18. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of the mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature.
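    The paper's pseudodistance estimators are not reproduced here, but the "unbounded influence" that motivates them is easy to demonstrate: a single gross outlier shifts the sample mean arbitrarily, while a robust location estimate such as the median barely moves. A minimal sketch with simulated returns:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.001, 0.01, size=250)    # ~one year of daily returns
contaminated = returns.copy()
contaminated[0] = -0.80                        # a single gross outlier

# The classical mean shifts by roughly 0.8 / 250; the median barely moves
mean_shift = abs(contaminated.mean() - returns.mean())
median_shift = abs(np.median(contaminated) - np.median(returns))
```

    Feeding the contaminated mean and covariance into a mean-variance optimizer propagates this distortion into the portfolio weights, which is precisely what robust input estimators guard against.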

  19. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing Technique, Including Investigation of Robust Analysis Techniques and Estimates of Weather-Induced Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

    The DeltaQ test is a method of estimating the air leakage from forced air duct systems. Developed primarily for residential and small commercial applications, it uses the changes in blower door test results due to forced air system operation. Previous studies established the principles behind DeltaQ testing, but raised issues of precision of the test, particularly for leaky homes on windy days. Details of the measurement technique are available in an ASTM Standard (ASTM E1554-2007). In order to ease adoption of the test method, this study answers questions regarding the uncertainty due to changing weather during the test (particularly changes in wind speed) and the applicability to low leakage systems. The first question arises because the building envelope air flows and pressures used in the DeltaQ test are influenced by weather induced pressures. Variability in wind induced pressures rather than temperature difference induced pressures dominates this effect because the wind pressures change rapidly over the time period of a test. The second question needs to be answered so that DeltaQ testing can be used in programs requiring or giving credit for tight ducts (e.g., California's Building Energy Code (CEC 2005)). DeltaQ modeling biases have been previously investigated in laboratory studies where there were no weather induced changes in envelope flows and pressures. Laboratory work by Andrews (2002) and Walker et al. (2004) found biases of about 0.5% of forced air system blower flow and individual test uncertainty of about 2% of forced air system blower flow. The laboratory tests were repeated by Walker and Dickerhoff (2006 and 2008) using a new ramping technique that continuously varied envelope pressures and air flows rather than taking data at pre-selected pressure stations (as used in ASTM E1554-2003 and other previous studies). The biases and individual test uncertainties for ramping were found to be very close (less than 0.5% of air handler flow) to those

  20. Usefulness of rate of increase in SPECT counts in one-day method of N-isopropyl-4-iodoamphetamine [123I] SPECT studies at rest and after acetazolamide challenge using a method for estimating time-dependent distribution at rest

    International Nuclear Information System (INIS)

    Kawamura, Yoshifumi; Ashizaki, Michio; Saida, Shoko; Sugimoto, Hideharu

    2008-01-01

    When N-isopropyl-4-iodoamphetamine (123I-IMP) single-photon emission computed tomography (SPECT) studies at rest and after acetazolamide (ACZ) challenge are conducted in a day, the time-dependent change in IMP in the brain at rest should be estimated accurately. We devised such a method and investigated whether our one-day method for measuring the rate of increase in SPECT counts allowed a reduction in the acquisition time. Sequential 5-min SPECT scans were performed. We estimated the time-dependent change in the brain using the change in slopes of two linear equations derived from the first three SPECT counts. For the one-day method, ACZ was administered 15 min or 20 min after IMP administration. The second IMP was administered 10 min after ACZ administration. Time-dependent changes in the brain were classified into 13 patterns when estimation was started at 5 min after IMP administration and 6 patterns when estimation was started at 10 min, and fitting coefficients were determined. The correlation between actual measurements at 37.5 min and estimates was high, with a correlation coefficient of 0.99 or greater. Rates of increase obtained from 20-min data were highly correlated with those obtained from 15-min or 10-min data (r=0.97 or greater). In patients with unilateral cerebrovascular disease, the rate of increase on the unaffected side was 44.4±10.9% when ACZ was administered 15 min later and 48.0±16.0% when ACZ was administered 20 min later, and the rates of increase with different timings of administration were not significantly different. The examination time may be reduced from 50 min to 45 min or 40 min as needed. The rate of increase was not influenced by the time frame for determination or the timing of ACZ administration. These findings suggest that our estimation method is accurate and versatile. (author)

  1. Cosmic-muon intensity measurement and overburden estimation in a building at surface level and in an underground facility using two BC408 scintillation detectors coincidence counting system.

    Science.gov (United States)

    Zhang, Weihua; Ungar, Kurt; Liu, Chuanlei; Mailhot, Maverick

    2016-10-01

    A series of measurements have been recently conducted to determine the cosmic-muon intensities and attenuation factors at various indoor and underground locations for a gamma spectrometer. For this purpose, a digital coincidence spectrometer was developed by using two BC408 plastic scintillation detectors and an XIA LLC Digital Gamma Finder (DGF)/Pixie-4 software and card package. The results indicate that the overburden in the building at surface level absorbs a large part of cosmic ray protons while attenuating the cosmic-muon intensity by 20-50%. The underground facility has the largest overburden of 39 m water equivalent, where the cosmic-muon intensity is reduced by a factor of 6. The study provides a cosmic-muon intensity measurement and overburden assessment, which are important parameters for analysing the background of an HPGe counting system, or for comparing the background of similar systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Usefulness of estimation of blood procalcitonin concentration versus C-reactive protein concentration and white blood cell count for therapeutic monitoring of sepsis in neonates

    Directory of Open Access Journals (Sweden)

    Agnieszka Kordek

    2014-12-01

    Full Text Available Aim: This study was intended to assess the clinical usefulness of blood procalcitonin (PCT) concentrations for the diagnosis and therapeutic monitoring of nosocomial neonatal sepsis. Material/Methods: The enrolment criterion was sepsis manifesting clinically after three days of life. PCT concentrations were measured in venous blood from 52 infected and 88 uninfected neonates. The results were interpreted against C-reactive protein (CRP) concentrations and white blood cell counts (WBC). Results: Differences between the two groups in PCT and CRP concentrations were highly significant. No significant differences between the groups were noted for WBC. The threshold values on the receiver operating characteristic curve were 2.06 ng/mL for PCT (SE 75%; SP 80.68%; PPV 62.22%; NPV 88.75%; AUC 0.805), 5.0 mg/L for CRP (SE 67.44%; SP 73.68%; PPV 42.02%; NPV 88.89%; AUC 0.801), and 11.9 x 10^9/L for WBC (SE 51.16%; SP 50.68%; PPV 23.16%; NPV 78.13%; AUC 0.484). Procalcitonin concentrations decreased 24 hours after initiation of antibiotic therapy and reverted to the control level after 5-7 days. C-reactive protein concentrations began to decline after two days of antibiotic therapy but were still higher than in the control group after 5-7 days of treatment. No significant changes in WBC during treatment were observed. Conclusions: Procalcitonin concentrations in blood appear to be of use for the diagnosis and therapeutic monitoring of nosocomial infections in neonates, as this parameter demonstrates greater sensitivity and specificity than C-reactive protein. White blood cell counts appear to be of little diagnostic value in the early phase of infection or for therapeutic monitoring.
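    The reported SE/SP/PPV/NPV figures at a given cut-off come from the 2x2 confusion table. A minimal sketch using hypothetical PCT values (not the study's data) and the 2.06 ng/mL threshold:

```python
def diagnostic_metrics(values_infected, values_healthy, threshold):
    """Sensitivity, specificity, PPV and NPV of a 'positive if value >=
    threshold' rule, given marker values for infected and healthy groups."""
    tp = sum(v >= threshold for v in values_infected)
    fn = len(values_infected) - tp
    fp = sum(v >= threshold for v in values_healthy)
    tn = len(values_healthy) - fp
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    return se, sp, ppv, npv

# Hypothetical PCT values (ng/mL)
infected = [0.8, 2.5, 3.1, 4.0, 7.2, 1.9, 2.2, 5.5]
healthy  = [0.1, 0.3, 0.5, 2.4, 0.9, 1.2, 0.2, 0.6]
se, sp, ppv, npv = diagnostic_metrics(infected, healthy, 2.06)
```

    Scanning the threshold over all observed values and plotting SE against 1 − SP traces out the ROC curve whose area (AUC) the study reports.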

  3. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

    The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. Robust linear autoregressive (RAR) and robust neural network (RNN) models are estimated to study the predictabil...
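    S-estimation itself is not reproduced here; as a simpler stand-in that shares the key idea of bounding the influence of outlying residuals, robust regression can be sketched with Huber-weighted iteratively reweighted least squares:

```python
import numpy as np

def huber_irls(x, y, delta=1.345, n_iter=50):
    """Robust simple linear regression via iteratively reweighted least
    squares with Huber weights (a stand-in for S-estimation; both bound
    the influence of large residuals)."""
    Xd = np.column_stack([np.ones(len(x)), x])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]   # OLS start
    for _ in range(n_iter):
        r = y - Xd @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / (delta * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)       # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(Xd * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Simulated AR-style data: true intercept 1, slope 2, 5% gross outliers
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, size=100)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2, size=100)
y[:5] += 8.0                                       # contamination
beta = huber_irls(x, y)                            # close to (1, 2)
```

    OLS on the same contaminated data would be pulled noticeably toward the outliers; downweighting large residuals keeps the fit near the uncontaminated relationship.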

  4. It's what's inside that counts: egg contaminant concentrations are influenced by estimates of egg density, egg volume, and fresh egg mass.

    Science.gov (United States)

    Herzog, Mark P; Ackerman, Joshua T; Eagles-Smith, Collin A; Hartman, C Alex

    2016-05-01

    In egg contaminant studies, it is necessary to calculate egg contaminant concentrations on a fresh wet weight basis and this requires accurate estimates of egg density and egg volume. We show that the inclusion or exclusion of the eggshell can influence egg contaminant concentrations, and we provide estimates of egg density (both with and without the eggshell) and egg-shape coefficients (used to estimate egg volume from egg morphometrics) for American avocet (Recurvirostra americana), black-necked stilt (Himantopus mexicanus), and Forster's tern (Sterna forsteri). Egg densities (g/cm^3) estimated for whole eggs (1.056 ± 0.003) were higher than egg densities estimated for egg contents (1.024 ± 0.001), and were 1.059 ± 0.001 and 1.025 ± 0.001 for avocets, 1.056 ± 0.001 and 1.023 ± 0.001 for stilts, and 1.053 ± 0.002 and 1.025 ± 0.002 for terns. The egg-shape coefficients for egg volume (Kv) and egg mass (Kw) also differed depending on whether the eggshell was included (Kv = 0.491 ± 0.001; Kw = 0.518 ± 0.001) or excluded (Kv = 0.493 ± 0.001; Kw = 0.505 ± 0.001), and varied among species. Although egg contaminant concentrations are rarely meant to include the eggshell, we show that the typical inclusion of the eggshell in egg density and egg volume estimates results in egg contaminant concentrations being underestimated by 6-13%. Our results demonstrate that the inclusion of the eggshell significantly influences estimates of egg density, egg volume, and fresh egg mass, which leads to egg contaminant concentrations that are biased low. We suggest that egg contaminant concentrations be calculated on a fresh wet weight basis using only internal egg-content densities, volumes, and masses appropriate for the species. For the three waterbirds in our study, these corrected coefficients are 1.024 ± 0.001 for egg density, 0.493 ± 0.001 for Kv, and 0.505 ± 0.001 for Kw.
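    The egg-shape coefficients enter through Hoyt-type formulas, V = Kv·L·B² with fresh mass ≈ density·V. A sketch on a hypothetical egg, using the study's whole-egg vs. egg-content coefficients, shows the direction of the shell-inclusion bias (the study's 6-13% figure additionally reflects mass coefficients and measured morphometrics):

```python
def egg_volume(length_cm, breadth_cm, kv):
    """Hoyt-type volume estimate V = Kv * L * B^2, in cm^3."""
    return kv * length_cm * breadth_cm ** 2

def fww_concentration(burden_ug, length_cm, breadth_cm, kv, density):
    """Fresh-wet-weight contaminant concentration (ug/g):
    total burden divided by estimated fresh mass (density * volume)."""
    return burden_ug / (density * egg_volume(length_cm, breadth_cm, kv))

# Hypothetical egg (5.0 x 3.5 cm) with a 1.0 ug total contaminant burden
L, B = 5.0, 3.5
whole    = fww_concentration(1.0, L, B, kv=0.491, density=1.056)  # shell included
contents = fww_concentration(1.0, L, B, kv=0.493, density=1.024)  # shell excluded
bias_pct = 100 * (contents - whole) / contents   # whole-egg basis is biased low
```

    Because the whole-egg coefficients imply a larger fresh mass in the denominator, the whole-egg concentration is the smaller of the two, matching the direction of bias reported above.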

  5. Determining random counts in liquid scintillation counting

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1979-01-01

    During measurements involving coincidence counting techniques, errors can arise due to the detection of chance or random coincidences in the multiple detectors used. A method and the electronic circuits necessary are here described for eliminating this source of error in liquid scintillation detectors used in coincidence counting. (UK)

  6. Clean Hands Count

    Medline Plus


  7. Estimating 137Cs ingestion doses to Saamis in Kautokeino (Norway) using whole body counting vs. dietary survey results and food samples

    International Nuclear Information System (INIS)

    Skuterud, L.; Bergan, T.; Mehli, H.

    2002-01-01

    From 1965 to 1990 whole body measurements were carried out on an annual basis. Since then, 3-year cycles have been followed. In most years, the reindeer keepers have provided samples of reindeer meat for radiocaesium analysis. In 1989-1990 and 1999 dietary surveys were performed in conjunction with the whole-body monitoring. Earlier diet information is available from a separate study in 1963. Rough estimates of the radiocaesium intake by the studied population in Kautokeino have indicated that the dietary surveys have overestimated the radiocaesium intake. The aim of the present study was to evaluate the available information from Kautokeino, to derive some conclusions regarding reindeer meat consumption by today's reindeer keepers, and to determine what 137Cs ingestion doses they are exposed to. (LN)

  8. Temporal trends in sperm count

    DEFF Research Database (Denmark)

    Levine, Hagai; Jørgensen, Niels; Martino-Andrade, Anderson

    2017-01-01

    BACKGROUND: Reported declines in sperm counts remain controversial today and recent trends are unknown. A definitive meta-analysis is critical given the predictive value of sperm count for fertility, morbidity and mortality. OBJECTIVE AND RATIONALE: To provide a systematic review and meta-regression... a predefined protocol: 7518 abstracts were screened and 2510 full articles reporting primary data on SC were reviewed. A total of 244 estimates of SC and TSC from 185 studies of 42 935 men who provided semen samples in 1973-2011 were extracted for meta-regression analysis, as well as information on years... .006, respectively). WIDER IMPLICATIONS: This comprehensive meta-regression analysis reports a significant decline in sperm counts (as measured by SC and TSC) between 1973 and 2011, driven by a 50-60% decline among men unselected by fertility from North America, Europe, Australia and New Zealand. Because...

  9. Aspects of robust linear regression

    NARCIS (Netherlands)

    Davies, P.L.

    1993-01-01

    Section 1 of the paper contains a general discussion of robustness. In Section 2 the influence function of the Hampel-Rousseeuw least median of squares estimator is derived. Linearly invariant weak metrics are constructed in Section 3. It is shown in Section 4 that $S$-estimators satisfy an exact

  10. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application. The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameters, large sample properties, and goodness-of-fit tests. It...

  11. Identification and red blood cell automated counting from blood smear images using computer-aided system.

    Science.gov (United States)

    Acharya, Vasundhara; Kumar, Preetham

    2018-03-01

    Red blood cell count plays a vital role in identifying the overall health of the patient. Hospitals use a hemocytometer to count blood cells. The conventional method of placing the smear under a microscope and counting the cells manually leads to erroneous results, and medical laboratory technicians are put under stress. A computer-aided system helps to attain precise results in less time. This research work proposes an image-processing technique for counting the number of red blood cells. It aims to examine and process the blood smear image in order to support the counting of red blood cells and to identify the number of normal and abnormal cells in the image automatically. The K-medoids algorithm, which is robust to external noise, is used to extract the WBCs from the image. Granulometric analysis is used to separate the red blood cells from the white blood cells. The red blood cells obtained are counted using the labeling algorithm and the circular Hough transform. The radius range for the circle-drawing algorithm is estimated by computing the distance of the pixels from the boundary, which automates the entire algorithm. A comparison is made between the counts obtained using the labeling algorithm and the circular Hough transform. Results showed that the circular Hough transform was more accurate in counting red blood cells than the labeling algorithm, as it was successful in identifying even overlapping cells. The work also compares the results of cell counts obtained using the proposed methodology and the manual approach. It is designed to address the drawbacks of previous research and can be extended to extract various texture and shape features of the abnormal cells identified, so that diseases like anemia of inflammation and chronic disease can be detected at the earliest.
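    The labeling step of such a counting pipeline amounts to connected-component analysis of the binary cell mask. A minimal pure-Python sketch (the actual work additionally uses the circular Hough transform, precisely because touching cells merge into a single labeled component):

```python
from collections import deque

def count_blobs(grid):
    """Count 4-connected foreground components in a binary image via BFS,
    the core of labeling-based cell counting."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1                     # new component found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                       # flood-fill the component
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            q.append((ni, nj))
    return count

# A toy binary mask with three separate "cells"
img = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
]
n = count_blobs(img)   # -> 3
```

    Two overlapping cells form one connected component and are counted once by labeling, which is why the Hough transform, fitting one circle per cell, counted more accurately in the study.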

  12. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
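The efficiency loss from dichotomizing can be made concrete in the simplest intercept-only Poisson case: the indicator Y > 0 carries strictly less Fisher information about the mean than the full count. The function below is an illustration under that assumed model, not taken from the paper:

```python
import math

def rel_efficiency(mu):
    """Fisher information about a Poisson mean mu carried by the
    dichotomized indicator 1{Y > 0}, relative to the full count Y."""
    p = 1.0 - math.exp(-mu)                      # P(Y > 0)
    info_binary = math.exp(-2.0 * mu) / (p * (1.0 - p))
    info_count = 1.0 / mu                        # Poisson Fisher information
    return info_binary / info_count              # = mu * e^-mu / (1 - e^-mu)
```

The loss is mild for rare events (efficiency near 1 as mu tends to 0) but severe for common ones, which is the regime where a shared-parameter model can recover efficiency.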

  13. A measurement technique for counting processes

    International Nuclear Information System (INIS)

    Cantoni, V.; Pavia Univ.; De Lotto, I.; Valenziano, F.

    1980-01-01

    A technique for the estimation of first and second order properties of a stationary counting process is presented here which uses standard instruments for analysis of a continuous stationary random signal. (orig.)

  14. Robust canonical correlations: A comparative study

    OpenAIRE

    Branco, JA; Croux, Christophe; Filzmoser, P; Oliveira, MR

    2005-01-01

Several approaches for robust canonical correlation analysis will be presented and discussed. A first method is based on the definition of canonical correlation analysis as looking for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study ...

  15. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    Science.gov (United States)

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

For this work, an analysis of parameter estimation for the retention factor in the GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; the relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) during parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general class, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and for sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  17. An Adaptive Smoother for Counting Measurements

    International Nuclear Information System (INIS)

    Kondrasovs Vladimir; Coulon Romain; Normand Stephane

    2013-06-01

Counting measurements with nuclear instruments are tricky to carry out because of the stochastic nature of radioactivity. Counted events have to be processed and filtered in order to display a stable count rate value and to allow monitoring of variations in the measured activity. Smoothers (such as the moving average) are adjusted by a time constant defined as a compromise between stability and response time. A new approach has been developed that improves the response time while maintaining count rate stability. It combines a smoother with a detection filter. A memory of counting data is processed to calculate several count rate estimates using several integration times. These estimates are then sorted in the memory from short to long integration times. A measurement position, in terms of integration time, is then chosen from this memory after a detection test. An inhomogeneity in the Poisson counting process is detected by comparing the current position estimate with the other estimates in the memory, with respect to the associated statistical variance calculated under the homogeneity assumption. The measurement position (historical time), and the decision to forget obsolete data or to keep useful data in memory, are managed using the result of the detection test. The proposed smoother is thus an adaptive, learning algorithm that optimizes the response time while maintaining counting stability and converging efficiently to the best count rate estimate after an effective change in activity. The algorithm is also only weakly recursive and thus easily embedded into DSP electronics based on FPGAs or micro-controllers meeting 'real life' time requirements. (authors)
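A minimal sketch of the idea, assuming unit-time counting bins and a simple z-test between the shortest-window rate and progressively longer ones (the published algorithm's memory management and detection logic are more elaborate):

```python
import math

def adaptive_rate(counts, windows=(1, 2, 4, 8, 16, 32), z=3.0):
    """Return the count rate from the longest integration window that is
    statistically consistent (Poisson model) with the newest estimate."""
    w0 = windows[0]
    short = sum(counts[-w0:]) / w0
    best = short
    for w in windows[1:]:
        if w > len(counts):
            break
        est = sum(counts[-w:]) / w
        # std of the difference between the two rate estimates
        sigma = math.sqrt(max(est, 1e-9) * (1.0 / w0 + 1.0 / w))
        if abs(short - est) > z * sigma:
            break            # inhomogeneity detected: keep the short memory
        best = est           # longer memory accepted: better precision
    return best
```

On a steady source the full memory is used for maximum stability; after a step change in activity the consistency test fails at the window spanning the change, so the estimate converges quickly to the new rate.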

  18. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

Brouard, Olivier; Delannay, Fabrice; Ricordel, Vincent; Barba, Dominique

    2007-01-01

4 pages; International audience; Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est...

  19. Robust estimation of the expected survival probabilities from high-dimensional Cox models with biomarker-by-treatment interactions in randomized clinical trials

    Directory of Open Access Journals (Sweden)

    Nils Ternès

    2017-05-01

Full Text Available Abstract Background Thanks to advances in genomics and targeted treatments, more and more prediction models based on biomarkers are being developed to predict potential benefit from treatments in a randomized clinical trial. Although the methodological framework for the development and validation of prediction models in a high-dimensional setting is becoming increasingly established, no clear guidance yet exists on how to estimate expected survival probabilities in a penalized model with biomarker-by-treatment interactions. Methods Based on a parsimonious biomarker selection in a penalized high-dimensional Cox model (lasso or adaptive lasso), we propose a unified framework to: estimate internally the predictive accuracy metrics of the developed model (using double cross-validation); estimate the individual survival probabilities at a given timepoint; construct confidence intervals thereof (analytical or bootstrap); and visualize them graphically (pointwise or smoothed with splines). We compared these strategies through a simulation study covering scenarios with or without biomarker effects. We applied the strategies to a large randomized phase III clinical trial that evaluated the effect of adding trastuzumab to chemotherapy in 1574 early breast cancer patients, for whom the expression of 462 genes was measured. Results In our simulations, penalized regression models using the adaptive lasso estimated the survival probability of new patients with low bias and standard error; bootstrapped confidence intervals had empirical coverage probability close to the nominal level across very different scenarios. The double cross-validation performed on the training data set closely mimicked the predictive accuracy of the selected models in external validation data. We also propose a useful visual representation of the expected survival probabilities using splines. In the breast cancer trial, the adaptive lasso penalty selected a prediction model with 4

  20. Platelet Count and Plateletcrit

    African Journals Online (AJOL)

strated that neonates with late onset sepsis (bacteremia after 3 days of age) had a dramatic increase in MPV and PDW18. We hypothesize that as the MPV and PDW increase and the platelet count and PCT decrease in sick children, intuitively, the ratio of MPV to PCT, MPV to platelet count, PDW to PCT, PDW to platelet ...

  1. EcoCount

    Directory of Open Access Journals (Sweden)

    Phillip P. Allen

    2014-05-01

    Full Text Available Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet, identifying, counting, and recording biological evidence such as pollen grains remain a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice activated fully customizable digital count sheet that allows the investigator to continuously interact with a field of view during the data recording. Continuous viewing allows the palynologist the opportunity to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed onto different computer platforms, making the adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to be competent and functional in a very short time.

  2. Clean Hands Count

    Medline Plus


  3. Counting It Twice.

    Science.gov (United States)

    Schattschneider, Doris

    1991-01-01

    Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…

  4. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  5. A New Fuzzy Sliding Mode Controller with a Disturbance Estimator for Robust Vibration Control of a Semi-Active Vehicle Suspension System

    Directory of Open Access Journals (Sweden)

    Byung-Keun Song

    2017-10-01

Full Text Available This paper presents a new fuzzy sliding mode controller (FSMC) to improve control performance in the presence of uncertainties related to model errors and external disturbance (UAD). As a first step, an adaptive control law is designed using Lyapunov stability analysis. The control law can update the control parameters of the FSMC with a disturbance estimator (DE), in which the closed-loop stability and finite-time convergence of the tracking error are guaranteed. A method for estimating the quantity that compensates for the impact of UAD on the control system, together with a set of solutions, is then presented in order to avoid the singular cases of the fuzzy-based function approximation, increase convergence ability, and reduce the computational cost. Subsequently, the effectiveness of the proposed controller is verified by investigating the vibration control performance of a semi-active vehicle suspension system featuring a magnetorheological damper (MRD). It is shown that the proposed controller provides better vibration control with lower consumed power compared with two existing fuzzy sliding mode controllers.

  6. Robust Inference with Multi-way Clustering

    OpenAIRE

A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller

    2009-01-01

    In this paper we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our...
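For OLS the construction can be sketched directly: build the sandwich "meat" for each clustering dimension and combine them as A + B minus (A intersect B). This is a bare-bones illustration without the small-sample corrections used in practice; the function names are ours:

```python
import numpy as np

def cluster_meat(X, u, groups):
    """Sum over clusters of (X_g' u_g)(X_g' u_g)' -- the sandwich meat."""
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        s = X[groups == g].T @ u[groups == g]
        meat += np.outer(s, s)
    return meat

def twoway_cluster_vcov(X, y, ga, gb):
    """Two-way cluster-robust OLS variance in the Cameron-Gelbach-Miller
    style: meat(A) + meat(B) - meat(A intersect B)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    gab = np.array([f"{a}|{b}" for a, b in zip(ga, gb)])
    meat = (cluster_meat(X, u, ga) + cluster_meat(X, u, gb)
            - cluster_meat(X, u, gab))
    return beta, bread @ meat @ bread
```

A handy sanity check: when the second grouping puts every observation in its own cluster, the intersection term cancels the second term and the formula collapses to ordinary one-way clustering.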

  7. Point Count Length and Detection of Forest Neotropical Migrant Birds

    Science.gov (United States)

    Deanna K. Dawson; David R. Smith; Chandler S. Robbins

    1995-01-01

    Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences...

  8. Robustness of estimation of differential renal function in infants and children with unilateral prenatal diagnosis of a hydronephrotic kidney on dynamic renography: How real is the supranormal kidney?

    Energy Technology Data Exchange (ETDEWEB)

    Ozcan, Zehra [University Faculty of Medicine, Nuclear Medicine Department of Ege, Bornova, Izmir (Turkey); Anderson, Peter J.; Gordon, Isky [Great Ormond Street Hospital For Children, Department of Radiology, London (United Kingdom)

    2006-06-15

The two methods recommended for estimation of differential renal function (DRF) in the renography guidelines published by the European Association of Nuclear Medicine are the area under the background-subtracted time-activity curves (AUCs) (often called the integral method) and the regression slope of the background-subtracted Rutland/Patlak plot analysis. The current study investigated the agreement/disagreement of DRF estimations obtained using these two techniques. This report also focuses on the occurrence of supranormal function of the affected kidney (defined as DRF >55%) and reviews the related technical and physiological factors. A total of 394 renographic studies in 101 children with a prenatal diagnosis of unilateral renal pelvic dilatation confirmed on postnatal studies were retrieved from optical disc and reprocessed by one author. DRF was calculated using the Rutland/Patlak plot and the AUC over the time period 40-120 s following an injection of 99mTc-mercaptoacetyltriglycine. The difference in DRF between the methods (Rutland/Patlak minus AUC) and the 95% limits of agreement were calculated. The age distribution of the difference between the methods was also analysed. For all 394 measurements the mean difference was -0.8% (range -21.0% to 16.9%, SD 3.9%). The 95% limits of agreement were -7.0% to 8.6%. Analysis of the data revealed that a greater spread of DRF between the techniques was seen in studies performed at a younger age: a discrepancy of >5% DRF was significantly more common in those <1 year of age than in those >1 year old (25.3% vs 9.9%; chi-square, p<0.0005). Supranormal function was found less frequently using the Rutland/Patlak method than with the AUC method (8.4% vs 11.2%; chi-square, p<0.0005). The frequency of this diagnosis was reduced to 4.6% when both methods were required to be in agreement. (orig.)
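The "95% limits of agreement" quoted above are a standard Bland-Altman computation on the paired DRF estimates: mean difference plus or minus 1.96 SD. A generic sketch with invented numbers (not the study data):

```python
def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    sd = (sum((x - mean) ** 2 for x in d) / (n - 1)) ** 0.5  # sample SD
    return mean - 1.96 * sd, mean + 1.96 * sd

# hypothetical paired DRF (%) estimates: Rutland/Patlak vs. AUC method
patlak = [50.0, 52.0, 48.0, 51.0]
auc = [49.0, 53.0, 47.0, 50.0]
```

Differences falling outside these limits flag study pairs where the two processing techniques disagree materially.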

  9. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility to increase robustness by using animal breeding without loss of production. At the start of the project, a robust

  10. Algorithm for counting large directed loops

    Energy Technology Data Exchange (ETDEWEB)

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.
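For the small graphs where exhaustive counting is feasible, simple directed cycles can be enumerated by depth-first search rooted at each node, counting each cycle exactly once at its smallest-numbered vertex. A brute-force sketch (exponential in general, hence only useful as the comparison baseline, not a substitute for Belief Propagation):

```python
def count_directed_cycles(adj):
    """Count simple directed cycles by exhaustive DFS; each cycle is
    counted once, rooted at its smallest-numbered node.  adj[i] lists
    the successors of node i."""
    n = len(adj)
    count = 0

    def dfs(start, node, visited):
        nonlocal count
        for nxt in adj[node]:
            if nxt == start:
                count += 1              # closed a simple cycle
            elif nxt > start and nxt not in visited:
                dfs(start, nxt, visited | {nxt})

    for s in range(n):
        dfs(s, s, {s})
    return count
```

Restricting the search to nodes numbered above the root guarantees each cycle is attributed to exactly one root, so nothing is double-counted.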

  11. Clean Hands Count

    Medline Plus


  12. Clean Hands Count

    Medline Plus


  13. Housing Inventory Count

    Data.gov (United States)

    Department of Housing and Urban Development — This report displays the data communities reported to HUD about the nature of their dedicated homeless inventory, referred to as their Housing Inventory Count (HIC)....

  14. Scintillation counting apparatus

    International Nuclear Information System (INIS)

    Noakes, J.E.

    1978-01-01

    Apparatus is described for the accurate measurement of radiation by means of scintillation counters and in particular for the liquid scintillation counting of both soft beta radiation and gamma radiation. Full constructional and operating details are given. (UK)

  15. Allegheny County Traffic Counts

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Traffic sensors at over 1,200 locations in Allegheny County collect vehicle counts for the Pennsylvania Department of Transportation. Data included in the Health...

  16. Counting Knights and Knaves

    Science.gov (United States)

Levin, Oscar; Roberts, Gerri M.

    2013-01-01

    To understand better some of the classic knights and knaves puzzles, we count them. Doing so reveals a surprising connection between puzzles and solutions, and highlights some beautiful combinatorial identities.

  17. Clean Hands Count

    Medline Plus


  18. Clean Hands Count

    Medline Plus


  19. Deep 3 GHz number counts from a P(D) fluctuation analysis

    Science.gov (United States)

    Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.

    2014-05-01

    Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ˜ 1.2 μJy beam-1, and a radio background temperature ˜14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
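The core of a P(D) analysis can be sketched with a toy Monte Carlo: draw a Poisson number of sources per beam from an assumed power-law differential count, sum their fluxes, and examine the distribution of the resulting deflections. Every number below is invented for illustration; the paper's forward model (beam shape, wide-band effects) and MCMC fitting are far more involved:

```python
import numpy as np

rng = np.random.default_rng(42)

def pd_deflections(n_pix, mean_sources, gamma, s_min, s_max):
    """Per-pixel deflection D: sum of fluxes of a Poisson number of
    sources drawn from dN/dS ~ S^-gamma on [s_min, s_max] (gamma != 1)."""
    a = 1.0 - gamma
    counts = rng.poisson(mean_sources, n_pix)
    d = np.zeros(n_pix)
    for i, c in enumerate(counts):
        u = rng.random(c)
        # inverse-CDF sampling of the truncated power law
        s = (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)
        d[i] = s.sum()
    return d
```

Fitting then proceeds by comparing the observed histogram of map pixel values with the model P(D) for candidate count parameters, which is how counts well below the nominal confusion limit become constrainable.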

  20. Robust inference in the negative binomial regression model with an application to falls data.

    Science.gov (United States)

    Aeberhard, William H; Cantoni, Eva; Heritier, Stephane

    2014-12-01

A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimation methods are well known to be sensitive to model misspecifications, such as patients falling much more often than expected in intervention studies where the NB regression model is used. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function to the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.
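The first approach, bounding the Pearson residuals, can be sketched in the simplest intercept-only Poisson setting. The paper's estimators handle full NB regression and include a correction term for Fisher consistency; both are omitted in this toy version, so it carries a small bias by construction:

```python
import math

def huber_poisson_mean(y, c=1.345, tol=1e-8, max_iter=200):
    """M-estimate of a Poisson mean: solve sum psi_c((y - mu)/sqrt(mu)) = 0
    by bisection, where psi_c is the Huber (bounded) function.  Omits the
    Fisher-consistency correction used in the full robust GLM estimators."""
    def psi(r):
        return max(-c, min(c, r))           # bounded influence of residuals

    def score(mu):
        s = math.sqrt(mu)
        return sum(psi((yi - mu) / s) for yi in y)

    lo, hi = 1e-6, max(y) + 1.0             # score is decreasing in mu
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Because the outlying count's residual is clipped at c, a single patient with an extreme fall count moves the estimate only slightly, whereas it drags the raw mean far upward.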

  1. Low White Blood Cell Count

    Science.gov (United States)

A low white blood cell count (leukopenia) is a decrease ... of white blood cell (neutrophil). The definition of low white blood cell count varies from one medical ...

  2. Inverse probability weighting and doubly robust methods in correcting the effects of non-response in the reimbursed medication and self-reported turnout estimates in the ATH survey.

    Science.gov (United States)

    Härkänen, Tommi; Kaikkonen, Risto; Virtala, Esa; Koskinen, Seppo

    2014-11-06

    To assess the nonresponse rates in a questionnaire survey with respect to administrative register data, and to correct the bias statistically. The Finnish Regional Health and Well-being Study (ATH) in 2010 was based on a national sample and several regional samples. Missing data analysis was based on socio-demographic register data covering the whole sample. Inverse probability weighting (IPW) and doubly robust (DR) methods were estimated using the logistic regression model, which was selected using the Bayesian information criteria. The crude, weighted and true self-reported turnout in the 2008 municipal election and prevalences of entitlements to specially reimbursed medication, and the crude and weighted body mass index (BMI) means were compared. The IPW method appeared to remove a relatively large proportion of the bias compared to the crude prevalence estimates of the turnout and the entitlements to specially reimbursed medication. Several demographic factors were shown to be associated with missing data, but few interactions were found. Our results suggest that the IPW method can improve the accuracy of results of a population survey, and the model selection provides insight into the structure of missing data. However, health-related missing data mechanisms are beyond the scope of statistical methods, which mainly rely on socio-demographic information to correct the results.
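The IPW correction itself is short: weight each respondent by the inverse of their response probability so the respondent pool again resembles the full sample. In the sketch below the response model is known by construction, whereas the study estimates it by logistic regression on register covariates; all variable names and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
age = rng.uniform(20, 80, n)                     # register covariate
turnout = rng.random(n) < (0.3 + 0.006 * age)    # outcome rises with age
p_resp = 0.2 + 0.008 * age                       # older people respond more
resp = rng.random(n) < p_resp                    # who answered the survey

true_mean = turnout.mean()
crude = turnout[resp].mean()                     # biased: over-represents old
w = 1.0 / p_resp[resp]                           # inverse probability weights
ipw = np.sum(w * turnout[resp]) / np.sum(w)      # bias-corrected estimate
```

Because response and turnout both rise with age, the crude respondent mean overstates turnout; re-weighting by 1/p removes most of that bias, mirroring the paper's comparison of crude, weighted, and register-based true values.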

  3. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis have offered a new perspective on robust design by merging robust design...

  4. Robust Control Charts for Time Series Data

    NARCIS (Netherlands)

    Croux, C.; Gelper, S.; Mahieu, K.

    2010-01-01

    This article presents a control chart for time series data, based on the one-step- ahead forecast errors of the Holt-Winters forecasting method. We use robust techniques to prevent that outliers affect the estimation of the control limits of the chart. Moreover, robustness is important to maintain

  5. Alpha scintillation radon counting

    International Nuclear Information System (INIS)

    Lucas, H.F. Jr.

    1977-01-01

Radon counting chambers which utilize the alpha-scintillation properties of silver-activated zinc sulfide are simple to construct, have a high efficiency, and, with proper design, may be relatively insensitive to variations in the pressure or purity of the counter filling. Chambers which were constructed from glass, metal, or plastic in a wide variety of shapes and sizes were evaluated for the accuracy and the precision of the radon counting. The principles affecting the alpha-scintillation radon counting chamber design and an analytic system suitable for a large scale study of the 222Rn and 226Ra content of either air or other environmental samples are described. Particular note is taken of those factors which affect the accuracy and the precision of the method for monitoring radioactivity around uranium mines.

  6. Principles of correlation counting

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1975-01-01

A review is given of the various applications which have been made of correlation techniques in the field of nuclear physics, in particular for absolute counting. Whereas in most cases the usual coincidence method will be preferable for its simplicity, correlation counting may be the only possible approach in cases where the two radiations of the cascade cannot be well separated or where there is a long-lived intermediate state. The measurement of half-lives and of count rates of spurious pulses is also briefly discussed. The various experimental situations lead to different ways in which the correlation method is best applied (covariance technique with one or with two detectors, application of correlation functions, etc.). Formulae are given for some simple model cases, neglecting dead-time corrections.

  7. Interpretation of galaxy counts

    International Nuclear Information System (INIS)

    Tinsely, B.M.

    1980-01-01

    New models are presented for the interpretation of recent counts of galaxies to 24th magnitude, and predictions are shown to 28th magnitude for future comparison with data from the Space Telescope. The results supersede earlier, more schematic models by the author. Tyson and Jarvis found in their counts a ''local'' density enhancement at 17th magnitude, on comparison with the earlier models; the excess is no longer significant when a more realistic mixture of galaxy colors is used. Bruzual and Kron's conclusion that Kron's counts show evidence for evolution at faint magnitudes is confirmed, and it is predicted that some 23d magnitude galaxies have redshifts greater than unity. These may include spheroidal systems, elliptical galaxies, and the bulges of early-type spirals and S0's, seen during their primeval rapid star formation

  8. Robust Models for Operator Workload Estimation

    Science.gov (United States)

    2015-03-01

    remotely piloted aircraft (RPA) simultaneously, a vast improvement in resource utilization compared to existing operations that require several operators per ... into distinct cognitive channels (visual, auditory, spatial, etc.) based on our ability to multitask effectively as long as no one channel is ...

  9. Computerized radioautographic grain counting

    International Nuclear Information System (INIS)

    McKanna, J.A.; Casagrande, V.A.

    1985-01-01

    In recent years, radiolabeling techniques have become fundamental assays in physiology and biochemistry experiments. They have also assumed increasingly important roles in morphologic studies. Characteristically, radioautographic analysis of structure has been qualitative rather than quantitative; however, microcomputers have opened the door to several methods for quantifying grain counts and density. The overall goal of this chapter is to describe grain counting using the Bioquant, an image analysis package based originally on the Apple II+ and now available for several popular microcomputers. The authors discuss their image analysis procedures by applying them to a study of development in the central nervous system.

  10. Rainflow counting revisited

    Energy Technology Data Exchange (ETDEWEB)

    Soeker, H. [Deutsches Windenergie-Institut (Germany)]

    1996-09-01

    As a state-of-the-art method, the rainflow counting technique is presently applied everywhere in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used most of the time as a mere data reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)

  11. TasselNet: counting maize tassels in the wild via local counts regression network.

    Science.gov (United States)

    Lu, Hao; Cao, Zhiguo; Xiao, Yang; Zhuang, Bohan; Shen, Chunhua

    2017-01-01

    Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the need for large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for more robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under an unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations by modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting ...
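
The aggregation step behind a local-counts approach like TasselNet, turning overlapping per-window counts back into one global count, can be illustrated without the CNN. Below is a minimal numpy sketch, an editorial illustration only: a synthetic dot-annotation map stands in for the learned regressor, and wrap-around windows are assumed so that every pixel is covered by the same number of windows.

```python
import numpy as np

rng = np.random.default_rng(0)
H, r, s = 16, 8, 4        # image size, window size, stride (s divides r and H)

# synthetic dot-annotation map: 10 "tassels" (stand-in for a real image)
dots = np.zeros((H, H))
ys, xs = rng.integers(0, H, 10), rng.integers(0, H, 10)
np.add.at(dots, (ys, xs), 1.0)
true_count = dots.sum()

# exact local counts over wrap-around windows; wrap-around keeps the coverage
# uniform: every pixel falls inside exactly (r // s) ** 2 windows
coverage = (r // s) ** 2
local_counts = [
    dots[np.arange(y, y + r) % H][:, np.arange(x, x + r) % H].sum()
    for y in range(0, H, s)
    for x in range(0, H, s)
]

# normalizing the summed local counts by the coverage recovers the global count
estimate = sum(local_counts) / coverage
assert abs(estimate - true_count) < 1e-9
```

With a trained regressor, the exact window sums above would be replaced by predicted local counts, and the same coverage normalization would merge them into a global estimate.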

  12. TasselNet: counting maize tassels in the wild via local counts regression network

    Directory of Open Access Journals (Sweden)

    Hao Lu

    2017-11-01

    Full Text Available Abstract Background Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the need for large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for more robust computer vision approaches to address in-field variations. Results This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under an unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations by modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins ...

  13. Regression Models For Multivariate Count Data.

    Science.gov (United States)

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.
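
The restrictive mean-variance structure of the multinomial-logit model can be seen in a short simulation. In the sketch below (an illustration, not the authors' models), counts are drawn from a Dirichlet-multinomial: the between-sample variation in category probabilities inflates the variance far beyond what a fixed-probability multinomial allows.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, sims = 60, np.array([2.0, 2.0, 2.0]), 5000

# Dirichlet-multinomial draws: category probabilities vary between samples
p = rng.dirichlet(alpha, size=sims)
counts = np.array([rng.multinomial(n, pi) for pi in p])

p0 = alpha[0] / alpha.sum()
var_multinomial = n * p0 * (1 - p0)   # variance if p were fixed (multinomial view)
var_empirical = counts[:, 0].var()    # actual variance of the first category

assert var_empirical > 3 * var_multinomial   # strong over-dispersion
```

A model with the multinomial's fixed mean-variance link would sharply understate the uncertainty in such data, which is the failure mode the abstract describes for RNA-seq counts.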

  5. Detection and counting systems

    International Nuclear Information System (INIS)

    Abreu, M.A.N. de

    1976-01-01

    Detection devices based on gaseous ionization are analysed, such as electroscopes, ionization chambers, proportional counters and Geiger-Mueller counters. Scintillation methods are also discussed. The basic concepts of the associated electronics are reviewed and the main counting equipment is described. For gamma spectrometry, scintillation and semiconductor detectors are analysed. [pt]

  9. Reticulocyte Count Test

    Science.gov (United States)

    ... htm. (2004 Summer). Immature Reticulocyte Fraction (IRF). The Pathology Center Newsletter v9(1). [On-line information]. Available ... Company, Philadelphia, PA [18th Edition]. Levin, M. (2007 March 8, Updated). Reticulocyte Count. MedlinePlus Medical Encyclopedia [On- ...

  10. Clean Hands Count

    Medline Plus

    Full Text Available ... is starting stop Loading... Watch Queue Queue __count__/__total__ It’s YouTube. Uninterrupted. Loading... Want music and videos ... empower patients to play a role in their care by asking or reminding healthcare providers to clean ...

  11. Radiation intensity counting system

    International Nuclear Information System (INIS)

    Peterson, R.J.

    1982-01-01

    A method is described for excluding the natural dead time of the radiation detector (e.g., a Geiger-Mueller counter) in a ratemeter counting circuit, thus eliminating the need for dead-time corrections. Using a pulse generator, an artificial dead time is introduced which is longer than the natural dead time of the detector. (U.K.)
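
With a known, imposed dead time, the standard non-paralyzable correction recovers the true rate from the measured one. A small sketch with hypothetical numbers; the formula is the textbook non-paralyzable model, not taken from this patent.

```python
def correct_rate(m, tau):
    """Non-paralyzable dead-time correction: true rate n = m / (1 - m * tau).

    Valid while m * tau < 1; tau is the (known) imposed dead time in seconds.
    """
    return m / (1.0 - m * tau)

tau = 200e-6      # imposed artificial dead time of 200 microseconds (hypothetical)
n_true = 1000.0   # true event rate, counts per second
m = n_true / (1.0 + n_true * tau)   # rate a non-paralyzable system registers

assert abs(correct_rate(m, tau) - n_true) < 1e-9
```

Imposing an artificial dead time longer than the detector's own, as the record describes, makes tau exactly known, which is what makes this correction (or its elimination by design) reliable.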

  13. Calorie count - fast food

    Science.gov (United States)

  14. Fast counting electronics for neutron coincidence counting

    International Nuclear Information System (INIS)

    Swansen, J.E.

    1987-01-01

    This patent describes a high-speed circuit for accurate neutron coincidence counting comprising: neutron detecting means for providing an above-threshold signal upon neutron detection; amplifying means inputted by the neutron detecting means for providing a pulse output having a pulse width of about 0.5 microseconds upon the input of each above-threshold signal; digital processing means inputted by the pulse output of the amplifying means for generating a pulse responsive to each input pulse from the amplifying means and having a pulse width of about 50 nanoseconds, effective for processing an expected neutron event rate of about 1 Mpps; pulse stretching means inputted by the digital processing means for producing a pulse having a pulse width of several milliseconds for each pulse received from the digital processing means; visual indicating means inputted by the pulse stretching means for producing a visual output for each pulse received from the digital processing means; and derandomizing means effective to receive the 50 ns neutron event pulses from the digital processing means for storage at a rate up to the neutron event rate of 1 Mpps and having first counter means for storing the input neutron event pulses.

  15. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design ...

  16. An Overview of the Adaptive Robust DFT

    Directory of Open Access Journals (Sweden)

    Djurović Igor

    2010-01-01

    Full Text Available Abstract This paper overviews basic principles and applications of the robust DFT (RDFT) approach, which is used for robust processing of frequency-modulated (FM) signals embedded in non-Gaussian heavy-tailed noise. In particular, we concentrate on the spectral analysis and filtering of signals corrupted by impulsive distortions using adaptive and nonadaptive robust estimators. Several adaptive estimators of the location parameter are considered, and it is shown that their application is preferable with respect to non-adaptive counterparts. This fact is demonstrated by an efficiency comparison of adaptive and nonadaptive RDFT methods for different noise environments.
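
The core idea, replacing the sample mean inside the DFT sum with a robust location estimate such as the marginal median, can be sketched in a few lines. This is an editorial illustration of the principle, not the paper's adaptive estimators.

```python
import numpy as np

N, f0 = 64, 4
n = np.arange(N)
x = np.exp(2j * np.pi * f0 * n / N)   # unit-amplitude complex tone
x[[5, 17, 40]] += 1000.0              # impulsive, heavy-tailed disturbance

# demodulate at the tone frequency; clean samples all equal 1
terms = x * np.exp(-2j * np.pi * f0 * n / N)

dft = terms.mean()                    # classical DFT coefficient: a sample mean
rdft = np.median(terms.real) + 1j * np.median(terms.imag)  # marginal-median RDFT

assert abs(dft - 1) > 5       # three impulses badly corrupt the mean
assert abs(rdft - 1) < 1e-6   # the median-based estimate is unaffected
```

The median discards the three outlying demodulated samples, which is exactly the behaviour that makes RDFT-type estimators attractive for FM signals in impulsive noise.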

  17. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled 'COST TU0601: Robustness of Structures' was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures ...

  18. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  19. Robust Programming by Example

    OpenAIRE

    Bishop, Matt; Elliott, Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  20. Using observation-level random effects to model overdispersion in count data in ecology and evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2014-10-01

    Full Text Available Overdispersion is common in models of count data in ecology and evolutionary biology, and can occur due to missing covariates, non-independent (aggregated) data, or an excess frequency of zeroes (zero-inflation). Accounting for overdispersion in such models is vital, as failing to do so can lead to biased parameter estimates and false conclusions regarding hypotheses of interest. Observation-level random effects (OLRE), where each data point receives a unique level of a random effect that models the extra-Poisson variation present in the data, are commonly employed to cope with overdispersion in count data. However, studies investigating the efficacy of observation-level random effects as a means to deal with overdispersion are scarce. Here I use simulations to show that in cases where overdispersion is caused by random extra-Poisson noise, or aggregation in the count data, observation-level random effects yield more accurate parameter estimates compared to when overdispersion is simply ignored. Conversely, OLRE fail to reduce bias in zero-inflated data, and in some cases increase bias at high levels of overdispersion. There was a positive relationship between the magnitude of overdispersion and the degree of bias in parameter estimates. Critically, the simulations reveal that failing to account for overdispersion in mixed models can erroneously inflate measures of explained variance (r²), which may lead researchers to overestimate the predictive power of variables of interest. This work suggests that observation-level random effects provide a simple and robust means to account for overdispersion in count data, but also that their ability to minimise bias is not uniform across all types of overdispersion and they must be applied judiciously.
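
The extra-Poisson variation that OLRE are meant to absorb is easy to simulate: a normal observation-level effect inside the Poisson log-rate drives the variance-to-mean ratio well above the value of 1 expected for a pure Poisson sample. A minimal sketch, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
mu = np.log(5.0)                    # baseline log-mean
olre = rng.normal(0.0, 1.0, n)      # one random-effect level per observation
y = rng.poisson(np.exp(mu + olre))  # Poisson-lognormal counts

dispersion = y.var() / y.mean()     # approximately 1 for a pure Poisson sample
assert dispersion > 3               # clear extra-Poisson variation
```

An OLRE model fitted to such data estimates exactly this per-observation noise term, which is why it recovers covariate effects that a plain Poisson GLMM would estimate with bias.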

  1. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    ... properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatter in excitation-emission (EEM) data was developed. The results show ...

  2. CalCOFI Egg Counts

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI ichthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...

  3. Detection limits for radioanalytical counting techniques

    International Nuclear Information System (INIS)

    Hartwell, J.K.

    1975-06-01

    In low-level radioanalysis it is usually necessary to test the sample net counts against some "critical level" in order to determine whether a given result indicates detection. This is an interpretive review of the work by Nicholson (1963), Currie (1968) and Gilbert (1974). Nicholson's evaluation of three different computational formulas for estimating the "critical level" is discussed. The details of Nicholson's evaluation are presented along with a basic discussion of the testing procedures used. Recommendations are presented for the calculation of confidence intervals, for the reporting of analytical results, and for extension of the derived formulas to more complex cases such as multiple background counts, multiple use of a single background count, and gamma spectrometric analysis.
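
Currie's widely used critical level for a net count formed from paired sample and background counts, L_C = k * sqrt(2B), makes the detection test concrete. Hypothetical counts; k = 1.645 controls the false-positive rate at about 5%.

```python
from math import sqrt

def critical_level(b, k=1.645):
    """Currie's critical level for a net count (gross minus a single,
    equal-time background count b): L_C = k * sqrt(2 * b)."""
    return k * sqrt(2.0 * b)

b = 400.0    # background counts (hypothetical)
net = 52.0   # sample net counts = gross - background (hypothetical)

assert round(critical_level(b), 1) == 46.5
assert net > critical_level(b)   # this result would be reported as "detected"
```

A net count below L_C would instead be reported as "not detected", typically together with a detection limit rather than a bare number.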

  4. Counting statistics in radioactivity measurements

    International Nuclear Information System (INIS)

    Martin, J.

    1975-01-01

    The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, the Poisson distribution and the Laplace-Gauss (normal) law; true counting laws; corrections related to the nature of the apparatus; and statistical techniques in gamma spectrometry. [fr]
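
The starting point of such counting statistics is that a single Poisson count N estimates its own uncertainty as sqrt(N), so relative precision improves as 1/sqrt(N):

```python
from math import sqrt

# a single measurement of N counts estimates the Poisson mean with standard
# uncertainty sqrt(N); relative precision therefore improves as 1 / sqrt(N)
N = 10_000
sigma = sqrt(N)

assert sigma == 100.0
assert sigma / N == 0.01   # 1% relative uncertainty at 10^4 counts
```

Halving the relative uncertainty thus requires four times the counts, which is why counting time, efficiency, and background corrections dominate measurement planning.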

  5. A Robust Controller Structure for Pico-Satellite Applications

    DEFF Research Database (Denmark)

    Kragelund, Martin Nygaard; Green, Martin; Kristensen, Mads

    This paper describes the development of a robust controller structure for use in pico-satellite missions. The structure relies on unknown disturbance estimation and use of robust control theory to implement a system that is robust to both unmodeled disturbances and parameter uncertainties. As one...

  6. Do your syringes count?

    International Nuclear Information System (INIS)

    Brewster, K.

    2002-01-01

    Full text: This study was designed to investigate anecdotal evidence that residual Sestamibi (MIBI) activity varied in certain situations. For rest studies, different brands of syringes were tested to see if the residuals varied. The period of time MIBI doses remained in the syringe between dispensing and injection was also considered as a possible source of increased residual counts. Stress MIBI syringe residual activities were measured to assess whether the method of stress test affected residual activity. MIBI was reconstituted using 13 GBq of technetium in 3 ml of normal saline, then boiled for 10 minutes. Doses were dispensed according to department protocol and injected via cannula. Residual syringes were collected for three syringe types; in each case the barrel and plunger were measured separately. As the syringe is flushed during the exercise stress test and not the pharmacological stress test, the chosen method was recorded. No relationship was demonstrated between the time MIBI remained in a syringe prior to injection and residual activity. Residual activity was not affected by the method of stress test used. Actual injected activity can be calculated if the amount of activity remaining in the syringe post injection is known. Imaging time can be adjusted for residual activity to optimise count statistics. Preliminary results in this study indicate there is no difference in residual activity between syringe brands. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
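
The residual correction the abstract mentions is a simple subtraction once the barrel and plunger activities are measured. Hypothetical activities; `injected_activity` is an illustrative helper, not from the study.

```python
def injected_activity(dispensed_mbq, barrel_mbq, plunger_mbq):
    """Actual injected activity = dispensed activity minus the residual left in
    the syringe (barrel and plunger measured separately, as in the study)."""
    return dispensed_mbq - (barrel_mbq + plunger_mbq)

# hypothetical activities in MBq: 800 dispensed, 25 + 15 left in the syringe
assert injected_activity(800.0, 25.0, 15.0) == 760.0
```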

  7. Photon counting and fluctuation of molecular movement

    International Nuclear Information System (INIS)

    Inohara, Koichi

    1978-01-01

    The direct measurement of the fluctuation of molecular motions, which provides useful information on molecular movement, was conducted by introducing the photon counting method. The use of photon counting makes it possible to treat a molecular system consisting of a small number of molecules, much as radioisotope techniques permit the detection of small numbers of atoms, which is significant in biological systems. The method is based on counting the number of photons of a definite polarization emitted in a definite time interval from fluorescent molecules excited by pulsed light, the molecules being bound to marked large molecules found in a definite spatial region. Using the probability of finding a number of molecules oriented in a definite direction in the definite spatial region, the probability of counting a number of photons in a definite time interval can be calculated. Thus the measurable count rate of photons can be related to the fluctuation of molecular movement. The measurement was carried out under conditions in which the probability of the simultaneous arrival of more than two photons at a detector is less than 1/100. As experimental results, the resolving power of the photon-counting apparatus and the frequency distribution of the number of photons of a definite polarization counted in 1 nanosecond are shown. In solution, the variance of the number of molecules (500 on average) was estimated from the experimental data as 1200, assuming a normal distribution. This departure from the Poisson distribution means that a certain correlation exists in molecular movement. In solid solution, no significant deviation was observed. The correlation existing in molecular movement can be expressed in terms of the fluctuation of the number of molecules. (Nakai, Y.)

  8. Bike-Ped Portal : development of an online nonmotorized traffic count archive.

    Science.gov (United States)

    2017-05-01

    Robust bicycle and pedestrian data on a national scale would serve numerous purposes. Access to a centralized nonmotorized traffic count archive can open the door for innovation through research, design and planning; provide safety researchers with ...

  9. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering-vector and covariance-matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem with a quadratic objective function and a quadratic constraint. The problem is nonconvex but is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
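
A classical, simpler robustification of MVDR against covariance uncertainty is diagonal loading of the sample covariance. The sketch below shows that approach (my illustration of the standard technique, not the paper's convex worst-case method) and checks that the distortionless constraint is preserved.

```python
import numpy as np

M = 8        # sensors in a half-wavelength uniform linear array (hypothetical)
theta = 0.3  # assumed arrival angle in radians (hypothetical)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))   # steering vector

rng = np.random.default_rng(3)
snap = rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))
R = snap @ snap.conj().T / 200          # sample covariance (noise-only snapshots)

Rl = R + 0.1 * np.trace(R).real / M * np.eye(M)   # diagonal loading
w = np.linalg.solve(Rl, a)
w = w / (a.conj() @ w)    # MVDR weights: minimize output power subject to w^H a = 1

assert abs(w.conj() @ a - 1) < 1e-9     # distortionless constraint holds
```

Diagonal loading bounds how aggressively the beamformer nulls directions near the (possibly misestimated) steering vector; the convex formulation in the paper chooses that trade-off from an explicit uncertainty set instead of a fixed loading factor.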

  10. Protecting count queries in study design.

    Science.gov (United States)

    Vinterbo, Staal A; Sarwate, Anand D; Boxwala, Aziz A

    2012-01-01

    Today's clinical research institutions provide tools for researchers to query their data warehouses for counts of patients. To protect patient privacy, counts are perturbed before reporting; this compromises their utility for increased privacy. The goal of this study is to extend current query answer systems to guarantee a quantifiable level of privacy and allow users to tailor perturbations to maximize the usefulness according to their needs. A perturbation mechanism was designed in which users are given options with respect to scale and direction of the perturbation. The mechanism translates the true count, user preferences, and a privacy level within administrator-specified bounds into a probability distribution from which the perturbed count is drawn. Users can significantly impact the scale and direction of the count perturbation and can receive more accurate final cohort estimates. Strong and semantically meaningful differential privacy is guaranteed, providing for a unified privacy accounting system that can support role-based trust levels. This study provides an open source web-enabled tool to investigate visually and numerically the interaction between system parameters, including required privacy level and user preference settings. Quantifying privacy allows system administrators to provide users with a privacy budget and to monitor its expenditure, enabling users to control the inevitable loss of utility. While current measures of privacy are conservative, this system can take advantage of future advances in privacy measurement. The system provides new ways of trading off privacy and utility that are not provided in current study design systems.
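
A standard way to realize quantifiable privacy for count queries is the Laplace mechanism, shown here as a generic sketch; the paper's mechanism additionally lets users shape the scale and direction of the perturbation within administrator bounds.

```python
import numpy as np

def perturbed_count(true_count, epsilon, rng):
    """Laplace mechanism: a count query has sensitivity 1, so adding
    Laplace(1 / epsilon) noise yields epsilon-differential privacy."""
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

rng = np.random.default_rng(7)
# the noise is zero-mean, so perturbed counts are unbiased; in a real system
# every repeated query would spend additional privacy budget
reports = [perturbed_count(120, epsilon=1.0, rng=rng) for _ in range(4000)]

assert abs(np.mean(reports) - 120) < 1.0
```

Smaller epsilon means a larger noise scale and stronger privacy, which is exactly the privacy-utility trade-off the budget accounting in the paper makes explicit.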

  11. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  12. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  13. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures.

  14. Can Detectability Analysis Improve the Utility of Point Counts for Temperate Forest Raptors?

    Science.gov (United States)

    Temperate forest breeding raptors are poorly represented in typical point count surveys because these birds are cryptic and typically breed at low densities. In recent years, many new methods for estimating detectability during point counts have been developed, including distanc...

  15. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    … robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics …

  16. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made requirements to the robustness of new structures essential. According to Danish design rules, robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes/scenarios and determination of acceptable collapse extent; 2) Review …

  17. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural … of robustness for structural design, such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on ‘Robustness of structures' started in 2007 … by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk-based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other.

  18. Modal Logics with Counting

    Science.gov (United States)

    Areces, Carlos; Hoffmann, Guillaume; Denis, Alexandre

    We present a modal language that includes explicit operators to count the number of elements that a model might include in the extension of a formula, and we discuss how this logic has been previously investigated under different guises. We show that the language is related to graded modalities and to hybrid logics. We illustrate a possible application of the language to the treatment of plural objects and queries in natural language. We investigate the expressive power of this logic via bisimulations, discuss the complexity of its satisfiability problem, define a new reasoning task that retrieves the cardinality bound of the extension of a given input formula, and provide an algorithm to solve it.

  19. Digital coincidence counting

    International Nuclear Information System (INIS)

    Buckman, S.M.; Ius, D.

    1996-01-01

    This paper reports on the development of a digital coincidence-counting system which comprises a custom-built data acquisition card and associated PC software. The system has been designed to digitise the pulse-trains from two radiation detectors at a rate of 20 MSamples/s with 12-bit resolution. Through hardware compression of the data, the system can continuously record both individual pulse-shapes and the time intervals between pulses. Software-based circuits are used to process the stored pulse trains. These circuits are constructed simply by linking together icons representing various components such as coincidence mixers, time delays, single-channel analysers, deadtimes and scalers. This system enables a pair of pulse trains to be processed repeatedly using any number of different methods. Some preliminary results are presented in order to demonstrate the versatility and efficiency of this new method. (orig.)
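
    The software-based coincidence mixing described above can be approximated in a few lines: given two sorted timestamp trains, count the events in one train that fall within a resolving time of any event in the other (a simplified sketch; function and variable names are ours):

```python
import numpy as np

def coincidences(t1, t2, window):
    """Count events in t1 that have at least one event in t2 within
    +/- window (a software analogue of a coincidence mixer).
    Both timestamp arrays must be sorted."""
    left = np.searchsorted(t2, t1 - window, side="left")
    right = np.searchsorted(t2, t1 + window, side="right")
    return int(np.count_nonzero(right > left))

t_a = np.array([1.0, 2.0, 5.0, 9.0])
t_b = np.array([2.1, 4.0, 9.05])
print(coincidences(t_a, t_b, window=0.2))  # events at 2.0 and 9.0 match -> 2
```

    Because the stored pulse trains are kept in full, the same pair of trains can be reprocessed with different resolving times or added software dead-times, which is exactly the flexibility the digital system is designed to provide.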

  20. Digital coincidence counting

    Science.gov (United States)

    Buckman, S. M.; Ius, D.

    1996-02-01

    This paper reports on the development of a digital coincidence-counting system which comprises a custom-built data acquisition card and associated PC software. The system has been designed to digitise the pulse-trains from two radiation detectors at a rate of 20 MSamples/s with 12-bit resolution. Through hardware compression of the data, the system can continuously record both individual pulse-shapes and the time intervals between pulses. Software-based circuits are used to process the stored pulse trains. These circuits are constructed simply by linking together icons representing various components such as coincidence mixers, time delays, single-channel analysers, deadtimes and scalers. This system enables a pair of pulse trains to be processed repeatedly using any number of different methods. Some preliminary results are presented in order to demonstrate the versatility and efficiency of this new method.

  1. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  2. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made requirements to the robustness of new structures essential. The aim of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines.

  3. Automatic vehicle counting system for traffic monitoring

    Science.gov (United States)

    Crouzil, Alain; Khoudour, Louahdi; Valiere, Paul; Truong Cong, Dung Nghy

    2016-09-01

    The article is dedicated to the presentation of a vision-based system for road vehicle counting and classification. The system is able to achieve counting with a very good accuracy even in difficult scenarios linked to occlusions and/or presence of shadows. The principle of the system is to use already installed cameras in road networks without any additional calibration procedure. We propose a robust segmentation algorithm that detects foreground pixels corresponding to moving vehicles. First, the approach models each pixel of the background with an adaptive Gaussian distribution. This model is coupled with a motion detection procedure, which allows moving vehicles to be correctly located in space and time. The nature of trials carried out, including peak periods and various vehicle types, leads to an increase of occlusions between cars and between cars and trucks. A specific method for severe occlusion detection, based on the notion of solidity, has been carried out and tested. Furthermore, the method developed in this work is capable of managing shadows with high resolution. The related algorithm has been tested and compared to a classical method. Experimental results based on four large datasets show that our method can count and classify vehicles in real time with a high level of performance (>98%) under different environmental situations, thus performing better than the conventional inductive loop detectors.
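
    The per-pixel adaptive Gaussian background model coupled with a foreground test can be sketched as follows (a minimal single-Gaussian version with parameters of our choosing, not the authors' full system):

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running Gaussian background model: a pixel is flagged
    as foreground when it deviates from its mean by more than k
    standard deviations."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0**2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        diff = frame.astype(float) - self.mean
        foreground = diff**2 > (self.k**2) * self.var
        # Update the model only where the pixel looks like background.
        a = np.where(foreground, 0.0, self.alpha)
        self.mean += a * diff
        self.var = (1 - a) * self.var + a * diff**2
        return foreground

bg = GaussianBackground(np.full((4, 4), 100.0))
for _ in range(20):                      # stable background frames
    bg.apply(np.full((4, 4), 100.0))
frame = np.full((4, 4), 100.0)
frame[1, 1] = 200.0                      # a "vehicle" pixel appears
mask = bg.apply(frame)
```

    The occlusion and shadow handling described in the article would operate on top of such a foreground mask.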

  4. Fitting a distribution to microbial counts: making sense of zeros

    DEFF Research Database (Denmark)

    Ribeiro Duarte, Ana Sofia; Stockmarr, Anders; Nauta, Maarten

    Non-detects or left-censored results are inherent to the traditional methods of microbial enumeration in foods. Typically, a low concentration of microorganisms in a food unit goes undetected in plate counts or most probable number (MPN) counts, and produces “artificial zeros”. However, these “artificial zeros” are only a share of the total number of zero counts resulting from a sample, as their number adds up to the number of “true zeros” resulting from uncontaminated units. In the process of fitting a probability distribution to microbial counts, “artificial” and “true” zeros are usually … and standard deviation) and the prevalence of contaminated food units (one minus the proportion of “true zeros”) from a set of microbial counts. By running the model with in silico generated concentration and count data, we could evaluate the performance of this method in terms of estimation of the three …
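
    The distinction between "true" and "artificial" zeros can be made concrete with a zero-inflated Poisson likelihood: a unit is contaminated with probability p (the prevalence), and a contaminated unit still yields a zero count with probability e^(-lambda). The sketch below (our own illustration with hypothetical parameter values, not the paper's model) recovers both quantities by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)
n, p_true, lam_true = 50000, 0.6, 2.0   # hypothetical prevalence and mean count
contaminated = rng.random(n) < p_true
counts = np.where(contaminated, rng.poisson(lam_true, n), 0)

def nll(theta):
    # Reparameterize so p stays in (0, 1) and lambda stays positive.
    p, lam = 1 / (1 + np.exp(-theta[0])), np.exp(theta[1])
    zero_prob = (1 - p) + p * np.exp(-lam)   # true zeros + artificial zeros
    ll_pos = np.log(p) - lam + counts * np.log(lam) - gammaln(counts + 1.0)
    return -np.sum(np.where(counts == 0, np.log(zero_prob), ll_pos))

res = minimize(nll, [0.0, 0.0], method="Nelder-Mead")
p_hat = 1 / (1 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

    Treating all zeros as "true" zeros would underestimate the prevalence, since a share of zero counts comes from contaminated but undetected units.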

  5. Atmospheric mold spore counts in relation to meteorological parameters

    Science.gov (United States)

    Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.

    Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P … A model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.
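
    The modeling strategy, regressing spore counts on weather variables while allowing serially correlated errors, can be sketched with a simple AR(1) error structure estimated by Cochrane-Orcutt iteration (a simplification of the full ARMA machinery in SAS PROC ARIMA; the data and variable names are simulated stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta_true, phi_true = 3000, np.array([10.0, 0.8]), 0.6

temp = rng.normal(20, 5, n)                 # hypothetical daily temperature
e = np.zeros(n)
for t in range(1, n):                       # AR(1) disturbances
    e[t] = phi_true * e[t - 1] + rng.normal()
spores = beta_true[0] + beta_true[1] * temp + e

X = np.column_stack([np.ones(n), temp])
beta = np.linalg.lstsq(X, spores, rcond=None)[0]   # start from plain OLS
for _ in range(10):                         # Cochrane-Orcutt iterations
    resid = spores - X @ beta
    phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
    y_s = spores[1:] - phi * spores[:-1]    # quasi-differenced data
    X_s = X[1:] - phi * X[:-1]
    beta = np.linalg.lstsq(X_s, y_s, rcond=None)[0]
```

    Ignoring the error autocorrelation would leave the coefficient estimates consistent but their standard errors badly misstated, which is why the ARMA error structure matters for the significance tests reported in the study.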

  6. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least-squares) either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available and the R syntax used to run the example analyses is included in the Appendix.
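
    The tutorial itself uses R, but the core of Poisson regression is small enough to sketch directly. Below is a log-link Poisson GLM fitted by iteratively reweighted least squares on simulated counts (our own illustration, not the tutorial's code; coefficient values are arbitrary):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression with log link, fitted by iteratively
    reweighted least squares (equivalent to Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                               # Poisson variance equals the mean
        z = X @ beta + (y - mu) / mu         # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.3 * x))       # true coefficients (0.5, 0.3)
beta_hat = poisson_irls(X, y)
```

    Fitting ordinary least squares to such data would model the conditional mean on the wrong scale and ignore the mean-variance link, which is the source of the bias the tutorial warns about.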

  7. METRIC CHARACTERISTICS OF VARIOUS METHODS FOR NUMERICAL DENSITY ESTIMATION IN TRANSMISSION LIGHT MICROSCOPY – A COMPUTER SIMULATION

    Directory of Open Access Journals (Sweden)

    Miroslav Kališnik

    2011-05-01

    In the introduction, the evolution of methods for numerical density estimation of particles is briefly presented. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods, and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations, a model of randomly distributed equal spheres with maximal contrast against their surroundings has been used. According to our computer simulation, all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as the histotechnical and counting procedures necessary for performing individual counting methods. However, it is evident that not all practical problems can efficiently be solved with models.

  8. Counts of low-Redshift SDSS quasar candidates

    International Nuclear Information System (INIS)

    Zeljko Ivezic

    2004-01-01

    We analyze the counts of low-redshift quasar candidates selected using nine-epoch SDSS imaging data. The co-added catalogs are more than 1 mag deeper than single-epoch SDSS data, and allow the selection of low-redshift quasar candidates using UV-excess and also variability techniques. The counts of selected candidates are robustly determined down to g = 21.5. This is about 2 magnitudes deeper than the position of a change in the slope of the counts reported by Boyle (and others) (1990, 2000) for a sample selected by UV-excess, and questioned by Hawkins and Veron (1995), who utilized a variability-selected sample. Using SDSS data, we confirm a change in the slope of the counts for both UV-excess and variability selected samples, providing strong support for the Boyle (and others) results
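
    The "slope of the counts" refers to the coefficient alpha in log10 N(m) = alpha * m + c. A change of slope can be exposed by fitting the relation separately on either side of a candidate break magnitude, as in this sketch on idealized counts (all numbers here are illustrative, not the SDSS values):

```python
import numpy as np

# Hypothetical differential counts with a slope change at m = 19.5:
# slope 0.9 brightward of the break, 0.6 faintward.
mag = np.arange(16.0, 23.0, 0.5)
logN = np.where(mag < 19.5, 0.9 * (mag - 19.5), 0.6 * (mag - 19.5)) + 3.0

def fit_slope(m, y):
    """Least-squares slope of log10 N versus magnitude."""
    A = np.column_stack([m, np.ones_like(m)])
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

bright = mag < 19.5
print(fit_slope(mag[bright], logN[bright]))    # -> 0.9
print(fit_slope(mag[~bright], logN[~bright]))  # -> 0.6
```

    In practice the fitted slopes carry Poisson and clustering uncertainties, so confirming a break, as the SDSS analysis does for both selection techniques, requires counts well past the candidate break magnitude.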

  9. Robust bayesian analysis of an autoregressive model with ...

    African Journals Online (AJOL)

    In this work, robust Bayesian analysis of the Bayesian estimation of an autoregressive model with exponential innovations is performed. Using a Bayesian robustness methodology, we show that, using a suitable generalized quadratic loss, we obtain optimal Bayesian estimators of the parameters corresponding to the ...

  10. Evaluation of cyanobacteria cell count detection derived from ...

    Science.gov (United States)

    Inland waters across the United States (US) are at potential risk for increased outbreaks of toxic cyanobacteria (Cyano) harmful algal bloom (HAB) events resulting from elevated water temperatures and extreme hydrologic events attributable to climate change and increased nutrient loadings associated with intensive agricultural practices. Current monitoring efforts are limited in scope due to resource limitations, analytical complexity, and data integration efforts. The goals of this study were to validate a new ocean color algorithm for satellite imagery that could potentially be used to monitor CyanoHAB events in near real-time to provide a comprehensive monitoring capability for freshwater lakes (>100 ha). The algorithm incorporated narrow spectral bands specific to the European Space Agency’s (ESA’s) MEdium Resolution Imaging Spectrometer (MERIS) instrument that were optimally oriented at phytoplankton pigment absorption features including phycocyanin at 620 nm. A validation of derived Cyano cell counts was performed using available in situ data assembled from existing monitoring programs across eight states in the eastern US over a 39-month period (2009–2012). Results indicated that MERIS provided robust estimates for Low (10,000–109,000 cells/mL) and Very High (>1,000,000 cells/mL) cell enumeration ranges (approximately 90% and 83%, respectively). However, the results for two intermediate ranges (110,000–299,000 and 300,000–1,000,000 cells/mL)

  11. Let's Make Data Count

    Science.gov (United States)

    Budden, A. E.; Abrams, S.; Chodacki, J.; Cruse, P.; Fenner, M.; Jones, M. B.; Lowenberg, D.; Rueda, L.; Vieglais, D.

    2017-12-01

    The impact of research has traditionally been measured by citations to journal publications and used extensively for evaluation and assessment in academia, but this process misses the impact and reach of data and software as first-class scientific products. For traditional publications, Article-Level Metrics (ALM) capture the multitude of ways in which research is disseminated and used, such as references and citations within social media and other journal articles. Here we present on the extension of usage and citation metrics collection to include other artifacts of research, namely datasets. The Make Data Count (MDC) project will enable measuring the impact of research data in a manner similar to what is currently done with publications. Data-level metrics (DLM) are a multidimensional suite of indicators measuring the broad reach and use of data as legitimate research outputs. By making data metrics openly available for reuse in a number of different ways, the MDC project represents an important first step on the path towards the full integration of data metrics into the research data management ecosystem. By assuring researchers that their contributions to scholarly progress represented by data corpora are acknowledged, data level metrics provide a foundation for streamlining the advancement of knowledge by actively promoting desirable best practices regarding research data management, publication, and sharing.

  12. LAWRENCE RADIATION LABORATORY COUNTING HANDBOOK

    Energy Technology Data Exchange (ETDEWEB)

    Group, Nuclear Instrumentation

    1966-10-01

    The Counting Handbook is a compilation of operational techniques and performance specifications on counting equipment in use at the Lawrence Radiation Laboratory, Berkeley. Counting notes have been written from the viewpoint of the user rather than that of the designer or maintenance man. The only maintenance instructions that have been included are those that can easily be performed by the experimenter to assure that the equipment is operating properly.

  13. SUMS Counts-Related Projects

    Data.gov (United States)

    Social Security Administration — Staging Instance for all SUMs Counts related projects including: Redeterminations/Limited Issue, Continuing Disability Resolution, CDR Performance Measures, Initial...

  14. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  15. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  16. Compton suppression gamma-counting: The effect of count rate

    Science.gov (United States)

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhanced the signal-to-background ratios for gamma-photopeaks, which are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rate, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so quantitative data can be obtained at higher count rates. © 1984.

  17. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic...... substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties...

  18. Robust surgery loading

    NARCIS (Netherlands)

    Hans, Elias W.; Wullink, Gerhard; van Houdenhoven, Mark; Kazemier, Geert

    2008-01-01

    We consider the robust surgery loading problem for a hospital’s operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus cancelled patients. This

  19. Robustness Envelopes of Networks

    NARCIS (Netherlands)

    Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.

    2013-01-01

    We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case

  20. Track counting in radon dosimetry

    International Nuclear Information System (INIS)

    Fesenbeck, Ingo; Koehler, Bernd; Reichert, Klaus-Martin

    2013-01-01

    The newly developed, computer-controlled track counting system is capable of imaging and analyzing the entire area of nuclear track detectors. The high optical resolution allows a new analysis approach for the process of automated counting using digital image processing technologies. This way, higher exposed detectors can be evaluated reliably by an automated process as well. (orig.)

  1. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
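
    In the capture-recapture setting mentioned above, the Horvitz-Thompson population-size estimator follows directly from a fitted zero-truncated Poisson: the MLE of lambda solves the moment condition E[X | X > 0] = lambda / (1 - exp(-lambda)) = x-bar, and N is estimated by inflating the number of observed units by the estimated detection probability. A sketch with simulated data (the parameter values are ours, and the single-Poisson case is the simplest member of the mixture family discussed):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
N_true, lam_true = 10000, 1.2                 # hypothetical population and rate
full = rng.poisson(lam_true, N_true)
observed = full[full > 0]                     # zero counts are never seen
n = observed.size

# Solve the truncated-Poisson likelihood equation for lambda, then apply
# the Horvitz-Thompson correction 1 / P(X > 0).
xbar = observed.mean()
lam_hat = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-6, 50.0)
N_hat = n / (1 - np.exp(-lam_hat))
```

    The article's point is that the same population-size estimate is reached whether one works with the truncated mixture or the mixture of truncated densities, so the theoretically tractable form can be used without loss.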

  2. Study on advancement of in vivo counting using mathematical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kinase, Sakae [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-05-01

    To obtain an assessment of the committed effective dose, individual monitoring for the estimation of intakes of radionuclides is required. For individual monitoring of exposure to intakes of radionuclides, direct measurement of radionuclides in the body - in vivo counting - is very useful. To achieve precise in vivo counting that fulfills the requirements of the ICRP 1990 recommendations, some problems have been studied, such as the investigation of uncertainties in estimates of body burdens by in vivo counting and the selection of ways to improve precision. In the present study, a calibration technique for in vivo counting applications using Monte Carlo simulation was developed. The advantage of the technique is that counting efficiency can be obtained for various shapes and sizes that are very difficult to change for phantoms. To validate the calibration technique, the response functions and counting efficiencies of a whole-body counter installed at JAERI were evaluated using the simulation and measurements. Consequently, the calculations are in good agreement with the measurements. A method for the determination of counting efficiency curves as a function of energy was developed using the present technique, and a physique correction equation was derived from the relationship between parameters of the correction factor and counting efficiencies of the JAERI whole-body counter. The uncertainties in body burdens of {sup 137}Cs estimated with the JAERI whole-body counter were also investigated using the Monte Carlo simulation and measurements. It was found that the uncertainties of body burdens estimated with the whole-body counter depend strongly on various sources of uncertainty such as the radioactivity distribution within the body and counting statistics. Furthermore, an evaluation method for the peak efficiencies of a Ge semi-conductor detector was developed by Monte Carlo simulation for optimum arrangement of Ge semi-conductor detectors for
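
    The core idea, calibrating counting efficiency by simulation where physical phantoms are impractical, can be reduced to a toy geometric example: isotropic emission toward an ideal detector subtending a cone, where the Monte Carlo estimate can be checked against the analytic solid-angle fraction (1 - cos(theta))/2 (a didactic sketch of the method's logic, not the JAERI simulation):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000
half_angle = np.pi / 6                       # hypothetical detector cone

# Isotropic emission: cos(theta) is uniform on [-1, 1], so a photon
# "hits" the detector when its direction falls inside the cone.
cos_theta = rng.uniform(-1.0, 1.0, n)
hits = cos_theta > np.cos(half_angle)
eff_mc = hits.mean()
eff_exact = (1 - np.cos(half_angle)) / 2     # analytic geometric efficiency
```

    A full calibration replaces the geometric test with photon transport through body tissue and detector response, but the validation step is the same: agreement between simulated efficiencies and reference measurements.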

  3. Galaxy number counts: Pt. 2

    International Nuclear Information System (INIS)

    Metcalfe, N.; Shanks, T.; Fong, R.; Jones, L.R.

    1991-01-01

    Using the Prime Focus CCD Camera at the Isaac Newton Telescope we have determined the form of the B and R galaxy number-magnitude count relations in 12 independent fields, to faint CCD magnitude limits in both bands. The average galaxy count relations lie in the middle of the wide range previously encompassed by photographic data. The field-to-field variation of the counts is small enough to define the faint galaxy count to ±10 per cent, and this variation is consistent with that expected from galaxy clustering considerations. Our new data confirm that the B, and also the R, galaxy counts show evidence for strong galaxy luminosity evolution, and that the majority of the evolving galaxies are of moderately blue colour. (author)

  4. Hanford whole body counting manual

    International Nuclear Information System (INIS)

    Palmer, H.E.; Brim, C.P.; Rieksts, G.A.; Rhoads, M.C.

    1987-05-01

    This document, a reprint of the Whole Body Counting Manual, was compiled to train personnel, document operation procedures, and outline quality assurance procedures. The current manual contains information on: the location, availability, and scope of services of Hanford's whole body counting facilities; the administrative aspect of the whole body counting operation; Hanford's whole body counting facilities; the step-by-step procedure involved in the different types of in vivo measurements; the detectors, preamplifiers and amplifiers, and spectroscopy equipment; the quality assurance aspect of equipment calibration and recordkeeping; data processing, record storage, results verification, report preparation, count summaries, and unit cost accounting; and the topics of minimum detectable amount and measurement accuracy and precision. 12 refs., 13 tabs

  5. Robust bayesian inference of generalized Pareto distribution ...

    African Journals Online (AJOL)

    Using an exhaustive Monte Carlo study, we show that, given a suitable generalized loss function, a robust Bayesian estimator of the model can be constructed. Key words: Bayesian estimation; Extreme value; Generalized Fisher information; Generalized Pareto distribution; Monte Carlo; ...

  6. Finite Algorithms for Robust Linear Regression

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun

    1990-01-01

    The Huber M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may...
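
    The Huber M-estimator analyzed in the note is commonly computed by iteratively reweighted least squares (IRLS); the minimal sketch below uses that scheme for a straight-line fit. Note this is a generic illustration, not the finite Newton-type algorithms the paper itself analyzes, and all names and data are invented here.

```python
import numpy as np

def huber_irls(x, y, delta=1.345, n_iter=100, tol=1e-10):
    """Huber M-estimate of a straight-line fit via iteratively
    reweighted least squares (IRLS)."""
    X = np.column_stack([np.ones_like(x), x])          # intercept + slope
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # start from ordinary LS
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust scale estimate from the median absolute deviation (MAD).
        s = max(np.median(np.abs(r - np.median(r))) / 0.6745, 1e-12)
        # Huber weights: 1 inside the delta*s band, decaying like 1/|r| outside.
        w = np.minimum(1.0, delta * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Hypothetical data: noiseless line y = 2 + 3x with five gross outliers.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x
y[:5] += 10.0
b0, b1 = huber_irls(x, y)
# b1 should stay close to the true slope 3, since the outliers
# are heavily down-weighted.
```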

  7. Double hard scattering without double counting

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gaunt, Jonathan R. [VU Univ. Amsterdam (Netherlands). NIKHEF Theory Group; Schoenwald, Kay [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2017-02-15

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  8. How fast can quantum annealers count?

    International Nuclear Information System (INIS)

    Hen, Itay

    2014-01-01

    We outline an algorithm for the quantum counting problem using adiabatic quantum computation (AQC). We show that the mechanism of quantum-adiabatic evolution may be utilized toward estimating the number of solutions to a problem, and not only to find them. Using local adiabatic evolution, a process in which the adiabatic procedure is performed at a variable rate, the problem of counting the number of marked items in an unstructured database is solved quadratically faster than the corresponding classical algorithm. The above algorithm provides further evidence for the potentially powerful capabilities of AQC as a paradigm for more efficient problem solving on a quantum computer, and may be used as the basis for solving more sophisticated problems. (paper)
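
    For contrast with the quadratic quantum speedup described above, the classical baseline for the counting problem can be sketched as uniform sampling of the database; the function and the marked set below are hypothetical illustrations, not part of the paper.

```python
import random

def estimate_marked(is_marked, n, samples=10_000, seed=0):
    """Classical baseline for the counting problem: estimate the number
    of marked items in an unstructured database of size n by querying
    the oracle `is_marked` on uniformly sampled indices. The quantum
    algorithm in the abstract achieves quadratically fewer queries."""
    rng = random.Random(seed)
    hits = sum(is_marked(rng.randrange(n)) for _ in range(samples))
    return n * hits / samples

# Hypothetical database of 1000 items with every tenth item marked,
# so the true count is 100.
est = estimate_marked(lambda i: i % 10 == 0, 1000)
```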

  9. Double hard scattering without double counting

    International Nuclear Information System (INIS)

    Diehl, Markus; Gaunt, Jonathan R.

    2017-02-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  10. A robust classic.

    Science.gov (United States)

    Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus

    2011-01-01

    In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.

  11. Robust Airline Schedules

    OpenAIRE

    Eggenberg, Niklaus; Salani, Matteo; Bierlaire, Michel

    2010-01-01

    Due to economic pressure, industries, when planning, tend to focus on optimizing the expected profit or yield. The consequence of highly optimized solutions is an increased sensitivity to uncertainty. This generates additional "operational" costs, incurred by possible modifications of the original plan that must be performed when reality does not reflect what was expected in the planning phase. The modern research trend therefore focuses on the "robustness" of solutions instead of yield or profit. Although ro...

  12. The Crane Robust Control

    Directory of Open Access Journals (Sweden)

    Marek Hicar

    2004-01-01

    Full Text Available The article presents a control design for the complete structure of the crane: crab, bridge and crane uplift. The most important unknown parameters for the simulations are the burden weight and the length of the hanging rope. Robust control is used for the crab and bridge to ensure adaptivity to burden weight and rope length. Robust control is designed for the current control of the crab and bridge; it is necessary to know the range of the unknown parameters. The whole robust interval is split into subintervals, and after correct identification of the unknown parameters the most suitable robust controllers are chosen. The most important condition for crab and bridge motion is avoiding burden swinging in the final position. The crab and bridge drive is designed with an asynchronous motor fed from a frequency converter. The crane uplift is combined with a burden-weight observer, with the uplift, crab and bridge drives cooperating through their parameters: burden weight, rope length, and crab and bridge position. Controllers are designed by the state control method, preferably using a disturbance observer, which identifies the burden weight as a disturbance. The system works in both modes, with an empty hook as well as at maximum load: burden uplifting and dropping down.

  13. In vivo counting of uranium

    International Nuclear Information System (INIS)

    Palmer, H.E.

    1985-03-01

    A state-of-the-art radiation detector system is described, consisting of six individually mounted intrinsic-germanium planar detectors, each 20 cm² in area by 13 mm thick, mounted together such that the angle of the whole system can be changed to match the slope of the chest of the person being counted. The sensitivity of the system for counting uranium and plutonium in vivo and the procedures used in calibrating the system are also described. Some results of counts done on uranium mill workers are presented. 15 figs., 2 tabs

  14. Assessing the composition of fragmented agglutinated foraminiferal assemblages in ancient sediments: comparison of counting and area-based methods in Famennian samples (Late Devonian)

    Science.gov (United States)

    Girard, Catherine; Dufour, Anne-Béatrice; Charruault, Anne-Lise; Renaud, Sabrina

    2018-01-01

    Benthic foraminifera have been used as proxies for various paleoenvironmental variables such as food availability, carbon flux from surface waters, microhabitats, and indirectly water depth. Estimating assemblage composition based on morphotypes, as opposed to genus- or species-level identification, potentially loses important ecological information but opens the way to the study of ancient time periods. However, the ability to accurately constrain benthic foraminiferal assemblages has been questioned when the most abundant foraminifera are fragile agglutinated forms, particularly prone to fragmentation. Here we test an alternate method for accurately estimating the composition of fragmented assemblages. The cumulated area per morphotype method is assessed, i.e., the sum of the area of all tests or fragments of a given morphotype in a sample. The percentage of each morphotype is calculated as a portion of the total cumulated area. Percentages of different morphotypes based on counting and cumulated area methods are compared one by one and analyzed using principal component analyses, a co-inertia analysis, and Shannon diversity indices. Morphotype percentages are further compared to an estimate of water depth based on microfacies description. Percentages of the morphotypes are not related to water depth. In all cases, counting and cumulated area methods deliver highly similar results, suggesting that the less time-consuming traditional counting method may provide robust estimates of assemblages. The size of each morphotype may deliver paleobiological information, for instance regarding biomass, but should be considered carefully due to the pervasive issue of fragmentation.
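
    The two composition measures compared in the abstract (fragment counts versus cumulated area per morphotype) can be sketched as follows; the morphotype names, areas, and data structure are invented sample data for illustration, not the authors' dataset.

```python
from collections import defaultdict

def composition(fragments, by="count"):
    """Assemblage composition per morphotype, as percentages, computed
    either from fragment counts or from cumulated area (the sum of the
    areas of all tests or fragments of each morphotype).
    `fragments` is a list of (morphotype, area) pairs."""
    totals = defaultdict(float)
    for morphotype, area in fragments:
        totals[morphotype] += 1.0 if by == "count" else area
    grand = sum(totals.values())
    return {m: 100.0 * v / grand for m, v in totals.items()}

# Invented sample: 2 "tubular" and 3 "globular" fragments whose areas
# happen to give the same 40/60 split under both methods, illustrating
# the paper's finding that the two measures can agree closely.
sample = [("tubular", 2.1), ("tubular", 1.9),
          ("globular", 4.0), ("globular", 0.5), ("globular", 1.5)]
```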

  15. Robust methods and asymptotic theory in nonlinear econometrics

    CERN Document Server

    Bierens, Herman J

    1981-01-01

    This Lecture Note deals with asymptotic properties, i.e. weak and strong consistency and asymptotic normality, of parameter estimators of nonlinear regression models and nonlinear structural equations under various assumptions on the distribution of the data. The estimation methods involved are nonlinear least squares estimation (NLLSE), nonlinear robust M-estimation (NLRME) and nonlinear weighted robust M-estimation (NLWRME) for the regression case, and nonlinear two-stage least squares estimation (NL2SLSE) and a new method called minimum information estimation (MIE) for the case of structural equations. The asymptotic properties of the NLLSE and the two robust M-estimation methods are derived from further elaborations of results of Jennrich. Special attention is paid to the comparison of the asymptotic efficiency of NLLSE and NLRME. It is shown that if the tails of the error distribution are fatter than those of the normal distribution, NLRME is more efficient than NLLSE. The NLWRME method is appropriate ...

  16. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  17. Evaluation of Deterministic and Stochastic Components of Traffic Counts

    Directory of Open Access Journals (Sweden)

    Ivan Bošnjak

    2012-10-01

    Full Text Available Traffic counts or statistical evidence of the traffic process are often characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
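
    One classical way to separate the trend and seasonal components mentioned above is additive decomposition with a centered moving average; the sketch below is a generic illustration under that assumption, not the authors' method, and the synthetic series is invented.

```python
def decompose(series, period):
    """Classical additive decomposition of an equally spaced count series
    into trend (centered moving average), seasonal component (per-phase
    means of the detrended series), and residual."""
    n = len(series)
    half = period // 2
    trend = [None] * n
    for i in range(half, n - half):
        w = series[i - half:i + half + 1]
        if period % 2 == 0:  # even periods need half-weights at the window ends
            trend[i] = (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / period
        else:
            trend[i] = sum(w) / period
    # Average the detrended values at each seasonal phase.
    phase_vals = {k: [] for k in range(period)}
    for i in range(n):
        if trend[i] is not None:
            phase_vals[i % period].append(series[i] - trend[i])
    means = {k: (sum(v) / len(v) if v else 0.0) for k, v in phase_vals.items()}
    adjust = sum(means.values()) / period  # force the seasonal part to sum to ~0
    seasonal = [means[i % period] - adjust for i in range(n)]
    residual = [series[i] - trend[i] - seasonal[i] if trend[i] is not None else None
                for i in range(n)]
    return trend, seasonal, residual

# Synthetic hourly-style counts: linear trend 0.5*i plus a period-4
# seasonal pattern; for this series the trend is recovered exactly.
series = [0.5 * i + [2.0, -1.0, -2.0, 1.0][i % 4] for i in range(16)]
trend, seasonal, residual = decompose(series, period=4)
```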

  18. Complete Blood Count (For Parents)

    Science.gov (United States)


  19. Make My Trip Count 2015

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The Make My Trip Count (MMTC) commuter survey, conducted in September and October 2015 by GBA, the Pittsburgh 2030 District, and 10 other regional transportation...

  20. Counting Triangles to Sum Squares

    Science.gov (United States)

    DeMaio, Joe

    2012-01-01

    Counting complete subgraphs on three vertices in complete graphs yields combinatorial arguments for identities for sums of squares of integers, odd integers and even integers, and for sums of the triangular numbers.
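
    The abstract's idea can be checked numerically: triangles in K_n number C(n, 3), and one standard identity of this family relates binomial coefficients to the sum of squares. (This particular identity is chosen here for illustration; the paper's exact arguments may differ.)

```python
from itertools import combinations
from math import comb

def count_triangles(n):
    """Count complete subgraphs on three vertices (triangles) in the
    complete graph K_n by direct enumeration; this equals C(n, 3)."""
    return sum(1 for _ in combinations(range(n), 3))

# One identity of the sums-of-squares family:
#   sum_{k=1}^{n} k^2 = C(n+1, 2) + 2*C(n+1, 3)
n = 6
lhs = sum(k * k for k in range(1, n + 1))   # 1 + 4 + 9 + 16 + 25 + 36 = 91
rhs = comb(n + 1, 2) + 2 * comb(n + 1, 3)   # 21 + 2*35 = 91
```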