WorldWideScience

Sample records for density estimation methods

  1. Concrete density estimation by rebound hammer method

    Science.gov (United States)

    Ismail, Mohamad Pauzi bin; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin

    2016-01-01

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked to determine concrete quality; however, for shielding purposes density is the parameter that needs to be considered. X-rays and gamma radiation are effectively absorbed by a material with a high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite.

  2. Concrete density estimation by rebound hammer method

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Mohamad Pauzi bin, E-mail: pauzi@nm.gov.my; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri [NDT Group, Nuclear Malaysia, Bangi, Kajang, Selangor (Malaysia); Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin [Material Technology Program, Faculty of Applied Sciences, UiTM, Shah Alam, Selangor (Malaysia); Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin [Pusat Penyelidikan Mineral, Jabatan Mineral dan Geosains, Ipoh, Perak (Malaysia)

    2016-01-22

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked to determine concrete quality; however, for shielding purposes density is the parameter that needs to be considered. X-rays and gamma radiation are effectively absorbed by a material with a high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite.

  3. Comparison of density estimation methods for astronomical datasets

    NARCIS (Netherlands)

    Ferdosi, B.J.; Buddelmeijer, H.; Trager, S.C.; Wilkinson, M.H.F.; Roerdink, J.B.T.M.

    2011-01-01

    Context. Galaxies are strongly influenced by their environment. Quantifying the galaxy density is a difficult but critical step in studying the properties of galaxies. Aims. We aim to determine differences in density estimation methods and their applicability in astronomical problems. We study the p

  4. A method for density estimation based on expectation identities

    Science.gov (United States)

    Peralta, Joaquín; Loyola, Claudia; Loguercio, Humberto; Davis, Sergio

    2017-06-01

    We present a simple and direct method for non-parametric estimation of a one-dimensional probability density, based on the application of the recent conjugate variables theorem. The method expands the logarithm of the probability density ln P(x|I) in terms of a complete basis and numerically solves for the coefficients of the expansion using a linear system of equations. No Monte Carlo sampling is needed. We present preliminary results that show the practical usefulness of the method for modeling statistical data.
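
    For orientation only, here is a minimal, hypothetical illustration of the general idea of expanding a log-density in a complete basis and obtaining the coefficients from a linear system. It uses an ordinary least-squares fit to a histogram-based log-density rather than the conjugate-variables identities of the paper, and the basis choice (plain polynomials) and grid size are assumptions.
    ```python
    import numpy as np

    def log_density_basis_fit(samples, order=6, bins=200):
        """Fit ln p(x) = sum_k a_k x^k by least squares to a histogram log-density."""
        hist, edges = np.histogram(samples, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        keep = hist > 0                                    # avoid log(0) in empty bins
        A = np.vander(centers[keep], order + 1)            # basis functions on the grid
        coef, *_ = np.linalg.lstsq(A, np.log(hist[keep]), rcond=None)
        p = np.exp(np.vander(centers, order + 1) @ coef)   # reconstructed density
        p /= p.sum() * (centers[1] - centers[0])           # renormalise
        return centers, p, coef

    # example: recover a roughly Gaussian density from samples
    x, p_hat, a = log_density_basis_fit(np.random.normal(size=5000))
    ```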

  5. Accurate photometric redshift probability density estimation - method comparison and application

    CERN Document Server

    Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-01-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs compared with non-ordinal classification architectures. We also propose a new single-value point estimate of the galaxy redshift that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs when compared with a popular neural network code (ANNz). In our use case, this improvemen...

  6. An Adaptive Background Subtraction Method Based on Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    Mignon Park

    2012-09-01

    Full Text Available In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize the model and updating it subsequently at every frame by controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods so as to verify its performance.
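
    A rough sketch of the kind of per-pixel KDE background model described above, assuming grayscale frames; the buffer size, bandwidth, learning rate, density threshold and the blend-style update rule are illustrative guesses, not the paper's exact scheme.
    ```python
    import numpy as np

    class KDEBackground:
        """Per-pixel Gaussian-kernel background model with a simple running update."""
        def __init__(self, first_frame, n_samples=10, sigma=15.0, alpha=0.05, thresh=1e-4):
            # initialise the sample buffer from the first frame only
            self.samples = np.repeat(first_frame[None].astype(float), n_samples, axis=0)
            self.sigma, self.alpha, self.thresh = sigma, alpha, thresh

        def apply(self, frame):
            frame = frame.astype(float)
            d = (frame[None] - self.samples) / self.sigma
            dens = np.exp(-0.5 * d**2).mean(axis=0) / (np.sqrt(2 * np.pi) * self.sigma)
            fg = dens < self.thresh                    # low density under the model -> foreground
            # blend one stored sample towards the new frame, but only where background
            i = np.random.randint(self.samples.shape[0])
            upd = (1 - self.alpha) * self.samples[i] + self.alpha * frame
            self.samples[i] = np.where(fg, self.samples[i], upd)
            return fg
    ```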

  7. A projection and density estimation method for knowledge discovery.

    Science.gov (United States)

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  8. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  9. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    Directory of Open Access Journals (Sweden)

    D.O. Smallwood

    1996-01-01

    Full Text Available It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
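
    A small numpy sketch of the standard relations this abstract builds on: ordinary coherence from the off-diagonal and diagonal elements of the cross-spectral density (CSD) matrix, and multiple coherence of one channel with respect to the rest from the matrix inverse. The Cholesky/SVD formulation of the paper itself is not reproduced here.
    ```python
    import numpy as np

    def ordinary_coherence(G):
        """Ordinary coherence |G_ij|^2 / (G_ii G_jj) from a Hermitian CSD matrix at one frequency."""
        d = np.real(np.diag(G))
        return np.abs(G) ** 2 / np.outer(d, d)

    def multiple_coherence(G, out=-1):
        """Multiple coherence between channel `out` and all remaining channels."""
        Ginv = np.linalg.inv(G)
        return 1.0 - 1.0 / (np.real(G[out, out]) * np.real(Ginv[out, out]))
    ```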

  10. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    Science.gov (United States)

    2015-09-30

    Len Thomas & Danielle Harris, Centre ... to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope

  11. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    Directory of Open Access Journals (Sweden)

    Hendra Gunawan

    2014-06-01

    Full Text Available http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation between the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation between the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated with both methods on simple 2D synthetic models, under the assumption of a free-air anomaly consisting of a topographic effect, an intracrustal effect, and an isostatic compensation. Based on the simulation results, Bouguer density estimates were then investigated for a 2005 gravity survey of the La Soufriere Volcano (Guadeloupe, Antilles Islands) area. The Bouguer density based on the Parasnis approach is 2.71 g/cm3 for the whole area, except for the edifice area where the average topographic density estimate is 2.21 g/cm3; Bouguer density estimates from a previous gravity survey of 1975 are 2.67 g/cm3. The Bouguer density at La Soufriere Volcano was estimated with an uncertainty of 0.1 g/cm3. For the studied area, the density deduced from refraction seismic data is consistent with the recent Bouguer density estimates. A new Bouguer anomaly map based on these Bouguer density values allows a better geological interpretation.
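
    A hedged sketch of the Nettleton-style scan described above: try candidate Bouguer densities and keep the one whose simple-slab Bouguer anomaly is least correlated with topography. Units and the density range are assumptions; terrain and isostatic corrections are ignored.
    ```python
    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def nettleton_density(free_air_mgal, elevation_m, densities=np.arange(1800.0, 3200.0, 10.0)):
        """Return the density (kg/m^3) minimising |corr(Bouguer anomaly, topography)|."""
        best_rho, best_r = None, np.inf
        for rho in densities:
            slab = 2.0 * np.pi * G * rho * elevation_m * 1e5   # Bouguer slab correction in mGal
            bouguer = free_air_mgal - slab
            r = abs(np.corrcoef(bouguer, elevation_m)[0, 1])
            if r < best_r:
                best_rho, best_r = rho, r
        return best_rho, best_r
    ```
    The Parasnis variant instead regresses the free-air anomaly against the per-unit-density slab correction and reads the density off the slope; both are least-correlation ideas of the same kind.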

  13. ESTIMATING NUMBER DENSITY NV – A COMPARISON OF AN IMPROVED SALTYKOV ESTIMATOR AND THE DISECTOR METHOD

    Directory of Open Access Journals (Sweden)

    Ashot Davtian

    2011-05-01

    Full Text Available Two methods for the estimation of the number of spherical particles per unit volume, NV, are discussed: the (physical) disector (Sterio, 1984) and Saltykov's estimator (Saltykov, 1950; Fullman, 1953). A modification of Saltykov's estimator is proposed which reduces the variance. Formulae for bias and variance are given for both the disector and the improved Saltykov estimator for the case of randomly positioned particles. They enable the comparison of the two estimators with respect to their precision in terms of mean squared error.
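
    For orientation, a minimal sketch of the physical disector count referred to above; variable names and the example units are illustrative only.
    ```python
    def disector_number_density(q_minus_counts, frame_area, disector_height):
        """Physical disector: N_V = particles seen in the reference but not the look-up
        section (Q^-), divided by the total sampled volume."""
        total_q = sum(q_minus_counts)                   # one Q^- count per disector pair
        volume = len(q_minus_counts) * frame_area * disector_height
        return total_q / volume

    # e.g. 12 disector pairs, 0.01 mm^2 counting frame, 0.005 mm section separation
    nv = disector_number_density([2, 1, 0, 3, 1, 2, 0, 1, 2, 1, 0, 2], 0.01, 0.005)
    ```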

  14. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will

  15. A method to estimate plant density and plant spacing heterogeneity: application to wheat crops.

    Science.gov (United States)

    Liu, Shouyang; Baret, Fred; Allard, Denis; Jin, Xiuliang; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu; Comar, Alexis

    2017-01-01

    Plant density and its non-uniformity drive the competition among plants as well as with weeds. They thus need to be estimated with small uncertainty. An optimal sampling method is proposed to estimate the plant density in wheat crops from plant counting and reach a given precision. Three experiments were conducted in 2014, resulting in 14 plots across varied sowing densities, cultivars and environmental conditions. The coordinates of the plants along the row were measured on high-resolution RGB images taken from ground level. Results show that the spacings between consecutive plants along the row direction are independent and follow a gamma distribution under the varied conditions experienced. A gamma count model was then derived to define the optimal sample size required to estimate plant density with a given precision. Results suggest that measuring the length of segments containing 90 plants will achieve a precision better than 10%, independently of the plant density. This approach appears more efficient than the usual method based on fixed-length segments in which the number of plants is counted: there, the optimal length for a given precision on the density estimate depends on the actual plant density. The gamma count model parameters may also be used to quantify the heterogeneity of plant spacing along the row by exploiting the variability between replicated samples. Results show that to achieve a 10% precision on the estimates of the 2 parameters of the gamma model, 200 elementary samples corresponding to the spacing between 2 consecutive plants should be measured. This method provides an optimal sampling strategy to estimate the plant density and quantify the plant spacing heterogeneity along the row.
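
    A small sketch of the two quantities discussed above: density from a counted row segment and spacing heterogeneity from a gamma fit. The row-spacing parameter and the use of scipy's generic gamma fit are assumptions, not the paper's exact procedure.
    ```python
    import numpy as np
    from scipy import stats

    def plant_density(n_plants, segment_length_m, row_spacing_m):
        """Plants per square metre from counting n_plants along one row segment."""
        return n_plants / (segment_length_m * row_spacing_m)

    def spacing_gamma_fit(plant_positions_m):
        """Fit a gamma distribution to spacings between consecutive plants along a row."""
        spacings = np.diff(np.sort(plant_positions_m))
        shape, _, scale = stats.gamma.fit(spacings, floc=0)
        return shape, scale   # a low shape (high CV) indicates heterogeneous spacing

    # e.g. 90 plants counted over a 5.6 m segment, rows 0.175 m apart (illustrative numbers)
    d = plant_density(90, 5.6, 0.175)
    ```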

  16. Contingent kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Scott Fortmann-Roe

    Full Text Available Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method.
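
    A generic sketch of the underlying idea of letting each observation carry its own kernel scale (here, a Gaussian bandwidth tied to the size of the area the point was reported from); the exact contingent-kernel construction of the paper is not reproduced.
    ```python
    import numpy as np

    def per_point_bandwidth_kde(x_grid, samples, bandwidths):
        """1-D Gaussian KDE in which every sample has its own bandwidth."""
        z = (x_grid[:, None] - samples[None, :]) / bandwidths[None, :]
        k = np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * bandwidths[None, :])
        return k.mean(axis=1)

    # e.g. points reported at the centres of differently sized areas:
    # use (roughly) each area's radius as that point's bandwidth
    x = np.linspace(0, 10, 200)
    f = per_point_bandwidth_kde(x, np.array([2.0, 5.0, 5.5, 8.0]), np.array([0.3, 1.0, 0.2, 0.6]))
    ```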

  17. Retrieval of mesospheric electron densities using an optimal estimation inverse method

    Science.gov (United States)

    Grant, J.; Grainger, R. G.; Lawrence, B. N.; Fraser, G. J.; von Biel, H. A.; Heuff, D. N.; Plank, G. E.

    2004-03-01

    We present a new method to determine mesospheric electron densities from partially reflected medium frequency radar pulses. The technique uses an optimal estimation inverse method and retrieves both an electron density profile and a gradient electron density profile. As well as accounting for the absorption of the two magnetoionic modes formed by ionospheric birefringence of each radar pulse, the forward model of the retrieval parameterises possible Fresnel scatter of each mode by fine electronic structure, phase changes of each mode due to Faraday rotation and the dependence of the amplitudes of the backscattered modes upon pulse width. Validation results indicate that known profiles can be retrieved and that χ2 tests upon retrieval parameters satisfy validity criteria. Application to measurements shows that retrieved electron density profiles are consistent with accepted ideas about seasonal variability of electron densities and their dependence upon nitric oxide production and transport.

  18. Use of prediction methods to estimate true density of active pharmaceutical ingredients.

    Science.gov (United States)

    Cao, Xiaoping; Leyva, Norma; Anderson, Stephen R; Hancock, Bruno C

    2008-05-01

    True density is a fundamental and important property of active pharmaceutical ingredients (APIs). Using prediction methods to estimate the API true density can be very beneficial in pharmaceutical research and development, especially when experimental measurements cannot be made due to lack of material or sample handling restrictions. In this paper, two empirical prediction methods, developed by Girolami and by Immirzi and Perini, were used to estimate the true density of APIs, and the estimation results were compared with values measured experimentally by helium pycnometry. The Girolami method is simple and can be used for both liquids and solids. For the tested APIs, the Girolami method had a maximum error of -12.7% and an average percent error of -3.0% with a 95% CI of (-3.8, -2.3%). The Immirzi and Perini method is more involved and is mainly used for solid crystals. In general, it gives better predictions than the Girolami method. For the tested APIs, the Immirzi and Perini method had a maximum error of 9.6% and an average percent error of 0.9% with a 95% CI of (0.3, 1.6%).

  19. Airborne Crowd Density Estimation

    Science.gov (United States)

    Meynberg, O.; Kuschk, G.

    2013-10-01

    This paper proposes a new method for estimating human crowd densities from aerial imagery. Applications benefiting from an accurate crowd monitoring system are mainly found in the security sector. Normally, crowd density estimation is done through in-situ camera systems mounted at high locations, although this is not appropriate in the case of very large crowds with thousands of people. Using airborne camera systems in these scenarios is a new research topic. Our method uses a preliminary filtering of the whole image space by suitable and fast interest point detection, resulting in a number of image regions possibly containing human crowds. Validation of these candidates is done by transforming the corresponding image patches into a low-dimensional and discriminative feature space and classifying the results using a support vector machine (SVM). The feature space is spanned by texture features computed by applying a Gabor filter bank with varying scale and orientation to the image patches. For evaluation, we use 5 different image datasets acquired by the 3K+ aerial camera system of the German Aerospace Center during real mass events like concerts or football games. To evaluate the robustness and generality of our method, these datasets are taken from different flight heights between 800 m and 1500 m above ground (keeping a fixed focal length) and varying daylight and shadow conditions. The results of our crowd density estimation are evaluated against a reference data set obtained by manually labeling tens of thousands of individual persons in the corresponding datasets, and show that our method is able to estimate human crowd densities in challenging realistic scenarios.
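
    A compact sketch of the feature/classifier pipeline outlined above: Gabor filter bank statistics per image patch fed to an SVM. The filter frequencies, orientations and the choice of mean/variance statistics are assumptions, not the paper's exact configuration.
    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gabor_kernel
    from sklearn.svm import SVC

    def gabor_features(patch, freqs=(0.1, 0.2, 0.3), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
        """Mean and variance of Gabor filter responses over a grayscale patch."""
        feats = []
        for f in freqs:
            for th in thetas:
                k = np.real(gabor_kernel(f, theta=th))
                r = ndi.convolve(patch.astype(float), k, mode="wrap")
                feats += [r.mean(), r.var()]
        return np.array(feats)

    # training (patches and labels assumed given):
    # X = np.vstack([gabor_features(p) for p in patches]); clf = SVC().fit(X, labels)
    ```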

  20. METAPHOR: A machine learning based method for the probability density estimation of photometric redshifts

    CERN Document Server

    Cavuoti, Stefano; Brescia, Massimo; Vellucci, Civita; Tortora, Crescenzo; Longo, Giuseppe

    2016-01-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z's). A plethora of methods have been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine learning based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z Probability Density Function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use...

  1. New Density Estimation Methods for Charged Particle Beams With Applications to Microbunching Instability

    Energy Technology Data Exchange (ETDEWEB)

    Balsa Terzic, Gabriele Bassi

    2011-07-01

    In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
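
    A toy 2-D version of the truncated-cosine-transform idea: bin the particles onto a grid, keep only the low-order DCT coefficients, and transform back. The grid size and number of retained modes are arbitrary, and this is not the production implementation described in the paper (no thresholded-wavelet variant is shown).
    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def tfct_density(x, y, grid=(64, 64), keep=16):
        """Grid-based density estimate smoothed by truncating the 2-D cosine expansion."""
        hist, _, _ = np.histogram2d(x, y, bins=grid, density=True)
        c = dctn(hist, norm="ortho")
        mask = np.zeros_like(c)
        mask[:keep, :keep] = 1.0          # retain only the lowest-order modes
        return idctn(c * mask, norm="ortho")

    rho = tfct_density(np.random.randn(100000), np.random.randn(100000))
    ```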

  2. A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars

    Science.gov (United States)

    Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong

    2016-04-01

    A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. The neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range at northern high latitudes from summer to late autumn during 3 Martian years, which fills a gap in previous observations of the upper atmosphere of Mars. The simulations of the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data set (EDS1). The corrected simulations, with the same correction parameters as for EDS1, match the neutral densities derived from two other MGS/RS data sets (EDS2 and EDS3) very well. The neutral density derived from EDS3 shows a dust storm effect, which is in accord with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars measurement. The neutral density derived from the MGS/RS measurements can be used to validate Martian atmospheric models. The method presented in this study can be applied to other radio occultation measurements, such as the results of the Radio Science experiment on board MEX.

  3. Daniell method for power spectral density estimation in atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Labuda, Aleksander [Asylum Research an Oxford Instruments Company, Santa Barbara, California 93117 (United States)

    2016-03-15

    An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
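
    A brief illustration of the two estimators being compared, under their usual definitions: Daniell (one periodogram, smoothed by averaging adjacent frequency bins) versus Bartlett (averaging periodograms of non-overlapping segments, here via scipy's Welch routine with zero overlap and a boxcar window). Window handling and one-sided scaling details are simplified.
    ```python
    import numpy as np
    from scipy.signal import welch

    def daniell_psd(x, fs, m=8):
        """Daniell estimate: raw periodogram smoothed over 2m+1 adjacent bins."""
        n = len(x)
        X = np.fft.rfft(x - np.mean(x))
        pxx = np.abs(X) ** 2 / (fs * n)
        pxx[1:-1] *= 2.0                                   # one-sided scaling
        smooth = np.convolve(pxx, np.ones(2 * m + 1) / (2 * m + 1), mode="same")
        return np.fft.rfftfreq(n, 1.0 / fs), smooth

    def bartlett_psd(x, fs, nperseg=1024):
        """Bartlett estimate: average of periodograms of non-overlapping segments."""
        return welch(x, fs, window="boxcar", nperseg=nperseg, noverlap=0)
    ```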

  4. ANNz2 - Photometric redshift and probability density function estimation using machine learning methods

    CERN Document Server

    Sadeh, Iftach; Lahav, Ofer

    2015-01-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. Furthermore, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...

  5. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
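
    As a concrete instance of MoLC, a sketch for the ordinary gamma distribution, whose first two log-cumulants are ψ(α) + ln θ and ψ'(α); the sample quantities below are the empirical mean and variance of ln x, and the bracketing interval for the trigamma inversion is an assumption.
    ```python
    import numpy as np
    from scipy.special import digamma, polygamma
    from scipy.optimize import brentq

    def molc_gamma(x):
        """Method-of-log-cumulants estimate of gamma shape and scale."""
        logx = np.log(x)
        k1, k2 = logx.mean(), logx.var()
        alpha = brentq(lambda a: polygamma(1, a) - k2, 1e-3, 1e3)  # invert the trigamma function
        theta = np.exp(k1 - digamma(alpha))
        return alpha, theta

    a, t = molc_gamma(np.random.gamma(shape=3.0, scale=2.0, size=10000))
    ```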

  6. Divisive latent class modeling as a density estimation method for categorical data

    NARCIS (Netherlands)

    van der Palm, D.W.; van der Ark, L.A.; Vermunt, J.K.

    2016-01-01

    Traditionally, latent class (LC) analysis is used by applied researchers as a tool for identifying substantively meaningful clusters. More recently, LC models have also been used as a density estimation tool for categorical variables. We introduce a divisive LC (DLC) model as a density estimation tool

  7. METAPHOR: a machine-learning-based method for the probability density estimation of photometric redshifts

    Science.gov (United States)

    Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-02-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A plethora of methods have been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.

  8. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    Science.gov (United States)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  9. X-Ray Methods to Estimate Breast Density Content in Breast Tissue

    Science.gov (United States)

    Maraghechi, Borna

    This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous, cancer, and fat tissue and for 4.2 cm thick breast fat/fibrous phantoms were done. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter, 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≡ fibrous, lucite ≡ cancer and polyet ≡ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon-counting methods were applied to estimate the fat (polyet) content, using simulation and experimental data respectively. The two-basis-function method, which assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for the simulations. An energy-integrating method was compared to the two-basis-function method using experimental and simulation data. A slight advantage was observed for photon counting, whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous tissue within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types. The radon transforms were estimated via four energy integrating
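
    A minimal, idealised version of a two-basis-function decomposition of the kind mentioned above: with monoenergetic beams and Beer-Lambert attenuation, the measured log-attenuations at two energies give a 2x2 linear system for the two material thicknesses. The attenuation coefficients below are purely illustrative, not the values used in the work.
    ```python
    import numpy as np

    def two_basis_thicknesses(log_attenuation, mu):
        """Solve log_atten[k] = mu[k][0]*t0 + mu[k][1]*t1 for the thicknesses (t0, t1)."""
        return np.linalg.solve(np.asarray(mu, float), np.asarray(log_attenuation, float))

    mu = [[0.50, 0.80],   # cm^-1 for (material 0, material 1) at the low energy
          [0.30, 0.45]]   # cm^-1 at the high energy
    t0, t1 = two_basis_thicknesses([0.305, 0.180], mu)   # recovers t0 = 0.45 cm, t1 = 0.10 cm
    ```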

  10. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    Science.gov (United States)

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression trainings take O(N³) time and O(N²) space in their naive implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked up with kernel density estimation (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m³), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has performance comparable with the state-of-the-art methods and is effective for a wide range of kernel methods in achieving fast learning in large data sets.

  11. A new method for estimating the critical current density of a superconductor from its hysteresis loop

    Energy Technology Data Exchange (ETDEWEB)

    Lal, Ratan, E-mail: rlal_npl_3543@yahoo.i [Superconductivity Division, National Physical Laboratory, Council of Scientific and Industrial Research, Dr. K.S. Krishnan Road, New Delhi 110012 (India)

    2010-02-15

    The critical current density Jc of some superconducting samples, calculated on the basis of Bean's model, shows negative curvature at low magnetic field with a downward bending near H = 0. To avoid this problem, Kim's expression for the critical current density, Jc = k/(H0 + H), in which Jc has positive curvature for all H, has been employed by connecting the positive constants k and H0 with the features of the hysteresis loop of a superconductor. A relation between the full penetration field Hp and the magnetic field Hmin, at which the magnetization is minimum, is obtained from Kim's theory. Taking the value of Jc at H = Hp according to the actual loop width, as in Bean's theory, and at H = 0 according to an enhanced loop width due to the local internal field, values of k and H0 are obtained in terms of the magnetization values M+(-Hmin), M-(Hmin), M+(Hp) and M-(Hp). The resulting method of estimating Jc from the hysteresis loop turns out to be as simple as Bean's method.
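
    A small sketch of the parameter bookkeeping implied above: once Jc has been read off the loop width at H = 0 (enhanced width) and at H = Hp (actual width), the Kim form Jc = k/(H0 + H) is fixed by two equations. The geometry factor converting loop width to Jc is sample dependent and omitted here, and the example numbers are illustrative.
    ```python
    import numpy as np

    def kim_parameters(jc_zero, jc_hp, hp):
        """Solve jc_zero = k/H0 and jc_hp = k/(H0 + hp) for (k, H0)."""
        h0 = hp * jc_hp / (jc_zero - jc_hp)
        return jc_zero * h0, h0

    def jc_kim(h, k, h0):
        """Kim-model critical current density, positive curvature for all H."""
        return k / (h0 + np.asarray(h, float))

    k, h0 = kim_parameters(jc_zero=5e9, jc_hp=1e9, hp=2.0)   # illustrative A/m^2 and tesla values
    ```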

  12. Comparison of density estimators. [Estimation of probability density functions]

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)

  13. Application of Density Estimation Methods to Datasets Collected From a Glider

    Science.gov (United States)

    2015-09-30

    ... buoyancy. The methodology employed in this study to estimate the population density of marine mammals is based on the works of Zimmer et al. (2008), Marques ... estimation modalities (Thomas and Marques, 2012), such as individual or group counting. In this sense, bearings to received sounds on both hydrophones will ... the sea trial. [Figure 2 (left) of the report shows the area of the REP14-MED sea trial (red box) in the context of the Western Mediterranean Sea.]

  14. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    Science.gov (United States)

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their ability to estimate the density of snags in three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest-neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a quasi-Poisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogeneous conditions on private forest lands, snag density by decay class was predicted with higher accuracy on private forest lands than on public lands, while the presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags there. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification

  15. Remotely sensed estimation of forest canopy density: A comparison of the performance of four methods

    NARCIS (Netherlands)

    Joshi, C.; Leeuw, de J.; Skidmore, A.K.; Duren, van I.C.; Oosten, van H.

    2006-01-01

    In recent years, a number of alternative methods have been proposed to predict forest canopy density from remotely sensed data. To date, however, it remains difficult to decide which method to use, since their relative performance has never been evaluated. In this study the performance of: (1) an ar

  16. An empirical method for estimating probability density functions of gridded daily minimum and maximum temperature

    Science.gov (United States)

    Lussana, C.

    2013-04-01

    The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs), with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure has been carried out using the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution, but it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events is properly defined, it is possible to deliver the information to users on a local scale in a straightforward and concise way, for example: TX extremely cold/hot or TN extremely cold/hot.

  17. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation.

    Directory of Open Access Journals (Sweden)

    Md Nabiul Islam Khan

    Full Text Available In the Point-Centred Quarter Method (PCQM), the mean distance to the first nearest plant in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher-order ones (PCQM2 and PCQM3), which use the distance to the second and third nearest plants, respectively, show discrepancies. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and in empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in the accuracy of density estimation, i.e. the higher-order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R²) and not 12N/(π ∑ R²), of PCQM2 is 4(8N − 1)/(π ∑ R²) and not 28N/(π ∑ R²), and of PCQM3 is 4(12N − 1)/(π ∑ R²) and not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator and over 50 sample points would be sufficient to provide accurate density estimates. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
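
    The corrected PCQM estimator quoted above, as a short function; the distances are assumed to be the 4N measured distances (one plant per quadrant at each of the N sample points), with the order selecting whether first-, second- or third-nearest plants were measured.
    ```python
    import numpy as np

    def pcqm_density(distances, order=1):
        """Corrected PCQM estimator of order 1, 2 or 3: 4(4*order*N - 1) / (pi * sum(R^2))."""
        r = np.asarray(distances, float)
        n = r.size // 4                       # number of sample points (4 quadrants each)
        return 4.0 * (4.0 * order * n - 1.0) / (np.pi * np.sum(r ** 2))
    ```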

  18. Estimating sap flux densities in date palm trees using the heat dissipation method and weighing lysimeters.

    Science.gov (United States)

    Sperling, Or; Shapira, Or; Cohen, Shabtai; Tripler, Effi; Schwartz, Amnon; Lazarovitch, Naftali

    2012-09-01

    In a world of diminishing water reservoirs and a rising demand for food, the practice and development of water stress indicators and sensors are progressing rapidly. The heat dissipation method, originally established by Granier, is herein applied and modified to enable sap flow measurements in date palm trees in the southern Arava desert of Israel. A long and tough sensor was constructed to withstand insertion into the date palm's hard exterior stem. This stem is wide and fibrous, surrounded by an even tougher external non-conducting layer of dead leaf bases. Furthermore, since the date palm is a monocot, water flow does not necessarily occur through the outer part of the stem, as it does in most trees. Therefore, it is highly important to investigate the variations of the sap flux densities and determine the preferable location for sap flow sensing within the stem. Once installed in fully grown date palm trees stationed on weighing lysimeters, sap flow as measured by the modified sensors was compared with the actual transpiration. Sap flow was found to be well correlated with transpiration, especially when using a recent calibration equation rather than the original Granier equation. Furthermore, including the axial variability of the sap flux densities was found to be highly important for accurate assessments of transpiration by sap flow measurements. The sensors indicated no transpiration at night, a steep increase in transpiration from 06:00 to 09:00, maximum transpiration at 12:00, followed by a moderate reduction until 08:00, when transpiration ceased. These results were reinforced by the lysimeters' output. Reduced sap flux densities were detected at the stem's mantle compared with its center. These results were reinforced by mechanistic measurements of the stem's specific hydraulic conductivity. Variance on the vertical axis was also observed, indicating an accelerated flow towards the upper parts of the tree and raising a hypothesis concerning dehydrating

  19. Varying kernel density estimation on ℝ+

    Science.gov (United States)

    Mnatsakanov, Robert; Sarkisian, Khachatur

    2015-01-01

    In this article a new nonparametric density estimator based on the sequence of asymmetric kernels is proposed. This method is natural when estimating an unknown density function of a positive random variable. The rates of Mean Squared Error, Mean Integrated Squared Error, and the L1-consistency are investigated. Simulation studies are conducted to compare a new estimator and its modified version with traditional kernel density construction. PMID:26740729
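
    The abstract does not name the kernel family; as one concrete example of an asymmetric-kernel estimator on the positive half-line, here is a gamma-kernel density estimate in the spirit of Chen (2000), with the smoothing parameter b chosen arbitrarily.
    ```python
    import numpy as np
    from scipy import stats

    def gamma_kernel_kde(x_grid, samples, b=0.1):
        """f(x) ≈ mean over samples of a gamma pdf with shape x/b + 1 and scale b."""
        est = np.empty(len(x_grid))
        for i, x in enumerate(x_grid):
            est[i] = stats.gamma.pdf(samples, a=x / b + 1.0, scale=b).mean()
        return est

    x = np.linspace(0.01, 5, 100)
    f_hat = gamma_kernel_kde(x, np.random.exponential(scale=1.0, size=2000))
    ```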

  20. METRIC CHARACTERISTICS OF VARIOUS METHODS FOR NUMERICAL DENSITY ESTIMATION IN TRANSMISSION LIGHT MICROSCOPY – A COMPUTER SIMULATION

    Directory of Open Access Journals (Sweden)

    Miroslav Kališnik

    2011-05-01

    Full Text Available In the introduction, the evolution of methods for numerical density estimation of particles is briefly presented. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods, and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations, a model of randomly distributed equal spheres with maximal contrast against the surroundings has been used. According to our computer simulation, all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as the histotechnical and counting procedures necessary for performing the individual counting methods. However, it is evident that not all practical problems can be solved efficiently with models.

  1. Core Power Control of the fast nuclear reactors with estimation of the delayed neutron precursor density using Sliding Mode method

    Energy Technology Data Exchange (ETDEWEB)

    Ansarifar, G.R., E-mail: ghr.ansarifar@ast.ui.ac.ir; Nasrabadi, M.N.; Hassanvand, R.

    2016-01-15

    Highlights: • We present an S.M.C. system based on an S.M.O. for control of a fast reactor's power. • An S.M.O. has been developed to estimate the density of the delayed neutron precursor. • The stability analysis is given by means of the Lyapunov approach. • The control system is guaranteed to be stable within a large range. • A comparison between the S.M.C. and a conventional PID controller has been carried out. - Abstract: In this paper, a nonlinear controller using the sliding mode method, which is a robust nonlinear control technique, is designed to control a fast nuclear reactor. The reactor core is simulated based on the point kinetics equations and one delayed neutron group. Considering the limitations of delayed neutron precursor density measurement, a sliding mode observer is designed to estimate it, and finally a sliding mode control based on the sliding mode observer is presented. The stability analysis is given by means of the Lyapunov approach, so the control system is guaranteed to be stable within a large range. Sliding Mode Control (SMC) is a robust nonlinear method which has several advantages, such as robustness against matched external disturbances and parameter uncertainties. The employed method is easy to implement in practical applications, and moreover the sliding mode control exhibits the desired dynamic properties during the entire output-tracking process, independent of perturbations. Simulation results are presented to demonstrate the effectiveness of the proposed controller in terms of performance, robustness and stability.
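
    For context, a sketch of the one-delayed-group point-kinetics plant model mentioned above (the sliding-mode observer and controller themselves are not reproduced); the kinetic parameters and the reactivity step are illustrative only.
    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    beta, Lam, lam = 0.0065, 1e-5, 0.08   # delayed fraction, generation time (s), decay const (1/s)

    def point_kinetics(t, y, rho):
        n, c = y                                      # relative power, precursor density
        dn = (rho(t) - beta) / Lam * n + lam * c
        dc = beta / Lam * n - lam * c
        return [dn, dc]

    rho = lambda t: 1e-3 if t > 1.0 else 0.0          # small positive reactivity step at t = 1 s
    y0 = [1.0, beta / (lam * Lam)]                    # equilibrium initial condition
    sol = solve_ivp(point_kinetics, (0.0, 20.0), y0, args=(rho,), method="LSODA", max_step=0.01)
    ```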

  2. Evaluation of Sampling Methods and Development of Sample Plans for Estimating Predator Densities in Cotton

    Science.gov (United States)

    The cost-reliability of five sampling methods (visual search, drop cloth, beat bucket, shake bucket and sweep net) was determined for predatory arthropods on cotton plants. The beat bucket sample method was the most cost-reliable while the visual sample method was the least cost-reliable. The beat ...

  3. Estimation of critical current density and grain connectivity in superconducting MgB 2 bulk using Campbell’s method

    Science.gov (United States)

    Ni, B.; Morita, Y.; Liu, Z.; Liu, C.; Himeki, K.; Otabe, E. S.; Kiuchi, M.; Matsushita, T.

    2008-09-01

    Many recent reports on the critical current density (Jc) in superconducting MgB2 bulks indicated that improving the grain connectivity is important, since the obtained Jc values were generally much lower than those in other metallic superconductors, and this was ascribed to the poor connectivity between grains in polycrystalline MgB2. In this study, we focused on the estimation of the global critical current density, super-current path, grain connectivity and their relationships with the fault volume fraction in MgB2 bulks prepared by a modified PIT (powder in tube) method. Campbell’s method was applied for the purpose of obtaining the penetrating AC flux profile and the characteristic of AC magnetic field vs. penetration depth from the sample’s surface. A computer simulation of the penetrating AC flux profile in MgB2 bulks with randomly distributed voids, oxidized grains and other faults was also carried out. Jc obtained by Campbell’s method turned out to be smaller than that obtained from the SQUID measurement, implying that the global super-current was reduced by the existence of various faults and the lack of electrical connectivity. It was verified that the relationship between the global critical current characteristics and the faults contained in MgB2 samples can be quantitatively clarified by comparing the simulated critical current densities and other factors with the experimental results.
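
    For context only (not stated in the record): the SQUID-based Jc value that Campbell-method estimates are usually compared against is commonly extracted from magnetization hysteresis loops via the Bean critical-state model. A frequently quoted form for a rectangular cross-section a × b (a ≤ b), in CGS-practical units, is

```latex
J_c \;=\; \frac{20\,\Delta M}{a\left(1 - \dfrac{a}{3b}\right)},
\qquad [\Delta M] = \mathrm{emu\,cm^{-3}},\quad [a],[b] = \mathrm{cm},\quad [J_c] = \mathrm{A\,cm^{-2}}.
```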

  4. Parallel Multiscale Autoregressive Density Estimation

    OpenAIRE

    Reed, Scott; Oord, Aäron van den; Kalchbrenner, Nal; Colmenarejo, Sergio Gómez; Wang, Ziyu; Belov, Dan; de Freitas, Nando

    2017-01-01

    PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density e...

  5. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

    Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is the focus of the present paper, by virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-MW wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels is investigated through the scheme of extreme value distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...

  6. Methods for estimating the density of Elaphostrongylus rangiferi Mitskevich (Nematoda, Metastrongyloidea) larvae in faeces from reindeer, Rangifer tarandus L.

    Directory of Open Access Journals (Sweden)

    Odd Halvorsen

    1983-05-01

    Full Text Available A method for estimating the density of Elaphostrongylus rangiferi larvae in reindeer faeces that have been deep frozen is described. The method involves the use of an inverted microscope with plankton counting chambers. Statistical data on the efficiency and sensitivity of the method are given. With fresh faeces, the results obtained with the method were not significantly different from those obtained with the Baermann technique. With faeces that had been stored in deep freeze, the method detected on average 30 per cent more larvae than the Baermann technique. Abstract in Norwegian / Sammendrag (translated): Methods for estimating the density of brainworm larvae in faeces from reindeer. A method for estimating the density of brainworm larvae in faeces that have been deep frozen is described. The method makes use of an inverted microscope with plankton counting chambers. Statistical data on the efficiency and sensitivity of the method are given. For fresh faeces, the results obtained with the method did not differ from those obtained with the Baermann method. For faeces that had been stored deep frozen, the method yielded on average 30 per cent more larvae than the Baermann method.

  7. A Novel Method for Estimation of Femoral Neck Bone Mineral Density Using Forearm Images from Peripheral Cone Beam Computed Tomography

    Directory of Open Access Journals (Sweden)

    Kwanmoon Jeong

    2016-04-01

    Full Text Available The main goal of osteoporosis treatment is prevention of osteoporosis-induced bone fracture. Dual-energy X-ray absorptiometry (DXA) and quantitative computed tomographic imaging (QCT) are widely used for assessment of bone mineral density (BMD). However, they have limitations in patients with special conditions. This study evaluated a method for diagnosis of osteoporosis using peripheral cone beam computed tomography (CBCT) to estimate BMD. We investigated the correlation between the ratio of cortical to total bone area of the forearm and femoral neck BMD. Based on the correlation, we established a linear transformation between the ratio and femoral neck BMD. We obtained forearm images using CBCT and femoral neck BMDs using dual-energy X-ray absorptiometry (DXA) for 23 subjects. We first calculated the ratio of the cortical to the total bone area in the forearm from the CBCT images and investigated its relationship with the femoral neck BMDs obtained from DXA. Based on this relationship, we further investigated the optimal forearm region providing the highest correlation coefficient. We used the optimized forearm region to establish a linear transformation for estimating femoral neck BMD from the calculated ratio. We observed a correlation of r = 0.857 (root mean square error = 0.056435 g/cm2; mean absolute percentage error = 4.5105%) between femoral neck BMD and the ratio of cortical to total bone area. The strongest correlation was observed for the average ratios of the mid-shaft regions of the ulna and radius. Our results suggest that femoral neck BMD can be estimated from forearm CBCT images and may be useful for screening osteoporosis, with patients in a convenient sitting position. We believe that peripheral CBCT image-based BMD estimation may have significant preventative value for early osteoporosis treatment and management.
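
    A minimal sketch of the calibration step described above: fit a linear transformation from the cortical-to-total bone-area ratio to DXA femoral neck BMD. The arrays below are hypothetical placeholders, not the study's 23-subject data.

```python
import numpy as np

# Hypothetical example data: cortical/total bone-area ratio from forearm CBCT and
# femoral neck BMD from DXA (g/cm^2). Replace with real measurements.
ratio = np.array([0.42, 0.47, 0.51, 0.55, 0.60, 0.63, 0.68])
bmd_dxa = np.array([0.71, 0.78, 0.82, 0.88, 0.95, 0.99, 1.06])

slope, intercept = np.polyfit(ratio, bmd_dxa, deg=1)      # BMD ~ slope*ratio + intercept
bmd_pred = slope * ratio + intercept

r = np.corrcoef(ratio, bmd_dxa)[0, 1]
rmse = np.sqrt(np.mean((bmd_pred - bmd_dxa) ** 2))
print(f"r = {r:.3f}, RMSE = {rmse:.4f} g/cm^2, BMD ≈ {slope:.2f}*ratio {intercept:+.2f}")
```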

  8. Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation.

    Science.gov (United States)

    Yamane, Ikko; Sasaki, Hiroaki; Sugiyama, Masashi

    2016-07-01

    Log-density gradient estimation is a fundamental statistical problem and possesses various practical applications such as clustering and measuring nongaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering.

  9. Parametric Return Density Estimation for Reinforcement Learning

    CERN Document Server

    Morimura, Tetsuro; Kashima, Hisashi; Hachiya, Hirotaka; Tanaka, Toshiyuki

    2012-01-01

    Most conventional Reinforcement Learning (RL) algorithms aim to optimize decision-making rules in terms of the expected returns. However, especially for risk management purposes, other risk-sensitive criteria such as the value-at-risk or the expected shortfall are sometimes preferred in real applications. Here, we describe a parametric method for estimating the density of the returns, which allows us to handle various criteria in a unified manner. We first extend the Bellman equation for the conditional expected return to cover a conditional probability density of the returns. Then we derive an extension of the TD-learning algorithm for estimating the return densities in an unknown environment. As test instances, several parametric density estimation algorithms are presented for the Gaussian, Laplace, and skewed Laplace distributions. We show that these algorithms lead to risk-sensitive as well as robust RL paradigms through numerical experiments.

  10. Comparing spatial capture–recapture modeling and nest count methods to estimate orangutan densities in the Wehea Forest, East Kalimantan, Indonesia

    Science.gov (United States)

    Spehar, Stephanie N.; Loken, Brent; Rayadin, Yaya; Royle, J. Andrew

    2015-01-01

    Accurate information on the density and abundance of animal populations is essential for understanding species' ecology and for conservation planning, but is difficult to obtain. The endangered orangutan (Pongo spp.) is an example; due to its elusive behavior and low densities, researchers have relied on methods that convert nest counts to orangutan densities and require substantial effort for reliable results. Camera trapping and spatial capture–recapture (SCR) models could provide an alternative but have not been used for primates. We compared density estimates calculated using the two methods for orangutans in the Wehea Forest, East Kalimantan, Indonesia. Camera trapping/SCR modeling produced a density estimate of 0.16 ± 0.09–0.29 indiv/km2, and nest counts produced a density estimate of 1.05 ± 0.18–6.01 indiv/km2. The large confidence interval of the nest count estimate is probably due to high variance in nest encounter rates, indicating the need for larger sample size and the substantial effort required to produce reliable results using this method. The SCR estimate produced a very low density estimate and had a narrower but still fairly wide confidence interval. This was likely due to unmodeled heterogeneity and small sample size, specifically a low number of individual captures and recaptures. We propose methodological fixes that could address these issues and improve precision. A comparison of the overall costs and benefits of the two methods suggests that camera trapping/SCR modeling can potentially be a useful tool for assessing the densities of orangutans and other elusive primates, and warrant further investigation to determine broad applicability and methodological adjustments needed.
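
    For readers unfamiliar with the nest-count approach mentioned above, the conversion from nest encounters to orangutan density is usually a simple product of correction factors. The function below is an illustrative sketch; the parameter values are assumptions, not those used in the study.

```python
# Conventional nest-to-individual conversion for line-transect surveys:
#   D = nest density / (p * r * t)
# p = proportion of nest builders, r = nest production rate (nests/individual/day),
# t = mean nest decay time (days). All default values below are illustrative only.
def orangutan_density(n_nests, transect_km, half_width_km, p=0.9, r=1.0, t=250.0):
    area_km2 = 2.0 * half_width_km * transect_km      # effectively surveyed strip area
    nest_density = n_nests / area_km2                  # nests per km^2
    return nest_density / (p * r * t)                  # individuals per km^2

print(f"{orangutan_density(n_nests=120, transect_km=10.0, half_width_km=0.02):.2f} indiv/km^2")
```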

  11. Application of the Vertex Exchange Method to estimate a semi-parametric mixture model for the MIC density of Escherichia coli isolates tested for susceptibility against ampicillin.

    Science.gov (United States)

    Jaspers, Stijn; Verbeke, Geert; Böhning, Dankmar; Aerts, Marc

    2016-01-01

    In the last decades, considerable attention has been paid to the collection of antimicrobial resistance data, with the aim of monitoring non-wild-type isolates. This monitoring is performed based on minimum inhibition concentration (MIC) values, which are collected through dilution experiments. We present a semi-parametric mixture model to estimate the entire MIC density on the continuous scale. The parametric first component is extended with a non-parametric second component and a new back-fitting algorithm, based on the Vertex Exchange Method, is proposed. Our data example shows how to estimate the MIC density for Escherichia coli tested for ampicillin and how to use this estimate for model-based classification. A simulation study was performed, showing the promising behavior of the new method, both in terms of density estimation as well as classification.

  12. ESTIMATING DENSITY OF EDIBLE DORMOUSE GLIS GLIS (L.) IN FOREST HABITATS: WHICH METHOD SHOULD WE CHOOSE WHEN THE MONITORING IS DONE IN A PRIMEVAL FOREST?

    Directory of Open Access Journals (Sweden)

    Ioan DUMA

    2008-01-01

    Full Text Available The present study aims to determine the density of the edible dormouse in beech forests of different ages and at different altitudes in the Semenic-Cheile Carasului National Park. The density was estimated using two methods: the already well-known dormouse nestboxes and a census method. The results are analysed and compared in order to provide the best solutions for dormouse monitoring in the National Parks of Romania, which suffer from a chronic lack of personnel and resources.

  13. A fast tree-based method for estimating column densities in Adaptive Mesh Refinement codes Influence of UV radiation field on the structure of molecular clouds

    CERN Document Server

    Valdivia, Valeska

    2014-01-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims. Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods. We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results. We find that the accuracy for the extinction of the tree-based method is better than 10%, while the ...

  14. Estimating stellar mean density through seismic inversions

    CERN Document Server

    Reese, D R; Goupil, M J; Thompson, M J; Deheuvels, S

    2012-01-01

    Determining the mass of stars is crucial both to improving stellar evolution theory and to characterising exoplanetary systems. Asteroseismology offers a promising way to estimate stellar mean density. When combined with accurate radii determinations, such as is expected from GAIA, this yields accurate stellar masses. The main difficulty is finding the best way to extract the mean density from a set of observed frequencies. We seek to establish a new method for estimating stellar mean density, which combines the simplicity of a scaling law while providing the accuracy of an inversion technique. We provide a framework in which to construct and evaluate kernel-based linear inversions which yield directly the mean density of a star. We then describe three different inversion techniques (SOLA and two scaling laws) and apply them to the sun, several test cases and three stars. The SOLA approach and the scaling law based on the surface correcting technique described by Kjeldsen et al. (2008) yield comparable result...
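
    The simplest of the approaches compared above is the scaling law, which relates the mean density directly to the large frequency separation. A minimal sketch, with approximate solar reference values and a made-up observed value:

```python
DNU_SUN_UHZ = 135.1    # approximate solar large frequency separation, in microhertz
RHO_SUN_CGS = 1.408    # approximate solar mean density, g/cm^3

def mean_density_from_dnu(dnu_uhz):
    """Scaling-law estimate: rho_bar ~ rho_sun * (Delta_nu / Delta_nu_sun)**2."""
    return RHO_SUN_CGS * (dnu_uhz / DNU_SUN_UHZ) ** 2

print(f"{mean_density_from_dnu(60.0):.3f} g/cm^3")   # e.g. a subgiant-like Delta_nu
```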

  15. Anisotropic Density Estimation in Global Illumination

    DEFF Research Database (Denmark)

    Schjøth, Lars

    2009-01-01

    Density estimation employed in multi-pass global illumination algorithms gives rise to a trade-off problem between bias and noise. The problem is most evident as blurring of strong illumination features. This thesis addresses the problem, presenting four methods that reduce both noise and bias in estimates. Good results are obtained by the use of anisotropic filtering. Two methods handle the most common case, filtering illumination reflected from object surfaces; one method extends filtering to the temporal domain and one performs filtering on illumination from participating media...

  16. Sampling method evaluation and empirical model fitting for count data to estimate densities of Oligonychus perseae (Acari: Tetranychidae) on 'Hass' avocado leaves in southern California.

    Science.gov (United States)

    Lara, Jesús R; Saremi, Naseem T; Castillo, Martin J; Hoddle, Mark S

    2016-04-01

    Oligonychus perseae (Acari: Tetranychidae) is an important foliar spider mite pest of 'Hass' avocados in several commercial production areas of the world. In California (USA), O. perseae densities in orchards can exceed 100 mites per leaf, and this makes enumerative counting prohibitive for field sampling. In this study, partial enumerative mite counts along half a vein on an avocado leaf, an industry-recommended practice known as the "half-vein method", were evaluated for accuracy using four data sets with a combined total of more than 485,913 motile O. perseae counted on 3849 leaves. Sampling simulations indicated that the half-vein method underestimated mite densities in a range of 15-60%. This problem may adversely affect management of this pest in orchards and potentially compromise the results of field research requiring accurate mite density estimation. To address this limitation, four negative binomial regression models were fit to count data in an attempt to rescue the half-vein method for estimating mite densities. These models were incorporated into sampling plans and evaluated for their ability to estimate mite densities on whole leaves within 30-tree blocks of avocados. Model 3, a revised version of the original half-vein model, showed improvement in providing reliable estimates of O. perseae densities for making assessments of general leaf infestation densities across orchards in southern California. The implications of these results for customizing the revised half-vein method as a potential field sampling tool and for experimental research in avocado production in California are discussed.

  17. The method of separation for evolutionary spectral density estimation of multi-variate and multi-dimensional non-stationary stochastic processes

    KAUST Repository

    Schillinger, Dominik

    2013-07-01

    The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.

  18. Equatorial F-region plasma density estimation with incoherent scatter radar using a transverse-mode differential-phase method

    Science.gov (United States)

    Feng, Zhaomei

    This dissertation presents a novel data acquisition and analysis method for the Jicamarca incoherent scatter radar to measure high-precision drifts and ionospheric density simultaneously at F-region heights. Since high-precision drift measurements favor radar return signals with the narrowest possible frequency spectra, Jicamarca drifts observations are conducted using the linear-polarized transverse radar beams. Transverse-beam returns are collected using an orthogonal pair of linear-polarized antennas, and the average power as well as phase difference of the antenna outputs are fitted to appropriate data models developed based on the incoherent scatter theory and the magneto-ionic theory. The crude differential-phase model when B⃗o is characterized in terms of straight line fields is applied to the January 2000 data. The most complete differential-phase model, which takes into account the misaligned angle between the dipole axes and geomagnetic northeast and southeast directions, as well as the radar beam width and variation of magnetic fields, is applied to the January 2000 data and June 2002 data. We present and compare the inversion results obtained with different versions of the data models and conclude that the geometrical details have only a minor impact on the inversion. We also find that the differential-phase method works better for the 15-min integrated January 2000 data than 5-min integrated June 2002 data since the former has the bigger densities, larger SNR of the backscattered signals, and more usable phase data. Our inversion results show reasonable agreement with the ionosonde data. The full correlation method is formulated and applied to the June 2002 data. Compared to the differential-phase method, this method is different in the sense that it utilizes the real and imaginary parts of the cross-correlation of orthogonal antenna outputs at the high altitudes where SNR is low and the off-diagonal elements of the covariance matrix of measurement

  19. Bird population density estimated from acoustic signals

    Science.gov (United States)

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha-1, SE 0.03 ha-1) were consistent with estimates of the adult male population density from mist-netting (0.36 ha-1, SE 0.12 ha-1). The fitted model predicts sound attenuation of 0.11 dB m-1 (SE 0.01 dB m-1) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant

  20. Kernel current source density method.

    Science.gov (United States)

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  1. Density estimation from local structure

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2009-11-01

    Full Text Available Mixture Model (GMM) density function of the data and the log-likelihood scores are compared to the scores of a GMM trained with the expectation maximization (EM) algorithm on 5 real-world classification datasets (from the UCI collection). They show...
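
    A minimal sketch of the baseline comparison described in this record: fit a Gaussian mixture model by expectation maximization and score held-out data by its average log-likelihood. Synthetic two-dimensional data stand in for the UCI datasets.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc=(0.0, 0.0), scale=0.5, size=(500, 2)),
                  rng.normal(loc=(3.0, 3.0), scale=1.0, size=(500, 2))])
rng.shuffle(data)
train, test = data[:800], data[800:]

# EM-trained GMM density estimate; score() returns the mean log-likelihood per sample.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(train)
print(f"held-out mean log-likelihood: {gmm.score(test):.3f}")
```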

  2. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    Science.gov (United States)

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.

  3. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds

    Science.gov (United States)

    Valdivia, Valeska; Hennebelle, Patrick

    2014-11-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much more. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations since no communication is needed between CPUs when using a fully threaded tree. It is then suitable to parallel computing. We show that the screening for far UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We

  4. Density Estimation Trees in High Energy Physics

    CERN Document Server

    Anderlini, Lucio

    2015-01-01

    Density Estimation Trees can play an important role in exploratory data analysis for multidimensional, multi-modal data models of large samples. I briefly discuss the algorithm, a self-optimization technique based on kernel density estimation, and some applications in High Energy Physics.

  5. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    Science.gov (United States)

    2015-09-30

    ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over...develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse

  6. 1D Current Source Density (CSD) Estimation in Inverse Theory: A Unified Framework for Higher-Order Spectral Regularization of Quadrature and Expansion-Type CSD Methods.

    Science.gov (United States)

    Kropf, Pascal; Shmuel, Amir

    2016-07-01

    Estimation of current source density (CSD) from the low-frequency part of extracellular electric potential recordings is an unstable linear inverse problem. To make the estimation possible in an experimental setting where recordings are contaminated with noise, it is necessary to stabilize the inversion. Here we present a unified framework for zero- and higher-order singular-value-decomposition (SVD)-based spectral regularization of 1D (linear) CSD estimation from local field potentials. The framework is based on two general approaches commonly employed for solving inverse problems: quadrature and basis function expansion. We first show that both inverse CSD (iCSD) and kernel CSD (kCSD) fall into the category of basis function expansion methods. We then use these general categories to introduce two new estimation methods, quadrature CSD (qCSD), based on discretizing the CSD integral equation with a chosen quadrature rule, and representer CSD (rCSD), an even-determined basis function expansion method that uses the problem's data kernels (representers) as basis functions. To determine the best candidate methods to use in the analysis of experimental data, we compared the different methods on simulations under three regularization schemes (Tikhonov, tSVD, and dSVD), three regularization parameter selection methods (NCP, L-curve, and GCV), and seven different a priori spatial smoothness constraints on the CSD distribution. This resulted in a comparison of 531 estimation schemes. We evaluated the estimation schemes according to their source reconstruction accuracy by testing them using different simulated noise levels, lateral source diameters, and CSD depth profiles. We found that ranking schemes according to the average error over all tested conditions results in a reproducible ranking, where the top schemes are found to perform well in the majority of tested conditions. However, there is no single best estimation scheme that outperforms all others under all tested

  7. Standardization of enterococci density estimates by EPA qPCR methods and comparison of beach action value exceedances in river waters with culture methods

    Science.gov (United States)

    The U.S. EPA has published recommendations for calibrator cell equivalent (CCE) densities of enterococci in recreational waters determined by a qPCR method in its 2012 Recreational Water Quality Criteria (RWQC). The CCE quantification unit stems from the calibration model used to ...

  8. Methods for age estimation

    Directory of Open Access Journals (Sweden)

    D. Sümeyra Demirkıran

    2014-03-01

    Full Text Available The concept of age estimation plays an important role both in civil law and in the regulation of criminal behaviour. In forensic medicine, age estimation is performed for individual requests as well as at the request of the court. This study aims to compile the methods of age estimation and to make recommendations for solving the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. To estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the "Adli Tıpta Yaş Tayini (ATYT)" books are used. According to the forensic age estimations described in the ATYT book, bone age is found to be on average 2 years older than chronological age, especially in puberty. For age estimation from teeth, the Demirjian method is used. Over time, different methods have been developed by modifying the Demirjian method; however, no fully accurate method has been found. Histopathological studies have been performed on bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronological age. Important ethical and legal issues arise with current age estimation methods, especially for teenagers. It is therefore necessary to prepare atlases of bone age appropriate to our society by collecting the findings of studies in Turkey. Another recommendation is that courts pay particular attention to age-raising trials of teenage women and give special emphasis to birth and population records.

  9. Generalized Agile Estimation Method

    Directory of Open Access Journals (Sweden)

    Shilpa Bahlerao

    2011-01-01

    Full Text Available The agile cost estimation process always offers research prospects due to the lack of algorithmic approaches for estimating cost, size and duration. The existing algorithmic approach, the Constructive Agile Estimation Algorithm (CAEA), is an iterative estimation method that incorporates various vital factors affecting the estimates of a project. This method has many advantages but at the same time some limitations, which may be due to factors such as the number of vital factors and the uncertainty involved in agile projects. A generalized agile estimation method, however, may generate realistic estimates and eliminate the need for experts. In this paper, we propose the iterative Generalized Estimation Method (GEM) and present an algorithm based on it for agile projects, with case studies. The GEM-based algorithm incorporates various project domain classes and vital factors with prioritization levels. Further, it incorporates an uncertainty factor to quantify project risk when estimating cost, size and duration. It also gives project managers the flexibility to decide on the number of vital factors, the uncertainty level and the project domains, thereby maintaining agility.

  10. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  11. Breast density estimation from high spectral and spatial resolution MRI.

    Science.gov (United States)

    Li, Hui; Weiss, William A; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M; Karczmar, Gregory S; Giger, Maryellen L

    2016-10-01

    A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists' breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 ([Formula: see text]) was obtained between left and right breast density estimations. An interclass correlation coefficient of 0.99 ([Formula: see text]) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 ([Formula: see text]) was observed between HiSS-based breast density estimations and radiologists' BI-RADS. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and low intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be potentially beneficial in programs requiring breast density such as in breast cancer risk assessment and monitoring effects of therapy.

  12. Bayesian mixture models for spectral density estimation

    OpenAIRE

    Cadonna, Annalisa

    2017-01-01

    We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal di...

  13. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km2, and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  14. Particle Size Estimation Based on Edge Density

    Institute of Scientific and Technical Information of China (English)

    WANG Wei-xing

    2005-01-01

    Given image sequences of closely packed particles, the underlying aim is to estimate diameters without explicit segmentation. In a way, this is similar to the task of counting objects without directly counting them. Such calculations may, for example, be useful for fast estimation of particle size in different application areas. The topic is the estimation of the average size (= average diameter) of packed particles from formulas involving edge density, where the edges are obtained from moment-based thresholding. An average shape factor, obtained for some frames from crude partial segmentation, is involved in the calculations. Measurement results from about 80 frames have been analyzed.
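
    A toy sketch of the kind of calculation implied above (the proportionality constant, formula and arrays here are assumptions for illustration, not the paper's calibrated values): the mean diameter is taken proportional to the particle area fraction divided by the edge density, with a shape factor obtained from crude partial segmentation of a few frames.

```python
import numpy as np

def mean_diameter(particle_mask, edge_map, shape_factor=4.0, pixel_size_mm=0.5):
    """Estimate average particle diameter from edge density, without segmentation.

    particle_mask -- binary image, 1 where particles cover the frame
    edge_map      -- binary edge image (e.g. from moment-based thresholding)
    shape_factor  -- assumed/calibrated constant relating area, perimeter and diameter
    """
    area_fraction = particle_mask.mean()   # fraction of the frame covered by particles
    edge_density = edge_map.mean()         # edge pixels per image pixel
    return shape_factor * area_fraction / edge_density * pixel_size_mm

# usage with made-up arrays: a fully packed frame with a regular grid of boundaries
mask = np.ones((100, 100))
edges = np.zeros((100, 100))
edges[::10, :] = 1
edges[:, ::10] = 1
print(f"estimated mean diameter: {mean_diameter(mask, edges):.1f} mm")
```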

  15. Application of a maximum entropy method to estimate the probability density function of nonlinear or chaotic behavior in structural health monitoring data

    Science.gov (United States)

    Livingston, Richard A.; Jin, Shuang

    2005-05-01

    Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
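
    As background for the record above, the classical maximum entropy step (without the paper's probabilistic scaling and fractal enhancements) can be sketched as follows: the density maximizing entropy under sample-moment constraints has the exponential-family form p(x) ∝ exp(-Σ λk x^k), and the multipliers are found by minimizing the convex dual. All data and settings below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.tanh(rng.normal(size=5000))                  # synthetic bounded "response" data
grid = np.linspace(-1.0, 1.0, 401)
dx = grid[1] - grid[0]
orders = np.arange(1, 5)                            # constrain moments x^1 .. x^4
target = np.array([np.mean(x ** k) for k in orders])

def dual(lmbda):
    # Convex dual: log partition function plus lambda . target moments.
    expo = grid[:, None] ** orders @ lmbda
    z = np.sum(np.exp(-expo)) * dx
    return np.log(z) + lmbda @ target

res = minimize(dual, np.zeros(len(orders)), method="BFGS")
pdf = np.exp(-(grid[:, None] ** orders @ res.x))
pdf /= np.sum(pdf) * dx                             # normalized maximum-entropy pdf on the grid
print("fitted multipliers:", np.round(res.x, 3))
```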

  16. Principal components analysis of Laplacian waveforms as a generic method for identifying ERP generator patterns: II. Adequacy of low-density estimates.

    Science.gov (United States)

    Kayser, Jürgen; Tenke, Craig E

    2006-02-01

    To evaluate the comparability of high- and low-density surface Laplacian estimates for determining ERP generator patterns of group data derived from a typical ERP sample size and paradigm. High-density ERP data (129 sites) recorded from 17 adults during tonal and phonetic oddball tasks were converted to a 10-20-system EEG montage (31 sites) using spherical spline interpolations. Current source density (CSD) waveforms were computed from the high- and low-density, but otherwise identical, ERPs, and correlated at corresponding locations. CSD data were submitted to separate covariance-based, unrestricted temporal PCAs (Varimax of covariance loadings) to identify and effectively summarize temporally and spatially overlapping CSD components. Solutions were compared by correlating factor loadings and scores, and by plotting ANOVA F statistics derived from corresponding high- and low-resolution factor scores using representative sites. High- and low-density CSD waveforms, PCA solutions, and F statistics were remarkably similar, yielding correlations of .9 91.6%). Low-density surface Laplacian estimates were shown to be accurate approximations of high-density CSDs at these locations, which adequately and quite sufficiently summarized group data. Moreover, reasonable approximations of many high-density scalp locations were obtained for group data from interpolations of low-density data. If group findings are the primary objective, as typical for cognitive ERP research, low-resolution CSD topographies may be as efficient, given the effective spatial smoothing when averaging across subjects and/or conditions. Conservative recommendations for restricting surface Laplacians to high-density recordings may not be appropriate for all ERP research applications, and should be re-evaluated considering objectives, costs, and benefits.

  17. Density Estimation in Several Populations With Uncertain Population Membership

    KAUST Repository

    Ma, Yanyuan

    2011-09-01

    We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study.

  18. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina, E-mail: despina.kontos@uphs.upenn.edu [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2013-12-15

    Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a
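
    A bare-bones sketch of the two generic ingredients named above (fuzzy C-means clustering of voxel intensities, followed by the |FGT| and FGT% computation). The atlas prior that refines the FCM likelihood map is not reproduced here, and all intensities, the voxel volume and the membership threshold are synthetic assumptions.

```python
import numpy as np

def fcm(values, n_clusters=2, m=2.0, n_iter=100):
    """Plain fuzzy C-means on a 1-D intensity feature (no atlas prior)."""
    centers = np.percentile(values, np.linspace(25, 75, n_clusters))
    for _ in range(n_iter):
        d = np.abs(values[:, None] - centers[None, :]) + 1e-12        # distances to centers
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                             # fuzzy memberships
        centers = (u ** m * values[:, None]).sum(0) / (u ** m).sum(0) # center update
    return u, centers

rng = np.random.default_rng(1)
voxels = np.concatenate([rng.normal(200, 20, 7000),    # synthetic "fat" voxel intensities
                         rng.normal(600, 40, 3000)])   # synthetic "fibroglandular" voxels (assumed brighter here)
u, centers = fcm(voxels)
dense = int(np.argmax(centers))                        # cluster taken to be fibroglandular tissue
voxel_vol_cm3 = 0.001                                  # assumed voxel volume
fgt_vol = np.sum(u[:, dense] > 0.5) * voxel_vol_cm3    # |FGT|: absolute fibroglandular volume
fgt_pct = 100.0 * fgt_vol / (voxels.size * voxel_vol_cm3)
print(f"|FGT| = {fgt_vol:.2f} cm^3, FGT% = {fgt_pct:.1f}%")
```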

  19. Unbiased risk estimation method for covariance estimation

    CERN Document Server

    Lescornel, Hélène; Chabriac, Claudie

    2011-01-01

    We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows us to select an estimator from a collection of models. We then present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.

  20. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  1. Density estimates of monarch butterflies overwintering in central Mexico.

    Science.gov (United States)

    Thogmartin, Wayne E; Diffendorfer, Jay E; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X; Semmens, Darius; Taylor, Orley R; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9-60.9 million ha-1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha-1 (95% CI [2.4-80.7] million ha-1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha-1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.
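
    The population-size arithmetic referred to above is simply occupied area times per-hectare density; the density figures below are the median and mean quoted in this record, and the 6 ha occupancy is the target mentioned in the abstract.

```python
MEDIAN_DENSITY_PER_HA = 21.1e6   # butterflies per hectare (median of the mixture distribution)
MEAN_DENSITY_PER_HA = 27.9e6     # butterflies per hectare (mean of the mixture distribution)

def population_size(occupied_ha, density_per_ha=MEDIAN_DENSITY_PER_HA):
    """Convert overwintering area occupancy (hectares) to an estimated population size."""
    return occupied_ha * density_per_ha

print(f"6 ha -> {population_size(6.0):.2e} butterflies (median density)")
print(f"6 ha -> {population_size(6.0, MEAN_DENSITY_PER_HA):.2e} butterflies (mean density)")
```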

  2. Density estimates of monarch butterflies overwintering in central Mexico

    Directory of Open Access Journals (Sweden)

    Wayne E. Thogmartin

    2017-04-01

    Full Text Available Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  3. Density estimates of monarch butterflies overwintering in central Mexico

    Science.gov (United States)

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  4. Estimating maritime snow density from seasonal climate variables

    Science.gov (United States)

    Bormann, K. J.; Evans, J. P.; Westra, S.; McCabe, M. F.; Painter, T. H.

    2013-12-01

    Snow density is a complex parameter that influences thermal, optical and mechanical snow properties and processes. Depth-integrated properties of snowpacks, including snow density, remain very difficult to obtain remotely. Observations of snow density are therefore limited to in-situ point locations. In maritime snowfields such as those in Australia and in parts of the western US, snow densification rates are enhanced and inter-annual variability is high compared to continental snow regions. In-situ snow observation networks in maritime climates often cannot characterise the variability in snowpack properties at spatial and temporal resolutions required for many modelling and observations-based applications. Regionalised density-time curves are commonly used to approximate snow densities over broad areas. However, these relationships have limited spatial applicability and do not allow for interannual variability in densification rates, which are important in maritime environments. Physically-based density models are relatively complex and rely on empirical algorithms derived from limited observations, which may not represent the variability observed in maritime snow. In this study, seasonal climate factors were used to estimate late season snow densities using multiple linear regressions. Daily snow density estimates were then obtained by projecting linearly to fresh snow densities at the start of the season. When applied spatially, the daily snow density fields compare well to in-situ observations across multiple sites in Australia, and provide a new method for extrapolating existing snow density datasets in maritime snow environments. While the relatively simple algorithm for estimating snow densities has been used in this study to constrain snowmelt rates in a temperature-index model, the estimates may also be used to incorporate variability in snow depth to snow water equivalent conversion.
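
    The regression-plus-projection idea described above can be sketched in a few lines: a multiple linear regression predicts the late-season snow density from seasonal climate variables, and daily values are obtained by interpolating linearly from an assumed fresh-snow density at the start of the season. The predictor names and all numbers below are placeholders, not values from the study.

```python
import numpy as np

# Hypothetical training data: seasonal climate predictors for several past
# seasons (columns: mean winter air temperature [deg C], total precipitation [mm])
# and the observed late-season snow density [kg m^-3]. Values are illustrative only.
X = np.array([[-3.1, 820.0], [-1.5, 640.0], [-4.2, 900.0], [-2.0, 700.0], [-0.8, 560.0]])
y = np.array([480.0, 520.0, 450.0, 500.0, 540.0])

# Multiple linear regression via ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def late_season_density(temp_c, precip_mm):
    """Predict late-season snow density from the seasonal climate variables."""
    return coef[0] + coef[1] * temp_c + coef[2] * precip_mm

# Daily densities by projecting linearly from a fresh-snow density at the
# start of the season to the predicted late-season value.
fresh_density = 120.0                 # kg m^-3, assumed start-of-season value
season_days = 150
rho_late = late_season_density(-2.5, 750.0)
daily_density = np.linspace(fresh_density, rho_late, season_days)

print(f"predicted late-season density: {rho_late:.0f} kg m^-3")
print(f"density on day 75: {daily_density[74]:.0f} kg m^-3")
```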

  5. SVM for density estimation and application to medical image segmentation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhao; ZHANG Su; ZHANG Chen-xi; CHEN Ya-zhu

    2006-01-01

    A method of medical image segmentation based on support vector machine (SVM) for density estimation is presented. We used this estimator to construct a prior model of the image intensity and curvature profile of the structure from training images. When segmenting a novel image similar to the training images, the narrow band level set technique is used. The higher-dimensional surface evolution metric is defined by the prior model instead of by an energy minimization function. This method offers several advantages. First, SVM for density estimation is consistent and its solution is sparse. Second, compared to traditional level set methods, this method incorporates shape information on the object to be segmented into the segmentation process. Segmentation results are demonstrated on synthetic images, MR images and ultrasonic images.

  6. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  7. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  8. Multivariate density estimation theory, practice, and visualization

    CERN Document Server

    Scott, David W

    2015-01-01

    David W. Scott, PhD, is Noah Harding Professor in the Department of Statistics at Rice University. The author of over 100 published articles, papers, and book chapters, Dr. Scott is also Fellow of the American Statistical Association (ASA) and the Institute of Mathematical Statistics. He is recipient of the ASA Founder's Award and the Army Wilks Award. His research interests include computational statistics, data visualization, and density estimation. Dr. Scott is also Coeditor of Wiley Interdisciplinary Reviews: Computational Statistics and previous Editor of the Journal of Computational and

  9. A morpho-density approach to estimating neural connectivity.

    Directory of Open Access Journals (Sweden)

    Michael P McAssey

    Full Text Available Neuronal signal integration and information processing in cortical neuronal networks critically depend on the organization of synaptic connectivity. Because of the challenges involved in measuring a large number of neurons, synaptic connectivity is difficult to determine experimentally. Current computational methods for estimating connectivity typically rely on the juxtaposition of experimentally available neurons and applying mathematical techniques to compute estimates of neural connectivity. However, since the number of available neurons is very limited, these connectivity estimates may be subject to large uncertainties. We use a morpho-density field approach applied to a vast ensemble of model-generated neurons. A morpho-density field (MDF) describes the distribution of neural mass in the space around the neural soma. The estimated axonal and dendritic MDFs are derived from 100,000 model neurons that are generated by a stochastic phenomenological model of neurite outgrowth. These MDFs are then used to estimate the connectivity between pairs of neurons as a function of their inter-soma displacement. Compared with other density-field methods, our approach to estimating synaptic connectivity uses fewer restricting assumptions and produces connectivity estimates with a lower standard deviation. An important requirement is that the model-generated neurons reflect accurately the morphology and variation in morphology of the experimental neurons used for optimizing the model parameters. As such, the method remains subject to the uncertainties caused by the limited number of neurons in the experimental data set and by the quality of the model and the assumptions used in creating the MDFs and in calculating the connectivity estimates. In summary, MDFs are a powerful tool for visualizing the spatial distribution of axonal and dendritic densities, for estimating the number of potential synapses between neurons with low standard deviation, and for obtaining

  10. Green's function based density estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kovesarki, Peter; Brock, Ian C.; Nuncio Quiroz, Adriana Elizabeth [Physikalisches Institut, Universitaet Bonn (Germany)

    2012-07-01

    A method was developed based on Green's function identities to estimate probability densities. This can be used for likelihood estimations and for binary classifications. It offers several advantages over neural networks, boosted decision trees and other regression-based classifiers. For example, it is less prone to overtraining, and it is much easier to combine several samples. Some capabilities are demonstrated using ATLAS data.

  11. Optimization of volumetric breast density estimation in digital mammograms

    NARCIS (Netherlands)

    Holland, K.; Gubern Merida, A.; Mann, R.M.; Karssemeijer, N.

    2017-01-01

    Fibroglandular tissue volume and percent density can be estimated in unprocessed mammograms using a physics-based method, which relies on an internal reference value representing the projection of fat only. However, pixels representing fat only may not be present in dense breasts, causing an

  12. Density Estimations in Laboratory Debris Flow Experiments

    Science.gov (United States)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Malcherek, Andreas; Fischer, Jan-Thomas; Pudasaini, Shiva P.

    2016-04-01

    Bulk density and its variation are important physical quantities for estimating the solid-liquid fractions in two-phase debris flows. Here we present mass and flow depth measurements for experiments performed in a large-scale laboratory setup. Once the mixture is released and moves down the inclined channel, the measurements allow us to determine the bulk density evolution throughout the debris flow. Flow depths are determined by ultrasonic pulse reflection, and the mass is measured with a total normal force sensor. The data were obtained at 50 Hz. The initial two-phase material was composed of 350 kg of debris with a water content of 40%. A very fine pebble with a mean particle diameter of 3 mm, a particle density of 2760 kg/m³ and a bulk density of 1400 kg/m³ in dry condition was chosen as the solid material. Measurements reveal that the debris bulk density remains high from the head to the middle of the debris body, whereas it drops substantially at the tail. This indicates lower water content at the tail compared to the head and the middle portion of the debris body. This means that the solid and fluid fractions vary strongly and non-linearly along the flow path, and from the head to the tail of the debris mass. Importantly, this spatial-temporal density variation plays a crucial role in determining the impact forces associated with the dynamics of the flow. Our setup allows for investigating different two-phase material compositions, including large fluid fractions, at high resolution. The considered experimental setup may enable us to transfer the observed phenomena to natural large-scale events. Furthermore, the measurement data allow evaluating the results of numerical two-phase mass flow simulations. These experiments are part of the project avaflow.org, which intends to develop a GIS-based open-source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural

  13. Optimization of volumetric breast density estimation in digital mammograms.

    Science.gov (United States)

    Holland, Katharina; Gubern-Mérida, Albert; Mann, Ritse M; Karssemeijer, Nico

    2017-05-07

    Fibroglandular tissue volume and percent density can be estimated in unprocessed mammograms using a physics-based method, which relies on an internal reference value representing the projection of fat only. However, pixels representing fat only may not be present in dense breasts, causing an underestimation of density measurements. In this work, we investigate alternative approaches for obtaining a tissue reference value to improve density estimations, particularly in dense breasts. Two of three investigated reference values (F1, F2) are percentiles of the pixel value distribution in the breast interior (the contact area of breast and compression paddle). F1 is determined in a small breast interior, which minimizes the risk that peripheral pixels are included in the measurement at the cost of increasing the chance that no proper reference can be found. F2 is obtained using a larger breast interior. The new approach which is developed for very dense breasts does not require the presence of a fatty tissue region. As reference region we select the densest region in the mammogram and assume that this represents a projection of entirely dense tissue embedded between the subcutaneous fatty tissue layers. By measuring the thickness of the fat layers a reference (F3) can be computed. To obtain accurate breast density estimates irrespective of breast composition we investigated a combination of the results of the three reference values. We collected 202 pairs of MRI's and digital mammograms from 119 women. We compared the percent dense volume estimates based on both modalities and calculated Pearson's correlation coefficients. With the references F1-F3 we found respectively a correlation of [Formula: see text], [Formula: see text] and [Formula: see text]. Best results were obtained with the combination of the density estimations ([Formula: see text]). Results show that better volumetric density estimates can be obtained with the hybrid method, in particular for dense

  14. Estimation of volumetric breast density for breast cancer risk prediction

    Science.gov (United States)

    Pawluczyk, Olga; Yaffe, Martin J.; Boyd, Norman F.; Jong, Roberta A.

    2000-04-01

    Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the true, volumetric quantity of dense tissue in the breast. A computerized method to estimate the amount of radiographically dense tissue in the overall volume of the breast has been developed to provide an automatic, user-independent tool for breast cancer risk assessment. The procedure for volumetric density estimation consists of first correcting the image for inhomogeneity, then performing a volume density calculation. First, optical sensitometry is used to convert all images to the logarithm of relative exposure (LRE), in order to simplify the image correction operations. The field non-uniformity correction, which takes into account heel effect, inverse square law, path obliquity and intrinsic field and grid non-uniformity, is obtained by imaging a spherical section PMMA phantom. The processed LRE image of the phantom is then used as a correction offset for actual mammograms. From information about the thickness and placement of the breast, as well as the parameters of a breast-like calibration step wedge placed in the mammogram, MD of the breast is calculated. Post-processing and a simple calibration phantom enable user-independent, reliable and repeatable volumetric estimation of density in breast-equivalent phantoms. Initial results obtained on known density phantoms show the estimation to vary less than 5% in MD from the actual value. This can be compared to estimated mammographic density differences of 30% between the true and non-corrected values. Since a more simplistic breast density measurement based on the projected area has been shown to be a strong indicator

  15. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and CUDA-C language. Parallel calculations are done for particles having bivariate normal distribution and by assigning calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads are varied to identify favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
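
    The computation being parallelized is just an independent kernel sum at every node point, which is why it maps cleanly onto one GPU thread per node. A minimal CPU-side sketch of the same calculation is given below (vectorized NumPy, with node chunks playing the role of thread blocks); the bandwidth, grid extent and particle count are arbitrary choices, not the configuration benchmarked in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles drawn from a bivariate normal distribution, as in the record.
n_particles = 5_000
particles = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], n_particles)

# Equally spaced node points where the density is evaluated; on the GPU each
# node is assigned to one scalar processor, here nodes are processed in chunks.
grid = np.linspace(-4.0, 4.0, 101)
nodes = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)

h = 0.3                                        # kernel bandwidth
norm = 1.0 / (n_particles * 2.0 * np.pi * h**2)

density = np.empty(len(nodes))
chunk = 512                                    # plays the role of a thread block
for start in range(0, len(nodes), chunk):
    block = nodes[start:start + chunk]                         # (b, 2)
    sq = ((block[:, None, :] - particles[None, :, :])**2).sum(-1)
    density[start:start + chunk] = norm * np.exp(-0.5 * sq / h**2).sum(axis=1)

print("peak estimated density:", density.max())
```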

  16. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  17. Causal Effect Estimation Methods

    OpenAIRE

    2014-01-01

    The relationship between two popular modeling frameworks for causal inference from observational data, namely the causal graphical model and the potential outcome causal model, is discussed. It is shown how some popular causal effect estimators found in applications of the potential outcome causal model, such as the inverse probability of treatment weighted estimator and the doubly robust estimator, can be obtained by using the causal graphical model. We confine attention to the simple case of binary outcome and treatment vari...

  18. Cheap DECAF: Density Estimation for Cetaceans from Acoustic Fixed Sensors Using Separate, Non-Linked Devices

    Science.gov (United States)

    2014-09-30

    ...cetaceans using passive fixed acoustics rely on large, dense arrays of cabled hydrophones and/or auxiliary information from animal tagging projects... estimating cetacean density. Therefore, the goal of Cheap DECAF is to focus on the development of cetacean density estimation methods using sensors that

  19. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    Science.gov (United States)

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  20. Current Source Density Estimation for Single Neurons

    Directory of Open Access Journals (Sweden)

    Dorottya Cserpán

    2014-03-01

    Full Text Available Recent developments of multielectrode technology have made it possible to measure the extracellular potential generated in the neural tissue with spatial precision on the order of tens of micrometers and on a submillisecond time scale. Combining such measurements with imaging of single neurons within the studied tissue opens up new experimental possibilities for estimating the distribution of current sources along a dendritic tree. In this work we show that if we are able to relate part of the recording of extracellular potential to a specific cell of known morphology, we can estimate the spatiotemporal distribution of transmembrane currents along it. We present here an extension of the kernel CSD method (Potworowski et al., 2012) applicable in such a case. We test it on several model neurons of progressively complicated morphologies, from ball-and-stick to realistic, up to analysis of simulated neuron activity embedded in a substantial working network (Traub et al., 2005). We discuss the caveats and possibilities of this new approach.

  1. Software Development Cost Estimation Methods

    Directory of Open Access Journals (Sweden)

    Bogdan Stepien

    2003-01-01

    Full Text Available Early estimation of project size and completion time is essential for successful project planning and tracking. Multiple methods have been proposed to estimate software size and cost parameters. Suitability of the estimation methods depends on many factors, such as software application domain, product complexity, availability of historical data, team expertise, etc. The most common and widely used estimation techniques are described and analyzed. Current research trends in software cost estimation are also presented.

  2. Nonparametric estimation of population density for line transect sampling using FOURIER series

    Science.gov (United States)

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
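
    A minimal sketch of the Fourier series line-transect estimator is given below: the density of perpendicular distances at zero is approximated by a truncated cosine series, and the object density follows from D = n f(0) / (2L). The simulated distances, the truncation distance w and the fixed number of terms m are illustrative assumptions; in practice m is chosen with a stopping rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated perpendicular sighting distances (km), truncated at w; in a real
# survey these come from the field data.
w = 0.5                      # transect half-width (truncation distance), km
L = 100.0                    # total transect length, km
x = np.abs(rng.normal(0.0, 0.15, size=200))
x = x[x <= w]
n = len(x)

# Fourier series estimate of the detection-distance density at zero:
#   f(0) ~ 1/w + sum_k a_k,  with  a_k = (2 / (n w)) * sum_i cos(k pi x_i / w)
m = 4                        # number of cosine terms (fixed here for simplicity)
a = np.array([2.0 / (n * w) * np.cos(k * np.pi * x / w).sum() for k in range(1, m + 1)])
f0 = 1.0 / w + a.sum()

# Line-transect density estimate: objects per unit area.
D_hat = n * f0 / (2.0 * L)
print(f"n = {n}, f(0) = {f0:.2f} per km, density = {D_hat:.2f} objects per km^2")
```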

  3. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    Directory of Open Access Journals (Sweden)

    Justine S Alexander

    Full Text Available When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.

  4. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    Science.gov (United States)

    Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.

  5. Mammographic density estimation with automated volumetric breast density measurement.

    Science.gov (United States)

    Ko, Su Yeon; Kim, Eun-Kyung; Kim, Min Jung; Moon, Hee Jung

    2014-01-01

    To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.
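
    Agreement of the kind reported here is typically quantified with Cohen's kappa computed from a cross-tabulation of the two readings. The sketch below shows the calculation for a four-grade table and for the collapsed fatty/dense classification; the counts are invented for illustration and are not the study's data.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square confusion matrix of two raters."""
    table = np.asarray(table, dtype=float)
    total = table.sum()
    p_observed = np.trace(table) / total
    p_expected = (table.sum(axis=0) * table.sum(axis=1)).sum() / total**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Illustrative 4x4 cross-tabulation of density grades (rows: radiologist
# D1-D4, columns: automated measurement D1-D4); not the study's data.
grades = [[120, 60, 10, 2],
          [ 50, 180, 90, 10],
          [ 10, 80, 200, 70],
          [  2, 10, 60, 150]]
print("4-grade kappa :", round(cohens_kappa(grades), 2))

# Collapse to the binary fatty (D1-2) vs dense (D3-4) classification.
g = np.asarray(grades)
binary = [[g[:2, :2].sum(), g[:2, 2:].sum()],
          [g[2:, :2].sum(), g[2:, 2:].sum()]]
print("binary kappa  :", round(cohens_kappa(binary), 2))
```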

  6. Mammographic density estimation with automated volumetric breast density measurement

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Su Yeon; Kim, Eun Kyung; Kim, Min Jung; Moon, Hee Jung [Dept. of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2014-06-15

    To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.

  7. An ensemble average method to estimate absolute TEC using radio beacon-based differential phase measurements: Applicability to regions of large latitudinal gradients in plasma density

    Science.gov (United States)

    Thampi, Smitha V.; Bagiya, Mala S.; Chakrabarty, D.; Acharya, Y. B.; Yamamoto, M.

    2014-12-01

    A GNU Radio Beacon Receiver (GRBR) system for total electron content (TEC) measurements using 150 and 400 MHz transmissions from Low-Earth Orbiting Satellites (LEOS) was fabricated in-house and has been operational at Ahmedabad (23.04°N, 72.54°E geographic, dip latitude 17°N) since May 2013. This system receives the 150 and 400 MHz transmissions from high-inclination LEOS. The first few days of observations are presented in this work to bring out the efficacy of an ensemble average method for converting the relative TECs to absolute TECs. This method is a modified version of the differential Doppler-based method proposed by de Mendonca (1962) and is suitable even for ionospheric regions with large spatial gradients. Comparison with TECs derived from a collocated GPS receiver shows that the absolute TECs estimated by this method are reliable estimates over regions with large spatial gradients. This method is useful even when only one receiving station is available. The differences between these observations are discussed to bring out the importance of the spatial differences between the ionospheric pierce points of these satellites. A few examples of the latitudinal variation of TEC at different local times using GRBR measurements are also presented, which demonstrate the potential of radio beacon measurements for capturing the large-scale plasma transport processes in the low-latitude ionosphere.

  8. A Field Evaluation of the Time-of-Detection Method to Estimate Population Size and Density for Aural Avian Point Counts

    Directory of Open Access Journals (Sweden)

    Mathew W. Alldredge

    2007-12-01

    Full Text Available The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture-recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. Singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant
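
    Because the subinterval detection histories are equivalent to closed capture-recapture data, abundance at a point can be estimated with a standard closed-population model. The sketch below fits the simplest such model (M0, a constant per-interval detection probability) by profiling the likelihood over N; the detection histories are invented for illustration and the M0 assumption ignores the heterogeneity the study focuses on.

```python
import numpy as np
from math import lgamma, log

# Detection histories from one point count: one row per bird detected at
# least once, one column per 2-min subinterval (invented data for illustration).
histories = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
])
n = histories.shape[0]        # birds detected at least once
k = histories.shape[1]        # number of subintervals
d = int(histories.sum())      # total detections

def log_likelihood(N, p):
    """Closed-population model M0: each of N birds is detected independently
    in each subinterval with the same probability p."""
    if N < n or not (0.0 < p < 1.0):
        return -np.inf
    return (lgamma(N + 1) - lgamma(N - n + 1)
            + d * log(p) + (k * N - d) * log(1.0 - p))

# Profile the likelihood over N; for fixed N the MLE of p is d / (k N).
best_N, best_ll = n, -np.inf
for N in range(n, 500):
    ll = log_likelihood(N, d / (k * N))
    if ll > best_ll:
        best_N, best_ll = N, ll

print(f"birds detected n = {n}, estimated N = {best_N}, "
      f"per-interval detection probability = {d / (k * best_N):.2f}")
```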

  9. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    Science.gov (United States)

    2014-09-30

    interactions with human activity requires knowledge of how many animals are present in an area during a specific time period. Many marine mammal species... Ocean at Wake Island will then be applied to the same species in the Indian Ocean at the CTBTO location at Diego Garcia. 1. Develop and implement... proposed density estimation method is also highly dependent on call rate inputs, which are used in the development of species-specific multipliers for

  10. Simplified large African carnivore density estimators from track indices

    Directory of Open Access Journals (Sweden)

    Christiaan W. Winterbach

    2016-12-01

    Full Text Available Background The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. Methods We did simple linear regression with intercept analysis and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. Results The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Discussion Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26
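
    The model comparison described above, a regression with intercept versus a regression through the origin judged by the intercept's confidence interval, residual error and AIC, is straightforward to reproduce. The sketch below uses invented track-index data and a least-squares AIC approximation (n ln(RSS/n) + 2k), not the study's measurements.

```python
import numpy as np

# Illustrative calibration data: x = track density (tracks per km per day),
# y = true carnivore density (individuals per 100 km^2). Invented values.
x = np.array([0.4, 0.9, 1.3, 1.8, 2.4, 3.1, 3.9])
y = np.array([1.2, 3.1, 4.4, 5.7, 8.2, 9.9, 12.6])
n = len(x)

def fit(design):
    """Least-squares fit with RSS and an AIC approximation for comparison."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = float(((y - design @ coef) ** 2).sum())
    k = design.shape[1] + 1                 # regression coefficients + error variance
    aic = n * np.log(rss / n) + 2 * k
    return coef, rss, aic

# Model with intercept: y = alpha * x + beta
coef_i, rss_i, aic_i = fit(np.column_stack([x, np.ones(n)]))
# Model through the origin: y = alpha * x
coef_o, rss_o, aic_o = fit(x[:, None])

print(f"with intercept : slope={coef_i[0]:.2f}, intercept={coef_i[1]:.2f}, AIC={aic_i:.1f}")
print(f"through origin : slope={coef_o[0]:.2f},                 AIC={aic_o:.1f}")
```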

  11. Open-cluster density profiles derived using a kernel estimator

    CERN Document Server

    Seleznev, Anton F

    2016-01-01

    Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density pr...

  12. Some Bayesian statistical techniques useful in estimating frequency and density

    Science.gov (United States)

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which insures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
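
    One simple version of the frequency-to-density argument assumes that individuals are sparsely and randomly (Poisson) distributed, so that the probability a plot of area a is occupied is p = 1 - exp(-D*a). A Beta posterior for p then transforms directly into credible limits for the density D, as sketched below with invented plot counts and a uniform prior; this is only one way of formalizing the approach described in the record.

```python
import numpy as np
from scipy.stats import beta

# Frequency data: x of n sample plots contained the species (invented counts).
n_plots, x_occupied = 50, 14
plot_area = 0.1                      # area of each plot, hectares

# Beta posterior for the occupancy probability with a uniform Beta(1, 1) prior.
posterior = beta(1 + x_occupied, 1 + n_plots - x_occupied)
p_lo, p_med, p_hi = posterior.ppf([0.025, 0.5, 0.975])

# If individuals are sparsely and randomly (Poisson) distributed, the chance a
# plot of area a is occupied is p = 1 - exp(-D a), so D = -ln(1 - p) / a.
to_density = lambda p: -np.log(1.0 - p) / plot_area

print(f"occupancy  : {p_med:.2f}  (95% CrI {p_lo:.2f}-{p_hi:.2f})")
print(f"density/ha : {to_density(p_med):.1f}  "
      f"(95% CrI {to_density(p_lo):.1f}-{to_density(p_hi):.1f})")
```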

  13. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th

  14. Estimation of probability densities using scale-free field theories.

    Science.gov (United States)

    Kinney, Justin B

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  15. Ant-inspired density estimation via random walks.

    Science.gov (United States)

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-09-19

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
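
    The encounter-rate idea is easy to reproduce in simulation: agents random-walk on a torus grid, each one counts how many other agents share its cell at every step, and the per-step encounter rate serves as the density estimate (agents per cell). The grid size, number of agents and number of steps below are arbitrary choices for illustration, and the simulation does not reproduce the paper's analysis of collision dependencies.

```python
import numpy as np

rng = np.random.default_rng(2)

side = 50                                 # grid is a side x side torus
n_agents = 250                            # true density = 250 / 2500 = 0.1 per cell
steps = 400

pos = rng.integers(0, side, size=(n_agents, 2))
encounters = np.zeros(n_agents)
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

for _ in range(steps):
    # Each agent takes one random-walk step (with torus wrap-around).
    pos = (pos + moves[rng.integers(0, 4, size=n_agents)]) % side
    # Count, for every agent, how many other agents occupy the same cell.
    cell = pos[:, 0] * side + pos[:, 1]
    counts = np.bincount(cell, minlength=side * side)
    encounters += counts[cell] - 1

density_estimates = encounters / steps     # per-agent encounter-rate estimates
true_density = n_agents / side**2
print("true density     :", true_density)
print("median estimate  :", np.median(density_estimates))
print("median rel. error:", np.median(np.abs(density_estimates - true_density)) / true_density)
```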

  16. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...

  17. Highway traffic model-based density estimation

    OpenAIRE

    Morarescu, Irinel - Constantin; CANUDAS DE WIT, Carlos

    2011-01-01

    The travel time spent in traffic networks is one of the main concerns of societies in developed countries. A major requirement for providing traffic control and services is continuous prediction of the traffic state, for several minutes into the future. This paper focuses on an important ingredient necessary for traffic forecasting, namely real-time traffic state estimation using only a limited amount of data. Simulation results illustrate the performance of the proposed ...

  18. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions... is insensitive to the choice of estimation method, but also that the uncertainties of predictions were reduced when applying maximum likelihood...

  19. Validation of the Martin Method for Estimating Low-Density Lipoprotein Cholesterol Levels in Korean Adults: Findings from the Korea National Health and Nutrition Examination Survey, 2009-2011.

    Directory of Open Access Journals (Sweden)

    Jongseok Lee

    Full Text Available Despite the importance of accurate assessment of low-density lipoprotein cholesterol (LDL-C), the Friedewald formula has primarily been used as a cost-effective method to estimate LDL-C when triglycerides are less than 400 mg/dL. In a recent study, an alternative to the formula was proposed to improve estimation of LDL-C. We evaluated the performance of the novel method versus the Friedewald formula using a sample of 5,642 Korean adults with LDL-C measured by an enzymatic homogeneous assay (LDL-CD). Friedewald LDL-C (LDL-CF) was estimated using a fixed factor of 5 for the ratio of triglycerides to very-low-density lipoprotein cholesterol (TG:VLDL-C ratio). However, the novel LDL-C (LDL-CN) estimates were calculated using the N-strata-specific median TG:VLDL-C ratios: LDL-C5 and LDL-C25 from the respective ratios derived from our data set, and LDL-C180 from the 180-cell table reported by the original study. Compared with LDL-CF, each LDL-CN estimate exhibited a significantly higher overall concordance in the NCEP-ATP III guideline classification with LDL-CD (p < 0.001 for each comparison). Overall concordance was 78.2% for LDL-CF, 81.6% for LDL-C5, 82.3% for LDL-C25, and 82.0% for LDL-C180. Compared to LDL-C5, LDL-C25 significantly but slightly improved overall concordance (p = 0.008). LDL-C25 and LDL-C180 provided almost the same overall concordance; however, LDL-C180 achieved superior improvement in classifying LDL-C < 70 mg/dL compared to the other estimates. In subjects with triglycerides of 200 to 399 mg/dL, each LDL-CN estimate showed a significantly higher concordance than that of LDL-CF (p < 0.001 for each comparison). The novel method offers a significant improvement in LDL-C estimation when compared with the Friedewald formula. However, it requires further modification and validation considering racial differences as well as the specific character of the applied measuring method.
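
    The difference between the two estimators lies only in the divisor applied to triglycerides: Friedewald fixes the TG:VLDL-C ratio at 5, while the novel approach looks the ratio up from strata of triglycerides and non-HDL cholesterol. The sketch below illustrates that structure; the tiny ratio table is a hypothetical placeholder and not the published 180-cell table.

```python
def friedewald_ldl(tc, hdl, tg):
    """Friedewald estimate (mg/dL): VLDL-C is approximated as TG / 5."""
    return tc - hdl - tg / 5.0

# A tiny illustrative stand-in for the strata-specific median TG:VLDL-C ratios;
# the published method uses a 180-cell table indexed by TG and non-HDL-C, and
# these placeholder numbers are NOT those values.
def tg_vldl_ratio(tg, non_hdl):
    if tg < 100:
        return 4.5 if non_hdl < 130 else 4.0
    if tg < 200:
        return 5.5 if non_hdl < 130 else 5.0
    return 7.0 if non_hdl < 130 else 6.0

def adjustable_factor_ldl(tc, hdl, tg):
    """Strata-based estimate: VLDL-C = TG / f, with f taken from the ratio table."""
    non_hdl = tc - hdl
    return tc - hdl - tg / tg_vldl_ratio(tg, non_hdl)

# Example lipid panel (mg/dL), invented for illustration.
tc, hdl, tg = 210.0, 45.0, 250.0
print("Friedewald LDL-C        :", round(friedewald_ldl(tc, hdl, tg), 1))
print("Adjustable-factor LDL-C :", round(adjustable_factor_ldl(tc, hdl, tg), 1))
```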

  20. A survey of the apes in the Dzanga-Ndoki National Park, Central African Republic: A comparison between the census and survey methods of estimating the gorilla (Gorilla gorilla gorilla) and chimpanzee (Pan troglodytes) nest group density

    NARCIS (Netherlands)

    Almasi, A.; Blom, A.; Heitkönig, I.M.A.; Kpanou, J.B.; Prins, H.H.T.

    2001-01-01

    A survey of apes was carried out between October 1996 and May 1997 in the Dzanga sector of the Dzanga-Ndoki National Park, Central African Republic (CAR), to estimate gorilla (Gorilla gorilla gorilla) and chimpanzee (Pan troglodytes) densities. The density estimates were based on nest counts. The st

  1. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
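
    The essential trick in likelihood-based reconstruction is to parameterize the density matrix so that positivity holds by construction, e.g. rho = T T† / Tr(T T†) with T triangular, and then maximize the likelihood of the observed outcome frequencies. The single-qubit sketch below illustrates that idea with simulated Pauli-measurement frequencies; it is a generic illustration of the positivity-preserving MLE idea, not the NMR protocol of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices for a single qubit.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"x": sx, "y": sy, "z": sz}

# Simulated (noisy) frequencies of the +1 outcome for sigma_x, sigma_y, sigma_z
# measurements; invented numbers, chosen so that naive linear inversion fails.
shots = 500
freq_plus = {"x": 0.95, "y": 0.10, "z": 0.92}

# Naive linear inversion: rho = (I + sum_i <sigma_i> sigma_i) / 2. With noisy
# frequencies this can yield a negative eigenvalue, i.e. an unphysical state.
bloch = {a: 2.0 * f - 1.0 for a, f in freq_plus.items()}
rho_naive = 0.5 * (np.eye(2) + sum(bloch[a] * paulis[a] for a in paulis))
print("naive eigenvalues :", np.round(np.linalg.eigvalsh(rho_naive), 3))

def rho_from_params(t):
    """Positive, unit-trace density matrix from 4 real parameters via
    rho = T T^dagger / Tr(T T^dagger), with T lower triangular."""
    T = np.array([[t[0], 0.0], [t[2] + 1j * t[3], t[1]]], dtype=complex)
    rho = T @ T.conj().T
    return rho / np.trace(rho).real

def neg_log_likelihood(t):
    rho = rho_from_params(t)
    nll = 0.0
    for axis, sigma in paulis.items():
        p = 0.5 * (1.0 + np.trace(rho @ sigma).real)   # probability of the +1 outcome
        p = min(max(p, 1e-9), 1.0 - 1e-9)              # numerical guard
        k = freq_plus[axis] * shots
        nll -= k * np.log(p) + (shots - k) * np.log(1.0 - p)
    return nll

result = minimize(neg_log_likelihood, x0=[1.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
rho_mle = rho_from_params(result.x)
print("MLE eigenvalues   :", np.round(np.linalg.eigvalsh(rho_mle), 3))
print("MLE density matrix:\n", np.round(rho_mle, 3))
```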

  2. Efficient estimation of dynamic density functions with an application to outlier detection

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2012-01-01

    In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track, as it is based on a conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, in estimation accuracy on complex density structures in data streams, computing time and memory usage. KDE-Track is also shown to capture the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
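
    A minimal sketch of the grid-plus-interpolation idea is shown below: kernel contributions are folded into density values kept at fixed grid points as each sample arrives, and queries at arbitrary points are answered by interpolating the grid. The simple forgetting-factor update used here is one possible way to track a drifting density and is not the published KDE-Track scheme; bandwidth, grid and stream are arbitrary choices.

```python
import numpy as np

class StreamingKDE:
    """Grid-based kernel density model updated incrementally from a stream,
    with linear interpolation used to answer density queries (a sketch in the
    spirit of KDE-Track, not the published implementation)."""

    def __init__(self, lo, hi, n_points=201, bandwidth=0.2, forget=0.001):
        self.grid = np.linspace(lo, hi, n_points)
        self.density = np.zeros(n_points)
        self.h = bandwidth
        self.forget = forget
        self.count = 0

    def update(self, x):
        """Fold one new observation into the grid model. Early on this is a
        plain running average; later a small forgetting factor lets the
        model adapt to drift."""
        kernel = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (
            self.h * np.sqrt(2.0 * np.pi))
        self.count += 1
        lam = max(1.0 / self.count, self.forget)
        self.density += lam * (kernel - self.density)

    def query(self, x):
        """Density at an arbitrary point by interpolating the grid model."""
        return np.interp(x, self.grid, self.density)

rng = np.random.default_rng(3)
kde = StreamingKDE(lo=-5.0, hi=5.0)

# A drifting stream: the distribution's mean slowly moves from -1 to +1.
for t in range(20_000):
    kde.update(rng.normal(-1.0 + 2.0 * t / 20_000, 0.7))

print("density at  1.0:", round(kde.query(1.0), 3))
print("density at -3.0:", round(kde.query(-3.0), 3))
```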

  3. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    Science.gov (United States)

    2015-06-10

    estimation exploiting, in concert, hard and soft information. Although our development, theoretical and numerical, makes no distinction based on sample... univariate density estimation in situations when the sample (hard information) is supplemented by “soft” information about the random phenomenon. These

  4. Density estimators in particle hydrodynamics - DTFE versus regular SPH

    NARCIS (Netherlands)

    Pelupessy, FI; Schaap, WE; van de Weygaert, R

    2003-01-01

    We present the results of a study comparing density maps reconstructed by the Delaunay Tessellation Field Estimator (DTFE) and by regular SPH kernel-based techniques. The density maps are constructed from the outcome of an SPH particle hydrodynamics simulation of a multiphase interstellar medium. Th

  5. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various

  6. Methods of gas hydrate concentration estimation with field examples

    Digital Repository Service at National Institute of Oceanography (India)

    Kumar, D.; Dash, R.; Dewangan, P.

    Different methods of gas hydrate concentration estimation that make use of data from the measurements of the seismic properties, electrical resistivity, chlorinity, porosity, density, and temperature are summarized in this paper. We demonstrate the methods...

  7. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  8. Tunnel Cost-Estimating Methods.

    Science.gov (United States)

    1981-10-01

    Technical Report GL-81-10, Tunnel Cost-Estimating Methods, by R. D. Bennett, Army Engineer Waterways Experiment Station, Vicksburg, October 1981. The remainder of the scanned record is illegible OCR output; the recoverable fragment refers to a LINING routine that calculates the lining and formwork costs for a tunnel or shaft segment.

  9. Estimation of Aqueous Solubility (-lgSw) of All Polychlorinated Biphenyl (PCB) Congeners by Density Functional Theory and Position of Cl Substitution (NPCS) Method

    Institute of Scientific and Technical Information of China (English)

    WEI Xiao-Yan; GE Zhi-Gang; WANG Zun-Yao; XU Jiao

    2007-01-01

    Optimization calculations of 209 polychlorinated biphenyls (PCBs) were carried out at the B3LYP/6-31G* level. It was found that there is a significant correlation between the Cl substitution position and some structural parameters. Consequently, Cl substitution positions were taken as theoretical descriptors to establish a novel QSPR model for predicting -lgSw of all PCB congeners. The model achieved in this work contains four variables, with r2 = 0.9527, q2 = 0.9490 and SD = 0.25 and large t values. In addition, the variance inflation factors (VIFs) of the variables in this model are all less than 5.0, suggesting high accuracy of the -lgSw prediction model. The results of the cross-validation test and method validation also show that the model exhibits optimum stability and better predictive capability than that from the AM1 method.

  10. Echolocation detections and digital video surveys provide reliable estimates of the relative density of harbour porpoises

    National Research Council Canada - National Science Library

    Williamson, Laura D; Brookes, Kate L; Scott, Beth E; Graham, Isla M; Bradbury, Gareth; Hammond, Philip S; Thompson, Paul M; McPherson, Jana

    2016-01-01

    ...‐based visual surveys. Surveys of cetaceans using acoustic loggers or digital cameras provide alternative methods to estimate relative density that have the potential to reduce cost and provide a verifiable record of all detections...

  11. Cosmic Web Reconstruction through Density Ridges: Method and Algorithm

    CERN Document Server

    Chen, Yen-Chi; Freeman, Peter E; Genovese, Christopher R; Wasserman, Larry

    2015-01-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the Subspace Constrained Mean Shift (SCMS) algorithm (Ozertem and Erdogmus (2011); Genovese et al. (2012)) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS method to datasets sampled from the P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data fro...
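
    A compact version of SCMS can be written directly from its definition: at each test point, take the mean-shift step but project it onto the eigendirections of the Hessian of the log-density with the smallest eigenvalues, so points slide onto one-dimensional ridges of a kernel density estimate. The sketch below runs this on a synthetic filament-like point cloud; the bandwidth, data and iteration settings are arbitrary choices, and no bootstrap uncertainty bands are computed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic point cloud concentrated around a filament (a sine curve) plus noise.
t = rng.uniform(0.0, 2.0 * np.pi, 800)
data = np.column_stack([t, np.sin(t)]) + rng.normal(0.0, 0.15, (800, 2))

h = 0.25          # KDE bandwidth

def kde_parts(y):
    """Mean-shift vector and Hessian of the log of a Gaussian KDE at point y
    (normalization constants cancel in both quantities)."""
    diff = data - y                                        # (n, 2)
    w = np.exp(-0.5 * (diff ** 2).sum(axis=1) / h ** 2)    # kernel weights
    p = w.sum()
    grad = (w[:, None] * diff).sum(axis=0) / h ** 2
    hess = (w[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0) / h ** 4 \
           - p * np.eye(2) / h ** 2
    hess_log = hess / p - np.outer(grad, grad) / p ** 2
    mean_shift = (w[:, None] * data).sum(axis=0) / p - y
    return mean_shift, hess_log

def scms(y, iters=200, tol=1e-6):
    """Subspace Constrained Mean Shift: project the mean-shift step onto the
    eigendirection(s) of the log-density Hessian with the smallest eigenvalues."""
    for _ in range(iters):
        m, hess_log = kde_parts(y)
        vals, vecs = np.linalg.eigh(hess_log)      # eigenvalues in ascending order
        v_perp = vecs[:, :1]                       # direction of most negative curvature
        step = v_perp @ (v_perp.T @ m)
        y = y + step
        if np.linalg.norm(step) < tol:
            break
    return y

# Push a few mesh points onto the density ridge.
mesh = rng.uniform([0.0, -1.5], [2.0 * np.pi, 1.5], (5, 2))
ridge_points = np.array([scms(y.copy()) for y in mesh])
print(np.round(ridge_points, 2))
```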

  12. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False Alarm P_FA of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P_D vs. P_FA parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
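
    The CFAR step itself reduces to estimating the background density of the matched-filter scores and choosing the decision threshold r_0 at the (1 - P_FA) point of that distribution. The sketch below does this with a Gaussian kernel density estimate of simulated clutter scores; the gamma-distributed background is an arbitrary stand-in for real hyperspectral data, not the algorithm suite described in the record.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# Matched-filter output scores for background (plume-free) pixels; in practice
# these come from the hyperspectral cube, here they are simulated clutter.
background_scores = rng.gamma(shape=3.0, scale=1.0, size=5000)

# Estimate the probability density of the background scores.
pdf = gaussian_kde(background_scores)

def cfar_threshold(p_fa, lo=0.0, hi=30.0, n=20_000):
    """Decision threshold r_0 with P(score > r_0 | background) = p_fa,
    obtained by numerically integrating the estimated density."""
    grid = np.linspace(lo, hi, n)
    cdf = np.cumsum(pdf(grid))
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, 1.0 - p_fa)]

for p_fa in (1e-1, 1e-2, 1e-3):
    r0 = cfar_threshold(p_fa)
    empirical = np.mean(background_scores > r0)
    print(f"P_FA target {p_fa:>6}: threshold r_0 = {r0:5.2f}, "
          f"empirical rate {empirical:.4f}")
```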

  13. Application of non-parametric kernel density method to the estimation of individual loss distribution

    Institute of Scientific and Technical Information of China (English)

    谭英平

    2003-01-01

    As an exploratory study, this paper presents a new approach through which the individual loss distribution can be analyzed. Unlike the traditional parametric statistics approach, the author describes the whole procedure by which nonparametric kernel density estimation can be utilized in the analysis of the individual loss distribution. Further, the performance of the new estimator is verified.
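
    A minimal sketch of the kind of nonparametric estimate discussed above, assuming a heavy-tailed toy loss sample and SciPy's Gaussian KDE; the data and the tail-probability query are purely illustrative.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # toy "individual loss" sample with the heavy right tail typical of loss data
    rng = np.random.default_rng(1)
    losses = rng.lognormal(mean=np.log(10.0), sigma=1.0, size=2_000)

    kde = gaussian_kde(losses)              # bandwidth from Scott's rule by default
    grid = np.linspace(0.0, losses.max(), 500)
    density = kde(grid)                     # estimated density of the loss distribution

    # a nonparametric estimate of an exceedance probability, e.g. P(loss > 50)
    print("P(loss > 50) ~", kde.integrate_box_1d(50.0, np.inf))
    ```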

  14. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    Science.gov (United States)

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm² of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates.

  15. Kernel density estimation of a multidimensional efficiency profile

    CERN Document Server

    Poluektov, Anton

    2014-01-01

    Kernel density estimation is a convenient way to estimate the probability density of a distribution given the sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects, is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the $\\Lambda_b^0\\to D^0p\\pi$ decay.
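
    The correction proposed in the record above relies on an approximate density to handle boundaries and narrow structures; the sketch below shows only the uncorrected baseline idea, an efficiency shape estimated as the ratio of two kernel density estimates (accepted over generated events) in one dimension, which exhibits exactly the boundary smearing the paper addresses. Names and the toy efficiency are illustrative.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def kde_efficiency(generated, accepted, grid):
        """Efficiency(x) ~ [N_acc * p_acc(x)] / [N_gen * p_gen(x)],
        with both densities estimated by Gaussian KDEs (no boundary correction)."""
        p_gen = gaussian_kde(generated)(grid)
        p_acc = gaussian_kde(accepted)(grid)
        return len(accepted) * p_acc / (len(generated) * p_gen)

    # toy 1-D phase-space variable with an efficiency that falls off at the edges
    rng = np.random.default_rng(2)
    gen = rng.uniform(-1.0, 1.0, size=50_000)
    acc = gen[rng.random(gen.size) < 0.9 * (1.0 - 0.5 * gen**2)]  # "true" efficiency
    grid = np.linspace(-0.9, 0.9, 37)
    print(kde_efficiency(gen, acc, grid)[:5])
    ```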

  16. Quantiles, parametric-select density estimation, and bi-information parameter estimators

    Science.gov (United States)

    Parzen, E.

    1982-01-01

    A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.

  17. Density estimates of monarch butterflies overwintering in central Mexico

    OpenAIRE

    Thogmartin, Wayne; Diffendorfer, Jay E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice Xavier; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There...
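
    The proxy described above is a simple product of occupied area and per-hectare density; below is a minimal sketch that propagates uncertainty in the density through that product by Monte Carlo. All numbers are placeholders, not the paper's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    area_ha = 4.0                       # hectares occupied in a given winter (placeholder)
    # plausible spread of the per-hectare density estimate (placeholder distribution)
    density_draws = rng.lognormal(mean=np.log(2.1e7), sigma=0.3, size=100_000)

    population_draws = area_ha * density_draws
    lo, mid, hi = np.percentile(population_draws, [2.5, 50, 97.5])
    print(f"population ~ {mid:.2e} (95% interval {lo:.2e} to {hi:.2e})")
    ```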

  18. Estimating neuronal connectivity from axonal and dendritic density fields

    Science.gov (United States)

    van Pelt, Jaap; van Ooyen, Arjen

    2013-01-01

    Neurons innervate space by extending axonal and dendritic arborizations. When axons and dendrites come into close proximity with each other, synapses between neurons can be formed. Neurons vary greatly in their morphologies and synaptic connections with other neurons. The size and shape of the arborizations determine the way neurons innervate space. A neuron may therefore be characterized by the spatial distribution of its axonal and dendritic “mass.” A population mean “mass” density field of a particular neuron type can be obtained by averaging over the individual variations in neuron geometries. Connectivity in terms of candidate synaptic contacts between neurons can be determined directly on the basis of their arborizations but also indirectly on the basis of their density fields. To decide when a candidate synapse can be formed, we previously developed a criterion requiring that axonal and dendritic line pieces cross in 3D and have an orthogonal distance less than a threshold value. In this paper, we developed new methodology for applying this criterion to density fields. We show that estimates of the number of contacts between neuron pairs calculated from their density fields are fully consistent with the number of contacts calculated from the actual arborizations. However, the connection probability and the expected number of contacts per connection cannot be calculated directly from density fields, because density fields no longer carry the correlative structure in the spatial distribution of synaptic contacts. Alternatively, these two connectivity measures can be estimated from the expected number of contacts by using empirical mapping functions. The neurons used for the validation studies were generated by our neuron simulator NETMORPH. An example is given of the estimation of average connectivity and Euclidean pre- and postsynaptic distance distributions in a network of neurons represented by their population mean density

  19. Corruption clubs: empirical evidence from kernel density estimates

    NARCIS (Netherlands)

    Herzfeld, T.; Weiss, Ch.

    2007-01-01

    A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to

  20. Corruption clubs: empirical evidence from kernel density estimates

    NARCIS (Netherlands)

    Herzfeld, T.; Weiss, Ch.

    2007-01-01

    A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to

  1. Density estimation in tiger populations: combining information for strong inference.

    Science.gov (United States)

    Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W

    2012-07-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  2. Density estimation in tiger populations: combining information for strong inference

    Science.gov (United States)

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  3. State of the Art in Photon-Density Estimation

    DEFF Research Database (Denmark)

    Hachisuka, Toshiya; Jarosz, Wojciech; Georgiev, Iliyan

    2013-01-01

    Photon-density estimation techniques are a popular choice for simulating light transport in scenes with complicated geometry and materials. This class of algorithms can be used to accurately simulate inter-reflections, caustics, color bleeding, scattering in participating media, and subsurface sc...

  4. State of the Art in Photon Density Estimation

    DEFF Research Database (Denmark)

    Hachisuka, Toshiya; Jarosz, Wojciech; Bouchard, Guillaume

    2012-01-01

    Photon-density estimation techniques are a popular choice for simulating light transport in scenes with complicated geometry and materials. This class of algorithms can be used to accurately simulate inter-reflections, caustics, color bleeding, scattering in participating media, and subsurface sc...

  5. Estimation of the space density of low surface brightness galaxies

    NARCIS (Netherlands)

    Briggs, FH

    1997-01-01

    The space densities of low surface brightness and tiny gas-rich dwarf galaxies are estimated for two recent catalogs: the Arecibo Survey of Northern Dwarf and Low Surface Brightness Galaxies and the Catalog of Low Surface Brightness Galaxies, List II. The goals are (1) to evaluate the additions to the

  6. State of the Art in Photon Density Estimation

    DEFF Research Database (Denmark)

    Hachisuka, Toshiya; Jarosz, Wojciech; Bouchard, Guillaume

    2012-01-01

    scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error...

  7. Cetacean Density Estimation from Novel Acoustic Datasets by Acoustic Propagation Modeling

    Science.gov (United States)

    2014-09-30

    OBJECTIVES: The objectives of this research are to apply existing methods for cetacean density estimation from passive acoustic recordings made by single...sensors, to novel data sets and cetacean species, as well as refine the existing techniques in order to develop a more generalized model that can be

  8. Evaluating lidar point densities for effective estimation of aboveground biomass

    Science.gov (United States)

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m2, corresponding to the point density range of 3DEP to provide national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than those from even the lowest lidar point density of 0.5 point/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.
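
    A minimal sketch of the random thinning used to produce reduced-density point clouds, assuming a flat array of lidar returns and a known tile area; it is not the USGS processing chain, and the tile dimensions in the comment are illustrative.

    ```python
    import numpy as np

    def thin_to_density(points, area_m2, target_pts_per_m2, rng=None):
        """Randomly subsample a lidar point cloud to a target point density.

        points: (n, k) array of returns (x, y, z, ...); area_m2: tile area.
        """
        rng = np.random.default_rng() if rng is None else rng
        n_keep = int(target_pts_per_m2 * area_m2)
        if n_keep >= len(points):
            return points                       # already at or below the target density
        idx = rng.choice(len(points), size=n_keep, replace=False)
        return points[idx]

    # e.g. thin a 1 km x 1 km tile to the densities evaluated above (illustrative)
    # reduced = {d: thin_to_density(cloud, 1_000_000, d) for d in (0.5, 1, 2, 4, 8)}
    ```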

  9. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-11-08

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
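
    KDE-Track itself combines a kernel model with interpolation and an adaptive bandwidth selector; the sketch below shows only the simpler ingredient of a grid-based kernel density that is updated one sample at a time with exponential forgetting, so the estimate can follow a drifting stream. Class and parameter names are illustrative.

    ```python
    import numpy as np

    class StreamingKDE:
        """Grid-based kernel density updated one sample at a time.

        A forgetting factor down-weights old samples so the estimate can follow
        changes in the stream's distribution (illustrative sketch only).
        """
        def __init__(self, grid, bandwidth=0.3, forgetting=0.999):
            self.grid = np.asarray(grid, dtype=float)
            self.h = bandwidth
            self.lam = forgetting
            self.density = np.zeros_like(self.grid)
            self.weight = 0.0

        def update(self, x):
            kernel = (np.exp(-0.5 * ((self.grid - x) / self.h) ** 2)
                      / (self.h * np.sqrt(2.0 * np.pi)))
            self.density = self.lam * self.density + kernel
            self.weight = self.lam * self.weight + 1.0

        def estimate(self):
            return self.density / max(self.weight, 1.0)

    # feed a stream whose distribution shifts halfway through
    rng = np.random.default_rng(4)
    kde = StreamingKDE(grid=np.linspace(-5.0, 10.0, 301))
    for t in range(20_000):
        kde.update(rng.normal(loc=0.0 if t < 10_000 else 5.0, scale=1.0))
    dens = kde.estimate()   # mass is now concentrated near 5 thanks to forgetting
    ```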

  10. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    Science.gov (United States)

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  11. Estimation of Enceladus Plume Density Using Cassini Flight Data

    Science.gov (United States)

    Wang, Eric K.; Lee, Allan Y.

    2011-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.

  12. Technical Note: Cortical thickness and density estimation from clinical CT using a prior thickness-density relationship

    Energy Technology Data Exchange (ETDEWEB)

    Humbert, Ludovic, E-mail: ludohumberto@gmail.com [Galgo Medical, Barcelona 08036 (Spain); Hazrati Marangalou, Javad; Rietbergen, Bert van [Orthopaedic Biomechanics, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB (Netherlands); Río Barquero, Luis Miguel del [CETIR Centre Medic, Barcelona 08029 (Spain); Lenthe, G. Harry van [Biomechanics Section, KU Leuven–University of Leuven, Leuven 3001 (Belgium)

    2016-04-15

    Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the

  13. Using gravity data to estimate the density of surface rocks of Taiwan region

    Science.gov (United States)

    Lo, Y. T.; Horng-Yen, Y.

    2016-12-01

    Surface rock density is one of the important parameters in the terrain correction step for obtaining a Bouguer anomaly map. In past work, we obtained the Bouguer anomaly map using a single average correction density over a wide region of the study area. In this study, we instead estimate a correction density at each observation point; a correction density consistent with the surface geology improves the accuracy of the Bouguer anomaly map. Two statistical approaches to estimating the correction density from gravity data are used: the g-H relationship and the Nettleton density profile method. These methods share two advantages. First, the density is calculated from existing gravity observations, avoiding the need to measure rock density directly. Second, once the absolute gravity value, latitude, longitude and elevation of each measuring point are stored in a database, that database and the terrain data can be used at any time to calculate the average rock density over any area. In addition, each measuring point and each terrain mesh cell are independent, so if more accurate gravity or terrain data become available, only the corresponding records need to be updated rather than rebuilding the entire database. According to the resulting density distribution map, the trends broadly follow the geological divisions of Taiwan. The average density of the backbone mountain region is about 2.5 to 2.6 g/cm^3, the average density of the eastern Central Mountain Range and Hsuehshan Range is about 2.3 to 2.5 g/cm^3, compared with 2.1-2.3 g/cm^3 for the western foothills and 1.8 to 2.0 g/cm^3 for the western plains.
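
    Of the two approaches named above, Nettleton's profile method reduces to choosing the correction density that decorrelates the Bouguer anomaly from topography. Below is a minimal 1-D sketch under the simple-slab Bouguer term, with synthetic profile data; it is not the authors' workflow.

    ```python
    import numpy as np

    def nettleton_density(g_fa, elev_m, rhos=np.arange(1.8, 3.01, 0.01)):
        """Nettleton's method: pick the correction density that minimises the
        correlation between the Bouguer anomaly and topography along a profile.

        g_fa: free-air anomaly (mGal); elev_m: elevation (m); uses the simple
        Bouguer slab term 0.04193 * rho * h (mGal, rho in g/cm^3, h in m).
        """
        best_rho, best_corr = None, np.inf
        for rho in rhos:
            bouguer = g_fa - 0.04193 * rho * elev_m
            c = abs(np.corrcoef(bouguer, elev_m)[0, 1])
            if c < best_corr:
                best_rho, best_corr = rho, c
        return best_rho

    # synthetic profile over rock of density 2.45 g/cm^3 (illustrative only)
    rng = np.random.default_rng(5)
    h = 200.0 + 150.0 * np.sin(np.linspace(0.0, 3.0 * np.pi, 80)) + rng.normal(0, 5, 80)
    g_fa = 0.04193 * 2.45 * h + rng.normal(0, 0.3, 80)
    print(nettleton_density(g_fa, h))   # close to 2.45
    ```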

  14. Variational estimation of the drift for stochastic differential equations from the empirical density

    Science.gov (United States)

    Batz, Philipp; Ruttor, Andreas; Opper, Manfred

    2016-08-01

    We present a method for the nonparametric estimation of the drift function of certain types of stochastic differential equations from the empirical density. It is based on a variational formulation of the Fokker-Planck equation. The minimization of an empirical estimate of the variational functional using kernel based regularization can be performed in closed form. We demonstrate the performance of the method on second order, Langevin-type equations and show how the method can be generalized to other noise models.
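
    The variational, kernel-regularized estimator described above is more general; the sketch below shows only the simplest special case it contains: for a 1-D overdamped process dX = f(X)dt + sigma dW observed in its stationary state, the drift can be read off the empirical density as f(x) = (sigma^2/2) d/dx ln p(x). Names and the toy data are illustrative.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def drift_from_density(samples, grid, sigma):
        """f(x) = (sigma^2 / 2) * d/dx log p(x), with p estimated by a Gaussian KDE
        and the derivative taken by finite differences."""
        log_p = np.log(gaussian_kde(samples)(grid))
        return 0.5 * sigma**2 * np.gradient(log_p, grid)

    # toy Ornstein-Uhlenbeck check: f(x) = -x, sigma = 1, stationary density N(0, 1/2)
    rng = np.random.default_rng(6)
    samples = rng.normal(0.0, np.sqrt(0.5), size=50_000)
    grid = np.linspace(-1.5, 1.5, 101)
    f_hat = drift_from_density(samples, grid, sigma=1.0)
    print(f_hat[::20])   # should lie close to -grid[::20]
    ```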

  15. Variational estimation of the drift for stochastic differential equations from the empirical density

    CERN Document Server

    Batz, Philipp; Opper, Manfred

    2016-01-01

    We present a method for the nonparametric estimation of the drift function of certain types of stochastic differential equations from the empirical density. It is based on a variational formulation of the Fokker-Planck equation. The minimization of an empirical estimate of the variational functional using kernel based regularization can be performed in closed form. We demonstrate the performance of the method on second order, Langevin-type equations and show how the method can be generalized to other noise models.

  16. The Visualization and Analysis of POI Features under Network Space Supported by Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    YU Wenhao

    2015-01-01

    Full Text Available The distribution pattern and distribution density of urban facility POIs are of great significance in the fields of infrastructure planning and urban spatial analysis. Kernel density estimation, which is commonly used to express these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis and Voronoi-based methods) because it accounts for regional influence in line with the first law of geography. However, traditional kernel density estimation is based on Euclidean space, ignoring the fact that the service functions and interrelations of urban facilities operate along network path distances rather than conventional Euclidean distances. Hence, this research proposes a computational model of network kernel density estimation, as well as an extended model that incorporates additional constraints. This work also discusses the impact of the distance attenuation threshold and of the kernel height extremum on the representation of kernel density. A large-scale experiment on real data, analyzing different POI distribution patterns (random, sparse, regional-intensive and linear-intensive), examines the spatial distribution characteristics, influencing factors and service functions of POI infrastructure in the city.

  17. Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples

    CERN Document Server

    Lexa, Michael A; Thompson, John S

    2011-01-01

    This paper presents a novel power spectral density estimation technique for bandlimited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and applies to spectrally sparse and nonsparse power spectra alike. For sparse density functions, we apply compressed sensing theory and the resulting compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate compared to their noncompressive counterparts. Both compressive and noncompressive estimates, however, can be computed at arbitrarily low sampling rates. The estimator does not require signal reconstruction and can be directly obtained from solving either a least squares or a nonnegative least squares problem. The estimates are piecewise constant approximations whose resolutions (width of the piecewise constant segments) are controlled by the periodicity of the multi-coset sampling. The estimates are also statistically consistent. This method is wi...

  18. Bayesian error estimation in density-functional theory

    DEFF Research Database (Denmark)

    Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund

    2005-01-01

    We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies for molecules and solids. Fluctuations within the ensemble can then be used to estimate errors relative to experiment on calculated quantities such as binding energies, bond lengths, and vibrational frequencies. It is demonstrated that the error bars on energy differences may vary by orders of magnitude...

  19. Photo-z Estimation: An Example of Nonparametric Conditional Density Estimation under Selection Bias

    CERN Document Server

    Izbicki, Rafael; Freeman, Peter E

    2016-01-01

    Redshift is a key quantity for inferring cosmological model parameters. In photometric redshift estimation, cosmologists use the coarse data collected from the vast majority of galaxies to predict the redshift of individual galaxies. To properly quantify the uncertainty in the predictions, however, one needs to go beyond standard regression and instead estimate the full conditional density f(z|x) of a galaxy's redshift z given its photometric covariates x. The problem is further complicated by selection bias: usually only the rarest and brightest galaxies have known redshifts, and these galaxies have characteristics and measured covariates that do not necessarily match those of more numerous and dimmer galaxies of unknown redshift. Unfortunately, there is not much research on how to best estimate complex multivariate densities in such settings. Here we describe a general framework for properly constructing and assessing nonparametric conditional density estimators under selection bias, and for combining two o...

  20. Covariance and correlation estimation in electron-density maps.

    Science.gov (United States)

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  1. Density estimation of Yangtze finless porpoises using passive acoustic sensors and automated click train detection.

    Science.gov (United States)

    Kimura, Satoko; Akamatsu, Tomonari; Li, Songhai; Dong, Shouyue; Dong, Lijun; Wang, Kexiong; Wang, Ding; Arai, Nobuaki

    2010-09-01

    A method is presented to estimate the density of finless porpoises using stationed passive acoustic monitoring. The number of click trains detected by stereo acoustic data loggers (A-tag) was converted to an estimate of the density of porpoises. First, an automated off-line filter was developed to detect a click train among noise, and the detection and false-alarm rates were calculated. Second, a density estimation model was proposed. The cue-production rate was measured by biologging experiments. The probability of detecting a cue and the area size were calculated from the source level, beam patterns, and a sound-propagation model. The effect of group size on the cue-detection rate was examined. Third, the proposed model was applied to estimate the density of finless porpoises at four locations from the Yangtze River to the inside of Poyang Lake. The estimated mean density of porpoises in a day decreased from the main stream to the lake. Long-term monitoring over 466 days from June 2007 to May 2009 showed that the density varied between 0 and 4.79 porpoises/km². However, the density was less than 1 porpoise/km² during 94% of the period. These results suggest a potential gap and seasonal migration of the population in the bottleneck of Poyang Lake.

  2. Current Developments in Nuclear Density Functional Methods

    CERN Document Server

    Dobaczewski, J

    2010-01-01

    Density functional theory (DFT) became a universal approach to compute ground-state and excited configurations of many-electron systems held together by an external one-body potential in condensed-matter, atomic, and molecular physics. At present, the DFT strategy is also intensely studied and applied in the area of nuclear structure. The nuclear DFT, a natural extension of the self-consistent mean-field theory, is a tool of choice for computations of ground-state properties and low-lying excitations of medium-mass and heavy nuclei. Over the past thirty-odd years, a lot of experience was accumulated in implementing, adjusting, and using the density-functional methods in nuclei. This research direction is still extremely actively pursued. In particular, current developments concentrate on (i) attempts to improve the performance and precision delivered by the nuclear density-functional methods, (ii) derivations of density functionals from first principles rooted in the low-energy chromodynamics and effective th...

  3. Estimating cetacean population density using fixed passive acoustic sensors: an example with Blainville's beaked whales.

    Science.gov (United States)

    Marques, Tiago A; Thomas, Len; Ward, Jessica; DiMarzio, Nancy; Tyack, Peter L

    2009-04-01

    Methods are developed for estimating the size/density of cetacean populations using data from a set of fixed passive acoustic sensors. The methods convert the number of detected acoustic cues into animal density by accounting for (i) the probability of detecting cues, (ii) the rate at which animals produce cues, and (iii) the proportion of false positive detections. Additional information is often required for estimation of these quantities, for example, from an acoustic tag applied to a sample of animals. Methods are illustrated with a case study: estimation of Blainville's beaked whale density over a 6 day period in spring 2005, using an 82 hydrophone wide-baseline array located in the Tongue of the Ocean, Bahamas. To estimate the required quantities, additional data are used from digital acoustic tags, attached to five whales over 21 deep dives, where cues recorded on some of the dives are associated with those received on the fixed hydrophones. Estimated density was 25.3 or 22.5 animals/1000 km², depending on assumptions about false positive detections, with 95% confidence intervals 17.3-36.9 and 15.4-32.9. These methods are potentially applicable to a wide variety of marine and terrestrial species that are hard to survey using conventional visual methods.
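
    Quantities (i)-(iii) above enter a standard cue-counting estimator of the form D = n_c(1 - c) / (K π w² P T r); the sketch below evaluates it with placeholder numbers and is not the Tongue of the Ocean analysis.

    ```python
    import numpy as np

    def cue_count_density(n_cues, false_pos_prop, n_sensors, radius_km,
                          p_detect, time_hours, cue_rate_per_hour):
        """Cue-counting density estimator (animals per km^2):

            D = n_cues * (1 - c) / (K * pi * w^2 * P * T * r)

        n_cues: cues detected on all sensors; false_pos_prop: proportion of false
        positives (c); n_sensors: K; radius_km: w, radius within which the mean
        detection probability P applies; time_hours: T; cue_rate_per_hour: r,
        cues produced per animal per hour (e.g. from tag data).
        """
        monitored_area = n_sensors * np.pi * radius_km**2
        return (n_cues * (1.0 - false_pos_prop)
                / (monitored_area * p_detect * time_hours * cue_rate_per_hour))

    # placeholder numbers, purely for illustration
    d = cue_count_density(n_cues=2.4e6, false_pos_prop=0.2, n_sensors=82,
                          radius_km=8.0, p_detect=0.03, time_hours=144,
                          cue_rate_per_hour=1500)
    print(f"{d * 1000:.1f} animals per 1000 km^2")
    ```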

  4. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    Science.gov (United States)

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.

  5. A simple method for determining maize silage density on farms

    Directory of Open Access Journals (Sweden)

    Ana Maria Krüger

    2017-05-01

    Full Text Available Several methodologies have been tested to evaluate silage density, with direct methods most popular, whereas indirect methods that can be used under field conditions are still in development and improvement stages. This study aimed to establish relationships between estimates of maize silage density determined using a direct and an indirect method, in an endeavor to provide an alternative to direct measurement for use in the field. Measurements were performed on maize silage in 14 silos. The direct method involved the use of a metal cylinder with a saw-tooth cutting edge attached to a chainsaw to extract a core of silage. Density of the silage was determined taking into consideration the cylinder volume and dry matter weight of silage removed at 5 points on the silage face. With the indirect method, a digital penetrometer was used to estimate silage density by measuring the penetration resistance at 2 points adjacent to the spots where the silage cores were taken, i.e. 10 readings per silo. Values of penetration resistance (measured in MPa) were correlated with the values of silage mass (kg/m³) obtained by direct measurement through polynomial regression analysis. A positive quadratic relationship was observed between penetration resistance and silage density for both natural matter and dry matter (R² = 0.57 and R² = 0.80, respectively), showing that the penetrometer was a reasonably reliable and simple indirect method to determine the density of dry matter in maize silage. Further testing of the machine on other silos is needed to verify these results. Keywords: Ensiled matter, penetrometer, resistance, silos evaluation.

  6. Estimation of tiger densities in India using photographic captures and recaptures

    Science.gov (United States)

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 - 1.00. The estimated mean tiger densities ranged from 4.1 (SE hat= 1.31) to 11.7 (SE hat= 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
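
    The paper fits closed-population capture-recapture models to the photographic capture histories; as a much simpler illustration of the same abundance-to-density logic (not the estimators used in the study), the two-occasion Chapman estimator divided by an effectively sampled area:

    ```python
    def chapman_density(n1, n2, m2, area_km2):
        """Two-occasion Chapman estimator of abundance, converted to a density.

        n1, n2: individuals photo-captured in occasions 1 and 2; m2: individuals
        captured in both; area_km2: effectively sampled area.
        """
        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
        var_n = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
                 / ((m2 + 1) ** 2 * (m2 + 2)))
        return n_hat / area_km2, var_n ** 0.5 / area_km2

    # illustrative numbers only: 14 and 12 tigers identified in two sessions,
    # 9 seen in both, over an effectively sampled area of 200 km^2
    d_hat, se = chapman_density(14, 12, 9, 200.0)
    print(f"{d_hat * 100:.1f} +/- {se * 100:.1f} tigers/100 km^2")
    ```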

  7. Estimation of current density distribution under electrodes for external defibrillation

    Directory of Open Access Journals (Sweden)

    Papazov Sava P

    2002-12-01

    Full Text Available Background: Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators presents new demanding requirements for the structure of electrodes. Method and Results: Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion: The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise.

  8. Effect of Random Clustering on Surface Damage Density Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, M J; Feit, M D

    2007-10-29

    Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large aperture beam damage tests (>1 cm²) where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.

  9. A Concept of Approximated Densities for Efficient Nonlinear Estimation

    Directory of Open Access Journals (Sweden)

    Virginie F. Ruiz

    2002-10-01

    Full Text Available This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints. Thus, the densities created are of an exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters. The update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove to be superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.

  10. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    Science.gov (United States)

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  11. Application of Kernel Density Estimation in Lamb Wave-Based Damage Detection

    Directory of Open Access Journals (Sweden)

    Long Yu

    2012-01-01

    Full Text Available The present work concerns the estimation of the probability density function (p.d.f.) of measured data in Lamb wave-based damage detection. Although a number of studies have focused on consensus algorithms for combining the results of individual sensors, the p.d.f. of the measured data, which is the fundamental ingredient of any probability-based method, has so far been assigned from experience. An analysis of the noise-induced errors in the measured data shows that the type of distribution depends on the noise level. In the case of weak noise, the p.d.f. of the measured data can be treated as a normal distribution, and empirical methods give satisfactory estimates. In the case of strong noise, however, the p.d.f. is complex and does not belong to any common family of distributions, so nonparametric methods are needed. Kernel density estimation, the most popular nonparametric method, is therefore introduced. To demonstrate the performance of the kernel density estimation methods, a numerical model was built to generate Lamb wave signals, and three levels of white Gaussian noise were added to the simulated signals. The estimation results show that the nonparametric methods outperform the empirical methods in terms of accuracy.

  12. Effect of compression paddle tilt correction on volumetric breast density estimation.

    Science.gov (United States)

    Kallenberg, Michiel G J; van Gils, Carla H; Lokate, Mariëtte; den Heeten, Gerard J; Karssemeijer, Nico

    2012-08-21

    For the acquisition of a mammogram, a breast is compressed between a compression paddle and a support table. When compression is applied with a flexible compression paddle, the upper plate may be tilted, which results in variation in breast thickness from the chest wall to the breast margin. Paddle tilt has been recognized as a major problem in volumetric breast density estimation methods. In previous work, we developed a fully automatic method to correct the image for the effect of compression paddle tilt. In this study, we investigated in three experiments the effect of paddle tilt and its correction on volumetric breast density estimation. Results showed that paddle tilt considerably affected accuracy of volumetric breast density estimation, but that effect could be reduced by tilt correction. By applying tilt correction, a significant increase in correspondence between mammographic density estimates and measurements on MRI was established. We argue that in volumetric breast density estimation, tilt correction is both feasible and essential when mammographic images are acquired with a flexible compression paddle.

  13. Estimation Prospects of the Source Number Density of Ultra-high-energy Cosmic Rays

    OpenAIRE

    Takami, Hajime; Sato, Katsuhiko

    2007-01-01

    We discuss the possibility of accurately estimating the source number density of ultra-high-energy cosmic rays (UHECRs) using small-scale anisotropy in their arrival distribution. The arrival distribution has information on their source and source distribution. We calculate the propagation of UHE protons in a structured extragalactic magnetic field (EGMF) and simulate their arrival distribution at the Earth using our previously developed method. The source number density that can best reprodu...

  14. Bulk density estimation using a 3-dimensional image acquisition and analysis system

    Directory of Open Access Journals (Sweden)

    Heyduk Adam

    2016-01-01

    Full Text Available The paper presents a concept of dynamic bulk density estimation of a particulate matter stream using a 3-d image analysis system and a conveyor belt scale. The method of image acquisition should be matched to the type of scale. The paper presents some laboratory results of static bulk density measurements using the MS Kinect time-of-flight camera and OpenCV/Matlab software. Measurements were made for several different size classes.
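
    In this setup the dynamic bulk density is essentially the belt-scale mass flow divided by the volume flow recovered from the 3-D height profile; below is a minimal sketch with placeholder geometry and readings, not the authors' implementation.

    ```python
    import numpy as np

    def bulk_density(height_map_m, pixel_area_m2, belt_speed_m_s,
                     mass_flow_kg_s, scan_length_m):
        """Bulk density [kg/m^3] of the material stream on the belt.

        height_map_m: heights of material above the belt over one scan window of
        length scan_length_m along the belt; pixel_area_m2: footprint of one pixel;
        mass_flow_kg_s: simultaneous reading of the conveyor belt scale.
        """
        volume_in_window = height_map_m.sum() * pixel_area_m2              # m^3
        volume_flow = volume_in_window * belt_speed_m_s / scan_length_m    # m^3/s
        return mass_flow_kg_s / volume_flow

    # placeholder numbers: 0.5 m scan window, 5 mm grid, 12 cm of material
    h = np.full((100, 200), 0.12)
    rho = bulk_density(h, pixel_area_m2=0.005**2, belt_speed_m_s=1.5,
                       mass_flow_kg_s=270.0, scan_length_m=0.5)
    print(f"{rho:.0f} kg/m^3")
    ```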

  15. Method of high-density foil fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Blue, Craig A.; Sikka, Vinod K.; Ohriner, Evan K.

    2003-12-16

    A method for preparing flat foils having a high density includes the steps of mixing a powdered material with a binder to form a green sheet. The green sheet is exposed to a high intensity radiative source adapted to emit radiation of wavelengths corresponding to an absorption spectrum of the powdered material. The surface of the green sheet is heated while a lower sub-surface temperature is maintained. An apparatus for preparing a foil from a green sheet using a radiation source is also disclosed.

  16. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    Science.gov (United States)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, while allowing MLPQNA to be easily replaced by any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  17. Volumetric breast density estimation from full-field digital mammograms.

    NARCIS (Netherlands)

    Engeland, S. van; Snoeren, P.R.; Huisman, H.J.; Boetes, C.; Karssemeijer, N.

    2006-01-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast

  18. Finding and characterising WHIM structures using the luminosity density method

    Science.gov (United States)

    Nevalainen, Jukka; Liivamägi, L. J.; Tempel, E.; Branchini, E.; Roncarelli, M.; Giocoli, C.; Heinämäki, P.; Saar, E.; Bonamente, M.; Einasto, M.; Finoguenov, A.; Kaastra, J.; Lindfors, E.; Nurmi, P.; Ueda, Y.

    2016-10-01

    We have developed a new method to approach the missing baryons problem. We assume that the missing baryons reside in the form of the Warm Hot Intergalactic Medium, i.e. the WHIM. Our method consists of (a) detecting the coherent large scale structure in the spatial distribution of galaxies that traces the Cosmic Web and that in hydrodynamical simulations is associated with the WHIM, (b) mapping its luminosity into a galaxy luminosity density field, (c) using numerical simulations to relate the luminosity density to the density of the WHIM, (d) applying this relation to real data to trace the WHIM using the observed galaxy luminosities in the Sloan Digital Sky Survey and 2dF redshift surveys. In our application we find evidence for the WHIM along the line of sight to the Sculptor Wall, at redshifts consistent with the recently reported X-ray absorption line detections. Our indirect WHIM detection technique complements the standard method based on the detection of characteristic X-ray absorption lines, showing that the galaxy luminosity density is a reliable signpost for the WHIM. For this reason, our method could be applied to current galaxy surveys to optimise the observational strategies for detecting and studying the WHIM and its properties. Our estimates of the WHIM hydrogen column density N_H in Sculptor agree with those obtained via the X-ray analysis. Due to the additional N_H estimate, our method has the potential to improve the constraints on the physical parameters of the WHIM derived from X-ray absorption, and thus to improve the understanding of the missing baryons problem.

  19. Confidence estimates in simulation of phase noise or spectral density.

    Science.gov (United States)

    Ashby, Neil

    2017-02-13

    In this paper we apply the method of discrete simulation of power law noise, developed in [1], [3], [4], to the problem of simulating phase noise for a combination of power law noises. We derive analytic expressions for the probability of observing a value of phase noise L(f) or of any of the one-sided spectral densities S(f), Sy(f), or Sx(f), for arbitrary superpositions of power law noise.

  20. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    Science.gov (United States)

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
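
    A minimal sketch of the building block involved: the half-normal SCR encounter model evaluated on a distance matrix. In the "ecological distance" model described above, the Euclidean matrix is simply replaced by least-cost path distances over a resistance surface; all values below are illustrative, not the paper's estimates.

    ```python
    import numpy as np

    def encounter_prob(dist, p0=0.3, sigma=1.5):
        """Half-normal SCR encounter model: detection probability per occasion as a
        function of the activity-centre-to-trap distance."""
        return p0 * np.exp(-dist**2 / (2.0 * sigma**2))

    def euclidean(centers, traps):
        """Pairwise Euclidean distances between activity centres and traps."""
        return np.linalg.norm(centers[:, None, :] - traps[None, :, :], axis=-1)

    # In the "ecological distance" model, the Euclidean matrix is replaced by
    # least-cost path distances over a resistance surface (computable with, e.g.,
    # scipy.sparse.csgraph.dijkstra on a grid graph); the encounter model is unchanged.
    centers = np.array([[2.0, 3.0], [7.0, 1.0]])   # illustrative activity centres
    traps = np.array([[1.0, 1.0], [4.0, 4.0], [8.0, 2.0]])
    print(encounter_prob(euclidean(centers, traps)))
    ```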

  1. Estimation of bone mineral density by digital X-ray radiogrammetry: theoretical background and clinical testing

    DEFF Research Database (Denmark)

    Rosholm, A; Hyldstrup, L; Backsgaard, L

    2002-01-01

    A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in radius, ulna and the three middle metacarpal bones are identified and approximately 1800 geometrical measurements from these bon...

  2. Stochastic estimation of level density in nuclear shell-model calculations

    Directory of Open Access Journals (Sweden)

    Shimizu Noritaka

    2016-01-01

    Full Text Available A method for stochastically estimating the nuclear level density based on nuclear shell-model calculations is introduced. In order to count the number of eigenvalues of the shell-model Hamiltonian matrix, we perform a contour integral of the matrix element of a resolvent. The shifted block Krylov subspace method enables its efficient computation. Utilizing this method, the contamination of center-of-mass motion is clearly removed.

  3. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    Directory of Open Access Journals (Sweden)

    Alexander Richard Braczkowski

    Full Text Available Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km2) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  4. Free energy methods for efficient exploration of mixture posterior densities

    CERN Document Server

    Chopin, Nicolas; Stoltz, Gabriel

    2010-01-01

    Because of their multimodality, mixture posterior densities are difficult to sample with standard Markov chain Monte Carlo (MCMC) methods. We propose a strategy to enhance the sampling of MCMC in this context, using a biasing procedure which originates from computational statistical physics. The principle is first to choose a "reaction coordinate", that is, a direction in which the target density is multimodal. In a second step, the marginal log-density of the reaction coordinate is estimated; this quantity is called "free energy" in the computational statistical physics literature. To this end, we use adaptive biasing Markov chain algorithms which adapt their invariant distribution on the fly, in order to overcome sampling barriers along the chosen reaction coordinate. Finally, we perform an importance sampling step in order to remove the bias and recover the true posterior. A crucial point is the choice of the reaction coordinate. We show that a convenient and efficient reaction coordinate is the hyper-para...

  5. SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Ming HAN; Yuanyao DING

    2004-01-01

    This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.

  6. Failure Analysis of Wind Turbines by Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W.F.

    2013-01-01

    The aim of this study is to present an efficient and accurate method for estimation of the failure probability of wind turbine structures which work under turbulent wind load. The classical method for this is to fit one of the extreme value probability distribution functions to the extracted maxima....... This is not practical due to its excessive computational load. This problem can alternatively be tackled if the evolution of the probability density function (PDF) of the response process can be realized. The evolutionary PDF can then be integrated on the boundaries of the problem. For this reason we propose to use...... the Probability Density Evolution Method (PDEM). PDEM can alternatively be used to obtain the distribution of the extreme values of the response process by simulation. This approach requires less computational effort than integrating the evolution of the PDF; but may be less accurate. In this paper we present...

  7. Methods of Estimating Strategic Intentions

    Science.gov (United States)

    1982-05-01

    of events, coding categories. 2. Weighting Data: policy capturing, Bayesian methods, correlation and variance analysis. 3. Characterizing Data: memory aids, fuzzy sets, factor analysis. 4. Assessing Covariations: actuarial models, backcasting, bootstrapping. 5. Cause and Effect Assessment: causal search, causal analysis, search trees, stepping analysis, hypotheses, regression analysis. 6. Predictions: backcasting, bootstrapping, decision

  8. Some asymptotic results on density estimators by wavelet projections

    CERN Document Server

    Varron, Davit

    2012-01-01

    Let $(X_i)_{i\geq 1}$ be an i.i.d. sample on $\mathbb{R}^d$ having density $f$. Given a real function $\phi$ on $\mathbb{R}^d$ with finite variation and given an integer-valued sequence $(j_n)$, let $\hat{f}_n$ denote the estimator of $f$ by wavelet projection based on $\phi$ and with multiresolution level equal to $j_n$. We provide exact rates of almost sure convergence to 0 of the quantity $\sup_{x\in H} |\hat{f}_n(x)-\mathbb{E}(\hat{f}_n)(x)|$, when $n2^{-dj_n}/\log n \to \infty$ and $H$ is a given hypercube of $\mathbb{R}^d$. We then show that, if $n2^{-dj_n}/\log n \to c$ for a constant $c>0$, then the quantity $\sup_{x\in H} |\hat{f}_n(x)-f(x)|$ almost surely fails to converge to 0.
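
    For concreteness, the following hedged sketch implements the simplest linear wavelet projection density estimator, using Haar scaling functions for data assumed to lie in [0, 1); the resolution level j and the Beta-distributed test data are illustrative, and the general multiresolution setting of the paper is not reproduced.

    ```python
    import numpy as np

    def haar_projection_density(sample, j, grid):
        """Linear wavelet (projection) density estimator with Haar scaling functions.

        f_hat(x) = sum_k a_k * phi_{j,k}(x), where phi_{j,k}(x) = 2^{j/2} * 1[k <= 2^j x < k+1]
        and a_k = mean_i phi_{j,k}(X_i).  Data are assumed to lie in [0, 1).
        """
        sample = np.asarray(sample)
        n_bins = 2 ** j
        # Empirical scaling coefficients a_k (one per dyadic cell).
        counts = np.bincount(np.minimum((sample * n_bins).astype(int), n_bins - 1),
                             minlength=n_bins)
        coeffs = 2 ** (j / 2.0) * counts / sample.size
        # Evaluate the estimator on the grid.
        cells = np.minimum((np.asarray(grid) * n_bins).astype(int), n_bins - 1)
        return 2 ** (j / 2.0) * coeffs[cells]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = rng.beta(2.0, 5.0, size=5000)          # true density supported on [0, 1]
        xs = np.linspace(0.0, 0.999, 200)
        f_hat = haar_projection_density(data, j=5, grid=xs)
        print("integrates to ~1:", round(np.trapz(f_hat, xs), 2))
    ```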

  9. Importance of tree basic density in biomass estimation and associated uncertainties

    DEFF Research Database (Denmark)

    Njana, Marco Andrew; Meilby, Henrik; Eid, Tron

    2016-01-01

    Key message Aboveground and belowground tree basic densities varied between and within the three mangrove species. If appropriately determined and applied, basic density may be useful in estimation of tree biomass. Predictive accuracy of the common (i.e. multi-species) models including aboveground...... of sustainable forest management, conservation and enhancement of carbon stocks (REDD+) initiatives offer an opportunity for sustainable management of forests including mangroves. In carbon accounting for REDD+, it is required that carbon estimates prepared for monitoring reporting and verification schemes...... and examine uncertainties in estimation of tree biomass using indirect methods. Methods This study focused on three dominant mangrove species (Avicennia marina (Forssk.) Vierh, Sonneratia alba J. Smith and Rhizophora mucronata Lam.) in Tanzania. A total of 120 trees were destructively sampled for aboveground...

  10. Extended force density method and its expressions

    CERN Document Server

    Miki, Masaaki

    2011-01-01

    The objective of this work can be divided into two parts. The first is to propose an extension of the force density method (FDM) (H.J. Schek, 1974), a form-finding method for prestressed cable-net structures. The second is to present a review of various form-finding methods for tension structures, in relation to the extended FDM. In the first part, it is pointed out that the original FDM becomes useless when it is applied to prestressed structures that consist of combinations of both tension and compression members, whereas the FDM is usually advantageous in form-finding analysis of cable nets. To eliminate this limitation, a functional whose stationarity condition simply represents the FDM is first proposed. Additionally, the existence of a variational principle in the FDM is also indicated. Then, the FDM is extensively redefined by generalizing the formulation of the functional. As a result, the generalized functionals enable us to find the forms of tension structures that consist of combinatio...

  11. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    Science.gov (United States)

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used approach to map forest carbon density. However, the existence of mixed pixels often impedes improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraints, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm with and without the fraction images from the spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than the unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5%, and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
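
    A hedged sketch of the constrained linear spectral mixture analysis step (non-negative per-pixel fractions summing to one, estimated by constrained least squares) is given below; the endmember spectra, band count and optimizer choice are assumptions for illustration only, not the study's configuration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def unmix_constrained(pixel, endmembers):
        """Fully constrained linear unmixing: min ||E @ a - pixel||^2
        subject to a >= 0 and sum(a) == 1 (fractions of LULC types)."""
        n_end = endmembers.shape[1]
        a0 = np.full(n_end, 1.0 / n_end)
        res = minimize(
            lambda a: np.sum((endmembers @ a - pixel) ** 2),
            a0,
            method="SLSQP",
            bounds=[(0.0, 1.0)] * n_end,
            constraints=[{"type": "eq", "fun": lambda a: np.sum(a) - 1.0}],
        )
        return res.x

    if __name__ == "__main__":
        # Illustrative endmember spectra (bands x endmembers): e.g. forest, crop, water.
        E = np.array([[0.05, 0.10, 0.02],
                      [0.30, 0.45, 0.04],
                      [0.25, 0.35, 0.03],
                      [0.40, 0.20, 0.02]])
        true_frac = np.array([0.6, 0.3, 0.1])
        observed = E @ true_frac + 0.005 * np.random.default_rng(1).standard_normal(4)
        print(np.round(unmix_constrained(observed, E), 2))
    ```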

  12. Multiscale functional connectivity estimation on low-density neuronal cultures recorded by high-density CMOS Micro Electrode Arrays.

    Science.gov (United States)

    Maccione, Alessandro; Garofalo, Matteo; Nieus, Thierry; Tedesco, Mariateresa; Berdondini, Luca; Martinoia, Sergio

    2012-06-15

    We used electrophysiological signals recorded by CMOS Micro Electrode Arrays (MEAs) at high spatial resolution to estimate the functional-effective connectivity of sparse hippocampal neuronal networks in vitro, by applying a cross-correlation (CC) based method and ad hoc developed spatio-temporal filtering. Low-density cultures were recorded by a recently introduced CMOS-MEA device providing simultaneous multi-site acquisition at high spatial (21 μm inter-electrode separation) as well as high temporal resolution (8 kHz per channel). The method is applied to estimate functional connections in different cultures and is refined by applying spatio-temporal filters that allow pruning of those functional connections not compatible with signal propagation. This approach makes it possible to discriminate between possible causal influence and spurious co-activation, and to obtain detailed maps down to cellular resolution. Further, a thorough analysis of the link strengths and time delays (i.e., amplitude and peak position of the CC function) allows characterization of the inferred interconnected networks and supports a possible discrimination of fast mono-synaptic propagations and slow poly-synaptic pathways. By focusing on specific regions of interest we could observe and analyze microcircuits involving connections among a few cells. Finally, the use of the high-density MEA with low-density cultures, analyzed with the proposed approach, enables comparison of the inferred effective links with the network structure obtained by staining procedures.
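
    The following is a minimal, hedged sketch of the cross-correlation idea: a cross-correlogram of two binned spike trains whose peak lag suggests a putative directed functional connection. Bin width, lag window and the synthetic spike trains are illustrative assumptions; the spatio-temporal filtering described above is not included.

    ```python
    import numpy as np

    def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, max_lag_ms=50.0, duration_ms=60000.0):
        """Cross-correlogram of two spike trains (times in ms).

        Returns lags (ms) and a normalized cross-correlation; the lag of the
        peak suggests the delay of a putative functional connection a -> b.
        """
        n_bins = int(duration_ms / bin_ms)
        a = np.bincount((np.asarray(spikes_a) / bin_ms).astype(int), minlength=n_bins)[:n_bins]
        b = np.bincount((np.asarray(spikes_b) / bin_ms).astype(int), minlength=n_bins)[:n_bins]
        a = a - a.mean()
        b = b - b.mean()
        max_lag = int(max_lag_ms / bin_ms)
        lags = np.arange(-max_lag, max_lag + 1)
        cc = np.array([np.dot(a, np.roll(b, -lag)) for lag in lags])
        cc = cc / (np.std(a) * np.std(b) * n_bins + 1e-12)
        return lags * bin_ms, cc

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        pre = np.sort(rng.uniform(0, 60000, 3000))                        # presynaptic spike times (ms)
        post = np.concatenate([pre + 4.0, rng.uniform(0, 60000, 1500)])   # 4 ms delayed responses + noise
        lags, cc = cross_correlogram(pre, np.sort(post))
        print("peak lag (ms):", lags[np.argmax(cc)])
    ```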

  13. Productivity and population density estimates of the dengue vector mosquito Aedes aegypti (Stegomyia aegypti) in Australia.

    Science.gov (United States)

    Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A

    2013-09-01

    New mosquito control strategies centred on the modifying of populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities.

  14. Estimation of dislocation density from precession electron diffraction data using the Nye tensor.

    Science.gov (United States)

    Leff, A C; Weinberger, C R; Taheri, M L

    2015-06-01

    The Nye tensor offers a means to estimate the geometrically necessary dislocation density of a crystalline sample based on measurements of the orientation changes within individual crystal grains. In this paper, the Nye tensor theory is applied to precession electron diffraction automated crystallographic orientation mapping (PED-ACOM) data acquired using a transmission electron microscope (TEM). The resulting dislocation density values are mapped in order to visualize the dislocation structures present in a quantitative manner. These density maps are compared with other related methods of approximating local strain dependencies in dislocation-based microstructural transitions from orientation data. The effect of acquisition parameters on density measurements is examined. By decreasing the step size and spot size during data acquisition, an increasing fraction of the dislocation content becomes accessible. Finally, the method described herein is applied to the measurement of dislocation emission during in situ annealing of Cu in TEM in order to demonstrate the utility of the technique for characterizing microstructural dynamics.

  15. Optimisation of in-situ dry density estimation

    Directory of Open Access Journals (Sweden)

    Morvan Mathilde

    2016-01-01

    Full Text Available Nowadays, field experiments are mostly used to determine the resistance and settlement of a soil before building. The devices needed used to be heavy, so they could not be used in every situation. This is the reason why Gourves et al. (1998) developed a light dynamic penetrometer called Panda. For this penetrometer, a standardized hammer blow is applied to the head of the piston. For each blow, it measures the driving energy as well as the driving depth of the cone into the soil. The obtained penetrogram gives the variation of cone resistance with depth. For homogeneous soils, three parameters can be determined: the critical depth zc, the initial cone resistance qd0 and the cone resistance at depth qd1. In parallel to the improvement of this apparatus, research was carried out to obtain a relationship between the dry density of the soil and the cone resistance at depth qd1. Knowing the dry density of a soil makes it possible, for example, to evaluate compaction efficiency. To achieve this, a database of soils was initiated. Each of these soils was tested and classified using laboratory tests, among others grain size distribution, Proctor results and Atterberg limits. Penetrometer tests were also performed for three to five densities and for three to five water contents. Using this database, Chaigneau managed to obtain a logarithmic relation linking qd1 and dry density. However, this relation varies with the water content. This article presents our recent research on a means to obtain a unified relation using water content, degree of saturation or suction. To this end, we first studied the response of the CNR silt as a function of degree of saturation and water content. Its water retention curve was determined using the filter paper method, so that suction could be obtained. We then checked the conclusions of this study against seven soils of the database to validate our hypotheses.
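
    As a hedged illustration of the kind of relation discussed above, the sketch below fits a logarithmic relation between cone resistance qd1 and dry density at a fixed water content; the calibration values and the resulting coefficients are made up for the example and are not those of the cited database.

    ```python
    import numpy as np

    # Illustrative calibration data: cone resistance qd1 (MPa) and dry density (g/cm^3)
    # measured at a fixed water content.  Values are invented for the example.
    qd1 = np.array([0.8, 1.5, 2.3, 3.6, 5.2, 7.5, 10.4])
    rho_d = np.array([1.52, 1.60, 1.66, 1.73, 1.79, 1.85, 1.90])

    # Fit rho_d = a + b * ln(qd1), the form of relation described above.
    b, a = np.polyfit(np.log(qd1), rho_d, 1)
    print(f"rho_d ~ {a:.3f} + {b:.3f} ln(qd1)")

    # Predict the dry density for a new penetrogram value, e.g. qd1 = 4.0 MPa.
    print("predicted dry density:", round(a + b * np.log(4.0), 3), "g/cm^3")
    ```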

  16. An Independent Component Analysis Algorithm through Solving Gradient Equation Combined with Kernel Density Estimation

    Institute of Scientific and Technical Information of China (English)

    XUE Yun-feng; WANG Yu-jia; YANG Jie

    2009-01-01

    A new algorithm for linear instantaneous independent component analysis is proposed, based on maximizing the log-likelihood contrast function, which can be changed into a gradient equation. An iterative method is introduced to solve this equation efficiently. The unknown probability density functions, as well as their first and second derivatives, in the gradient equation are estimated by the kernel density method. Computer simulations on artificially generated signals and gray-scale natural scene images confirm the efficiency and accuracy of the proposed algorithm.

  17. Stochastic Estimation of Nuclear Level Density in the Nuclear Shell Model: An Application to Parity-Dependent Level Density in $^{58}$Ni

    CERN Document Server

    Shimizu, Noritaka; Futamura, Yasunori; Sakurai, Tetsuya; Mizusaki, Takahiro; Otsuka, Takaharu

    2016-01-01

    We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of $J^\\pi=2^+$ and $2^-$ states in $^{58}$Ni in a unified manner.

  18. Stochastic estimation of nuclear level density in the nuclear shell model: An application to parity-dependent level density in 58Ni

    Directory of Open Access Journals (Sweden)

    Noritaka Shimizu

    2016-02-01

    Full Text Available We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ=2+ and 2− states in 58Ni in a unified manner.

  19. Estimated global nitrogen deposition using NO2 column density

    Science.gov (United States)

    Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

    2013-01-01

    Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies of nitrogen deposition have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from the Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides us with a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use the principal component regression method to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. In terms of the accuracy of the simulation, about 70% of the land area of the Earth passed a significance test of the regression. In addition, NO2 column density has a significant influence on the regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m−2 yr−1 from 1996 to 2009 and is increasing at about 1% per year. Our simulated results show that China, Europe, and the USA are the three hotspots of nitrogen deposition, in agreement with previous research findings. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m−2 yr−1 and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposition is also increasing. With N emissions continuing to increase in the future, the area whose ecosystems are affected by high levels of nitrogen deposition will increase.
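
    A minimal, hedged sketch of principal component regression is shown below (standardize the predictors, regress on the leading principal components, predict for new cases); the synthetic "NO2 column plus meteorology" predictors and the component count are illustrative assumptions, not the authors' data or settings.

    ```python
    import numpy as np

    def pcr_fit_predict(X, y, X_new, n_components=2):
        """Principal component regression: regress y on the leading principal
        components of the standardized predictor matrix X."""
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        Z = (X - mu) / sigma
        # Principal directions from the SVD of the standardized predictors.
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        components = Vt[:n_components].T              # (n_features, n_components)
        scores = Z @ components
        coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), scores]), y, rcond=None)
        scores_new = ((X_new - mu) / sigma) @ components
        return np.column_stack([np.ones(len(X_new)), scores_new]) @ coef

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        n = 200
        # Synthetic predictors: NO2 column, precipitation, temperature (arbitrary units).
        X = rng.normal(size=(n, 3)) * [1.0, 0.5, 2.0] + [5.0, 3.0, 15.0]
        y = 0.06 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(scale=0.02, size=n)  # "deposition"
        pred = pcr_fit_predict(X[:150], y[:150], X[150:])
        print("RMSE on held-out samples:", round(float(np.sqrt(np.mean((pred - y[150:]) ** 2))), 3))
    ```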

  20. Non-destructive methods for estimating leaf area density in mango

    Directory of Open Access Journals (Sweden)

    Mario Zortéa Antunes Junior

    2009-12-01

    Full Text Available The objective of this work was to estimate the number of leaves on branches of the canopy of mango cultivars and to estimate the leaf area density using, respectively, an allometric relation and a light interception model. The work was carried out with the cultivars Alfa, Roxa and Malind at the experimental farm of the Universidade Federal de Mato Grosso, in the municipality of Santo Antônio do Leverger, MT, Brazil. The equations tested for determining the number of leaves performed very well, with confidence indexes ranging from 0.85 to 0.94, and can be used as an alternative for estimating the leaf area of the three cultivars. The light interception model also performed well in estimating leaf area density, with confidence indexes ranging from 0.97 to 0.99 and from 0.68 to 0.95 for the Roxa and Malind mango cultivars, respectively.
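
    As a hedged illustration of a light-interception approach, the sketch below inverts a Beer-Lambert-type model to obtain leaf area density from radiation readings above and below a canopy layer; the extinction coefficient, path length and readings are assumptions, and the model is generic rather than the one fitted in the study.

    ```python
    import numpy as np

    def leaf_area_density(I_below, I_above, k=0.5, path_length=1.0):
        """Invert a Beer-Lambert light-interception model:
            I_below = I_above * exp(-k * LAD * path_length)
        so  LAD = -ln(I_below / I_above) / (k * path_length)   [m^2 leaf / m^3 canopy]
        k is the extinction coefficient (assumed), path_length the canopy depth crossed."""
        return -np.log(np.asarray(I_below) / np.asarray(I_above)) / (k * path_length)

    if __name__ == "__main__":
        # Illustrative PAR readings above and below a 1.5 m deep canopy layer.
        above = np.array([1500.0, 1480.0, 1510.0])
        below = np.array([420.0, 510.0, 380.0])
        lad = leaf_area_density(below, above, k=0.5, path_length=1.5)
        print("estimated leaf area density:", np.round(lad, 2))
    ```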

  1. Line Transect and Triangulation Surveys Provide Reliable Estimates of the Density of Kloss' Gibbons (Hylobates klossii) on Siberut Island, Indonesia.

    Science.gov (United States)

    Höing, Andrea; Quinten, Marcel C; Indrawati, Yohana Maria; Cheyne, Susan M; Waltert, Matthias

    2013-02-01

    Estimating population densities of key species is crucial for many conservation programs. Density estimates provide baseline data and enable monitoring of population size. Several different survey methods are available, and the choice of method depends on the species and study aims. Few studies have compared the accuracy and efficiency of different survey methods for large mammals, particularly for primates. Here we compare estimates of density and abundance of Kloss' gibbons (Hylobates klossii) using two of the most common survey methods: line transect distance sampling and triangulation. Line transect surveys (survey effort: 155.5 km) produced a total of 101 auditory and visual encounters and a density estimate of 5.5 gibbon clusters (groups or subgroups of primate social units)/km(2). Triangulation conducted from 12 listening posts during the same period revealed a similar density estimate of 5.0 clusters/km(2). Coefficients of variation of cluster density estimates were slightly higher from triangulation (0.24) than from line transects (0.17), giving triangulation somewhat lower precision for detecting changes in cluster densities, although the triangulation method may also be appropriate.

  2. Age Estimation Methods in Forensic Odontology

    Directory of Open Access Journals (Sweden)

    Phuwadon Duangto

    2016-12-01

    Full Text Available Forensically, age estimation is a crucial step for biological identification. Currently, there are many methods, with variable accuracy, to predict the age of dead or living persons, such as physical examination, radiographs of the left hand, and dental assessment. Age estimation using radiographic tooth development has been found to be an accurate method because it is mainly genetically influenced and less affected by nutritional and environmental factors. The Demirjian et al. method has long been the most commonly used radiological technique for dental age estimation in many populations. This method, based on tooth developmental changes, is easy to apply since the different stages of tooth development are clearly defined. The aim of this article is to describe age estimation using tooth development, with a focus on the Demirjian et al. method.

  3. Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images

    Science.gov (United States)

    Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.

    2008-03-01

    Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
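
    A minimal, hedged sketch of the thresholding idea (percent density as the fraction of dense voxels within the segmented breast across the reconstructed volume) is given below; the synthetic volume, mask and threshold are illustrative assumptions, not the semi-automated procedure used in the study.

    ```python
    import numpy as np

    def percent_density(volume, breast_mask, dense_threshold):
        """Percent density = 100 * (# dense voxels) / (# breast voxels),
        with 'dense' defined by a fixed intensity threshold applied to every slice."""
        breast_voxels = volume[breast_mask]
        dense = np.count_nonzero(breast_voxels >= dense_threshold)
        return 100.0 * dense / breast_voxels.size

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        # Synthetic "reconstructed DBT volume": fatty background with brighter dense regions.
        vol = rng.normal(loc=100.0, scale=10.0, size=(40, 256, 256))
        vol[:, 60:140, 60:140] += 60.0                     # a block of dense tissue
        mask = np.ones_like(vol, dtype=bool)
        mask[:, :, :20] = False                            # exclude background / pectoral area
        print("PD estimate (%):", round(percent_density(vol, mask, dense_threshold=140.0), 1))
    ```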

  4. Density Estimation for Protein Conformation Angles Using a Bivariate von Mises Distribution and Bayesian Nonparametrics.

    Science.gov (United States)

    Lennox, Kristin P; Dahl, David B; Vannucci, Marina; Tsai, Jerry W

    2009-06-01

    Interest in predicting protein backbone conformational angles has prompted the development of modeling and inference procedures for bivariate angular distributions. We present a Bayesian approach to density estimation for bivariate angular data that uses a Dirichlet process mixture model and a bivariate von Mises distribution. We derive the necessary full conditional distributions to fit the model, as well as the details for sampling from the posterior predictive distribution. We show how our density estimation method makes it possible to improve current approaches for protein structure prediction by comparing the performance of the so-called "whole" and "half" position distributions. Current methods in the field are based on whole position distributions, as density estimation for the half positions requires techniques, such as ours, that can provide good estimates for small datasets. With our method we are able to demonstrate that half position data provides a better approximation for the distribution of conformational angles at a given sequence position, therefore providing increased efficiency and accuracy in structure prediction.

  5. Wavelet-based density estimation for noise reduction in plasma simulations using particles

    Science.gov (United States)

    van yen, Romain Nguyen; del-Castillo-Negrete, Diego; Schneider, Kai; Farge, Marie; Chen, Guangye

    2010-04-01

    For given computational resources, the accuracy of plasma simulations using particles is mainly limited by the noise due to limited statistical sampling in the reconstruction of the particle distribution function. A method based on wavelet analysis is proposed and tested to reduce this noise. The method, known as wavelet-based density estimation (WBDE), was previously introduced in the statistical literature to estimate probability densities given a finite number of independent measurements. Its novel application to plasma simulations can be viewed as a natural extension of the finite size particles (FSP) approach, with the advantage of estimating more accurately distribution functions that have localized sharp features. The proposed method preserves the moments of the particle distribution function to a good level of accuracy, has no constraints on the dimensionality of the system, does not require an a priori selection of a global smoothing scale, and is able to adapt locally to the smoothness of the density based on the given discrete particle data. Moreover, the computational cost of the denoising stage is of the same order as one time step of an FSP simulation. The method is compared with a recently proposed proper orthogonal decomposition based method, and it is tested with three particle data sets involving different levels of collisionality and interaction with external and self-consistent fields.

  6. Kernel Density Feature Points Estimator for Content-Based Image Retrieval

    CERN Document Server

    Zuva, Tranos; Ojo, Sunday O; Ngwira, Seleman M

    2012-01-01

    Research is taking place to find effective algorithms for content-based image representation and description. There is a substantial number of algorithms available that use visual features (color, shape, texture). The shape feature has attracted so much attention from researchers that there are many shape representation and description algorithms in the literature. These shape image representation and description algorithms are usually not application independent or robust, making them undesirable for generic shape description. This paper presents an object shape representation using a Kernel Density Feature Points Estimator (KDFPE). In this method, the density of feature points within defined rings around the centroid of the image is obtained. The KDFPE is then applied to the vector of the image. KDFPE is invariant to translation, scale and rotation. This method of image representation shows an improved retrieval rate when compared to the Density Histogram Feature Points (DHFP) method. Analytic analysis is done to justify our m...
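
    The core idea, the density of feature points within rings around the centroid, can be sketched as follows; the synthetic point set, ring count and normalization are illustrative assumptions rather than the KDFPE algorithm itself.

    ```python
    import numpy as np

    def ring_density_descriptor(points, n_rings=8):
        """Density of feature points within concentric rings around the centroid.

        Radii are normalized by the maximum point distance, which makes the
        descriptor invariant to translation and scale; normalizing the counts to a
        distribution makes it invariant to rotation as well."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        r = np.linalg.norm(pts - centroid, axis=1)
        r = r / (r.max() + 1e-12)
        hist, _ = np.histogram(r, bins=n_rings, range=(0.0, 1.0))
        return hist / hist.sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        shape_pts = rng.normal(size=(400, 2))
        # Translating, scaling and rotating the points leaves the descriptor unchanged.
        theta = 0.7
        R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        transformed = 3.0 * shape_pts @ R.T + [10.0, -4.0]
        print(np.allclose(ring_density_descriptor(shape_pts), ring_density_descriptor(transformed)))
    ```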

  7. Estimation of Plasma Density by Surface Plasmons for Surface-Wave Plasmas

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhao-Quan; LIU Ming-Hai; LAN Chao-Hui; CHEN Wei; LUO Zhi-Qing; HU Xi-Wei

    2008-01-01

    An estimation method of plasma density based on surface plasmon theory for surface-wave plasmas is proposed. The number of standing waves is obtained directly from the discharge image, and the propagation constant is calculated from the trim size of the apparatus; the plasma density can then be determined, with a value of 9.1 × 10^17 m^-3. The plasma density measured using a Langmuir probe is 8.1 × 10^17 m^-3, which is very close to the value predicted by surface plasmon theory. Numerical simulation with the finite-difference time-domain (FDTD) method is also used to check the number of standing waves. All results are consistent between the theoretical analysis and the experimental measurement.

  8. Digital Forensics Analysis of Spectral Estimation Methods

    CERN Document Server

    Mataracioglu, Tolga

    2011-01-01

    Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. In today's world, it is widely used in order to secure information. In this paper, the traditional spectral estimation methods are introduced. The performance of each method is examined by comparing all of the spectral estimation methods, and from these performance analyses a brief summary of the pros and cons of the spectral estimation methods is given. We also give a steganography demo by hiding information in a sound signal and manage to recover the information (i.e., the true frequency of the information signal) from the sound by means of the spectral estimation methods.
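
    As a hedged demo in the same spirit, the sketch below uses one classical spectral estimator (Welch's averaged periodogram) to recover the frequency of a weak tone buried in noise; the sampling rate, tone frequency and noise level are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 8000.0                        # sampling rate (Hz), illustrative
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(6)
    # A weak "hidden" tone at 1234 Hz buried in noise, standing in for the cover signal.
    signal = 0.2 * np.sin(2 * np.pi * 1234.0 * t) + rng.standard_normal(t.size)

    # Welch's method averages periodograms of overlapping segments to reduce variance.
    freqs, psd = welch(signal, fs=fs, nperseg=2048)
    print("estimated tone frequency:", round(freqs[np.argmax(psd)], 1), "Hz")
    ```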

  9. ESTIMATION OF THE NUMBER OF CORRELATED SOURCES WITH COMMON FREQUENCIES BASED ON POWER SPECTRAL DENSITY

    Institute of Scientific and Technical Information of China (English)

    LI Ning; SHI Tielin

    2007-01-01

    Blind source separation and estimation of the number of sources usually demand that the number of sensors be greater than or equal to that of the sources, which, however, is very difficult to satisfy for complex systems. A new estimation method based on the power spectral density (PSD) is presented. When the relation between the number of sensors and that of sources is unknown, the PSD matrix is first obtained from the ratios of the PSDs of the observation signals, and then the bound on the number of correlated sources with common frequencies can be estimated by comparing the column vectors of the PSD matrix. The effectiveness of the proposed method is verified by theoretical analysis and experiments, and the influence of noise on the estimation of the number of sources is simulated.

  10. Estimation of electrical conductivity distribution within the human head from magnetic flux density measurement.

    Science.gov (United States)

    Gao, Nuo; Zhu, S A; He, Bin

    2005-06-01

    We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.

  11. Estimating the mass density of neutral gas at $z < 1$

    CERN Document Server

    Natarajan, P; Natarajan, Priyamvada; Pettini, Max

    1997-01-01

    We use the relationships between galactic HI mass and B-band luminosity determined by Rao & Briggs to recalculate the mass density of neutral gas at the present epoch based on more recent measures of the galaxy luminosity function than were available to those authors. We find a value of $\Omega_{\rm gas}(z=0)$ suggesting that this quantity is now reasonably secure. We then show that, if the scaling between H I mass and B-band luminosity has remained approximately constant since $z = 1$, the evolution of the luminosity function found by the Canada-France redshift survey translates to an increase of $\Omega_{\rm gas}$ consistent with that obtained quite independently from consideration of the luminosity function of Mg II absorbers at $z = 0.65$. By combining these new estimates with data from damped Ly$\alpha$ systems at higher redshift, it is possible to assemble a rough sketch of the evolution of $\Omega_{\rm gas}$ over the last 90% of the age of the universe. The consumption of H I gas with time is in broad agreement with models of chemical evolution which inclu...

  12. Estimating abundance and density of Amur tigers along the Sino-Russian border.

    Science.gov (United States)

    Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping

    2016-07-01

    As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11 400 km(2) state space, density estimates were 0.33 and 0.40 individuals/100 km(2) in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km(2) corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China.

  13. A novel deep learning-based approach to high accuracy breast density estimation in digital mammography

    Science.gov (United States)

    Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo

    2017-03-01

    Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in the background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable for learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissue. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. The image set was composed of the original mammogram and an eigen-image, which was able to capture the X-ray characteristics, despite the fact that CNNs are well known to effectively extract features from original images. The 100 test images that were not used in training the CNN were used to validate the performance. The correlation coefficient between the breast density estimates by the CNN and those from the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.

  14. A hybrid model of kernel density estimation and quantile regression for GEFCom2014 probabilistic load forecasting

    CERN Document Server

    Haben, Stephen

    2016-01-01

    We present a model for generating probabilistic forecasts by combining kernel density estimation (KDE) and quantile regression techniques, as part of the probabilistic load forecasting track of the Global Energy Forecasting Competition 2014. The KDE method is initially implemented with a time-decay parameter. We later improve this method by conditioning on the temperature or the period of the week variables to provide more accurate forecasts. Secondly, we develop a simple but effective quantile regression forecast. The novel aspects of our methodology are two-fold. First, we introduce symmetry into the time-decay parameter of the kernel density estimation based forecast. Second, we combine three probabilistic forecasts with different weights for different periods of the month.
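
    A minimal, hedged sketch of a time-decayed kernel density forecast is given below: past observations receive exponentially decaying weights and quantiles are read off the weighted KDE. The decay rate, bandwidth and synthetic load data are illustrative assumptions, not the competition model.

    ```python
    import numpy as np

    def weighted_kde_pdf(samples, weights, grid, bandwidth):
        """Gaussian kernel density estimate with observation weights (normalized to 1)."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        diffs = (np.asarray(grid)[:, None] - np.asarray(samples)[None, :]) / bandwidth
        kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
        return (kernels * w[None, :]).sum(axis=1) / bandwidth

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        # Historical loads for the same hour on past days; recent days drift upward.
        loads = 50.0 + 0.05 * np.arange(200) + rng.normal(scale=2.0, size=200)
        # Exponential time decay: the most recent observation has weight 1, older ones less.
        lam = 0.98
        weights = lam ** np.arange(200)[::-1]
        grid = np.linspace(40.0, 75.0, 300)
        pdf = weighted_kde_pdf(loads, weights, grid, bandwidth=1.5)
        # Quantiles of the forecast distribution via the discrete CDF on the grid.
        cdf = np.cumsum(pdf); cdf /= cdf[-1]
        print("median forecast:", round(float(np.interp(0.5, cdf, grid)), 1))
    ```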

  15. Global parameter estimation methods for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Poovathingal Suresh

    2010-08-01

    Full Text Available Abstract Background The importance of stochasticity in cellular processes having low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications such as analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter

  16. Extension of the statistical modal energy distribution analysis for estimating energy density in coupled subsystems

    Science.gov (United States)

    Totaro, N.; Guyader, J. L.

    2012-06-01

    The present article deals with an extension of the Statistical modal Energy distribution Analysis (SmEdA) method to estimate the kinetic and potential energy density in coupled subsystems. The SmEdA method uses the modal bases of the uncoupled subsystems and focuses on the modal energies rather than the global subsystem energies used in SEA (Statistical Energy Analysis). This method permits extending SEA to subsystems with low modal overlap or to localized excitations, as it does not assume the existence of modal energy equipartition. We demonstrate that, by using the modal energies of subsystems computed by SmEdA, it is possible to estimate the energy distribution within subsystems. This approach has the same advantages as standard SEA, as it uses very short calculations to analyze damping effects. The estimation of energy distribution from SmEdA is applied to an academic case and an industrial example.

  17. Estimation of bone mineral density by digital X-ray radiogrammetry: theoretical background and clinical testing

    DEFF Research Database (Denmark)

    Rosholm, A; Hyldstrup, L; Backsgaard, L

    2002-01-01

    A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in radius, ulna and the three middle metacarpal bones are identified and approximately 1800 geometrical measurements from these bones... X-ray absorptiometry (r = 0.86, p ...). Relative to this age-related loss, the reported short... sites and a precision that potentially allows for relatively short observation intervals.

  18. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  19. Kernel density estimation and marginalized-particle based probability hypothesis density filter for multi-target tracking

    Institute of Scientific and Technical Information of China (English)

    张路平; 王鲁平; 李飚; 赵明

    2015-01-01

    In order to improve the performance of the particle filter (PF) based probability hypothesis density (PHD) algorithm in terms of number estimation and state extraction of multiple targets, a new probability hypothesis density filter algorithm based on marginalized particles and kernel density estimation is proposed, which utilizes the idea of the marginalized particle filter to enhance the estimating performance of the PHD. The state variables are decomposed into linear and non-linear parts. The particle filter is adopted to predict and estimate the nonlinear states of the multi-target after dimensionality reduction, while the Kalman filter is applied to estimate the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states helps to reduce the estimation variance and improve the accuracy of target number estimation. Mean-shift kernel density estimation, which inherently searches for peak values via adaptive gradient ascent iterations, is introduced to cluster particles and extract target states; it is independent of the target number and can converge to the local peak position of the PHD distribution while avoiding errors due to inaccuracy in modeling and parameter estimation. Experiments show that the proposed algorithm can obtain higher tracking accuracy when using fewer sampling particles and has lower computational complexity compared with the PF-PHD.
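
    The mean-shift extraction step can be sketched as follows: weighted particles are moved uphill on the kernel density surface and the converged points are merged into a small set of modes taken as target states. The bandwidth, merge tolerance and two-target particle cloud are illustrative assumptions, not the proposed filter itself.

    ```python
    import numpy as np

    def mean_shift_modes(particles, weights, bandwidth=1.0, n_iter=50, merge_tol=0.5):
        """Move each particle uphill on the weighted kernel density surface and
        merge the converged points into a small set of modes (extracted states)."""
        pts = np.asarray(particles, dtype=float).copy()
        w = np.asarray(weights, dtype=float)
        data = np.asarray(particles, dtype=float)
        for _ in range(n_iter):
            d2 = ((pts[:, None, :] - data[None, :, :]) ** 2).sum(-1)
            k = w[None, :] * np.exp(-0.5 * d2 / bandwidth ** 2)
            pts = (k[:, :, None] * data[None, :, :]).sum(1) / k.sum(1, keepdims=True)
        modes = []
        for p in pts:                                  # merge points that converged together
            if not any(np.linalg.norm(p - m) < merge_tol for m in modes):
                modes.append(p)
        return np.array(modes)

    if __name__ == "__main__":
        rng = np.random.default_rng(8)
        # Particles approximating a PHD surface with two targets near (0, 0) and (6, 6).
        particles = np.vstack([rng.normal([0, 0], 0.7, (300, 2)),
                               rng.normal([6, 6], 0.7, (200, 2))])
        weights = np.full(len(particles), 1.0 / len(particles))
        print(np.round(mean_shift_modes(particles, weights), 1))
    ```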

  20. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    Science.gov (United States)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
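
    A hedged sketch of a nearest-neighbour density ratio estimator is given below: for each labelled training point, the radius of its k-th nearest neighbour in the unlabelled sample is used to compare local counts from the two samples. The value of k, the brute-force distance computation and the toy one-dimensional distributions are illustrative assumptions, not the paper's estimator or model selection procedure.

    ```python
    import numpy as np

    def knn_density_ratio(train, test, k=10):
        """Estimate w(x) = p_test(x) / p_train(x) at each training point.

        For a training point x, take the radius of its k-th nearest neighbour in the
        test sample, count the m training points inside that radius, and set
        w(x) = (n_train / n_test) * k / m.  Brute-force distances, fine for small data."""
        train = np.asarray(train, dtype=float)
        test = np.asarray(test, dtype=float)
        n_tr, n_te = len(train), len(test)
        d_te = np.linalg.norm(train[:, None, :] - test[None, :, :], axis=-1)
        radius = np.sort(d_te, axis=1)[:, k - 1]
        d_tr = np.linalg.norm(train[:, None, :] - train[None, :, :], axis=-1)
        m = np.maximum((d_tr <= radius[:, None]).sum(axis=1), 1)
        return (n_tr / n_te) * k / m

    if __name__ == "__main__":
        rng = np.random.default_rng(9)
        train = rng.normal(0.0, 1.0, size=(1000, 1))       # labelled (biased) sample
        test = rng.normal(1.0, 1.0, size=(1000, 1))        # unlabelled target sample
        w = knn_density_ratio(train, test, k=20)
        # The true ratio increases with x here, so the weights should do the same.
        order = np.argsort(train[:, 0])
        print("weights rise with x:", bool(w[order][:200].mean() < w[order][-200:].mean()))
    ```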

  1. Probabilistic density function estimation of geotechnical shear strength parameters using the second Chebyshev orthogonal polynomial

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A method to estimate the probability density function (PDF) of shear strength parameters was proposed. The second Chebyshev orthogonal polynomial (SCOP) combined with sample moments (the origin moments) was used to approximate the PDF of the parameters. A χ2 test was adopted to verify the validity of the method. It is distribution-free because no classical theoretical distributions were assumed in advance, and the inference result provides a universal form of probability density curve. The six most commonly used theoretical distributions, namely the normal, lognormal, extreme value I, gamma, beta and Weibull distributions, were used to verify the SCOP method. An example based on observed data for the cohesion c of a silty clay is presented for illustrative purposes. The results show that the acceptance levels in SCOP are all smaller than those in the classical finite comparative method and that the SCOP function is more accurate and effective in the reliability analysis of geotechnical engineering.
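
    As a hedged illustration of the general idea (not necessarily the exact SCOP construction), the sketch below approximates a density on a bounded interval by a series in Chebyshev polynomials of the second kind, with coefficients obtained from sample averages (moment-type estimates); the interval, truncation order and synthetic cohesion data are assumptions.

    ```python
    import numpy as np

    def chebyshev_u(order, x):
        """Chebyshev polynomials of the second kind U_0..U_order evaluated at x."""
        x = np.asarray(x, dtype=float)
        U = np.empty((order + 1,) + x.shape)
        U[0] = 1.0
        if order >= 1:
            U[1] = 2.0 * x
        for k in range(1, order):
            U[k + 1] = 2.0 * x * U[k] - U[k - 1]
        return U

    def chebyshev_moment_density(sample, lo, hi, order=8):
        """Approximate a PDF as sqrt(1-t^2) * sum_k c_k U_k(t) on t in [-1, 1],
        with c_k = (2/pi) * E[U_k(T)] estimated from the sample (moment-based).
        Returns a function of x on the original interval [lo, hi]."""
        t_sample = (2.0 * np.asarray(sample) - (lo + hi)) / (hi - lo)
        coeffs = 2.0 / np.pi * chebyshev_u(order, t_sample).mean(axis=1)

        def pdf(x):
            t = (2.0 * np.asarray(x) - (lo + hi)) / (hi - lo)
            series = np.tensordot(coeffs, chebyshev_u(order, t), axes=1)
            # Jacobian 2/(hi-lo) maps the density back to the original x scale.
            return np.clip(np.sqrt(np.maximum(1.0 - t ** 2, 0.0)) * series, 0.0, None) * 2.0 / (hi - lo)

        return pdf

    if __name__ == "__main__":
        rng = np.random.default_rng(10)
        cohesion = rng.normal(25.0, 4.0, size=500)          # illustrative cohesion data (kPa)
        pdf = chebyshev_moment_density(cohesion, lo=10.0, hi=40.0, order=8)
        xs = np.linspace(10.0, 40.0, 400)
        print("integral ~ 1:", round(np.trapz(pdf(xs), xs), 2))
    ```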

  2. Uncertainty quantification techniques for population density estimates derived from sparse open source data

    Science.gov (United States)

    Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.

    2013-05-01

    The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  3. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Robert N [ORNL; White, Devin A [ORNL; Urban, Marie L [ORNL; Morton, April M [ORNL; Webster, Clayton G [ORNL; Stoyanov, Miroslav K [ORNL; Bright, Eddie A [ORNL; Bhaduri, Budhendra L [ORNL

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  4. A more appropriate white blood cell count for estimating malaria parasite density in Plasmodium vivax patients in northeastern Myanmar.

    Science.gov (United States)

    Liu, Huaie; Feng, Guohua; Zeng, Weilin; Li, Xiaomei; Bai, Yao; Deng, Shuang; Ruan, Yonghua; Morris, James; Li, Siman; Yang, Zhaoqing; Cui, Liwang

    2016-04-01

    The conventional method of estimating parasite densities employs an assumption of 8000 white blood cells (WBCs)/μl. However, due to leucopenia in malaria patients, this number appears to overestimate parasite densities. In this study, we assessed the accuracy of parasite densities estimated using this assumed WBC count in eastern Myanmar, where Plasmodium vivax has become increasingly prevalent. From 256 patients with uncomplicated P. vivax malaria, we estimated parasite density and counted WBCs by using an automated blood cell counter. It was found that WBC counts were not significantly different between patients of different gender, axillary temperature, and body mass index levels, whereas they were significantly different between age groups of patients and the time points of measurement. The median parasite densities calculated with the actual WBC counts (1903/μl) and the assumed WBC count of 8000/μl (2570/μl) were significantly different. We demonstrated that using the assumed WBC count of 8000 cells/μl to estimate parasite densities of P. vivax malaria patients in this area would lead to an overestimation. For P. vivax patients aged five years and older, an assumed WBC count of 5500/μl best estimated parasite densities. This study provides more realistic assumed WBC counts for estimating parasite densities in P. vivax patients from low-endemicity areas of Southeast Asia.
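    For context, the standard thick-smear conversion behind these numbers is shown below; the parasite and WBC counts in the usage lines are purely illustrative, not data from the study.

```python
def parasite_density_per_ul(parasites_counted, wbcs_counted, assumed_wbc_per_ul):
    """Standard thick-smear conversion: parasites/ul = parasites counted
    multiplied by the assumed WBC count and divided by the WBCs counted."""
    return parasites_counted * assumed_wbc_per_ul / wbcs_counted

# e.g. 120 parasites counted against 200 WBCs on the smear:
print(parasite_density_per_ul(120, 200, 8000))  # 4800.0/ul with the conventional assumption
print(parasite_density_per_ul(120, 200, 5500))  # 3300.0/ul with the count suggested above
```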

  5. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  6. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredictable ways. To reduce the computational cost, data streams are often studied through condensed representations, e.g., the probability density function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in highly/weakly curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running times). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insight into the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
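    A toy sketch of the "estimate at resampling points, interpolate in between" idea is given below. It is not KDE-Track itself: the adaptive placement of resampling points is omitted, the grid and bandwidth are fixed, and the class name and decay parameter are assumptions added for illustration.

```python
import numpy as np

class StreamingKDE1D:
    """Toy 1-D online density estimator in the spirit of KDE-Track: kernel
    sums are maintained at fixed resampling points and interpolated between
    them; a forgetting factor keeps the model up to date under drift."""

    def __init__(self, lo, hi, n_points=64, bandwidth=0.3, decay=0.999):
        self.grid = np.linspace(lo, hi, n_points)
        self.h, self.decay = bandwidth, decay
        self.sums = np.zeros(n_points)
        self.weight = 0.0                      # effective (decayed) sample size

    def update(self, x):
        # accumulate Gaussian kernel contributions at the resampling points
        self.sums = self.decay * self.sums + np.exp(-0.5 * ((self.grid - x) / self.h) ** 2)
        self.weight = self.decay * self.weight + 1.0

    def pdf(self, x):
        norm = self.weight * self.h * np.sqrt(2.0 * np.pi)
        return np.interp(x, self.grid, self.sums / norm)
```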

  7. Multivariate density estimation using dimension reducing information and tail flattening transformations for truncated or censored data

    DEFF Research Database (Denmark)

    Buch-Kromann, Tine; Nielsen, Jens

    2012-01-01

    This paper introduces a multivariate density estimator for truncated and censored data with special emphasis on extreme values based on survival analysis. A local constant density estimator is considered. We extend this estimator by means of tail flattening transformation, dimension reducing prio...

  8. Estimation of dislocation density from precession electron diffraction data using the Nye tensor

    Energy Technology Data Exchange (ETDEWEB)

    Leff, A.C. [Department of Materials Science & Engineering, Drexel University, Philadelphia, PA (United States); Weinberger, C.R. [Department of Mechanical Engineering and Mechanics, Drexel University, Philadelphia, PA (United States); Taheri, M.L., E-mail: mtaheri@coe.drexel.edu [Department of Materials Science & Engineering, Drexel University, Philadelphia, PA (United States)

    2015-06-15

    The Nye tensor offers a means to estimate the geometrically necessary dislocation density of a crystalline sample based on measurements of the orientation changes within individual crystal grains. In this paper, the Nye tensor theory is applied to precession electron diffraction automated crystallographic orientation mapping (PED-ACOM) data acquired using a transmission electron microscope (TEM). The resulting dislocation density values are mapped in order to visualize the dislocation structures present in a quantitative manner. These density maps are compared with other related methods of approximating local strain dependencies in dislocation-based microstructural transitions from orientation data. The effect of acquisition parameters on density measurements is examined. By decreasing the step size and spot size during data acquisition, an increasing fraction of the dislocation content becomes accessible. Finally, the method described herein is applied to the measurement of dislocation emission during in situ annealing of Cu in TEM in order to demonstrate the utility of the technique for characterizing microstructural dynamics. - Highlights: • Developed a method of mapping GND density using orientation mapping data from TEM. • As acquisition length-scale is decreased, all dislocations are considered GNDs. • Dislocation emission and corresponding grain rotation quantified.

  9. Methods for estimation loads transported by rivers

    Directory of Open Access Journals (Sweden)

    T. S. Smart

    1999-01-01

    Full Text Available Ten methods for estimating the loads of constituents in a river were tested using data from the River Don in North-East Scotland. By treating loads derived from flow and concentration data collected every two days as the truth to be predicted, the ten methods were assessed for use when concentration data are collected fortnightly or monthly by sub-sampling from the original data. Estimates of the coefficients of variation, bias and mean squared errors of the methods were compared; no method consistently outperformed all others, and different methods were appropriate for different constituents. The widely used interpolation methods can be improved upon substantially by modelling the relationship of concentration with flow or seasonality, but only if these relationships are strong enough.
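    Two generic representatives of the families compared here, an interpolation-type estimator and a flow-regression (rating-curve) estimator, are sketched below. They are illustrations rather than the paper's ten specific methods, and the function names are assumptions.

```python
import numpy as np

def load_flow_weighted(conc_sampled, flow_sampled, flow_all):
    """Interpolation-type estimator: flow-weighted mean concentration on the
    sampled days, scaled by the total flow of the full record."""
    c_fw = np.sum(conc_sampled * flow_sampled) / np.sum(flow_sampled)
    return c_fw * np.sum(flow_all)

def load_rating_curve(conc_sampled, flow_sampled, flow_all):
    """Regression-type estimator: fit log C = a + b log Q on the sampled days,
    then predict a concentration for every day of the flow record.  (A
    back-transformation bias correction would be added in practice.)"""
    b, a = np.polyfit(np.log(flow_sampled), np.log(conc_sampled), 1)
    return np.sum(np.exp(a + b * np.log(flow_all)) * flow_all)
```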

  10. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    Science.gov (United States)

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  11. Spectral density estimation for symmetric stable p-adic processes

    Directory of Open Access Journals (Sweden)

    Rachid Sabre

    2013-05-01

    Full Text Available Applications of p-adic numbers are becoming increasingly important, especially in the field of applied physics. The objective of this work is to study the estimation of the spectral density of p-adic stable processes. An estimator formed by smoothing the periodogram is constructed. It is shown that this estimator is asymptotically unbiased and consistent. Rates of convergence are also examined.

  12. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    Directory of Open Access Journals (Sweden)

    Marco Lombardo

    Full Text Available PURPOSE: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood-illuminated retinal images. METHODS: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degrees of temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and the foveal center, and the manual checking of the cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone densities estimated under the different sampling window conditions. RESULTS: The cone density declined with decreasing sampling area, and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with a hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, the presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. CONCLUSIONS: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi

  13. Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type

    Science.gov (United States)

    Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.

    2016-01-01

    Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted due to the cost in both laboratory work and labour, and in cases where the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied, using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international data sets across many studies, with their own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk densities for the different horizon types using local known bulk density data sets. The best performing of the pedotransfer functions were then selected for recalibration and validated again using the known data. The coefficient of determination for the predictions was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This then allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known data sets had the best predictions using the recalibrated and validated pedotransfer functions.
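    A minimal sketch of the calibration step is shown below: fitting a per-horizon pedotransfer function by least squares and reporting the coefficient of determination. The functional form (bulk density as a linear function of log organic carbon) and the predictor are assumptions chosen for illustration; the study recalibrates published functions per horizon type with additional predictors.

```python
import numpy as np

def fit_pedotransfer(organic_carbon_pct, bulk_density):
    """Fit rho_b = a + b * ln(OC) by least squares and return (a, b) and R^2."""
    X = np.column_stack([np.ones_like(organic_carbon_pct), np.log(organic_carbon_pct)])
    coef, *_ = np.linalg.lstsq(X, bulk_density, rcond=None)
    resid = bulk_density - X @ coef
    r2 = 1.0 - np.sum(resid**2) / np.sum((bulk_density - bulk_density.mean())**2)
    return coef, r2
```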

  14. Axonal and dendritic density field estimation from incomplete single-slice neuronal reconstructions

    Directory of Open Access Journals (Sweden)

    Jaap van Pelt

    2014-06-01

    Full Text Available Neuronal information processing in cortical networks critically depends on the organization of synaptic connectivity. Synaptic connections can form when axons and dendrites come in close proximity of each other. The spatial innervation of neuronal arborizations can be described by their axonal and dendritic density fields. Recently we showed that potential locations of synapses between neurons can be estimated from their overlapping axonal and dendritic density fields. However, deriving density fields from single-slice neuronal reconstructions is hampered by incompleteness because of cut branches. Here, we describe a method for recovering the lost axonal and dendritic mass. This so-called completion method is based on an estimation of the mass inside the slice and an extrapolation to the space outside the slice, assuming axial symmetry in the mass distribution. We validated the method using a set of neurons generated with our NETMORPH simulator. The model-generated neurons were artificially sliced and subsequently recovered by the completion method. Depending on slice thickness and arbor extent, branches that have lost their outside parents (orphan branches) may occur inside the slice. No longer connected to the contiguous structure of the sliced neuron, orphan branches result in an underestimation of neurite mass. For 300 µm thick slices, however, the validation showed a full recovery of dendritic and an almost full recovery of axonal mass. The completion method was applied to three experimental data sets of reconstructed rat cortical L2/3 pyramidal neurons. The results showed that in 300 µm thick slices intracortical axons lost about 50% and dendrites about 16% of their mass. The completion method can be applied to single-slice reconstructions as long as axial symmetry can be assumed in the mass distribution. This opens up the possibility of using incomplete neuronal reconstructions from open-access databases to determine population mean

  15. Statistical Method of Estimating Nigerian Hydrocarbon Reserves

    Directory of Open Access Journals (Sweden)

    Jeffrey O. Oseh

    2015-01-01

    Full Text Available Hydrocarbon reserves are basic to planning and investment decisions in the petroleum industry, so their proper estimation is of considerable importance in oil and gas production. The estimation of hydrocarbon reserves in the Niger Delta Region of Nigeria has been very popular, and very successful, in the Nigerian oil and gas industry for the past 50 years. In order to fully estimate the hydrocarbon potential of the Nigerian Niger Delta Region, a clear understanding of the reservoir geology and production history should be acknowledged. Reserves estimation of most fields is often performed through material balance and volumetric methods. Alternatively, a simple estimation model and least squares regression may be useful or appropriate. The model is based on extrapolation of the additional reserves due to the exploratory drilling trend and of the additional reserve factor due to revision of the existing fields. This estimation model, used alongside linear regression analysis in this study, gives improved estimates for the fields considered and can hence be used in other Nigerian fields with a recent production history.

  16. Parameter estimation methods for chaotic intercellular networks.

    Science.gov (United States)

    Mariño, Inés P; Ullner, Ekkehard; Zaikin, Alexey

    2013-01-01

    We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as accelerated random search method, and the other two techniques are based on approximate Bayesian computation. The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first method based on approximate Bayesian computation is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is lesser than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
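    For readers unfamiliar with approximate Bayesian computation, the following is a plain ABC rejection sampler, a simpler relative of the ABC-MCMC and ABC-SMC schemes described above, not the paper's algorithm. The `simulate` and `prior_sampler` callables are user-supplied placeholders; in the synchronization-based setting the discrepancy would be the synchronization error.

```python
import numpy as np

def abc_rejection(simulate, observed, prior_sampler, n_draws=5000, eps=0.1):
    """Keep the parameter draws whose simulated output lies within eps of the
    observation; the retained draws approximate the posterior."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if np.linalg.norm(simulate(theta) - observed) < eps:
            accepted.append(theta)
    # averaging the accepted draws gives a point estimate, as in the paper
    return np.array(accepted)
```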

  17. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.

  18. EuroMInd-D: A Density Estimate of Monthly Gross Domestic Product for the Euro Area

    DEFF Research Database (Denmark)

    Proietti, Tommaso; Marczak, Martyna; Mazzi, Gianluigi

    EuroMInd-D is a density estimate of monthly gross domestic product (GDP) constructed according to a bottom–up approach, pooling the density estimates of eleven GDP components, by output and expenditure type. The components density estimates are obtained from a medium-size dynamic factor model...... parameters, and conditional simulation filters for simulating from the predictive distribution of GDP. Both algorithms process sequentially the data as they become available in real time. The GDP density estimates for the output and expenditure approach are combined using alternative weighting schemes...

  19. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed a strong bias toward overestimation of the genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  20. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    Science.gov (United States)

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
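    For orientation, a baseline fixed-bandwidth 3D kernel density estimate over GPS fixes can be written in a few lines, as sketched below. This is not the movement-based estimator of the paper, which conditions the kernel on consecutive fixes along the track, and the data are random placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
fixes = rng.normal(size=(3, 500))                    # placeholder fixes (x, y, altitude); rows = dimensions
kde = gaussian_kde(fixes)                            # fixed-bandwidth 3-D Gaussian KDE
grid = np.mgrid[-2:2:25j, -2:2:25j, -2:2:25j].reshape(3, -1)
utilisation = kde(grid).reshape(25, 25, 25)          # 3-D space-use density on a 25^3 grid
```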

  1. Optimal diffusion MRI acquisition for fiber orientation density estimation: an analytic approach.

    Science.gov (United States)

    White, Nathan S; Dale, Anders M

    2009-11-01

    An important challenge in the design of diffusion MRI experiments is how to optimize statistical efficiency, i.e., the accuracy with which parameters can be estimated from the diffusion data in a given amount of imaging time. In model-based spherical deconvolution analysis, the quantity of interest is the fiber orientation density (FOD). Here, we demonstrate how the spherical harmonics (SH) can be used to form an explicit analytic expression for the efficiency of the minimum variance (maximally efficient) linear unbiased estimator of the FOD. Using this expression, we calculate optimal b-values for maximum FOD estimation efficiency with SH expansion orders of L = 2, 4, 6, and 8 to be approximately b = 1,500, 3,000, 4,600, and 6,200 s/mm(2), respectively. However, the arrangement of diffusion directions and scanner-specific hardware limitations also play a role in determining the realizable efficiency of the FOD estimator that can be achieved in practice. We show how some commonly used methods for selecting diffusion directions are sometimes inefficient, and propose a new method for selecting diffusion directions in MRI based on maximizing the statistical efficiency. We further demonstrate how scanner-specific hardware limitations generally lead to optimal b-values that are slightly lower than the ideal b-values. In summary, the analytic expression for the statistical efficiency of the unbiased FOD estimator provides important insight into the fundamental tradeoff between angular resolution, b-value, and FOD estimation accuracy.

  2. Contour Estimation by Array Processing Methods

    Directory of Open Access Journals (Sweden)

    Bourennane Salah

    2006-01-01

    Full Text Available This work is devoted to the estimation of rectilinear and distorted contours in images by high-resolution methods. In the case of rectilinear contours, it has been shown that it is possible to transpose this image processing problem to an array processing problem. The existing straight line characterization method called subspace-based line detection (SLIDE leads to models with orientations and offsets of straight lines as the desired parameters. Firstly, a high-resolution method of array processing leads to the orientation of the lines. Secondly, their offset can be estimated by either the well-known method of extension of the Hough transform or another method, namely, the variable speed propagation scheme, that belongs to the array processing applications field. We associate it with the method called "modified forward-backward linear prediction" (MFBLP. The signal generation process devoted to straight lines retrieval is retained for the case of distorted contours estimation. This issue is handled for the first time thanks to an inverse problem formulation and a phase model determination. The proposed method is initialized by means of the SLIDE algorithm.

  3. Non-destructive methods to estimate physical aging of plywood

    OpenAIRE

    Bobadilla Maldonado, Ignacio; Santirso, María Cristina; Herrero Giner, Daniel; Esteban Herrero, Miguel; Iñiguez Gonzalez, Guillermo

    2011-01-01

    This paper studies the relationship between aging, physical changes and the results of non-destructive testing of plywood. 176 pieces of plywood were tested to analyze their actual and estimated density using non-destructive methods (screw withdrawal force and ultrasound wave velocity) during a laboratory aging test. From the results of statistical analysis it can be concluded that there is a strong relationship between the non-destructive measurements carried out, and the decline in the phys...

  4. Lifetime estimation methods in power transformer insulation

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Taghikhani

    2012-10-01

    Full Text Available Mineral oil in the power transformer plays an important role in cooling, insulation aging and chemical reactions such as oxidation. Increases in oil temperature cause a loss of quality, so the oil should be checked regularly. Studies have been carried out on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and other characteristics of power transformer oil. In this paper, the first method to estimate the life of power transformer insulation (oil) is based on the Arrhenius law, which can describe the loss of oil quality and estimate the remaining life. The second method studied to estimate the life of the power transformer is prediction of the paper insulation life at a temperature of 160 °C.

  5. Evaluation of Modified Pycnometric Method for Accurately Measuring the Density of Molten Nickel

    Institute of Scientific and Technical Information of China (English)

    XIAO Feng; FANG Liang; FU Yuechao; YANG Lingchuan

    2004-01-01

    A modified pycnometric method has been developed to obtain accurate densities of molten nickel. The new method allows continuous measurement of density over a wide temperature range from a single experiment. The measurement error of the method was analyzed, and the total uncertainty of the measurement was estimated to be within ±0.34%. The measured density of molten nickel decreases linearly with increasing temperature over a range from the melting point to 1873 K. The density at the melting point and the thermal expansion coefficient of molten nickel are 7.90 Mg·m⁻³ and 1.92×10⁻⁴ K⁻¹, respectively.

  6. A comprehensive estimation method for enterprise capability

    Directory of Open Access Journals (Sweden)

    Tetiana Kuzhda

    2015-11-01

    Full Text Available In today’s highly competitive business world, the need for efficient enterprise capability management is greater than ever. As more enterprises begin to compete on a global scale, the effective use of enterprise capability will become imperative for them to improve their business activities. The definition of the socio-economic capability of the enterprise is given and the main components of enterprise capability are pointed out. A comprehensive method to estimate enterprise capability that takes into account both social and economic components is offered, and a methodical approach to the integrated estimation of enterprise capability is developed. The novelty lies in the inclusion of a summary measure of the social component of enterprise capability in defining the integrated index of enterprise capability. The practical significance of the methodological approach is that it allows assessing enterprise capability comprehensively by combining two kinds of estimates, social and economic, and converting them into a single integrated indicator. It provides a comprehensive approach to the socio-economic estimation of enterprise capability, sets a formal basis for making decisions and helps allocate enterprise resources reasonably. Practical implementation of this method will affect the current condition and trends of the enterprise and help to make forecasts and plans for its development and the efficient use of its capability.

  7. Soil Organic Carbon Density in Hebei Province, China: Estimates and Uncertainty

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-Cun; SHI Xue-Zheng; YU Dong-Sheng; T. F. PAGELLA; SUN Wei-Xia; XU Xiang-Hua

    2005-01-01

    In order to improve the precision of soil organic carbon (SOC) estimates, the sources of uncertainty in soil organic carbon density (SOCD) estimates and SOC stocks were examined using 363 soil profiles in Hebei Province, China, with three methods: soil profile statistics (SPS), GIS-based soil type (GST), and kriging interpolation (KI). The GST method, utilizing both pedological professional knowledge and GIS technology, was considered the most accurate of the three estimations, with SOCD estimates for SPS 10% lower and for KI 10% higher. The SOCD range for GST was 84% wider than that for KI, as the smoothing effect of KI narrowed the SOCD range. Nevertheless, the coefficient of variation for SOCD with KI (41.7%) was less than with GST and SPS. For the lower SOCD estimates of SPS compared with GST, the major source of uncertainty was the conflicting areas of the proportional relations, while for KI the smaller number of soil profiles and the necessity of the smoothing effect were the sources of uncertainty. Moreover, for local detailed variations of SOCD, GST was more advantageous than KI in reflecting the distribution pattern.

  8. Quantum mechanical method for estimating ionicity of spinel ferrites

    Energy Technology Data Exchange (ETDEWEB)

    Ji, D.H. [Hebei Advanced Thin Films Laboratory, Department of Physics, Hebei Normal University, Shijiazhuang City 050024 (China); Tang, G.D., E-mail: tanggd@mail.hebtu.edu.cn [Hebei Advanced Thin Films Laboratory, Department of Physics, Hebei Normal University, Shijiazhuang City 050024 (China); Li, Z.Z.; Hou, X.; Han, Q.J.; Qi, W.H.; Liu, S.R.; Bian, R.R. [Hebei Advanced Thin Films Laboratory, Department of Physics, Hebei Normal University, Shijiazhuang City 050024 (China)

    2013-01-15

    The ionicity (0.879) of the cubic spinel ferrite Fe{sub 3}O{sub 4} has been determined using both experimental magnetization and density-of-states calculations from density functional theory. Furthermore, a quantum mechanical method for estimating the ionicity of spinel ferrites is proposed by comparing the results with Phillips' ionicity. On this basis, the ionicities of the spinel ferrites MFe{sub 2}O{sub 4} (M=Mn, Fe, Co, Ni, Cu) are calculated. As an application, the ion distributions at the (A) and [B] sites of the (A)[B]{sub 2}O{sub 4} spinel ferrites MFe{sub 2}O{sub 4} (M=Fe, Co, Ni, Cu) are calculated using the current ionicity values. - Highlights: • The ionicity of Fe{sub 3}O{sub 4} was determined as 0.879 by density functional theory. • The ionicities of spinel ferrites were estimated by a quantum mechanical method. • The quantum mechanical method for estimating ionicity is suitable for II-VI compounds. • The ion distributions of MFe{sub 2}O{sub 4} are calculated with the current ionicity values.

  9. A new approach on seismic mortality estimations based on average population density

    Science.gov (United States)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established the association between mortality estimation and seismic intensity without considering the population density. In China, however, the data are not always available, especially in the very urgent relief situation of a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide a path to analyzing the death tolls of earthquakes. The present paper employs the average population density to predict the final death tolls of earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  10. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Full Text Available Recovery rate is essential to the estimation of the portfolio's loss and economic capital, and neglecting the randomness of its distribution may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds, as Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate imitates the distribution of the bimodal or multimodal samples of corporate loans and bonds much better. Finally, a chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management.
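    The contrast between the two approaches is easy to reproduce, as in the sketch below. The recovery-rate sample is synthetic, generated only to mimic a bimodal shape, and is not the Moody's data referred to above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic bimodal "recovery rates" on (0, 1), mimicking the loan/bond case
recovery = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 200)])

# single Beta fit, with location/scale pinned to the unit interval
a, b, _, _ = stats.beta.fit(recovery, floc=0, fscale=1)

# Gaussian kernel density estimate of the same sample
kde = stats.gaussian_kde(recovery)

x = np.linspace(0.01, 0.99, 99)
beta_pdf = stats.beta.pdf(x, a, b)   # forced into a unimodal/U-shaped family
kde_pdf = kde(x)                     # follows both modes of the data
```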

  11. Density and hazard rate estimation for censored and α-mixing data using gamma kernels

    OpenAIRE

    2006-01-01

    In this paper we consider the nonparametric estimation of the density and hazard rate function for right-censored α-mixing survival time data using kernel smoothing techniques. Since survival times are positive with potentially a high concentration at zero, one has to take into account the bias problems when the functions are estimated in the boundary region. In this paper, gamma kernel estimators of the density and the hazard rate function are proposed. The estimators use adaptive weights depe...

  12. Estimating Leaf Bulk Density Distribution in a Tree Canopy Using Terrestrial LiDAR and a Straightforward Calibration Procedure

    Directory of Open Access Journals (Sweden)

    François Pimont

    2015-06-01

    Full Text Available Leaf biomass distribution is a key factor for modeling energy and carbon fluxes in forest canopies and for assessing fire behavior. We propose a new method to estimate the 3D leaf bulk density distribution, based on a calibration of indices derived from T-LiDAR. We applied the method to four contrasting plots in a mature Quercus pubescens forest. Leaf bulk densities were measured inside 0.7 m-diameter spheres, referred to as Calibration Volumes. Indices were derived from LiDAR point clouds and calibrated against the Calibration Volume bulk densities. Several indices were proposed and tested to account for noise resulting from mixed pixels and other theoretical considerations. The best index and its calibration parameter were then used to estimate leaf bulk densities at the grid nodes of each plot. These LiDAR-derived bulk density distributions were used to estimate vertical bulk density profiles and loads, which above four meters compared well with those assessed by the classical inventory-based approach. Below four meters, the LiDAR-based approach overestimated bulk densities since no distinction was made between wood and leaf returns. The results of our method are promising since they demonstrate the possibility of assessing bulk density on small plots at a reasonable operational cost.

  13. EXACT MINIMAX ESTIMATION OF THE PREDICTIVE DENSITY IN SPARSE GAUSSIAN MODELS.

    Science.gov (United States)

    Mukherjee, Gourab; Johnstone, Iain M

    We consider estimating the predictive density under Kullback-Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates.

  14. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    Science.gov (United States)

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2016-11-16

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability-density-function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. The thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.

  15. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    Science.gov (United States)

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to the use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model based on non-parametric kernel density estimation was developed, and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for the protection of aquatic species in China, which were compared and contrasted with WQC from other jurisdictions. HC5 values for the protection of different types of species were derived for the three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and in the assessment of risks to ecosystems.
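    The HC5 computation from a kernel-density SSD can be sketched as follows. This is a minimal illustration under default scipy settings; the bandwidth optimization and goodness-of-fit testing that the paper treats carefully are omitted, and the function name is an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

def hc5_from_kde(log10_endpoints):
    """HC5 from a kernel-density SSD: the concentration whose fitted cumulative
    probability is 0.05, with species toxicity endpoints on a log10 scale."""
    kde = gaussian_kde(log10_endpoints)
    grid = np.linspace(log10_endpoints.min() - 2.0, log10_endpoints.max() + 2.0, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]                                  # normalise the discrete CDF
    return 10.0 ** np.interp(0.05, cdf, grid)       # back-transform to concentration
```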

  16. Integrated Bayesian Estimation of Zeff in the TEXTOR Tokamak from Bremsstrahlung and CX Impurity Density Measurements

    Science.gov (United States)

    Verdoolaege, G.; Von Hellermann, M. G.; Jaspers, R.; Ichir, M. M.; Van Oost, G.

    2006-11-01

    The validation of diagnostic data from a nuclear fusion experiment is an important issue. The concept of Integrated Data Analysis (IDA) allows the consistent estimation of plasma parameters from heterogeneous data sets. Here, the determination of the ion effective charge (Zeff) is considered. Several diagnostic methods exist for the determination of Zeff, but the results are in general not in agreement. In this work, the problem of Zeff estimation on the TEXTOR tokamak is approached from the perspective of IDA, in the framework of Bayesian probability theory. The ultimate goal is the estimation of a full Zeff profile that is consistent both with measured bremsstrahlung emissivities and with individual impurity spectral line intensities obtained from Charge Exchange Recombination Spectroscopy (CXRS). We present an overview of the various uncertainties that enter the calculation of a Zeff profile from bremsstrahlung data on the one hand, and line intensity data on the other hand. We discuss a simple linear and nonlinear Bayesian model permitting the estimation of a central value for Zeff and the electron density ne on TEXTOR from bremsstrahlung emissivity measurements in the visible and carbon densities derived from CXRS. Both the central Zeff and ne are sampled using an MCMC algorithm. An outlook is given towards possible model improvements.

  17. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    1995-01-01

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
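    One illustrative reading of the idea, not the paper's exact estimator, is sketched below: distances are computed only for randomly drawn pairs (the Monte Carlo part), and a Hill-type estimator is applied to the smallest of them, since a correlation integral behaving as C(r) ~ r^ν near zero makes 1/distance Pareto-tailed with index ν. The function name and defaults are assumptions.

```python
import numpy as np

def correlation_exponent_mc(X, n_pairs=20000, k=500, seed=0):
    """Monte Carlo / Hill-type estimate of the correlation exponent nu from
    a point set X of shape (n_points, dim)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), n_pairs)
    j = rng.integers(0, len(X), n_pairs)
    d = np.linalg.norm(X[i] - X[j], axis=1)
    d = np.sort(d[(i != j) & (d > 0)])          # keep distinct, non-zero pairs
    # Hill estimator on the k smallest distances, with d[k] as the threshold
    return 1.0 / np.mean(np.log(d[k] / d[:k]))
```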

  18. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.

  19. Current source density estimation and interpolation based on the spherical harmonic Fourier expansion.

    Science.gov (United States)

    Pascual-Marqui, R D; Gonzalez-Andino, S L; Valdes-Sosa, P A; Biscay-Lirio, R

    1988-12-01

    A method for the spatial analysis of EEG and EP data, based on the spherical harmonic Fourier expansion (SHE) of scalp potential measurements, is described. This model provides efficient and accurate formulas for: (1) the computation of the surface Laplacian and (2) the interpolation of electrical potentials, current source densities, test statistics and other derived variables. Physiologically based simulation experiments show that the SHE method gives better estimates of the surface Laplacian than the commonly used finite difference method. Cross-validation studies for the objective comparison of different interpolation methods demonstrate the superiority of the SHE over the commonly used methods based on the weighted (inverse distance) average of the nearest three and four neighbor values.

  20. Distal radius bone mineral density estimation using the filling factor of trabecular bone in the x-ray image.

    Science.gov (United States)

    Lee, Sooyeul; Jeong, Ji-Wook; Lee, Jeong Won; Yoo, Done-Sik; Kim, Seunghwan

    2006-01-01

    Osteoporosis is characterized by an abnormal loss of bone mineral content, which leads to a tendency toward non-traumatic bone fractures or structural deformations of bone. Thus, bone density measurement has been considered one of the most reliable methods for assessing bone fracture risk due to osteoporosis. In past decades, X-ray images have been studied in connection with bone mineral density estimation. However, the bone mineral density estimated from an X-ray image can suffer a relatively large accuracy or precision error, the most relevant origin of which may be unstable X-ray image acquisition conditions. Thus, we focus our attention on finding a bone mineral density estimation method that is relatively insensitive to the X-ray image acquisition conditions. In this paper, we develop a simple technique for distal radius bone mineral density estimation using the trabecular bone filling factor in the X-ray image and apply the technique to the wrist X-ray images of 20 women. The estimated bone mineral density shows a high linear correlation with dual-energy X-ray absorptiometry (r=0.87).
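    A very rough sketch of what a "filling factor" could look like is given below. It is purely illustrative: the paper's preprocessing, normalization and thresholding are not reproduced, and the function name and the global-mean threshold are assumptions.

```python
import numpy as np

def trabecular_filling_factor(roi):
    """Fraction of pixels in a distal-radius region of interest classified as
    trabecular bone after a simple global intensity threshold."""
    threshold = roi.mean()            # crude stand-in for an adaptive threshold
    return float(np.mean(roi > threshold))

# A linear calibration against DXA-measured BMD (slope and intercept fitted on a
# reference group) would then turn the filling factor into a BMD estimate.
```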

  1. Parameter estimation methods for chaotic intercellular networks.

    Directory of Open Access Journals (Sweden)

    Inés P Mariño

    Full Text Available We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as accelerated random search method, and the other two techniques are based on approximate Bayesian computation. The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first method based on approximate Bayesian computation is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is lesser than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.

  2. Inference-less Density Estimation using Copula Bayesian Networks

    CERN Document Server

    Elidan, Gal

    2012-01-01

    We consider learning continuous probabilistic graphical models in the face of missing data. For non-Gaussian models, learning the parameters and structure of such models depends on our ability to perform efficient inference, and can be prohibitive even for relatively modest domains. Recently, we introduced the Copula Bayesian Network (CBN) density model - a flexible framework that captures complex high-dimensional dependency structures while offering direct control over the univariate marginals, leading to improved generalization. In this work we show that the CBN model also offers significant computational advantages when training data is partially observed. Concretely, we leverage on the specialized form of the model to derive a computationally amenable learning objective that is a lower bound on the log-likelihood function. Importantly, our energy-like bound circumvents the need for costly inference of an auxiliary distribution, thus facilitating practical learning of highdimensional densities. We demonstr...

  3. Point estimation of root finding methods

    CERN Document Server

    2008-01-01

    This book sets out to state computationally verifiable initial conditions for predicting the immediate appearance of the guaranteed and fast convergence of iterative root finding methods. Attention is paid to iterative methods for the simultaneous determination of polynomial zeros in the spirit of Smale's point estimation theory, introduced in 1986. Some basic concepts and Smale's theory for Newton's method, together with its modifications and higher-order methods, are presented in the first two chapters. The remaining chapters contain the author's recent results on initial conditions guaranteeing convergence of a wide class of iterative methods for solving algebraic equations. These conditions are of practical interest since they depend only on available data: information about the function whose zeros are sought and the initial approximations. The convergence approach presented can be applied in designing a package for the simultaneous approximation of polynomial zeros.

  4. Seismic Hazard Analysis Using the Adaptive Kernel Density Estimation Technique for Chennai City

    Science.gov (United States)

    Ramanna, C. K.; Dodagoudar, G. R.

    2012-01-01

    The conventional method of probabilistic seismic hazard analysis (PSHA) using the Cornell-McGuire approach requires identification of homogeneous source zones as the first step. This criterion brings along many issues, and hence several alternative methods of hazard estimation have come up in the last few years. Methods such as zoneless or zone-free methods, or modelling of the Earth's crust using numerical methods with finite element analysis, have been proposed. Delineating a homogeneous source zone in regions of distributed and/or diffused seismicity is a rather difficult task. In this study, the zone-free method using the adaptive kernel technique for hazard estimation is explored for regions having distributed and diffused seismicity. Chennai city lies in such a region of low to moderate seismicity, so it has been used as a case study. The adaptive kernel technique is statistically superior to the fixed kernel technique primarily because the bandwidth of the kernel is varied spatially depending on the clustering or sparseness of the epicentres. Although the fixed kernel technique has proven to work well in general density estimation cases, it fails to perform in the case of multimodal and long-tailed distributions. In such situations, the adaptive kernel technique serves the purpose and is more relevant in earthquake engineering, as the activity rate probability density surface is multimodal in nature. The peak ground acceleration (PGA) obtained from all three approaches (the Cornell-McGuire approach and the fixed and adaptive kernel techniques) for 10% probability of exceedance in 50 years is around 0.087 g. The uniform hazard spectra (UHS) are also provided for different structural periods.
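    The adaptive (sample-point) kernel idea can be sketched in one dimension as follows: a pilot fixed-bandwidth estimate sets local bandwidths so that kernels widen where epicentres are sparse and narrow where they cluster. This is an Abramson-style 1-D illustration under assumed defaults (Silverman's rule for the pilot bandwidth, sensitivity parameter 0.5), not the 2-D implementation used in the hazard study.

```python
import numpy as np

def adaptive_kde_1d(data, grid, alpha=0.5):
    """Abramson-style adaptive Gaussian KDE with local bandwidths
    h_i = h0 * (pilot_i / g)^(-alpha), g the geometric mean of the pilot."""
    n = len(data)
    h0 = 1.06 * data.std() * n ** (-0.2)                     # Silverman's rule for the pilot
    pilot = np.array([np.mean(np.exp(-0.5 * ((x - data) / h0) ** 2))
                      for x in data]) / (h0 * np.sqrt(2.0 * np.pi))
    g = np.exp(np.mean(np.log(pilot)))                       # geometric mean of pilot densities
    h = h0 * (pilot / g) ** (-alpha)                         # local bandwidths
    dens = sum(np.exp(-0.5 * ((grid - xi) / hi) ** 2) / (hi * np.sqrt(2.0 * np.pi))
               for xi, hi in zip(data, h))
    return dens / n
```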

  5. THE USE OF MATHEMATICAL MODELS FOR ESTIMATING WOOD BASIC DENSITY OF Eucalyptus sp CLONES

    Directory of Open Access Journals (Sweden)

    Cláudio Roberto Thiersch

    2006-09-01

    Full Text Available This study aimed at identifying at what point in the stem, in the longitudinal and cardinal directions, the Pilodyn penetration depth should be measured for determining wood basic density, with forestry inventory in view. The database used encompassed 36 plots of 400 m2; around the plots, 216 trees were scaled. Two clones (hybrids of E. grandis and E. urophylla) were measured at the ages of 3, 4, 5 and 6 years, on three different sites in eastern Brazil covering the east and northeast of Espirito Santo state and the south of Bahia state. At each height at which diameter was measured, the Pilodyn penetration depth (in mm) was also recorded. The average basic density of the scaled trees was determined from the chips using the immersion method. The main conclusions were: the density equation as a function of the Pilodyn measurement, age, site, diameter at 1.3 m above ground and total height was more precise, exact and stable than the density equation as a function of the Pilodyn measurement, age, site and diameter; the former equation was precise and exact for all ages and sites, independently of whether the Pilodyn measurements were taken on the south or north face, or at the average position between them. The height for measurement with the Pilodyn can also be taken at the more ergonomic position of 1.3 m. The density estimate, as a function of the Pilodyn measurement, age, average dominant tree height and diameter at 1.3 m above ground, was more precise for both clones when the Pilodyn measurement was taken on the north face. The average tree basic density must always be obtained with a specific equation for each clone, given that these equations differ statistically.

  6. The estimation method of GPS instrumental biases

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A model for estimating the global positioning system (GPS) instrumental biases and methods to calculate the relative instrumental biases of satellite and receiver are presented. The calculated results for the GPS instrumental biases, the relative instrumental biases of satellite and receiver, and the total electron content (TEC) are also shown. Finally, the stability of the GPS instrumental biases, as well as that of the satellite and receiver instrumental biases, is evaluated, indicating that they are very stable over a period of two and a half months.

  7. Lifetime estimation methods in power transformer insulation

    OpenAIRE

    Mohammad Ali Taghikhani

    2012-01-01

    Mineral oil in a power transformer plays an important role in cooling and insulation, and is subject to aging and chemical reactions such as oxidation. Increases in oil temperature cause a loss of quality, so the oil should be checked regularly. Studies have been carried out on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and other characteristics of the oil. In this paper the first method to estimate the life of p...

  8. A simple method for determining maize silage density on farms

    National Research Council Canada - National Science Library

    Ana Maria Krüger; Clóves C. Jobim; Igor Q. de Carvalho; Julienne G. Moro

    2017-01-01

    Several methodologies have been tested to evaluate silage density, with direct methods most popular, whereas indirect methods that can be used under field conditions are still in development and improvement stages...

  9. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Directory of Open Access Journals (Sweden)

    Yongjun Ahn

    Full Text Available The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive

  10. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Science.gov (United States)

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric
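
    The abstract above states only that the optimal density minimizes a total cost. As a toy illustration of that kind of trade-off (not the paper's actual ERDEC formulation), the sketch below balances a per-station installation cost against a users' access cost that falls as stations become denser; the cost coefficients and functional forms are hypothetical.

        from scipy.optimize import minimize_scalar

        # hypothetical cost coefficients (per km^2 and year)
        C_STATION = 12000.0   # annualized cost of one charging station
        C_ACCESS = 45000.0    # users' access/detour cost at a density of 1 station per km^2

        def total_cost(density):
            """Toy total cost per km^2 as a function of station density (stations/km^2)."""
            install = C_STATION * density      # more stations -> higher installation cost
            access = C_ACCESS / density        # more stations -> shorter trips to charge
            return install + access

        res = minimize_scalar(total_cost, bounds=(0.01, 20.0), method="bounded")
        print(f"optimal density ~ {res.x:.2f} stations/km^2, cost ~ {res.fun:.0f}")
        # for this toy model the closed-form optimum is sqrt(C_ACCESS / C_STATION)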

  11. A posteriori error estimator for adaptive local basis functions to solve Kohn-Sham density functional theory

    CERN Document Server

    Kaye, Jason; Yang, Chao

    2014-01-01

    Kohn-Sham density functional theory is one of the most widely used electronic structure theories. The recently developed adaptive local basis functions form an accurate and systematically improvable basis set for solving Kohn-Sham density functional theory using discontinuous Galerkin methods, requiring a small number of basis functions per atom. In this paper we develop residual-based a posteriori error estimates for the adaptive local basis approach, which can be used to guide non-uniform basis refinement for highly inhomogeneous systems such as surfaces and large molecules. The adaptive local basis functions are non-polynomial basis functions, and standard a posteriori error estimates for $hp$-refinement using polynomial basis functions do not directly apply. We generalize the error estimates for $hp$-refinement to non-polynomial basis functions. We demonstrate the practical use of the a posteriori error estimator in performing three-dimensional Kohn-Sham density functional theory calculations for quasi-2D...

  12. New method of estimation of cosmic ray nucleus energy

    CERN Document Server

    Korotkova, N A; Postnikov, E B; Roganova, T M; Sveshnikova, L G; Turundaevskij, A N

    2002-01-01

    A new approach to the estimation of primary cosmic ray nucleus energy is presented. It is based on measuring the spatial density of secondary particles originating in nuclear interactions in the target and amplified by a thin converter layer. The proposed method allows the creation of a relatively lightweight apparatus of large area with a large geometrical factor, and can be applied in satellite and balloon experiments for all nuclei in a wide energy range of 10^11-10^16 eV/particle. The physical basis of the method, a full Monte Carlo simulation, and the field of application are presented

  13. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    Science.gov (United States)

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with the Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, small differences in breast percent density (1.04% and 3.84%) were observed between dose protocols. Conclusion Quantitative breast density estimates from digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article.

  14. Near infrared spectroscopy for estimating wood basic density in Eucalyptus urophylla and Eucalyptus grandis

    Directory of Open Access Journals (Sweden)

    Paulo Ricardo Gherardi Hein

    2009-06-01

    Full Text Available Wood basic density is indicative of several other wood properties and is considered a key feature for many industrial applications. Near infrared spectroscopy (NIRS) is a fast, efficient technique that is capable of estimating that property. However, it should be improved in order to complement the often time-consuming and costly conventional method. Research on wood technological properties using near infrared spectroscopy has shown promising results. Thus the aim of this study was to evaluate the efficiency of near infrared spectroscopy for estimating wood basic density in both Eucalyptus urophylla and Eucalyptus grandis. The coefficients of determination of the predictive models for cross validation ranged between 0.74 and 0.86 and the ratio performance of deviation (RPD) ranged between 1.9 and 2.7. The application of a spectral filter, detection and removal of outlier samples, and selection of variables (wavelengths) improved the adjustment of calibrations, thereby reducing the standard error of calibration (SEC) and cross validation (SECV) as well as increasing the coefficient of determination (R²) and the RPD value. The technique of near infrared spectroscopy can therefore be used for predicting wood basic density in Eucalyptus urophylla and Eucalyptus grandis.

  15. Advancing methods for global crop area estimation

    Science.gov (United States)

    King, M. L.; Hansen, M.; Adusei, B.; Stehman, S. V.; Becker-Reshef, I.; Ernst, C.; Noel, J.

    2012-12-01

    Cropland area estimation is a challenge, made difficult by the variety of cropping systems, including crop types, management practices, and field sizes. A MODIS-derived indicator mapping product (1) developed from 16-day MODIS composites has been used to target crop type at national scales for the stratified sampling (2) of higher spatial resolution data, giving a standardized approach to estimating cultivated area. A global prototype is being developed using soybean, a global commodity crop with recent LCLUC dynamics and a relatively unambiguous spectral signature, for the United States, Argentina, Brazil, and China, which together represent nearly ninety percent of soybean production. Supervised classification of soy cultivated area is performed for 40 km2 sample blocks using time-series Landsat imagery. This method, given appropriate data for representative sampling with higher spatial resolution, represents an efficient and accurate approach for large area crop type estimation. Results for the United States sample blocks have exhibited strong agreement with the National Agricultural Statistics Service's (NASS's) Cropland Data Layer (CDL). A confusion matrix showed a 91.56% agreement and a kappa of 0.67 between the two products. Field measurements and RapidEye imagery have been collected for the USA, Brazil and Argentina to further assess product accuracies. The results of this research will demonstrate the value of MODIS crop type indicator products and Landsat sample data in estimating soybean cultivated area at national scales, enabling an internally consistent global assessment of annual soybean production.

  16. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    Science.gov (United States)

    Zeng, L.; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A.; Bobrek, M.

    2016-11-01

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  17. Estimation of population firing rates and current source densities from laminar electrode recordings.

    Science.gov (United States)

    Pettersen, Klas H; Hagen, Espen; Einevoll, Gaute T

    2008-06-01

    This model study investigates the validity of methods used to interpret linear (laminar) multielectrode recordings. In computer experiments extracellular potentials from a synaptically activated population of about 1,000 pyramidal neurons are calculated using biologically realistic compartmental neuron models combined with electrostatic forward modeling. The somas of the pyramidal neurons are located in a 0.4 mm high and wide columnar cylinder, mimicking a stimulus-evoked layer-5 population in a neocortical column. Current-source density (CSD) analysis of the low-frequency part of the potentials (the local field potential, LFP) is found to give estimates of the true underlying CSD. The high-frequency part (>750 Hz) of the potentials (multi-unit activity, MUA) is found to scale approximately as the population firing rate to the power 3/4 and to give excellent estimates of the underlying population firing rate for trial-averaged data. The MUA signal is found to decay much more sharply outside the columnar populations than the LFP.

  18. Smart density: a more accurate method of measuring rural residential density for health-related research

    Directory of Open Access Journals (Sweden)

    Gibson Lucinda

    2010-02-01

    Full Text Available Abstract Background Studies involving the built environment have typically relied on US Census data to measure residential density. However, census geographic units are often unsuited to health-related research, especially in rural areas where development is clustered and discontinuous. Objective We evaluated the accuracy of both standard census methods and alternative GIS-based methods to measure rural density. Methods We compared residential density (units/acre) in 335 Vermont school neighborhoods using conventional census geographic units (tract, block group and block) with two GIS buffer measures: a 1-kilometer (km) circle around the school and a 1-km circle intersected with a 100-meter (m) road-network buffer. The accuracy of each method was validated against the actual residential density for each neighborhood based on the Vermont e911 database, which provides an exact geo-location for all residential structures in the state. Results Standard census measures underestimate residential density in rural areas. In addition, the degree of error is inconsistent so even the relative rank of neighborhood densities varies across census measures. Census measures explain only 61% to 66% of the variation in actual residential density. In contrast, GIS buffer measures explain approximately 90% of the variation. Combining a 1-km circle with a road-network buffer provides the closest approximation of actual residential density. Conclusion Residential density based on census units can mask clusters of development in rural areas and distort associations between residential density and health-related behaviors and outcomes. GIS-defined buffers, including a 1-km circle and a road-network buffer, can be used in conjunction with census data to obtain a more accurate measure of residential density.
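
    As a sketch of the buffer construction described above (a 1-km circle around a school intersected with a 100-m road-network buffer), the snippet below uses shapely on projected coordinates in metres; the school location, road geometries, and residence points are entirely made up for illustration.

        from shapely.geometry import Point, LineString
        from shapely.ops import unary_union

        # hypothetical projected coordinates (metres)
        school = Point(5000, 5000)
        roads = [LineString([(4000, 5000), (6500, 5000)]),
                 LineString([(5000, 4200), (5000, 6000)])]
        residences = [Point(4500, 5050), Point(5200, 4950), Point(5900, 5400),
                      Point(5010, 5800), Point(7000, 7000)]  # last one lies far away

        circle = school.buffer(1000)                                # 1-km circle around the school
        road_buffer = unary_union([r.buffer(100) for r in roads])   # 100-m road-network buffer
        neighborhood = circle.intersection(road_buffer)             # combined neighborhood area

        units_inside = sum(neighborhood.contains(p) for p in residences)
        area_acres = neighborhood.area / 4046.86                    # m^2 -> acres
        print(f"{units_inside} units / {area_acres:.1f} acres = "
              f"{units_inside / area_acres:.2f} units per acre")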

  19. Method for solvent extraction with near-equal density solutions

    Science.gov (United States)

    Birdwell, Joseph F.; Randolph, John D.; Singh, S. Paul

    2001-01-01

    Disclosed is a modified centrifugal contactor for separating solutions of near equal density. The modified contactor has a pressure differential establishing means that allows the application of a pressure differential across fluid in the rotor of the contactor. The pressure differential is such that it causes the boundary between solutions of near-equal density to shift, thereby facilitating separation of the phases. Also disclosed is a method of separating solutions of near-equal density.

  20. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    Science.gov (United States)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, estimated building heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must be determined across and within a study area. In our research, accurate planar information from city authorities is used during 3D density estimation as reference data, to avoid the errors inherent to planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). In local areas, experimental results show that land use blocks with low FAR values often have small errors due to small building height errors for low buildings in the blocks, and blocks with high FAR values often have large errors due to large building height errors for high buildings in the blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; therefore, building heights must be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
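
    As a small illustration of how estimated building heights feed into a 3D density measure such as the Floor Area Ratio, the sketch below computes FAR for one hypothetical land-use block; the footprint areas, heights, assumed floor height, and the size of the height bias are all made-up numbers, not values from the study.

        def floor_area_ratio(buildings, block_area_m2, floor_height_m=3.0):
            """FAR = total floor area of all buildings / area of the land-use block.
            buildings: list of (footprint_area_m2, building_height_m) tuples,
            with heights e.g. estimated from spaceborne stereo imagery."""
            total_floor_area = 0.0
            for footprint, height in buildings:
                floors = max(1, round(height / floor_height_m))  # at least one floor
                total_floor_area += footprint * floors
            return total_floor_area / block_area_m2

        # hypothetical block: three buildings on a 10,000 m^2 land-use block
        blk = [(400.0, 9.0), (600.0, 21.0), (250.0, 33.0)]
        print(f"FAR = {floor_area_ratio(blk, 10_000.0):.2f}")
        # an underestimated height for the tallest building (33 m read as 27 m)
        # lowers its floor count and hence the block FAR, which is the kind of
        # error propagation the study quantifies
        blk_biased = [(400.0, 9.0), (600.0, 21.0), (250.0, 27.0)]
        print(f"FAR with biased heights = {floor_area_ratio(blk_biased, 10_000.0):.2f}")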

  1. Estimating Predictability Redundancy and Surrogate Data Method

    CERN Document Server

    Pecen, L

    1995-01-01

    A method for estimating the theoretical predictability of time series is presented, based on information-theoretic functionals (redundancies) and the surrogate data technique. The redundancy, designed for a chosen model and a prediction horizon, evaluates the amount of information, in bits, between a model input (e.g., lagged versions of the series) and a model output (i.e., the series lagged by the prediction horizon from the model input). This value, however, is influenced by the method and precision of redundancy estimation; therefore it is (a) normalized by the maximum possible redundancy (given by the precision used), and (b) compared to the redundancies obtained from two types of surrogate data in order to obtain a reliable classification of a series as either unpredictable or predictable. The type of predictability (linear or nonlinear) and its level can be further evaluated. The method is demonstrated using a numerically generated time series as well as high-frequency foreign exchange data and the theoretical ...
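
    One common way to build the surrogate series used in tests like this is FFT phase randomization, which preserves a series' power spectrum (and hence its linear autocorrelation) while destroying nonlinear structure. The sketch below, assuming only numpy and a toy AR(1) series, generates such a surrogate; it does not reproduce the redundancy estimation itself.

        import numpy as np

        def phase_randomized_surrogate(x, rng=None):
            """Return a surrogate with the same power spectrum (hence the same linear
            autocorrelation) as x, but with randomized Fourier phases; nonlinear
            structure in the original series is destroyed."""
            rng = np.random.default_rng() if rng is None else rng
            x = np.asarray(x, float)
            spec = np.fft.rfft(x)
            phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.size)
            surrogate_spec = np.abs(spec) * np.exp(1j * phases)
            surrogate_spec[0] = spec[0]            # keep the mean (DC bin) unchanged
            if x.size % 2 == 0:
                surrogate_spec[-1] = spec[-1]      # keep the Nyquist bin unchanged
            return np.fft.irfft(surrogate_spec, n=x.size)

        # toy usage: an AR(1) series and one surrogate realization
        rng = np.random.default_rng(1)
        x = np.zeros(1024)
        for t in range(1, x.size):
            x[t] = 0.8 * x[t - 1] + rng.normal()
        s = phase_randomized_surrogate(x, rng)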

  2. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r² = 0.96), allowing ED to be predicted from pDM without bomb calorimetry. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.

  3. Power spectral density of velocity fluctuations estimated from phase Doppler data

    Directory of Open Access Journals (Sweden)

    Jicha Miroslav

    2012-04-01

    Full Text Available Laser Doppler Anemometry (LDA) and its modifications such as Phase Doppler Particle Anemometry (P/DPA) are point-wise methods for optical non-intrusive measurement of particle velocity with a high data rate. Conversion of the LDA velocity data from the temporal to the frequency domain, i.e. calculation of the power spectral density (PSD) of velocity fluctuations, is a non-trivial task due to non-equidistant data sampling in time. We briefly discuss possibilities for PSD estimation and specify limitations caused by seeding density and other factors of the flow and the LDA setup. Selected results of LDA measurements are compared with corresponding Hot Wire Anemometry (HWA) data in the frequency domain. The slot correlation (SC) method implemented in the software program Kern by Nobach (2006) is used for the PSD estimation. The influence of several input parameters on the resulting PSDs is described. The optimum setup of the software for our data of particle-laden air flow in a realistic human airway model is documented. The typical character of the flow is described using PSD plots of velocity fluctuations, with comments on specific properties of the flow. Some recommendations for improving future experiments to acquire better PSD results are given.
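
    The paper itself uses the slot correlation method (the Kern software by Nobach), which is not reproduced here. As a simpler, standard alternative for spectra of non-equidistantly sampled velocity data, the sketch below computes a Lomb-Scargle periodogram with scipy on a synthetic, randomly sampled signal; the sampling times, test frequency, and normalization are illustrative assumptions.

        import numpy as np
        from scipy.signal import lombscargle

        # synthetic LDA-like record: velocity samples at random (non-equidistant) arrival times
        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0.0, 10.0, 2000))                         # seconds
        u = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.normal(size=t.size)   # 5 Hz fluctuation + noise

        freqs_hz = np.linspace(0.5, 20.0, 400)
        omega = 2.0 * np.pi * freqs_hz               # lombscargle expects angular frequencies
        pgram = lombscargle(t, u - u.mean(), omega)  # raw periodogram of the fluctuations

        # rough normalization to an approximate power spectral density for inspection
        psd = pgram * 2.0 / t.size
        peak = freqs_hz[np.argmax(psd)]
        print(f"dominant fluctuation frequency ~ {peak:.1f} Hz")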

  4. Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities

    CERN Document Server

    Kinney, Justin B

    2014-01-01

    Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.

  5. Spectral density method to Anderson-Holstein model

    Science.gov (United States)

    Chebrolu, Narasimha Raju; Chatterjee, Ashok

    2015-06-01

    Two-parameter spectral density function of a magnetic impurity electron in a non-magnetic metal is calculated within the framework of the Anderson-Holstein model using the spectral density approximation method. The effect of electron-phonon interaction on the spectral function is investigated.

  6. Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Shanshan Yang

    Full Text Available Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly.
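
    A minimal sketch of the pipeline described above (kernel PCA projection to a bivariate space, class-conditional kernel density estimates, and a MAP decision rule), using scikit-learn; the synthetic data stand in for the selected vocal measures, and the kernel, gamma, and bandwidth values are arbitrary choices, not those of the study.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(3)
        # synthetic stand-ins for selected vocal measures (controls vs. PD patients)
        X_control = rng.normal(0.0, 1.0, size=(60, 4))
        X_pd = rng.normal(1.2, 1.3, size=(120, 4))
        X = np.vstack([X_control, X_pd])
        y = np.array([0] * 60 + [1] * 120)

        # project onto a bivariate feature space with kernel PCA
        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.2)
        Z = kpca.fit_transform(X)

        # class-conditional densities via Gaussian kernel density estimation
        kde = {c: KernelDensity(bandwidth=0.5).fit(Z[y == c]) for c in (0, 1)}
        priors = {c: np.mean(y == c) for c in (0, 1)}

        def map_classify(z):
            """MAP rule: pick the class maximizing log p(z|c) + log p(c)."""
            scores = {c: kde[c].score_samples(z.reshape(1, -1))[0] + np.log(priors[c])
                      for c in (0, 1)}
            return max(scores, key=scores.get)

        pred = np.array([map_classify(z) for z in Z])
        print(f"training accuracy of the MAP rule: {np.mean(pred == y):.2f}")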

  7. Comparison of sampling methods for determining relative densities of Homalodisca vitripennis (Hemiptera: Cicadellidae) on Citrus

    Science.gov (United States)

    : Four sampling methods that included A-Vac, D-Vac, pole-bucket, and beat-net devices were evaluated for estimating relative densities of glassy-winged sharpshooter (Homalodisca vitripennis (Germar)) nymphs and adults on citrus trees. All four methods produced similar temporal and spatial distribut...

  8. Laser heating method for estimation of carbon nanotube purity

    Science.gov (United States)

    Terekhov, S. V.; Obraztsova, E. D.; Lobach, A. S.; Konov, V. I.

    A new method of a carbon nanotube purity estimation has been developed on the basis of Raman spectroscopy. The spectra of carbon soot containing different amounts of nanotubes were registered under heating from a probing laser beam with a step-by-step increased power density. The material temperature in the laser spot was estimated from a position of the tangential Raman mode demonstrating a linear thermal shift (-0.012 cm-1/K) from the position 1592 cm-1 (at room temperature). The rate of the material temperature rise versus the laser power density (determining the slope of a corresponding graph) appeared to correlate strongly with the nanotube content in the soot. The influence of the experimental conditions on the slope value has been excluded via a simultaneous measurement of a reference sample with a high nanotube content (95 vol.%). After the calibration (done by a comparison of the Raman and the transmission electron microscopy data for the nanotube percentage in the same samples) the Raman-based method is able to provide a quantitative purity estimation for any nanotube-containing material.

  9. "Prospecting Asteroids: Indirect technique to estimate overall density and inner composition"

    Science.gov (United States)

    Such, Pamela

    2016-07-01

    Spectroscopic studies of asteroids make it possible to obtain some information on their surface composition but say little about the innermost material, porosity and density of the object. In addition, spectroscopic observations are affected by "space weathering" produced by the bombardment of charged particles, which for certain materials changes their chemical structure, albedo and other physical properties, partly altering the chances of identifying them. Data such as the mass, size and density of asteroids are essential when proposing space missions, in order to determine the best candidates for space exploration, and it is of great importance to determine some of these quantities a priori, remotely from Earth. For many years the masses of the largest asteroids have been determined by studying the gravitational effects they have on smaller asteroids during close approaches (see Davis and Bender, 1977; Schubart and Matson, 1979; Scholl et al. 1987; Hoffman, 1989b, among others), but estimates of the masses of the smallest objects are limited to the effects that occur in extremely close encounters with other asteroids of similar size. This paper presents the results of a search for approaches of pairs of asteroids that come within distances of less than 0.0004 AU (50,000 km) of each other, in order to study their masses through the astrometric method and, in the future, to estimate their densities and internal composition. References Davis, D. R., and D. F. Bender. 1977. Asteroid mass determinations: search for further encounter opportunities. Bull. Am. Astron. Soc. 9, 502-503. Hoffman, M. 1989b. Asteroid mass determination: Present situation and perspectives. In Asteroids II (R. P. Binzel, T. Gehrels, and M. S. Matthews, Eds.), pp. 228-239. Univ. Arizona Press, Tucson. Scholl, H., L. D. Schmadel, and S. Roser 1987. The mass of the asteroid (10) Hygiea derived from observations of (829) Academia. Astron. Astrophys. 179, 311-316. Schubart, J., and D. L. Matson 1979. Masses and

  10. Volumetric magnetic resonance imaging classification for Alzheimer's disease based on kernel density estimation of local features

    Institute of Scientific and Technical Information of China (English)

    YAN Hao; WANG Hu; WANG Yong-hui; ZHANG Yu-mei

    2013-01-01

    Background The classification of Alzheimer's disease (AD) from magnetic resonance imaging (MRI) has been challenged by a lack of effective and reliable biomarkers due to inter-subject variability. This article presents a classification method for AD based on kernel density estimation (KDE) of local features. Methods First, a large number of local features were extracted from stable image blobs to represent various anatomical patterns as potential effective biomarkers. Based on distinctive descriptors and locations, the local features were robustly clustered to identify correspondences of the same underlying patterns. Then, the KDE was used to estimate distribution parameters of the correspondences by weighting contributions according to their distances. Thus, biomarkers could be reliably quantified by reducing the effects of further-away correspondences, which were more likely noise from inter-subject variability. Finally, the Bayes classifier was applied on the distribution parameters for the classification of AD. Results Experiments were performed on different divisions of a publicly available database to investigate the accuracy and the effects of age and AD severity. Our method achieved an equal error classification rate of 0.85 for subjects aged 60-80 years exhibiting mild AD and outperformed a recent local feature-based work regardless of both effects. Conclusions We proposed a volumetric brain MRI classification method for neurodegenerative disease based on statistics of local features using KDE. The method may be potentially useful for computer-aided diagnosis in clinical settings.

  11. Fault prediction of fighter based on nonparametric density estimation

    Institute of Scientific and Technical Information of China (English)

    Zhang Zhengdao; Hu Shousong

    2005-01-01

    Fighters and other complex engineering systems have characteristics such as difficult modeling and testing, multiple operating conditions, and high cost. Aiming at these issues, a new kind of real-time fault predictor is designed based on an improved k-nearest neighbor method, which needs neither a mathematical model of the system nor training data and prior knowledge. It can learn and predict while the system is running, so it overcomes the difficulty of data acquisition. Moreover, this predictor has a fast prediction speed, and the false alarm rate and missed alarm rate can be adjusted as required. The method is simple and generalizable. Simulation results on the F-16 fighter demonstrate its effectiveness.

  12. Estimating Magic Numbers Larger Than 126 by Fermi-Yang Liming Method

    Institute of Scientific and Technical Information of China (English)

    LI Xian-Hui; ZHOU Zhi-Ning; ZHONG Yu-Shu; YANG Ze-Sen

    2001-01-01

    The Fermi-Yang-Liming method is followed and developed to estimate new magic numbers in nuclei with a Woods-Saxon density function. The calculated results predict that the magic numbers next to 126 should be around 184 and 258.
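
    For reference, the Woods-Saxon density referred to above has the familiar two-parameter Fermi form sketched below; the central density, radius constant, and diffuseness used here are typical textbook values, not the parameters of the paper.

        import numpy as np

        def woods_saxon(r, A, rho0=0.17, r0=1.2, a=0.6):
            """Woods-Saxon nuclear density (nucleons/fm^3) at radius r (fm)
            for a nucleus of mass number A; rho0, r0 and a are typical values."""
            R = r0 * A ** (1.0 / 3.0)          # half-density radius in fm
            return rho0 / (1.0 + np.exp((r - R) / a))

        r = np.linspace(0.0, 12.0, 121)
        rho = woods_saxon(r, A=208)            # e.g. a lead-like heavy nucleus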

  13. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    Science.gov (United States)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
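
    A sketch of the truncation idea for a single scalar state with a known interval constraint: the unconstrained Kalman estimate, treated as a Gaussian, is replaced by the mean and variance of that Gaussian truncated at the constraint boundaries, here via scipy's truncated normal. The numerical values are illustrative and are not taken from the turbofan engine model.

        import numpy as np
        from scipy.stats import truncnorm

        def truncate_estimate(mean, var, lower=-np.inf, upper=np.inf):
            """Replace a Gaussian state estimate N(mean, var) by the mean and variance
            of the same density truncated to [lower, upper] (scalar case)."""
            std = np.sqrt(var)
            a = (lower - mean) / std     # standardized truncation bounds
            b = (upper - mean) / std
            dist = truncnorm(a, b, loc=mean, scale=std)
            return dist.mean(), dist.var()

        # illustrative health parameter known to lie in [0.9, 1.0], while the
        # unconstrained Kalman update returned 1.02 with standard deviation 0.03
        m_c, v_c = truncate_estimate(1.02, 0.03 ** 2, lower=0.9, upper=1.0)
        print(f"constrained estimate: {m_c:.3f} (variance {v_c:.2e})")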

  14. Density-preserving sampling: robust and efficient alternative to cross-validation for error estimation.

    Science.gov (United States)

    Budka, Marcin; Gabrys, Bogdan

    2013-01-01

    Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.

  15. Assay Method for 235U in Low-Density Waste

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The 235U assay method will provide a semi-quantitative assay for any uranium lumps that might exist in low-density, low-Z material waste boxes within a short count time. These materials will consist of

  16. Exploration of diffusion kernel density estimation in agricultural drought risk analysis: a case study in Shandong, China

    Directory of Open Access Journals (Sweden)

    W. Chen

    2015-11-01

    Full Text Available Drought has caused the most widespread damage in China, making up over 50% of the total affected area nationwide in recent decades. In this paper, a Standardized Precipitation Index (SPI)-based drought risk study is conducted using historical rainfall data from 19 weather stations in Shandong province, China. A kernel density-based method is adopted to carry out the risk analysis. A comparison between bivariate Gaussian kernel density estimation (GKDE) and diffusion kernel density estimation (DKDE) is carried out to analyze the effect of drought intensity and drought duration. The results show that DKDE is relatively more accurate and free of boundary leakage. Combined with the GIS technique, the drought risk is presented, revealing the spatial and temporal variation of agricultural droughts for corn in Shandong. The estimation provides a different way to study the occurrence frequency and severity of drought risk from multiple perspectives.

  17. EnviroAtlas Estimated Intersection Density of Walkable Roads Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in each EnviroAtlas community....

  18. EnviroAtlas - Paterson, NJ - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  19. EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  20. EnviroAtlas - New Bedford, MA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  1. EnviroAtlas - Pittsburgh, PA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  2. EnviroAtlas - New York, NY - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  3. EnviroAtlas - Memphis, TN - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  4. EnviroAtlas - Cleveland, OH - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  5. EnviroAtlas - Fresno, CA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  6. EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  7. EnviroAtlas - Tampa, FL - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  8. EnviroAtlas - Portland, ME - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  9. EnviroAtlas - Phoenix, AZ - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  10. EnviroAtlas - Des Moines, IA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  11. EnviroAtlas - Austin, TX - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  12. EnviroAtlas - Woodbine, IA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  13. EnviroAtlas - Milwaukee, WI - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  14. EnviroAtlas - Portland, OR - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  15. EnviroAtlas - Durham, NC - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  16. The new method for the residual gas density measurements

    CERN Document Server

    Anashin, V V; Krasnov, A A; Malyshev, O B; Nas'mov, V P; Pyata, E I; Shaftan, T V

    2001-01-01

    A new method of measuring residual gas density in vacuum chambers in the presence of synchrotron radiation (SR) is described. The method is based on using a photomultiplier tube for the detection of the SR-stimulated residual gas luminescence, which is proportional to the residual gas density and the SR intensity. The design of the experimental setup and the results of measurements of the densities of residual gases (H2, CO2, CO, N2, Ar and O2) are presented.

  17. Optimal estimation of free energies and stationary densities from multiple biased simulations

    CERN Document Server

    Wu, Hao

    2013-01-01

    When studying high-dimensional dynamical systems such as macromolecules, quantum systems and polymers, a prime concern is the identification of the most probable states and their stationary probabilities or free energies. Often, these systems have metastable regions or phases, making it impossible to estimate the stationary probabilities by direct simulation. Efficient sampling methods such as umbrella sampling, metadynamics and conformational flooding have been developed that perform a number of simulations in which the system's potential is biased so as to accelerate the rare barrier-crossing events. A joint free energy profile or stationary density can then be obtained from these biased simulations with the weighted histogram analysis method (WHAM). This approach (a) requires a few essential order parameters to be defined in which the histogram is set up, and (b) assumes that each simulation is in global equilibrium. Both assumptions make the investigation of high-dimensional systems with previously unknown energy landscape ...
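
    As a sketch of the WHAM step mentioned above (combining histograms from several biased simulations into one unbiased density and free energy profile), the code below implements a minimal one-dimensional, self-consistent WHAM iteration assuming harmonic umbrella biases and unit kT; the window centers, force constant, and synthetic samples are made up for illustration.

        import numpy as np

        def wham(samples, centers, k_umb, edges, n_iter=2000, beta=1.0):
            """Minimal 1D WHAM: combine umbrella-sampling windows into one
            unbiased density over the bins defined by `edges`."""
            mids = 0.5 * (edges[:-1] + edges[1:])
            hist = np.array([np.histogram(s, bins=edges)[0] for s in samples], float)
            counts = hist.sum(axis=1)                       # samples per window
            bias = 0.5 * k_umb * (mids[None, :] - np.asarray(centers)[:, None]) ** 2
            f = np.zeros(len(samples))                      # window free energies
            for _ in range(n_iter):
                denom = (counts[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
                p = hist.sum(axis=0) / denom                # unbiased, unnormalized density
                p /= p.sum()
                f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
                if np.max(np.abs(f_new - f)) < 1e-8:
                    f = f_new
                    break
                f = f_new
            return mids, p, f

        # synthetic data: each window's samples drawn around its umbrella center (illustrative only)
        rng = np.random.default_rng(4)
        centers = np.linspace(-2.0, 2.0, 5)
        samples = [rng.normal(c, 0.4, 5000) for c in centers]
        mids, p, f = wham(samples, centers, k_umb=10.0, edges=np.linspace(-3, 3, 61))
        free_energy = -np.log(np.clip(p, 1e-12, None))      # free energy profile in units of kT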

  18. Comparison of subjective and fully automated methods for measuring mammographic density.

    Science.gov (United States)

    Moshina, Nataliia; Roman, Marta; Sebuødegård, Sofie; Waade, Gunvor G; Ursin, Giske; Hofvind, Solveig

    2017-01-01

    Background Breast radiologists of the Norwegian Breast Cancer Screening Program subjectively classified mammographic density using a three-point scale between 1996 and 2012, and changed to the fourth edition of the BI-RADS classification in 2013. In 2015, automated volumetric breast density assessment software was installed at two screening units. Purpose To compare volumetric breast density measurements from the automated method with two subjective methods: the three-point scale and the BI-RADS density classification. Material and Methods Information on subjective and automated density assessment was obtained from screening examinations of 3635 women recalled for further assessment due to positive screening mammography between 2007 and 2015. The score of the three-point scale (I = fatty; II = medium dense; III = dense) was available for 2310 women. The BI-RADS density score was provided for 1325 women. Mean volumetric breast density was estimated for each category of the subjective classifications. The automated software assigned volumetric breast density to four categories. The agreement between BI-RADS and volumetric breast density categories was assessed using weighted kappa (kw). Results Mean volumetric breast density was 4.5%, 7.5%, and 13.4% for categories I, II, and III of the three-point scale, respectively, and 4.4%, 7.5%, 9.9%, and 13.9% for the BI-RADS density categories, respectively. The agreement between BI-RADS and volumetric breast density categories was kw = 0.5 (95% CI = 0.47-0.53). Conclusion Mean volumetric breast density increased with increasing density category of the subjective classifications. The agreement between BI-RADS and volumetric breast density categories was moderate.

  19. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data.

    Science.gov (United States)

    Dorazio, Robert M

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar - and often identical - inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

  20. Density and Biomass Estimates by Removal for an Amazonian Crocodilian, Paleosuchus palpebrosus.

    Directory of Open Access Journals (Sweden)

    Zilca Campos

    Full Text Available Direct counts of crocodilians are rarely feasible and it is difficult to meet the assumptions of mark-recapture methods for most species in most habitats. Catch-out experiments are also usually not logistically or morally justifiable because it would be necessary to destroy the habitat in order to be confident that most individuals had been captured. We took advantage of the draining and filling of a large area of flooded forest during the building of the Santo Antônio dam on the Madeira River to obtain accurate estimates of the density and biomass of Paleosuchus palpebrosus. The density, 28.4 non-hatchling individuals per km2, is one of the highest reported for any crocodilian, except for species that are temporarily concentrated in small areas during dry-season drought. The biomass estimate of 63.15 kg/km2 is higher than that for most or even all mammalian carnivores in tropical forest. P. palpebrosus may be one of the world's most abundant crocodilians.

  1. Management of deep brain stimulator battery failure: battery estimators, charge density, and importance of clinical symptoms.

    Directory of Open Access Journals (Sweden)

    Kaihan Fakhar

    Full Text Available OBJECTIVE: We aimed in this investigation to study deep brain stimulation (DBS) battery drain with special attention directed toward patient symptoms prior to and following battery replacement. BACKGROUND: Previously our group developed web-based calculators and smart phone applications to estimate DBS battery life (http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator). METHODS: A cohort of 320 patients undergoing DBS battery replacement from 2002-2012 were included in an IRB approved study. Statistical analysis was performed using SPSS 20.0 (IBM, Armonk, NY). RESULTS: The mean charge density for treatment of Parkinson's disease was 7.2 µC/cm2/phase (SD = 3.82), for dystonia was 17.5 µC/cm2/phase (SD = 8.53), for essential tremor was 8.3 µC/cm2/phase (SD = 4.85), and for OCD was 18.0 µC/cm2/phase (SD = 4.35). There was a significant relationship between charge density and battery life (r = -.59, p<.001), as well as total power and battery life (r = -.64, p<.001). The UF estimator (r = .67, p<.001) and the Medtronic helpline (r = .74, p<.001) predictions of battery life were significantly positively associated with actual battery life. Battery status indicators on Soletra and Kinetra were poor predictors of battery life. In 38 cases, the symptoms improved following a battery change, suggesting that the neurostimulator was likely responsible for symptom worsening. For these cases, both the UF estimator and the Medtronic helpline were significantly correlated with battery life (r = .65 and r = .70, respectively, both p<.001). CONCLUSIONS: Battery estimations, charge density, total power and clinical symptoms were important factors. The observation of clinical worsening that was rescued following neurostimulator replacement reinforces the notion that changes in clinical symptoms can be associated with battery drain.

  2. The importance of spatial models for estimating the strength of density dependence

    DEFF Research Database (Denmark)

    Thorson, James T.; Skaug, Hans J.; Kristensen, Kasper;

    2014-01-01

    Identifying the existence and magnitude of density dependence is one of the oldest concerns in ecology. Ecologists have aimed to estimate density dependence in population and community data by fitting a simple autoregressive (Gompertz) model for density dependence to time series of abundance...... for an entire population. However, it is increasingly recognized that spatial heterogeneity in population densities has implications for population and community dynamics. We therefore adapt the Gompertz model to approximate local densities over continuous space instead of population-wide abundance......, and to allow productivity to vary spatially. Using simulated data generated from a spatial model, we show that the conventional (nonspatial) Gompertz model will result in biased estimates of density dependence, e.g., identifying oscillatory dynamics when not present. By contrast, the spatial Gompertz model...

  3. Estimating density of a rare and cryptic high-mountain Galliform species, the Buff-throated Partridge Tetraophasis szechenyii

    Directory of Open Access Journals (Sweden)

    Yu Xu

    2016-06-01

    Full Text Available Estimates of abundance or density are essential for wildlife management and conservation. There are few effective density estimates for the Buff-throated Partridge Tetraophasis szechenyii, a rare and elusive high-mountain Galliform species endemic to western China. In this study, we used the temporary emigration N-mixture model to estimate density of this species, with data acquired from playback point count surveys around a sacred area based on indigenous Tibetan culture of protection of wildlife, in Yajiang County, Sichuan, China, during April-June 2009. Within 84 125-m radius points, we recorded 53 partridge groups during three repeats. The best model indicated that detection probability was described by covariates of vegetation cover type, week of visit, time of day, and weather with weak effects, and a partridge group was present during a sampling period with a constant probability. The abundance component was accounted for by vegetation association. Abundance was substantially higher in rhododendron shrubs, fir-larch forests, mixed spruce-larch-birch forests, and especially oak thickets than in pine forests. The model predicted a density of 5.14 groups/km², which is similar to an estimate of 4.7-5.3 groups/km² quantified via an intensive spot-mapping effort. The post-hoc estimate of individual density was 14.44 individuals/km², based on the estimated mean group size of 2.81. We suggest that the method we employed is applicable to estimate densities of Buff-throated Partridges in large areas. Given the importance of a mosaic habitat for this species, local logging should be regulated. Despite no effect of the conservation area (sacred) on the abundance of Buff-throated Partridges, we suggest regulations linking the sacred mountain conservation area with the official conservation system because of strong local participation facilitated by sacred mountains in land conservation.

  4. A Semianalytical Model Using MODIS Data to Estimate Cell Density of Red Tide Algae (Aureococcus anophagefferens

    Directory of Open Access Journals (Sweden)

    Lingling Jiang

    2016-01-01

    Full Text Available A multiband and a single-band semianalytical model were developed to predict algae cell density distribution. The models were based on cell density (N)-dependent parameterizations of the spectral backscattering coefficients, bb(λ), obtained from in situ measurements. There was a strong relationship between bb(λ) and N, with a minimum regression coefficient of 0.97 at 488 nm and a maximum value of 0.98 at other bands. The cell density calculated by the multiband inversion model was similar to the field measurements of the coastal waters (the average relative error was only 8.9%), but it could not accurately discern the red tide from mixed pixels, and this led to overestimation of the area affected by the red tide. While the single-band inversion model is less precise than the former model in the high chlorophyll water, it could eliminate the impact of the suspended sediments and make more accurate estimates of the red tide area. We concluded that the two models both have advantages and disadvantages; these methods lay the foundation for developing a remote sensing forecasting system for red tides.
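    A single-band inversion of this kind amounts to calibrating the backscattering coefficient bb(λ) against cell density N in situ and then applying the inverted relationship to satellite-derived bb. The sketch below assumes a log-linear calibration purely for illustration; the functional form, wavelengths, and coefficients used in the paper are not reproduced here.

```python
import numpy as np

# Hypothetical in situ calibration data: backscattering at one band vs. cell density
bb_insitu = np.array([0.010, 0.014, 0.021, 0.030, 0.045, 0.066])   # m^-1
N_insitu = np.array([1e3, 2e3, 5e3, 1e4, 2.5e4, 6e4])               # cells/mL

# Fit log10(N) = a * bb + b  (an assumed, illustrative functional form)
a, b = np.polyfit(bb_insitu, np.log10(N_insitu), deg=1)

def cell_density(bb):
    """Invert the calibrated relationship to predict cell density from bb."""
    return 10 ** (a * np.asarray(bb) + b)

print(cell_density([0.012, 0.05]))   # predicted N for two 'satellite' pixels
```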

  5. Classification of motor imagery by means of cortical current density estimation and Von Neumann entropy.

    Science.gov (United States)

    Kamousi, Baharan; Amini, Ali Nasiri; He, Bin

    2007-06-01

    The goal of the present study is to employ the source imaging methods such as cortical current density estimation for the classification of left- and right-hand motor imagery tasks, which may be used for brain-computer interface (BCI) applications. The scalp recorded EEG was first preprocessed by surface Laplacian filtering, time-frequency filtering, noise normalization and independent component analysis. Then the cortical imaging technique was used to solve the EEG inverse problem. Cortical current density distributions of left and right trials were classified from each other by exploiting the concept of Von Neumann entropy. The proposed method was tested on three human subjects (180 trials each) and a maximum accuracy of 91.5% and an average accuracy of 88% were obtained. The present results confirm the hypothesis that source analysis methods may improve accuracy for classification of motor imagery tasks. The present promising results using source analysis for classification of motor imagery enhance our ability to perform source analysis from single-trial EEG data recorded on the scalp, and may have applications to improved BCI systems.
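    The Von Neumann entropy feature itself is straightforward to compute from the eigenvalue spectrum of a covariance-like matrix formed from each trial. A minimal sketch with a made-up trial matrix (the preprocessing chain and cortical imaging steps of the paper are omitted):

```python
import numpy as np

def von_neumann_entropy(M):
    """Entropy -sum(lam * log(lam)) of the normalized eigenvalue spectrum of M M^T."""
    C = M @ M.T                        # symmetric positive semi-definite matrix
    lam = np.linalg.eigvalsh(C)
    lam = np.clip(lam, 0, None)
    lam = lam / lam.sum()              # normalize so the spectrum sums to one
    lam = lam[lam > 0]
    return float(-(lam * np.log(lam)).sum())

rng = np.random.default_rng(0)
trial = rng.normal(size=(32, 200))     # hypothetical sources x time samples
print(von_neumann_entropy(trial))
```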

  6. Comparing adaptive and fixed bandwidth-based kernel density estimates in spatial cancer epidemiology.

    Science.gov (United States)

    Lemke, Dorothea; Mattauch, Volkmar; Heidinger, Oliver; Pebesma, Edzer; Hense, Hans-Werner

    2015-03-31

    Monitoring spatial disease risk (e.g. identifying risk areas) is of great relevance in public health research, especially in cancer epidemiology. A common strategy uses case-control studies and estimates a spatial relative risk function (sRRF) via kernel density estimation (KDE). This study was set up to evaluate the sRRF estimation methods, comparing fixed with adaptive bandwidth-based KDE, and how they were able to detect 'risk areas' with case data from a population-based cancer registry. The sRRF were estimated within a defined area, using locational information on incident cancer cases and on a spatial sample of controls, drawn from a high-resolution population grid recognized as underestimating the resident population in urban centers. The spatial extensions of these areas with underestimated resident population were quantified with population reference data and used in this study as 'true risk areas'. Sensitivity and specificity analyses were conducted by spatial overlay of the 'true risk areas' and the significant (α=.05) p-contour lines obtained from the sRRF. We observed that the fixed bandwidth-based sRRF was distinguished by a conservative behavior in identifying these urban 'risk areas', that is, a reduced sensitivity but increased specificity due to oversmoothing as compared to the adaptive risk estimator. In contrast, the latter appeared more competitive through variance stabilization, resulting in a higher sensitivity, while the specificity was equal as compared to the fixed risk estimator. Halving the originally determined bandwidths led to a simultaneous improvement of sensitivity and specificity of the adaptive sRRF, while the specificity was reduced for the fixed estimator. The fixed risk estimator contrasts with an oversmoothing tendency in urban areas, while overestimating the risk in rural areas. The use of an adaptive bandwidth regime attenuated this pattern, but led in general to a higher false positive rate, because, in our study design
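    The fixed-versus-adaptive contrast can be reproduced in one dimension with a pilot-based, Abramson-style variable bandwidth. The following is a generic sketch on synthetic data, not the registry analysis; the bandwidth factor and the mixture used to mimic a sharp "urban" cluster are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Mixture: a sharp 'urban' cluster plus a diffuse 'rural' background
x = np.concatenate([rng.normal(0.0, 0.1, 300), rng.normal(3.0, 1.0, 300)])
grid = np.linspace(-2, 7, 500)

# Fixed-bandwidth KDE
fixed = gaussian_kde(x, bw_method=0.3)

# Adaptive bandwidths: h_i = h0 * (pilot(x_i) / g)^(-1/2), Abramson's rule
pilot = fixed(x)
g = np.exp(np.mean(np.log(pilot)))        # geometric mean of the pilot densities
h0 = 0.3 * x.std(ddof=1)
h_i = h0 * np.sqrt(g / pilot)

def adaptive_kde(t):
    t = np.asarray(t)[:, None]
    return np.mean(np.exp(-0.5 * ((t - x) / h_i) ** 2) / (h_i * np.sqrt(2 * np.pi)), axis=1)

print("fixed peak:", fixed(grid).max(), "adaptive peak:", adaptive_kde(grid).max())
```

The adaptive estimate resolves the sharp cluster more faithfully because its kernels shrink where the pilot density is high, which mirrors the sensitivity gain reported for the adaptive risk estimator.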

  7. Estimation of ocelot density in the pantanal using capture-recapture analysis of camera-trapping data

    Science.gov (United States)

    Trolle, M.; Kery, M.

    2003-01-01

    Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).
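    With a constant capture probability (the M0 closed-population model), abundance can be obtained by profiling a simple binomial likelihood over N. The sketch below uses invented numbers rather than the ocelot data; the function and its arguments are hypothetical names introduced only for illustration.

```python
import numpy as np
from scipy.special import gammaln

def m0_abundance(n_distinct, total_detections, n_occasions, N_max=500):
    """Profile log-likelihood of the M0 closed-population model over integer N."""
    best_N, best_ll = None, -np.inf
    for N in range(n_distinct, N_max + 1):
        trials = N * n_occasions
        p_hat = total_detections / trials            # conditional MLE of p given N
        ll = (gammaln(N + 1) - gammaln(N - n_distinct + 1)
              + total_detections * np.log(p_hat)
              + (trials - total_detections) * np.log(1 - p_hat))
        if ll > best_ll:
            best_N, best_ll = N, ll
    return best_N

# Hypothetical survey: 10 distinct animals, 18 detections over 20 occasions
print(m0_abundance(n_distinct=10, total_detections=18, n_occasions=20))
```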

  8. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  9. Estimating babassu palm density using automatic palm tree detection with very high spatial resolution satellite images.

    Science.gov (United States)

    Dos Santos, Alessio Moreira; Mitja, Danielle; Delaître, Eric; Demagistri, Laurent; de Souza Miranda, Izildinha; Libourel, Thérèse; Petit, Michel

    2017-05-15

    High spatial resolution images as well as image processing and object detection algorithms are recent technologies that aid the study of biodiversity and commercial plantations of forest species. This paper seeks to contribute knowledge regarding the use of these technologies by studying randomly dispersed native palm trees. Here, we analyze the automatic detection of large circular crown (LCC) palm trees using a high spatial resolution panchromatic GeoEye image (0.50 m) taken over the area of a community of small agricultural farms in the Brazilian Amazon. We also propose auxiliary methods to estimate the density of the LCC palm tree Attalea speciosa (babassu) based on the detection results. We used the "Compt-palm" algorithm based on the detection of palm tree shadows in open areas via mathematical morphology techniques, and the spatial information was validated using field methods (i.e. structural census and georeferencing). The algorithm recognized individuals in life stages 5 and 6, and the extraction percentage, branching factor and quality percentage factors were used to evaluate its performance. A principal components analysis showed that the structure of the studied species differs from other species. Approximately 96% of the babassu individuals in stage 6 were detected. These individuals had significantly smaller stipes than the undetected ones. In turn, 60% of the stage 5 babassu individuals were detected, showing a significantly different total height and number of leaves from the undetected ones. Our calculations regarding resource availability indicate that 6870 ha contained 25,015 adult babassu palm trees, with an annual potential productivity of 27.4 t of almond oil. The detection of LCC palm trees and the implementation of auxiliary field methods to estimate babassu density are an important first step to monitor this industry resource that is extremely important to the Brazilian economy and thousands of families over a large scale.

  10. An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index

    DEFF Research Database (Denmark)

    Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle

    2013-01-01

    We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...

  11. Nonparametric estimate of spectral density functions of sample covariance matrices: A first step

    OpenAIRE

    2012-01-01

    The density function of the limiting spectral distribution of general sample covariance matrices is usually unknown. We propose to use kernel estimators which are proved to be consistent. A simulation study is also conducted to show the performance of the estimators.

  12. An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index

    DEFF Research Database (Denmark)

    Dierckx, G.; Goegebeur, Y.; Guillou, A.

    2013-01-01

    We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency and as...... by a small simulation experiment involving both uncontaminated and contaminated samples. (C) 2013 Elsevier Inc. All rights reserved....

  13. The Z3 model with the density of states method

    CERN Document Server

    Mercado, Ydalia Delgado; Gattringer, Christof

    2014-01-01

    In this contribution we apply a new variant of the density of states method to the Z3 spin model at finite density. We use restricted expectation values evaluated with Monte Carlo simulations and study their dependence on a control parameter lambda. We show that a sequence of one-parameter fits to the Monte-Carlo data as a function of lambda is sufficient to completely determine the density of states. We expect that this method has smaller statistical errors than other approaches since all generated Monte Carlo data are used in the determination of the density. We compare results for magnetization and susceptibility to a reference simulation in the dual representation of the Z3 spin model and find good agreement for a wide range of parameters.

  14. From the density-of-states method to finite density quantum field theory

    CERN Document Server

    Langfeld, Kurt

    2016-01-01

    During the last 40 years, Monte Carlo calculations based upon Importance Sampling have matured into the most widely employed method for determining first-principles results in QCD. Nevertheless, Importance Sampling leads to spectacular failures in situations in which certain rare configurations play a non-secondary role, as is the case for Yang-Mills theories near a first order phase transition or quantum field theories at finite matter density when studied with the re-weighting method. The density-of-states method in its LLR formulation has the potential to solve such overlap or sign problems by means of an exponential error suppression. We here introduce the LLR approach and its generalisation to complex action systems. Applications include U(1), SU(2) and SU(3) gauge theories as well as the Z3 spin model at finite densities and heavy-dense QCD.

  15. Statistical Analysis of Photopyroelectric Signals using Histogram and Kernel Density Estimation for differentiation of Maize Seeds

    Science.gov (United States)

    Rojas-Lima, J. E.; Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2016-09-01

    Considering the necessity of photothermal alternative approaches for characterizing nonhomogeneous materials like maize seeds, the objective of this research work was to analyze statistically the amplitude variations of photopyroelectric signals, by means of nonparametric techniques such as the histogram and the kernel density estimator, and the probability density function of the amplitude variations of two genotypes of maize seeds with different pigmentations and structural components: crystalline and floury. To determine if the probability density function had a known parametric form, the histogram was determined, which did not present a known parametric form, so the kernel density estimator using the Gaussian kernel, with an efficiency of 95% in density estimation, was used to obtain the probability density function. The results obtained indicated that maize seeds could be differentiated in terms of the statistical values for floury and crystalline seeds such as the mean (93.11, 159.21), variance (1.64 × 10³, 1.48 × 10³), and standard deviation (40.54, 38.47) obtained from the amplitude variations of photopyroelectric signals in the case of the histogram approach. For the case of the kernel density estimator, seeds can be differentiated in terms of kernel bandwidth or smoothing constant h of 9.85 and 6.09 for floury and crystalline seeds, respectively.
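    Both nonparametric tools used here are available off the shelf. The snippet below sketches the histogram/KDE comparison on synthetic amplitude data standing in for the photopyroelectric measurements, which are not reproduced; the sample sizes and distributions are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Synthetic stand-ins for the amplitude variations of the two genotypes
floury = rng.normal(93.0, 40.0, 400)
crystalline = rng.normal(159.0, 38.0, 400)

for name, sample in [("floury", floury), ("crystalline", crystalline)]:
    counts, edges = np.histogram(sample, bins="auto", density=True)   # histogram estimate
    kde = gaussian_kde(sample)                                        # Gaussian-kernel estimate
    bw = kde.factor * sample.std(ddof=1)                              # bandwidth in data units
    print(f"{name}: mean={sample.mean():.1f}, sd={sample.std(ddof=1):.1f}, "
          f"KDE bandwidth h ~ {bw:.2f}, {len(counts)} histogram bins")
```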

  16. Urinary density measurement and analysis methods in neonatal unit care

    Directory of Open Access Journals (Sweden)

    Maria Vera Lúcia Moreira Leitão Cardoso

    2013-09-01

    Full Text Available The objective was to assess urine collection methods through cotton in contact with genitalia and urinary collector to measure urinary density in newborns. This is a quantitative intervention study carried out in a neonatal unit of Fortaleza-CE, Brazil, in 2010. The sample consisted of 61 newborns randomly chosen to compose the study group. Most neonates were full term (31/50.8%) and male (33/54%). Data on urinary density measurement through the methods of cotton and collector presented statistically significant differences (p<0.05). The analysis of interquartile ranges between subgroups resulted in statistical differences between urinary collector/reagent strip (1005) and cotton/reagent strip (1010); however, there was no difference between urinary collector/refractometer (1008) and cotton/refractometer. Therefore, further research should be conducted with larger sampling using methods investigated in this study and, whenever possible, comparing urine density values to laboratory tests.

  17. On methods of estimating cosmological bulk flows

    CERN Document Server

    Nusser, Adi

    2015-01-01

    We explore similarities and differences between several estimators of the cosmological bulk flow, $\\bf B$, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of $\\bf B$ as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt at inferring $\\bf B$ for either of these definitions which coincide only for a constant velocity field. We focus on the Wiener Filtering (WF, Hoffman et al. 2015) and the Constrained Minimum Variance (CMV,Feldman et al. 2010) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute $\\bf B$ in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer $\\bf B$ directly from the observed velocities for the second definition of $\\bf B$. The WF ...

  18. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  19. Glacial density and GIA in Alaska estimated from ICESat, GPS and GRACE measurements

    Science.gov (United States)

    Jin, Shuanggen; Zhang, T. Y.; Zou, F.

    2017-01-01

    The density of glacial volume change in Alaska is a key factor in estimating the glacier mass loss from altimetry observations. However, the density of Alaskan glaciers has large uncertainty due to the lack of in situ measurements. In this paper, using the measurements of Ice, Cloud, and land Elevation Satellite (ICESat), Global Positioning System (GPS), and Gravity Recovery and Climate Experiment (GRACE) from 2003 to 2009, an optimal density of glacial volume change of 750 kg/m³ is estimated for the first time to fit the measurements. The glacier mass loss is -57.5 ± 6.5 Gt, obtained by converting the volumetric change from ICESat with the estimated density of 750 kg/m³. Based on the empirical relation, the depth-density profiles are constructed, which show glacial density variation information with depth in Alaska. By separating the glacier mass loss from glacial isostatic adjustment (GIA) effects in GPS uplift rates and GRACE total water storage trends, the GIA uplift rates are estimated in Alaska. The best fitting model consists of a 60 km elastic lithosphere and 110 km thick asthenosphere with a viscosity of 2.0 × 10¹⁹ Pa s over a two-layer mantle.
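    At its core, the density fit is a one-parameter least-squares problem: choose the volume-to-mass conversion density rho so that the ICESat volume changes, scaled by rho, best match the GIA-corrected GRACE mass changes. A toy version with made-up numbers is shown below (1 Gt corresponds to 1 km³ of water, so a slope in Gt/km³ converts to kg/m³ by multiplying by 1000); the data values are invented for illustration only.

```python
import numpy as np

# Hypothetical annual series (not the actual mission data)
volume_change = np.array([-85.0, -60.0, -95.0, -70.0, -80.0])   # km^3, from ICESat
mass_change = np.array([-64.0, -45.5, -70.8, -52.1, -60.9])     # Gt, GRACE minus GIA

# Closed-form least squares for mass_i ~ rho * volume_i
rho_gt_per_km3 = np.sum(volume_change * mass_change) / np.sum(volume_change**2)
print(f"effective density ~ {rho_gt_per_km3 * 1000:.0f} kg/m^3")
```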

  20. Estimating population density and connectivity of American mink using spatial capture-recapture

    Science.gov (United States)

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km2 area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  1. Wavelet Optimal Estimations for Density Functions under Severely Ill-Posed Noises

    Directory of Open Access Journals (Sweden)

    Rui Li

    2013-01-01

    Full Text Available Motivated by Lounici and Nickl's work (2011), this paper considers the problem of estimation of a density f based on an independent and identically distributed sample Y1,…,Yn from g=f*φ. We show a wavelet optimal estimation for a density (function) over a Besov ball Br,qs(L) and Lp risk (1 ≤ p < ∞) in the presence of severely ill-posed noises. A wavelet linear estimation is firstly presented. Then, we prove a lower bound, which shows that our wavelet estimator is optimal. In other words, nonlinear wavelet estimations are not needed in that case. It turns out that our results extend some theorems of Pensky and Vidakovic (1999), as well as Fan and Koo (2002).

  2. Estimating detection and density of the Andean cat in the high Andes

    Science.gov (United States)

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  3. Application of Density Estimation Methods to Datasets from a Glider

    Science.gov (United States)

    2014-09-30

    Figure caption (recovered from the report excerpt): odontocete clicks, likely sperm whales; the noisy patch between 5.6 and 7.1 seconds corresponds to glider self-noise.

  4. Gaussian regression and power spectral density estimation with missing data: The MICROSCOPE space mission as a case study

    CERN Document Server

    Baghi, Quentin; Bergé, Joël; Christophe, Bruno; Touboul, Pierre; Rodrigues, Manuel

    2016-01-01

    We present a Gaussian regression method for time series with missing data and stationary residuals of unknown power spectral density (PSD). The missing data are efficiently estimated by their conditional expectation as in universal Kriging, based on the circulant approximation of the complete data covariance. After initialization with an autoregressive fit of the noise, a few iterations of estimation/reconstruction steps are performed until convergence of the regression and PSD estimates, in a way similar to the expectation-conditional-maximization algorithm. The estimation can be performed for an arbitrary PSD provided that it is sufficiently smooth. The algorithm is developed in the framework of the MICROSCOPE space mission whose goal is to test the weak equivalence principle (WEP) with a precision of $10^{-15}$. We show by numerical simulations that the developed method allows us to meet three major requirements: to maintain the targeted precision of the WEP test in spite of the loss of data, to calculate a...

  5. Failure Analysis of Wind Turbines by Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W.F.

    2013-01-01

    The aim of this study is to present an efficient and accurate method for estimation of the failure probability of wind turbine structures which work under turbulent wind load. The classical method for this is to fit one of the extreme value probability distribution functions to the extracted maxima...

  6. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  7. Effects of tissue heterogeneity on the optical estimate of breast density

    Science.gov (United States)

    Taroni, Paola; Pifferi, Antonio; Quarto, Giovanna; Spinelli, Lorenzo; Torricelli, Alessandro; Abbate, Francesca; Balestreri, Nicola; Ganino, Serena; Menna, Simona; Cassano, Enrico; Cubeddu, Rinaldo

    2012-01-01

    Breast density is a recognized strong and independent risk factor for developing breast cancer. At present, breast density is assessed based on the radiological appearance of breast tissue, thus relying on the use of ionizing radiation. We have previously obtained encouraging preliminary results with our portable instrument for time domain optical mammography performed at 7 wavelengths (635–1060 nm). In that case, information was averaged over four images (cranio-caudal and oblique views of both breasts) available for each subject. In the present work, we tested the effectiveness of just one or few point measurements, to investigate if tissue heterogeneity significantly affects the correlation between optically derived parameters and mammographic density. Data show that parameters estimated through a single optical measurement correlate strongly with mammographic density estimated by using BIRADS categories. A central position is optimal for the measurement, but its exact location is not critical. PMID:23082283

  8. Courant-Snyder invariant density screening method for emittance analysis

    Institute of Scientific and Technical Information of China (English)

    SUN Ji-Lei; TANG Jing-Yu; JING Han-Tao

    2011-01-01

    Emittance is an important characteristic for describing charged particle beams. In hadron accelerators, we often meet irregular beam distributions that are not appropriately described by a single rms emittance or 95% emittance or total emittance. In this paper, it is pointed out that in many cases a beam halo should be described with very different Courant-Snyder parameters from the ones used for the beam core. A new method - the Courant-Snyder invariant density screening method - is introduced for analyzing emittance data clearly and accurately. The method treats the emittance data from both measurements and numerical simulations. The method uses the statistical distribution of the beam around each particle in phase space to mark its local density parameter, and then uses the density distribution to calculate the beam parameters such as the Courant-Snyder parameters and emittance for different beam boundary definitions. The method has been used in the calculations for beams from different sources, and shows its advantages over other methods. An application code based on the method, including a graphic interface, has also been designed.

  9. Contribution to the Nonparametric Estimation of the Density of the Regression Errors (Doctoral Thesis)

    CERN Document Server

    LSTA, Rawane Samb

    2010-01-01

    This thesis deals with the nonparametric estimation of the density f of the regression error term E in the model Y=m(X)+E, assuming its independence of the covariate X. The difficulty linked to this study is the fact that the regression error E is not observed. In such a setup, it would be unwise, for estimating f, to use a conditional approach based upon the probability distribution function of Y given X. Indeed, this approach is affected by the curse of dimensionality, so that the resulting estimator of the residual term E would have a considerably slow rate of convergence if the dimension of X is very high. Two approaches are proposed in this thesis to avoid the curse of dimensionality. The first approach uses the estimated residuals, while the second integrates a nonparametric conditional density estimator of Y given X. While proceeding in this way circumvents the curse of dimensionality, a challenging issue is to evaluate the impact of the estimated residuals on the final estimator of the density f. We will also at...
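    The first, residual-based approach can be illustrated in a few lines: estimate the regression function nonparametrically, form the residuals, then apply a kernel density estimator to them. The sketch below uses a simple Nadaraya-Watson regression and synthetic data; it is only an illustration of the idea, not the thesis' estimators or convergence rates, and the bandwidth value is an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)
n = 1000
X = rng.uniform(-2, 2, n)
E = rng.laplace(0, 0.5, n)                 # true error density to be recovered
Y = np.sin(2 * X) + E                      # Y = m(X) + E, with m unknown to the estimator

def nw_regression(x0, X, Y, h=0.2):
    """Nadaraya-Watson estimate of m(x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

m_hat = np.array([nw_regression(x, X, Y) for x in X])
residuals = Y - m_hat                      # estimated errors
f_hat = gaussian_kde(residuals)            # kernel density estimate of the error density

grid = np.linspace(-3, 3, 5)
print(np.round(f_hat(grid), 3))
```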

  10. Method to reduce dislocation density in silicon using stress

    Science.gov (United States)

    Buonassisi, Anthony; Bertoni, Mariana; Argon, Ali; Castellanos, Sergio; Fecych, Alexandria; Powell, Douglas; Vogl, Michelle

    2013-03-05

    A crystalline material structure with reduced dislocation density and method of producing same is provided. The crystalline material structure is annealed at temperatures above the brittle-to-ductile transition temperature of the crystalline material structure. One or more stress elements are formed on the crystalline material structure so as to annihilate dislocations or to move them into less harmful locations.

  11. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    Science.gov (United States)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
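    The idea of binning with hash tables is easy to reproduce: only occupied bins are stored, keyed by their integer bin indices, so memory grows with the number of occupied cells rather than with the full grid. The dictionary-based Python sketch below is illustrative only; the paper's BASH tables are implemented in C++, and the function names and bin counts here are assumptions.

```python
import numpy as np
from collections import defaultdict

def sparse_histogram(data, low, high, bins_per_dim):
    """Bin d-dimensional points into a dict keyed by bin-index tuples."""
    data = np.asarray(data, dtype=float)
    width = (high - low) / bins_per_dim
    table = defaultdict(int)
    for row in data:
        idx = tuple(((row - low) // width).astype(int).clip(0, bins_per_dim - 1))
        table[idx] += 1
    return table, width

def density(table, width, n_total, point, low):
    """Density estimate at a point: count in its bin / (n * bin volume)."""
    idx = tuple(((np.asarray(point) - low) // width).astype(int))
    volume = float(np.prod(width))
    return table.get(idx, 0) / (n_total * volume)

rng = np.random.default_rng(4)
X = rng.normal(0.0, 1.0, size=(10_000, 5))                  # 5-dimensional sample
low, high = X.min(axis=0), X.max(axis=0)
table, width = sparse_histogram(X, low, high, bins_per_dim=20)
print(len(table), "occupied bins out of", 20**5, "possible")
print(density(table, width, len(X), np.zeros(5), low))
```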

  12. Statistical Analysis of the Spectral Density Estimate Obtained via Coifman Scaling Function

    OpenAIRE

    2007-01-01

    The spectral density, defined as the Fourier transform of the covariance sequence of a stationary random process, determines the characteristics of the process and enables analysis of its structure. Thus, one of the main problems in time series analysis is constructing consistent estimates of the spectral density from successive observations of a stationary random process taken at equal intervals of time. This article is devoted to the investigation of problems dealing with the application of wavelet anal...

  13. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier identification method based on super-pixel density clustering of cluster centers is proposed. The pixel location coordinates and gray values are used to compute density and distance, from which automatic image classification and outlier extraction are achieved. Because the large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations; a normalized density-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention, computes faster than density clustering on raw pixels, and can effectively perform automated image classification and outlier extraction.
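    The density-and-distance criterion described here follows the density-peaks idea: each point receives a local density and the distance to the nearest point of higher density, and points scoring high on both are taken as cluster centers. Below is a generic sketch on raw feature vectors (the super-pixel preprocessing is omitted, and the cutoff radius is an assumed parameter, not a value from the paper).

```python
import numpy as np

def density_peaks(X, dc):
    """Return local density rho and distance-to-higher-density delta for each point."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1                  # neighbours within the cutoff radius
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
rho, delta = density_peaks(X, dc=0.5)
score = (rho / rho.max()) * (delta / delta.max())   # normalized decision criterion
print("candidate cluster centers:", np.argsort(score)[-2:])
```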

  14. Statistical methods of estimating mining costs

    Science.gov (United States)

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
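    Relationships of the Taylor's Rule type are usually re-estimated as power laws by ordinary least squares on logarithms. The sketch below uses invented deposit data; the actual coefficients and exponents published by the USGS are not reproduced here.

```python
import numpy as np

# Hypothetical deposits: ore tonnage (Mt) and realized operating rate (kt/day)
tonnage = np.array([5, 12, 30, 80, 150, 400], dtype=float)
op_rate = np.array([1.4, 2.7, 5.5, 11.0, 17.0, 36.0])

# Fit op_rate = a * tonnage^b by regressing log(op_rate) on log(tonnage)
b, log_a = np.polyfit(np.log(tonnage), np.log(op_rate), deg=1)
a = np.exp(log_a)
print(f"op_rate ~ {a:.2f} * tonnage^{b:.2f}")
print("predicted rate for a 60 Mt deposit:", a * 60**b)
```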

  15. Spatial Variation in Tree Density and Estimated Aboveground Carbon Stocks in Southern Africa

    Directory of Open Access Journals (Sweden)

    Lulseged Tamene

    2016-03-01

    Full Text Available Variability in woody plant species, vegetation assemblages and anthropogenic activities derails the efforts to have common approaches for estimating biomass and carbon stocks in Africa. In order to suggest management options, it is important to understand the vegetation dynamics and the major drivers governing the observed conditions. This study uses data from 29 sentinel landscapes (4640 plots) across southern Africa. We used the T-square distance method to sample trees. Allometric models were used to estimate aboveground tree biomass, from which aboveground biomass carbon stock (AGBCS) was derived for each site. Results show an average tree density of 502 trees·ha⁻¹, with semi-arid areas having the highest (682 trees·ha⁻¹) and arid regions the lowest (393 trees·ha⁻¹). The overall AGBCS was 56.4 Mg·ha⁻¹. However, significant site to site variability existed across the region. Over 60-fold differences were noted between the lowest AGBCS (2.2 Mg·ha⁻¹) in the Musungwa plains of Zambia and the highest (138.1 Mg·ha⁻¹) in the scrublands of Kenilworth in Zimbabwe. Semi-arid and humid sites had higher carbon stocks than sites in sub-humid and arid regions. Anthropogenic activities also influenced the observed carbon stocks. Repeated measurements would reveal future trends in tree cover and carbon stocks across different systems.

  16. NEAR INFRARED SPECTROSCOPY FOR ESTIMATING SUGARCANE BAGASSE CONTENT IN MEDIUM DENSITY FIBERBOARD

    Directory of Open Access Journals (Sweden)

    Ugo Leandro Belini

    2011-04-01

    Full Text Available Medium density fiberboard (MDF) is an engineered wood product formed by breaking down selected lignin-cellulosic material residuals into fibers, combining them with wax and a resin binder, and then forming panels by applying high temperature and pressure. Because the raw material in the industrial process is ever-changing, the panel industry requires methods for monitoring the composition of their products. The aim of this study was to estimate the ratio of sugarcane (SC) bagasse to Eucalyptus wood in MDF panels using near infrared (NIR) spectroscopy. Principal component analysis (PCA) and partial least squares (PLS) regressions were performed. MDF panels having different bagasse contents were easily distinguished from each other by the PCA of their NIR spectra, with clearly different patterns of response. The PLS-R models for SC content of these MDF samples presented a strong coefficient of determination (0.96) between the NIR-predicted and lab-determined values and a low standard error of prediction (~1.5%) in the cross-validations. A key role of resins (adhesives), cellulose, and lignin for such PLS-R calibrations was shown. The PLS-DA model correctly classified ninety-four percent of MDF samples by cross-validation and ninety-eight percent of the panels by an independent test set. These NIR-based models can be useful to quickly estimate the sugarcane bagasse vs. Eucalyptus wood content ratio in unknown MDF samples and to verify the quality of these engineered wood products in an online process.

  17. Estimate of the density of Eucalyptus grandis W. Hill ex Maiden using near infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Silviana Rosso

    2013-12-01

    Full Text Available This study aimed to analyze the use of near infrared spectroscopy (NIRS) to estimate the wood density of Eucalyptus grandis. For that, 66 27-year-old trees were logged and central planks were removed from each log. Test pieces 2.5 x 2.5 x 5.0 cm in size were removed from the base of each plank, in the pith-bark direction, and subjected to determination of bulk and basic density at 12% moisture (dry basis), followed by spectral readings in the radial, tangential and transverse directions using a Bruker Tensor 37 infrared spectrophotometer. The calibration to estimate wood density was developed based on the matrix of spectra obtained from the radial face, containing 216 samples. The partial least squares regression to estimate the bulk wood density of Eucalyptus grandis provided a coefficient of determination of validation of 0.74 and a ratio of performance to deviation of 2.29. Statistics relating to the predictive models had adequate magnitudes for estimating wood density from unknown samples, indicating that the above technique has potential for use in replacement of conventional testing.
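    The calibration step, a partial least squares regression of density on NIR spectra, can be sketched with scikit-learn. The spectra below are synthetic placeholders rather than the Eucalyptus measurements, and the number of components is an arbitrary choice; only the general workflow (fit, cross-validate, report R² and RPD) is illustrated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n_samples, n_wavelengths = 216, 300

# Synthetic 'spectra': a latent component linearly related to density, plus noise
latent = rng.normal(size=n_samples)
X = np.outer(latent, rng.normal(size=n_wavelengths)) + 0.3 * rng.normal(size=(n_samples, n_wavelengths))
y = 0.55 + 0.08 * latent + 0.02 * rng.normal(size=n_samples)      # bulk density, g/cm^3

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
rpd = y.std(ddof=1) / np.std(y - y_cv, ddof=1)                    # ratio of performance to deviation
print(f"cross-validated R^2 = {r2:.2f}, RPD = {rpd:.2f}")
```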

  18. An empirical evaluation of camera trapping and spatially explicit capture-recapture models for estimating chimpanzee density.

    Science.gov (United States)

    Després-Einspenner, Marie-Lyne; Howe, Eric J; Drapeau, Pierre; Kühl, Hjalmar S

    2017-03-07

    Empirical validations of survey methods for estimating animal densities are rare, despite the fact that only an application to a population of known density can demonstrate their reliability under field conditions and constraints. Here, we present a field validation of camera trapping in combination with spatially explicit capture-recapture (SECR) methods for enumerating chimpanzee populations. We used 83 camera traps to sample a habituated community of western chimpanzees (Pan troglodytes verus) of known community and territory size in Taï National Park, Ivory Coast, and estimated community size and density using spatially explicit capture-recapture models. We aimed to: (1) validate camera trapping as a means to collect capture-recapture data for chimpanzees; (2) validate SECR methods to estimate chimpanzee density from camera trap data; (3) compare the efficacy of targeting locations frequently visited by chimpanzees versus deploying cameras according to a systematic design; (4) evaluate the performance of SECR estimators with reduced sampling effort; and (5) identify sources of heterogeneity in detection probabilities. Ten months of camera trapping provided abundant capture-recapture data. All weaned individuals were detected, most of them multiple times, at both an array of targeted locations, and a systematic grid of cameras positioned randomly within the study area, though detection probabilities were higher at targeted locations. SECR abundance estimates were accurate and precise, and analyses of subsets of the data indicated that the majority of individuals in a community could be detected with as few as five traps deployed within their territory. Our results highlight the potential of camera trapping for cost-effective monitoring of chimpanzee populations.

  19. Histogram method in finite density QCD with phase quenched simulations

    CERN Document Server

    Nakagawa, Y; Aoki, S; Kanaya, K; Ohno, H; Saito, H; Hatsuda, T; Umeda, T

    2011-01-01

    We propose a new approach to finite density QCD based on a histogram method with phase quenched simulations at finite chemical potential. Integrating numerically the derivatives of the logarithm of the quark determinant with respect to the chemical potential, we calculate the reweighting factor and the complex phase of the quark determinant. The complex phase is handled with a cumulant expansion to avoid the sign problem. We examine the applicability of this method.

  20. Estimating the amount and distribution of radon flux density from the soil surface in China.

    Science.gov (United States)

    Zhuo, Weihai; Guo, Qiuju; Chen, Bo; Cheng, Guan

    2008-07-01

    Based on an idealized model, both the annual and the seasonal radon (²²²Rn) flux densities from the soil surface at 1099 sites in China were estimated by linking a database of soil ²²⁶Ra content and a global ecosystems database. Digital maps of the ²²²Rn flux density in China were constructed at a spatial resolution of 25 km × 25 km by interpolation among the estimated data. An area-weighted annual average ²²²Rn flux density from the soil surface across China was estimated to be 29.7 ± 9.4 mBq m⁻² s⁻¹. Both regional and seasonal variations in the ²²²Rn flux densities are significant in China. Annual average flux densities in the southeastern and northwestern China are generally higher than those in other regions of China, because of high soil ²²⁶Ra content in the southeastern area and high soil aridity in the northwestern one. The seasonal average flux density is generally higher in summer/spring than winter, since relatively higher soil temperature and lower soil water saturation in summer/spring than other seasons are common in China.

  1. PEDO-TRANSFER FUNCTIONS FOR ESTIMATING SOIL BULK DENSITY IN CENTRAL AMAZONIA

    Directory of Open Access Journals (Sweden)

    Henrique Seixas Barros

    2015-04-01

    Full Text Available Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density based on parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R²) and Akaike information criterion (AIC) and were validated with a dataset consisting of 125 plots different from those used to generate the models. The model with best performance in estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables and had R² = 0.73 and AIC = -250.29. The performance of this model for predicting soil density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
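    The model-building workflow (fit candidate regressions, compare R² and AIC) can be sketched with plain least squares. The coefficients below are fitted to made-up data and are not the published pedo-transfer function; the AIC expression used is one common Gaussian-likelihood form, and all variable names are illustrative.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with intercept; returns coefficients, R2 and AIC."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    rss = float(resid @ resid)
    r2 = 1 - rss / float(((y - y.mean()) ** 2).sum())
    n, k = len(y), A.shape[1]
    aic = n * np.log(rss / n) + 2 * k        # one common Gaussian-likelihood form of AIC
    return beta, r2, aic

rng = np.random.default_rng(7)
n = 140
clay = rng.uniform(20, 80, n)               # clay content, %
ph = rng.uniform(3.5, 5.5, n)               # pH in water
bd = 1.6 - 0.008 * clay + 0.05 * ph + rng.normal(0, 0.07, n)   # synthetic bulk density, g/cm^3

beta, r2, aic = fit_ols(np.column_stack([clay, ph]), bd)
print("coefficients:", np.round(beta, 3), "R2:", round(r2, 2), "AIC:", round(aic, 1))
```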

  2. Probability Density Evolution Analysis for Stochastic Dynamic Seismic Responses of Structures Based on Improved Point Estimation Method

    Institute of Scientific and Technical Information of China (English)

    宋鹏彦; 吕大刚; 于晓辉; 王光远

    2014-01-01

    In order to obtain the law of probability densities of structural responses varying with time, a new moment-based approach for the analysis of probability density evolution of nonlinear stochastic dynamic responses of structures was developed by combining an improved point estimation method (IPEM) with the maximum entropy principle and the probability density evolution theory for stochastic structural dynamics. The proposed method was then used to perform probability density evolution analysis and parameter sensitivity analysis of a reinforced concrete (RC) frame structure designed according to Chinese codes, selecting the top displacement and the global seismic damage index of the structure under earthquake excitation as response parameters and taking into account the uncertainty of structural parameters. The results show that the steel yield strength, the structural damping, and the concrete unit weight have dominant influences on the structural displacement response, with sensitivities exceeding 10%.

  3. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien

    2015-12-30

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
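    One simple way to score candidate cycle lengths against the green-start transition times recovered from probe trajectories is phase folding: wrap the transition times onto a circle for each candidate period and measure how tightly they cluster. This is a generic illustration only, not the system's actual scoring function; the cycle length, noise level, and candidate grid are assumptions.

```python
import numpy as np

def cycle_score(transition_times, period):
    """Rayleigh-style concentration of transition times folded at a candidate period."""
    phase = 2 * np.pi * (np.asarray(transition_times) % period) / period
    return np.hypot(np.cos(phase).mean(), np.sin(phase).mean())

rng = np.random.default_rng(8)
true_cycle = 90.0                                                   # seconds (hypothetical)
starts = np.arange(0, 3600, true_cycle) + rng.normal(0, 2.0, 40)    # noisy green starts

candidates = np.arange(40.0, 140.0, 0.5)
scores = [cycle_score(starts, c) for c in candidates]
print("estimated cycle length:", candidates[int(np.argmax(scores))], "s")
```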

  4. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    Directory of Open Access Journals (Sweden)

    Eléanor Brassine

    Full Text Available Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly, our approach can easily be applied to other rare predator species.

  5. High-order ionospheric effects on electron density estimation from Fengyun-3C GPS radio occultation

    Science.gov (United States)

    Li, Junhai; Jin, Shuanggen

    2017-03-01

    GPS radio occultation can estimate ionospheric electron density and total electron content (TEC) with high spatial resolution, e.g., China's recent Fengyun-3C GPS radio occultation. However, high-order ionospheric delays are normally ignored. In this paper, the high-order ionospheric effects on electron density estimation from the Fengyun-3C GPS radio occultation data are estimated and investigated using the NeQuick2 ionosphere model and the IGRF12 (International Geomagnetic Reference Field, 12th generation) geomagnetic model. Results show that the high-order ionospheric delays have large effects on electron density estimation with up to 800 el cm-3, which should be corrected in high-precision ionospheric density estimation and applications. The second-order ionospheric effects are more significant, particularly at 250-300 km, while third-order ionospheric effects are much smaller. Furthermore, the high-order ionospheric effects are related to the location, the local time, the radio occultation azimuth and the solar activity. The large high-order ionospheric effects are found in the low-latitude area and in the daytime as well as during strong solar activities. The second-order ionospheric effects have a maximum positive value when the radio occultation azimuth is around 0-20°, and a maximum negative value when the radio occultation azimuth is around -180 to -160°. Moreover, the geomagnetic storm also affects the high-order ionospheric delay, which should be carefully corrected.

  6. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    Science.gov (United States)

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species.

  7. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km² during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.
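
    The spatial ingredient of such a model is a detection function linking each individual's latent activity centre to every trap. A minimal simulation sketch of that ingredient (a half-normal detection function; the grid layout, parameter values and the use of Python/NumPy are illustrative assumptions, and the Bayesian data-augmentation fit itself is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 25 traps on a unit-spaced grid, 40 simulated individuals.
traps = np.array([(x, y) for x in range(5) for y in range(5)], dtype=float)
centres = rng.uniform(-1, 6, size=(40, 2))      # latent activity centres
p0, sigma, occasions = 0.3, 0.8, 90             # assumed detection parameters

# Half-normal detection probability for every individual-trap pair.
d2 = ((centres[:, None, :] - traps[None, :, :]) ** 2).sum(axis=-1)
p = p0 * np.exp(-d2 / (2 * sigma ** 2))

# Simulated capture counts over all occasions.
captures = rng.binomial(occasions, p)
detected = (captures.sum(axis=1) > 0).sum()
print(f"{detected} of 40 simulated individuals detected at least once")
```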

  8. Estimation of current density distribution of PAFC by analysis of cell exhaust gas

    Energy Technology Data Exchange (ETDEWEB)

    Kato, S.; Seya, A. [Fuji Electric Co., Ltd., Ichihara-shi (Japan); Asano, A. [Fuji Electric Corporate, Ltd., Yokosuka-shi (Japan)

    1996-12-31

    Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for producing fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in a cell plane calculated by the simulation was verified.

  9. Minimum entropy density method for the time series analysis

    Science.gov (United States)

    Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae

    2009-01-01

    The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale at which the uncertainty is minimized and hence the underlying pattern is most clearly revealed. The MEDM is applied to the financial time series of the Standard and Poor’s 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.

  10. Minimum Entropy Density Method for the Time Series Analysis

    CERN Document Server

    Lee, J W; Moon, H T; Park, J B; Yang, J S; Jo, Hang-Hyun; Lee, Jeong Won; Moon, Hie-Tae; Park, Joongwoo Brian; Yang, Jae-Suk

    2006-01-01

    The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the most correlated time interval of a given time series and define the effective delay of information (EDI) as the correlation length that minimizes the entropy density in relation to the velocity of information flow. The MEDM is applied to the financial time series of the Standard and Poor's 500 (S&P500) index from February 1983 to April 2006. It is found that the EDI of the S&P500 index has decreased over the last twenty years, which suggests that U.S. market dynamics have moved closer to the efficient market hypothesis.

  11. A Modified Extended Bayesian Method for Parameter Estimation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the a priori estimation is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be automatically updated during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, this new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
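
    A hedged sketch of the iterative idea described above, in which the prior (regularization) mean at each step is the estimate from the previous step; the fixed weight alpha and the linear test problem are hypothetical, and the paper's covariance-matrix update is not reproduced:

```python
import numpy as np

def modified_extended_bayes(A, y, alpha=0.1, n_iter=20):
    """Tikhonov-type iteration whose prior mean is the previous estimate."""
    n = A.shape[1]
    theta = np.zeros(n)                          # initial prior mean
    for _ in range(n_iter):
        # (A^T A + alpha I) theta_new = A^T y + alpha * theta_previous
        lhs = A.T @ A + alpha * np.eye(n)
        rhs = A.T @ y + alpha * theta
        theta = np.linalg.solve(lhs, rhs)
    return theta

# Hypothetical, nearly collinear test problem.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
A[:, 4] = A[:, 3] + 1e-4 * rng.normal(size=50)   # makes plain least squares unstable
y = A @ np.array([1.0, -2.0, 0.5, 3.0, 3.0]) + 0.01 * rng.normal(size=50)
print(modified_extended_bayes(A, y))
```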

  12. ICA Model Order Estimation Using Clustering Method

    Directory of Open Access Journals (Sweden)

    P. Sovka

    2007-12-01

    Full Text Available In this paper a novel approach for independent component analysis (ICA) model order estimation of movement electroencephalogram (EEG) signals is described. The application is targeted at brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). Selecting only movement-related ICs might lead to an increase in the BCI EEG classification score. The real number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) for estimation of the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring the higher-order signal statistics. In this work, we use another approach - selection of highly correlated ICs from several ICA runs. The ICA model order estimation is done at significance level α = 0.05, and the estimated model order is more or less dependent on the ICA algorithm and its parameters.
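
    One plausible reading of the "highly correlated ICs from several ICA runs" step, sketched with scikit-learn's FastICA; the number of runs, the correlation threshold and the stability rule are illustrative choices, not values taken from the paper:

```python
import numpy as np
from sklearn.decomposition import FastICA

def estimate_model_order(X, max_components=10, n_runs=5, corr_threshold=0.95):
    """Count components that reappear (are highly correlated) across ICA runs."""
    runs = []
    for seed in range(n_runs):
        ica = FastICA(n_components=max_components, random_state=seed, max_iter=1000)
        runs.append(ica.fit_transform(X))        # sources, shape (n_samples, n_comp)
    reference, stable = runs[0], 0
    for k in range(max_components):
        reproduced = 0
        for other in runs[1:]:
            # correlation of reference component k with every component of the other run
            corr = np.abs(np.corrcoef(reference[:, k], other.T)[0, 1:])
            if corr.max() >= corr_threshold:
                reproduced += 1
        if reproduced == n_runs - 1:             # seen in every run -> a stable source
            stable += 1
    return stable
```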

  13. A QUALITATIVE METHOD TO ESTIMATE HSI DISPLAY COMPLEXITY

    Directory of Open Access Journals (Sweden)

    JACQUES HUGO

    2013-04-01

    Full Text Available There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increase. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning, will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  14. Application of texture analysis method for mammogram density classification

    Science.gov (United States)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
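
    A minimal sketch of the selection-plus-classification stage only, with synthetic vectors standing in for the extracted texture features; the feature count, k, classifier settings and cross-validation scheme are all hypothetical:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: each row is a texture feature vector (e.g. GLCM/LBP/wavelet
# statistics) extracted from one mammogram; labels are density classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))
y = rng.integers(0, 3, size=120)                 # fatty / glandular / dense

for name, clf in [("ANN", MLPClassifier(max_iter=2000)), ("SVM", SVC())]:
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```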

  15. Auxiliary Density Matrix Methods for Hartree-Fock Exchange Calculations.

    Science.gov (United States)

    Guidon, Manuel; Hutter, Jürg; VandeVondele, Joost

    2010-08-10

    The calculation of Hartree-Fock exchange (HFX) is computationally demanding for large systems described with high-quality basis sets. In this work, we show that excellent performance and good accuracy can nevertheless be obtained if an auxiliary density matrix is employed for the HFX calculation. Several schemes to derive an auxiliary density matrix from a high-quality density matrix are discussed. Key to the accuracy of the auxiliary density matrix methods (ADMM) is the use of a correction based on standard generalized gradient approximations for HFX. ADMM integrates seamlessly in existing HFX codes and, in particular, can be employed in linear scaling implementations. Demonstrating the performance of the method, the effect of HFX on the structure of liquid water is investigated in detail using Born-Oppenheimer molecular dynamics simulations (300 ps) of a system of 64 molecules. Representative for large systems are calculations on a solvated protein (Rubredoxin), for which ADMM outperforms the corresponding standard HFX implementation by approximately a factor 20.

  16. Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality

    CERN Document Server

    Goldenshluger, Alexander

    2010-01-01

    We address the problem of density estimation with $L_p$-loss by selection of kernel estimators. We develop a selection procedure and derive corresponding $L_p$-risk oracle inequalities. It is shown that the proposed selection rule leads to a minimax estimator that is adaptive over a scale of anisotropic Nikol'skii classes. The main technical tools used in our derivations are uniform bounds on the $L_p$-norms of empirical processes developed recently in Goldenshluger and Lepski (2010).
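
    The Goldenshluger-Lepski-type selection rule of the paper is not reproduced here; as a simple illustration of data-driven bandwidth selection for kernel density estimation, a likelihood cross-validation sketch using scikit-learn (the mixture data and bandwidth grid are illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])[:, None]

# Pick the bandwidth maximizing the cross-validated log-likelihood.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.logspace(-2, 0, 25)}, cv=5)
grid.fit(x)
print("selected bandwidth:", grid.best_params_["bandwidth"])
```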

  17. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.

  18. ks: Kernel Density Estimation and Kernel Discriminant Analysis for Multivariate Data in R

    Directory of Open Access Journals (Sweden)

    Tarn Duong

    2007-09-01

    Full Text Available Kernel smoothing is one of the most widely used non-parametric data smoothing techniques. We introduce a new R package ks for multivariate kernel smoothing. Currently it contains functionality for kernel density estimation and kernel discriminant analysis. It is a comprehensive package for bandwidth matrix selection, implementing a wide range of data-driven diagonal and unconstrained bandwidth selectors.

  19. ASYMPTOTIC NORMALITY OF KERNEL ESTIMATES OF A DENSITY FUNCTION UNDER ASSOCIATION DEPENDENCE

    Institute of Scientific and Technical Information of China (English)

    林正炎

    2003-01-01

    Let {Xn, n ≥ 1} be a strictly stationary sequence of random variables which are either associated or negatively associated, and let f(·) be their common density. In this paper, the author shows a central limit theorem for a kernel estimate of f(·) under certain regularity conditions.

  20. Estimation of boiling points using density functional theory with polarized continuum model solvent corrections.

    Science.gov (United States)

    Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C

    2011-09-01

    An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.

  1. Topological Pressure and Coding Sequence Density Estimation in the Human Genome

    CERN Document Server

    Koslicki, David

    2011-01-01

    Inspired by concepts from ergodic theory, we give new insight into coding sequence (CDS) density estimation for the human genome. Our approach is based on the introduction and study of topological pressure: a numerical quantity assigned to any finite sequence based on an appropriate notion of `weighted information content'. For human DNA sequences, each codon is assigned a suitable weight, and using a window size of approximately 60,000bp, we obtain a very strong positive correlation between CDS density and topological pressure. The weights are selected by an optimization procedure, and can be interpreted as quantitative data on the relative importance of different codons for the density estimation of coding sequences. This gives new insight into codon usage bias which is an important subject where long standing questions remain open. Inspired again by ergodic theory, we use the weightings on the codons to define a probability measure on finite sequences. We demonstrate that this measure is effective in disti...

  2. Quantum Estimation Methods for Quantum Illumination.

    Science.gov (United States)

    Sanz, M; Las Heras, U; García-Ripoll, J J; Solano, E; Di Candia, R

    2017-02-17

    Quantum illumination consists in shining quantum light on a target region immersed in a bright thermal bath with the aim of detecting the presence of a possible low-reflective object. If the signal is entangled with the receiver, then a suitable choice of the measurement offers a gain with respect to the optimal classical protocol employing coherent states. Here, we tackle this detection problem by using quantum estimation techniques to measure the reflectivity parameter of the object, showing an enhancement in the signal-to-noise ratio up to 3 dB with respect to the classical case when implementing only local measurements. Our approach employs the quantum Fisher information to provide an upper bound for the error probability, supplies the concrete estimator saturating the bound, and extends the quantum illumination protocol to non-Gaussian states. As an example, we show how Schrödinger's cat states may be used for quantum illumination.

  3. Distributed Density Estimation Based on a Mixture of Factor Analyzers in a Sensor Network

    Directory of Open Access Journals (Sweden)

    Xin Wei

    2015-08-01

    Full Text Available Distributed density estimation in sensor networks has received much attention due to its broad applicability. When encountering high-dimensional observations, a mixture of factor analyzers (MFA) is taken to replace a mixture of Gaussians for describing the distributions of observations. In this paper, we study distributed density estimation based on a mixture of factor analyzers. Existing estimation algorithms for the MFA are for the centralized case, which are not suitable for distributed processing in sensor networks. We present distributed density estimation algorithms for the MFA and its extension, the mixture of Student’s t-factor analyzers (MtFA). We first define an objective function as the linear combination of local log-likelihoods. Then, we give the derivation of the distributed estimation algorithms for the MFA and MtFA in detail. In these algorithms, the local sufficient statistics (LSS) are calculated first and diffused. Then, each node performs a linear combination of the LSS received from nodes in its neighborhood to obtain the combined sufficient statistics (CSS). Parameters of the MFA and the MtFA can be obtained by using the CSS. Finally, we evaluate the performance of these algorithms by numerical simulations and an application example. Experimental results validate the promising performance of the proposed algorithms.
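
    A much-simplified sketch of the LSS-to-CSS idea: each node computes local sufficient statistics and combines those of its neighbours. A single Gaussian per node (instead of a mixture of factor analyzers) and uniform combination weights are simplifying assumptions made here, not choices from the paper:

```python
import numpy as np

def diffusion_gaussian_estimates(local_data, adjacency):
    """Each node combines its neighbours' local sufficient statistics (LSS)
    into combined sufficient statistics (CSS) and re-estimates mean/variance."""
    n_nodes = len(local_data)
    # LSS per node: (count, sum, sum of squares).
    lss = [(len(x), x.sum(), (x ** 2).sum()) for x in local_data]
    estimates = []
    for i in range(n_nodes):
        neighbours = [j for j in range(n_nodes) if adjacency[i][j]] + [i]
        w = 1.0 / len(neighbours)                # uniform combination weights
        n = sum(w * lss[j][0] for j in neighbours)
        s = sum(w * lss[j][1] for j in neighbours)
        s2 = sum(w * lss[j][2] for j in neighbours)
        mean = s / n
        estimates.append((mean, s2 / n - mean ** 2))
    return estimates

# Hypothetical 3-node network observing the same N(2, 1) source.
rng = np.random.default_rng(0)
data = [rng.normal(2.0, 1.0, size=200) for _ in range(3)]
print(diffusion_gaussian_estimates(data, [[0, 1, 0], [1, 0, 1], [0, 1, 0]]))
```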

  4. Plasma actuator electron density measurement using microwave perturbation method

    Energy Technology Data Exchange (ETDEWEB)

    Mirhosseini, Farid; Colpitts, Bruce [Electrical and Computer Engineering, University of New Brunswick, Fredericton, New Brunswick E3B 5A3 (Canada)

    2014-07-21

    A cylindrical dielectric barrier discharge plasma under five different pressures is generated in an evacuated glass tube. This plasma volume is located at the center of a rectangular copper waveguide cavity, where the electric field is maximum for the first mode and the magnetic field is very close to zero. The microwave perturbation method is used to measure electron density and plasma frequency for these five pressures. Simulations by a commercial microwave simulator are comparable to the experimental results.

  5. Methods for estimating production and utilization of paper birch saplings

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Development of technique to estimate browse production and utilization. Developed a set of methods for estimating annual production and utilization of paper birch...

  6. Enhancing Use Case Points Estimation Method Using Soft Computing Techniques

    OpenAIRE

    Nassif, Ali Bou; Capretz, Luiz Fernando; Ho, Danny

    2016-01-01

    Software estimation is a crucial task in software engineering. Software estimation encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle when the details of software have not been revealed yet. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of the important metric inputs and consequently,...

  7. Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao ... estimation. Moreover, simulations on real-life data indicate that the NLS and aNLS methods are applicable even when reverberation is present and the noise is not white Gaussian....

  8. A fusion method for estimate of trajectory

    Institute of Scientific and Technical Information of China (English)

    吴翊; 朱炬波

    1999-01-01

    The multiple-station method is important in missile and space tracking systems. A fusion method is presented. Based on the theory of multiple-station tracking, and starting from an investigation of the location precision achievable by a single station, a recognition model for occasion system error is constructed, and a principle for preventing pollution by occasion system error is presented. Theoretical analysis and simulation results show that the proposed method is correct.

  9. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.

    2012-09-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation reduce to linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time, given incomplete and inaccurate traffic data, as a Mixed Integer Program. Finally, we present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.

  10. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  11. Probability Density Estimation for Non-flat Functions

    Institute of Scientific and Technical Information of China (English)

    汪洪桥; 蔡艳宁; 付光远; 王仕成

    2016-01-01

    Aiming at the probability density estimation problem for non-flat functions, this paper constructs a single slack factor multi-scale kernel support vector machine (SVM) probability density estimation model by improving the form of the constraint condition of the traditional SVM model and introducing the multi-scale kernel method. In the model, a single slack factor instead of two types of slack factors is used to control the learning error of the SVM, which reduces the computational complexity of the model. At the same time, by introducing the multi-scale kernel method, the model can fit well both regions where the function changes fiercely and regions where it changes flatly. Probability density estimation experiments with several typical non-flat functions show that the single slack factor model has a faster learning speed than the common SVM probability density estimation model, and, compared with the single kernel method, the multi-scale kernel SVM probability density estimation model has better estimation precision.

  12. Mammographic density and estimation of breast cancer risk in intermediate risk population.

    Science.gov (United States)

    Tesic, Vanja; Kolaric, Branko; Znaor, Ariana; Kuna, Sanja Kusacic; Brkljacic, Boris

    2013-01-01

    It is not clear to what extent mammographic density represents a risk factor for breast cancer among women with moderate risk for disease. We conducted a population-based study to estimate the independent effect of breast density on breast cancer risk and to evaluate the potential of breast density as a marker of risk in an intermediate risk population. From November 2006 to April 2009, data that included American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) breast density categories and risk information were collected on 52,752 women aged 50-69 years without previously diagnosed breast cancer who underwent screening mammography examination. A total of 257 screen-detected breast cancers were identified. Logistic regression was used to assess the effect of breast density on breast carcinoma risk and to control for other risk factors. The risk increased with density and the odds ratio for breast cancer among women with dense breasts (heterogeneously and extremely dense breast) was 1.9 (95% confidence interval, 1.3-2.8) compared with women with almost entirely fatty breasts, after adjustment for age, body mass index, age at menarche, age at menopause, age at first childbirth, number of live births, use of oral contraceptives, family history of breast cancer, prior breast procedures, and hormone replacement therapy use, all of which were significantly related to breast density; breast density decreased with the number of live births. Our finding that mammographic density is an independent risk factor for breast cancer indicates the importance of breast density measurements for breast cancer risk assessment also in moderate risk populations. © 2012 Wiley Periodicals, Inc.

  13. Estimation of energy density of Li-S batteries with liquid and solid electrolytes

    Science.gov (United States)

    Li, Chunmei; Zhang, Heng; Otaegui, Laida; Singh, Gurpreet; Armand, Michel; Rodriguez-Martinez, Lide M.

    2016-09-01

    With the exponential growth of technology in mobile devices and the rapid expansion of electric vehicles into the market, it appears that the energy density of the state-of-the-art Li-ion batteries (LIBs) cannot satisfy the practical requirements. Sulfur has been one of the best cathode material choices due to its high charge storage (1675 mAh g-1), natural abundance and easy accessibility. In this paper, calculations are performed for different cell design parameters such as the active material loading, the amount/thickness of electrolyte, the sulfur utilization, etc. to predict the energy density of Li-S cells based on liquid, polymeric and ceramic electrolytes. It demonstrates that Li-S battery is most likely to be competitive in gravimetric energy density, but not volumetric energy density, with current technology, when comparing with LIBs. Furthermore, the cells with polymer and thin ceramic electrolytes show promising potential in terms of high gravimetric energy density, especially the cells with the polymer electrolyte. This estimation study of Li-S energy density can be used as a good guidance for controlling the key design parameters in order to get desirable energy density at cell-level.
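
    A back-of-envelope sketch of the kind of cell-level calculation described; the function, its parameters and the example values below are hypothetical and much coarser than the paper's design-parameter study:

```python
F = 96485.0                          # Faraday constant, C/mol
M_S = 32.06e-3                       # molar mass of sulfur, kg/mol
SPEC_CAP_S = 2 * F / M_S / 3600      # ~1672 Ah/kg (~1675 mAh/g) theoretical capacity

def cell_energy_density(sulfur_loading_g_cm2, utilization, voltage,
                        other_mass_g_cm2, cathode_thickness_cm, other_thickness_cm):
    """Gravimetric (Wh/kg) and volumetric (Wh/L) energy density per unit cell area."""
    capacity_Ah_cm2 = SPEC_CAP_S * 1e-3 * sulfur_loading_g_cm2 * utilization
    energy_Wh_cm2 = capacity_Ah_cm2 * voltage
    mass_kg_cm2 = (sulfur_loading_g_cm2 + other_mass_g_cm2) * 1e-3
    volume_L_cm2 = (cathode_thickness_cm + other_thickness_cm) * 1e-3
    return energy_Wh_cm2 / mass_kg_cm2, energy_Wh_cm2 / volume_L_cm2

# 4 mg/cm2 sulfur, 70% utilization, 2.1 V, 10 mg/cm2 of anode/electrolyte/foils,
# 60 um cathode and 80 um of other layers -- all values purely illustrative.
grav, vol = cell_energy_density(0.004, 0.7, 2.1, 0.010, 0.006, 0.008)
print(f"{grav:.0f} Wh/kg, {vol:.0f} Wh/L")
```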

  14. PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...

    African Journals Online (AJOL)

    2014-12-31

    Dec 31, 2014 ... The analysis revealed that the MLM was the most accurate model ... obtained using the empirical method, as the same formula is used.

  15. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is decision making about selecting an appropriate stock exchange for investing and selecting an optimal portfolio. This process is done through assessment of risk and expected return. On the other hand, in the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from a normal distribution. By introducing conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, this paper offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of the monthly returns of 15 companies selected from the top 50 companies on the Tehran Stock Exchange during the winter of 1392 (Iranian calendar), with returns considered from April 1388 to June 1393. The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
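
    A rough sketch of a historical (nonparametric) CVaR criterion with a crude random search over long-only portfolios; the synthetic return matrix, the target mean return and the search scheme are illustrative stand-ins for the Tehran Stock Exchange data and the paper's optimization:

```python
import numpy as np

def cvar(returns, weights, alpha=0.95):
    """Historical (nonparametric) CVaR of portfolio losses at level alpha."""
    losses = -(returns @ weights)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

# Synthetic stand-in for 60 months of returns on 15 stocks.
rng = np.random.default_rng(0)
R = rng.normal(0.01, 0.05, size=(60, 15))

best_w, best_cvar = None, np.inf
for _ in range(20000):                           # crude random search, long-only
    w = rng.dirichlet(np.ones(15))
    c = cvar(R, w)
    if (R @ w).mean() >= 0.01 and c < best_cvar: # hypothetical target mean return
        best_w, best_cvar = w, c
print("best CVaR found:", round(best_cvar, 4))
```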

  16. A new algebraic method for quantitative proton density mapping using multi-channel coil data.

    Science.gov (United States)

    Cordes, Dietmar; Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Hua, Le H

    2017-08-01

    A difficult problem in quantitative MRI is the accurate determination of the proton density, which is an important quantity in measuring brain tissue organization. Recent progress in estimating proton density in vivo has been based on using the inverse linear relationship between the longitudinal relaxation rate T1 and proton density. In this study, the same type of relationship is being used, however, in a more general framework by constructing 3D basis functions to model the receiver bias field. The novelty of this method is that the basis functions developed are suitable to cover an entire range of inverse linearities between T1 and proton density. The method is applied by parcellating the human brain into small cubes with size 30mm x 30mm x 30mm. In each cube the optimal set of basis functions is determined to model the receiver coil sensitivities using multi-channel (32 element) coil data. For validation, we use arbitrary data from a numerical phantom where the data satisfy the conventional MR signal equations. Using added noise of different magnitude and realizations, we show that the proton densities obtained have a bias close to zero and also low noise sensitivity. The obtained root-mean-square-error rate is less than 0.2% for the estimated proton density in a realistic 3D simulation. As an application, the method is used in a small cohort of MS patients, and proton density values for specific brain structures are determined. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Advancing Methods for Estimating Cropland Area

    Science.gov (United States)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop specific agricultural area estimation. Comparison of our 30 m² Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m² imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1 to 1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype and has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States with great potential for application elsewhere.

  18. Joint estimation of crown of thorns (Acanthaster planci densities on the Great Barrier Reef

    Directory of Open Access Journals (Sweden)

    M. Aaron MacNeil

    2016-08-01

    Full Text Available Crown-of-thorns starfish (CoTS; Acanthaster spp.) are an outbreaking pest among many Indo-Pacific coral reefs that cause substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and [95% uncertainty intervals]), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014.

  19. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Burke, TImothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
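
    A toy illustration of a kernel-density tally in which one sampled collision contributes to many tally points at once; this uses a fixed-bandwidth Gaussian kernel, not the mean-free-path-based kernel introduced in the paper, and the normalization is a simplifying assumption:

```python
import numpy as np

def kde_tally(collision_sites, weights, tally_points, bandwidth):
    """Score every collision at every tally point with a 2-D Gaussian kernel."""
    d2 = ((tally_points[:, None, :] - collision_sites[None, :, :]) ** 2).sum(axis=-1)
    kernel = np.exp(-d2 / (2 * bandwidth ** 2)) / (2 * np.pi * bandwidth ** 2)
    return (kernel * weights[None, :]).sum(axis=1) / weights.sum()

# Hypothetical example: 10,000 sampled collisions scored on a 50 x 50 grid.
rng = np.random.default_rng(0)
sites = rng.uniform(0, 10, size=(10_000, 2))
w = np.ones(len(sites))
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50)), -1).reshape(-1, 2)
print(kde_tally(sites, w, grid, bandwidth=0.5).shape)    # (2500,)
```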

  20. Density estimation in a wolverine population using spatial capture-recapture models

    Science.gov (United States)

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed, capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000-km2(95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

  1. Joint estimation of crown of thorns (Acanthaster planci) densities on the Great Barrier Reef.

    Science.gov (United States)

    MacNeil, M Aaron; Mellin, Camille; Pratchett, Morgan S; Hoey, Jessica; Anthony, Kenneth R N; Cheal, Alistair J; Miller, Ian; Sweatman, Hugh; Cowan, Zara L; Taylor, Sascha; Moon, Steven; Fonnesbeck, Chris J

    2016-01-01

    Crown-of-thorns starfish (CoTS; Acanthaster spp.) are an outbreaking pest among many Indo-Pacific coral reefs that cause substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and [95% uncertainty intervals]), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014.

  2. Polynomial probability distribution estimation using the method of moments.

    Science.gov (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
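
    A sketch of the moment-matching linear system on an interval [a, b]; the Beta-distributed sample and the use of raw moments (with the zeroth moment fixed at 1) are illustrative assumptions:

```python
import numpy as np

def polynomial_pdf_from_moments(moments, a, b):
    """Coefficients c_j of p(x) = sum_j c_j x^j on [a, b] whose raw moments
    match the supplied ones (the zeroth moment should be 1)."""
    n = len(moments)
    A = np.empty((n, n))
    for k in range(n):                # moment order
        for j in range(n):            # polynomial power
            A[k, j] = (b ** (k + j + 1) - a ** (k + j + 1)) / (k + j + 1)
    return np.linalg.solve(A, np.asarray(moments, dtype=float))

# Illustrative example: quadratic approximation to a Beta(2, 5) density on [0, 1].
rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=5000)
coeffs = polynomial_pdf_from_moments([1.0, x.mean(), (x ** 2).mean()], 0.0, 1.0)
print(coeffs)
```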

  3. Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites

    Science.gov (United States)

    Mitchard, Edward T A; Feldpausch, Ted R; Brienen, Roel J W; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R; Lewis, Simon L; Lloyd, Jon; Quesada, Carlos A; Gloor, Manuel; ter Steege, Hans; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragão, Luiz E O C; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I; Cerón, Carlos E; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R C; Di Fiore, Anthony; Domingues, Tomas F; Erwin, Terry L; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N; Killeen, Tim J; Laurance, William F; Levis, Carolina; Magnusson, William E; Marimon, Beatriz S; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T; Neill, David; Núñez Vargas, Mario P; Palacios, Walter A; Parada, Alexander; Pardo Molina, Guido; Peña-Claros, Marielos; Pitman, Nigel; Peres, Carlos A; Poorter, Lourens; Prieto, Adriana; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H; Rudas, Agustin; Salomão, Rafael P; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F; Steininger, Marc K; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R; van der Heijden, Geertje M F; Vieira, Ima C G; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A; Wang, Ophelia; Zartman, Charles E; Malhi, Yadvinder; Phillips, Oliver L

    2014-01-01

    Aim The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass directly, and thus may miss significant spatial variations in forest structure. We test the stated accuracy of pantropical carbon maps using a large independent field dataset. Location Tropical forests of the Amazon basin. The permanent archive of the field plot data can be accessed at: http://dx.doi.org/10.5521/FORESTPLOTS.NET/2014_1 Methods Two recent pantropical RS maps of vegetation carbon are compared to a unique ground-plot dataset, involving tree measurements in 413 large inventory plots located in nine countries. The RS maps were compared directly to field plots, and kriging of the field data was used to allow area-based comparisons. Results The two RS carbon maps fail to capture the main gradient in Amazon forest carbon detected using 413 ground plots, from the densely wooded tall forests of the north-east, to the light-wooded, shorter forests of the south-west. The differences between plots and RS maps far exceed the uncertainties given in these studies, with whole regions over- or under-estimated by > 25%, whereas regional uncertainties for the maps were reported to be < 5%. Main conclusions Pantropical biomass maps are widely used by governments and by projects aiming to reduce deforestation using carbon offsets, but may have significant regional biases. Carbon-mapping techniques must be revised to account for the known ecological variation in tree wood density and allometry to create maps suitable for carbon accounting. The use of single relationships between tree canopy height and above-ground biomass inevitably yields large, spatially correlated errors. This presents a significant challenge to both the forest conservation and remote sensing communities

  4. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

  5. Prediction Method of Safety Mud Density in Depleted Oilfields

    Directory of Open Access Journals (Sweden)

    Yuan Jun-Liang

    2013-04-01

    Full Text Available At present, many oilfields are in the middle and late development period and reservoir pressure is usually depleted, resulting in more serious differential pressure sticking and drilling mud leakage in both the reservoir and the cap rock. In view of this situation, a systematic prediction method for the safety mud density in depleted oilfields was established. The influence of reservoir depletion on stress and strength in the reservoir and cap formations was studied and taken into account in the prediction of the safety mud density. The research showed that the risks of differential pressure sticking and drilling mud leakage in the reservoir and cap formations both increase, and these are the main problems to prevent when drilling in depleted oilfields. The research results were used to guide practical drilling work, and the whole process went smoothly.

  6. Application of singular perturbation method in analyzing traffic density waves

    Institute of Scientific and Technical Information of China (English)

    SHEN Fei-ying; GE Hong-xia; LEI Li

    2009-01-01

    The car-following model is one of the microscopic models for describing traffic flow. Through linear stability analysis, the neutral stability lines and the critical points are obtained for different types of car-following models and two modified models. The singular perturbation method has been used to derive various nonlinear wave equations, such as the Korteweg-de Vries (KdV) equation and the modified Korteweg-de Vries (mKdV) equation, which can describe different density waves occurring in traffic flows under certain conditions. These density waves are mainly employed to depict the formation of traffic jams in congested traffic flow. The general soliton solutions are given for the different types of car-following models, and the results have been applied efficiently to the modified models.

  7. Linear density response function in the projector augmented wave method

    DEFF Research Database (Denmark)

    Yan, Jun; Mortensen, Jens Jørgen; Jacobsen, Karsten Wedel;

    2011-01-01

    We present an implementation of the linear density response function within the projector-augmented wave method with applications to the linear optical and dielectric properties of both solids, surfaces, and interfaces. The response function is represented in plane waves while the single-particle eigenstates can be expanded on a real space grid or in atomic-orbital basis for increased efficiency. The exchange-correlation kernel is treated at the level of the adiabatic local density approximation (ALDA) and crystal local field effects are included. The calculated static and dynamical dielectric functions of Si, C, SiC, AlP, and GaAs compare well with previous calculations. While optical properties of semiconductors, in particular excitonic effects, are generally not well described by ALDA, we obtain excellent agreement with experiments for the surface loss function of graphene and the Mg(0001

  8. Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.

    Science.gov (United States)

    Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A

    2009-06-01

    We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0) where φ_0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.

  9. Thermodynamic properties of organic compounds estimation methods, principles and practice

    CERN Document Server

    Janz, George J

    1967-01-01

    Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the

  10. The density matrix method in photonic bandgap and antiferromagnetic materials

    Science.gov (United States)

    Barrie, Scott B.

    In this thesis, a theory for dispersive polaritonic bandgap (DPBG) and photonic bandgap (PBG) materials is developed. An ensemble of multi-level nanoparticles, such as non-interacting two-, three- and four-level atoms doped in DPBG and PBG materials, is considered. The optical properties of these materials such as spontaneous emission, line broadening, fluorescence and narrowing of the natural linewidth have been studied using the density matrix method. Numerical simulations for these properties have been performed for the DPBG materials SiC and InAs, and for a PBG material with a 20 percent gap-to-midgap ratio. When a three-level nanoparticle is doped into a DPBG material, it is predicted that one or two bound states exist when one or both resonance energies, respectively, lie in the bandgap. It is shown that when a resonance energy lies below the bandgap, its spectral density peak weakens and broadens as the resonance energy increases to the lower band edge. For the first time it is predicted that when a nanoparticle's resonance energy lies above the bandgap, its spectral density peak weakens and broadens as the resonance energy increases. A relation is also found between spectral structure and gap-to-midgap ratios. The dressed states of a two-level atom doped into a DPBG material under the influence of an intense monochromatic laser field are examined. The splitting of the dressed state energies is calculated, and it is predicted that the splitting depends on the polariton density of states and the Rabi frequency of the laser field. The fluorescence is also examined, and for the first time two distinct control processes are found for the transition from one peak to three peaks. It was previously known that the Rabi frequency controlled the Stark effect, but this thesis predicts that the location of the peak with respect to the optical bandgap can cause a transition from one to three peaks even with a weak Rabi frequency. The transient linewidth narrowing of PBG crystal

  11. Recursive Density Estimation of NA Samples

    Institute of Scientific and Technical Information of China (English)

    张冬霞; 梁汉营

    2008-01-01

    Let {Xn, n ≥ 1} be a strictly stationary sequence of negatively associated random variables with marginal probability density function f(x). In this paper, we discuss the pointwise asymptotic normality of the recursive kernel density estimator of f(x).

  12. Analysis and estimation of risk management methods

    Directory of Open Access Journals (Sweden)

    Kankhva Vadim Sergeevich

    2016-05-01

    Full Text Available At present, risk management is an integral part of state policy in all countries with a developed market economy. Companies dealing with consulting services and the implementation of risk management systems are carving out a niche. Unfortunately, conscious preventive risk management in Russia is still far from being a standardized process in construction company activity, which often leads to scandals and disapproval when projects are implemented unprofessionally. The authors present the results of an investigation of the modern understanding of existing methodology classifications and offer their own concept of a classification matrix of risk management methods. The matrix is constructed based on analysis of each method in the context of incoming and outgoing transformed information, which may include different elements of the risk control stages. The proposed approach thus allows the possibilities of each method to be analyzed.

  13. Optical method of atomic ordering estimation

    Energy Technology Data Exchange (ETDEWEB)

    Prutskij, T. [Instituto de Ciencias, BUAP, Privada 17 Norte, No 3417, col. San Miguel Huyeotlipan, Puebla, Pue. (Mexico); Attolini, G. [IMEM/CNR, Parco Area delle Scienze 37/A - 43010, Parma (Italy); Lantratov, V.; Kalyuzhnyy, N. [Ioffe Physico-Technical Institute, 26 Polytekhnicheskaya, St Petersburg 194021, Russian Federation (Russian Federation)

    2013-12-04

    It is well known that within metal-organic vapor-phase epitaxy (MOVPE) grown semiconductor III-V ternary alloys atomically ordered regions are spontaneously formed during the epitaxial growth. This ordering leads to bandgap reduction and to valence bands splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs within quaternary semiconductor alloys. While the ordering in ternary alloys is widely studied, for quaternaries there have been only a few detailed experimental studies of it, probably because of the absence of appropriate methods of its detection. Here we propose an optical method to reveal atomic ordering within quaternary alloys by measuring the PL emission polarization.

  14. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    Science.gov (United States)

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.
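
    The two mosaic metrics discussed above can be illustrated with a short sketch: cone packing density inside a square sampling window and the share of six-sided Voronoi tiles. This is not the authors' analysis pipeline; the cone coordinates, window size and the treatment of boundary tiles below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi

def cone_density(coords_um, window_um=64.0):
    """Cones per mm^2 inside a centred window_um x window_um sampling window."""
    half = window_um / 2.0
    inside = np.all(np.abs(coords_um) <= half, axis=1)
    area_mm2 = (window_um * 1e-3) ** 2
    return inside.sum() / area_mm2

def hexagonal_fraction(coords_um):
    """Fraction of bounded Voronoi tiles with exactly six sides (six neighbours)."""
    vor = Voronoi(coords_um)
    sides = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:      # skip unbounded boundary tiles
            continue
        sides.append(len(region))
    sides = np.asarray(sides)
    return (sides == 6).mean() if sides.size else np.nan

# synthetic "cone centres" in micrometres, centred on the sampling window
rng = np.random.default_rng(0)
pts = rng.uniform(-40.0, 40.0, size=(400, 2))
print(cone_density(pts, 64.0), hexagonal_fraction(pts))
```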

  15. Autoregressive Methods for Spectral Estimation from Interferograms.

    Science.gov (United States)

    1986-09-19

    Forman/Steele/Vanasse [12] phase filter approach, which approximately removes the linear phase distortion introduced into the interferogram by retardation...band interferogram for the spectrum to be analyzed. The symmetrizing algorithm, based on the Forman/Steele/Vanasse method [12], computes a phase filter from

  16. Novel method for quantitative estimation of biofilms

    DEFF Research Database (Denmark)

    Syal, Kirtimaan

    2017-01-01

    Biofilm protects bacteria from stress and hostile environment. Crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, biofilm layer formed at the liquid-air interphase known as pellicle is extremely sensitive to its washin...

  17. System and method for correcting attitude estimation

    Science.gov (United States)

    Josselson, Robert H. (Inventor)

    2010-01-01

    A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.

  18. Modified periodogram method for estimating the Hurst exponent of fractional Gaussian noise.

    Science.gov (United States)

    Liu, Yingjun; Liu, Yong; Wang, Kun; Jiang, Tianzi; Yang, Lihua

    2009-12-01

    Fractional Gaussian noise (fGn) is an important and widely used self-similar process, which is mainly parametrized by its Hurst exponent (H) . Many researchers have proposed methods for estimating the Hurst exponent of fGn. In this paper we put forward a modified periodogram method for estimating the Hurst exponent based on a refined approximation of the spectral density function. Generalizing the spectral exponent from a linear function to a piecewise polynomial, we obtained a closer approximation of the fGn's spectral density function. This procedure is significant because it reduced the bias in the estimation of H . Furthermore, the averaging technique that we used markedly reduced the variance of estimates. We also considered the asymptotical unbiasedness of the method and derived the upper bound of its variance and confidence interval. Monte Carlo simulations showed that the proposed estimator was superior to a wavelet maximum likelihood estimator in terms of mean-squared error and was comparable to Whittle's estimator. In addition, a real data set of Nile river minima was employed to evaluate the efficiency of our proposed method. These tests confirmed that our proposed method was computationally simpler and faster than Whittle's estimator.
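
    For orientation, a minimal sketch of the basic (unmodified) periodogram estimator of H is given below; the paper's refinement replaces the linear log-spectrum model with a piecewise polynomial and adds averaging, neither of which is reproduced here. Near zero frequency the fGn spectrum behaves approximately like S(f) ∝ f^(1-2H), so the slope b of a log-log regression of the periodogram on the lowest frequencies gives H ≈ (1 - b)/2.

```python
import numpy as np

def hurst_periodogram(x, frac=0.1):
    """Crude periodogram estimate of the Hurst exponent of an fGn sample."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    freqs = np.fft.rfftfreq(n)[1:]                  # drop the zero frequency
    pgram = (np.abs(np.fft.rfft(x))[1:] ** 2) / n   # raw periodogram
    m = max(int(frac * freqs.size), 2)              # low-frequency band only
    slope, _ = np.polyfit(np.log(freqs[:m]), np.log(pgram[:m]), 1)
    return (1.0 - slope) / 2.0

# e.g. for white noise (H = 0.5) the estimate should be close to 0.5
print(hurst_periodogram(np.random.default_rng(1).standard_normal(4096)))
```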

  20. Combining Breeding Bird Survey and distance sampling to estimate density of migrant and breeding birds

    Science.gov (United States)

    Somershoe, S.G.; Twedt, D.J.; Reid, B.

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.
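
    The point-transect density calculation that program DISTANCE performs can be sketched with the standard formula D = n / (k·π·w²·P), where n is the number of detections over k point counts, w the truncation radius and P the model-estimated average detection probability within w. The numbers below are hypothetical, not values from the survey.

```python
import math

n_detections = 120     # assumed detections of one species across all counts
k_points = 660         # 3-min point counts (from the abstract)
w_m = 100.0            # assumed truncation radius in metres
p_detect = 0.45        # assumed average detection probability within w

area_per_point_ha = math.pi * w_m ** 2 / 10_000.0            # hectares surveyed per point
density_per_ha = n_detections / (k_points * area_per_point_ha * p_detect)
print(f"{density_per_ha:.3f} birds per hectare")
```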

  1. Control and estimation methods over communication networks

    CERN Document Server

    Mahmoud, Magdi S

    2014-01-01

    This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: an overall assessment of recent and current fault-tolerant control algorithms; treatment of several issues arising at the junction of control and communications; key concepts followed by their proofs and efficient computational methods for their implementation; and simulation examples (including TrueTime simulations) to...

  2. Methods and systems for rapid prototyping of high density circuits

    Science.gov (United States)

    Palmer, Jeremy A.; Davis, Donald W.; Chavez, Bart D.; Gallegos, Phillip L.; Wicker, Ryan B.; Medina, Francisco R.

    2008-09-02

    A preferred embodiment provides, for example, a system and method of integrating fluid media dispensing technology such as direct-write (DW) technologies with rapid prototyping (RP) technologies such as stereolithography (SL) to provide increased micro-fabrication and micro-stereolithography. A preferred embodiment of the present invention also provides, for example, a system and method for Rapid Prototyping High Density Circuit (RPHDC) manufacturing of solderless connectors and pilot devices with terminal geometries that are compatible with DW mechanisms and reduce contact resistance where the electrical system is encapsulated within structural members and manual electrical connections are eliminated in favor of automated DW traces. A preferred embodiment further provides, for example, a method of rapid prototyping comprising: fabricating a part layer using stereolithography and depositing thermally curable media onto the part layer using a fluid dispensing apparatus.

  3. Bayesian methods to estimate urban growth potential

    Science.gov (United States)

    Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.

    2017-01-01

    Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.

  4. Prediction of soil organic carbon concentration and soil bulk density of mineral soils for soil organic carbon stock estimation

    Science.gov (United States)

    Putku, Elsa; Astover, Alar; Ritz, Christian

    2016-04-01

    Soil monitoring networks provide a powerful base for estimating and predicting a nation's soil status in many respects. The datasets of soil monitoring are often hierarchically structured, demanding sophisticated data analysis methods. The National Soil Monitoring of Estonia was based on a hierarchical sampling scheme, as each monitoring site was divided into four transects with 10 sampling points on each transect. We hypothesized that the hierarchical structure in the Estonian Soil Monitoring network data requires a multi-level mixed model approach to achieve good prediction accuracy of soil properties. We used this database to predict soil bulk density and soil organic carbon concentration of mineral soils in arable land using different statistical methods: a median approach, linear regression and a mixed model, with random forests additionally used for SOC concentration. We compared the prediction results and selected the model with the best prediction accuracy to estimate soil organic carbon stock. The mixed model approach achieved the best prediction accuracy for both soil organic carbon (RMSE 0.22%) and bulk density (RMSE 0.09 g cm-3). The other considered methods under- or overestimated the higher and lower values of the soil parameters. Using these predictions, we calculated the soil organic carbon stock of mineral arable soils and applied the model to the specific case of Tartu County in Estonia. The average estimated SOC stock of Tartu County is 54.8 t C ha-1 and the total topsoil SOC stock in the humus horizon is 1.8 Tg.

  5. Estimating risk aversion, Risk-Neutral and Real-World Densities using Brazilian Real currency options

    Directory of Open Access Journals (Sweden)

    José Fajardo

    2012-12-01

    Full Text Available This paper uses the Liu et al. (2007) approach to estimate the option-implied Risk-Neutral Densities (RND), real-world density (RWD), and relative risk aversion from the Brazilian Real/US Dollar exchange rate distribution. Our empirical application uses a sample of exchange-traded Brazilian Real currency options from 1999 to 2011. Our estimated value of the relative risk aversion is around 2.7, which is in line with other articles for the Brazilian economy. Our out-of-sample results showed that the RND has some ability to forecast the Brazilian Real exchange rate, but when we incorporate the risk aversion, the out-of-sample performance improves substantially.

  6. Method for providing a low density high strength polyurethane foam

    Science.gov (United States)

    Whinnery, Jr., Leroy L.; Goods, Steven H.; Skala, Dawn M.; Henderson, Craig C.; Keifer, Patrick N.

    2013-06-18

    Disclosed is a method for making a polyurethane closed-cell foam material exhibiting a bulk density below 4 lbs/ft³ and high strength. The present embodiment uses the reaction product of a modified MDI and a sucrose/glycerine based polyether polyol resin wherein a small measured quantity of the polyol resin is "pre-reacted" with a larger quantity of the isocyanate in a defined ratio such that when the necessary remaining quantity of the polyol resin is added to the "pre-reacted" resin together with a tertiary amine catalyst and water as a blowing agent, the polymerization proceeds slowly enough to provide a stable foam body.

  7. Augmented Lagrangian Method for Constrained Nuclear Density Functional Theory

    CERN Document Server

    Staszczak, A; Baran, A; Nazarewicz, W

    2010-01-01

    The augmented Lagrangian method (ALM), widely used in quantum chemistry constrained optimization problems, is applied in the context of the nuclear Density Functional Theory (DFT) in the self-consistent constrained Skyrme Hartree-Fock-Bogoliubov (CHFB) variant. The ALM allows precise calculations of multidimensional energy surfaces in the space of collective coordinates that are needed to, e.g., determine fission pathways and saddle points; it improves the accuracy of computed derivatives with respect to collective variables that are used to determine collective inertia; and it is well adapted to supercomputer applications.

  8. METHOD ON ESTIMATION OF DRUG'S PENETRATED PARAMETERS

    Institute of Scientific and Technical Information of China (English)

    刘宇红; 曾衍钧; 许景锋; 张梅

    2004-01-01

    Transdermal drug delivery systems (TDDS) are a new method of drug delivery. Analysis of a large number of in vitro experiments can lead to a suitable mathematical model for describing the process of a drug's penetration through the skin, together with the important parameters that are related to the characteristics of the drug. After analysing the experimental data, a suitable nonlinear regression model was selected. Using this model, the most important parameter, the penetration coefficient, was computed for 20 drugs. The results support the theory that the skin can be regarded as a single membrane.

  9. On the rate of convergence of the maximum likelihood estimator of a k-monotone density

    Institute of Scientific and Technical Information of China (English)

    GAO FuChang; WELLNER Jon A

    2009-01-01

    Bounds for the bracketing entropy of the classes of bounded k-monotone functions on [0,A] are obtained under both the Hellinger distance and the Lp(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0,A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a k-monotone density.

  10. On the rate of convergence of the maximum likelihood estimator of a K-monotone density

    Institute of Scientific and Technical Information of China (English)

    GAO FuChang; WELLNER Jon A

    2009-01-01

    Bounds for the bracketing entropy of the classes of bounded K-monotone functions on [0, A] are obtained under both the Hellinger distance and the Lp(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a K-monotone density.

  11. Cheap DECAF: Density Estimation for Cetaceans from Acoustic Fixed Sensors Using Separate, Non-Linked Devices

    Science.gov (United States)

    2014-06-29

    Centro de Geofísica, Universidade de Lisboa, Lisbon, Portugal. Award Number: N00014-11-1-0615. This project was a collaborative project between...submitted or in prep) from the University of St Andrews (UStA) and Universidade de Lisboa (UL) research effort. The work has also generated multiple...routines. Task 1.4. Use distance sampling software, Distance (Thomas et al. 2010), to estimate seasonal density, incorporating covariates affecting

  12. Effect of Broadband Nature of Marine Mammal Echolocation Clicks on Click-Based Population Density Estimates

    Science.gov (United States)

    2015-09-30

    1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Effect of Broadband Nature of Marine Mammal Echolocation...modeled for different marine mammal species and detectors and assess the magnitude of error on the estimated density due to various commonly used...noise limited (von Benda-Beckmann et al. 2010). A three hour segment, previously audited by human operators to ensure no marine mammals were present in

  13. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfit...

  14. Understanding Rasch measurement: estimation methods for Rasch measures.

    Science.gov (United States)

    Linacre, J M

    1999-01-01

    Rasch parameter estimation methods can be classified as non-iterative and iterative. Non-iterative methods include the normal approximation algorithm (PROX) for complete dichotomous data. Iterative methods fall into 3 types. Datum-by-datum methods include Gaussian least-squares, minimum chi-square, and the pairwise (PAIR) method. Marginal methods without distributional assumptions include conditional maximum-likelihood estimation (CMLE), joint maximum-likelihood estimation (JMLE) and log-linear approaches. Marginal methods with distributional assumptions include marginal maximum-likelihood estimation (MMLE) and the normal approximation algorithm (PROX) for missing data. Estimates from all methods are characterized by standard errors and quality-control fit statistics. Standard errors can be local (defined relative to the measure of a particular item) or general (defined relative to the abstract origin of the scale). They can also be ideal (as though the data fit the model) or inflated by the misfit to the model present in the data. Five computer programs, implementing different estimation methods, produce statistically equivalent estimates. Nevertheless, comparing estimates from different programs requires care.

  15. Robust time and frequency domain estimation methods in adaptive control

    Science.gov (United States)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  16. A NOVEL METHOD FOR ESTIMATING SOIL PRECOMPRESSION STRESS FROM UNIAXIAL CONFINED COMPRESSION TESTS

    DEFF Research Database (Denmark)

    Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo

    2017-01-01

    The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shaped stress-strain curve, which is not always fulfilled. A new, simple numerical method was developed to estimate precompression stress from stress-strain curves, based solely on the sharp bend on the stress-strain curve partitioning the curve into an elastic and a plastic section. Our study had three objectives: (i) assessing the utility of the numerical method by comparison with the Gompertz method; (ii) comparing the estimated precompression stress to the maximum preload of test samples; (iii) determining the influence that soil type, bulk density and soil water potential have on the estimated precompression stress...
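
    A minimal numerical sketch of the bend-finding idea is given below; it is not the authors' algorithm. It assumes the "sharp bend" can be located as the point of maximum curvature of the strain versus log10(stress) curve, and the synthetic curve and units are placeholders.

```python
import numpy as np

def precompression_stress(stress_kpa, strain):
    """Stress at the sharpest bend of the strain vs log10(stress) curve."""
    s = np.asarray(stress_kpa, dtype=float)
    e = np.asarray(strain, dtype=float)
    x = np.log10(s)
    dy = np.gradient(e, x)
    d2y = np.gradient(dy, x)
    curvature = np.abs(d2y) / (1.0 + dy ** 2) ** 1.5
    return s[np.argmax(curvature)]

# synthetic curve: nearly flat elastic branch, then a steeper plastic branch
stress = np.logspace(1, 3, 60)                                   # 10 to 1000 kPa
strain = np.where(stress < 150.0,
                  0.0002 * stress,
                  0.03 + 0.12 * np.log10(stress / 150.0))
print(precompression_stress(stress, strain))                     # ~150 kPa for this toy curve
```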

  17. Examining the impact of the precision of address geocoding on estimated density of crime locations

    Science.gov (United States)

    Harada, Yutaka; Shimada, Takahito

    2006-10-01

    This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during the nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets are derived from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than in previous years could be recorded. Each of these data sets was address-geocoded onto a large-scale digital map, using our hierarchical address-geocoding schema, and we examined how such differences in the precision of address information, and the resulting differences in geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of incidences of residential burglary (each pair consists of the same incident geocoded using older and newer address information, respectively) indicates that kernel density estimation with a cell size of 25×25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of geocoded locations based on data from the older recording system, whereas in several areas where the older recording system resulted in very poor precision, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.
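
    A minimal sketch of the kernel density surface discussed above, assuming projected coordinates in metres, a Gaussian kernel, 25 m cells and a 500 m bandwidth; the incident locations are synthetic and the brute-force summation is for illustration only.

```python
import numpy as np

def kde_grid(xy_m, cell=25.0, bandwidth=500.0, pad=1000.0):
    """Gaussian kernel density surface of point locations on a regular grid."""
    xmin, ymin = xy_m.min(axis=0) - pad
    xmax, ymax = xy_m.max(axis=0) + pad
    gx = np.arange(xmin, xmax, cell)
    gy = np.arange(ymin, ymax, cell)
    X, Y = np.meshgrid(gx, gy)
    dens = np.zeros_like(X)
    for x0, y0 in xy_m:                                   # sum of Gaussian kernels
        dens += np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * bandwidth ** 2))
    dens /= 2.0 * np.pi * bandwidth ** 2 * len(xy_m)       # normalise to a density
    return gx, gy, dens

rng = np.random.default_rng(7)
incidents = rng.normal([5000.0, 5000.0], 800.0, size=(300, 2))   # hypothetical burglaries
gx, gy, d = kde_grid(incidents)
print(d.shape, d.max())
```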

  18. Estimation of basic density of Eucalyptus globulus using near-infrared spectroscopy

    National Research Council Canada - National Science Library

    Muneri, Allie; Raymond, Carolyn A; Michell, Anthony J; Schimleck, Laurence R

    1999-01-01

    .... A method for estimating pulp yields has been developed by measuring the near-infrared spectra of wood powders from cores withdrawn from standing eucalypt plantation trees using motorized equipment...

  19. Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection

    Directory of Open Access Journals (Sweden)

    Park Jinho

    2012-06-01

    Full Text Available Abstract Background Myocardial ischemia can develop into more serious diseases. Detecting the ischemic syndrome in the electrocardiogram (ECG) early, accurately and automatically can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which is comprised of 367 ischemic ST episodes in 90 records. We first remove baseline wandering, and detect time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used for differentiating ST episodes from normal: (1) the area between QRS offset and T-peak points, (2) the normalized and signed sum from QRS offset to effective zero voltage point, and (3) the slope from QRS onset to offset point. We average the feature values for successive five beats to reduce effects of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm by kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively. The KDE classifier detects 349 ischemic ST episodes out of total 367 ST episodes. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively. The SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques of removing baseline wandering and detecting time positions of QRS complexes by discrete wavelet transform, and feature extraction from morphology of ECG waveforms explicitly. It was shown that the number of selected features was sufficient to discriminate ischemic ST episodes from the normal ones. We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical
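
    The classification stage of the KDE approach can be sketched as one kernel density estimate per class over the three beat features, with the label assigned by Bayes' rule. This is only an illustrative sketch: the feature extraction, the paper's automatic bandwidth selection and the real ST-T data are all omitted, and the arrays below are synthetic placeholders.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_kde_classifier(X_ischemic, X_normal, bandwidth=0.5):
    """Return a predict(X) function built from one Gaussian KDE per class."""
    kde_i = KernelDensity(bandwidth=bandwidth).fit(X_ischemic)
    kde_n = KernelDensity(bandwidth=bandwidth).fit(X_normal)
    prior_i = len(X_ischemic) / (len(X_ischemic) + len(X_normal))
    def predict(X):
        log_post_i = kde_i.score_samples(X) + np.log(prior_i)
        log_post_n = kde_n.score_samples(X) + np.log(1.0 - prior_i)
        return (log_post_i > log_post_n).astype(int)      # 1 = ischemic ST episode
    return predict

rng = np.random.default_rng(0)
X_isch = rng.normal([1.0, -0.5, 0.8], 0.3, size=(200, 3))   # toy 3-feature beat vectors
X_norm = rng.normal([0.0, 0.0, 0.0], 0.3, size=(400, 3))
predict = fit_kde_classifier(X_isch, X_norm)
print(predict(np.array([[0.9, -0.4, 0.7], [0.1, 0.0, -0.1]])))   # expected: [1 0]
```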

  20. Low-Density LiDAR and Optical Imagery for Biomass Estimation over Boreal Forest in Sweden

    Directory of Open Access Journals (Sweden)

    Iurii Shendryk

    2014-05-01

    Full Text Available Knowledge of the forest biomass and its change in time is crucial to understanding the carbon cycle and its interactions with climate change. LiDAR (Light Detection and Ranging) technology, in this respect, has proven to be a valuable tool, providing reliable estimates of aboveground biomass (AGB). The overall goal of this study was to develop a method for assessing AGB using a synergy of low point density LiDAR-derived point cloud data and multi-spectral imagery in conifer-dominated forest in the southwest of Sweden. Different treetop detection algorithms were applied for forest inventory parameter extraction from a LiDAR-derived canopy height model. Estimation of AGB was based on the power functions derived from tree parameters measured in the field, while vegetation classification of a multi-spectral image (SPOT-5) was performed in order to account for dependences of AGB estimates on vegetation types. Linear regression confirmed good performance of a newly developed grid-based approach for biomass estimation (R2 = 0.80). Results showed AGB to vary from below 1 kg/m2 in very young forests to 94 kg/m2 in mature spruce forests, with an RMSE of 4.7 kg/m2. These AGB estimates build a basis for further studies on carbon stocks as well as for monitoring this forest ecosystem in respect of disturbance and change in time. The methodology developed in this study can be easily adopted for assessing biomass of other conifer-dominated forests on the basis of low-density LiDAR and multispectral imagery. This methodology is hence of much wider applicability than biomass derivation based on expensive and currently still scarce high-density LiDAR data.
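
    The grid-based aggregation step can be sketched as follows: per-tree AGB from a power function of LiDAR-derived tree height, summed within grid cells. The allometric coefficients, cell size and tree list below are placeholders, not the field-calibrated values used in the study.

```python
import numpy as np

def agb_grid(tree_xy_m, tree_height_m, a=0.05, b=2.5, cell=25.0):
    """Return a dict {(col, row): AGB in kg} for each occupied grid cell."""
    agb_per_tree = a * np.asarray(tree_height_m) ** b      # assumed allometry, kg per tree
    cols = np.floor(tree_xy_m[:, 0] / cell).astype(int)
    rows = np.floor(tree_xy_m[:, 1] / cell).astype(int)
    grid = {}
    for c, r, w in zip(cols, rows, agb_per_tree):
        grid[(c, r)] = grid.get((c, r), 0.0) + w
    return grid

rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 100.0, size=(500, 2))        # detected tree tops in a 1 ha plot
heights = rng.uniform(5.0, 30.0, size=500)         # LiDAR-derived heights in metres
cells = agb_grid(xy, heights)
print(len(cells), max(cells.values()) / (25.0 * 25.0), "kg/m2 in the densest cell")
```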

  1. A least squares estimation method for the linear learning model

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1978-01-01

    textabstractThe author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a

  2. A stepwedge-based method for measuring breast density: observer variability and comparison with human reading

    Science.gov (United States)

    Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan

    2010-04-01

    Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49% - 5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators. This was attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and the radiologists. This was also observed between radiologists, despite significant inter-observer variation. Based on analysis of thresholds used in the stepwedge method, radiologists' definition of a dense pixel is one in which the percentage of glandular tissue is between 10 and 20% of the total thickness of tissue.

  3. Estimation of electron density in ionospheric D and E regions using MF radar: Inspection of DAE algorism

    Science.gov (United States)

    Ashihara, Y.; Miyake, T.; Ishisaka, K.; Murayama, Y.; Kawamura, S.; Nagano, I.; Okada, T.

    2006-12-01

    MF radar estimates the electron density in the lower ionospheric D and E regions, at altitudes from 60 km to 100 km, using the partially reflected MF radar transmission wave. Although the electron density in the ionospheric D region is very small, about 10-1000 /cc, electrons there are closely related to neutral dynamics and chemistry, including species such as hydrated ions and NOx, so measurements in this region have the potential to yield new physical knowledge of the mesosphere and lower ionosphere. The MF radar transmits a burst pulse vertically; the pulse is 48 μs wide and modulated at 1.995 MHz. The Differential Absorption Experiment (DAE) is one of the methods used to estimate electron density with MF radar. DAE needs three pieces of information: the ratio of received intensities, the reflection coefficient and the attenuation coefficient. The ratio of received intensities is obtained from the difference between the left- and right-hand polarized waves reflected by the ionosphere, whereas the reflection and attenuation coefficients are given as constants that depend only on altitude, not on electron or atmospheric density. The validity of DAE has not been examined for more than 30 years, so we examine the validity of treating both the reflection and attenuation coefficients as constants. Full wave analysis is a simulation method for calculating radio wave propagation characteristics in the ionosphere; however, the MF radar transmitted pulse must be treated in the time domain. In this study, we obtain the time development of the MF radar transmitted pulse by applying a Fourier transformation to the full wave analysis. Full wave analysis requires several parameters, such as an electron density profile and a neutral-electron collision frequency profile. The time development of the MF radar transmitted pulse includes the pulse reflected by the ionosphere, i.e. the ratio of received intensities. We can then calculate the electron density profile by the DAE method.

  4. Carbon footprint: current methods of estimation.

    Science.gov (United States)

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing serious global warming and its associated consequences. Following the rule that only what is measurable is manageable, measurement of the greenhouse gas intensity of different products, bodies, and processes is under way worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving, and carbon footprinting is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated, and is being commercialized in, all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among the studies. There are disagreements in the selection of gases and in the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Carbon footprinting is intended to be a tool to guide the relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.

  5. New theory of superconductivity. Method of equilibrium density matrix

    CERN Document Server

    Bondarev, Boris

    2014-01-01

    A new variational method for studying the equilibrium states of an interacting particle system has been proposed. The statistical description of the system is realized by means of a density matrix. This method is used for the description of conduction electrons in metals. An integral equation for the electron distribution function over wave vectors has been obtained. The solutions of this equation have been found for those cases where the single-particle Hamiltonian and the electron interaction Hamiltonian can be approximated by a quite simple expression. It is shown that the distribution function at temperatures below the critical value possesses previously unknown features which make it possible to explain the superconductivity of metals and the presence of a gap in the energy spectrum of superconducting electrons.

  6. Review on Vehicular Speed, Density Estimation and Classification Using Acoustic Signal

    Directory of Open Access Journals (Sweden)

    Prashant Borkar

    2013-09-01

    Full Text Available Traffic monitoring and parameter estimation, from urban to non-urban (battlefield) environments, is a fast-emerging field based on acoustic signals. We present here a comprehensive review of the state of the art in acoustic-signal-based vehicular speed estimation, density estimation and classification, a critical analysis, and an outlook on future research directions. This field is of increasing relevance for intelligent transport systems (ITSs). In recent years video monitoring and surveillance systems have been widely used in traffic management, and traffic parameters can be obtained using such systems, but the installation, operational and maintenance costs associated with these approaches are relatively high compared to the use of acoustic signals, which have very low installation and maintenance costs. The classification process includes the sensing unit, class definition, feature extraction, classifier application and system evaluation. The acoustic classification system is part of a multi-sensor real-time environment for traffic surveillance and monitoring. The classification accuracy achieved by the various studied algorithms is very good for the 'Heavy Weight' class of vehicles compared to the other category, 'Light Weight'. Performance also degrades slightly as vehicle speed increases. Vehicular speed estimation relates to average speed and traffic density measurement, and can be used for traffic signal timing optimization.

  7. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
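
    The state-vector augmentation idea can be sketched with a deliberately tiny example: a single unknown, constant line parameter r is appended to the state vector and recovered jointly by a standard Kalman filter from a stream of measurement snapshots. The scalar measurement model below is purely illustrative and is not the IEEE 13-Node Test Feeder.

```python
import numpy as np

rng = np.random.default_rng(0)
r_true = 0.8                                    # unknown line parameter to recover
i_load = rng.uniform(5.0, 15.0, size=200)       # known, time-varying current measurements
z = r_true * i_load + rng.normal(0.0, 0.05, size=200)   # observed voltage drops

# augmented state x = [v, r]: v is a placeholder dynamic state, r the constant parameter
x = np.array([0.0, 0.5])                        # initial guess
P = np.eye(2)
Q = np.diag([1e-4, 1e-8])                       # small process noise keeps r adjustable
R = np.array([[0.05 ** 2]])                     # measurement noise variance

for k in range(len(z)):
    P = P + Q                                   # time update: states modelled as constants
    H = np.array([[0.0, i_load[k]]])            # z_k = r * i_k, linear in the augmented state
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z[k]]) - H @ x)      # measurement update
    P = (np.eye(2) - K @ H) @ P

print("estimated r ≈", round(float(x[1]), 3))   # converges toward 0.8
```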

  8. Joint 2-D DOA and Noncircularity Phase Estimation Method

    Directory of Open Access Journals (Sweden)

    Wang Ling

    2012-03-01

    Full Text Available Classical joint estimation methods require a large amount of computation and a multidimensional search. To avoid these shortcomings, a novel joint two-dimensional (2-D) Direction Of Arrival (DOA) and noncircularity phase estimation method based on three orthogonal linear arrays is proposed. The problem of 3-D parameter estimation can be transformed into three parallel 2-D parameter estimations according to the characteristics of the three orthogonal linear arrays. Furthermore, each 2-D parameter estimation can be transformed into a 1-D parameter estimation by simultaneously using the rotational invariance property of the signal subspace and the orthogonality of the noise subspace in every subarray. Ultimately, the algorithm can jointly estimate and pair the parameters with a single eigen-decomposition of the extended covariance matrix. The proposed algorithm is applicable in low-SNR and small-snapshot scenarios, and can estimate 2(M−1) signals. Simulation results verify that the proposed algorithm is effective.

  9. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

    Full Text Available A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require the statistical knowledge of the channel in advance and avoids the inverse operation of a large-dimension matrix by using the fast Fourier transform (FFT) operation. Therefore, the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimation have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
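
    For context, the conventional per-symbol LMMSE smoother that the paper accelerates has the closed form H_lmmse = R_hh (R_hh + (beta/SNR) I)^-1 H_ls. The sketch below implements that textbook form directly (real-valued toy channel, beta = 1); the paper's contribution is precisely to avoid the explicit inverse via the FFT and to drop the need for prior channel statistics, which is not reproduced here.

```python
import numpy as np

def lmmse_channel_estimate(h_ls, R_hh, snr_linear, beta=1.0):
    """h_lmmse = R_hh (R_hh + (beta/SNR) I)^-1 h_ls, applied per OFDM symbol."""
    n = len(h_ls)
    W = R_hh @ np.linalg.inv(R_hh + (beta / snr_linear) * np.eye(n))
    return W @ h_ls

# toy example: exponentially correlated channel across 64 subcarriers
n = 64
k = np.arange(n)
R_hh = 0.9 ** np.abs(k[:, None] - k[None, :])            # assumed channel correlation
h_true = np.linalg.cholesky(R_hh + 1e-9 * np.eye(n)) @ (
    np.random.default_rng(0).standard_normal(n))
h_ls = h_true + 0.3 * np.random.default_rng(1).standard_normal(n)   # noisy LS estimate
h_hat = lmmse_channel_estimate(h_ls, R_hh, snr_linear=1.0 / 0.3 ** 2)
print(np.mean((h_hat - h_true) ** 2), "vs LS error", np.mean((h_ls - h_true) ** 2))
```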

  10. mBEEF-vdW: Robust fitting of error estimation density functionals

    Science.gov (United States)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas

    2016-06-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
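
    The plain bootstrap 0.632 estimator that the paper generalizes combines the resubstitution error with the out-of-bootstrap error as err_632 = 0.368·err_train + 0.632·err_oob. The sketch below shows that baseline estimator for an ordinary least-squares fit; the hierarchical sampling over datasets and the geometric mean used in the paper are not reproduced here.

```python
import numpy as np

def bootstrap_632(X, y, fit, predict, n_boot=200, rng=None):
    """Plain bootstrap 0.632 estimate of mean-squared prediction error."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    model = fit(X, y)
    err_train = np.mean((predict(model, X) - y) ** 2)     # resubstitution error
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                       # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)             # left-out samples
        if oob.size == 0:
            continue
        m = fit(X[idx], y[idx])
        oob_errs.append(np.mean((predict(m, X[oob]) - y[oob]) ** 2))
    return 0.368 * err_train + 0.632 * np.mean(oob_errs)

# toy usage with ordinary least squares on a noisy linear target
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda coef, X: X @ coef
rng = np.random.default_rng(1)
X = np.c_[np.ones(80), rng.normal(size=(80, 3))]
y = X @ np.array([1.0, 0.5, -0.2, 0.0]) + rng.normal(0.0, 0.3, 80)
print(bootstrap_632(X, y, fit, predict))
```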

  11. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    Science.gov (United States)

    Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.

    2016-03-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss uncertainty involved in the measurement. In this method, dried particles are introduced in to a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17 %. Among the individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm-3
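
    The density calculation itself reduces to dividing the QCM mass concentration by the particle volume concentration integrated from the SMPS size distribution, ρ_eff = M / Σ N_i (π/6) d_i³. The worked numbers below are invented for illustration and are not data from the paper.

```python
import numpy as np

d_nm   = np.array([20.0, 50.0, 100.0, 200.0, 400.0])      # bin mid-point diameters (nm)
n_cm3  = np.array([2.0e4, 5.0e4, 3.0e4, 8.0e3, 5.0e2])    # number concentration per bin (#/cm^3)
mass_ug_m3 = 90.0                                          # QCM mass concentration (ug/m^3)

d_cm = d_nm * 1e-7                                         # nm -> cm
vol_cm3_per_cm3 = np.sum(n_cm3 * (np.pi / 6.0) * d_cm ** 3)   # particle volume per cm^3 of air
vol_cm3_per_m3 = vol_cm3_per_cm3 * 1e6                     # per m^3 of air

rho_eff = (mass_ug_m3 * 1e-6) / vol_cm3_per_m3             # g of particle mass per cm^3 of particle volume
print(round(rho_eff, 2), "g/cm^3")                         # ~1.3 for these invented numbers
```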

  12. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America

    Science.gov (United States)

    Chestnut, Tara E.; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic
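
    The quoted sampling requirement follows from simple detection arithmetic: if one sample detects Bd with probability p when it is present, k independent samples detect it with probability 1 - (1 - p)^k, so a 95% chance with four 600 ml samples implies p ≈ 0.53 per sample. A two-line check:

```python
p_single = 1.0 - 0.05 ** (1.0 / 4.0)          # per-sample detection probability, ~0.53
p_four   = 1.0 - (1.0 - p_single) ** 4        # back to 0.95 with four samples
print(round(p_single, 3), round(p_four, 2))
```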

  13. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America.

    Directory of Open Access Journals (Sweden)

    Tara Chestnut

    Full Text Available Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L-1. The highest density observed was ∼3 million zoospores L-1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free

  14. A novel TOA estimation method with effective NLOS error reduction

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yi-heng; CUI Qi-mei; LI Yu-xiang; ZHANG Ping

    2008-01-01

    It is well known that non-line-of-sight (NLOS) error has been the major factor impeding the enhancement of accuracy for time of arrival (TOA) estimation and wireless positioning. This article proposes a novel method of TOA estimation that effectively reduces the NLOS error by 60% compared with the traditional timing and synchronization method. By constructing orthogonal training sequences, this method converts the traditional TOA estimation into the detection of the first arrival path (FAP) in the NLOS multipath environment, and then estimates the TOA by the round-trip transmission (RTT) technology. Both theoretical analysis and numerical simulations prove that the method proposed in this article achieves better performance than the traditional methods.

  15. A Method for Estimation of Death Tolls in Disastrous Earthquake

    Science.gov (United States)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls are among the most important measures of the damage and losses caused by a disastrous earthquake. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, it not only makes emergency programs and disaster management more effective but also supplies critical information for planning and managing the disaster response and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps and Geographic Information Systems. In general, many factors are related to the damage and losses induced by a disastrous earthquake, including the characteristics of the ground motions, geological conditions, the types and usage of buildings, the distribution of population, and socio-economic conditions. The density of seismic stations in Taiwan is currently the greatest in the world, and complete seismic data are readily available from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake occurs. It therefore becomes possible to estimate earthquake death tolls in Taiwan based on this preliminary information. Firstly, using the mainshock data of the Chi-Chi earthquake, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give a PGA Index for each seismic station. To obtain the distribution of iso-seismic intensity contours in all districts, and to resolve the problem of districts containing no seismic station, we apply the Kriging Interpolation Method and GIS software to the PGA Index and the geographical coordinates of the individual seismic stations. The population density depends on

  16. A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields

    Science.gov (United States)

    Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.

    2014-12-01

    Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law, that links reflectivity to rainfall intensity, have to date precluded the widespread use of radars for quantitative rainfall estimates for hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) which is based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity. More importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the Mean Square Error (MSE) of the estimated rainfall by 16 % compared to a traditionally fitted Z-R relation. Rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates than the SIR method, particularly in the areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method. The weight for each time period at each location is calculated based on the expected estimation error of each method
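
    The merging step can be sketched as an error-variance-weighted convex combination of the radar (NPZR) and gauge-interpolated (SIR) fields at each pixel and time step. The weighting rule and the placeholder variance fields below are illustrative assumptions, not the scheme calibrated in the paper.

```python
import numpy as np

def merge_fields(r_radar, r_gauge, var_radar, var_gauge):
    """Inverse-error-variance weighted merge of two rainfall fields."""
    w_gauge = var_radar / (var_radar + var_gauge)    # more weight where gauges are reliable
    return w_gauge * r_gauge + (1.0 - w_gauge) * r_radar

rng = np.random.default_rng(0)
radar = rng.gamma(2.0, 2.0, size=(50, 50))                 # mm/h, hypothetical radar field
gauge = radar + rng.normal(0.0, 0.5, size=(50, 50))        # hypothetical interpolated gauge field
var_radar = np.full((50, 50), 1.5)                         # assumed radar error variance
var_gauge = np.linspace(0.2, 3.0, 2500).reshape(50, 50)    # poorer far from gauges
merged = merge_fields(radar, gauge, var_radar, var_gauge)
print(merged.shape, float(merged.mean()))
```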

  17. Determinants of the reliability of ultrasound tomography sound speed estimates as a surrogate for volumetric breast density

    Energy Technology Data Exchange (ETDEWEB)

    Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)

    2015-10-15

    Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.
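
    As a rough illustration of the reliability analysis described above, the sketch below computes a one-way intraclass correlation coefficient, ICC(1), from repeated sound-speed reads using ANOVA mean squares. The subject count, noise levels and variable names are hypothetical; the study itself used a fuller random effects model with subject, scan, rater and repeat-read terms.

```python
import numpy as np

def icc_oneway(reads):
    """ICC(1) from a (n_subjects, k_reads) array of repeated measurements.

    Between-subject variance relative to total variance, estimated from
    one-way ANOVA mean squares: ICC = (MSB - MSW) / (MSB + (k - 1) * MSW).
    """
    reads = np.asarray(reads, dtype=float)
    n, k = reads.shape
    subject_means = reads.mean(axis=1)
    grand_mean = reads.mean()
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((reads - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical repeated sound-speed reads (m/s) for 22 subjects, 2 reads each.
rng = np.random.default_rng(1)
truth = rng.normal(1480, 20, size=22)                     # subject-level sound speed
reads = truth[:, None] + rng.normal(0, 4, size=(22, 2))   # read-to-read noise
print(f"ICC(1) = {icc_oneway(reads):.3f}")
```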

  18. A long-term evaluation of biopsy darts and DNA to estimate cougar density

    Science.gov (United States)

    Beausoleil, Richard A.; Clark, Joseph D.; Maletzke, Benjamin T.

    2016-01-01

    Accurately estimating cougar (Puma concolor) density is usually based on long-term research consisting of intensive capture and Global Positioning System collaring efforts and may cost hundreds of thousands of dollars annually. Because wildlife agency budgets rarely accommodate this approach, most infer cougar density from published literature, rely on short-term studies, or use hunter harvest data as a surrogate in their jurisdictions; all of which may limit accuracy and increase risk of management actions. In an effort to develop a more cost-effective long-term strategy, we evaluated a research approach using citizen scientists with trained hounds to tree cougars and collect tissue samples with biopsy darts. We then used the DNA to individually identify cougars and employed spatially explicit capture–recapture models to estimate cougar densities. Overall, 240 tissue samples were collected in northeastern Washington, USA, producing 166 genotypes (including recaptures and excluding dependent kittens) of 133 different cougars (8-25/yr) from 2003 to 2011. Mark–recapture analyses revealed a mean density of 2.2 cougars/100 km2 (95% CI=1.1-4.3) and stable to decreasing population trends (β=-0.048, 95% CI=-0.106–0.011) over the 9 years of study, with an average annual harvest rate of 14% (range=7-21%). The average annual cost per year for field sampling and genotyping was US$11,265 ($422.24/sample or $610.73/successfully genotyped sample). Our results demonstrated that long-term biopsy sampling using citizen scientists can increase capture success and provide reliable cougar-density information at a reasonable cost.

  19. Development of density measurement method of negative ion in plasmas using laser Thomson scattering

    Science.gov (United States)

    Yamagata, Yukihiko; Saiho, Hiroatsu; Uchino, Kiichiro; Muraoka, Katsunori

    2004-09-01

    Measurements of negative ion density in plasmas have been an important subject for many years. We have proposed a new method to measure the negative ion density in plasmas using laser Thomson scattering (LTS), and successfully measured O^- ion density in a radio frequency inductively coupled plasma [1]. In order to ensure the reliability of this technique and to estimate its accuracy, we measured O^- ion density under the same experimental conditions using the second (SHG) and third harmonics (THG) of a Nd:YAG laser as different laser sources. The LTS spectra measured in a pure argon plasma (500 W, 20 mTorr) were well fitted by a straight line in both the SHG and THG cases. For the plasma at 500 W and 20 mTorr with Ar/O_2=95%/5%, a clear bump in the LTS spectra, caused by photo-detached electrons, was observed below 0.9 eV for the SHG case and below 2 eV for the THG case, as predicted by the difference between the electron affinity of the O^- ion and the laser photon energy. The electron temperatures, electron densities and O^- ion densities obtained from the spectral shape and intensity of both LTS spectra agreed with each other within experimental error. [1] M. Noguchi, K. Ariga, T. Hirao, P. Suanpoot, Y. Yamagata, K. Uchino, K. Muraoka, Plasma Sources Sci. Technol., 11 (2002) 57.

  20. An isometric muscle force estimation framework based on a high-density surface EMG array and an NMF algorithm

    Science.gov (United States)

    Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu

    2017-08-01

    Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which can extract the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized into a matrix of basis vectors with each column representing an activation pattern and a matrix of time-varying coefficients by a nonnegative matrix factorization (NMF) algorithm. The activation pattern with the highest activation intensity, which was defined as the sum of the absolute values of the time-varying coefficient curve, was considered as the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using the whole channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the electrode number. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
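
    A minimal sketch of the pipeline described above, under simplifying assumptions: factorize a synthetic 128-channel envelope matrix with NMF, pick the activation pattern with the largest summed coefficient, keep its highest-weighted channels, and fit a polynomial from the averaged activation to force. The synthetic data, number of channels kept (16) and polynomial order are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical HD-sEMG envelope matrix (128 channels x T samples) and force signal.
rng = np.random.default_rng(2)
n_ch, T = 128, 2000
force = np.abs(np.sin(np.linspace(0, 3 * np.pi, T)))             # normalized force
spatial = rng.random(n_ch) ** 3                                  # one dominant pattern
envelope = np.outer(spatial, force) + 0.01 * rng.random((n_ch, T))

# 1) Factorize the non-negative envelope matrix: W (patterns) x H (coefficients).
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(envelope)        # (channels x components)
H = model.components_                    # (components x time)

# 2) Major activation pattern = largest summed absolute time-varying coefficient.
major = np.argmax(np.abs(H).sum(axis=1))

# 3) Keep the channels with the highest weights in that pattern as model input.
top_channels = np.argsort(W[:, major])[-16:]
activation = envelope[top_channels].mean(axis=0)

# 4) Polynomial fit from activation to measured force (2nd order here).
coeffs = np.polyfit(activation, force, deg=2)
force_hat = np.polyval(coeffs, activation)
print("RMSE:", np.sqrt(np.mean((force_hat - force) ** 2)))
```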

  1. Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data

    Directory of Open Access Journals (Sweden)

    Wei-Kuang Lai

    2016-02-01

    Full Text Available Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow in accordance with the amounts of HOs and NLUs and to estimate the traffic density in accordance with the amounts of CAs and PLUs. Then, the vehicle speeds can be estimated in accordance with the estimated traffic flows and estimated traffic densities. For vehicle speed forecasting, a back-propagation neural network algorithm is considered to predict the future vehicle speed in accordance with the current traffic information (i.e., the estimated vehicle speeds from CFVD). In the experimental environment, this study adopted the practical traffic information (i.e., traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program and referred to the mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results illustrated that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
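
    The core estimation step reduces to the hydrodynamic relation speed = flow / density, with flow taken from HO and NLU counts and density from CA and PLU counts. A minimal sketch, with hypothetical calibration constants in place of the paper's analytic models:

```python
def estimate_speed(ho_count, nlu_count, ca_count, plu_count,
                   flow_per_event=1.0, density_per_event=1.0):
    """Illustrative CFVD-style speed estimate.

    Traffic flow (veh/h) is taken proportional to handover + location-update
    counts on a road segment; traffic density (veh/km) proportional to call
    arrivals + periodic location updates; speed follows from q = k * v.
    The proportionality constants are hypothetical calibration factors.
    """
    flow = flow_per_event * (ho_count + nlu_count)         # veh/h
    density = density_per_event * (ca_count + plu_count)   # veh/km
    if density <= 0:
        raise ValueError("no density-related events observed")
    return flow / density                                   # km/h

# Example: 900 flow-related events and 12 density-related events in one hour.
print(estimate_speed(ho_count=600, nlu_count=300, ca_count=8, plu_count=4))
```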

  2. Estimating Tree Height-Diameter Models with the Bayesian Method

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    2014-01-01

    Full Text Available Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical approach in that the parameters to be estimated are regarded as random variables. In this study, both the classical and Bayesian methods were used to estimate the six height-diameter models. Both methods showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.
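
    For orientation, the sketch below fits a Weibull-type height-diameter curve by classical nonlinear least squares; a Bayesian fit of the same model would place priors on the same parameters (a, b, c) and sample their posterior. The functional form, starting values and simulated data are assumptions for illustration, not the study's data1/data2.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_hd(d, a, b, c):
    """Weibull-type height-diameter curve: H = 1.3 + a * (1 - exp(-b * D**c))."""
    return 1.3 + a * (1.0 - np.exp(-b * d ** c))

# Hypothetical diameter (cm) / height (m) pairs for illustration.
rng = np.random.default_rng(3)
d = rng.uniform(5, 50, 200)
h = weibull_hd(d, a=28.0, b=0.03, c=1.1) + rng.normal(0, 1.0, d.size)

params, cov = curve_fit(weibull_hd, d, h, p0=[25.0, 0.05, 1.0], maxfev=10000)
print("a, b, c =", params)
print("approx. standard errors:", np.sqrt(np.diag(cov)))
```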

  3. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    Science.gov (United States)

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods as a function of the number of test items in an aptitude test. The variance represents the accuracy produced by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  4. Research on the estimation method for Earth rotation parameters

    Science.gov (United States)

    Yao, Yibin

    2008-12-01

    In this paper, methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. Two different ways to estimate ERP are involved: one is the parameter transformation method, and the other is the direct adjustment method with restrictive conditions. The daily IGS SINEX files produced by GPS tracking stations can be used to estimate ERP, and the parameter transformation method can simplify the process. The processing results indicate that a systematic error exists in the ERP estimated from GPS observations alone. For the daily GPS SINEX files, why this distinct systematic error exists in the ERP, whether it affects the estimation of other parameters, and how large its influence is, all need further study in the future.

  5. A TRMM Rainfall Estimation Method Applicable to Land Areas

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R.; Weinman, J.; Dalu, G.

    1999-01-01

    Methods developed to estimate rain rate on a footprint scale over land with the satellite-borne multispectral dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer have met with limited success. Variability of surface emissivity on land and beam filling are commonly cited as the weaknesses of these methods. On the contrary, we contend that a more significant reason for this lack of success is that the information content of the spectral and polarization measurements of the SSM/I is limited because of significant redundancy. As a result, the complex nature and vertical distribution of frozen and melting ice particles of different densities, sizes, and shapes cannot be resolved satisfactorily. Extinction in the microwave region due to these complex particles can mask the extinction due to rain drops. For these reasons, theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. To illustrate the weakness of these models, consider as an example the brightness temperature measurement made by the radiometer in the 85 GHz channel (T85). Models indicate that T85 should be inversely related to the rain rate because of scattering. However, rain rates derived from 15-minute rain gauges on land indicate that this is not true in a majority of footprints. This is also supported by ship-borne radar observations of rain over the ocean in the Tropical Oceans and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE) region. Based on these observations, we infer that theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. We do not follow this path of rain retrieval on a footprint scale. Instead, we depend on the limited ability of the microwave radiometer to detect the presence of rain. This capability is useful for determining the rain area in a mesoscale region. We find in a given rain event that this rain area is closely related to the mesoscale-average rain rate

  6. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    Science.gov (United States)

    Boersen, Mark R.; Clark, Joseph D.; King, Tim L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE=29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
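
    A minimal sketch of the idea behind the Chao Mh estimate used above: population size is extrapolated from how many genotyped individuals were captured exactly once and exactly twice. This is the basic closed-form estimator; Program CAPTURE adds further refinements, and the capture frequencies below are hypothetical.

```python
def chao_mh_abundance(capture_counts):
    """Chao (1987) lower-bound estimator for population size under model Mh.

    capture_counts: per-individual capture frequencies for the individuals
    detected at least once (e.g. from genotyped hair samples).  Uses the
    number of individuals seen exactly once (f1) and exactly twice (f2):
    N_hat = S + f1**2 / (2 * f2).  Basic form only.
    """
    S = len(capture_counts)
    f1 = sum(1 for c in capture_counts if c == 1)
    f2 = sum(1 for c in capture_counts if c == 2)
    if f2 == 0:  # bias-corrected variant avoids division by zero
        return S + f1 * (f1 - 1) / 2.0
    return S + f1 ** 2 / (2.0 * f2)

# Hypothetical capture frequencies for 58 genotyped bears.
counts = [1] * 30 + [2] * 15 + [3] * 8 + [4] * 5
print(f"estimated population size: {chao_mh_abundance(counts):.1f}")
```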

  7. The Polarizable Embedding Density Matrix Renormalization Group Method

    CERN Document Server

    Hedegård, Erik D

    2016-01-01

    The polarizable embedding (PE) approach is a flexible embedding model where a pre-selected region out of a larger system is described quantum mechanically while the interaction with the surrounding environment is modeled through an effective operator. This effective operator represents the environment by atom-centered multipoles and polarizabilities derived from quantum mechanical calculations on (fragments of) the environment. Thereby, the polarization of the environment is explicitly accounted for. Here, we present the coupling of the PE approach with the density matrix renormalization group (DMRG). This PE-DMRG method is particularly suitable for embedded subsystems that feature a dense manifold of frontier orbitals which requires large active spaces. Recovering such static electron-correlation effects in multiconfigurational electronic structure problems, while accounting for both electrostatics and polarization of a surrounding environment, allows us to describe strongly correlated electronic structures ...

  8. An evaluation of methods for estimating decadal stream loads

    Science.gov (United States)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between

  9. The Effects of Surfactants on the Estimation of Bacterial Density in Petroleum Samples

    Science.gov (United States)

    Luna, Aderval Severino; da Costa, Antonio Carlos Augusto; Gonçalves, Márcia Monteiro Machado; de Almeida, Kelly Yaeko Miyashiro

    The effect of the surfactants polyoxyethylene monostearate (Tween 60), polyoxyethylene monooleate (Tween 80), cetyl trimethyl ammonium bromide (CTAB), and sodium dodecyl sulfate (SDS) on the estimation of bacterial density (sulfate-reducing bacteria [SRB] and general anaerobic bacteria [GAnB]) was examined in petroleum samples. Three different compositions of oil and water were selected to be representative of the real samples. The first one contained a high content of oil, the second one contained a medium content of oil, and the last one contained a low content of oil. The most probable number (MPN) was used to estimate the bacterial density. The results showed that the addition of surfactants did not improve the SRB quantification for the high or medium oil content in the petroleum samples. On the other hand, Tween 60 and Tween 80 promoted a significant increase in the GAnB quantification at 0.01% or 0.03% m/v concentrations, respectively. CTAB increased SRB and GAnB estimation for the sample with a low oil content at 0.00005% and 0.0001% m/v, respectively.

  10. Large-sample study of the kernel density estimators under multiplicative censoring

    CERN Document Server

    Asgharian, Masoud; Fakoor, Vahid; 10.1214/11-AOS954

    2012-01-01

    The multiplicative censoring model introduced in Vardi [Biometrika 76 (1989) 751--761] is an incomplete data problem whereby two independent samples from the lifetime distribution $G$, $\mathcal{X}_m=(X_1,...,X_m)$ and $\mathcal{Z}_n=(Z_1,...,Z_n)$, are observed subject to a form of coarsening. Specifically, sample $\mathcal{X}_m$ is fully observed while $\mathcal{Y}_n=(Y_1,...,Y_n)$ is observed instead of $\mathcal{Z}_n$, where $Y_i=U_iZ_i$ and $(U_1,...,U_n)$ is an independent sample from the standard uniform distribution. Vardi [Biometrika 76 (1989) 751--761] showed that this model unifies several important statistical problems, such as the deconvolution of an exponential random variable, estimation under a decreasing density constraint and an estimation problem in renewal processes. In this paper, we establish the large-sample properties of kernel density estimators under the multiplicative censoring model. We first construct a strong approximation for the process $\sqrt{k}(\hat{G}-G)$, where $\hat{G}$ is...
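
    For readers less familiar with the building block studied here, the sketch below implements a plain (uncensored) Gaussian kernel density estimator; the paper's setting additionally involves multiplicatively censored observations and Vardi's nonparametric estimator of G, which are not shown. Bandwidth and data are illustrative.

```python
import numpy as np

def gaussian_kde(x_eval, sample, bandwidth):
    """Plain Gaussian kernel density estimate at points x_eval.

    f_hat(x) = (1 / (n*h)) * sum_i K((x - X_i) / h), with K the standard
    normal density.  This is the uncensored building block only.
    """
    x_eval = np.atleast_1d(x_eval)[:, None]
    z = (x_eval - np.asarray(sample)[None, :]) / bandwidth
    kernel = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernel.mean(axis=1) / bandwidth

sample = np.random.default_rng(4).exponential(scale=1.0, size=1000)
grid = np.linspace(0.1, 4, 5)
print(gaussian_kde(grid, sample, bandwidth=0.2))
```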

  11. Evaluation of interpolating methods for daily precipitation at various station densities

    Science.gov (United States)

    Li, H.; Xu, C.-Y.; Chen, H.; Zhang, Z. X.; Xu, H. L.

    2012-04-01

    agreement (IOA) via cross-validation. To further observe their performance in interpolating different precipitation parameters, we also analyzed the changes of performance of the five methods in estimating daily maximum precipitation and precipitation values at different quintiles (5%, 25%, 50%, 75% and 95%) with different sampling densities.

  12. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    Directory of Open Access Journals (Sweden)

    B. Sarangi

    2015-12-01

    Full Text Available In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from solutions of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and is also applied to ambient measurements in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyzer (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other is sent to the QCM to measure the particle mass concentration simultaneously. Based on the particle volume derived from the SMPS size distribution data and the mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of the inorganic salt particles (for particle count mean diameter (CMD) over a size range of 10 to 478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm−3, which are comparable with the material density (ρ) values of 1.77, 2.17 and 1.72 g cm−3, respectively. Among the individual uncertainty components, repeatability of the particle mass obtained by the QCM, QCM crystal frequency, CPC counting efficiency, and equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. The effective density of ambient particles at the beginning of the winter period in New Delhi is measured to be 1.28 ± 0.12 g cm−3. It was found that, in general, the mid-day effective density of ambient aerosols increases with increasing CMD of the particle size measurement, but particle photochemistry is an
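
    The effective-density calculation itself is a short ratio of QCM-derived mass concentration to SMPS-derived volume concentration (assuming spherical particles). A minimal sketch with a hypothetical monodisperse example chosen so that the known ammonium sulfate density is recovered:

```python
import numpy as np

def effective_density(diameters_nm, number_conc_cm3, mass_conc_ug_m3):
    """Effective density (g/cm^3) from an SMPS size distribution and QCM mass.

    Particle volume concentration is computed from the mobility-size
    distribution assuming spherical particles; effective density is the
    QCM mass concentration divided by that volume concentration.
    """
    d_cm = np.asarray(diameters_nm) * 1e-7                   # nm -> cm
    volume_cm3_per_cm3 = np.sum(number_conc_cm3 * np.pi / 6.0 * d_cm ** 3)
    mass_g_per_cm3 = mass_conc_ug_m3 * 1e-6 / 1e6            # ug/m^3 -> g/cm^3
    return mass_g_per_cm3 / volume_cm3_per_cm3

# Hypothetical illustration: monodisperse 100 nm ammonium-sulfate-like particles.
d = np.array([100.0])           # nm
n = np.array([1.0e5])           # particles per cm^3 in that size bin
true_rho = 1.77                 # g/cm^3
mass_ug_m3 = true_rho * (np.pi / 6) * (100e-7) ** 3 * 1.0e5 * 1e6 * 1e6
print(f"effective density = {effective_density(d, n, mass_ug_m3):.2f} g/cm^3")
```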

  13. Numerical methods for high-dimensional probability density function equations

    Science.gov (United States)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  14. Numerical methods for high-dimensional probability density function equations

    Energy Technology Data Exchange (ETDEWEB)

    Cho, H. [Department of Mathematics, University of Maryland College Park, College Park, MD 20742 (United States); Venturi, D. [Department of Applied Mathematics and Statistics, University of California Santa Cruz, Santa Cruz, CA 95064 (United States); Karniadakis, G.E., E-mail: gk@dam.brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2016-01-15

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker–Planck and Dostupov–Pugachev equations), random wave theory (Malakhov–Saichev equations) and coarse-grained stochastic systems (Mori–Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  15. Evaluating maximum likelihood estimation methods to determine the hurst coefficients

    Science.gov (United States)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, describes long memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
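
    The S-MLE estimator evaluated in the paper is not reproduced here; as a simpler illustration of the quantity being estimated, the sketch below uses the aggregated-variance method, which recovers H from how the variance of block means scales with block size. The block sizes and the white-noise sanity check are arbitrary choices.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance estimate of the Hurst coefficient H.

    For a long-memory process, the variance of block means scales as
    m**(2H - 2) with block size m; H is recovered from the log-log slope.
    This is a simpler illustration than the MLE evaluated in the paper.
    """
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_var, 1)[0]
    return 1.0 + slope / 2.0

# White noise has H = 0.5 (no long memory); used here only as a sanity check.
print(hurst_aggregated_variance(np.random.default_rng(5).normal(size=2 ** 14)))
```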

  16. Population density estimation of the European wildcat (Felis silvestris silvestris) in Sicily using camera trapping

    Directory of Open Access Journals (Sweden)

    Stefano Anile

    2012-07-01

    Full Text Available The wildcat is an elusive species that is threatened with extinction in many areas of its European distribution. In Sicily the wildcat lives in a wide range of habitats; this study was done on Mount Etna. Previous camera-trap monitoring was conducted in 2006 (pilot study) and 2007 (first estimation of wildcat population size using camera trapping with capture–recapture analyses) in the same study area. In 2009 digital camera traps in pairs were used at each station with the aim of obtaining photographs of the wildcat. Experience and data collected from the previous studies were used to develop a protocol to estimate the density of the wildcat population using capture–recapture analyses and the coat-colour and markings system to recognize individuals. Two trap-lines adjacent to each other were run in two consecutive data collection periods. Camera traps operated for a total of 1080 trap-days and we obtained 42 pictures of wildcats from 32 photographic capture events, from which 10 individuals (excluding four kittens) were identified. The capture history of each individual was constructed and the software CAPTURE was used to generate an estimate of the population density (0.22 to 0.44 wildcats/100 ha) for our study area, using two different approaches for the calculation of the effective area sampled. The wildcat population density on Mount Etna is higher than those found throughout Europe, and is favoured by the habitat structure, prey availability, Mediterranean climate and the protection status provided by the park.

  17. Statistical methods for cosmological parameter selection and estimation

    CERN Document Server

    Liddle, Andrew R

    2009-01-01

    The estimation of cosmological parameters from precision observables is an important industry with crucial ramifications for particle physics. This article discusses the statistical methods presently used in cosmological data analysis, highlighting the main assumptions and uncertainties. The topics covered are parameter estimation, model selection, multi-model inference, and experimental design, all primarily from a Bayesian perspective.

  18. Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence

    Science.gov (United States)

    Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.

    2008-01-01

    This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…

  19. WAVELET BASED SPECTRAL CORRELATION METHOD FOR DPSK CHIP RATE ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Li Yingxiang; Xiao Xianci; Tai Hengming

    2004-01-01

    A wavelet-based spectral correlation algorithm to detect and estimate BPSK signal chip rate is proposed. Simulation results show that the proposed method can correctly estimate the BPSK signal chip rate, which may be corrupted by the quadratic characteristics of the spectral correlation function, in a low SNR environment.

  20. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  1. Performance of sampling methods to estimate log characteristics for wildlife.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  2. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars

    Directory of Open Access Journals (Sweden)

    Yi-Xiong Zhang

    2016-06-01

    Full Text Available Traditional monopulse angle estimation is mainly based on phase-comparison and amplitude-comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, yet angle estimation for wideband signals has received little attention in previous works. As noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in the accumulation of energy from the high resolution range profile (HRRP) of the monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted into estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate that the proposed algorithm performs similarly to the traditional amplitude-comparison method, indicating that it can be adopted for angle estimation. With the proposed method, future radars may only need wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen anti-jamming capability. More importantly, the estimated angle does not become ambiguous at any angle, which can significantly extend the usable angle range in wideband radars.
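
    The step the abstract highlights, converting angle estimation into estimating the frequency of a cross-correlation function, can be illustrated generically: cross-correlate two channels and locate the dominant spectral component of the CCF. The sketch below is not the paper's full algorithm (the LFM dechirping and the mapping from CCF frequency to angle are omitted), and the tone parameters are hypothetical.

```python
import numpy as np

def ccf_dominant_frequency(x, y, sample_rate_hz):
    """Estimate the dominant frequency of the cross-correlation of x and y.

    The cross-correlation function (CCF) is computed via FFTs and its
    strongest spectral component is located by a periodogram peak.  This is
    a generic sketch of the 'estimate the frequency of the CCF' step; the
    mapping from that frequency to the monopulse angle is not shown.
    """
    n = len(x) + len(y) - 1
    nfft = 1 << (n - 1).bit_length()
    ccf = np.fft.ifft(np.fft.fft(x, nfft) * np.conj(np.fft.fft(y, nfft)))
    spectrum = np.abs(np.fft.rfft(ccf.real))
    freqs = np.fft.rfftfreq(nfft, d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin

fs = 1.0e6
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * 50e3 * t)
y = np.cos(2 * np.pi * 50e3 * t + 0.7)                 # same tone, phase offset
print(f"{ccf_dominant_frequency(x, y, fs) / 1e3:.1f} kHz")
```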

  3. Linear-In-The-Parameters Oblique Least Squares (LOLS) Provides More Accurate Estimates of Density-Dependent Survival

    Science.gov (United States)

    Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure

    2016-01-01

    Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. When estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of the survival function parameters, while remaining robust to the data features that usually make such regressions numerically unstable. PMID:27936048
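
    The authors' iterative LOLS algorithm is not reproduced here; as a pointer to the model I versus model II distinction they draw, the sketch below fits a standardized major-axis (model II) line, whose slope treats both variables as subject to error. The density and survival data are simulated for illustration.

```python
import numpy as np

def standardized_major_axis(x, y):
    """Standardized major-axis (model II) line fit y = a + b*x.

    Unlike ordinary (vertical) least squares, the slope is sign(r)*sd(y)/sd(x),
    treating both variables as subject to error.  This illustrates the
    model I vs model II distinction; it is not the authors' iterative LOLS.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return intercept, slope

rng = np.random.default_rng(6)
density = rng.uniform(10, 200, 100)                      # hypothetical plot densities
survival = 0.9 - 0.003 * density + rng.normal(0, 0.05, 100)
print(standardized_major_axis(density, survival))
```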

  4. Effects of Percent Tree Canopy Density and DEM Misregistration on SRTM/NED Vegetation Height Estimates

    Directory of Open Access Journals (Sweden)

    George Miliaresis

    2009-04-01

    Full Text Available The U.S. National Elevation Dataset and the NLCD 2001 landcover data were used to test the correlation between SRTM elevation values and the height of evergreen forest vegetation in the Klamath Mountains of California. Vegetation height estimates (SRTM minus NED) are valid for only two of the eight (N, NE, E, SE, S, SW, W, NW) geographic directions, due to NED and SRTM grid data misregistration. Penetration depths of the SRTM radar were found to correlate linearly with tree percent canopy density.

  5. Cetacean Density Estimation from Novel Acoustic Datasets by Acoustic Propagation Modeling

    Science.gov (United States)

    2013-09-30

    whales off Kona, Hawai’i, is based on the works of Zimmer et al. (2008), Marques et al. (2009), and Küsel et al. (2011). The density estimator formula given by Marques et al. (2009) is applied here for the case of one (k = 1) sensor. ... 2124 manually labeled false killer whale clicks, calculated in 1 kHz band intervals from 0 to 90 kHz.

  6. Estimation of pump operational state with model-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina [Institute of Energy Technology, Lappeenranta University of Technology, P.O. Box 20, FI-53851 Lappeenranta (Finland); Kestilae, Juha [ABB Drives, P.O. Box 184, FI-00381 Helsinki (Finland)

    2010-06-15

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently. (author)
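
    One common way such a model-based estimate can be set up is with an adjustable head curve plus the affinity laws, so that the operating point follows from rotational speed without external flow or pressure sensors. The sketch below uses a quadratic QH curve with hypothetical coefficients; the paper's own methods also exploit the motor-state estimates available in the frequency converter, which are not represented here.

```python
import numpy as np

def pump_head(flow_m3h, speed_rpm, nominal_speed_rpm=1450.0, h0=40.0, k=-0.004):
    """Head (m) predicted by an adjustable QH model plus the affinity laws.

    At nominal speed the curve is H(Q) = h0 + k * Q**2 (coefficients are
    hypothetical and would be fitted per pump).  At another speed n the
    affinity laws scale flow by n/n0 and head by (n/n0)**2.
    """
    ratio = speed_rpm / nominal_speed_rpm
    q_equiv = flow_m3h / ratio               # flow referred back to nominal speed
    return ratio ** 2 * (h0 + k * q_equiv ** 2)

def estimate_flow(required_head_m, speed_rpm):
    """Invert the QH model to estimate flow for a given head and speed."""
    ratio = speed_rpm / 1450.0
    q_equiv_sq = (required_head_m / ratio ** 2 - 40.0) / -0.004
    return np.sqrt(max(q_equiv_sq, 0.0)) * ratio

print(pump_head(flow_m3h=60.0, speed_rpm=1200.0))
print(estimate_flow(required_head_m=20.0, speed_rpm=1200.0))
```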

  7. Bayesian semiparametric power spectral density estimation in gravitational wave data analysis

    CERN Document Server

    Edwards, Matthew C; Christensen, Nelson

    2015-01-01

    The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with non-stationary data by breaking longer data streams into smaller and locally stationary components.
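
    The likelihood the sampler updates is the Whittle approximation, which scores a candidate PSD against the periodogram. A minimal sketch of that single ingredient is given below; the Bernstein-polynomial prior, Dirichlet-process weights and Metropolis-within-Gibbs steps are not shown, and the white-noise check is illustrative only.

```python
import numpy as np

def whittle_log_likelihood(data, psd_candidate):
    """Whittle log-likelihood of a candidate PSD given a time series.

    l(S) = -sum_j [ log S(f_j) + I(f_j) / S(f_j) ]  over the nonzero,
    non-Nyquist Fourier frequencies, with I the periodogram.  In the paper
    this quantity is evaluated inside a Metropolis-within-Gibbs sampler.
    """
    n = len(data)
    periodogram = (np.abs(np.fft.rfft(data)) ** 2) / n
    idx = slice(1, None if n % 2 else -1)    # drop DC (and Nyquist for even n)
    I = periodogram[idx]
    S = np.asarray(psd_candidate)[idx]
    return -np.sum(np.log(S) + I / S)

# White Gaussian noise has a flat spectrum; the correct level should score highest.
x = np.random.default_rng(7).normal(size=4096)
flat = lambda level: np.full(len(np.fft.rfftfreq(len(x))), level)
print(whittle_log_likelihood(x, flat(1.0)), whittle_log_likelihood(x, flat(2.0)))
```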

  8. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are

  9. Grade-Average Method: A Statistical Approach for Estimating ...

    African Journals Online (AJOL)

    Grade-Average Method: A Statistical Approach for Estimating Missing Value for Continuous Assessment Marks. ...

  10. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance

  11. A new FOA estimation method in SAR/GALILEO system

    Science.gov (United States)

    Liu, Gang; He, Bing; Li, Jilin

    2007-11-01

    The European Galileo Plan will include a Search and Rescue (SAR) transponder which will become part of the future MEOSAR (Medium Earth Orbit Search and Rescue) system. The new SAR system can improve localization accuracy through measurement of the frequency of arrival (FOA) and time of arrival (TOA) of beacons, and FOA estimation is one of its most important parts. In this paper, we aim to find a good FOA algorithm with minimal estimation error, which must be less than 0.1 Hz. We propose a new method based on the Kay algorithm for the SAR/GALILEO system, arrived at by comparing several frequency estimation methods, including those currently used in the COSPAS-SARSAT system, and by analyzing the distress beacon in terms of signal structure and spectrum characteristics. Simulations show that the Kay method performs better for FOA estimation.
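
    The Kay algorithm referred to above is a weighted average of phase increments between consecutive complex samples. A minimal sketch on a synthetic complex tone is shown below; the actual beacon signal processing is not represented, and the sampling rate and test frequency are arbitrary.

```python
import numpy as np

def kay_frequency_estimate(z, sample_rate_hz):
    """Kay's weighted phase-difference estimator of a single tone's frequency.

    Uses the phase increments between consecutive complex samples with the
    smoothing weights w_k = 6k(N-k) / (N(N^2-1)), which sum to one, and is
    close to the Cramer-Rao bound at moderate-to-high SNR.
    """
    z = np.asarray(z)
    N = len(z)
    k = np.arange(1, N)
    w = 6.0 * k * (N - k) / (N * (N ** 2 - 1.0))
    phase_increments = np.angle(z[1:] * np.conj(z[:-1]))
    return sample_rate_hz * np.sum(w * phase_increments) / (2.0 * np.pi)

# Hypothetical test tone: 123.4 Hz sampled at 4 kHz with a little complex noise.
fs = 4000.0
t = np.arange(2048) / fs
rng = np.random.default_rng(8)
z = np.exp(2j * np.pi * 123.4 * t) + 0.05 * (rng.normal(size=t.size)
                                             + 1j * rng.normal(size=t.size))
print(f"estimated frequency: {kay_frequency_estimate(z, fs):.2f} Hz")
```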

  12. A bootstrap method for estimating uncertainty of water quality trends

    Science.gov (United States)

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.
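
    The WBT itself resamples the observations and refits the full WRTDS model, which is beyond a short example; the sketch below illustrates the general bootstrap idea on a much simpler object, a linear trend slope with residual resampling. The data and the 90% interval level are hypothetical.

```python
import numpy as np

def bootstrap_trend_ci(years, values, n_boot=2000, alpha=0.1, seed=0):
    """Residual-bootstrap confidence interval for a linear trend slope.

    Fit a straight line, resample its residuals with replacement, refit, and
    take percentiles of the replicated slopes.  This is only a schematic
    illustration of bootstrap trend uncertainty, not the WRTDS Bootstrap Test.
    """
    rng = np.random.default_rng(seed)
    years = np.asarray(years, float)
    values = np.asarray(values, float)
    slope, intercept = np.polyfit(years, values, 1)
    resid = values - (slope * years + intercept)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        y_boot = slope * years + intercept + rng.choice(resid, size=resid.size)
        slopes[b] = np.polyfit(years, y_boot, 1)[0]
    lo, hi = np.percentile(slopes, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return slope, (lo, hi)

yrs = np.arange(1995, 2015)
conc = 1.2 - 0.01 * (yrs - 1995) + np.random.default_rng(9).normal(0, 0.05, yrs.size)
print(bootstrap_trend_ci(yrs, conc))
```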

  13. Methods of multicriterion estimations in system total quality management

    Directory of Open Access Journals (Sweden)

    Nikolay V. Diligenskiy

    2011-05-01

    Full Text Available In this article the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a system of total quality management are considered.

  14. Comparison study between coherent echoes at VHF range and electron density estimated by Ionosphere Model for Auroral Zone

    Science.gov (United States)

    Nishiyama, Takanori; Nakamura, Takuji; Tsutsumi, Masaki; Tanaka, Yoshi; Nishimura, Koji; Sato, Kaoru; Tomikawa, Yoshihiro; Kohma, Masashi

    2016-07-01

    Polar Mesosphere Winter Echo (PMWE) is known as backscatter echo from 55 to 85 km in the mesosphere, and it has been observed by MST and IS radars in the polar regions during non-summer periods. Since the density of free electrons, the scatterers, is low in the dark mesosphere during winter, it is suggested that PMWE requires strong ionization of the neutral atmosphere associated with Energetic Particle Precipitations (EPPs) during Solar Proton Events [Kirkwood et al., 2002] or during geomagnetically disturbed periods [Nishiyama et al., 2015]. However, studies on the relationship between PMWE occurrence and background electron density have so far been limited [Lübken et al., 2006], partly because the PMWE occurrence rate is known to be quite low (2.9%) [Zeller et al., 2006]. The PANSY (Program of the Antarctic Syowa MST/IS) radar, which is the largest MST radar in Antarctica, has observed many PMWE events since it started mesosphere observations in June 2012. We established a method for using the PANSY radar as a riometer, which makes it possible to estimate Cosmic Noise Absorption (CNA) as a proxy for relative variations in background electron density. In addition, electron density profiles from 60 to 150 km altitude are calculated by the Ionospheric Model for the Auroral Zone (IMAZ) [McKinnell and Friedrich, 2007] and the CNA estimated by the PANSY radar. In this presentation, we focus on strong PMWE during two big geomagnetic storm events, the St. Patrick's Day and Summer Solstice 2015 events, in order to compare the observed PMWE characteristics with the modeled background electron density. On March 19 and 22, during the recovery phase of the St. Patrick's Day storm, sudden PMWE intensification was detected near 60 km by the PANSY radar. At the same time, strong CNA of 0.8 dB and 1.0 dB was measured, respectively. However, the calculated electron density profiles did not show high electron density at the altitudes where the PMWE intensification was observed. On June 22, the

  15. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to

  16. A LEVEL-VALUE ESTIMATION METHOD FOR SOLVING GLOBAL OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    WU Dong-hua; YU Wu-yang; TIAN Wei-wen; ZHANG Lian-sheng

    2006-01-01

    A level-value estimation method was illustrated for solving the constrained global optimization problem. The equivalence between the root of a modified variance equation and the optimal value of the original optimization problem is shown. An alternate algorithm based on Newton's method is presented and the convergence of its implementable approach is proved. Preliminary numerical results indicate that the method is effective.

  17. Exploring Complex-Langevin Methods for Finite-Density QCD

    CERN Document Server

    Sinclair, D K

    2015-01-01

    QCD at non-zero chemical potential ($\mu$) for quark number has a complex fermion determinant and thus standard simulation methods for lattice QCD cannot be applied. We therefore simulate this theory using the Complex-Langevin algorithm with Gauge Cooling in addition to adaptive methods, to prevent runaway behaviour. Simulations are performed at zero temperature on a $12^4$ lattice with 2 quarks which are light enough that $m_N/3$ is significantly larger than $m_\pi/2$. Preliminary results are qualitatively as expected. The quark-number density is close to zero for $\mu < m_N/3$, beyond which it increases, eventually reaching its saturation value of $3$ for $\mu$ sufficiently large. The chiral condensate decreases as $\mu$ is increased, approaching zero at saturation, while the plaquette increases towards its quenched value. We have yet to observe the transition to nuclear matter at $\mu \approx m_N/3$, presumably because the runs for $\mu$ between $m_N/3$ and saturation have yet to equilibrate.

  18. Parameterizing deep convection using the assumed probability density function method

    Directory of Open Access Journals (Sweden)

    R. L. Storer

    2014-06-01

    Full Text Available Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  19. Methods for Estimating the Ultimate Bearing Capacity of Layered Foundations

    Institute of Scientific and Technical Information of China (English)

    袁凡凡; 闫澍旺

    2003-01-01

    The Meyerhof and Hanna (M-H) method for estimating the ultimate bearing capacity of layered foundations was improved. The experimental results of load tests in Tianjin New Harbor were compared with predictions from the method recommended by the code for harbor engineering foundations, i.e. Hansen's method, and from the improved M-H method. The comparisons implied that the code method and the improved M-H method could give better predictions.

  20. Using the Mercy Method for Weight Estimation in Indian Children

    Directory of Open Access Journals (Sweden)

    Gitanjali Batmanabane MD, PhD

    2015-01-01

    Full Text Available This study was designed to compare the performance of a new weight estimation strategy (Mercy Method) with 12 existing weight estimation methods (APLS, Best Guess, Broselow, Leffler, Luscombe-Owens, Nelson, Shann, Theron, Traub-Johnson, Traub-Kichen) in children from India. Otherwise healthy children, 2 months to 16 years, were enrolled and weight, height, humeral length (HL), and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights and the slope, intercept, and Pearson correlation coefficient estimated. Agreement between estimated weight and actual weight was determined using Bland–Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), and root mean square error (RMSE). Three hundred seventy-five children (7.5 ± 4.3 years, 22.1 ± 12.3 kg, 116.2 ± 26.3 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = .967 vs .517-.844). The MM also demonstrated the lowest ME, MPE, and RMSE. Finally, the MM estimated weight within 20% of actual for nearly all children (96%) as opposed to the other methods, for which these values ranged from 14% to 63%. The MM performed extremely well in Indian children with performance characteristics comparable to those observed for US children in whom the method was developed. It appears that the MM can be used in Indian children without modification, extending the utility of this weight estimation strategy beyond Western populations.