Nonparametric methods for volatility density estimation
van Es, Bert; Spreij, P.J.C.; van Zanten, J.H.
2009-01-01
Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on
Concrete density estimation by rebound hammer method
International Nuclear Information System (INIS)
Ismail, Mohamad Pauzi bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin
2016-01-01
Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked when determining concrete quality. However, for shielding purposes, density is the parameter that needs to be considered. X-ray and gamma radiation are effectively absorbed by a material with a high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of a response density, which in turn imply estimates of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time compared to a single deterministic analysis, which yields one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in estimating the response density parameters. They are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the possibilities with this software. The LHS method is the newest addition to the stochastic methods within NESSUS, and part of this work was to enhance NESSUS with it. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
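The contrast the abstract draws between plain Monte Carlo and Latin hypercube sampling can be sketched in a few lines. The stratification scheme below is a generic LHS rather than NESSUS's implementation, and the response function is a toy assumption.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=None):
    """Stratified uniform design on [0, 1]^d: each variable's range is cut
    into n_samples equal bins, one point per bin, columns paired at random."""
    rng = np.random.default_rng(seed)
    # one jittered point per equal-probability stratum, per column
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        u[:, j] = u[rng.permutation(n_samples), j]  # decouple the columns
    return u

# Toy response y = x1**2 + 2*x2 with x1, x2 ~ U(0, 1); the true mean is 4/3.
x = latin_hypercube(10_000, 2, seed=0)
y = x[:, 0] ** 2 + 2 * x[:, 1]
print(round(y.mean(), 3))  # LHS pins the mean down much tighter than plain MC
```

Because every marginal stratum is hit exactly once, moment estimates converge with far lower variance than unstratified sampling of the same size.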
Ambit determination method in estimating rice plant population density
Directory of Open Access Journals (Sweden)
Abu Bakar, B.,
2017-11-01
Full Text Available Rice plant population density is a key indicator in determining the crop setting and fertilizer application rate. It is therefore essential that the population density be monitored to ensure that correct crop management decisions are taken. The conventional method of determining plant population is to manually count the total number of rice plant tillers in a 25 cm x 25 cm square frame. Sampling is done by randomly choosing several different locations within a plot to perform tiller counting. This sampling method is time-consuming, labour-intensive and costly. An alternative fast estimating method was developed to overcome this issue. The method relies on measuring the outer circumference, or ambit, of the contained rice plants in a 25 cm x 25 cm square frame to determine the number of tillers within that frame. Data samples of rice variety MR219 were collected from rice plots in the Muda granary area, Sungai Limau Dalam, Kedah. The data were taken at 50 days and 70 days after seeding (DAS). A total of 100 data samples were collected for each sampling day. A good correlation was obtained for both the 50 DAS and 70 DAS data. The model was then verified by taking 100 samples with the latching strap for 50 DAS and 70 DAS. As a result, this technique can be used as a fast, economical and practical alternative to manual tiller counting. The technique can potentially be used in the development of an electronic sensing system to estimate paddy plant population density.
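The calibration step described above, relating the measured ambit to a tiller count, can be sketched as a least-squares fit. The data pairs and the linear model form below are illustrative assumptions; the abstract does not give the study's fitted model.

```python
import numpy as np

# Hypothetical calibration pairs: ambit of the plant cluster inside the
# 25 cm x 25 cm frame (cm) vs. manually counted tillers.
ambit = np.array([30.0, 42.0, 55.0, 61.0, 78.0, 90.0, 104.0, 120.0])
tillers = np.array([8, 12, 17, 19, 25, 29, 34, 40])

# Least-squares line: tillers ~ a + b * ambit
b, a = np.polyfit(ambit, tillers, 1)

def estimate_tillers(ambit_cm):
    """Predict tiller count from a single ambit measurement (cm)."""
    return a + b * ambit_cm

print(round(estimate_tillers(70.0)))  # predicted count for a 70 cm ambit
```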
Review of methods for level density estimation from resonance parameters
International Nuclear Information System (INIS)
Froehner, F.H.
1983-01-01
A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)
A projection and density estimation method for knowledge discovery.
Directory of Open Access Journals (Sweden)
Adam Stanski
Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It makes it possible to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
Estimating forest canopy bulk density using six indirect methods
Robert E. Keane; Elizabeth D. Reinhardt; Joe Scott; Kathy Gray; James Reardon
2005-01-01
Canopy bulk density (CBD) is an important crown characteristic needed to predict crown fire spread, yet it is difficult to measure in the field. Presented here is a comprehensive research effort to evaluate six indirect sampling techniques for estimating CBD. As reference data, detailed crown fuel biomass measurements were taken on each tree within fixed-area plots...
Application of Density Estimation Methods to Datasets from a Glider
2014-09-30
humpback and sperm whales as well as different dolphin species. OBJECTIVES The objective of this research is to extend existing methods for cetacean... density estimation from single-sensor datasets. Required steps for a cue counting approach, where a cue has been defined as a clicking event (Küsel et al., 2011), to
Directory of Open Access Journals (Sweden)
Hendra Gunawan
2014-06-01
Full Text Available http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation between the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation between the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under the assumption of a free-air anomaly consisting of a topographic effect, an intracrustal effect, and an isostatic compensation. Based on the simulation results, Bouguer density estimates were then investigated for a 2005 gravity survey of the La Soufriere Volcano area, Guadeloupe (Antilles Islands). The Bouguer density based on the Parasnis approach is 2.71 g/cm3 for the whole area, except the edifice area, where the average topographic density estimate is 2.21 g/cm3; Bouguer density estimates from a previous gravity survey of 1975 are 2.67 g/cm3. The Bouguer density in La Soufriere Volcano was estimated with an uncertainty of 0.1 g/cm3. For the studied area, the density deduced from refraction seismic data is coherent with the recent Bouguer density estimates. A new Bouguer anomaly map based on these Bouguer density values allows a better geological interpretation.
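Nettleton's criterion as summarised above (choose the reduction density that minimises the correlation between the Bouguer anomaly and topography) might be sketched as follows. The infinite-slab correction and the synthetic profile are simplifying assumptions for illustration.

```python
import numpy as np

TWO_PI_G = 2 * np.pi * 6.674e-11  # SI units

def nettleton_density(free_air, topo_height, densities):
    """Return the reduction density (kg/m^3) whose simple-slab Bouguer
    anomaly is least correlated with topography (Nettleton's criterion)."""
    best_rho, best_r = None, np.inf
    for rho in densities:
        # infinite-slab Bouguer correction, converted to mGal (1 m/s^2 = 1e5 mGal)
        bouguer = free_air - 1e5 * TWO_PI_G * rho * topo_height
        r = abs(np.corrcoef(bouguer, topo_height)[0, 1])
        if r < best_r:
            best_rho, best_r = rho, r
    return best_rho

# Synthetic profile: topography with a true reduction density of 2670 kg/m^3
rng = np.random.default_rng(1)
h = rng.uniform(0.0, 500.0, 200)                               # heights, m
fa = 1e5 * TWO_PI_G * 2670.0 * h + rng.normal(0.0, 0.1, 200)   # free-air, mGal
rho_est = nettleton_density(fa, h, np.arange(2000.0, 3001.0, 10.0))
print(rho_est)  # close to 2670
```

The Parasnis variant replaces the correlation with topography by a regression of the free-air anomaly against the Bouguer correction term.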
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
International Nuclear Information System (INIS)
Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi
2016-01-01
Highlights: • Effectiveness of six numerical methods is evaluated to determine wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations in the southern part of Alberta, Canada are investigated. • The most appropriate parameter estimation method was not identical across the examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the most appropriate method for computing the wind power density at four stations distributed across the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML perform very favorably, while the GP method shows weak ability for all stations. However, it is found that the most effective method is not the same across stations, owing to differences in wind characteristics.
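One of the six methods, the empirical method of Justus (EMJ), can be sketched together with the resulting wind power density. The wind-speed sample is synthetic, and the EMJ moment formula used is the standard textbook form, not necessarily the exact variant used in the study.

```python
import math
import numpy as np

def weibull_emj(v):
    """Empirical method of Justus: Weibull shape k and scale c from moments."""
    mean, std = np.mean(v), np.std(v, ddof=1)
    k = (std / mean) ** -1.086             # Justus' empirical shape estimate
    c = mean / math.gamma(1.0 + 1.0 / k)   # scale from the Weibull mean formula
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density in W/m^2 implied by the Weibull fit."""
    return 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)

# Synthetic daily-mean wind speeds: true shape 2.0, true scale 6.0 m/s
rng = np.random.default_rng(2)
v = 6.0 * rng.weibull(2.0, 1000)
k, c = weibull_emj(v)
print(round(k, 2), round(c, 2), round(wind_power_density(k, c), 1))
```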
Gunawan, Hendra; Diament, Michel; Mikhailov, Valentin
2008-01-01
http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation between the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation between the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under an assumption free-air anomaly consisting ...
A citizen science based survey method for estimating the density of urban carnivores
Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.
2018-01-01
Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS to determine the relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 approximately 1 km2 suburban areas in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating the unreliability of the national data for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However, this transferability is contingent on
Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric
2017-01-01
This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
A group contribution method to estimate the densities of ionic liquids
International Nuclear Information System (INIS)
Qiao Yan; Ma Youguang; Huo Yan; Ma Peisheng; Xia Shuqian
2010-01-01
Densities of ionic liquids at different temperatures and pressures were collected from 84 references. The collection contains 7381 data points derived from 123 pure ionic liquids and 13 kinds of binary ionic liquid mixtures. Based on the collected database, a group contribution method using 51 groups was developed to predict the densities of ionic liquids. In the group partition, the effect of interactions among several substituents on the same center was considered; the same structure in different substituents may therefore have different group values. For the estimation of pure ionic liquid densities, the results show that the average relative error is 0.88% and the standard deviation (S) is 0.0181. Using this set of group values, the densities of three further pure ionic liquids were predicted; the average relative error is 0.27% and the S is 0.0048. Ionic liquid mixtures are treated as ideal mixtures, so the group contribution method was used to estimate their densities as well; the average relative error is 1.22% with an S of 0.0607. The method can also be used to estimate the densities of MClx-type ionic liquids, which are produced by mixing an ionic liquid with a Cl- anion and a metal chloride.
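The group contribution idea, molar mass and molar volume assembled from per-group increments with density as their ratio, can be sketched as follows. The two group values below are invented for the sketch and are not the paper's fitted 51-group table.

```python
# Illustrative group tables (cm^3/mol and g/mol); these numbers are
# assumed for the sketch, NOT the paper's fitted values.
GROUP_VOLUME = {"[bmim]+": 139.0, "[BF4]-": 49.0}
MOLAR_MASS = {"[bmim]+": 139.2, "[BF4]-": 86.8}

def gc_density(groups):
    """Group-contribution density estimate (g/cm^3): total molar mass
    divided by the summed group molar volumes."""
    m = sum(MOLAR_MASS[g] * n for g, n in groups.items())
    v = sum(GROUP_VOLUME[g] * n for g, n in groups.items())
    return m / v

print(round(gc_density({"[bmim]+": 1, "[BF4]-": 1}), 2))  # ~1.20 g/cm^3
```

Temperature and pressure dependence, which the paper's correlation includes, would enter by making the group volumes functions of T and p.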
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Directory of Open Access Journals (Sweden)
Samir Saoudi
2008-07-01
Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
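A minimal one-step version of the plug-in idea (estimate J(f) with a pilot bandwidth, then insert it into the AMISE-optimal bandwidth formula) might look like this for a Gaussian kernel. The pilot rule of thumb is an assumed choice, not the paper's faster analytical approximation.

```python
import numpy as np

def plug_in_bandwidth(x):
    """One-step plug-in bandwidth for a Gaussian-kernel KDE: estimate
    J(f) = integral of f''(t)^2 dt with a pilot bandwidth g, then plug it
    into the AMISE-optimal formula h = (R(K) / (J(f) * n))**(1/5)."""
    n = x.size
    g = 0.9 * x.std(ddof=1) * n ** (-1.0 / 7.0)  # pilot (rule-of-thumb choice)
    d = (x[:, None] - x[None, :]) / g
    # 4th derivative of the N(0, 2) density at the scaled pairwise gaps
    phi2_4 = np.exp(-d * d / 4.0) / (2.0 * np.sqrt(np.pi)) \
        * (d**4 - 12.0 * d**2 + 12.0) / 16.0
    J = phi2_4.sum() / (n * n * g**5)
    RK = 1.0 / (2.0 * np.sqrt(np.pi))  # roughness of the Gaussian kernel
    return (RK / (J * n)) ** 0.2

def kde(x, grid, h):
    """Gaussian-kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-u * u / 2.0).sum(axis=1) / (x.size * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 500)
h = plug_in_bandwidth(x)
grid = np.linspace(-5.0, 5.0, 1001)
p = kde(x, grid, h)
print(round(h, 3))  # for N(0,1) at n = 500 the AMISE-optimal h is about 0.31
```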
An automatic iris occlusion estimation method based on high-dimensional density estimation.
Li, Yung-Hui; Savvides, Marios
2013-04-01
Iris masks play an important role in iris recognition. They indicate which parts of the iris texture map are useful and which parts are occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglass frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of an iris recognition system decreases dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images; however, the accuracy of the masks generated this way is questionable. In this work, we propose using Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions of iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation.
A new estimation method for nuclide number densities in equilibrium cycle
International Nuclear Information System (INIS)
Seino, Takeshi; Sekimoto, Hiroshi; Ando, Yoshihira.
1997-01-01
A new method is proposed for estimating nuclide number densities of an LWR equilibrium cycle by multi-recycling calculation. Conventionally, a large computation time must be spent to attain the ultimate equilibrium state; hence, a cycle with nearly constant fuel composition has been taken as the equilibrium state, achievable by a few recycling calculations on a simulated cycle operation under a specific fuel core design. The present method uses steady-state fuel nuclide number densities, obtained from a continuously fuelled core model, as the initial guess for the multi-recycling burnup calculation. These number densities are modified to serve as initial number densities for the nuclides of a batch-supplied fuel. It was found that the calculated number densities attain a more precise equilibrium state than a conventional multi-recycling calculation with a small number of recyclings. In particular, the present method could give the ultimate equilibrium number densities of nuclides with mass numbers higher than those of 245Cm and 244Pu, which could not reach the ultimate equilibrium state within a reasonable number of iterations using the conventional method. (author)
ESTIMATE OF STAND DENSITY INDEX FOR EUCALYPTUS UROPHYLLA USING DIFFERENT FIT METHODS
Directory of Open Access Journals (Sweden)
Ernani Lopes Possato
Full Text Available ABSTRACT The Reineke stand density index (SDI) was created in 1933 and remains a target of research due to its importance in supporting decision making on the management of population density. Part of such work focuses on the manner in which plots are selected and on methods for fitting the Reineke model parameters in order to improve the definition of the SDI value for the genetic material evaluated. The present study aimed to estimate the SDI value for Eucalyptus urophylla using the Reineke model fitted by linear regression (LR) and by stochastic frontier analysis (SFA). A database containing pairs of number of stems per hectare (N) and mean quadratic diameter (Dq) was selected at three intensities, comprising the 8, 30 and 43 plots of greatest density, and models were fitted by LR and SFA at each selected intensity. The intensity of data selection only slightly altered the estimates of the parameters and SDI when comparing the fits within each method. The fitting method, on the other hand, influenced the mean estimated values of the slope and SDI, which were -1.863 and 740 for LR and -1.582 and 810 for SFA.
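The LR fit of the Reineke model can be sketched directly: regress ln N on ln Dq and read the SDI off at the reference diameter of 25 cm. The plot data below are invented for illustration.

```python
import numpy as np

# Invented (stems/ha, quadratic mean diameter in cm) pairs from dense plots
N = np.array([3000.0, 2400.0, 1800.0, 1400.0, 1100.0, 900.0])
Dq = np.array([10.0, 12.0, 15.0, 18.0, 21.0, 24.0])

# Reineke model in log form: ln N = a + b * ln Dq
b, a = np.polyfit(np.log(Dq), np.log(N), 1)

# SDI is the predicted stem count at the reference diameter Dq = 25 cm
sdi = float(np.exp(a + b * np.log(25.0)))
print(round(b, 2), round(sdi))
```

The SFA fit in the paper replaces ordinary least squares with a frontier model, which pushes the fitted line toward the upper boundary of the N-Dq cloud and hence yields a larger SDI.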
Reliability analysis based on a novel density estimation method for structures with correlations
Directory of Open Access Journals (Sweden)
Baoyu LI
2017-06-01
Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, as the failure probability can then be obtained by integration over the failure domain. However, efficiently estimating the PDF remains an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation, but its main shortcoming is that it limits the reliability analysis to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method by applying the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required by the proposed method in reliability analysis, which is determined by UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
International Nuclear Information System (INIS)
Terzic, B.; Bassi, G.
2011-01-01
In this paper we discuss representations of charged-particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. (G. Bassi, J.A. Ellison, K. Heinemann and R. Warnock, Phys. Rev. ST Accel. Beams 12 080704 (2009); G. Bassi and B. Terzic, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043), designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code (G. Bassi, J.A. Ellison, K. Heinemann and R. Warnock, Phys. Rev. ST Accel. Beams 12 080704 (2009)), and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
2015-09-30
titled "Ocean Basin Impact of Ambient Noise on Marine Mammal Detectability, Distribution, and Acoustic Communication". Patterns and trends of ocean... mammals in response to potentially negative interactions with human activity requires knowledge of how many animals are present in an area during a... specific time period. Many marine mammal species are relatively hard to sight, making standard visual methods of density estimation difficult and
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
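For a concrete instance of MoLC, the gamma family admits a closed system of log-cumulant equations: E[log X] = psi(k) + log(theta) and Var[log X] = psi'(k). The sketch below solves them by bisection, with finite differences of log-gamma standing in for a special-function library (an implementation convenience, not part of the method).

```python
import math
import numpy as np

def digamma(x, h=1e-5):
    # central difference of log-gamma; adequate for this illustration
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def trigamma(x, h=1e-4):
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

def invert_trigamma(y):
    # psi'(k) is strictly decreasing in k, so bisection is safe
    lo, hi = 1e-3, 1e3
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if trigamma(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def molc_gamma(x):
    """MoLC fit of gamma(shape k, scale theta) from the first two
    sample log-cumulants."""
    logx = np.log(x)
    k = invert_trigamma(logx.var())           # Var[log X] = psi'(k)
    theta = math.exp(logx.mean() - digamma(k))  # E[log X] = psi(k) + log(theta)
    return k, theta

rng = np.random.default_rng(4)
x = rng.gamma(shape=3.0, scale=2.0, size=5000)
k, theta = molc_gamma(x)
print(round(k, 2), round(theta, 2))  # close to the true (3.0, 2.0)
```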
A simple method for estimating the length density of convoluted tubular systems.
Ferraz de Carvalho, Cláudio A.; de Campos Boldrini, Silvia; Nishimaru, Flávio; Liberti, Edson A.
2008-10-01
We present a new method for estimating the length density (Lv) of convoluted tubular structures exhibiting an isotropic distribution. Although the traditional equation Lv=2Q/A is used, the parameter Q is obtained by considering the collective perimeters of tubular sections. This measurement is converted to a standard model of the structure, assuming that all cross-sections are approximately circular and have an average perimeter similar to that of actual circular cross-sections observed in the same material. The accuracy of this method was tested in eight experiments using hollow macaroni bent into helical shapes. After measuring the length of the macaroni segments, they were boiled and randomly packed into cylindrical volumes along with an aqueous suspension of gelatin and India ink. The solidified blocks were cut into slices 1.0 cm thick and 33.2 cm2 in area (A). The total perimeter of the macaroni cross-sections so revealed was stereologically estimated using a test system of straight parallel lines. Given Lv and the reference volume, the total length of macaroni in each section could be estimated. Additional corrections were made for the changes induced by boiling, and the off-axis position of the thread used to measure length. No statistical difference was observed between the corrected estimated values and the actual lengths. This technique is useful for estimating the length of capillaries, renal tubules, and seminiferous tubules.
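The estimator's arithmetic can be shown with invented numbers: convert the collective perimeter into an equivalent profile count Q, then apply Lv = 2Q/A.

```python
# Invented numbers: a slice of area A = 33.2 cm^2 shows tubule profiles
# with a collective perimeter of 24.0 cm; circular cross-sections of the
# same tubule average 0.8 cm in perimeter.
A = 33.2                       # cm^2
total_perimeter = 24.0         # cm
mean_circular_perimeter = 0.8  # cm

Q = total_perimeter / mean_circular_perimeter  # equivalent profile count
Lv = 2.0 * Q / A                               # length density, cm / cm^3
print(Q, round(Lv, 3))  # 30.0 profiles, Lv ~ 1.807
```

Multiplying Lv by the reference volume then recovers the total tubule length, as in the macaroni validation experiments.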
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
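The sample-point variant described above (bandwidth varying with the observation rather than the estimation point) can be sketched as follows. This is an illustrative Abramson-style square-root-law implementation, not the authors' code; the pilot bandwidth and sensitivity exponent are arbitrary choices.

```python
import numpy as np

def sample_point_kde(x_eval, data, pilot_bw=0.5, alpha=0.5):
    """Sample-point variable-kernel estimator: each observation carries its
    own bandwidth h_i, shrunk where a fixed-bandwidth pilot estimate is high
    and inflated where it is low (Abramson-style square-root law)."""
    data = np.asarray(data, dtype=float)
    d2 = (data[:, None] - data[None, :]) ** 2
    # pilot fixed-bandwidth Gaussian KDE evaluated at the data points
    pilot = np.exp(-0.5 * d2 / pilot_bw**2).mean(axis=1) / (pilot_bw * np.sqrt(2 * np.pi))
    geo_mean = np.exp(np.mean(np.log(pilot)))
    h = pilot_bw * (pilot / geo_mean) ** (-alpha)
    u = (np.asarray(x_eval, dtype=float)[:, None] - data[None, :]) / h[None, :]
    return (np.exp(-0.5 * u**2) / (h[None, :] * np.sqrt(2 * np.pi))).mean(axis=1)

rng = np.random.default_rng(0)
sample = rng.normal(size=400)
grid = np.linspace(-8.0, 8.0, 801)
dens = sample_point_kde(grid, sample)
```

Because each component is a proper Gaussian density in x, the estimate integrates to one regardless of how the per-observation bandwidths are chosen.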
Estimation of Engine Intake Air Mass Flow using a generic Speed-Density method
Directory of Open Access Journals (Sweden)
Vojtíšek Michal
2014-10-01
Full Text Available Measurement of real driving emissions (RDE) from internal combustion engines under real-world operation using portable, onboard monitoring systems (PEMS) is becoming an increasingly important tool aiding the assessment of the effects of new fuels and technologies on the environment and human health. The knowledge of exhaust flow is one of the prerequisites for successful RDE measurement with PEMS. One of the simplest approaches for estimating the exhaust flow from virtually any engine is its computation from the intake air flow, which is calculated from measured engine rpm and intake manifold charge pressure and temperature using a generic speed-density algorithm, applicable to most contemporary four-cycle engines. In this work, a generic speed-density algorithm was compared against several reference methods on representative European production engines - a gasoline port-injected automobile engine, two turbocharged diesel automobile engines, and a heavy-duty turbocharged diesel engine. The overall results suggest that the uncertainty of the generic speed-density method is on the order of 10% throughout most of the engine operating range, increasing to tens of percent where high-volume exhaust gas recirculation is used. For non-EGR engines, such uncertainty is acceptable for many simpler and screening measurements, and may be, where desired, reduced by engine-specific calibration.
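A generic speed-density computation of the kind described can be sketched as below: one displacement of charge is inducted every two revolutions of a four-stroke engine, with charge density from the ideal-gas law at manifold pressure and temperature. The volumetric efficiency value and the operating point are illustrative assumptions, not figures from the paper.

```python
R_SPECIFIC_AIR = 287.05  # J/(kg*K), specific gas constant of dry air

def intake_air_mass_flow(rpm, map_kpa, iat_k, displacement_l, volumetric_eff=0.9):
    """Generic speed-density estimate of intake air mass flow (kg/s) for a
    four-stroke engine: displaced volume per unit time (one induction per
    two revolutions), scaled by volumetric efficiency and by the charge
    density implied by manifold absolute pressure and temperature."""
    density = map_kpa * 1000.0 / (R_SPECIFIC_AIR * iat_k)       # kg/m^3
    vol_flow = (displacement_l / 1000.0) * (rpm / 60.0) / 2.0   # m^3/s
    return volumetric_eff * vol_flow * density

# Hypothetical operating point: 2.0 L engine, 3000 rpm, 100 kPa MAP, 300 K IAT
flow = intake_air_mass_flow(3000, 100.0, 300.0, 2.0)
```

Exhaust mass flow then follows from intake air flow plus fuel flow, which is the step the PEMS post-processing performs.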
Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study
Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.
2010-01-01
This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.
Novel method for the simultaneous estimation of density and surface tension of liquids
International Nuclear Information System (INIS)
Thirunavukkarasu, G.; Srinivasan, G.J.
2003-01-01
The conventional Hare's apparatus generally used for the determination of the density of liquids has been modified by replacing its vertical arms (glass tubes) with capillary tubes of 30 cm length and 0.072 cm diameter. When the columns of liquids are drawn through the capillary tubes with reduced pressure at the top of the liquid columns and kept at equilibrium with the atmospheric pressure acting on the liquid surface outside the capillary tubes, the downward pressure due to gravity of the liquid columns has to be coupled with the pressure arising from the surface tension of the liquids. A fresh expression for the density and surface tension of liquids is arrived at by equating the pressure-balancing system for the two individual liquid columns of the modified Hare's apparatus. The experimental results showed that the proposed method is precise and accurate in the simultaneous estimation of density and surface tension of liquids, with an error of less than 5%.
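A plausible form of the pressure balance follows; this is a sketch under the assumptions of circular capillaries of radius r and complete wetting, and the exact expression derived in the paper may differ.

```latex
% For each capillary column i held at the same reduced pressure p_t,
% equilibrium at the outside liquid surface requires
%   p_{atm} = p_t + \rho_i g h_i - \frac{2\sigma_i}{r}.
% Equating the two columns (reference liquid 1, test liquid 2):
\rho_1 g h_1 - \frac{2\sigma_1}{r} = \rho_2 g h_2 - \frac{2\sigma_2}{r}
% Two such readings taken at different reduced pressures give two equations
% in the two unknowns (\rho_2, \sigma_2), which is what makes simultaneous
% estimation of density and surface tension possible.
```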
Comparison of density estimators. [Estimation of probability density functions
Energy Technology Data Exchange (ETDEWEB)
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the in-depth study of asymptotic properties. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed and simulation results are reported. The object is to compare the performance of the various methods in small samples and their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile. (RWR)
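A small-sample comparison of the kind surveyed above can be set up in a few lines: compute the integrated squared error (ISE) of two competing estimators against a known density. This is a generic sketch (histogram vs. fixed-bandwidth Gaussian kernel), not the estimators or experimental design of the report; bin count, bandwidth, and sample size are arbitrary.

```python
import numpy as np

def hist_density(x, data, bins):
    """Histogram density estimate evaluated at points x (zero outside range)."""
    counts, edges = np.histogram(data, bins=bins, density=True)
    x = np.asarray(x)
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(counts) - 1)
    return np.where((x < edges[0]) | (x > edges[-1]), 0.0, counts[idx])

def kernel_density(x, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate at points x."""
    u = (np.asarray(x)[:, None] - np.asarray(data)[None, :]) / h
    return (np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi))).mean(axis=1)

rng = np.random.default_rng(1)
data = rng.normal(size=30)          # deliberately small sample
xs = np.linspace(-4.0, 4.0, 801)
dx = xs[1] - xs[0]
true_pdf = np.exp(-0.5 * xs**2) / np.sqrt(2.0 * np.pi)
ise_hist = float(np.sum((hist_density(xs, data, 8) - true_pdf) ** 2) * dx)
ise_kern = float(np.sum((kernel_density(xs, data, 0.5) - true_pdf) ** 2) * dx)
```

Repeating this over many replications and sample sizes gives exactly the kind of small-sample performance curves the survey compares.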
Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan
2016-01-01
The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to breast imaging reporting and data system (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists and BI-RADS density categories were assigned. Using a fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories. With increased BI-RADS density category, an increase in mean volumetric breast density was also seen (P BI-RADS categories and volumetric density grading by fully automated software (ρ = 0.728, P BI-RADS density category by two observers showed fair agreement (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using the fully automated volumetric method and density grading using BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
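The volumetric measure used above is simply the fibroglandular fraction of the breast volume, mapped to a 4-level grade. The sketch below illustrates that computation; the grade cutoffs are placeholder values for illustration, not the thresholds used by the study's software.

```python
def volumetric_breast_density(fibroglandular_vol_cm3, breast_vol_cm3):
    """Percentage volumetric breast density: fibroglandular tissue volume
    as a fraction of total breast volume."""
    return 100.0 * fibroglandular_vol_cm3 / breast_vol_cm3

def density_grade(vbd_percent, cutoffs=(4.5, 7.5, 15.5)):
    """Map a percentage density to a 1-4 volumetric grade. The cutoff values
    here are illustrative placeholders, not the study's thresholds."""
    return 1 + sum(vbd_percent > c for c in cutoffs)

# Hypothetical breast: 55 cm^3 fibroglandular tissue in a 500 cm^3 breast
vbd = volumetric_breast_density(55.0, 500.0)
grade = density_grade(vbd)
```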
Hong Shen
2011-01-01
The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in containing intersection (or intersection density relied on intersection reference), curve profile intersection density in surface (or curve intercept intersection density relied on intersection of containing curve), and curve profile area density in surface (AS) were defined. AS expressed the amount of curve profile area of Y phase in the unit containing surface area, S...
Application of Density Estimation Methods to Datasets Collected From a Glider
2015-09-30
2832. PUBLICATIONS Küsel, E.T., Siderius, M., and Mellinger, D.K., "Single-sensor, cue-counting population density estimation: Average ...contained echolocation clicks of sperm whales (Physeter macrocephalus). This species is also known to occur in the Gulf of Mexico where data is ...Because such an approach considers the entire click bandwidth, the average probability of detection of thousands of click realizations, and hence the
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter-estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter-estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter-estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
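One of the compared spectral estimators (Welch-style averaged periodograms) and the power-law fit α(f) = α0·f^β can be sketched with numpy alone, as below. The RF signal, sampling rate, and attenuation samples are synthetic stand-ins; this is not the reference-phantom procedure itself.

```python
import numpy as np

def welch_psd(x, fs, nperseg=1024):
    """Minimal Welch estimate: average the periodograms of Hann-windowed,
    half-overlapping segments."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] * win for i in range(0, len(x) - nperseg + 1, step)]
    scale = fs * np.sum(win**2)
    psd = np.mean([np.abs(np.fft.rfft(s))**2 for s in segs], axis=0) / scale
    return np.fft.rfftfreq(nperseg, 1.0 / fs), psd

fs = 40e6                       # hypothetical 40 MHz sampling rate
t = np.arange(8192) / fs
rng = np.random.default_rng(2)
# synthetic narrowband echo at 5 MHz buried in broadband noise
rf = np.sin(2 * np.pi * 5e6 * t) + 0.5 * rng.standard_normal(t.size)
f, psd = welch_psd(rf, fs)

# fit alpha(f) = alpha0 * f**beta to synthetic attenuation samples via log-log
freqs_mhz = np.array([2.0, 4.0, 6.0, 8.0])
alphas = 0.5 * freqs_mhz**1.1   # made-up "measured" attenuation values
beta_hat, log_a0 = np.polyfit(np.log(freqs_mhz), np.log(alphas), 1)
alpha0_hat = np.exp(log_a0)
```

In practice the attenuation samples come from the ratio of sample to reference-phantom spectra at each depth; only the fitting step is shown here.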
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
Histogram Estimators of Bivariate Densities
National Research Council Canada - National Science Library
Husemann, Joyce A
1986-01-01
One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
PDE-Foam - a probability-density estimation method using self-adapting phase-space binning
Dannheim, Dominik; Voigt, Alexander; Grahn, Karl-Johan; Speckmayer, Peter
2009-01-01
Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. To efficiently use large event samples to estimate the probability density, a binary search tree (range searching) is used in the PDE-RS implementation. It is a generalisation of standard likelihood methods and a powerful classification tool for problems with highly non-linearly correlated observables. In this paper, we present an innovative improvement of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space into a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multidimensional phase space, minimizing the variance of the signal and background densities inside the cells. The binned density information is stored in binary trees, allowing for a very ...
Directory of Open Access Journals (Sweden)
Md Nabiul Islam Khan
Full Text Available In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher-order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher-order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R²) rather than the published 12N/(π ∑ R²); of PCQM2, 4(8N − 1)/(π ∑ R²) rather than 28N/(π ∑ R²); and of PCQM3, 4(12N − 1)/(π ∑ R²) rather than 44N/(π ∑ R²). If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
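The corrected estimators quoted above share one form, 4(4kN − 1)/(π ∑ R²) for order k, and can be expressed directly. The helper below is an illustrative rendering of those formulas, not the authors' code.

```python
import math

def pcqm_density(distances, order=1):
    """Corrected PCQM density estimator. `distances` holds one distance per
    quadrant (four per sample point) to the `order`-th nearest plant;
    density = 4*(4*order*N - 1) / (pi * sum(R^2)) with N sample points."""
    n_points = len(distances) // 4
    ssq = sum(r * r for r in distances)
    return 4.0 * (4 * order * n_points - 1) / (math.pi * ssq)

# 50 sample points with every quadrant distance equal to 1.0 (artificial check)
d1 = pcqm_density([1.0] * 200)           # PCQM1: 4*(4*50 - 1)/(pi*200)
d2 = pcqm_density([1.0] * 200, order=2)  # PCQM2: 4*(8*50 - 1)/(pi*200)
```

Note that the published (uncorrected) PCQM1 form 12N/(π ∑ R²) would give a visibly different value on the same inputs, which is the discrepancy the study resolves.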
Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.
2016-01-01
During the 2004–2005 to 2015–2016 hunting seasons, the New Mexico Department of Game and Fish (NMDGF) estimated black bear abundance (Ursus americanus) across the state by coupling density estimates with the distribution of primary habitat generated by Costello et al. (2001). These estimates have been used to set harvest limits. For example, a density of 17 bears/100 km² for the Sangre de Cristo and Sacramento Mountains and 13.2 bears/100 km² for the Sandia Mountains were used to set harvest levels. The advancement and widespread acceptance of non-invasive sampling and mark-recapture methods prompted the NMDGF to collaborate with the New Mexico Cooperative Fish and Wildlife Research Unit and New Mexico State University to update their density estimates for black bear populations in select mountain ranges across the state. We established 5 study areas in 3 mountain ranges: the northern (NSC; sampled in 2012) and southern Sangre de Cristo Mountains (SSC; sampled in 2013), the Sandia Mountains (Sandias; sampled in 2014), and the northern (NSacs) and southern Sacramento Mountains (SSacs; both sampled in 2014). We collected hair samples from black bears using two concurrent non-invasive sampling methods, hair traps and bear rubs. We used a gender marker and a suite of microsatellite loci to determine the individual identification of hair samples that were suitable for genetic analysis. We used these data to generate mark-recapture encounter histories for each bear and estimated density in a spatially explicit capture-recapture framework (SECR). We constructed a suite of SECR candidate models using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We used Akaike's Information Criterion corrected for small sample size (AICc) to rank and select the most supported model from which we estimated density. We set 554 hair traps, 117 bear rubs and collected 4,083 hair
Directory of Open Access Journals (Sweden)
Miroslav Kališnik
2011-05-01
Full Text Available In the introduction, the evolution of methods for numerical density estimation of particles is presented briefly. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods, and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations, a model of randomly distributed equal spheres with maximal contrast against the surroundings has been used. According to our computer simulations, all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as the histotechnical and counting procedures necessary for performing individual counting methods. However, it is evident that not all practical problems can efficiently be solved with models.
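As a hedged sketch of the disector principle mentioned above (not code from the paper): a physical disector counts particles that appear in a reference section but not in the paired look-up section, and divides by the sampled volume. The function and figures below are illustrative assumptions.

```python
def disector_numerical_density(q_minus_total, n_disectors, frame_area_um2, height_um):
    """Physical-disector estimate of numerical density:
    Nv = sum(Q-) / (number of disector pairs * counting-frame area * disector
    height), where Q- counts particles present in the reference section but
    absent from the look-up section."""
    return q_minus_total / (n_disectors * frame_area_um2 * height_um)

# 50 particle "tops" over 10 disectors, 100 um^2 frame, 5 um height (made up)
nv = disector_numerical_density(50, 10, 100.0, 5.0)  # particles per um^3
```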
International Nuclear Information System (INIS)
Ansarifar, G.R.; Nasrabadi, M.N.; Hassanvand, R.
2016-01-01
Highlights: • We present a sliding mode control (SMC) system based on a sliding mode observer (SMO) for control of fast reactor power. • An SMO has been developed to estimate the density of delayed neutron precursors. • The stability analysis has been given by means of the Lyapunov approach. • The control system is guaranteed to be stable within a large range. • A comparison between the SMC and a conventional PID controller has been made. - Abstract: In this paper, a nonlinear controller using the sliding mode method, which is a robust nonlinear control technique, is designed to control a fast nuclear reactor. The reactor core is simulated based on the point kinetics equations and one delayed neutron group. Considering the limitations of delayed neutron precursor density measurement, a sliding mode observer is designed to estimate it, and finally a sliding mode control based on the sliding mode observer is presented. The stability analysis is given by means of the Lyapunov approach, thus the control system is guaranteed to be stable within a large range. Sliding Mode Control (SMC) is a robust nonlinear method with several advantages, such as robustness against matched external disturbances and parameter uncertainties. The employed method is easy to implement in practical applications and, moreover, the sliding mode control exhibits the desired dynamic properties during the entire output-tracking process, independent of perturbations. Simulation results are presented to demonstrate the effectiveness of the proposed controller in terms of performance, robustness and stability.
Estimation of Engine Intake Air Mass Flow using a generic Speed-Density method
Vojtíšek Michal; Kotek Martin
2014-01-01
Measurement of real driving emissions (RDE) from internal combustion engines under real-world operation using portable, onboard monitoring systems (PEMS) is becoming an increasingly important tool aiding the assessment of the effects of new fuels and technologies on environment and human health. The knowledge of exhaust flow is one of the prerequisites for successful RDE measurement with PEMS. One of the simplest approaches for estimating the exhaust flow from virtually any engine is its comp...
Optimization of Barron density estimates
Czech Academy of Sciences Publication Activity Database
Vajda, Igor; van der Meulen, E. C.
2001-01-01
Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others: Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords: Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.
2013-01-01
Estimation of the extreme response and failure probability of structures subjected to ultimate design loads is essential for the structural design of wind turbines according to the new standard IEC 61400-1. This task is the focus of the present paper, by virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-megawatt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels, is investigated through the scheme of extreme value distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...
Directory of Open Access Journals (Sweden)
Odd Halvorsen
1983-05-01
Full Text Available A method for estimating the density of Elaphostrongylus rangiferi larvae in reindeer faeces that have been deep frozen is described. The method involves the use of an inverted microscope with plankton counting chambers. Statistical data on the efficiency and sensitivity of the method are given. With fresh faeces, the results obtained with the method were not significantly different from those obtained with the Baermann technique. With faeces that had been stored in deep freeze, the method detected on average 30 per cent more larvae than the Baermann technique.
Directory of Open Access Journals (Sweden)
Kwanmoon Jeong
2016-04-01
Full Text Available The main goal of osteoporosis treatment is the prevention of osteoporosis-induced bone fracture. Dual-energy X-ray absorptiometry (DXA) and quantitative computed tomographic imaging (QCT) are widely used for assessment of bone mineral density (BMD). However, they have limitations in patients with special conditions. This study evaluated a method for diagnosis of osteoporosis using peripheral cone beam computed tomography (CBCT) to estimate BMD. We investigated the correlation between the ratio of cortical to total bone area of the forearm and femoral neck BMD. Based on the correlation, we established a linear transformation between the ratio and femoral neck BMD. We obtained forearm images using CBCT and femoral neck BMDs using dual-energy X-ray absorptiometry (DXA) for 23 subjects. We first calculated the ratio of the cortical to the total bone area in the forearm from the CBCT images, and investigated the relationship with the femoral neck BMDs obtained from DXA. Based on this relationship, we further investigated the optimal forearm region to provide the highest correlation coefficient. We used the optimized forearm region to establish a linear transformation to estimate femoral neck BMD from the calculated ratio. We observed a correlation factor of r = 0.857 (root mean square error = 0.056435 g/cm²; mean absolute percentage error = 4.5105%) between femoral neck BMD and the ratio of the cortical and total bone areas. The strongest correlation was observed for the average ratios of the mid-shaft regions of the ulna and radius. Our results suggest that femoral neck BMD can be estimated from forearm CBCT images and may be useful for screening for osteoporosis, with patients in a convenient sitting position. We believe that peripheral CBCT image-based BMD estimation may have significant preventative value for early osteoporosis treatment and management.
Federrath, Christoph; Salim, Diane M.; Medling, Anne M.; Davies, Rebecca L.; Yuan, Tiantian; Bian, Fuyan; Groves, Brent A.; Ho, I.-Ting; Sharp, Robert; Kewley, Lisa J.; Sweet, Sarah M.; Richards, Samuel N.; Bryant, Julia J.; Brough, Sarah; Croom, Scott; Scott, Nicholas; Lawrence, Jon; Konstantopoulos, Iraklis; Goodwin, Michael
2017-07-01
Stars form in cold molecular clouds. However, molecular gas is difficult to observe because the most abundant molecule (H2) lacks a permanent dipole moment. Rotational transitions of CO are often used as a tracer of H2, but CO is much less abundant and the conversion from CO intensity to H2 mass is often highly uncertain. Here we present a new method for estimating the column density of cold molecular gas (Σgas) using optical spectroscopy. We utilize the spatially resolved Hα maps of flux and velocity dispersion from the Sydney-AAO Multi-object Integral field spectrograph (SAMI) Galaxy Survey. We derive maps of Σgas by inverting the multi-freefall star formation relation, which connects the star formation rate surface density (ΣSFR) with Σgas and the turbulent Mach number (M). Based on the measured range of ΣSFR = 0.005-1.5 M⊙ yr⁻¹ kpc⁻² and M = 18-130, we predict Σgas = 7-200 M⊙ pc⁻² in the star-forming regions of our sample of 260 SAMI galaxies. These values are close to previously measured Σgas obtained directly with unresolved CO observations of similar galaxies at low redshift. We classify each galaxy in our sample as 'star-forming' (219) or 'composite/AGN/shock' (41), and find that in 'composite/AGN/shock' galaxies the average ΣSFR, M and Σgas are enhanced by factors of 2.0, 1.6 and 1.3, respectively, compared to star-forming galaxies. We compare our predictions of Σgas with those obtained by inverting the Kennicutt-Schmidt relation and find that our new method is a factor of 2 more accurate in predicting Σgas, with an average deviation of 32 per cent from the actual Σgas.
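The paper's own method inverts the multi-freefall star formation relation, which also requires the Mach number; the simpler Kennicutt-Schmidt inversion it is compared against can be sketched as below. The coefficient and slope are the commonly quoted Kennicutt (1998) calibration values and should be treated as assumptions, not as the paper's fitted parameters.

```python
def sigma_gas_from_ks(sigma_sfr, a=2.5e-4, n=1.4):
    """Invert the Kennicutt-Schmidt relation Sigma_SFR = a * Sigma_gas**n,
    with Sigma_SFR in Msun/yr/kpc^2 and Sigma_gas in Msun/pc^2, to predict
    the gas surface density from the observed SFR surface density."""
    return (sigma_sfr / a) ** (1.0 / n)

# Span of the paper's measured Sigma_SFR range, fed through the KS inversion
low = sigma_gas_from_ks(0.005)
high = sigma_gas_from_ks(1.5)
```

The multi-freefall inversion differs in that a fixed ΣSFR maps to different Σgas depending on the local Mach number, which is why the Hα velocity-dispersion maps are needed.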
Directory of Open Access Journals (Sweden)
Y. Erfanifard
2014-06-01
Full Text Available Distance methods and their estimators of density may yield biased measurements unless the studied stand of trees has a random spatial pattern. This study aimed at assessing the effect of the spatial arrangement of wild pistachio trees on the results of density estimation using the nearest individual sampling method in the Zagros woodlands, Iran, and applying a correction factor based on the spatial pattern of trees. A 45 ha clumped stand of wild pistachio trees was selected in the Zagros woodlands, and two random and dispersed stands with similar density and area were simulated. Distances from the nearest individual and neighbour at 40 sample points in a 100 × 100 m grid were measured in the three stands. The results showed that the nearest individual method with the Batcheler estimator could not calculate density correctly in all stands. However, applying the correction factor based on the spatial pattern of the trees, density was measured with no significant difference from the real density of the stands. This study showed that considering the spatial arrangement of trees can improve the results of the nearest individual method with the Batcheler estimator in density measurement.
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
Nonparametric Collective Spectral Density Estimation and Clustering
Maadooliat, Mehdi; Sun, Ying; Chen, Tianbo
2017-04-12
In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at
Anisotropic Density Estimation in Global Illumination
DEFF Research Database (Denmark)
Schjøth, Lars
2009-01-01
Density estimation employed in multi-pass global illumination algorithms gives cause to a trade-off problem between bias and noise. The problem is seen most evident as blurring of strong illumination features. This thesis addresses the problem, presenting four methods that reduce both noise...
2018-01-30
home range maintenance or attraction to or avoidance of landscape features, including roads (Morales et al. 2004, McClintock et al. 2012). For example...radiotelemetry and extensive road survey data are used to generate the first density estimates available for the species. The results show that southern...secretive snakes that combines behavioral observations of snake road crossing speed, systematic road survey data, and simulations of spatial
Schillinger, Dominik; Stefanov, Dimitar; Stavrev, Atanas
2013-07-01
The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.
Estimating snowpack density from Albedo measurement
James L. Smith; Howard G. Halverson
1979-01-01
Snow is a major source of water in the Western United States. Data on snow depth and average snowpack density are used in mathematical models to predict water supply. In California, about 75 percent of the snow survey sites above 2750-meter elevation now used to collect data are in statutory wilderness areas. There is a need for a method of estimating the water content of a...
Global Population Density Grid Time Series Estimates
National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...
Density estimation from local structure
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2009-11-01
Full Text Available Gaussian Mixture Model (GMM) density function of the data and the log-likelihood scores are compared to the scores of a GMM trained with the expectation maximization (EM) algorithm on 5 real-world classification datasets (from the UCI collection). They show...
Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing
2012-01-01
Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.
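Tanimoto's definition of similarity used above to score segmentation accuracy is the overlap-to-union ratio of two voxel sets; a minimal sketch, with invented index sets for illustration:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity: |A & B| / |A | B| for voxel index sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations are identical by convention
    return len(a & b) / len(a | b)

# Hypothetical segmented vs. ground-truth voxel sets:
s = tanimoto({1, 2, 3, 4}, {3, 4, 5})  # 2 shared voxels / 5 total = 0.4
```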
On Improving Convergence Rates for Nonnegative Kernel Density Estimators
Terrell, George R.; Scott, David W.
1980-01-01
To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
The U.S.EPA has published recommendations for calibrator cell equivalent (CCE) densities of enterococci in recreational waters determined by a qPCR method in its 2012 Recreational Water Quality Criteria (RWQC). The CCE quantification unit stems from the calibration model used to ...
Infrared thermography for wood density estimation
López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis
2018-03-01
Infrared thermography (IRT) is becoming a commonly used technique to non-destructively inspect and evaluate wood structures. Based on the radiation emitted by all objects, this technique enables the remote visualization of the surface temperature without making contact using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated. However, some works have analyzed the operation of IRT and expanded its applications, as found in the latest literature. This work analyzes the effect of density on the thermodynamic behavior of timber to be determined by IRT. The cooling of various wood samples has been registered, and a statistical procedure that enables one to quantitatively estimate the density of timber has been designed. This procedure represents a new method to physically characterize this material.
ADN* Density log estimation Using Rockcell*
International Nuclear Information System (INIS)
Okuku, C.; Iloghalu, Emeka. M.; Omotayo, O.
2003-01-01
This work is intended to inform on the possibilities of estimating good density data in zones associated with sliding in a reservoir with the ADN* tool, with or without ADOS in string, in cases where repeat sections were not done, possibly due to hole stability or directional concerns. This procedure has equally been used to obtain better density data in corkscrew holes. Density data (ROBB) was recomputed using a neural network in RockCell* to estimate the density over zones of interest. RockCell* is a Schlumberger software that has neural network functionality which can be used to estimate missing logs using the combination of the responses of other log curves and intervals that are not affected by sliding. In this work, an interval was selected and within this interval twelve litho zones were defined using the unsupervised neural network. From this a training set was selected based on intervals of very good log responses outside the sliding zones. This training set was used to train and run the neural network for a specific lithostratigraphic interval. The results matched the known good density curve. After this, an estimation of the density curve was done using the supervised neural network. The output from this estimation matched very closely in the good portions of the log, thus providing some density measurements in the sliding zone. This methodology provides a scientific solution to missing data during the process of formation evaluation
Energy Technology Data Exchange (ETDEWEB)
Fan, J; Fan, J; Hu, W; Wang, J [Fudan University Shanghai Cancer Center, Shanghai, Shanghai (China)
2016-06-15
Purpose: To develop a fast automatic algorithm based on the two dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, namely the signed minimal distance from each OAR (organs at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results have been found between these two DVHs for each cancer, and the average of the relative point-wise differences is about 5%, within a clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of the treatment planning.
On Improving Density Estimators which are not Bona Fide Functions
Gajek, Leslaw
1986-01-01
In order to improve the rate of decrease of the IMSE for nonparametric kernel density estimators with nonrandom bandwidth beyond $O(n^{-4/5})$ all current methods must relax the constraint that the density estimate be a bona fide function, that is, be nonnegative and integrate to one. In this paper we show how to achieve similar improvement without relaxing any of these constraints. The method can also be applied for orthogonal series, adaptive orthogonal series, spline, jackknife, and other ...
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Breast density estimation from high spectral and spatial resolution MRI
Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.
2016-01-01
Abstract. A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generating, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists' breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 (pdensity estimations. An interclass correlation coefficient of 0.99 (pdensity estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists' BI-RADS. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and low intra-user variabilities shown in this preliminary study suggest that such a HiSS-based density metric may be potentially beneficial in programs requiring breast density such as in breast cancer risk assessment and monitoring effects of therapy. PMID:28042590
Probability Density Estimation Using Neural Networks in Monte Carlo Calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo
2008-01-01
The Monte Carlo neutronics analysis requires the capability for a tally distribution estimation like an axial power distribution or a flux gradient in a fuel rod, etc. This problem can be regarded as a probability density function estimation from an observation set. We apply the neural network based density estimation method to an observation and sampling weight set produced by the Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally method for estimating a non-smooth density, a fission source distribution, and an absorption rate's gradient in a burnable absorber rod. The application results shows that the neural network method can approximate a tally distribution quite well. (authors)
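The histogram method that serves as a baseline above can be sketched as a weighted histogram over an observation and sampling-weight set; this is an illustrative reconstruction with invented sample data, not the authors' Monte Carlo tally code:

```python
def histogram_density(samples, weights, bins, lo, hi):
    """Weighted histogram estimate of a density on [lo, hi)."""
    width = (hi - lo) / bins
    counts = [0.0] * bins
    for x, w in zip(samples, weights):
        if lo <= x < hi:
            counts[int((x - lo) / width)] += w
    total = sum(weights)
    # Normalize so the piecewise-constant estimate integrates to one.
    return [c / (total * width) for c in counts]

# Invented observations with equal sampling weights:
density = histogram_density([0.1, 0.2, 0.2, 0.7], [1.0] * 4, 2, 0.0, 1.0)
```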
Estimation and display of beam density profiles
Energy Technology Data Exchange (ETDEWEB)
Dasgupta, S; Mukhopadhyay, T; Roy, A; Mallik, C
1989-03-15
A setup in which wire-scanner-type beam-profile monitor data are collected on-line in a nuclear data-acquisition system has been used and a simple algorithm for estimation and display of the current density distribution in a particle beam is described.
Optimal Bandwidth Selection for Kernel Density Functionals Estimation
Directory of Open Access Journals (Sweden)
Su Chen
2015-01-01
Full Text Available The choice of bandwidth is crucial to the kernel density estimation (KDE) and kernel based regression. Various bandwidth selection methods for KDE and local least square regression have been developed in the past decade. It has been known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with appropriate choice of γ(x), and furthermore equality of scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, "Rule of Thumb") and direct plug-in bandwidth selection. Simulation studies display that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
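For orientation, the classical normal scale ("rule of thumb") bandwidth for a plain Gaussian-kernel KDE is h = 1.06·σ̂·n^(−1/5); the sketch below shows that textbook rule, not the paper's KDFE-specific selectors, and the sample data are invented:

```python
import math

def silverman_bandwidth(data):
    """Normal scale ("rule of thumb") bandwidth: 1.06 * sd * n**(-1/5)."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 1.06 * sd * n ** (-1 / 5)

def kde(x, data, h):
    """Gaussian-kernel density estimate at point x with bandwidth h."""
    total = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    return total / (len(data) * h * math.sqrt(2 * math.pi))

h = silverman_bandwidth([0.0, 1.0, 2.0, 3.0, 4.0])
```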
Energy Technology Data Exchange (ETDEWEB)
Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina, E-mail: despina.kontos@uphs.upenn.edu [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2013-12-15
Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers' manual segmentation, the proposed FCM-Atlas method achieves a
Density Estimation in Several Populations With Uncertain Population Membership
Ma, Yanyuan
2011-09-01
We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study.
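A natural baseline for this setting weights each observation's kernel by its membership probability for the target population; the sketch below is a hypothetical illustration of that idea, not the authors' estimator, and the data are invented:

```python
import math

def membership_weighted_kde(x, data, probs, h):
    """Gaussian KDE for one population, weighting each observation by the
    probability that it belongs to that population."""
    num = sum(p * math.exp(-0.5 * ((x - xi) / h) ** 2)
              for xi, p in zip(data, probs))
    return num / (sum(probs) * h * math.sqrt(2 * math.pi))

# With certain membership (all probabilities 1) this reduces to a plain KDE:
f0 = membership_weighted_kde(0.0, [0.0], [1.0], 1.0)  # 1/sqrt(2*pi)
```

Note that rescaling all membership probabilities by a common factor leaves the estimate unchanged, since the weights are normalized by their sum.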
A method of estimating log weights.
Charles N. Mann; Hilton H. Lysons
1972-01-01
This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
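The underlying arithmetic is log volume times the local density index; a sketch with invented log dimensions and an invented density index of 50 lb/ft³, approximating the log as a cylinder:

```python
import math

def log_weight_lb(diameter_ft, length_ft, density_index_lb_per_ft3):
    """Estimate a log's weight as cylinder volume times a local density index
    (pounds per cubic foot)."""
    radius = diameter_ft / 2.0
    volume_ft3 = math.pi * radius ** 2 * length_ft
    return volume_ft3 * density_index_lb_per_ft3

# A hypothetical 2 ft diameter, 32 ft log at 50 lb/ft^3:
w = log_weight_lb(2.0, 32.0, 50.0)  # about 5027 lb
```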
Density estimates of monarch butterflies overwintering in central Mexico
Directory of Open Access Journals (Sweden)
Wayne E. Thogmartin; James E. Diffendorfer; Laura Lopez-Hoffman; Karen Oberhauser; John M. Pleasants; Brice X. Semmens; Darius J. Semmens; Orley R. Taylor; Ruscena Wiederholt
2017-04-01
Full Text Available Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ~27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.
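The proxy calculation the abstract describes, occupied hectares times per-hectare density, is a single multiplication; the 6-ha occupancy below matches the average population size targeted in the abstract, and the 21.1 million ha−1 median density is the abstract's own figure:

```python
def monarch_population(hectares_occupied, density_per_ha):
    """Proxy population size: occupied overwintering area times density."""
    return hectares_occupied * density_per_ha

# Median overwinter density from the mixture distribution (21.1 million ha^-1)
# applied to a 6-ha occupancy:
pop = monarch_population(6.0, 21.1e6)  # about 127 million monarchs
```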
A new approach for estimating the density of liquids.
Sakagami, T; Fuchizaki, K; Ohara, K
2016-10-05
We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.
Gradient-based stochastic estimation of the density matrix
Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton
2018-03-01
Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
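The probing idea can be illustrated with a simpler (non-gradient) stochastic estimator of the density-matrix diagonal; the tight-binding chain, temperature, and probe count below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: 1-D tight-binding chain (assumed for illustration).
n = 64
H = np.zeros((n, n))
idx = np.arange(n - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -1.0

def fermi(H, beta=5.0, mu=0.0):
    """Finite-temperature density matrix f(H) by exact
    diagonalization (the reference for the stochastic estimate)."""
    w, v = np.linalg.eigh(H)
    f = 1.0 / (1.0 + np.exp(beta * (w - mu)))
    return (v * f) @ v.T

D = fermi(H)

# Stochastic probing of the diagonal: with random +-1 vectors z,
# E[z * (f(H) z)] = diag(f(H)) (Hutchinson-type estimator); the
# error shrinks like 1/sqrt(S) in the number of probes S.
S = 400
est = np.zeros(n)
for _ in range(S):
    z = rng.choice([-1.0, 1.0], size=n)
    est += z * (D @ z)
est /= S

print(np.max(np.abs(est - np.diag(D))))
```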
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
Schillinger, Dominik; Stefanov, Dimitar; Stavrev, Atanas
2013-01-01
-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed
Multivariate density estimation theory, practice, and visualization
Scott, David W
2015-01-01
David W. Scott, PhD, is Noah Harding Professor in the Department of Statistics at Rice University. The author of over 100 published articles, papers, and book chapters, Dr. Scott is also Fellow of the American Statistical Association (ASA) and the Institute of Mathematical Statistics. He is recipient of the ASA Founder's Award and the Army Wilks Award. His research interests include computational statistics, data visualization, and density estimation. Dr. Scott is also Coeditor of Wiley Interdisciplinary Reviews: Computational Statistics and previous Editor of the Journal of Computational and
Method of measuring surface density
International Nuclear Information System (INIS)
Gregor, J.
1982-01-01
A method is described of measuring surface density or thickness, preferably of coating layers, using radiation emitted by a suitable radionuclide, e.g., ²⁴¹Am. The radiation impinges on the measured material, e.g., a copper foil, and depending on its surface density or thickness, part of the impinging radiation flux is reflected and part penetrates through the material. The radiation which has penetrated excites, in a replaceable adjustable backing, characteristic radiation of an energy close to that of the impinging radiation (within ±30 keV). Part of the characteristic radiation flux spreads back towards the detector and penetrates through the material, where it is partly absorbed in proportion to the surface density or thickness of the coating layer. The flux of the penetrated characteristic radiation impinging on the face of the detector is thus a function of surface density or thickness. Only the part of the energy spectrum corresponding to the energy of the characteristic radiation is evaluated. (B.S.)
International Nuclear Information System (INIS)
Bilski, Pawel
2010-01-01
The high-temperature ratio (HTR) method, which exploits changes in the LiF:Mg,Ti glow-curve due to high-LET radiation, has been used for several years to estimate LET in an unknown radiation field. As TL efficiency is known to decrease after doses of densely ionizing radiation, a LET estimate is used to correct the TLD-measured values of dose. The HTR method is purely empirical and its general correctness is questionable. The validity of the HTR method was investigated by theoretical simulation of various mixed radiation fields. The LET_eff values estimated with the HTR method for mixed radiation fields were found in general to be incorrect, in some cases underestimating the true values of dose-averaged LET by an order of magnitude. The method produced correct estimates of average LET only in cases of almost mono-energetic fields (i.e. in non-mixed radiation conditions). The value of LET_eff found by the HTR method may therefore be treated as a qualitative indicator of increased LET, but not as a quantitative estimator of average LET. However, HTR-based correction of the TLD-measured dose value (HTR-B method) was found to be quite reliable. In all cases studied, application of this technique improved the result. Most of the measured doses fell within 10% of the true values. A further empirical improvement to the method is proposed. One may therefore recommend the HTR-B method to correct for decreased TL efficiency in mixed high-LET fields.
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.
2012-03-11
The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
A Balanced Approach to Adaptive Probability Density Estimation
Directory of Open Access Journals (Sweden)
Julio A. Kovacs
2017-04-01
Our development of a Fast Mutual Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it to molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical domains such as bioinformatics, signal processing and econometrics.
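A generic variable-bandwidth (balloon) estimator illustrates the kind of point-wise adaptive smoothing described above; this is a sketch, not the BADE algorithm, and the k-nearest-neighbor bandwidth rule and test data are assumptions.

```python
import numpy as np

def adaptive_kde(x_eval, data, k=20):
    """Variable-bandwidth (balloon) Gaussian KDE: at each evaluation
    point the bandwidth is the distance to the k-th nearest sample,
    so smoothing adapts to the local sample density."""
    data = np.asarray(data)
    out = np.empty(len(x_eval))
    for i, x in enumerate(x_eval):
        d = np.abs(data - x)
        h = np.partition(d, k)[k]  # k-NN distance as local bandwidth
        out[i] = np.mean(np.exp(-0.5 * (d / h)**2)) / (h * np.sqrt(2 * np.pi))
    return out

rng = np.random.default_rng(2)
# A very uneven sample: a dense cluster plus a sparse tail.
data = np.concatenate([rng.normal(0, 0.1, 900), rng.normal(4, 1.0, 100)])
xs = np.linspace(-1, 7, 200)
dens = adaptive_kde(xs, data)
print(xs[np.argmax(dens)])
```

A fixed bandwidth would either oversmooth the cluster or produce spurious bumps in the tail; the adaptive bandwidth handles both regions with one rule.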
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Mammography density estimation with automated volumetric breast density measurement
International Nuclear Information System (INIS)
Ko, Su Yeon; Kim, Eun Kyung; Kim, Min Jung; Moon, Hee Jung
2014-01-01
To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.
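The kappa statistic used to quantify agreement can be computed directly from a confusion matrix; the 2x2 table below (fatty vs. dense) is illustrative, not the study's data.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, from a raters-by-raters confusion matrix."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    po = np.trace(c) / n                           # observed agreement
    pe = (c.sum(axis=0) @ c.sum(axis=1)) / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts for 1129 women, radiologist (rows) vs. VBDM
# (columns), fatty vs. dense; chosen for illustration only.
table = np.array([[420, 130],
                  [170, 409]])
print(cohens_kappa(table))
```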
Current Source Density Estimation for Single Neurons
Directory of Open Access Journals (Sweden)
Dorottya Cserpán
2014-03-01
Recent developments of multielectrode technology have made it possible to measure the extracellular potential generated in the neural tissue with spatial precision on the order of tens of micrometers and on a submillisecond time scale. Combining such measurements with imaging of single neurons within the studied tissue opens up new experimental possibilities for estimating the distribution of current sources along a dendritic tree. In this work we show that if we are able to relate part of the recording of extracellular potential to a specific cell of known morphology, we can estimate the spatiotemporal distribution of transmembrane currents along it. We present here an extension of the kernel CSD method (Potworowski et al., 2012) applicable in such a case. We test it on several model neurons of progressively complicated morphologies, from ball-and-stick to realistic, up to analysis of simulated neuron activity embedded in a substantial working network (Traub et al., 2005). We discuss the caveats and possibilities of this new approach.
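The core inverse problem, recovering current sources along a known morphology from a handful of extracellular potentials, can be sketched as a regularized linear inversion; the 1-D geometry, 1/distance forward kernel, and regularization weight below are illustrative assumptions, not the kernel CSD formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "morphology": current sources at 24 positions along a cable,
# potentials recorded at 32 electrode positions (both in arbitrary
# units); the forward kernel is an assumed 1/distance falloff with
# a small offset to avoid the singularity.
src_pos = np.linspace(0, 1, 24)
el_pos = np.linspace(0, 1, 32)
A = 1.0 / (np.abs(el_pos[:, None] - src_pos[None, :]) + 0.05)

# Ground-truth CSD: a source/sink (dipole-like) pair.
csd_true = (np.exp(-((src_pos - 0.3) / 0.1)**2)
            - np.exp(-((src_pos - 0.7) / 0.1)**2))
v = A @ csd_true + rng.normal(0, 0.05, el_pos.size)  # noisy potentials

# Ridge (Tikhonov) inversion: argmin ||A c - v||^2 + lam ||c||^2.
lam = 0.1
csd_est = np.linalg.solve(A.T @ A + lam * np.eye(src_pos.size), A.T @ v)

corr = np.corrcoef(csd_est, csd_true)[0, 1]
print(corr)
```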
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has yielded as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. The pilot-guided estimation method has shown that the maximum likelihood estimate of signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are required. The blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
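The blind estimator can be sketched as follows, under the stated normalization (unit received power, which ties the amplitude and noise variance together); the frame length, amplitude, and noise level are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated BPSK frame in AWGN: amplitude A, noise std sigma.
A_true, sigma = 0.8, 0.5
bits = rng.choice([-1.0, 1.0], size=200_000)
y = A_true * bits + rng.normal(0, sigma, bits.size)

# Normalize to unit power, so A^2 + sigma^2 = 1 and the only
# unknown is the amplitude A in (0, 1).
y /= np.sqrt(np.mean(y**2))
A0 = A_true / np.sqrt(A_true**2 + sigma**2)  # for checking only

# ML fixed point: A = mean(y * tanh(A * y / sigma^2)), with
# sigma^2 = 1 - A^2 after normalization. g is positive below the
# root and negative above it, so bisection over (0, 1) finds the
# amplitude -- a sketch of the "iterative binary search" above.
def g(A):
    return np.mean(y * np.tanh(A * y / (1.0 - A * A))) - A

lo, hi = 1e-3, 1.0 - 1e-3
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
A_est = 0.5 * (lo + hi)
combining_ratio = A_est / (1.0 - A_est**2)
print(A_est, A0)
```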
Face Value: Towards Robust Estimates of Snow Leopard Densities.
Directory of Open Access Journals (Sweden)
Justine S Alexander
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.
Abbasi, Fahim; Reaven, Gerald M
2011-12-01
The objective was to compare relationships between insulin-mediated glucose uptake and surrogate estimates of insulin action, particularly those using fasting triglyceride (TG) and high-density lipoprotein cholesterol (HDL-C) concentrations. Insulin-mediated glucose uptake was quantified by determining the steady-state plasma glucose (SSPG) concentration during the insulin suppression test in 455 nondiabetic subjects. Fasting TG, HDL-C, glucose, and insulin concentrations were measured; and calculations were made of the following: (1) plasma concentration ratio of TG/HDL-C, (2) TG × fasting glucose (TyG index), (3) homeostasis model assessment of insulin resistance, and (4) insulin area under the curve (insulin-AUC) during a glucose tolerance test. Insulin-AUC correlated most closely with SSPG (r ~ 0.75, P index, homeostasis model assessment of insulin resistance, and fasting TG and insulin (r ~ 0.60, P index correlated with SSPG concentration to a similar degree, and the relationships were comparable to estimates using fasting insulin. The strongest relationship was between SSPG and insulin-AUC. Copyright © 2011 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Mathew W. Alldredge
2007-12-01
The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture-recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. Singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at high and low homogeneous rates per interval with those singing at high and low heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant
Simplified large African carnivore density estimators from track indices
Directory of Open Access Journals (Sweden)
Christiaan W. Winterbach
2016-12-01
Background The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. Methods We did simple linear regression with intercept analysis and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. Results The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Discussion Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26
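The model comparison described in Methods can be sketched as follows; the synthetic track-index data and the AIC form (Gaussian likelihood) are assumptions for illustration, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic survey data: track index x, density y proportional to x
# (zero tracks should imply zero animals), plus noise. The slope 3.26
# echoes the abstract; the rest is made up for illustration.
x = rng.uniform(0.5, 5.0, 30)
y = 3.26 * x + rng.normal(0, 0.8, x.size)

def fit_aic(X, y):
    """OLS fit; returns coefficients and AIC under a Gaussian
    likelihood (k coefficients plus the error variance)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, k = X.shape
    rss = float(np.sum((y - X @ beta)**2))
    return beta, n * np.log(rss / n) + 2 * (k + 1)

# Model 1: through the origin, y = a*x.  Model 2: with intercept.
beta0, aic0 = fit_aic(x[:, None], y)
beta1, aic1 = fit_aic(np.column_stack([x, np.ones_like(x)]), y)
print(beta0, aic0)
print(beta1, aic1)
```

When the true intercept is zero, the origin model typically earns the lower AIC: it spends one fewer parameter for essentially the same residual sum of squares.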
Estimating diurnal primate densities using distance sampling ...
African Journals Online (AJOL)
SARAH
2016-03-31
In the second session, we used 10 transect adjusted to transect (Grid 17 ... session transect was visited 20 times while at the second session transect ... probability, the density of the group and the group size of each species ...
Estimating black bear density using DNA data from hair snares
Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.
2010-01-01
DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km². A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
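The detection model described above, distance-dependent detection with a behavioral (trap-attraction) covariate, can be sketched as follows; the parameter values are illustrative, not the study's estimates.

```python
import numpy as np

def detection_prob(dist, prev_capture, p0=0.1, sigma=1.5, beta=0.8):
    """Per-occasion detection probability in a spatial capture-
    recapture model: a baseline p0, shifted on the logit scale by a
    positive behavioral response after first capture, then scaled by
    a half-normal decay in the distance between an individual's
    activity center and the trap. Values are assumed, not estimated."""
    logit_p0 = np.log(p0 / (1 - p0))
    p_base = 1 / (1 + np.exp(-(logit_p0 + beta * prev_capture)))
    return p_base * np.exp(-dist**2 / (2 * sigma**2))

d = np.linspace(0, 6, 7)                       # distances in km
p_naive = detection_prob(d, prev_capture=0)
p_trap_happy = detection_prob(d, prev_capture=1)
print(p_naive)
print(p_trap_happy)
```

A positive beta raises detection at every distance after first capture, which is the "attracted to baited sites" effect the abstract reports.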
Improving Frozen Precipitation Density Estimation in Land Surface Modeling
Sparrow, K.; Fall, G. M.
2017-12-01
The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
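The screening and detrending steps can be sketched as follows; the station data, outlier thresholds, and trend magnitude are assumptions for illustration, not GHCN-D values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical station data: mean SLR tending to increase with
# elevation, plus scatter.
elev = rng.uniform(0, 3000, 200)              # station elevation (m)
slr = 10 + 0.002 * elev + rng.normal(0, 1.5, elev.size)

# Screen outliers and reports of exactly SLR = 10 (the over-reported
# default ratio); the thresholds here are assumed.
keep = (slr > 2) & (slr < 40) & (np.abs(slr - 10) > 1e-6)
elev, slr = elev[keep], slr[keep]

# Fit and remove the linear SLR-vs-elevation trend by least squares;
# the residuals would then go to the kriging (interpolation) step.
slope, intercept = np.polyfit(elev, slr, 1)
residuals = slr - (slope * elev + intercept)
print(slope, intercept, residuals.std())
```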
Nonparametric volatility density estimation for discrete time models
Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.
2005-01-01
We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
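A simplified simulation of the encounter-rate idea (ignoring the collision correlations the paper analyzes, and with a made-up grid size and agent count):

```python
import numpy as np

rng = np.random.default_rng(7)

# n agents random-walking on an m-cell torus grid; each agent
# "estimates" density from how often it shares a cell with others.
side, n, steps = 20, 50, 2000
m = side * side
pos = rng.integers(0, side, size=(n, 2))
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])

encounters = 0
for _ in range(steps):
    pos = (pos + moves[rng.integers(0, 4, size=n)]) % side
    cell = pos[:, 0] * side + pos[:, 1]
    counts = np.bincount(cell, minlength=m)
    encounters += np.sum(counts[cell] - 1)   # others on my cell

# The encounter rate per agent-step approximates (n - 1) / m, the
# density of *other* agents on the grid.
density_est = encounters / (n * steps)
print(density_est, (n - 1) / m)
```

The interesting part of the paper is exactly what this sketch glosses over: nearby agents collide repeatedly, so the per-step encounters are not independent samples, yet the estimate still concentrates almost as if they were.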
Information geometry of density matrices and state estimation
International Nuclear Information System (INIS)
Brody, Dorje C
2011-01-01
Given a pure state vector |x⟩ and a density matrix ρ̂, the function p(x|ρ̂) = ⟨x|ρ̂|x⟩/⟨x|x⟩ defines a probability density on the space of pure states parameterised by density matrices. The associated Fisher-Rao information measure is used to define a unitary invariant Riemannian metric on the space of density matrices. An alternative derivation of the metric, based on square-root density matrices and trace norms, is provided. This is applied to the problem of quantum-state estimation. In the simplest case of unitary parameter estimation, new higher-order corrections to the uncertainty relations, applicable to general mixed states, are derived. (fast track communication)
An Improved Convolutional Neural Network on Crowd Density Estimation
Directory of Open Access Journals (Sweden)
Pan Shao-Yun
2016-01-01
In this paper, a new method is proposed for crowd density estimation. An improved convolutional neural network is combined with traditional texture features. The data calculated by the convolutional layer can be treated as a new kind of feature, so more useful information can be extracted from images by combining different features. In the meantime, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance to allow for its use in real-world applications.
Kernel bandwidth estimation for non-parametric density estimation: a comparative study
CSIR Research Space (South Africa)
Van der Walt, CM
2013-12-01
We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
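As a concrete example of a conventional bandwidth estimator of the kind such a comparison would include, Silverman's rule of thumb for a 1-D Gaussian kernel:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian kernel:
    h = 0.9 * min(std, IQR/1.34) * n^(-1/5)."""
    x = np.asarray(x)
    q75, q25 = np.percentile(x, [75, 25])
    scale = min(x.std(ddof=1), (q75 - q25) / 1.34)
    return 0.9 * scale * len(x) ** (-0.2)

def kde(x_eval, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate."""
    z = (np.asarray(x_eval)[:, None] - np.asarray(data)[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(8)
data = rng.normal(0, 1, 1000)
h = silverman_bandwidth(data)
xs = np.linspace(-4, 4, 201)
dens = kde(xs, data, h)
print(h, dens[100])  # dens[100] is the estimate at x = 0
```

Rules of this kind are derived under a Gaussian reference density, which is precisely why their behaviour on multimodal, high-dimensional pattern-recognition data is worth studying empirically.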
Energy Technology Data Exchange (ETDEWEB)
Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in
2016-09-07
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using the maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time, in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
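One standard way to guarantee positivity during a maximum likelihood search is to parameterize the density matrix as ρ = T†T / tr(T†T): any parameter values then yield a positive semidefinite, unit-trace matrix. This is a generic sketch of that constraint, not the NMR protocol itself.

```python
import numpy as np

def rho_from_params(params, dim):
    """Build a physically valid density matrix from 2*dim^2 real
    parameters: form a complex matrix T, then rho = T†T / tr(T†T).
    Positivity and unit trace hold by construction, so an optimizer
    can search over params freely."""
    T = (params[:dim * dim] + 1j * params[dim * dim:]).reshape(dim, dim)
    rho = T.conj().T @ T
    return rho / np.trace(rho).real

# Arbitrary parameters still yield a valid single-qubit state.
rng = np.random.default_rng(9)
rho = rho_from_params(rng.normal(size=8), 2)
evals = np.linalg.eigvalsh(rho)
print(evals, np.trace(rho).real)
```

In a likelihood fit, the negative log-likelihood of the measured expectation values would be minimized over `params`; the naive linear-inversion reconstruction has no such built-in constraint, which is how unphysical (negative-eigenvalue) matrices arise.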
(epf) in estimating wind power density
African Journals Online (AJOL)
Dogara M. D et. al
Energy Pattern Factor (EPF). The Energy Pattern Factor (EPF) method is defined by Akdag and Ali (2009) as ... the monthly average for an eleven-year period was found (Table 1). ...
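Assuming the usual definition (the energy pattern factor is the mean cubed wind speed divided by the cubed mean speed), wind power density can be estimated as follows; the wind-speed sample below is synthetic, not the station data summarized in Table 1.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic hourly wind speeds (m/s); Rayleigh is the Weibull
# distribution with shape k = 2, a common wind-speed assumption.
v = rng.rayleigh(scale=6.0, size=8760)

rho_air = 1.225  # air density, kg/m^3

# Energy pattern factor: mean cubed speed over cubed mean speed.
epf = np.mean(v**3) / np.mean(v)**3

# Wind power density (W/m^2): 0.5 * rho * mean(v^3), written here
# via the EPF so the role of the factor is explicit.
wpd = 0.5 * rho_air * epf * np.mean(v)**3
print(epf, wpd)
```

Because v³ weights strong winds heavily, the EPF (here near 1.9) shows how much power the speed distribution carries beyond what the mean speed alone suggests.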
A density gradient theory based method for surface tension calculations
DEFF Research Database (Denmark)
Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios
2016-01-01
The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods together with the full density gradient theory have been used to calculate the surface tension of various
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali Ali; Zhang, Xiangliang; Wang, Suojin
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, on estimation accuracy of complex density structures in data streams, computing time and memory usage. KDE-Track is also demonstrated to capture the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
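For readers unfamiliar with the conventional KDE that KDE-Track builds on, a minimal fixed-bandwidth Gaussian version is sketched below. This shows only the baseline estimator, not KDE-Track's interpolation or incremental updates; the data and bandwidth are illustrative.

```python
import math

def gaussian_kde(sample, x, bandwidth):
    """Fixed-bandwidth kernel density estimate at a point x:
    f_hat(x) = (1 / (n*h)) * sum_i K((x - x_i) / h), with a Gaussian kernel K."""
    n = len(sample)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in sample)

# Two clusters of points; the estimate is high near a cluster, low between them.
data = [1.0, 1.2, 0.8, 3.0, 3.1, 2.9]
f_near_cluster = gaussian_kde(data, 1.0, bandwidth=0.3)
f_between = gaussian_kde(data, 2.0, bandwidth=0.3)
```

Because each kernel integrates to one, the estimate itself integrates to one over the real line, which is easy to check numerically on a grid.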
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.; Franek, M.; Schonlieb, C.-B.
2012-01-01
for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations
Estimating Foreign-Object-Debris Density from Photogrammetry Data
Long, Jason; Metzger, Philip; Lane, John
2013-01-01
Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick, representing the first bricks ejected from the flame trench wall, or whether it was one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.
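The Verlet position update mentioned in the abstract can be sketched in a few lines. This is not the authors' drag-estimation pipeline, only the Störmer-Verlet recurrence itself, demonstrated on a constant-gravity test case where the closed-form trajectory is known (for constant acceleration the recurrence reproduces the quadratic exactly).

```python
def verlet_positions(x0, v0, accel, dt, steps):
    """Stoermer-Verlet position integration:
    x_{n+1} = 2*x_n - x_{n-1} + a(x_n) * dt^2,
    seeded with a Taylor step for x_1. Returns the list [x_0, ..., x_steps]."""
    xs = [x0, x0 + v0 * dt + 0.5 * accel(x0) * dt ** 2]
    for _ in range(steps - 1):
        xs.append(2.0 * xs[-1] - xs[-2] + accel(xs[-1]) * dt ** 2)
    return xs

# Vertical throw under constant gravity; exact answer x(t) = v0*t - 0.5*g*t^2.
g = 9.81
dt, steps = 0.01, 100
xs = verlet_positions(x0=0.0, v0=20.0, accel=lambda x: -g, dt=dt, steps=steps)
t_end = steps * dt
exact = 20.0 * t_end - 0.5 * g * t_end ** 2
```

The appeal of the position form is that it never differentiates noisy data twice, which is why it behaves well on photogrammetry-derived positions.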
Improved Variable Window Kernel Estimates of Probability Densities
Hall, Peter; Hu, Tien Chung; Marron, J. S.
1995-01-01
Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
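A common concrete form of the variable-window idea is Abramson's square-root law, in which each data point's bandwidth is inversely proportional to the square root of a pilot density estimate at that point. The sketch below uses that rule with a geometric-mean normalization; it is an illustration of the general technique, not the specific estimators analyzed by Hall, Hu and Marron, and the data are made up.

```python
import math

def fixed_kde(sample, x, h):
    """Ordinary fixed-bandwidth Gaussian KDE, used here as the pilot estimate."""
    c = 1.0 / (len(sample) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)

def variable_kde(sample, x, h0):
    """Abramson-style variable-bandwidth KDE: point i gets its own window
    h_i = h0 * sqrt(g / pilot(x_i)), where g is the geometric mean of the
    pilot values, so windows narrow where the data are dense."""
    pilot = [fixed_kde(sample, xi, h0) for xi in sample]
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))
    total = 0.0
    for xi, p in zip(sample, pilot):
        hi = h0 * math.sqrt(g / p)
        total += math.exp(-0.5 * ((x - xi) / hi) ** 2) / (hi * math.sqrt(2.0 * math.pi))
    return total / len(sample)

# A dense cluster plus a sparse tail: the adaptive windows sharpen the peak.
data = [0.1, 0.2, 0.25, 0.3, 0.4, 2.0, 4.0]
f_peak = variable_kde(data, 0.25, h0=0.5)
f_tail = variable_kde(data, 4.0, h0=0.5)
```

Each rescaled kernel still integrates to one, so the adaptive estimate remains a proper density.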
Polarizable Density Embedding Coupled Cluster Method
DEFF Research Database (Denmark)
Hršak, Dalibor; Olsen, Jógvan Magnus Haugaard; Kongsted, Jacob
2018-01-01
by an embedding potential consisting of a set of fragment densities obtained from calculations on isolated fragments with a quantum-chemistry method such as Hartree-Fock (HF) or Kohn-Sham density functional theory (KS-DFT) and dressed with a set of atom-centered anisotropic dipole-dipole polarizabilities...
Boundary methods for mode estimation
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computations to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to these techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
Heuristic introduction to estimation methods
International Nuclear Information System (INIS)
Feeley, J.J.; Griffith, J.M.
1982-08-01
The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems
HEDPIN: a computer program to estimate pinwise power density
International Nuclear Information System (INIS)
Cappiello, M.W.
1976-05-01
A description is given of the digital computer program, HEDPIN. This program, modeled after a previously developed program, POWPIN, provides a means of estimating the pinwise power density distribution in fast reactor triangular pitched pin bundles. The capability also exists for computing any reaction rate of interest at the respective pin positions within an assembly. HEDPIN was developed in support of FTR fuel and test management as well as fast reactor core design and core characterization planning and analysis. The results of a test devised to check out HEDPIN's computational method are given, and the realm of application is discussed. Nearly all programming is in FORTRAN IV. Variable dimensioning is employed to make efficient use of core memory and maintain short running time for small problems. Input instructions, sample problem, and a program listing are also given
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional "saturation" trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this ...
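The grid-based estimator described above reduces to a simple calculation once the population size and effective area estimates are in hand. The sketch below illustrates D̂ = N̂/Â with one common convention for the effective area (grid extended by a boundary strip, often set to half the mean maximum distance moved); the numbers, strip choice, and the known-area delta-method standard error are illustrative assumptions, not values from the study.

```python
def effective_area_ha(grid_x_m, grid_y_m, strip_m):
    """Effective sampling area (hectares): the trapping grid extended by a
    boundary strip on each side. A common convention sets the strip to half
    the mean maximum distance moved by recaptured animals."""
    return (grid_x_m + 2.0 * strip_m) * (grid_y_m + 2.0 * strip_m) / 10_000.0

def density_per_ha(n_hat, area_ha):
    """Grid-based density estimate: D_hat = N_hat / A_hat."""
    return n_hat / area_ha

def density_se(se_n, area_ha):
    """Delta-method standard error of D_hat when the area is treated as known."""
    return se_n / area_ha

# Hypothetical 180 m x 180 m grid, 15 m boundary strip, N_hat = 42 animals.
a_hat = effective_area_ha(180.0, 180.0, 15.0)   # 4.41 ha
d_hat = density_per_ha(42, a_hat)               # animals per hectare
```

Distance-sampling (web) estimators avoid the separate Â step by estimating a detection function from the distance data directly.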
Computerized image analysis: estimation of breast density on mammograms
Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
2000-06-01
An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm × 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using the BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
Quantitative assessment of breast density: comparison of different methods
International Nuclear Information System (INIS)
Qin Naishan; Guo Li; Dang Yi; Song Luxin; Wang Xiaoying
2011-01-01
Objective: To compare different methods of quantitative breast density measurement. Methods: The study included sixty patients who underwent both mammography and breast MRI. The breast density was computed automatically on digital mammograms with an R2 workstation. Two experienced radiologists read the mammograms and assessed the breast density with the Wolfe and ACR classifications respectively. A fuzzy C-means clustering algorithm (FCM) was used to assess breast density on MRI. Each assessment method was repeated after 2 weeks. Spearman and Pearson correlations of inter- and intrareader and intermodality were computed for density estimates. Results: Inter- and intrareader correlations of the Wolfe classification were 0.74 and 0.65, and they were 0.74 and 0.82 for the ACR classification respectively. Correlation between the Wolfe and ACR classifications was 0.77. High interreader correlation of 0.98 and intrareader correlation of 0.96 were observed with the MR FCM measurement. The correlation between digital mammograms and MRI was high in the assessment of breast density (r=0.81, P<0.01). Conclusion: The high correlation of breast density estimates on digital mammograms and MRI FCM suggested the former could be used as a simple and accurate method. (authors)
Order statistics & inference: estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co...
Methods for estimating the semivariogram
DEFF Research Database (Denmark)
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
. In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...
Cortical cell and neuron density estimates in one chimpanzee hemisphere.
Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H
2016-01-19
The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm² of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates.
Bouguer density analysis using nettleton method at Banten NPP site
International Nuclear Information System (INIS)
Yuliastuti; Hadi Suntoko; Yarianto SBS
2017-01-01
Sub-surface information becomes crucial in determining a feasible NPP site that is safe from external hazards. A gravity survey, which yields density information, is essential for understanding the sub-surface structure. Nevertheless, overcorrection or undercorrection will lead to a false interpretation. Therefore, a density correction in terms of the near-surface average density, or Bouguer density, needs to be calculated. The objective of this paper is to estimate and analyze Bouguer density using the Nettleton method at the Banten NPP site. The methodology used in this paper is the Nettleton method applied to three different slices (A-B, A-C and A-D) with assumed densities ranging between 1700 and 3300 kg/m³. The Nettleton method determines the density correction from the minimum correlation between the gravity anomaly and topography. The results show that for slice A-B, which covers rough topography differences, the Nettleton method failed. Using the other two slices, the Nettleton method yields different density values: 2700 kg/m³ for A-C and 2300 kg/m³ for A-D. A-C provides the lowest correlation value, which represents the Upper Banten tuff and Gede Mt. volcanic rocks, in accordance with the Quaternary rocks existing in the studied area. (author)
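The minimum-correlation criterion at the heart of the Nettleton method can be sketched numerically. This is an illustration only: it uses the simple Bouguer slab term 0.04193·ρ·h mGal (ρ in g/cm³, h in m) and a synthetic profile built from an assumed 2.3 g/cm³ slab plus a small regional term unrelated to topography; none of the values are from the Banten survey.

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

def nettleton_density(free_air_mgal, elev_m, trial_rho_gcc):
    """Return the trial density (g/cm^3) whose Bouguer anomaly is least
    correlated with topography; slab correction = 0.04193 * rho * h mGal."""
    best_rho, best_r = None, None
    for rho in trial_rho_gcc:
        anomaly = [fa - 0.04193 * rho * h for fa, h in zip(free_air_mgal, elev_m)]
        r = abs(pearson_r(anomaly, elev_m))
        if best_r is None or r < best_r:
            best_rho, best_r = rho, r
    return best_rho

# Synthetic profile: free-air anomaly from a 2.3 g/cm^3 slab plus a regional term.
elev = [10.0, 50.0, 120.0, 200.0, 80.0, 30.0]
regional = [0.2, -0.3, 0.3, -0.2, -0.1, 0.1]
free_air = [5.0 + 0.04193 * 2.3 * h + reg for h, reg in zip(elev, regional)]
rho_best = nettleton_density(free_air, elev, [2.1, 2.2, 2.3, 2.4, 2.5])
```

With the correct density removed, the residual anomaly mirrors only the regional term, so its correlation with elevation drops to a minimum, which is exactly the criterion the abstract describes.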
Continuum Level Density in Complex Scaling Method
International Nuclear Information System (INIS)
Suzuki, R.; Myo, T.; Kato, K.
2005-01-01
A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique
Density-functional expansion methods: Grand challenges.
Giese, Timothy J; York, Darrin M
2012-03-01
We discuss the source of errors in semiempirical density functional expansion (VE) methods. In particular, we show that VE methods are capable of well-reproducing their standard Kohn-Sham density functional method counterparts, but suffer from large errors upon using one or more of these approximations: the limited size of the atomic orbital basis, the Slater monopole auxiliary basis description of the response density, and the one- and two-body treatment of the core-Hamiltonian matrix elements. In the process of discussing these approximations and highlighting their symptoms, we introduce a new model that supplements the second-order density-functional tight-binding model with a self-consistent charge-dependent chemical potential equalization correction; we review our recently reported method for generalizing the auxiliary basis description of the atomic orbital response density; and we decompose the first-order potential into a summation of additive atomic components and many-body corrections, and from this examination, we provide new insights and preliminary results that motivate and inspire new approximate treatments of the core-Hamiltonian.
Unrecorded Alcohol Consumption: Quantitative Methods of Estimation
Razvodovsky, Y. E.
2010-01-01
unrecorded alcohol; methods of estimation. In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of the unrecorded alcohol consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.
Acoustic levitation methods for density measurements
Trinh, E. H.; Hsu, C. J.
1986-01-01
The capability of ultrasonic levitators operating in air to perform density measurements has been demonstrated. The remote determination of the density of ordinary liquids as well as low density solid metals can be carried out using levitated samples with size on the order of a few millimeters and at a frequency of 20 kHz. Two basic methods may be used. The first one is derived from a previously known technique developed for acoustic levitation in liquid media, and is based on the static equilibrium position of levitated samples in the earth's gravitational field. The second approach relies on the dynamic interaction between a levitated sample and the acoustic field. The first technique appears more accurate (1 percent uncertainty), but the latter method is directly applicable to a near gravity-free environment such as that found in space.
Bayesian error estimation in density-functional theory
DEFF Research Database (Denmark)
Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund
2005-01-01
We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...
Estimate of energy density on CYCLOPS spatial filter pinhole structure
International Nuclear Information System (INIS)
Guch, S. Jr.
1974-01-01
The inclusion of a spatial filter between the B and C stages in CYCLOPS to reduce the effects of small-scale beam self-focusing is discussed. An estimate is made of the energy density to which the pinhole will be subjected, and the survivability of various pinhole materials and designs is discussed
State of the Art in Photon-Density Estimation
DEFF Research Database (Denmark)
Hachisuka, Toshiya; Jarosz, Wojciech; Georgiev, Iliyan
2013-01-01
scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error...
State of the Art in Photon Density Estimation
DEFF Research Database (Denmark)
Hachisuka, Toshiya; Jarosz, Wojciech; Bouchard, Guillaume
2012-01-01
scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error...
Estimation of larval density of Liriomyza sativae Blanchard (Diptera ...
African Journals Online (AJOL)
This study was conducted to develop sequential sampling plans to estimate larval density of Liriomyza sativae Blanchard (Diptera: Agromyzidae) at three precision levels in cucumber greenhouse. The within-greenhouse spatial patterns of larvae were aggregated. The slopes and intercepts of both Iwao's patchiness ...
Estimating Soil Bulk Density and Total Nitrogen from Catchment ...
African Journals Online (AJOL)
Even though data on soil bulk density (BD) and total nitrogen (TN) are essential for planning modern farming techniques, their availability is limited for many applications in the developing world. This study is designed to estimate BD and TN from soil properties, land-use systems, soil types and landforms in the ...
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km² [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km² and fecal DNA, 6.65 ± 2.37 tigers/100 km²). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
Corruption clubs: empirical evidence from kernel density estimates
Herzfeld, T.; Weiss, Ch.
2007-01-01
A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to
Evaluating lidar point densities for effective estimation of aboveground biomass
Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.
2016-01-01
The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m², corresponding to the point density range of 3DEP to provide national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m². Landsat 8-based aboveground biomass estimates produced errors larger than the lowest lidar point density of 0.5 point/m², and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m², our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.
Estimating the effect of urban density on fuel demand
Energy Technology Data Exchange (ETDEWEB)
Karathodorou, Niovi; Graham, Daniel J. [Imperial College London, London, SW7 2AZ (United Kingdom); Noland, Robert B. [Rutgers University, New Brunswick, NJ 08901 (United States)
2010-01-15
Much of the empirical literature on fuel demand presents estimates derived from national data which do not permit any explicit consideration of the spatial structure of the economy. Intuitively we would expect the degree of spatial concentration of activities to have a strong link with transport fuel consumption. The present paper addresses this theme by estimating a fuel demand model for urban areas to provide a direct estimate of the elasticity of demand with respect to urban density. Fuel demand per capita is decomposed into car stock per capita, fuel consumption per kilometre and annual distance driven per car per year. Urban density is found to affect fuel consumption, mostly through variations in the car stock and in the distances travelled, rather than through fuel consumption per kilometre. (author)
International Nuclear Information System (INIS)
Humbert, Ludovic; Hazrati Marangalou, Javad; Rietbergen, Bert van; Río Barquero, Luis Miguel del; Lenthe, G. Harry van
2016-01-01
Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the cortical thickness and ...
Automated mammographic breast density estimation using a fully convolutional network.
Lee, Juhun; Nishikawa, Robert M
2018-03-01
The purpose of this study was to develop a fully automated algorithm for mammographic breast density estimation using deep learning. Our algorithm used a fully convolutional network, which is a deep learning framework for image segmentation, to segment both the breast and the dense fibroglandular areas on mammographic images. Using the segmented breast and dense areas, our algorithm computed the breast percent density (PD), which is the fraction of dense area in a breast. Our dataset included full-field digital screening mammograms of 604 women, which included 1208 mediolateral oblique (MLO) and 1208 craniocaudal (CC) views. We allocated 455, 58, and 91 of 604 women and their exams into training, testing, and validation datasets, respectively. We established ground truth for the breast and the dense fibroglandular areas via manual segmentation and segmentation using a simple thresholding based on BI-RADS density assessments by radiologists, respectively. Using the mammograms and ground truth, we fine-tuned a pretrained deep learning network to train the network to segment both the breast and the fibroglandular areas. Using the validation dataset, we evaluated the performance of the proposed algorithm against radiologists' BI-RADS density assessments. Specifically, we conducted a correlation analysis between a BI-RADS density assessment of a given breast and its corresponding PD estimate by the proposed algorithm. In addition, we evaluated our algorithm in terms of its ability to classify the BI-RADS density using PD estimates, and its ability to provide consistent PD estimates for the left and the right breast and the MLO and CC views of the same women. To show the effectiveness of our algorithm, we compared the performance of our algorithm against a state-of-the-art algorithm, laboratory for individualized breast radiodensity assessment (LIBRA). The PD estimated by our algorithm correlated well with BI-RADS density ratings by radiologists. Pearson's rho values of
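The percent-density computation the abstract describes reduces to a ratio of segmented areas. A minimal sketch of that final step (illustrative only; the mask names and the toy image are assumptions, not the authors' segmentation pipeline):

```python
import numpy as np

def percent_density(breast_mask: np.ndarray, dense_mask: np.ndarray) -> float:
    """Breast percent density (PD): the fraction of the segmented breast
    area occupied by dense fibroglandular tissue, as a percentage."""
    breast_area = np.count_nonzero(breast_mask)
    if breast_area == 0:
        return 0.0
    # Only count dense pixels that lie inside the breast mask.
    dense_area = np.count_nonzero(np.logical_and(dense_mask, breast_mask))
    return 100.0 * dense_area / breast_area

# Toy example: a 4x4 "mammogram" with 8 breast pixels, 2 of them dense.
breast = np.zeros((4, 4), dtype=bool)
breast[:2, :] = True             # 8 breast pixels
dense = np.zeros((4, 4), dtype=bool)
dense[0, :2] = True              # 2 dense pixels inside the breast
pd = percent_density(breast, dense)  # -> 25.0
```

In the paper the two masks come from the fully convolutional network; here they are hand-made so the arithmetic is visible.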
KDE-Track: An Efficient Dynamic Density Estimator for Data Streams
Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Zhang, Xiangliang
2016-01-01
Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
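The core idea of maintaining a kernel model at grid points, updating it incrementally per arriving sample, and answering queries by interpolation can be sketched as follows (a simplified 1-D illustration with assumed grid and bandwidth, not the actual KDE-Track algorithm, which also adapts its model and bandwidth over the stream):

```python
import numpy as np

class GridKDE:
    """Sketch of a streaming KDE kept on a fixed grid: each new sample adds
    its Gaussian kernel contribution to the grid values, and density queries
    are answered by linear interpolation, so the per-query cost is
    independent of the number of samples seen so far."""
    def __init__(self, lo, hi, m=101, bandwidth=0.3):
        self.grid = np.linspace(lo, hi, m)
        self.vals = np.zeros(m)
        self.h = bandwidth
        self.n = 0

    def update(self, x):
        # Running mean of kernel contributions evaluated at the grid points.
        k = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (self.h * np.sqrt(2 * np.pi))
        self.n += 1
        self.vals += (k - self.vals) / self.n

    def density(self, x):
        return np.interp(x, self.grid, self.vals)

rng = np.random.default_rng(0)
kde = GridKDE(-5, 5)
for x in rng.normal(0.0, 1.0, 2000):   # simulated stream
    kde.update(x)

# The estimate should integrate to ~1 and peak near the true mode at 0.
area = float(np.sum(kde.vals) * (kde.grid[1] - kde.grid[0]))
```

The interpolation trick is what buys the linear time complexity claimed in the abstract: only the grid values are touched per update, never the raw history.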
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
New method for initial density reconstruction
Shi, Yanlong; Cautun, Marius; Li, Baojiu
2018-01-01
A theoretically interesting and practically important question in cosmology is the reconstruction of the initial density distribution given a late-time density field. This is a long-standing question with a revived interest recently, especially in the context of optimally extracting the baryonic acoustic oscillation (BAO) signals from observed galaxy distributions. We present a new efficient method to carry out this reconstruction, which is based on numerical solutions to the nonlinear partial differential equation that governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. This is motivated by numerical simulations of the quartic Galileon gravity model, which has similar equations that can be solved effectively by multigrid Gauss-Seidel relaxation. The method is based on mass conservation, and does not assume any specific cosmological model. Our test shows that it has a performance comparable to that of state-of-the-art algorithms that were very recently put forward in the literature, with the reconstructed density field over ~80% (50%) correlated with the initial condition at k ≲ 0.6 h/Mpc (1.0 h/Mpc). With an example, we demonstrate that this method can significantly improve the accuracy of BAO reconstruction.
Fusion rule estimation using vector space methods
International Nuclear Information System (INIS)
Rao, N.S.V.
1997-01-01
In a system of N sensors, sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ ℝ according to an unknown probability distribution P(Y^(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) such that Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f : ℝ^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitute a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation to f* is feasible. We estimate the sample size sufficient to ensure that the estimated rule provides a close approximation to f* with a high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate can be easily computed by well-known least squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks
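The least-squares fit of a fusion rule over a vector space of functions can be sketched concretely (a toy affine fusion rule with hypothetical sensor noise; the paper places no such restriction on the sensor distributions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 500, 3                      # n training samples, N sensors

# Hypothetical setup: each sensor reports the input X corrupted by noise.
X = rng.uniform(0.0, 1.0, n)
Y = X[:, None] + rng.normal(0.0, 0.1, (n, N))   # Y[i, j] = output of sensor j

# Fit the fusion rule f(Y) = w . Y + b over the vector space of affine maps
# by ordinary least squares, as the abstract suggests (polynomial time).
A = np.hstack([Y, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)
w, b = coef[:-1], coef[-1]

# Fused estimate for a new observation vector from the three sensors.
y_new = np.array([0.52, 0.48, 0.50])
x_hat = float(y_new @ w + b)
```

With unbiased sensors the learned weights roughly average the three readings, which is why the fused estimate lands near the common underlying input.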
Directory of Open Access Journals (Sweden)
Manan Gupta
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates
Dual-Layer Density Estimation for Multiple Object Instance Detection
Directory of Open Access Journals (Sweden)
Qiang Zhang
2016-01-01
This paper introduces a dual-layer density estimation-based architecture for multiple object instance detection in robot inventory management applications. The approach consists of raw scale-invariant feature transform (SIFT) feature matching and key point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of the density estimation. A cascade of filters is applied after feature template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirical coefficient. The coefficient is identified experimentally. Adaptive threshold-based grid voting is applied to find all candidate object instances. False detections are eliminated using final geometric verification in accordance with Random Sample Consensus (RANSAC). The detection results of the proposed approach are evaluated on a self-built dataset collected in a supermarket. The results demonstrate that the approach provides high robustness and low latency for inventory management applications.
Directory of Open Access Journals (Sweden)
YU Wenhao
2015-01-01
The distribution pattern and distribution density of urban facility POIs are of great significance for infrastructure planning and urban spatial analysis. Kernel density estimation, which has usually been utilized for expressing these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis or Voronoi-based methods) because it accounts for regional influence, in line with the first law of geography. However, traditional kernel density estimation is based on Euclidean space, ignoring the fact that the service functions and interrelations of urban facilities operate over network path distance rather than conventional Euclidean distance. Hence, this research proposes a computational model of network kernel density estimation, together with an extended form of the model for the case of added constraints. This work also discusses the impact of the distance attenuation threshold and the kernel height extremum on the representation of kernel density. A large-scale experiment on real data, analyzing different POI distribution patterns (random, sparse, regional-intensive, linear-intensive), discusses the spatial distribution characteristics, influence factors, and service functions of POI infrastructure in the city.
Estimation of dislocations density and distribution of dislocations during ECAP-Conform process
Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza
2018-01-01
The dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm) that was severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) is studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated. Then the total dislocation densities were calculated and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2×10^12 m^-2 in the annealed state increases to 4×10^13 m^-2 at the middle of the groove (135° from the entrance), and reaches 6.4×10^13 m^-2 at the end of the groove just before the ECAP region. The calculated average dislocation density for the one-pass severely deformed Al sample reached 6.2×10^14 m^-2. At the micrometer scale the behavior of metals, especially their mechanical properties, largely depends on the dislocation density and dislocation distribution. So, yield stresses at different conditions were estimated based on the calculated dislocation densities. The estimated yield stresses were then compared with experimental results and good agreement was found. Although the grain size of the material did not clearly change, the yield stress showed a marked increase due to the development of a cell structure. The considerable increase in dislocation density in this process is a good justification for the formation of subgrains and cell structures during the process, which can be the reason for the increase in yield stress.
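Estimating yield stress from a measured dislocation density is conventionally done with the Taylor hardening relation σ_y = σ_0 + αMGb√ρ. The abstract does not state which relation the authors used, so the sketch below, with illustrative constants for aluminum, is an assumption:

```python
import math

def taylor_yield_stress(rho, sigma0=20e6, alpha=0.3, M=3.06, G=26e9, b=2.86e-10):
    """Taylor hardening: sigma_y = sigma_0 + alpha * M * G * b * sqrt(rho).
    sigma0 (friction stress, Pa), alpha (Taylor constant), M (Taylor factor),
    G (shear modulus, Pa) and b (Burgers vector, m) are illustrative values
    for aluminum, not the paper's fitted parameters."""
    return sigma0 + alpha * M * G * b * math.sqrt(rho)

# Dislocation densities reported in the abstract (m^-2):
sigma_annealed = taylor_yield_stress(2e12)     # annealed state
sigma_one_pass = taylor_yield_stress(6.2e14)   # after one ECAP-Conform pass
```

The √ρ dependence is what turns the two-orders-of-magnitude rise in dislocation density into a several-fold rise in predicted yield stress.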
Automated volumetric breast density estimation: A comparison with visual assessment
International Nuclear Information System (INIS)
Seo, J.M.; Ko, E.S.; Han, B.-K.; Ko, E.Y.; Shin, J.H.; Hahn, S.Y.
2013-01-01
Aim: To compare automated volumetric breast density (VBD) measurement with visual assessment according to the Breast Imaging Reporting and Data System (BI-RADS), and to determine the factors influencing the agreement between them. Materials and methods: One hundred and ninety-three consecutive screening mammograms reported as negative were included in the study. Three radiologists assigned qualitative BI-RADS density categories to the mammograms. An automated volumetric breast-density method was used to measure VBD (% breast density) and density grade (VDG). Each case was classified into an agreement or disagreement group according to the comparison between visual assessment and VDG. The correlation between visual assessment and VDG was obtained. Various physical factors were compared between the two groups. Results: Agreement between visual assessment by the radiologists and VDG was good (ICC value = 0.757). VBD showed a highly significant positive correlation with visual assessment (Spearman's ρ = 0.754, p < 0.001). VBD and the x-ray tube target were significantly different between the agreement and disagreement groups (p = 0.02 and 0.04, respectively). Conclusion: Automated VBD is a reliable objective method to measure breast density. The agreement between VDG and visual assessment by radiologists might be influenced by physical factors
Bayesian estimation methods in metrology
International Nuclear Information System (INIS)
Cox, M.G.; Forbes, A.B.; Harris, P.M.
2004-01-01
In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Standardization Organization in 1995 published the Guide to the Expression of Uncertainty in Measurement and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods
Semiautomatic estimation of breast density with DM-Scan software.
Martínez Gómez, I; Casals El Busto, M; Antón Guirao, J; Ruiz Perales, F; Llobet Azpitarte, R
2014-01-01
To evaluate the reproducibility of the calculation of breast density with DM-Scan software, which is based on the semiautomatic segmentation of fibroglandular tissue, and to compare it with the reproducibility of estimation by visual inspection. The study included 655 direct digital mammograms acquired using craniocaudal projections. Three experienced radiologists analyzed the density of the mammograms using DM-Scan, and the inter- and intra-observer agreement between pairs of radiologists for the Boyd and BI-RADS® scales were calculated using the intraclass correlation coefficient. The Kappa index was used to compare the inter- and intra-observer agreements with those obtained previously for visual inspection in the same set of images. For visual inspection, the mean interobserver agreement was 0.876 (95% CI: 0.873-0.879) on the Boyd scale and 0.823 (95% CI: 0.818-0.829) on the BI-RADS® scale. The mean intraobserver agreement was 0.813 (95% CI: 0.796-0.829) on the Boyd scale and 0.770 (95% CI: 0.742-0.797) on the BI-RADS® scale. For DM-Scan, the mean inter- and intra-observer agreement was 0.92, considerably higher than the agreement for visual inspection. The semiautomatic calculation of breast density using DM-Scan software is more reliable and reproducible than visual estimation and reduces the subjectivity and variability in determining breast density. Copyright © 2012 SERAM. Published by Elsevier España. All rights reserved.
Covariance and correlation estimation in electron-density maps.
Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna
2012-03-01
Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.
Density Estimation in Several Populations With Uncertain Population Membership
Ma, Yanyuan; Hart, Jeffrey D.; Carroll, Raymond J.
2011-01-01
sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate
Combinatorial nuclear level density by a Monte Carlo method
International Nuclear Information System (INIS)
Cerf, N.
1994-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations
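The counting idea can be illustrated on a toy shell-model space, where uniform Monte Carlo sampling of fermion configurations already works; the paper's Metropolis importance sampling matters in large spaces where uniform sampling fails. Everything below (the single-particle spectrum, particle number, bin width) is an illustrative assumption, not the authors' calculation:

```python
import itertools
import math
import random
from collections import Counter

# Toy single-particle spectrum (arbitrary units) of a 12-level space.
eps = [0.5 * k for k in range(12)]
A = 4                                   # number of fermions
total = math.comb(len(eps), A)          # 495 configurations in all

# Monte Carlo: sample random A-particle configurations and histogram their
# total energies; the level count per bin is (bin fraction) * total.
random.seed(0)
samples = 20000
hist = Counter()
for _ in range(samples):
    occ = random.sample(range(len(eps)), A)
    hist[round(sum(eps[i] for i in occ))] += 1    # unit-width energy bins

mc_density = {E: total * c / samples for E, c in sorted(hist.items())}

# Exact counts for comparison (direct counting is feasible in this tiny
# space, which is exactly what is impracticable for high-A nuclei).
exact = Counter(round(sum(eps[i] for i in occ))
                for occ in itertools.combinations(range(len(eps)), A))
```

In a realistic shell-model space `total` is astronomically large, so only the sampled fractions are accessible, which is the point of the Monte Carlo estimate.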
Estimation of current density distribution under electrodes for external defibrillation
Directory of Open Access Journals (Sweden)
Papazov Sava P
2002-12-01
Background: Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators presents new demanding requirements for the structure of electrodes. Method and Results: Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion: The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise.
Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.
Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M
2015-05-01
Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.
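A simplified way to see why autocorrelation breaks conventional KDE: the effective sample size of an autocorrelated position series is far below the nominal N, so an IID bandwidth rule is badly miscalibrated. The sketch below uses the standard AR(1) effective-sample-size approximation and Silverman's rule of thumb; it illustrates the problem the abstract describes, and is not the AKDE estimator itself:

```python
import numpy as np

rng = np.random.default_rng(2)
N, phi = 5000, 0.95                     # relocations, AR(1) autocorrelation

# Simulate a 1-D autocorrelated movement track (AR(1), i.e. a discretized
# Ornstein-Uhlenbeck-type process).
x = np.empty(N)
x[0] = 0.0
for t in range(1, N):
    x[t] = phi * x[t - 1] + rng.normal(0.0, 1.0)

# Effective sample size of an AR(1) series (standard approximation).
n_eff = N * (1 - phi) / (1 + phi)

# Silverman's rule-of-thumb bandwidth computed with N vs with n_eff: the
# IID choice is far too narrow for autocorrelated data, which is one way
# to see why conventional KDE home ranges come out grossly underestimated.
sd = x.std(ddof=1)
h_iid = 1.06 * sd * N ** (-1 / 5)
h_corrected = 1.06 * sd * n_eff ** (-1 / 5)
```

Note how finer sampling of the same path raises N but not n_eff, matching the abstract's observation that conventional KDE degrades as resolution improves.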
Nonparametric Bayesian density estimation on manifolds with applications to planar shapes.
Bhattacharya, Abhishek; Dunson, David B
2010-12-01
Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback-Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.
Level density in the complex scaling method
International Nuclear Information System (INIS)
Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki
2005-01-01
It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
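For reference, the exact continuum level density against which the CSM result is compared is conventionally obtained from the scattering phase shift δ(E) via the standard single-channel relation

```latex
\Delta\rho(E) = \rho(E) - \rho_{0}(E) = \frac{1}{\pi}\,\frac{\mathrm{d}\delta(E)}{\mathrm{d}E},
```

where ρ₀(E) is the level density of the free (non-interacting) system; inverting this relation is what allows phase shifts to be recovered from the discretized CLD.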
Bulk density estimation using a 3-dimensional image acquisition and analysis system
Directory of Open Access Journals (Sweden)
Heyduk Adam
2016-01-01
The paper presents a concept of dynamic bulk density estimation of a particulate matter stream using a 3D image analysis system and a conveyor belt scale. The method of image acquisition should be adjusted to the type of scale. The paper presents some laboratory results of static bulk density measurements using the MS Kinect time-of-flight camera and OpenCV/Matlab software. Measurements were made for several different size classes.
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery
Directory of Open Access Journals (Sweden)
Shouyang Liu
2017-05-01
Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine vision based method to automate the density survey of wheat at early stages. Images taken with a high resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasted conditions with sowing densities ranging from 100 to 600 seeds·m⁻². Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages.
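The segment-then-count stage of such a pipeline can be sketched with an excess-green threshold and a simple connected-components pass (a self-contained toy, not the authors' classifier or their neural-network counting step; the threshold value and synthetic image are assumptions):

```python
import numpy as np

def count_plants(rgb: np.ndarray, excess_green_thresh: float = 20.0) -> int:
    """Count plant objects: threshold the excess-green index (2G - R - B),
    then label 4-connected components with a simple flood fill."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    green = (2 * g - r - b) > excess_green_thresh
    seen = np.zeros_like(green, dtype=bool)
    h, w = green.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if green[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:              # flood-fill one connected object
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and green[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

# Synthetic image: brown soil background with three small green blobs.
img = np.full((40, 40, 3), (120, 90, 60), dtype=np.uint8)
for (cy, cx) in [(10, 10), (10, 30), (30, 20)]:
    img[cy - 2:cy + 2, cx - 2:cx + 2] = (40, 180, 40)

n_plants = count_plants(img)  # -> 3
```

Dividing such a count by the ground area imaged gives the density the abstract reports in plants per square metre; in the paper the per-object count comes from the trained neural network rather than one-object-one-plant.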
Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.
Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li
2017-05-03
In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark-channel, saturation-value, and chroma which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images.
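A sketch of the surrogate idea: fit a polynomial regression from fog-relevant features to optical depth, then use the fitted depth for defogging. The feature values, the generating polynomial, and the noise level below are all synthetic assumptions, not the paper's data or fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fog-relevant features for 200 image patches: dark-channel,
# saturation-value difference, chroma (all illustrative, scaled to [0, 1]).
n = 200
features = rng.uniform(0.0, 1.0, (n, 3))

# Simulated "true" optical depth with a known polynomial dependence + noise.
d_true = (0.2 + 1.5 * features[:, 0] - 0.8 * features[:, 1]
          + 0.5 * features[:, 2] ** 2 + rng.normal(0.0, 0.02, n))

# Surrogate model: quadratic polynomial regression fitted by least squares.
X = np.column_stack([np.ones(n), features, features ** 2])
coef, *_ = np.linalg.lstsq(X, d_true, rcond=None)
d_pred = X @ coef
rmse = float(np.sqrt(np.mean((d_pred - d_true) ** 2)))

# With an estimated depth d, defogging follows Koschmieder's model:
#   J = (I - A) / exp(-beta * d) + A   (A: atmospheric light, beta: scattering)
```

The cheap polynomial stands in for the expensive or unobservable true optical depth, which is what "surrogate model" means here.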
Crowd density estimation based on convolutional neural networks with mixed pooling
Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming
2017-09-01
Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty in adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs. It replaces deterministic pooling operations with a learned parameter that combines conventional max pooling with average pooling. Second, we present a classification strategy, in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and easily than other methods.
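Mixed pooling as described, λ·max + (1−λ)·avg over each window, can be sketched in a few lines (in the paper λ is a learned parameter; here it is simply an argument, and the 2×2 non-overlapping window is an assumed configuration):

```python
import numpy as np

def mixed_pool(x: np.ndarray, lam: float) -> np.ndarray:
    """Mixed pooling over non-overlapping 2x2 windows:
    lam * max-pool + (1 - lam) * average-pool."""
    h, w = x.shape
    # Reshape so each 2x2 window becomes axes (1, 3); trim odd edges.
    blocks = x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    mx = blocks.max(axis=(1, 3))
    avg = blocks.mean(axis=(1, 3))
    return lam * mx + (1 - lam) * avg

x = np.arange(16, dtype=float).reshape(4, 4)
p_max = mixed_pool(x, 1.0)   # reduces to pure max pooling
p_avg = mixed_pool(x, 0.0)   # reduces to pure average pooling
p_mix = mixed_pool(x, 0.5)   # halfway blend of the two
```

The two extremes of λ recover the conventional deterministic pooling operations, which is the sense in which the blend regularizes between them.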
METAPHOR: Probability density estimation for machine learning based photometric redshifts
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template-fitting method (Le Phare).
An analytical framework for estimating aquatic species density from environmental DNA
Chambert, Thierry; Pilliod, David S.; Goldberg, Caren S.; Doi, Hideyuki; Takahara, Teruhiko
2018-01-01
Environmental DNA (eDNA) analysis of water samples is on the brink of becoming a standard monitoring method for aquatic species. This method has improved detection rates over conventional survey methods and thus has demonstrated effectiveness for estimation of site occupancy and species distribution. The frontier of eDNA applications, however, is to infer species density. Building upon previous studies, we present and assess a modeling approach that aims at inferring animal density from eDNA. The modeling combines eDNA and animal count data from a subset of sites to estimate species density (and associated uncertainties) at other sites where only eDNA data are available. As a proof of concept, we first perform a cross-validation study using experimental data on carp in mesocosms. In these data, fish densities are known without error, which allows us to test the performance of the method with known data. We then evaluate the model using field data from a study on a stream salamander species to assess the potential of this method to work in natural settings, where density can never be known with absolute certainty. Two alternative distributions (Normal and Negative Binomial) to model variability in eDNA concentration data are assessed. Assessment based on the proof of concept data (carp) revealed that the Negative Binomial model provided much more accurate estimates than the model based on a Normal distribution, likely because eDNA data tend to be overdispersed. Greater imprecision was found when we applied the method to the field data, but the Negative Binomial model still provided useful density estimates. We call for further model development in this direction, as well as further research targeted at sampling design optimization. It will be important to assess these approaches on a broad range of study systems.
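The overdispersion argument above, favoring the Negative Binomial over the Normal model, can be illustrated with a quick method-of-moments check; the copy-number counts below are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical eDNA copy-number counts at one site (overdispersed: variance >> mean)
counts = np.array([12, 3, 0, 45, 7, 2, 30, 1, 9, 60])

mu = counts.mean()
var = counts.var(ddof=1)

# Method-of-moments fit of the Negative Binomial size parameter k,
# using Var = mu + mu^2 / k  (a Poisson- or Normal-like model assumes Var ~ mu)
k = mu**2 / (var - mu)

print(f"mean={mu:.1f}, variance={var:.1f}, NB size k={k:.2f}")
print("overdispersed:", var > mu)  # True -> the NB model is the better fit
```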
Volumetric breast density estimation from full-field digital mammograms.
Engeland, S. van; Snoeren, P.R.; Huisman, H.J.; Boetes, C.; Karssemeijer, N.
2006-01-01
A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast
DNA-based population density estimation of black bear at northern ...
African Journals Online (AJOL)
The analysis of deoxyribonucleic acid (DNA) microsatellites from hair samples obtained by the non-invasive method of traps was used to estimate the population density of black bears (Ursus americanus eremicus) in a mountain located at the county of Lampazos, Nuevo Leon, Mexico. The genotyping of bears was ...
Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.
Directory of Open Access Journals (Sweden)
Alexander Richard Braczkowski
Full Text Available Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km²) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km²). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.
3D depth-to-basement and density contrast estimates using gravity and borehole data
Barbosa, V. C.; Martins, C. M.; Silva, J. B.
2009-05-01
We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada's basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding
Dose estimation by biological methods
International Nuclear Information System (INIS)
Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.
1997-01-01
Human beings are exposed to strong artificial radiation sources, mainly in two forms: the first concerns occupationally exposed personnel (POE) and the second, persons requiring radiological treatment. A third, less common form is by accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows analysis of chromosome aberrations. (Author)
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
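As background, the basic ingredient named above, a kernel density estimator, can be sketched as follows. This is the generic Gaussian-kernel estimator, not the paper's closed-form time-resolution formulae:

```python
import numpy as np

def kde_gaussian(samples, x, bandwidth):
    """Gaussian kernel density estimate of `samples`, evaluated at points `x`:
    one kernel is centred on every sample and the kernels are averaged."""
    samples = np.asarray(samples, dtype=float)
    x = np.asarray(x, dtype=float)
    z = (x[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=2000)   # synthetic data for illustration
est = kde_gaussian(samples, np.array([0.0, 1.0, 3.0]), bandwidth=0.3)
print(est)  # density falls off away from 0, roughly tracking N(0,1)
```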
Directory of Open Access Journals (Sweden)
Noritaka Shimizu
2016-02-01
Full Text Available We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ = 2+ and 2− states in 58Ni in a unified manner.
Importance of tree basic density in biomass estimation and associated uncertainties
DEFF Research Database (Denmark)
Njana, Marco Andrew; Meilby, Henrik; Eid, Tron
2016-01-01
Key message Aboveground and belowground tree basic densities varied between and within the three mangrove species. If appropriately determined and applied, basic density may be useful in estimation of tree biomass. Predictive accuracy of the common (i.e. multi-species) models including aboveground...... of sustainable forest management, conservation and enhancement of carbon stocks (REDD+) initiatives offer an opportunity for sustainable management of forests including mangroves. In carbon accounting for REDD+, it is required that carbon estimates prepared for monitoring reporting and verification schemes...... and examine uncertainties in estimation of tree biomass using indirect methods. Methods This study focused on three dominant mangrove species (Avicennia marina (Forssk.) Vierh, Sonneratia alba J. Smith and Rhizophora mucronata Lam.) in Tanzania. A total of 120 trees were destructively sampled for aboveground...
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm with and without the fraction images from the spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than the other spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5%, and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm⁻², ranging from 0.00 to 67.35 t·hm⁻². This implies that spectral mixture analysis provides great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
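A minimal sketch of constrained linear spectral unmixing, the first technique compared above: the three-band endmember spectra below are hypothetical values, and the heavily weighted sum-to-one row is a common least-squares trick standing in for whatever constrained solver the study actually used.

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Linear spectral unmixing with a (soft) sum-to-one constraint.

    endmembers: (n_bands, n_classes) matrix of pure-class spectra.
    A constraint row forcing the fractions to sum to 1 is appended
    with a large weight before solving by least squares.
    """
    n_bands, n_classes = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, n_classes))])
    b = np.append(pixel, weight * 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Hypothetical 3-band reflectances for two classes, e.g. forest / water
E = np.array([[0.1, 0.6],
              [0.4, 0.2],
              [0.5, 0.1]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # a pixel that is 70% forest
print(unmix(mixed, E))                   # fractions close to [0.7, 0.3]
```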
Directory of Open Access Journals (Sweden)
Z. Lari
2012-07-01
Full Text Available Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds will be processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud that highly affects the performance of data processing techniques and the quality of extracted information from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied for laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take into account the 3D relationship among the points and the physical properties of the surfaces to which they belong. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigenvalue analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper will discuss these approaches and highlight their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
Light element nucleosynthesis and estimates of the universal baryon density
International Nuclear Information System (INIS)
Mathews, G.J.; Viola, V.E.
1978-01-01
The present mean universal baryon density, ρ_b, is of interest because it and the Hubble constant determine the curvature of the Universe. The available indicators of ρ_b come from the present deuterium abundance, if it is assumed that ''big-bang'' nucleosynthesis must produce enough D to at least match the abundance of this nuclide in the interstellar medium. An alternative method utilizing the 7Li/D ratio is used to evaluate ρ_b. With this method the difficulty associated with the astration process can be essentially canceled from the problem. The results obtained indicate an open Universe with a best guess for ρ_b of 7.1 × 10⁻³¹ g/cm³. 1 figure, 1 table
Directory of Open Access Journals (Sweden)
Mario Zortéa Antunes Junior
2009-12-01
Full Text Available The objective of this work was to estimate the number of leaves in the branches of mango cultivar canopies and to estimate the leaf area density using, respectively, an allometric relation and a light interception model. The work was carried out with the Alfa, Roxa and Malind cultivars, grown at the experimental farm of the Universidade Federal de Mato Grosso, in the municipality of Santo Antônio do Leverger, MT, Brazil. The equations tested for determining the number of leaves had excellent performance, with confidence indexes ranging from 0.85 to 0.94, and can be used as an alternative for estimating the leaf area of the three cultivars. The light interception model also had good performance in estimating leaf area density, with confidence indexes ranging from 0.97 to 0.99 and from 0.68 to 0.95 for the Roxa and Malind mango cultivars, respectively.
Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations
Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik
2009-04-01
Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
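The simple linear density-temperature equations mentioned above can be illustrated by a least-squares fit; the measurements below are hypothetical numbers in the typical range for vegetable oils, not the paper's data.

```python
import numpy as np

# Hypothetical density measurements (kg/m^3) of an oil vs. temperature (°C);
# oil and biodiesel densities are, to good accuracy, linear in temperature.
T = np.array([20.0, 40.0, 60.0, 80.0])
rho = np.array([915.0, 901.2, 887.4, 873.6])

# Fit rho(T) = a + b*T by least squares
b, a = np.polyfit(T, rho, 1)

print(f"rho(T) = {a:.1f} {b:+.3f}*T")
print("predicted density at 50 °C:", a + b * 50.0)
```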
Volumetric breast density estimation from full-field digital mammograms.
van Engeland, Saskia; Snoeren, Peter R; Huisman, Henkjan; Boetes, Carla; Karssemeijer, Nico
2006-03-01
A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo
2017-03-01
Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a convolutional neural network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable for learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissues. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. The image set was composed of the original mammogram and an eigen-image, which was able to capture the X-ray characteristics, even though CNNs are well known to extract features effectively from the original image alone. The 100 test images, which were not used in training the CNN, were used to validate the performance. The correlation coefficient between the breast density estimates by the CNN and those by the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.
DEFF Research Database (Denmark)
Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas
2012-01-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfit...... the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error...... sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this....
Estimating abundance and density of Amur tigers along the Sino-Russian border.
Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping
2016-07-01
As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11,400 km² state space, density estimates were 0.33 and 0.40 individuals/100 km² in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km² corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
Methods to enhance blanket power density
International Nuclear Information System (INIS)
Hsu, P.Y.; Miller, L.G.; Bohn, T.S.; Deis, G.A.; Longhurst, G.R.; Masson, L.S.; Wessol, D.E.; Abdou, M.A.
1982-06-01
The overall objective of this task is to investigate the extent to which the power density in the FED/INTOR breeder blanket test modules can be enhanced by artificial means. Assuming a viable approach can be developed, it will allow advanced reactor blanket modules to be tested on FED/INTOR under representative conditions
A Method of Nuclear Software Reliability Estimation
International Nuclear Information System (INIS)
Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol
2011-01-01
A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is shown that this method is capable of accurately estimating the remaining number of software defects of the on-demand type, which directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
Method-related estimates of sperm vitality.
Cooper, Trevor G; Hellenkemper, Barbara
2009-01-01
Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.
Using bremsstrahlung for electron density estimation and correction in EAST tokamak
Energy Technology Data Exchange (ETDEWEB)
Chen, Yingjie, E-mail: bestfaye@gmail.com; Wu, Zhenwei; Gao, Wei; Jie, Yinxian; Zhang, Jizong; Huang, Juan; Zhang, Ling; Zhao, Junyu
2013-11-15
Highlights: • The visible bremsstrahlung diagnostic provides a simple and effective tool for electron density estimation in steady state discharges. • This method can make up for some disadvantages of the present FIR and TS diagnostics in the EAST tokamak. • Line-averaged electron density has been deduced from the central VB signal. The results can also be used for FIR n_e correction. • Typical n_e profiles have been obtained with T_e and reconstructed bremsstrahlung profiles. -- Abstract: In EAST the electron density (n_e) is measured by the multi-channel far-infrared (FIR) hydrogen cyanide (HCN) interferometer and Thomson scattering (TS) diagnostics. However, it is difficult to obtain an accurate n_e profile because there are many problems in the current electron density diagnostics. Since the visible bremsstrahlung (VB) emission coefficient has a strong dependence on electron density, the visible bremsstrahlung measurement system developed to determine the ion effective charge (Z_eff) may also be used for n_e estimation via inverse operations. Under the assumption that Z_eff has a flat profile and does not change significantly in steady state discharges, the line-averaged electron density (n̄_e) has been deduced from VB signals in L-mode and H-mode discharges in EAST. The results are in good agreement with n̄_e from FIR, which proves that the VB measurement is an effective tool for n_e estimation. The VB diagnostic is also applied to n̄_e correction when the FIR n̄_e is wrong, because the laser phase-shift reversal together with noise causes errors when the electron density changes rapidly in H-mode discharges. Typical n_e profiles in the L-mode and H-mode phases are also deduced with reconstructed bremsstrahlung profiles.
DEFF Research Database (Denmark)
Rosholm, A; Hyldstrup, L; Backsgaard, L
2002-01-01
A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in radius, ulna and the three middle metacarpal bones are identified and approximately 1800 geometrical measurements from these bones......-ray absorptiometry (r = 0.86, p Relative to this age-related loss, the reported short...... sites and a precision that potentially allows for relatively short observation intervals.
Power spectral density of velocity fluctuations estimated from phase Doppler data
Jicha Miroslav; Lizal Frantisek; Jedelsky Jan
2012-01-01
Laser Doppler Anemometry (LDA) and its modifications, such as Phase Doppler Particle Anemometry (P/DPA), are point-wise methods for optical non-intrusive measurement of particle velocity with a high data rate. Conversion of the LDA velocity data from the temporal to the frequency domain, i.e. calculation of the power spectral density (PSD) of velocity fluctuations, is a non-trivial task due to non-equidistant data sampling in time. We briefly discuss possibilities for the PSD estimation and specify limitations caused
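One standard way to handle the non-equidistant sampling mentioned above is the Lomb-Scargle periodogram, shown here as a generic alternative; the abstract does not say which estimator the authors adopt.

```python
import numpy as np
from scipy.signal import lombscargle

# Irregularly sampled "velocity" signal, as an LDA burst processor would
# produce: a 5 Hz oscillation observed at random arrival times
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 10.0, size=500))
v = np.sin(2 * np.pi * 5.0 * t)

# The Lomb-Scargle periodogram handles non-equidistant samples directly,
# without resampling onto a uniform grid (frequencies are angular in scipy)
freqs_hz = np.linspace(0.5, 15.0, 300)
pgram = lombscargle(t, v, 2 * np.pi * freqs_hz)

print("spectral peak near %.2f Hz" % freqs_hz[np.argmax(pgram)])
```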
Density measurements of small amounts of high-density solids by a floatation method
International Nuclear Information System (INIS)
Akabori, Mitsuo; Shiba, Koreyuki
1984-09-01
A floatation method for determining the density of small amounts of high-density solids is described. The use of a float combined with an appropriate floatation liquid allows us to measure the density of high-density substances in small amounts. Using a sample of 0.1 g in weight, a floatation liquid of 3.0 g cm⁻³ in density and a float of 1.5 g cm⁻³ in apparent density, sample densities of 5, 10 and 20 g cm⁻³ are determined to an accuracy better than ±0.002, ±0.01 and ±0.05 g cm⁻³, respectively, which corresponds to about ±1 × 10⁻⁵ cm³ in volume. By means of appropriate degassing treatments, the densities of (Th,U)O₂ pellets of ~0.1 g in weight and ~9.55 g cm⁻³ in density were determined with an accuracy better than ±0.05%. (author)
Carroll, Raymond J.
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys
Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik
2011-01-01
The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...
Spectrum estimation method based on marginal spectrum
International Nuclear Information System (INIS)
Cai Jianhua; Hu Weiwen; Wang Xianchun
2011-01-01
The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method was given and the linearity of the marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum were further analyzed. Then the Hilbert spectrum estimation algorithm was discussed in detail, and simulation results were given at last. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
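The marginal-spectrum idea above can be sketched numerically. The sketch below skips the empirical mode decomposition step (it assumes a single-mode signal), forms the analytic signal via the FFT, and accumulates instantaneous amplitude over instantaneous-frequency bins; function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (one-sided spectrum), i.e. a Hilbert transform."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def marginal_spectrum(x, fs, nbins=64):
    """Accumulate instantaneous amplitude over instantaneous-frequency bins."""
    z = analytic_signal(x)
    amp = np.abs(z)
    inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
    edges = np.linspace(0.0, fs / 2, nbins + 1)
    spec, _ = np.histogram(inst_freq, bins=edges, weights=amp[:-1])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, spec

fs = 100.0
t = np.arange(0, 4, 1 / fs)
centers, spec = marginal_spectrum(np.sin(2 * np.pi * 10 * t), fs)
peak = centers[np.argmax(spec)]   # frequency bin carrying the most amplitude
```

For a pure 10 Hz tone the amplitude mass concentrates in the bin containing 10 Hz, unlike an FFT periodogram this weighting generalizes directly to non-stationary instantaneous frequencies.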
Histogram specification as a method of density modification
International Nuclear Information System (INIS)
Harrison, R.W.
1988-01-01
A new method for improving the quality and extending the resolution of Fourier maps is described. The method is based on a histogram analysis of the electron density. The distribution of electron density values in the map is forced to be 'ideal'. The 'ideal' distribution is assumed to be Gaussian. The application of the method to improve the electron density map for the protein Acinetobacter asparaginase, which is a tetrameric enzyme of molecular weight 140000 daltons, is described. (orig.)
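The core operation described above (forcing the distribution of map values to an "ideal" one) is a histogram specification. A minimal sketch, with a Gaussian target standing in for the "ideal" distribution; the function names and sample sizes are illustrative:

```python
import numpy as np

def specify_histogram(values, target_sampler, n_target=10000, seed=0):
    """Map each value to the equally ranked quantile of the target
    distribution, forcing the histogram of the output to match the target."""
    rng = np.random.default_rng(seed)
    target = np.sort(target_sampler(rng, n_target))
    ranks = np.argsort(np.argsort(values))          # rank of each input value
    idx = (ranks * n_target) // len(values)         # position in sorted target
    return target[idx]

rng = np.random.default_rng(1)
rho = rng.exponential(size=2000)                    # skewed 'density map' values
matched = specify_histogram(rho, lambda r, n: r.normal(0.0, 1.0, n))
```

The mapping is monotone, so relative ordering of density values (and hence peak positions in the map) is preserved while the value distribution becomes Gaussian.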
Factorization method for simulating QCD at finite density
International Nuclear Information System (INIS)
Nishimura, Jun
2003-01-01
We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density. (author)
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredictable ways. To reduce the computational cost, data streams are often studied through condensed representations, e.g., the probability density function (PDF). This thesis aims at developing an online density estimator, called KDE-Track, for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
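The grid-plus-interpolation scheme described above can be sketched in a few lines. This is a simplified, fixed-grid version (no adaptive resampling, no forgetting of old samples); the class name and parameters are illustrative:

```python
import numpy as np

class GridKDE:
    """Online KDE maintained at fixed resampling points; density at an
    arbitrary point is obtained by linear interpolation between them."""
    def __init__(self, lo, hi, n_points=101, bandwidth=0.3):
        self.grid = np.linspace(lo, hi, n_points)
        self.h = bandwidth
        self.sums = np.zeros(n_points)
        self.count = 0

    def update(self, x):
        # accumulate the Gaussian kernel contribution of one arriving sample
        u = (self.grid - x) / self.h
        self.sums += np.exp(-0.5 * u * u) / (self.h * np.sqrt(2 * np.pi))
        self.count += 1

    def pdf(self, x):
        return np.interp(x, self.grid, self.sums / self.count)

rng = np.random.default_rng(0)
kde = GridKDE(-4, 4)
for x in rng.normal(size=5000):
    kde.update(x)
```

Each update costs O(grid size) regardless of how many samples have arrived, which is what makes the representation attractive for streams.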
Network Kernel Density Estimation for the Analysis of Facility POI Hotspots
Directory of Open Access Journals (Sweden)
YU Wenhao
2015-12-01
Full Text Available The distribution pattern of urban facility POIs (points of interest) usually forms clusters (i.e. "hotspots") in urban geographic space. To detect such hotspots, existing methods mostly employ spatial density estimation based on Euclidean distance, ignoring the fact that the service function and interrelation of urban facilities operate along network path distance rather than conventional Euclidean distance. With such methods, it is difficult to delimit the shape and size of a hotspot exactly and objectively. Therefore, this research adopts kernel density estimation based on network distance to compute hotspot density and proposes a simple and efficient algorithm. The algorithm extends the 2D dilation operator to a 1D morphological operator, thus computing the density of each network unit. Evaluation experiments suggest that the algorithm is more efficient and scalable than existing algorithms. Based on a case study of real POI data, the extent of a hotspot can highlight the spatial characteristics of urban functions along traffic routes, providing valuable spatial knowledge and information services for applications in regional planning, navigation and geographic information inquiry.
DEFF Research Database (Denmark)
Buch-Kromann, Tine; Nielsen, Jens
2012-01-01
This paper introduces a multivariate density estimator for truncated and censored data with special emphasis on extreme values based on survival analysis. A local constant density estimator is considered. We extend this estimator by means of tail flattening transformation, dimension reducing prior...
Automatic breast tissue density estimation scheme in digital mammography images
Menechelli, Renan C.; Pacheco, Ana Luisa V.; Schiabel, Homero
2017-03-01
Cases of breast cancer have increased substantially each year. However, radiologists are subject to subjectivity and failures of interpretation that may affect the final diagnosis in this examination. High density in breast tissue is an important factor related to these failures. Thus, among their many functions, some CADx (computer-aided diagnosis) schemes classify breasts according to the predominant density. To aid in such a procedure, this work describes automated software for classification and statistical information on the percentage change in breast tissue density, through analysis of subregions (ROIs) of the whole mammography image. Once the breast is segmented, the image is divided into regions from which texture features are extracted. An artificial neural network (MLP) was then used to categorize the ROIs. Experienced radiologists had previously determined the density classification of the ROIs, which served as the reference for the software evaluation. In tests, its average accuracy was 88.7% in ROI classification, and 83.25% in classifying whole-breast density into the 4 BI-RADS density classes, over a set of 400 images. Furthermore, when considering only a simplified two-class division (high and low densities), the classifier accuracy reached 93.5%, with AUC = 0.95.
International Nuclear Information System (INIS)
Briggs, C.K.; Tsugawa, R.T.; Hendricks, C.D.; Souers, P.C.
1975-01-01
The literature values for the 0.55-μm refractive index N of liquid and gaseous H₂ and D₂ are combined to yield the equation (N − 1) = [(3.15 ± 0.12) × 10⁻⁶]ρ, where ρ is the density in moles per cubic meter. This equation can be extrapolated to 300 K for use on DT in solid, liquid, and gas phases. The equation is based on a review of solid-hydrogen densities measured in bulk and also by diffraction methods. By extrapolation, the estimated densities and 0.55-μm refractive indices for DT are given. Radiation-induced point defects could possibly cause optical absorption and a resulting increased refractive index in solid DT and T₂. The effect of the DT refractive index in measuring glass and cryogenic DT laser targets is also described
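The abstract's linear relation between refractive index and molar density is trivially computable. A one-line sketch using the quoted coefficient; the molar density used below is purely illustrative, not a tabulated value for any isotope:

```python
# (N - 1) = 3.15e-6 * rho, with rho the density in mol m^-3 (from the abstract).
def refractive_index(rho_mol_per_m3, k=3.15e-6):
    return 1.0 + k * rho_mol_per_m3

# illustrative liquid-like molar density of 4.0e4 mol m^-3
n = refractive_index(4.0e4)   # -> about 1.126
```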
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
Digital Repository Service at National Institute of Oceanography (India)
Madhupratap, M.; Achuthankutty, C.T.; Nair, S.R.S.
Direct sampling of the sandy substratum of the Agatti Lagoon with a corer showed the presence of very high densities of epibenthic forms. On average, densities were about 25 times higher than previously estimated with emergence traps. About 80...
Directory of Open Access Journals (Sweden)
Marco Lombardo
Full Text Available PURPOSE: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood-illuminated retinal images. METHODS: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degrees temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and the foveal center, and the manual checking of the cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. RESULTS: The cone density declined with decreasing sampling area, and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL, and between data referred to the PRL or the foveal center, was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, the presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. CONCLUSIONS: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi
Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals
Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew
2011-01-01
Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
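The reported "95% of activity within 1.83 km of the home-range center" maps directly onto the movement-scale parameter of the bivariate-normal (half-normal) home-range model commonly used in spatial capture-recapture. A sketch of that back-calculation, assuming that standard model (the paper's exact parameterization may differ):

```python
import math

# For a circular bivariate-normal activity distribution, the squared distance
# from the center is sigma^2 * chi-square(df=2), so the 95% activity radius is
#   r95 = sigma * sqrt(-2 * ln(0.05))  ~= 2.448 * sigma.
def sigma_from_r95(r95_km, p=0.95):
    return r95_km / math.sqrt(-2.0 * math.log(1.0 - p))

sigma = sigma_from_r95(1.83)   # movement scale (km) implied by the 1.83 km radius
```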
Local-scaling density-functional method: Intraorbit and interorbit density optimizations
International Nuclear Information System (INIS)
Koga, T.; Yamamoto, Y.; Ludena, E.V.
1991-01-01
The recently proposed local-scaling density-functional theory provides us with a practical method for the direct variational determination of the electron density function ρ(r). The structure of ''orbits,'' which ensures the one-to-one correspondence between the electron density ρ(r) and the N-electron wave function Ψ({r_k}), is studied in detail. For the realization of local-scaling density-functional calculations, procedures for intraorbit and interorbit optimizations of the electron density function are proposed. These procedures are numerically illustrated for the helium atom in its ground state at the beyond-Hartree-Fock level
Efficient pseudospectral methods for density functional calculations
International Nuclear Information System (INIS)
Murphy, R. B.; Cao, Y.; Beachy, M. D.; Ringnalda, M. N.; Friesner, R. A.
2000-01-01
Novel improvements of the pseudospectral method for assembling the Coulomb operator are discussed. These improvements consist of a fast atom-centered multipole method and a variation of the Head-Gordon J-engine analytic integral evaluation. The details of the methodology are discussed and performance evaluations are presented for larger molecules within the context of DFT energy and gradient calculations. (c) 2000 American Institute of Physics
A Fast Soft Bit Error Rate Estimation Method
Directory of Open Access Journals (Sweden)
Ait-Idir Tarik
2010-01-01
Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER of a digital communications system instead of using the famous Monte Carlo (MC simulation. This method was based on the estimation of the probability density function (pdf of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest to use a Gaussian Mixture (GM model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the needed number of samples to estimate the BER in order to reduce the required simulation run-time, even at very low BER.
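Once the mixture parameters have been estimated, the analytical BER described above reduces to a weighted sum of Gaussian tail probabilities. A minimal sketch under assumed conventions (bit '1' transmitted as a positive soft value, decision threshold at zero); the mixture parameters below are hypothetical, not fitted by EM:

```python
import math

def ber_from_gaussian_mixture(weights, means, stds):
    """Probability that a soft sample falls below the zero decision
    threshold, i.e. the left-tail mass of the fitted Gaussian mixture."""
    return sum(w * 0.5 * math.erfc(m / (s * math.sqrt(2.0)))
               for w, m, s in zip(weights, means, stds))

# hypothetical 2-component mixture for the '1' branch of a BPSK-like system
ber = ber_from_gaussian_mixture([0.7, 0.3], [1.0, 2.0], [0.5, 0.5])
```

Because the tail mass is evaluated analytically, no rare error events need to be simulated, which is exactly why the approach needs far fewer samples than Monte Carlo at low BER.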
Methods for risk estimation in nuclear energy
Energy Technology Data Exchange (ETDEWEB)
Gauvenet, A [CEA, 75 - Paris (France)
1979-01-01
The author presents methods for estimating the different risks related to nuclear energy: immediate or delayed risks, individual or collective risks, risks of accidents and long-term risks. These methods are now well developed, and their application to other industrial or human problems is currently under way, especially in English-speaking countries.
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000x, thereby greatly improving the applicability of the method.
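The estimator above is, at its core, an average of 3D kernels centered on discrete GPS fixes. A minimal serial sketch (the published implementation adds movement modeling, parallelization, and performance optimization; the bandwidth and synthetic fixes below are illustrative):

```python
import numpy as np

def kde3d(points, query, h):
    """Product-Gaussian kernel density in 3D: mean of kernels centred on
    the telemetry fixes, evaluated at a single query location."""
    d2 = np.sum((points - query) ** 2, axis=1)
    norm = (2 * np.pi * h * h) ** 1.5        # 3D Gaussian normalization
    return np.mean(np.exp(-0.5 * d2 / (h * h))) / norm

rng = np.random.default_rng(2)
fixes = rng.normal(size=(4000, 3))           # synthetic 3D telemetry fixes
dens = kde3d(fixes, np.array([0.0, 0.0, 0.0]), h=0.4)
```

Evaluating this on a fine 3D grid is what makes the problem computationally expensive: cost scales with (number of fixes) × (number of grid voxels), motivating the optimizations the paper reports.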
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...
Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods
Zhaodi Guo; Jingyun Fang; Yude Pan; Richard. Birdsey
2010-01-01
Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...
Method of making stepped photographic density standards of radiographic photographs
International Nuclear Information System (INIS)
Borovin, I.V.; Kondina, M.A.
1987-01-01
In industrial radiography practice the need often arises for a prompt evaluation of the photographic density of an x-ray film. A method of making stepped photographic density standards for industrial radiography by contact printing from a negative is described. The method is intended for industrial radiation flaw detection laboratories not having specialized sensitometric equipment
Comparison of methods for estimating premorbid intelligence
Bright, Peter; van der Linde, Ian
2018-01-01
To evaluate the impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or 'premorbid') estimate of a patient's general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded 'hold/no hold' tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...
Directory of Open Access Journals (Sweden)
Jiang Ge
2017-01-01
Full Text Available System degradation is usually caused by the degradation of multiple parameters. Reliability assessment by the universal generating function is less accurate than Monte Carlo simulation, and the probability density function of the system output performance cannot be obtained. A reliability assessment method based on multi-parameter probability density evolution is therefore presented for complex degraded systems. First, the system output function is constructed from the transitive relation between component parameters and the system output performance. Then, the probability density evolution equation is established based on the probability conservation principle and the system output function. Furthermore, the probability distribution characteristics of the system output performance are obtained by solving the differential equation. Finally, the reliability of the degraded system is estimated. This method does not need to discretize the performance parameters and can establish a continuous probability density function of the system output performance with high calculation efficiency and low cost. A numerical example shows that the method is applicable to evaluating the reliability of multi-parameter degraded systems.
A new approach on seismic mortality estimations based on average population density
Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong
2016-12-01
This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies establish an association between mortality estimation and seismic intensity without considering population density. In China, however, the data are not always available, especially in the very urgent relief situations of a disaster, and population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze earthquake death tolls. The present paper employs average population density to predict final death tolls using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door to conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also discussed in the conclusion.
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method both in terms of detection promptness and accuracy.
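The central idea, detecting change by estimating the difference between two window densities rather than the densities' parameters, can be illustrated with a crude stand-in. The sketch below uses histogram density estimates in place of the paper's least-squares density-difference estimator, and a squared L2 distance as the change statistic; the window sizes and range are illustrative:

```python
import numpy as np

def density_difference_stat(win_a, win_b, lo=-5.0, hi=5.0, nbins=20):
    """Squared L2 distance between two windowed density estimates
    (a histogram-based stand-in for least-squares density-difference)."""
    pa, edges = np.histogram(win_a, bins=nbins, range=(lo, hi), density=True)
    pb, _ = np.histogram(win_b, bins=nbins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return np.sum((pa - pb) ** 2) * width

rng = np.random.default_rng(3)
same = density_difference_stat(rng.normal(size=2000), rng.normal(size=2000))
changed = density_difference_stat(rng.normal(size=2000),
                                  rng.normal(1.5, 1.0, size=2000))
```

A detection threshold on this statistic would be calibrated from the no-change distribution (e.g., by resampling), mirroring the paper's automatic threshold derivation from a target false-positive rate.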
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds as Moody's new data show. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
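The Beta-versus-kernel contrast above is easy to reproduce on synthetic bimodal data. The sketch below fits a Beta by method of moments (a stand-in for the paper's estimation procedure) and a Gaussian KDE; the mixture used as "recovery rates" and the bandwidth are illustrative:

```python
import numpy as np

def beta_mom(x):
    """Method-of-moments Beta(a, b) fit to data on (0, 1)."""
    m, v = x.mean(), x.var()
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

def gauss_kde(x, grid, h=0.03):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u * u).mean(axis=1) / (h * np.sqrt(2 * np.pi))

# bimodal synthetic 'recovery rates': many low and many high recoveries
rng = np.random.default_rng(4)
x = np.concatenate([rng.beta(2, 8, 1000), rng.beta(8, 2, 1000)])
a, b = beta_mom(x)                       # single Beta fit to bimodal data
grid = np.linspace(0.01, 0.99, 99)
kde = gauss_kde(x, grid)
```

On this data the moment-matched Beta has a, b < 1, i.e. a U-shape with modes pinned at 0 and 1, while the KDE recovers the two interior humps, which is the defect the abstract describes.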
Ge, Zhenpeng; Wang, Yi
2017-04-20
Molecular dynamics simulations of nanoparticles (NPs) are increasingly used to study their interactions with various biological macromolecules. Such simulations generally require detailed knowledge of the surface composition of the NP under investigation. Even for some well-characterized nanoparticles, however, this knowledge is not always available. An example is nanodiamond, a nanoscale diamond particle with a surface dominated by oxygen-containing functional groups. In this work, we explore using the harmonic restraint method developed by Venable et al. to estimate the surface charge density (σ) of nanodiamonds. Based on the Gouy-Chapman theory, we convert the experimentally determined zeta potential of a nanodiamond to an effective charge density (σ_eff), and then use the latter to estimate σ via molecular dynamics simulations. Through scanning a series of nanodiamond models, we show that the above method provides a straightforward protocol to determine the surface charge density of relatively large (> ~100 nm) NPs. Overall, our results suggest that despite certain limitations, the above protocol can be readily employed to guide model construction for MD simulations, which is particularly useful when only limited experimental information on the NP surface composition is available to a modeler.
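The zeta-potential-to-charge-density conversion invoked above is commonly done with the Grahame equation from Gouy-Chapman theory. A sketch under standard assumptions (symmetric 1:1 electrolyte, water at room temperature; the paper's exact conversion and the numbers below are illustrative):

```python
import math

# Grahame relation: sigma = sqrt(8 c eps kB T) * sinh(e * zeta / (2 kB T))
# for a symmetric 1:1 electrolyte of number density c (ions per m^3).
def grahame_sigma(zeta_volts, c_molar, T=298.15):
    e = 1.602176634e-19              # elementary charge, C
    kB = 1.380649e-23                # Boltzmann constant, J/K
    NA = 6.02214076e23               # Avogadro number, 1/mol
    eps = 78.4 * 8.8541878128e-12    # water permittivity, F/m (assumed 78.4)
    c = c_molar * 1000 * NA          # mol/L -> ions per m^3
    return math.sqrt(8 * c * eps * kB * T) * \
        math.sinh(e * zeta_volts / (2 * kB * T))

# illustrative: zeta = -30 mV in 10 mM monovalent salt
sigma = grahame_sigma(-0.030, 0.010)   # effective charge density, C/m^2
```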
Method of measuring density of gas in a vessel
International Nuclear Information System (INIS)
Shono, Kosuke.
1981-01-01
Purpose: To accurately measure the density of gas in a vessel even during a loss-of-coolant accident in a BWR type reactor. Method: When at least one of the pressure or the temperature of the gas in the vessel exceeds the usable range of a gas density measuring instrument due to a loss-of-coolant accident, the gas in the vessel is sampled, and the pressure or the temperature of the sampled gas is measured under conditions matching the usable range of the instrument. Hydrogen and oxygen gas densities exceeding the usable range of the instrument are then calculated from the measured values by the formulae C′_O = P′_T·C_O/P_T and C′_H = C″_H·C′_O/C″_O, where C′_O, P′_T and C′_H represent the oxygen density, the total pressure and the hydrogen density of the gas in the vessel after the instruments exceed their usable ranges; C_O and P_T represent the oxygen density and the total pressure of the gas in the vessel before the instruments exceeded their usable range; and C″_H and C″_O represent the hydrogen and oxygen densities of the sampled gas. (Kamimura, M.)
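Reading the (partially garbled) formulas as a pressure-ratio scaling for oxygen plus a sampled-gas ratio for hydrogen, the correction is two lines of arithmetic. A sketch with purely illustrative numbers; the prime/double-prime interpretation is an assumption:

```python
# Primed symbols: post-accident in-vessel values; double-primed: sampled gas.
def corrected_densities(c_o, p_t, p_t_prime, c_h_sampled, c_o_sampled):
    c_o_prime = p_t_prime * c_o / p_t             # C'_O = P'_T * C_O / P_T
    c_h_prime = c_h_sampled * c_o_prime / c_o_sampled  # C'_H = C''_H * C'_O / C''_O
    return c_o_prime, c_h_prime

# illustrative numbers only (arbitrary units)
c_o_prime, c_h_prime = corrected_densities(c_o=5.0, p_t=0.2, p_t_prime=0.5,
                                           c_h_sampled=2.0, c_o_sampled=10.0)
```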
Energy Technology Data Exchange (ETDEWEB)
Ren, Shangjie [Tianjin Key Laboratory of Process Measurement and Control, School of Electrical Engineering and Automation, Tianjin University, Tianjin (China); Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Hara, Wendy; Wang, Lei; Buyyounouski, Mark K.; Le, Quynh-Thu; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Li, Ruijiang, E-mail: rli2@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States)
2017-03-15
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
A simple method to estimate interwell autocorrelation
Energy Technology Data Exchange (ETDEWEB)
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
Directory of Open Access Journals (Sweden)
Gibson Lucinda
2010-02-01
Full Text Available Abstract Background Studies involving the built environment have typically relied on US Census data to measure residential density. However, census geographic units are often unsuited to health-related research, especially in rural areas where development is clustered and discontinuous. Objective We evaluated the accuracy of both standard census methods and alternative GIS-based methods to measure rural density. Methods We compared residential density (units/acre) in 335 Vermont school neighborhoods using conventional census geographic units (tract, block group and block) with two GIS buffer measures: a 1-kilometer (km) circle around the school and a 1-km circle intersected with a 100-meter (m) road-network buffer. The accuracy of each method was validated against the actual residential density for each neighborhood based on the Vermont e911 database, which provides an exact geo-location for all residential structures in the state. Results Standard census measures underestimate residential density in rural areas. In addition, the degree of error is inconsistent so even the relative rank of neighborhood densities varies across census measures. Census measures explain only 61% to 66% of the variation in actual residential density. In contrast, GIS buffer measures explain approximately 90% of the variation. Combining a 1-km circle with a road-network buffer provides the closest approximation of actual residential density. Conclusion Residential density based on census units can mask clusters of development in rural areas and distort associations between residential density and health-related behaviors and outcomes. GIS-defined buffers, including a 1-km circle and a road-network buffer, can be used in conjunction with census data to obtain a more accurate measure of residential density.
Directory of Open Access Journals (Sweden)
Jan Horbowy
Full Text Available Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. In this paper, methods were developed and tested for the backward extension of analytical biomass assessments to years for which only total catch volumes are available. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock-density dependent if stock dynamics are governed by classical stock-production models. The other approach used a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant SPR method is not recommended in those cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
Two methods for isolating the lung area of a CT scan for density information
International Nuclear Information System (INIS)
Hedlund, L.W.; Anderson, R.F.; Goulding, P.L.; Beck, J.W.; Effmann, E.L.; Putman, C.E.
1982-01-01
Extracting density information from irregularly shaped tissue areas of CT scans requires automated methods when many scans are involved. We describe two computer methods that automatically isolate the lung area of a CT scan. Each starts from a single, operator specified point in the lung. The first method follows the steep density gradient boundary between lung and adjacent tissues; this tracking method is useful for estimating the overall density and total area of lung in a scan because all pixels within the lung area are available for statistical sampling. The second method finds all contiguous pixels of lung that are within the CT number range of air to water and are not a part of strong density gradient edges; this method is useful for estimating density and area of the lung parenchyma. Structures within the lung area that are surrounded by strong density gradient edges, such as large blood vessels, airways and nodules, are excluded from the lung sample while lung areas with diffuse borders, such as an area of mild or moderate edema, are retained. Both methods were tested on scans from an animal model of pulmonary edema and were found to be effective in isolating normal and diseased lungs. These methods are also suitable for isolating other organ areas of CT scans that are bounded by density gradient edges
Kernel density estimation-based real-time prediction for respiratory motion
International Nuclear Information System (INIS)
Ruan, Dan
2010-01-01
Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
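The two-stage idea above (estimate a joint density with kernels, then derive estimators from the conditional slice at the observed covariate) can be sketched as follows. The Gaussian kernel, the fixed bandwidth, and the choice of the conditional mean as the estimator are assumptions of this sketch, not necessarily the paper's choices:

```python
import numpy as np

def kde_conditional_predict(X_train, y_train, x_query, bandwidth=1.0):
    """Predict a response given a covariate vector by slicing a
    Gaussian-kernel joint density estimate at x_query.

    X_train: (n, d) array of historical covariate samples.
    y_train: (n,) array of corresponding responses.
    x_query: (d,) observed covariate (e.g. recent motion history).
    """
    # Kernel weights: how close each training covariate is to x_query.
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    w /= w.sum()
    # The conditional mean is one estimator readable off the estimated
    # conditional distribution; median or mode could be used instead.
    return float(np.dot(w, y_train))
```

Because the output is a weighted sample rather than a single deterministic map, inconsistent training pairs (close covariates, very different responses) simply widen the conditional distribution instead of breaking the fit.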
Efficient Methods of Estimating Switchgrass Biomass Supplies
Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...
Coalescent methods for estimating phylogenetic trees.
Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V
2009-10-01
We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.
Adjusting forest density estimates for surveyor bias in historical tree surveys
Brice B. Hanberry; Jian Yang; John M. Kabrick; Hong S. He
2012-01-01
The U.S. General Land Office surveys, conducted between the late 1700s to early 1900s, provide records of trees prior to widespread European and American colonial settlement. However, potential and documented surveyor bias raises questions about the reliability of historical tree density estimates and other metrics based on density estimated from these records. In this...
Reliability and precision of pellet-group counts for estimating landscape-level deer density
David S. deCalesta
2013-01-01
This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...
Owens, Peter M; Titus-Ernstoff, Linda; Gibson, Lucinda; Beach, Michael L; Beauregard, Sandy; Dalton, Madeline A
2010-02-12
Studies involving the built environment have typically relied on US Census data to measure residential density. However, census geographic units are often unsuited to health-related research, especially in rural areas where development is clustered and discontinuous. We evaluated the accuracy of both standard census methods and alternative GIS-based methods to measure rural density. We compared residential density (units/acre) in 335 Vermont school neighborhoods using conventional census geographic units (tract, block group and block) with two GIS buffer measures: a 1-kilometer (km) circle around the school and a 1-km circle intersected with a 100-meter (m) road-network buffer. The accuracy of each method was validated against the actual residential density for each neighborhood based on the Vermont e911 database, which provides an exact geo-location for all residential structures in the state. Standard census measures underestimate residential density in rural areas. In addition, the degree of error is inconsistent so even the relative rank of neighborhood densities varies across census measures. Census measures explain only 61% to 66% of the variation in actual residential density. In contrast, GIS buffer measures explain approximately 90% of the variation. Combining a 1-km circle with a road-network buffer provides the closest approximation of actual residential density. Residential density based on census units can mask clusters of development in rural areas and distort associations between residential density and health-related behaviors and outcomes. GIS-defined buffers, including a 1-km circle and a road-network buffer, can be used in conjunction with census data to obtain a more accurate measure of residential density.
Histogram specification as a method of density modification
Energy Technology Data Exchange (ETDEWEB)
Harrison, R.W.
1988-12-01
A new method for improving the quality and extending the resolution of Fourier maps is described. The method is based on a histogram analysis of the electron density. The distribution of electron density values in the map is forced to be 'ideal'. The 'ideal' distribution is assumed to be Gaussian. The application of the method to improve the electron density map for the protein Acinetobacter asparaginase, which is a tetrameric enzyme of molecular weight 140000 daltons, is described.
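Rank matching is one standard way to impose a target histogram on a map. The sketch below forces an array of density values onto a Gaussian distribution; the function name and the way the Gaussian's mean and width are chosen here are assumptions for illustration, not Harrison's exact procedure:

```python
import numpy as np

def specify_histogram(density_map, rng=None):
    """Force map values to follow an 'ideal' Gaussian by rank matching:
    the k-th smallest map value is replaced by the k-th smallest draw
    from a Gaussian with the map's own mean and standard deviation."""
    rng = np.random.default_rng(0) if rng is None else rng
    flat = density_map.ravel()
    target = np.sort(rng.normal(flat.mean(), flat.std(), flat.size))
    out = np.empty_like(flat)
    # Assign sorted target values back in the original value order,
    # so the spatial ranking of densities is preserved.
    out[np.argsort(flat)] = target
    return out.reshape(density_map.shape)
```

The key property is that only the distribution of values changes; the relative ordering of densities across the map, and hence the map's features, is preserved.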
Moments Method for Shell-Model Level Density
International Nuclear Information System (INIS)
Zelevinsky, V; Horoi, M; Sen'kov, R A
2016-01-01
The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density practically exactly coincides with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to results that differ from those obtained by mean-field combinatorics. (paper)
Directory of Open Access Journals (Sweden)
Yongjun Ahn
Full Text Available The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric
EuroMInd-D: A Density Estimate of Monthly Gross Domestic Product for the Euro Area
DEFF Research Database (Denmark)
Proietti, Tommaso; Marczak, Martyna; Mazzi, Gianluigi
EuroMInd-D is a density estimate of monthly gross domestic product (GDP) constructed according to a bottom-up approach, pooling the density estimates of eleven GDP components, by output and expenditure type. The component density estimates are obtained from a medium-size dynamic factor model...... of a set of coincident time series handling mixed frequencies of observation and ragged-edged data structures. They reflect both parameter and filtering uncertainty and are obtained by implementing a bootstrap algorithm for simulating from the distribution of the maximum likelihood estimators of the model...
Estimating the mass density in the thermosphere with the CYGNSS mission.
Bussy-Virat, C.; Ridley, A. J.
2017-12-01
The Cyclone Global Navigation Satellite System (CYGNSS) mission, launched in December 2016, is a constellation of eight satellites orbiting the Earth at 510 km. Its goal is to improve our understanding of rapid hurricane wind intensification. Each CYGNSS satellite uses GPS signals that are reflected off of the ocean's surface to measure the wind. The GPS can also be used to specify the orbit of the satellites quite precisely. The motion of satellites in low Earth orbit is greatly influenced by the neutral density of the surrounding atmosphere through drag. Modeling the neutral density in the upper atmosphere is a major challenge as it involves a comprehensive understanding of the complex coupling between the thermosphere and the ionosphere, the magnetosphere, and the Sun. This is why thermospheric models (such as NRLMSIS, Jacchia-Bowman, HASDM, GITM, or TIEGCM) can only approximate it with a limited accuracy, which decreases during strong geomagnetic events. Because atmospheric drag directly depends on the thermospheric density, it can be estimated by applying filtering methods to the trajectories of the CYGNSS observatories. The CYGNSS mission can provide unique results since the constellation of eight satellites enables multiple measurements of the same region at close intervals (approximately 10 minutes), which can be used to detect short time scale features. Moreover, the CYGNSS spacecraft can be pitched from a low to high drag attitude configuration, which can be used in the filtering methods to improve the accuracy of the atmospheric density estimation. The methodology and the results of this approach applied to the CYGNSS mission will be presented.
Energy Technology Data Exchange (ETDEWEB)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A. [Physics and Astronomy Department, University of California, Los Angeles, California 90095 (United States); Bobrek, M. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6006 (United States)
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
A model to optimize trap systems used for small mammal (Rodentia, Insectivora) density estimates
Directory of Open Access Journals (Sweden)
Damiano Preatoni
1997-12-01
Full Text Available Abstract The environment found in the upper and lower Padane Plain and the adjoining hills is not very homogeneous. In fact, it is impossible to find biotopes extensive enough to satisfy the criteria necessary for density estimates of small mammals based on the Removal method. This limitation has been partially overcome by adopting a reduced grid of 39 traps whose spacing depends on the studied species. The aim of this work was to verify, and where possible measure, the efficiency of a sampling method based on a "reduced" number of catch points. The efficiency of 18 trapping cycles, carried out from 1991 to 1993, was evaluated as percent bias. For each trapping cycle, 100 computer simulations were performed, thus obtaining a Monte Carlo estimate of the bias in density values. The efficiency of different trap arrangements was then examined by varying the grid parameters: the number of traps ranged from 9 to 49, trap spacing from 5 to 15 m, and trapping period duration from 5 to 9 nights. In this way an optimal grid system was found in terms of both size and duration. The simulations involved, in all, 1511 different grid types and 11347 virtual trapping cycles. Our results indicate that density estimates based on "reduced" grids are affected by an average -16% bias, that is, an underestimate, and that an optimally sized grid should consist of a 6x6 square of traps with about 8.7 m spacing, operated for 7 nights.
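The percent-bias evaluation used above is straightforward to reproduce in outline. In this sketch the simulation of a single trapping cycle is a user-supplied function, since the paper's removal-method simulator is not reproduced here:

```python
import random

def percent_bias(estimator, simulate_cycle, true_density,
                 n_sim=100, seed=0):
    """Monte Carlo percent bias: run the density estimator on n_sim
    simulated trapping cycles and compare the mean estimate with the
    known true density. Negative values indicate underestimation."""
    rng = random.Random(seed)
    estimates = [estimator(simulate_cycle(rng)) for _ in range(n_sim)]
    mean_estimate = sum(estimates) / len(estimates)
    return 100.0 * (mean_estimate - true_density) / true_density
```

With 100 simulations per cycle, as in the abstract, the bias of a given grid layout can be compared across trap counts, spacings, and durations.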
2014-09-30
...will be applied also to other species such as sperm whale (Physeter macrocephalus) (whose high source level assures long range detection and amplifies...improve the accuracy of marine mammal density estimation based on counting echolocation clicks, and will be applicable to density estimates obtained
Reliability of Estimation Pile Load Capacity Methods
Directory of Open Access Journals (Sweden)
Yudhi Lastiasih
2014-04-01
Full Text Available For none of the numerous previous methods for predicting pile capacity is it known how accurate it is when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were from tests conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's Extrapolation Method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the Quadratic Hyperbolic Method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests loaded to failure. However, when applied to data from pile loading tests that did not reach failure, the methods that yield lower values of the correction factor N are recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to estimate the Qult of a pile foundation based on soil data only.
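As one concrete example from the list above, Chin's (1970) method assumes a hyperbolic load-settlement curve, so settlement/load is linear in settlement and the inverse of the fitted slope estimates the ultimate capacity. A sketch under that assumption (a least-squares fit is used here; in practice only the later, stabilized part of the curve should be fitted):

```python
def chin_ultimate_load(settlements, loads):
    """Chin's hyperbolic extrapolation of pile load test data.

    Fits settlement/load as a linear function of settlement;
    the ultimate load Qult is the inverse of the fitted slope.
    """
    ys = [d / q for d, q in zip(settlements, loads)]
    n = len(settlements)
    xbar = sum(settlements) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar)
                 for x, y in zip(settlements, ys))
             / sum((x - xbar) ** 2 for x in settlements))
    return 1.0 / slope
```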
Computing thermal Wigner densities with the phase integration method
International Nuclear Information System (INIS)
Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.
2014-01-01
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems
Computing thermal Wigner densities with the phase integration method.
Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
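The classical Hill estimator that the bootstrap version builds on can be applied to the lower tail of interpoint distances of the embedded series; the correlation integral behaves like C(r) ~ r^nu for small r, so the exponent is a tail index. In this sketch the embedding dimension, the random pair sampling, and the cutoff k are illustrative choices, not the paper's:

```python
import numpy as np

def hill_correlation_exponent(series, m=2, k=200, n_pairs=5000, seed=0):
    """Hill-type estimate of the correlation exponent of a series.

    Embeds the series in m dimensions, samples random interpoint
    distances, and applies the Hill estimator to the k smallest ones.
    """
    rng = np.random.default_rng(seed)
    emb = np.column_stack([series[i:len(series) - m + 1 + i]
                           for i in range(m)])
    i = rng.integers(0, len(emb), n_pairs)
    j = rng.integers(0, len(emb), n_pairs)
    keep = i != j
    dist = np.linalg.norm(emb[i[keep]] - emb[j[keep]], axis=1)
    d = np.sort(dist)[:k + 1]
    # Hill estimator on the lower tail:
    # nu_hat = k / sum_i log(d_(k+1) / d_(i)), i = 1..k
    return 1.0 / np.mean(np.log(d[k] / d[:k]))
```

For an i.i.d. series the embedded points fill the m-dimensional cube, so the estimate should be close to m.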
Methods to estimate the genetic risk
International Nuclear Information System (INIS)
Ehling, U.H.
1989-01-01
The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events. The probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only Mendelian mutations but also other types of genetic disorders can be quantified. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, in estimating the genetic component of congenital anomalies, anomalies expressed later, and constitutional and degenerative diseases. Using the direct method we estimated that 20-50 dominant radiation-induced mutations would be expected among 19 000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were also used to predict the genetic damage from the fallout of the reactor accident at Chernobyl in the vicinity of Southern Germany. The lack of knowledge about the interaction of chemicals with ionizing radiation, and the discrepancy between the high safety standards for radiation protection and the low level of knowledge in the toxicological evaluation of chemical mutagens, are emphasized. (author)
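The indirect (doubling dose) calculation reduces to a simple product; a sketch with placeholder values (the argument names and any numbers used are illustrative, not the paper's estimates):

```python
def doubling_dose_risk(baseline_incidence, dose_sv, doubling_dose_sv,
                       mutation_component=1.0):
    """Indirect estimate of radiation-induced genetic risk.

    Expected extra cases per unit population per generation:
    baseline incidence of the disorder, times dose expressed as a
    fraction of the doubling dose, times the fraction of the disorder
    attributable to new mutation (the mutation component).
    """
    relative_mutation_risk = dose_sv / doubling_dose_sv
    return baseline_incidence * mutation_component * relative_mutation_risk
```

For example, at a dose equal to the doubling dose and a mutation component of 1, the method predicts an increment equal to the baseline incidence itself, which is exactly what "doubling dose" means.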
Asiri, Sharefa M.
2017-10-19
In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is formulated as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through numerical simulations. The robustness of the method in the presence of noise is also studied.
International Nuclear Information System (INIS)
Bakosi, Jozsef; Ristorcelli, Raymond J.
2010-01-01
Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.
Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador
Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew
2017-01-01
The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km² within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.
Comparison of subjective and fully automated methods for measuring mammographic density.
Moshina, Nataliia; Roman, Marta; SebuÃ¸degÃ¥rd, Sofie; Waade, Gunvor G; Ursin, Giske; Hofvind, Solveig
2018-02-01
Background Breast radiologists of the Norwegian Breast Cancer Screening Program subjectively classified mammographic density using a three-point scale between 1996 and 2012 and changed to the fourth edition of the BI-RADS classification in 2013. In 2015, an automated volumetric breast density assessment software was installed at two screening units. Purpose To compare volumetric breast density measurements from the automated method with two subjective methods: the three-point scale and the BI-RADS density classification. Material and Methods Information on subjective and automated density assessment was obtained from screening examinations of 3635 women recalled for further assessment due to positive screening mammography between 2007 and 2015. The score of the three-point scale (I = fatty; II = medium dense; III = dense) was available for 2310 women. The BI-RADS density score was provided for 1325 women. Mean volumetric breast density was estimated for each category of the subjective classifications. The automated software assigned volumetric breast density to four categories. The agreement between BI-RADS and volumetric breast density categories was assessed using weighted kappa (kw). Results Mean volumetric breast density was 4.5%, 7.5%, and 13.4% for categories I, II, and III of the three-point scale, respectively, and 4.4%, 7.5%, 9.9%, and 13.9% for the BI-RADS density categories, respectively. The agreement between BI-RADS and volumetric breast density categories was kw = 0.5 (95% CI 0.47-0.53). Conclusion Mean volumetric breast density increased with increasing density category of the subjective classifications. The agreement between BI-RADS and volumetric breast density categories was moderate.
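The weighted kappa (kw) used above to assess agreement between ordinal ratings can be computed from the raters' confusion matrix. This is an illustrative sketch (function and data hypothetical, linear weights by default), not the study's analysis code:

```python
import numpy as np

def weighted_kappa(a, b, n_cat, weights="linear"):
    """Weighted Cohen's kappa for two raters scoring the same items
    on ordinal categories 0..n_cat-1."""
    conf = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        conf[i, j] += 1.0
    conf /= conf.sum()                               # joint proportions
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weights == "quadratic":
        w = w ** 2
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    return 1.0 - (w * conf).sum() / (w * expected).sum()
```

With perfect agreement the statistic equals 1; systematic disagreement drives it toward 0 or below.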
A generalized model for estimating the energy density of invertebrates
James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.
2012-01-01
Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r² = 0.96), and the model offers substantial time and cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
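The regression idea, predicting ED from the proportion of dry-to-wet mass, can be sketched with ordinary least squares. The calibration numbers below are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

# Hypothetical calibration points: proportion dry-to-wet mass (pDM)
# and measured energy density (ED, J/g wet mass).
pdm = np.array([0.12, 0.18, 0.21, 0.25, 0.31, 0.40])
ed = np.array([2100.0, 3400.0, 4000.0, 4900.0, 6200.0, 8100.0])

# Ordinary least-squares fit of the line ED = slope * pDM + intercept.
slope, intercept = np.polyfit(pdm, ed, 1)

def predict_ed(p):
    """Predict invertebrate energy density from pDM via the fitted line."""
    return slope * p + intercept
```

In practice the fitted coefficients would come from the study's own calibration samples; the point is that a single proportion measurement then replaces a bomb-calorimetry run.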
Power spectral density of velocity fluctuations estimated from phase Doppler data
Jedelsky, Jan; Lizal, Frantisek; Jicha, Miroslav
2012-04-01
Laser Doppler Anemometry (LDA) and its modifications, such as Phase Doppler Particle Anemometry (P/DPA), are point-wise methods for optical non-intrusive measurement of particle velocity with high data rate. Conversion of the LDA velocity data from the temporal to the frequency domain, i.e. calculation of the power spectral density (PSD) of velocity fluctuations, is a non-trivial task due to non-equidistant data sampling in time. We briefly discuss possibilities for the PSD estimation and specify limitations caused by seeding density and other factors of the flow and LDA setup. Results of LDA measurements are compared with corresponding Hot Wire Anemometry (HWA) data in the frequency domain. The slot correlation (SC) method implemented in the software program Kern by Nobach (2006) is used for the PSD estimation. The influence of several input parameters on the resulting PSDs is described. An optimum setup of the software for our data of particle-laden air flow in a realistic human airway model is documented. The typical character of the flow is described using PSD plots of velocity fluctuations with comments on specific properties of the flow. Some recommendations for improving future experiments to acquire better PSD results are given.
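The slot-correlation idea for non-equidistantly sampled velocity data can be sketched as follows: every pair of samples contributes its product to a lag-time slot, and slot averages approximate the autocorrelation, from which a PSD can then be obtained by Fourier transform. This is an illustrative O(n²) sketch with assumed parameters, not the Kern implementation:

```python
import numpy as np

def slot_autocorrelation(t, u, max_lag, slot_width):
    """Slot-correlation estimate of the autocorrelation of randomly
    sampled velocity data (t_i, u_i), with t sorted ascending: product
    pairs are sorted into lag-time slots and averaged slot by slot."""
    u = u - u.mean()                      # work with velocity fluctuations
    n_slots = int(max_lag / slot_width)
    acc = np.zeros(n_slots)
    cnt = np.zeros(n_slots)
    n = len(t)
    for i in range(n):
        for j in range(i, n):
            lag = t[j] - t[i]
            if lag >= max_lag:
                break                     # t is sorted, so later lags only grow
            k = int(lag / slot_width)
            acc[k] += u[i] * u[j]
            cnt[k] += 1
    rho = np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)
    return rho / rho[0]                   # normalise by the zero-lag slot
```

For a sinusoidal signal sampled at random times, the slot estimate reproduces the cosine-shaped autocorrelation despite the irregular sampling.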
Chen, Jacqueline; Kostenko, Volodymyr; Pioro, Erik P; Trapp, Bruce D
2018-01-23
Purpose To determine if magnetic resonance (MR) imaging metrics can estimate primary motor cortex (PMC) motor neuron (MN) density in patients with amyotrophic lateral sclerosis (ALS). Materials and Methods Between 2012 and 2014, in situ brain MR imaging was performed in 11 patients with ALS (age range, 35-81 years; seven women and four men) soon after death (mean, 5.5 hours after death; range, 3.2-9.6 hours). The brain was removed, the right PMC (RPMC) was excised, and MN density was quantified. RPMC metrics (thickness, volume, and magnetization transfer ratio) were calculated from MR images. Regression modeling was used to estimate MN density by using RPMC and global MR imaging metrics (brain and tissue volumes); clinical variables were subsequently evaluated as additional estimators. Models were tested at in vivo MR imaging by using the same imaging protocol (six patients with ALS; age range, 54-66 years; three women and three men). Results RPMC mean MN density varied over a greater than threefold range across patients and was estimated by a linear function of normalized gray matter volume (adjusted R² = 0.51; P = .008; <10% error in most patients). When considering only sporadic ALS, a linear function of normalized RPMC and white matter volumes estimated MN density (adjusted R² = 0.98; P = .01; <10% error in all patients). In vivo data analyses detected decreases in MN density over time. Conclusion PMC mean MN density varies widely in end-stage ALS, possibly because of disease heterogeneity. MN density can potentially be estimated by MR imaging metrics. © RSNA, 2018 Online supplemental material is available for this article.
Directory of Open Access Journals (Sweden)
Shanshan Yang
Detection of dysphonia is useful for monitoring the progression of phonatory impairment in patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machines (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly.
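The MAP decision rule with kernel-density class-conditionals described above can be sketched in one dimension. This is an illustrative toy (synthetic data, hypothetical function names), not the paper's KPCA pipeline:

```python
import numpy as np

def kde(samples, x, h):
    """1-D Gaussian kernel density estimate evaluated at points x."""
    d = (np.asarray(x, dtype=float)[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))

def map_classify(x, class_samples, priors, h=0.5):
    """MAP decision rule: assign each point to the class maximising
    prior times the KDE-estimated class-conditional density."""
    scores = np.stack([p * kde(s, x, h) for s, p in zip(class_samples, priors)])
    return scores.argmax(axis=0)

rng = np.random.default_rng(42)
healthy = rng.normal(-3.0, 1.0, 500)    # synthetic "healthy" feature values
patients = rng.normal(3.0, 1.0, 500)    # synthetic "PD" feature values
labels = map_classify([-3.0, 3.0], [healthy, patients], [0.5, 0.5])
```

With well-separated class densities the rule assigns each query point to the nearer class; in the paper the same rule operates on the two-dimensional KPCA-mapped features.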
On Convergence of Kernel Density Estimates in Particle Filtering
Czech Academy of Sciences Publication Activity Database
Coufal, David
2016-01-01
Roč. 52, č. 5 (2016), s. 735-756 ISSN 0023-5954 Grant - others: GA ČR(CZ) GA16-03708S; SVV(CZ) 260334/2016 Institutional support: RVO:67985807 Keywords: Fourier analysis * kernel methods * particle filter Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.379, year: 2016
Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment
Energy Technology Data Exchange (ETDEWEB)
Mohayai, Tanaz Angelina [IIT, Chicago]; Snopok, Pavel [IIT, Chicago]; Neuffer, David [Fermilab]; Rogers, Chris [Rutherford]
2017-10-12
The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
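As an illustration of the idea (hypothetical numbers, not MICE data), a kernel density estimate of the phase-space density at the beam core rises as the beam shrinks, which is how cooling can be quantified nonparametrically:

```python
import numpy as np

def kde_density(samples, point, h):
    """Gaussian-kernel estimate of the 2-D phase-space density at `point`
    from particle coordinates `samples` (shape n x 2)."""
    d2 = ((samples - point) ** 2).sum(axis=1) / h**2
    return np.exp(-0.5 * d2).sum() / (len(samples) * 2.0 * np.pi * h**2)

rng = np.random.default_rng(0)
before = rng.normal(0.0, 1.0, size=(4000, 2))   # beam before cooling
after = rng.normal(0.0, 0.8, size=(4000, 2))    # smaller beam after "cooling"
core_before = kde_density(before, np.zeros(2), h=0.2)
core_after = kde_density(after, np.zeros(2), h=0.2)
```

The core density increases when the occupied phase-space volume shrinks, mirroring how the MICE analysis uses density estimates to quantify the cooling effect.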
Evaluation of methods to determine sperm density for the european eel, anguilla anguilla
DEFF Research Database (Denmark)
Sørensen, Sune Riis; Gallego, V.; Pérez, L.
2013-01-01
European eel, Anguilla anguilla, is a target species for future captive breeding, yet the best methodology to estimate sperm density for application in in vitro fertilization is not established. Thus, our objectives were to evaluate methods to estimate European eel sperm density, including spermatocrit, computer-assisted sperm analysis (CASA) and flow cytometry (FCM), using the Neubauer Improved haemocytometer as a benchmark. Initially, relationships between spermatocrit, haemocytometer counts and sperm motility were analysed, as well as the effect of sperm dilution on haemocytometer counts. Furthermore, the accuracy and precision of spermatocrit, applying a range of G-forces, were tested and the best G-force was used in method comparisons. We found no effect of dilution on haemocytometer sperm density estimates, whereas motility associated positively with haemocytometer counts, but not with spermatocrit. Results...
Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok
2016-01-01
Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest, bone-equivalent solid material containing 50 mg HA per cm³ and water, to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those of osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.
Density Estimation and Anomaly Detection in Large Social Networks
2014-07-15
[Figure residue: loss curves for the proposed dynamic mirror descent (DMD) method and mirror descent (MD) against time, for a single trial and averaged over 100 trials (Figure 2.2, simulation results for the experiment in Section 2.4.1); the remaining fragments are reference-list debris.]
Three methods for estimating a range of vehicular interactions
Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana
2018-02-01
We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All of the methods introduced reveal that the minimum number of actively-followed vehicles is two, which supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimates is surprisingly strong. In all cases we have found that the interaction range (the number of actively-followed vehicles) drops with traffic density. Whereas drivers moving in congested regimes with lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to only two predecessors.
Magnetic Method to Characterize the Current Densities in Breaker Arc
International Nuclear Information System (INIS)
Machkour, Nadia
2005-01-01
The purpose of this research was to use magnetic induction measurements from a low-voltage breaker arc to reconstruct the arc's current density. The measurements were made using Hall effect sensors, which were placed close to, but outside, the breaking device. The arc was modelled as a rectangular current sheet, composed of a mix of threadlike current segments and with a current density varying across the propagation direction. We found that the magnetic induction of the arc is a convolution product of the current density and a function depending on the breaker geometry and arc model. Using deconvolution methods, the current density in the electric arc was determined. The method is used to study the arc behavior in the breaker device. Notably, position, arc size, and electric conductivity could all be determined, and then used to characterize the arc mode, diffuse or concentrated, and to study the conditions under which the mode changes.
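The deconvolution step can be sketched with a regularised division in Fourier space, assuming (for illustration only) a periodic one-dimensional geometry kernel; the actual kernel depends on the breaker geometry and arc model described in the abstract:

```python
import numpy as np

def deconvolve_fft(b, g, eps=1e-6):
    """Recover a current-density profile j from measured induction b,
    modelled as the (circular) convolution b = j * g with a geometry
    kernel g, via Tikhonov-regularised division in Fourier space."""
    B, G = np.fft.fft(b), np.fft.fft(g)
    J = B * np.conj(G) / (np.abs(G) ** 2 + eps)
    return np.real(np.fft.ifft(J))

# Synthetic check: convolve a smooth profile with a Gaussian kernel,
# then invert it. Geometry and sizes are illustrative only.
n = 64
x = np.arange(n)
g = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
g = np.roll(g / g.sum(), -n // 2)               # unit-area kernel centred at lag 0
j_true = np.exp(-(((x - 40) / 6.0) ** 2))       # assumed current-density profile
b = np.real(np.fft.ifft(np.fft.fft(j_true) * np.fft.fft(g)))
j_rec = deconvolve_fft(b, g)
```

The small `eps` term keeps the division stable where the kernel spectrum is weak, which is the usual failure mode of naive deconvolution on noisy sensor data.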
Directory of Open Access Journals (Sweden)
Cécile Souty
2016-11-01
Abstract Background In surveillance networks based on voluntary participation of health-care professionals, there is little choice regarding the selection of participants' characteristics. External information about participants, for example local physician density, can help reduce bias in incidence estimates reported by the surveillance network. Methods There is an inverse association between the number of reported influenza-like illness (ILI) cases and local general practitioner (GP) density. We formulated and compared estimates of ILI incidence using this relationship. To compare estimates, we simulated epidemics using a spatially explicit disease model and their observation by surveillance networks with different characteristics: random, maximum coverage, largest cities, etc. Results In the French practice-based surveillance network, the "Sentinelles" network, GPs reported 3.6% (95% CI [3;4]) fewer ILI cases for each increase in local GP density of 1 GP per 10,000 inhabitants. Incidence estimates varied markedly depending on scenarios for participant selection in surveillance, yet accounting for the GP density of participants reduced this bias. Applied to data from the Sentinelles network, changes in overall incidence ranged between 1.6 and 9.9%. Conclusions Local GP density is a simple measure that provides a way to reduce bias in estimating disease incidence in general practice. It can contribute to improving disease monitoring when it is not possible to choose the characteristics of participants.
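Only the 3.6%-per-GP figure comes from the abstract; the rescaling model below is a hypothetical sketch of how such a relationship could be used to normalise participants' reports to a common reference GP density:

```python
# From the abstract: reported ILI cases drop by about 3.6% for each
# additional GP per 10,000 inhabitants. The adjustment model itself
# (multiplicative, per-GP compounding) is an assumption for illustration.
DECLINE_PER_GP = 0.036

def adjust_incidence(raw_incidence, gp_density, reference_density):
    """Rescale a participant's reported incidence to a common reference
    GP density (both densities in GPs per 10,000 inhabitants),
    compensating for density-related under-reporting."""
    factor = (1.0 - DECLINE_PER_GP) ** (gp_density - reference_density)
    return raw_incidence / factor
```

A participant in a denser-than-reference area reports fewer cases, so the adjustment scales the estimate up, and vice versa.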
Correlations between different methods of UO2 pellet density measurement
International Nuclear Information System (INIS)
Yanagisawa, Kazuaki
1977-07-01
The density of UO2 pellets was measured by three different methods, i.e. geometrical, water-immersion and meta-xylene-immersion, and the results were treated statistically to find the correlations between the methods. The UO2 pellets were of six kinds but with the same specifications. The correlations are linear 1:1 for pellets of 95% theoretical density and above, but do not hold below that level, varying statistically due to interaction between open and closed pores. (auth.)
Apparatus and method for generating high density pulses of electrons
International Nuclear Information System (INIS)
Lee, C.; Oettinger, P.E.
1981-01-01
An apparatus and method are described for the production of high-density pulses of electrons using a laser-energized emitter. Caesium atoms from a low-pressure vapour atmosphere are adsorbed on, and migrate from, a metallic target rapidly heated by a laser to a high temperature. Because this heating time is short compared with the residence time of the caesium atoms adsorbed on the target surface, copious electrons are emitted, forming a high-current-density pulse. (U.K.)
A note on the conditional density estimate in single functional index model
2010-01-01
Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...
Dual ant colony operational modal analysis parameter estimation method
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain; the former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
EnviroAtlas - New Bedford, MA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Woodbine, IA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Des Moines, IA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Durham, NC - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Fresno, CA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Cleveland, OH - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Portland, ME - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - New York, NY - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Memphis, TN - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Milwaukee, WI - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas Estimated Intersection Density of Walkable Roads Web Service
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in each EnviroAtlas community....
EnviroAtlas - Portland, OR - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Tampa, FL - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Austin, TX - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Paterson, NJ - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Phoenix, AZ - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
EnviroAtlas - Pittsburgh, PA - Estimated Intersection Density of Walkable Roads
U.S. Environmental Protection Agency – This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...
Density meter algorithm and system for estimating sampling/mixing uncertainty
International Nuclear Information System (INIS)
Shine, E.P.
1986-01-01
The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses
International Nuclear Information System (INIS)
Trapero, Juan R.
2016-01-01
In order to integrate solar energy into the grid it is important to predict solar radiation accurately, since forecast errors can lead to significant costs. Recently, the growing number of statistical approaches to this problem has yielded a prolific literature. In general terms, the main research discussion centres on selecting the "best" forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about forecast variability in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate the methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
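One way kernel density estimation yields prediction intervals is by smoothing the empirical distribution of past forecast errors and reading off its quantiles. A hedged sketch of that generic idea (not the authors' combined KDE/volatility model; the bandwidth rule, default level, and function name are our assumptions):

```python
import numpy as np

def kde_prediction_interval(errors, point_forecast, level=0.9,
                            n_draws=100_000, seed=0):
    """Prediction interval from a Gaussian KDE of historical forecast
    errors, via the smoothed bootstrap: resample the errors, jitter each
    draw with the KDE kernel, take quantiles, and centre the interval on
    the point forecast."""
    e = np.asarray(errors, dtype=float)
    rng = np.random.default_rng(seed)
    h = 1.06 * e.std(ddof=1) * len(e) ** (-1 / 5)  # Silverman's rule of thumb
    draws = rng.choice(e, size=n_draws) + h * rng.standard_normal(n_draws)
    alpha = (1 - level) / 2
    lo, hi = np.quantile(draws, [alpha, 1 - alpha])
    return point_forecast + lo, point_forecast + hi
```

A volatility model such as GARCH would instead rescale the error distribution period by period; the combination studied in the abstract blends both sources of interval width.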
Remote estimation of crown size and tree density in snowy areas
Kishi, R.; Ito, A.; Kamada, K.; Fukita, T.; Lahrita, L.; Kawase, Y.; Murahashi, K.; Kawamata, H.; Naruse, N.; Takahashi, Y.
2017-12-01
Precise estimation of tree density in forests helps us understand the amount of carbon dioxide fixed by plants. Aerial photographs have been used to count trees; however, aircraft campaigns are expensive (about $50,000 per campaign flight) and the area that can be surveyed by drone is limited. In addition, previous studies estimating tree density from aerial photographs were performed in summer, so there was a gap of 15% in the estimation due to overlapping leaves. Here, we propose a method to accurately estimate the number of forest trees from satellite images of snow-covered deciduous forest areas, using the ratio of branches to snow. The advantages of our method are as follows: 1) snow areas can be excluded easily due to their high reflectance; 2) tree branches overlap much less than leaves. Although our method can be used only in snowfall regions, the area covered with snow worldwide exceeds 12,800,000 km². Our proposal should play an important role in discussing global warming. As a test area, we chose the forest near Mt. Amano in Iwate prefecture, Japan. First, we constructed a new index, (Band1-Band5)/(Band1+Band5), suitable for distinguishing between snow and tree trunks using the corresponding spectral reflection data. Next, the index values obtained by changing the ratio in 1% increments were listed. From the satellite image analysis at 4 points, the ratio of snow to tree trunk showed the following values: I: 61%, II: 65%, III: 66%, and IV: 65%. To confirm the estimation, we used aerial photographs from Google Earth; the rates were I: 42.05%, II: 48.89%, III: 50.64%, and IV: 49.05%, respectively. The two sets of values are correlated, but there are differences. We will discuss this point in detail, focusing on the effect of shadows.
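The proposed normalized-difference index can be sketched directly from the band arithmetic given in the abstract; the band values, threshold, and function names below are illustrative placeholders, not the authors' calibrated values:

```python
import numpy as np

def snow_trunk_index(band1, band5):
    """Normalized-difference index (Band1 - Band5) / (Band1 + Band5).
    Snow is bright in the visible band and dark in the shortwave-infrared
    band, so the index is high over snow and low over trunks/branches."""
    b1 = np.asarray(band1, dtype=float)
    b5 = np.asarray(band5, dtype=float)
    return (b1 - b5) / (b1 + b5)

def trunk_fraction(index, threshold):
    """Fraction of pixels classified as trunk/branch (index below a
    hypothetical threshold)."""
    return float(np.mean(np.asarray(index) < threshold))
```

On real imagery the threshold would be tuned against reference data, e.g. the Google Earth photographs mentioned in the abstract.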
Effects of stand density on top height estimation for ponderosa pine
Martin Ritchie; Jianwei Zhang; Todd Hamilton
2012-01-01
Site index, estimated as a function of dominant-tree height and age, is often used as an expression of site quality. This expression is assumed to be effectively independent of stand density. Observation of dominant height at two different ponderosa pine levels-of-growing-stock studies revealed that top height stability with respect to stand density depends on the...
Directory of Open Access Journals (Sweden)
Yu Xu
2016-06-01
Estimates of abundance or density are essential for wildlife management and conservation. There are few effective density estimates for the Buff-throated Partridge Tetraophasis szechenyii, a rare and elusive high-mountain Galliform species endemic to western China. In this study, we used the temporary emigration N-mixture model to estimate the density of this species, with data acquired from playback point count surveys around a sacred area (based on the indigenous Tibetan culture of protection of wildlife) in Yajiang County, Sichuan, China, during April-June 2009. Within 84 125-m radius points, we recorded 53 partridge groups during three repeats. The best model indicated that detection probability was described by covariates of vegetation cover type, week of visit, time of day, and weather with weak effects, and that a partridge group was present during a sampling period with a constant probability. The abundance component was accounted for by vegetation association. Abundance was substantially higher in rhododendron shrubs, fir-larch forests, mixed spruce-larch-birch forests, and especially oak thickets than in pine forests. The model predicted a density of 5.14 groups/km², which is similar to an estimate of 4.7-5.3 groups/km² quantified via an intensive spot-mapping effort. The post-hoc estimate of individual density was 14.44 individuals/km², based on the estimated mean group size of 2.81. We suggest that the method we employed is applicable to estimating densities of Buff-throated Partridges in large areas. Given the importance of a mosaic habitat for this species, local logging should be regulated. Despite no effect of the sacred conservation area on the abundance of Buff-throated Partridges, we suggest regulations linking the sacred mountain conservation area with the official conservation system because of the strong local participation facilitated by sacred mountains in land conservation.
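The post-hoc step from group density to individual density is plain multiplication, which reproduces the figures in the abstract:

```python
def individual_density(group_density, mean_group_size):
    """Post-hoc individual density (individuals/km^2) from a modelled
    group density (groups/km^2) and an estimated mean group size."""
    return group_density * mean_group_size
```

With the abstract's values, 5.14 groups/km² times a mean group size of 2.81 gives roughly 14.44 individuals/km².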
Urinary density measurement and analysis methods in neonatal unit care
Directory of Open Access Journals (Sweden)
Maria Vera Lúcia Moreira Leitão Cardoso
2013-09-01
The objective was to assess urine collection methods through cotton in contact with genitalia and urinary collector to measure urinary density in newborns. This is a quantitative intervention study carried out in a neonatal unit of Fortaleza-CE, Brazil, in 2010. The sample consisted of 61 newborns randomly chosen to compose the study group. Most neonates were full term (31/50.8%) and male (33/54%). Data on urinary density measurement through the methods of cotton and collector presented statistically significant differences (p<0.05). The analysis of interquartile ranges between subgroups resulted in statistical differences between urinary collector/reagent strip (1005) and cotton/reagent strip (1010); however, there was no difference between urinary collector/refractometer (1008) and cotton/refractometer. Therefore, further research should be conducted with larger samples using the methods investigated in this study and, whenever possible, comparing urine density values to laboratory tests.
On the expected value and variance for an estimator of the spatio-temporal product density function
DEFF Research Database (Denmark)
RodrÃguez-CortÃ©, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge
Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed form expressions derived under the Poisson case. A detailed simulation study is presented to compare our closed expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...
Investigating the impact of uneven magnetic flux density distribution on core loss estimation
DEFF Research Database (Denmark)
Niroumand, Farideh Javidi; Nymand, Morten; Wang, Yiren
2017-01-01
There are several approaches for loss estimation in magnetic cores, and all of these approaches rely on accurate information about the flux density distribution in the cores. It is often assumed that the magnetic flux density distributes evenly throughout the core, and the overall core loss is calculated according to an effective flux density value and the macroscopic dimensions of the cores. However, the flux distribution in the core can be altered by core shapes and/or operating conditions due to nonlinear material properties. This paper studies the element-wise estimation of the loss in magnetic...
Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.
2015-10-21
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
DEFF Research Database (Denmark)
Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...
A method to estimate stellar ages from kinematical data
Almeida-Fernandes, F.; Rocha-Pinto, H. J.
2018-05-01
We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS), which contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the age PDF for 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for the estimation of rough stellar ages for those stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, an M8V star, obtaining t(UVW) = 12.50 (+0.29, -6.23) Gyr.
Unification of field theory and maximum entropy methods for learning probability densities
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
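For a finite grid with a single mean constraint, the maximum entropy density has the exponential-family form p(x) ∝ exp(λx), and λ can be found by bisection. A small numerical sketch of this textbook special case (not the paper's Bayesian field theory software; function name and bracket width are our choices):

```python
import numpy as np

def maxent_density(grid, target_mean, tol=1e-10):
    """Maximum entropy density on a finite grid subject to a mean
    constraint.  The solution is p(x) proportional to exp(lam * x);
    lam is found by bisection since the constrained mean is monotone
    increasing in lam."""
    x = np.asarray(grid, dtype=float)

    def mean_for(lam):
        w = np.exp(lam * (x - x.mean()))  # centre for numerical stability
        p = w / w.sum()
        return (p * x).sum()

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * (x - x.mean()))
    return w / w.sum()
```

When the target mean sits at the grid's centre of symmetry, λ = 0 and the maxent density is uniform, matching the intuition that maximum entropy adds no structure beyond the constraints.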
Kernel and wavelet density estimators on manifolds and more general metric spaces
DEFF Research Database (Denmark)
Cleanthous, G.; Georgiadis, Athanasios; Kerkyacharian, G.
We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich in allowing the development of smooth functional calculus with well localized spectral kernels, Besov regularity spaces, and wavelet type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established, which are analogous to the existing results in the classical setting of real...
Method of estimation of scanning system quality
Larkin, Eugene; Kotov, Vladislav; Kotova, Natalya; Privalov, Alexander
2018-04-01
Estimation of scanner parameters is an important part of developing an electronic document management system. This paper suggests considering the scanner as a system that contains two main channels: a photoelectric conversion channel and a channel for measuring the spatial coordinates of objects. Although both channels consist of the same elements, their parameters should be tested separately. A special structure of two-dimensional reference signal is offered for this purpose. In this structure, the fields for testing the various parameters of the scanner are spatially separated. The characteristics of the scanner are associated with the loss of information when a document is digitized. Methods to test grayscale transmitting ability, resolution, and aberration level are offered.
A method for determination of the superficial charge density
International Nuclear Information System (INIS)
Vila, F.
1992-10-01
This article presents a new method for determining the superficial charge density of nonconducting materials, based on combining laboratory-calibrated experiments on conducting surfaces with theoretical calculations for nonconducting surfaces. (author). 19 refs, 7 figs, 1 tab
International Nuclear Information System (INIS)
Brown, M.L.; Savage, D.J.
1986-04-01
The application of density measurement to heavy metal monitoring in the solvent phase is described, including practical experience gained during three fast reactor fuel reprocessing campaigns. An experimental algorithm relating heavy metal concentration and sample density was generated from laboratory-measured density data for uranyl nitrate dissolved in nitric-acid-loaded tri-butyl phosphate in odourless kerosene. Differences in odourless kerosene batch densities are mathematically interpolated, and the algorithm can be used to estimate heavy metal concentrations from the density to within ±1.5 g/l. An Anton Paar calculating digital densimeter with remote cell operation was used for all density measurements, but the algorithm will give similar accuracy with any density measuring device capable of a precision of better than 0.0005 g/cm³. For plant control purposes, the algorithm was simplified using a density referencing system, whereby the density of solvent not yet loaded with heavy metal is subtracted from the sample density. This simplified algorithm compares very favourably with empirical algorithms derived from numerical analysis of density data and chemically measured uranium and plutonium data obtained during fuel reprocessing campaigns, particularly when differences in the acidity of the solvent are considered before and after loading with heavy metal. This simplified algorithm has been successfully used for plant control of heavy metal loaded solvent during four fast reactor fuel reprocessing campaigns. (author)
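The density-referencing idea reduces to scaling the difference between loaded and blank solvent densities. A toy sketch; the calibration coefficient k below is a made-up placeholder, not the plant algorithm's fitted constant:

```python
def heavy_metal_gpl(sample_density, blank_density, k=3.2e3):
    """Heavy-metal concentration (g/l) from a density difference (g/cm^3).

    Density-referencing sketch: the density of solvent not yet loaded with
    heavy metal (the blank) is subtracted from the sample density and the
    difference is scaled linearly.  k is a HYPOTHETICAL calibration slope,
    not the coefficient from the abstract's algorithm."""
    return k * (sample_density - blank_density)
```

Referencing against the blank cancels batch-to-batch variation in the kerosene density, which is why the simplified algorithm holds up across campaigns.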
Precise charge density studies by maximum entropy method
Takata, M
2003-01-01
For the production research and development of nanomaterials, their structural information is indispensable. Recently, a sophisticated analytical method, which is based on information theory, the Maximum Entropy Method (MEM) using synchrotron radiation powder data, has been successfully applied to determine precise charge densities of metallofullerenes and nanochannel microporous compounds. The results revealed various endohedral natures of metallofullerenes and one-dimensional array formation of adsorbed gas molecules in nanochannel microporous compounds. The concept of MEM analysis was also described briefly. (author)
Yamashita, M.; Yoshimura, M.
2018-04-01
Photosynthetic photon flux density (PPFD: µmol m-2 s-1) is indispensable for plant physiological processes in photosynthesis. However, PPFD is seldom measured directly, so it has commonly been estimated from solar radiation (SR: W m-2), which is measured worldwide. The SR-based method has two steps: first, photosynthetically active radiation (PAR: W m-2) is estimated using the fraction of PAR in SR (PF); second, PAR is converted to PPFD using the ratio of quanta to energy (Q/E: µmol J-1). PF and Q/E have usually been treated as constants; however, recent studies point out that PF and Q/E are not constant under various sky conditions. In this study, we use numeric data on sky-condition factors such as cloud cover, sun appearance/hiding, and relative sky brightness derived from whole-sky image processing, and we examine the influences of these factors on the PF and Q/E of global and diffuse PAR. Furthermore, we discuss our results by comparing them with the existing methods.
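The two-step conversion can be written down directly. The defaults PF ≈ 0.45 and Q/E ≈ 4.57 µmol/J are widely used literature constants, which the abstract argues should really vary with sky conditions:

```python
def ppfd_from_sr(sr_w_m2, pf=0.45, q_over_e=4.57):
    """Estimate PPFD (umol m-2 s-1) from global solar radiation SR (W m-2).

    Step 1: PAR = PF * SR, with PF the fraction of PAR in SR.
    Step 2: PPFD = (Q/E) * PAR, with Q/E the quanta-to-energy ratio in
    umol/J.  The defaults are commonly used constants; under the
    abstract's argument, both should depend on sky conditions."""
    par = pf * sr_w_m2
    return q_over_e * par
```

Making pf and q_over_e functions of cloud cover and sky brightness, rather than constants, is precisely the refinement the study investigates.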
Minimum entropy density method for the time series analysis
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale in which the uncertainty is minimized, hence the pattern is revealed most. The MEDM is applied to the financial time series of Standard and Poor's 500 index from February 1983 to April 2006. Then the temporal behavior of structure scale is obtained and analyzed in relation to the information delivery time and efficient market hypothesis.
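The entropy-density idea can be illustrated by coarse-graining a series into words of length s and computing the Shannon entropy per symbol; the structure scale is then the scale at which this quantity is smallest. This is a simplified illustration under our own symbolization choices, not the authors' exact MEDM definition:

```python
import numpy as np
from collections import Counter

def entropy_density(series, scale, n_bins=4):
    """Shannon entropy per symbol of words of length `scale` drawn from a
    quantile-symbolized series (an illustrative proxy for an entropy
    density; the symbolization and normalization are our assumptions)."""
    x = np.asarray(series, dtype=float)
    # Symbolize by quantile bins so every symbol is roughly equiprobable.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    sym = np.digitize(x, edges)
    # Sliding words of length `scale`.
    words = [tuple(sym[i:i + scale]) for i in range(len(sym) - scale + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / scale)
```

For a strongly patterned series the per-symbol entropy drops sharply once the word length captures the repeating structure, which is the intuition behind reading off a structure scale.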
Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.
Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry
2011-10-01
Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.
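Angular sampling from an estimated circular density can be illustrated with a mixture of von Mises distributions, the standard circular analogue of a Gaussian. This is a toy stand-in for the DPM-HMM in the abstract, with made-up component parameters and an assumed independence between φ and ψ within a component:

```python
import numpy as np

def sample_dihedrals(components, weights, n, seed=0):
    """Draw n (phi, psi) pairs (radians) from a mixture of independent
    von Mises components.  components: list of tuples
    (mu_phi, kappa_phi, mu_psi, kappa_psi); weights: mixture weights.
    A toy circular-density sampler, NOT the paper's DPM-HMM."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(components), size=n, p=weights)
    out = np.empty((n, 2))
    for k, (mu_phi, kappa_phi, mu_psi, kappa_psi) in enumerate(components):
        mask = idx == k
        m = int(mask.sum())
        out[mask, 0] = rng.vonmises(mu_phi, kappa_phi, size=m)
        out[mask, 1] = rng.vonmises(mu_psi, kappa_psi, size=m)
    return out
```

A loop modeling pipeline would feed such draws into backbone reconstruction and then filter candidates, e.g. with the end-to-end distance filter mentioned above.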
Estimating population density and connectivity of American mink using spatial capture-recapture.
Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P
2016-06-01
Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
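The key modelling ingredient is a detection function evaluated at ecological rather than Euclidean distance. A minimal sketch using the standard half-normal SCR detection function, with a 1-D cumulative-cost transect as a toy stand-in for a full least-cost-path computation on a 2-D resistance surface (names and defaults are ours):

```python
import numpy as np

def encounter_prob(distance, p0=0.8, sigma=1.0):
    """Half-normal SCR detection function: baseline detection p0 at the
    activity centre, decaying with distance.  `distance` may be a
    least-cost ("ecological") distance instead of Euclidean distance."""
    d = np.asarray(distance, dtype=float)
    return p0 * np.exp(-d ** 2 / (2 * sigma ** 2))

def cost_distance_1d(costs):
    """Cumulative cost distance from the first cell along a 1-D transect
    of per-cell traversal costs: a toy stand-in for least-cost paths on
    a 2-D resistance surface."""
    return np.concatenate([[0.0], np.cumsum(np.asarray(costs, dtype=float))])
```

With a resistance surface, two detectors that are close in Euclidean terms but separated by high-cost habitat get a large ecological distance and hence a low encounter probability, which is what lets SCR estimate connectivity.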
Sutherland, Andrew M; Parrella, Michael P
2011-08-01
Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
Application of texture analysis method for mammogram density classification
Nithya, R.; Santhi, B.
2017-07-01
Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-03-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we first use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
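DELFI's conditional-density-estimation machinery is beyond a snippet, but the core loop it shares with simpler likelihood-free methods can be illustrated. The sketch below is a kernel-weighted ABC stand-in, not DELFI itself: data are compressed to one summary per parameter (here the sample mean, which is sufficient for this toy Gaussian model), and simulations whose summaries land near the observed summary dominate the posterior. Prior range, kernel width, and sample counts are arbitrary choices.

```python
import random
import math

def simulator(theta, rng, n=50):
    """Toy forward model: n Gaussian draws with unknown mean theta."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def compress(data):
    """One summary per parameter: the sample mean."""
    return sum(data) / len(data)

rng = random.Random(1)
observed = simulator(2.0, rng)      # pretend this is the real data
s_obs = compress(observed)

# Kernel-weighted posterior mean over forward simulations.
thetas, weights = [], []
for _ in range(2000):
    theta = rng.uniform(-5, 5)      # flat prior
    s = compress(simulator(theta, rng))
    w = math.exp(-(s - s_obs)**2 / (2 * 0.1**2))  # Gaussian kernel in summary space
    thetas.append(theta)
    weights.append(w)
post_mean = sum(t * w for t, w in zip(thetas, weights)) / sum(weights)
```

DELFI replaces the kernel weighting with a learned parametric model of the joint density of summaries and parameters, which is what buys the order-of-magnitude reduction in required simulations.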
Trap array configuration influences estimates and precision of black bear density and abundance.
Directory of Open Access Journals (Sweden)
Clay M Wilton
Full Text Available Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide-ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how the extent of trap coverage and trap spacing affects the precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on the quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km² and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km² study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low-density populations with a non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of
International Nuclear Information System (INIS)
Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi
1995-01-01
The adaptive finite element method based on a posteriori error estimation is known to be a powerful technique for analyzing practical engineering problems, since it removes the subjective aspect of mesh subdivision and gives high accuracy at relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, degrees of freedom and accuracy of the solution of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flow around a cylinder demonstrate the very high performance of the proposed adaptive procedure. (author)
A Method for Estimating Surveillance Video Georeferences
Directory of Open Access Journals (Sweden)
Aleksandar Milosavljević
2017-07-01
Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
Estimating peer density effects on oral health for community-based older adults.
Chakraborty, Bibhas; Widener, Michael J; Mirzaei Salehabadi, Sedigheh; Northridge, Mary E; Kum, Susan S; Jin, Zhu; Kunzel, Carol; Palmer, Harvey D; Metcalf, Sara S
2017-12-29
As part of a long-standing line of research regarding how peer density affects health, researchers have sought to understand the multifaceted ways that the density of contemporaries living and interacting in proximity to one another influences social networks and knowledge diffusion, and subsequently health and well-being. This study examined peer density effects on oral health for racial/ethnic minority older adults living in northern Manhattan and the Bronx, New York, NY. Peer age-group density was estimated by smoothing US Census data with 4 kernel bandwidths ranging from 0.25 to 1.50 mile. Logistic regression models were developed using these spatial measures and data from the ElderSmile oral and general health screening program that serves predominantly racial/ethnic minority older adults at community centers in northern Manhattan and the Bronx. The oral health outcomes modeled as dependent variables were ordinal dentition status and binary self-rated oral health. After construction of kernel density surfaces and multiple imputation of missing data, logistic regression analyses were performed to estimate the effects of peer density and other sociodemographic characteristics on the oral health outcomes of dentition status and self-rated oral health. Overall, higher peer density was associated with better oral health for older adults when estimated using smaller bandwidths (0.25 and 0.50 mile). That is, statistically significant relationships between peer density and improved dentition status were found when peer density was measured assuming a more local social network. As with dentition status, a positive significant association was found between peer density and fair or better self-rated oral health when peer density was measured assuming a more local social network. This study provides novel evidence that the oral health of community-based older adults is affected by peer density in an urban environment. To the extent that peer density signifies the potential for
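The bandwidth-dependent peer-density surfaces described above can be sketched with a simple kernel estimator. The point locations below are invented, and the Gaussian kernel is an assumption (the abstract names the bandwidths but not the kernel); only the bandwidth values, 0.25 and 1.50 mile, come from the study.

```python
import math

def kernel_density(points, center, bandwidth):
    """Gaussian kernel estimate of peer density at `center`
    from (x, y) point locations; distances in miles."""
    total = 0.0
    for x, y in points:
        d2 = (x - center[0])**2 + (y - center[1])**2
        total += math.exp(-d2 / (2 * bandwidth**2))
    return total / (2 * math.pi * bandwidth**2)

# Hypothetical peer locations relative to one screening site at the origin.
peers = [(0.1, 0.0), (0.3, 0.4), (1.2, 0.9), (2.0, 2.0)]
local = kernel_density(peers, (0.0, 0.0), 0.25)   # "local social network"
broad = kernel_density(peers, (0.0, 0.0), 1.50)   # diffuse network
```

With nearby peers, the small bandwidth concentrates mass at the site and yields a higher density value, which is the sense in which the two bandwidths encode different assumed social-network scales.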
Estimating the amount and distribution of radon flux density from the soil surface in China
International Nuclear Information System (INIS)
Zhuo Weihai; Guo Qiuju; Chen Bo; Cheng Guan
2008-01-01
Based on an idealized model, both the annual and the seasonal radon (222Rn) flux densities from the soil surface at 1099 sites in China were estimated by linking a database of soil 226Ra content with a global ecosystems database. Digital maps of the 222Rn flux density in China were constructed at a spatial resolution of 25 km × 25 km by interpolation among the estimated data. The area-weighted annual average 222Rn flux density from the soil surface across China was estimated to be 29.7 ± 9.4 mBq m⁻² s⁻¹. Both regional and seasonal variations in the 222Rn flux densities are significant in China. Annual average flux densities in southeastern and northwestern China are generally higher than those in other regions, because of the high soil 226Ra content in the southeastern area and the high soil aridity in the northwestern one. The seasonal average flux density is generally higher in summer/spring than in winter, since relatively higher soil temperature and lower soil water saturation in summer/spring than in other seasons are common in China.
PEDO-TRANSFER FUNCTIONS FOR ESTIMATING SOIL BULK DENSITY IN CENTRAL AMAZONIA
Directory of Open Access Journals (Sweden)
Henrique Seixas Barros
2015-04-01
Full Text Available Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density based on parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R²) and the Akaike information criterion (AIC) and were validated with a dataset consisting of 125 plots different from those used to generate the models. The model with the best performance in estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables and had R² = 0.73 and AIC = -250.29. The performance of this model for predicting soil density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
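A pedo-transfer function of the form the authors selected (bulk density predicted from clay content and pH in water) is an ordinary least-squares fit. The sketch below shows the mechanics on invented training values, not the Manaus data, so the fitted coefficients are illustrative only.

```python
import numpy as np

# Hypothetical training data: clay content (%), pH in water, bulk density (g/cm3).
clay = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
ph   = np.array([4.2, 4.5, 4.0, 3.9, 4.1])
bd   = np.array([1.25, 1.10, 0.98, 0.90, 0.82])

# Ordinary least squares for: bd ~ b0 + b1*clay + b2*pH
X = np.column_stack([np.ones_like(clay), clay, ph])
beta, *_ = np.linalg.lstsq(X, bd, rcond=None)

def predict_bd(clay_pct, ph_water):
    """Predict bulk density (g/cm3) from field-measurable covariates."""
    return beta[0] + beta[1] * clay_pct + beta[2] * ph_water
```

In the study itself the candidate covariates were screened by stepwise selection and the winning model judged by R² and AIC; here the two predictors are simply taken as given.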
Quantal density functional theory II. Approximation methods and applications
International Nuclear Information System (INIS)
Sahni, Viraht
2010-01-01
This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximation methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.
Estimation of current density distribution of PAFC by analysis of cell exhaust gas
Energy Technology Data Exchange (ETDEWEB)
Kato, S.; Seya, A. [Fuji Electric Co., Ltd., Ichihara-shi (Japan); Asano, A. [Fuji Electric Corporate, Ltd., Yokosuka-shi (Japan)
1996-12-31
Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for producing fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in a cell plane calculated by the simulation was verified.
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly, our approach can easily be applied to other rare predator species.
A hierarchical model for estimating density in camera-trap studies
Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.
2009-01-01
Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km² during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
The importance of spatial models for estimating the strength of density dependence
DEFF Research Database (Denmark)
Thorson, James T.; Skaug, Hans J.; Kristensen, Kasper
2014-01-01
...the California Coast. In this case, the nonspatial model estimates implausible oscillatory dynamics on an annual time scale, while the spatial model estimates strong autocorrelation and is supported by model selection tools. We conclude by discussing the importance of improved data archiving techniques, so that spatial models can be used to re-examine classic questions regarding the presence and strength of density dependence in wild populations.
Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo
2017-01-01
The main objective of our work is to perform an in-depth analysis of the structural features of the normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density, and we characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris density with circular ROIs of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure for different ROI radii. Next, we characterized the textural features of the choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
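The ROI-stability procedure (measure the density with jittered circular ROIs, and grow the radius until the standard deviation of the measurements drops below a threshold) can be mimicked on a synthetic binary image. The image, jitter magnitude, and sample counts below are arbitrary stand-ins for the OCT Angiography data.

```python
import random
import math

def roi_density(image, cx, cy, radius):
    """Fraction of white (1) pixels inside a circular ROI."""
    inside = white = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if (x - cx)**2 + (y - cy)**2 <= radius**2:
                inside += 1
                white += v
    return white / inside if inside else 0.0

def density_std(image, cx, cy, radius, jitter=2.0, n=50, seed=0):
    """Std of the density over small random displacements of the ROI center."""
    rng = random.Random(seed)
    vals = [roi_density(image,
                        cx + rng.uniform(-jitter, jitter),
                        cy + rng.uniform(-jitter, jitter),
                        radius)
            for _ in range(n)]
    mean = sum(vals) / n
    return math.sqrt(sum((v - mean)**2 for v in vals) / n)

# Synthetic 60x60 binary "choriocapillaris" image, ~50% white pixels.
rng = random.Random(42)
img = [[1 if rng.random() < 0.5 else 0 for _ in range(60)] for _ in range(60)]

# Larger ROIs average over more pixels, so the estimate stabilises:
small = density_std(img, 30, 30, 5)
large = density_std(img, 30, 30, 20)
```

The optimal radius in the paper is simply the smallest radius whose `density_std` falls below 0.01 on the real images; the Ising-model texture fitting is a separate step not sketched here.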
The MIRD method of estimating absorbed dose
International Nuclear Information System (INIS)
Weber, D.A.
1991-01-01
The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals and the physical properties of radionuclides to obtain dose estimates. This tutorial presents a review of the MIRD schema, the derivation of the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine
Psychological methods of subjective risk estimates
International Nuclear Information System (INIS)
Zimolong, B.
1980-01-01
Reactions to situations involving risk can be divided into the following parts: perception of danger, subjective estimation of the risk, and risk-taking with respect to action. Several investigations have compared subjective estimates of risk with an objective measure of that risk. In general there was a mismatch between subjective and objective measures of risk; in particular, the objective risk involved in routine activities is most commonly underestimated. For accident prevention, this implies that attempts must be made to induce accurate subjective risk estimates by technical and behavioural measures. (orig.) [de]
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...
African Journals Online (AJOL)
2014-12-31
... speed is the most significant parameter of the wind energy. ... wind-powered generators and applied to estimate potential power output at various ... Wind and Solar Power Systems, U.S. Merchant Marine Academy Kings.
Yi, Wen; Xue, Xianghui; Reid, Iain M.; Younger, Joel P.; Chen, Jinsong; Chen, Tingdi; Li, Na
2018-04-01
Neutral mesospheric densities at a low latitude were derived for April 2011 to December 2014 using data from the Kunming meteor radar in China (25.6°N, 103.8°E). The daily mean density at 90 km was estimated using the ambipolar diffusion coefficients from the meteor radar and temperatures from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument. The seasonal variations of the meteor radar-derived density are consistent with the density from the Mass Spectrometer and Incoherent Scatter (MSIS) model, showing a dominant annual variation with a maximum during winter and a minimum during summer. A simple linear model was used to separate the effects of atmospheric density and meteor velocity on the meteor radar peak detection height. We find that a 1 km/s difference in the vertical meteor velocity yields a change of approximately 0.42 km in peak height. The strong correlation between the meteor radar density and the velocity-corrected peak height indicates that the meteor radar density estimates accurately reflect changes in neutral atmospheric density, and that meteor peak detection heights, when adjusted for meteoroid velocity, can serve as a convenient tool for measuring density variations around the mesopause. A comparison of the ambipolar diffusion coefficient and peak height observed simultaneously by two co-located meteor radars indicates that the relative errors of the daily mean ambipolar diffusion coefficient and peak height should be less than 5% and 6%, respectively, and that the absolute error of the peak height is less than 0.2 km.
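The velocity correction implied by the abstract's linear model is a one-line adjustment. Only the slope (0.42 km per km/s) comes from the abstract; the reference velocity and the sample heights below are arbitrary illustration choices.

```python
def velocity_corrected_height(peak_height_km, vertical_velocity_kms,
                              reference_velocity_kms=30.0,
                              slope_km_per_kms=0.42):
    """Remove the meteoroid-velocity dependence of the peak detection height.

    slope_km_per_kms = 0.42 is the value reported in the abstract; the
    reference velocity is an arbitrary normalisation point.
    """
    return peak_height_km - slope_km_per_kms * (
        vertical_velocity_kms - reference_velocity_kms)

# Two meteors at different velocities map to the same corrected height:
h1 = velocity_corrected_height(90.0, 30.0)    # 90.0 km
h2 = velocity_corrected_height(90.42, 31.0)   # also 90.0 km
```

After this adjustment, residual variations of the corrected peak height track neutral density changes around the mesopause, which is what makes it useful as a density proxy.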
Estimation methods for special nuclear materials holdup
International Nuclear Information System (INIS)
Pillay, K.K.S.; Picard, R.R.
1984-01-01
The potential value of statistical models for the estimation of residual inventories of special nuclear materials was examined using holdup data from processing facilities and through controlled experiments. Although the measurement of hidden inventories of special nuclear materials in large facilities is a challenging task, reliable estimates of these inventories can be developed through a combination of good measurements and the use of statistical models. 7 references, 5 figures
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
The time-dependent density matrix renormalisation group method
Ma, Haibo; Luo, Zhen; Yao, Yao
2018-04-01
Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method over the recent 15 years is reviewed in this paper. By integrating the time evolution with the sweep procedures in density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of the quantum dynamics of one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate the nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in the pyrazine molecule.
International Nuclear Information System (INIS)
Alves, Carolina Moura; Horodecki, Pawel; Oi, Daniel K. L.; Kwek, L. C.; Ekert, Artur K.
2003-01-01
We present a method of direct estimation of important properties of a shared bipartite quantum state, within the ''distant laboratories'' paradigm, using only local operations and classical communication. We apply this procedure to spectrum estimation of shared states, and locally implementable structural physical approximations to incompletely positive maps. This procedure can also be applied to the estimation of channel capacity and measures of entanglement
Urban birds in the Sonoran Desert: estimating population density from point counts
Directory of Open Access Journals (Sweden)
Karina Johnston López
2015-01-01
We conducted bird surveys in Hermosillo, Sonora using distance sampling to characterize detection functions at point-transects for native and non-native urban birds in a desert environment. From March to August 2013 we sampled 240 plots in the city and its surroundings; each plot was visited three times. Our purpose was to provide information for a rapid assessment of bird density in this region by using point counts. We identified 72 species, including six non-native species. Sixteen species had sufficient detections to accurately estimate the parameters of the detection functions. To illustrate the estimation of density from bird count data using our inferred detection functions, we estimated the density of the Eurasian Collared-Dove (Streptopelia decaocto) under two different levels of urbanization: highly urbanized (90-100% urban impact) and moderately urbanized (39-50% urban impact) zones. Density of S. decaocto in the highly-urbanized and moderately-urbanized zones was 3.97±0.52 and 2.92±0.52 individuals/ha, respectively. By using our detection functions, avian ecologists can efficiently reallocate the time and effort regularly spent estimating detection distances to increase the number of sites surveyed and to collect other relevant ecological information.
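A point-count density estimate of the kind described can be sketched as follows. The half-normal detection function, its scale parameter, the truncation radius, and the counts are all hypothetical illustration values, not the survey's fitted parameters.

```python
import math

# Point-transect (point count) density estimation sketch with a
# half-normal detection function g(r) = exp(-r^2 / (2 sigma^2)).
# sigma, w, and the counts below are hypothetical illustration values.

def avg_detection_prob(sigma, w):
    """Average detection probability within truncation radius w (closed form)."""
    # P = (2 / w^2) * integral_0^w r * exp(-r^2 / (2 sigma^2)) dr
    return (2.0 * sigma**2 / w**2) * (1.0 - math.exp(-w**2 / (2.0 * sigma**2)))

def density_per_ha(total_count, n_points, sigma, w):
    """Individuals per hectare from counts at n_points point transects."""
    p = avg_detection_prob(sigma, w)
    surveyed_m2 = n_points * math.pi * w**2  # total surveyed area, m^2
    return total_count / (p * surveyed_m2) * 10_000.0  # convert m^-2 to ha^-1

sigma, w = 35.0, 80.0  # detection scale and truncation radius, metres (assumed)
d = density_per_ha(total_count=150, n_points=240, sigma=sigma, w=w)
```

The closed form for the average detection probability comes from integrating the half-normal function against the triangular distribution of distances on a point transect; dividing raw counts by it corrects for birds present but undetected.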
Multi-objective mixture-based iterated density estimation evolutionary algorithms
Thierens, D.; Bosman, P.A.N.
2001-01-01
We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.
Eurasian otter (Lutra lutra) density estimate based on radio tracking and other data sources
Czech Academy of Sciences Publication Activity Database
Quaglietta, L.; Hájková, Petra; Mira, A.; Boitani, L.
2015-01-01
Vol. 60, No. 2 (2015), pp. 127-137. ISSN 2199-2401. R&D Projects: GA AV ČR KJB600930804. Institutional support: RVO:68081766. Keywords: Lutra lutra; Density estimation; Edge effect; Known-to-be-alive; Linear habitats; Sampling scale. Subject RIV: EG - Zoology
The Wegner Estimate and the Integrated Density of States for some ...
Indian Academy of Sciences (India)
The integrated density of states (IDS) for random operators is an important function describing many physical characteristics of a random system. Properties of the IDS are derived from the Wegner estimate that describes the influence of finite-volume perturbations on a background system. In this paper, we present a simple ...
Statistical Methods for Estimating the Uncertainty in the Best Basis Inventories
International Nuclear Information System (INIS)
WILMARTH, S.R.
2000-01-01
This document describes the statistical methods used to determine sample-based uncertainty estimates for the Best Basis Inventory (BBI). For each waste phase, the equation for the inventory of an analyte in a tank is Inventory (Kg or Ci) = Concentration x Density x Waste Volume. The total inventory is the sum of the inventories in the different waste phases. Using tank sample data, statistical methods are used to obtain estimates of the mean concentration of an analyte, the density of the waste, and their standard deviations. The volumes of waste in the different phases, and their standard deviations, are estimated based on other types of data. The three estimates are multiplied to obtain the inventory estimate. The standard deviations are combined to obtain a standard deviation of the inventory. The uncertainty estimate for the BBI is the approximate 95% confidence interval on the inventory.
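The inventory equation and the combination of standard deviations can be sketched with first-order error propagation, in which the relative variances of independent factors add in quadrature. This is a minimal sketch of that arithmetic; all numeric values are hypothetical, not tank data.

```python
import math

# Sketch of the BBI inventory calculation and uncertainty combination:
#   Inventory = Concentration x Density x Waste Volume,
# with relative variances of independent factors adding in quadrature
# (first-order propagation). All numeric values are hypothetical.

def inventory_with_sd(conc, sd_conc, dens, sd_dens, vol, sd_vol):
    inv = conc * dens * vol
    rel_var = (sd_conc / conc) ** 2 + (sd_dens / dens) ** 2 + (sd_vol / vol) ** 2
    return inv, inv * math.sqrt(rel_var)

# Hypothetical analyte: 2.0e-4 kg analyte per kg waste (10% rel. sd),
# waste density 1.4 kg/L (5% rel. sd), volume 3.8e6 L (5% rel. sd).
inv, sd = inventory_with_sd(2.0e-4, 2.0e-5, 1.4, 0.07, 3.8e6, 1.9e5)
ci95 = (inv - 1.96 * sd, inv + 1.96 * sd)  # approximate 95% interval, kg
```

Multiplying the three mean estimates gives the inventory; the square root of the summed relative variances scales it into an inventory standard deviation, from which the approximate 95% interval follows.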
System and method for traffic signal timing estimation
Dumazert, Julien; Claudel, Christian G.
2015-01-01
A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
System and method for traffic signal timing estimation
Dumazert, Julien
2015-12-30
A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim Ali Ali
2016-01-01
application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The third application
Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J
2012-07-01
1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and times of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
International Nuclear Information System (INIS)
Sorini, D.
2017-01-01
Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the ''light-cone effect'' and will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ∼ 0.80 h Mpc⁻¹ and within 10% up to k ∼ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.
Hickson, Dylan; Boivin, Alexandre; Daly, Michael G.; Ghent, Rebecca; Nolan, Michael C.; Tait, Kimberly; Cunje, Alister; Tsai, Chun An
2018-05-01
The variations in near-surface properties and regolith structure of asteroids are currently not well constrained by remote sensing techniques. Radar is a useful tool for such determinations of Near-Earth Asteroids (NEAs) as the power of the signal reflected from the surface depends on the bulk density, ρ_bd, and dielectric permittivity. In this study, high-precision complex permittivity measurements of powdered aluminum oxide and dunite samples are used to characterize the change in the real part of the permittivity with the bulk density of the sample. In this work, we use silica aerogel for the first time to increase the void space in the samples (and decrease the bulk density) without significantly altering the electrical properties. We fit various mixing equations to the experimental results. The Looyenga-Landau-Lifshitz mixing formula has the best fit, and the Lichtenecker mixing formula, which is typically used to approximate planetary regolith, does not model the results well. We find that the Looyenga-Landau-Lifshitz formula adequately matches lunar regolith permittivity measurements, and we incorporate it into an existing model for obtaining asteroid regolith bulk density from radar returns, which is then used to estimate the bulk density in the near surface of NEAs (101955) Bennu and (25143) Itokawa. Constraints on the material properties appropriate for either asteroid give average estimates of ρ_bd = 1.27 ± 0.33 g/cm³ for Bennu and ρ_bd = 1.68 ± 0.53 g/cm³ for Itokawa. We conclude that our data suggest that the Looyenga-Landau-Lifshitz mixing model, in tandem with an appropriate radar scattering model, is the best method for estimating bulk densities of regoliths from radar observations of airless bodies.
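For a two-phase solid/vacuum mixture, the Looyenga-Landau-Lifshitz relation is eps_eff^(1/3) = phi * eps_solid^(1/3) + (1 - phi), which can be inverted for the solid volume fraction and hence the bulk density. A minimal sketch follows; the grain permittivity and grain density are assumed illustration values, not the paper's fitted ones.

```python
# Looyenga-Landau-Lifshitz (LLL) mixing for a solid + vacuum mixture:
#   eps_eff^(1/3) = phi * eps_solid^(1/3) + (1 - phi) * eps_vac^(1/3),
# with eps_vac = 1. Inverting for the solid volume fraction phi gives the
# bulk density rho_bd = phi * rho_grain. eps_solid and rho_grain below
# are assumed illustration values, not the paper's measurements.

def lll_effective_permittivity(phi, eps_solid):
    """Forward LLL mixing: effective permittivity of a solid/vacuum mix."""
    return (phi * eps_solid ** (1.0 / 3.0) + (1.0 - phi)) ** 3

def bulk_density_from_permittivity(eps_eff, eps_solid, rho_grain):
    """Invert LLL for the solid volume fraction, then scale by grain density."""
    phi = (eps_eff ** (1.0 / 3.0) - 1.0) / (eps_solid ** (1.0 / 3.0) - 1.0)
    return phi * rho_grain

eps_solid, rho_grain = 7.0, 3.3  # assumed grain permittivity and density (g/cm^3)
eps_eff = lll_effective_permittivity(0.40, eps_solid)       # forward model
rho_bd = bulk_density_from_permittivity(eps_eff, eps_solid, rho_grain)
```

The round trip (forward mixing, then inversion) recovers the assumed bulk density, which is the property that makes the formula usable for estimating regolith density from a radar-derived permittivity.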
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-03-29
In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented via a decentralized modeling approach and distributed observer design in future research.
A Qualitative Method to Estimate HSI Display Complexity
International Nuclear Information System (INIS)
Hugo, Jacques; Gertman, David
2013-01-01
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation
A Qualitative Method to Estimate HSI Display Complexity
Energy Technology Data Exchange (ETDEWEB)
Hugo, Jacques; Gertman, David [Idaho National Laboratory, Idaho (United States)
2013-04-15
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.
Estimation of energy density of Li-S batteries with liquid and solid electrolytes
Li, Chunmei; Zhang, Heng; Otaegui, Laida; Singh, Gurpreet; Armand, Michel; Rodriguez-Martinez, Lide M.
2016-09-01
With the exponential growth of technology in mobile devices and the rapid expansion of electric vehicles into the market, it appears that the energy density of state-of-the-art Li-ion batteries (LIBs) cannot satisfy the practical requirements. Sulfur has been one of the best cathode material choices due to its high charge storage (1675 mAh g⁻¹), natural abundance and easy accessibility. In this paper, calculations are performed for different cell design parameters such as the active material loading, the amount/thickness of electrolyte, the sulfur utilization, etc. to predict the energy density of Li-S cells based on liquid, polymeric and ceramic electrolytes. The calculations demonstrate that, with current technology, the Li-S battery is most likely to be competitive with LIBs in gravimetric energy density, but not volumetric energy density. Furthermore, the cells with polymer and thin ceramic electrolytes show promising potential in terms of high gravimetric energy density, especially the cells with the polymer electrolyte. This estimation study of Li-S energy density can serve as good guidance for controlling the key design parameters in order to obtain the desired energy density at cell level.
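A cell-level gravimetric energy density estimate of the kind described can be sketched as follows. Only the 1675 mAh/g theoretical sulfur capacity comes from the abstract; the utilization, average voltage, and component masses are hypothetical design assumptions.

```python
# Cell-level gravimetric energy density sketch for a Li-S cell:
#   E [Wh/kg] = capacity * utilization * sulfur mass * average voltage
#               / total cell mass.
# The 1675 mAh/g theoretical sulfur capacity is from the text; every
# other parameter below is a hypothetical design assumption.

Q_S = 1675.0        # mAh per g sulfur (theoretical)
utilization = 0.70  # fraction of theoretical capacity delivered (assumed)
v_avg = 2.1         # average discharge voltage, V (assumed)

m_sulfur = 2.0      # g of sulfur in the cathode (assumed)
m_other = 8.0       # g: Li anode, electrolyte, foils, casing (assumed)
m_cell = m_sulfur + m_other

energy_wh = Q_S * utilization * m_sulfur * v_avg / 1000.0  # cell energy, Wh
e_grav = energy_wh * 1000.0 / m_cell                       # Wh per kg of cell
```

This is the arithmetic behind such estimations: the non-active mass (electrolyte above all) dominates the denominator, which is why the electrolyte amount/thickness is one of the key design parameters the paper varies.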
Directory of Open Access Journals (Sweden)
Yuqi Guo
2017-08-01
In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the freeway network completely and thus only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, by using the well-known cell transmission model (CTM), the urban freeway network is modeled in the way of distributed systems. Secondly, based on the model, a decentralized observer is designed. With the help of the Lyapunov function and S-procedure theory, the observer gains are computed by using the linear matrix inequality (LMI) technique. So, the traffic densities of the whole road network can be estimated by the designed observer. Finally, this method is applied to the outer ring of Beijing's second ring road and experimental results demonstrate the effectiveness and applicability of the proposed approach.
Canepa, Edward S.; Claudel, Christian G.
2012-01-01
This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.
Directory of Open Access Journals (Sweden)
Chinthaka GOONERATNE
2008-04-01
Hyperthermia treatment has been gaining momentum in the past few years as a possible method to manage cancer. Cancer cells differ from normal cells in many ways, including how they react to heat. Due to this difference it is possible for hyperthermia treatment to destroy cancer cells without harming the healthy normal cells surrounding the tumor. Magnetic particles injected into the body generate heat by hysteresis loss when a time-varying external magnetic field is applied. Successful treatment depends on how efficiently the heat is controlled, so it is very important to estimate the magnetic fluid density in the body. We report the experimental apparatus designed for testing, numerical analysis, and results obtained by experimentation using a simple yet novel and minimally invasive needle-type spin-valve giant magnetoresistance (SV-GMR) sensor to estimate low-concentration magnetic fluid weight density and to detect magnetic fluid in a reference medium.
Canepa, Edward S.
2012-09-01
This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.
Kittisuwan, Pichid
2015-03-01
The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, so image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
Method for providing a low density high strength polyurethane foam
Whinnery, Jr., Leroy L.; Goods, Steven H.; Skala, Dawn M.; Henderson, Craig C.; Keifer, Patrick N.
2013-06-18
Disclosed is a method for making a polyurethane closed-cell foam material exhibiting a bulk density below 4 lbs/ft³ and high strength. The present embodiment uses the reaction product of a modified MDI and a sucrose/glycerine based polyether polyol resin wherein a small measured quantity of the polyol resin is "pre-reacted" with a larger quantity of the isocyanate in a defined ratio such that when the necessary remaining quantity of the polyol resin is added to the "pre-reacted" resin together with a tertiary amine catalyst and water as a blowing agent, the polymerization proceeds slowly enough to provide a stable foam body.
Linear density response function in the projector augmented wave method
DEFF Research Database (Denmark)
Yan, Jun; Mortensen, Jens Jørgen; Jacobsen, Karsten Wedel
2011-01-01
We present an implementation of the linear density response function within the projector-augmented wave method with applications to the linear optical and dielectric properties of solids, surfaces, and interfaces. The response function is represented in plane waves while the single...... functions of Si, C, SiC, AlP, and GaAs compare well with previous calculations. While optical properties of semiconductors, in particular excitonic effects, are generally not well described by ALDA, we obtain excellent agreement with experiments for the surface loss function of graphene and the Mg(0001...
Comparison of direct and precipitation methods for the estimation of ...
African Journals Online (AJOL)
Background: There is an increase in the use of direct assays for the analysis of high- and low-density lipoprotein cholesterol by clinical laboratories despite differences in performance characteristics with conventional precipitation methods. Calculation of low-density lipoprotein cholesterol in precipitation methods is based on total ...
International Nuclear Information System (INIS)
Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.
2004-01-01
Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins...Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. 1. Rickets, Chris A., "A TSP Software Maintenance...Life Cycle", CrossTalk, March 2005. 2. Koch, Alan S., "TSP Can Be the Building Blocks for CMMI", CrossTalk, March 2005. 3. Hodgins, Brad, Rickets
Joint estimation of crown of thorns (Acanthaster planci) densities on the Great Barrier Reef
Directory of Open Access Journals (Sweden)
M. Aaron MacNeil
2016-08-01
Crown-of-thorns starfish (CoTS; Acanthaster spp.) are an outbreaking pest among many Indo-Pacific coral reefs that cause substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and 95% uncertainty intervals), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate that CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014.
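Step (1) above lets raw counts be corrected for imperfect detection: if each animal present is seen with probability p, the corrected density is D = n / (A * p). A minimal sketch follows; the 0.82 detection probability is the median reported in the abstract, while the count and surveyed area are hypothetical.

```python
# Detectability-corrected density sketch. p = 0.82 is the median adult
# CoTS detectability reported in the abstract; the observed count and
# surveyed area below are hypothetical illustration values.

def corrected_density(n_observed, area_ha, p_detect):
    """Density per hectare, correcting a raw count for detection < 1."""
    return n_observed / (area_ha * p_detect)

d_hat = corrected_density(n_observed=36, area_ha=1.0, p_detect=0.82)  # adults/ha
```

Because p < 1, the corrected density always exceeds the raw count per unit area; ignoring detectability would systematically underestimate outbreak densities.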
Estimation method for volumes of hot spots created by heavy ions
International Nuclear Information System (INIS)
Kanno, Ikuo; Kanazawa, Satoshi; Kajii, Yuji
1999-01-01
As a ratio of the volumes of hot spots to cones having the same lengths and bottom radii as the hot spots, a simple and convenient method for estimating the volumes of hot spots is described. This calculation method is useful for the study of the damage-producing mechanism in hot spots, and is also convenient for the estimation of the electron-hole densities in plasma columns created by heavy ions in semiconductor detectors. (author)
Bin mode estimation methods for Compton camera imaging
International Nuclear Information System (INIS)
Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.
2014-01-01
We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods
Directory of Open Access Journals (Sweden)
Aboalhasan Fathabadi
2017-02-01
3000 times. Sediment rating curve equations were fitted to each sampled suspended sediment and discharge data set. Using these sediment rating curves and their residuals, suspended sediment concentrations were calculated for the test data. Finally, using the 2.5 and 97.5 percentiles of the B bootstrap realizations, 95% bootstrap prediction intervals were computed. Results and Discussion: Results showed that the Motorkhane and MiyaneTonelShomare 7 stations were best fitted by a sigmoid function, while the Stor and Glinak stations were best fitted by second-order polynomial and linear functions, respectively. The first 50 of the B bootstrapped curves were plotted for all stations; these plots showed that the bootstrapped curves were more scattered than the observed data. The suspended sediment curve parameters were estimated more accurately where suspended sediment was sampled more often, as a result of reduced uncertainty in the estimated suspended sediment concentration due to parameter uncertainty. In addition to sampling density, the uncertainty of the bootstrapped curves depends on the curve shape. For the GLUE methodology, to assess the impact of threshold values on the uncertainty results, threshold values were systematically changed from 0.1 to 0.45. Study results showed that the 95% confidence intervals are sensitive to the selected threshold values and that higher threshold values result in wider 95% confidence intervals. However, the widest 95% confidence intervals obtained by the GLUE method (when the threshold value was set to 0.1) were smaller than those obtained by bootstrap. Conclusions: The uncertainty of sediment rating curves was addressed in this study by considering two different procedures based on the GLUE and bootstrap methods for four stations in the Sefidrod watershed. Results showed that a nonlinear equation fitted the log-transformed values of sediment concentration and discharge better than a linear equation. Uncertainty results using GLUE depend on the chosen threshold values. As threshold
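The bootstrap percentile interval described above can be sketched for a simple power-law rating curve C = a·Q^b fitted by OLS on log-transformed data. The discharge/concentration pairs, B = 1000 resamples, and the prediction discharge are illustrative assumptions (the study used B = 3000 and station-specific curve forms).

```python
import math
import random

# Bootstrap percentile prediction interval sketch for a sediment rating
# curve C = a * Q^b, fitted by OLS on log-transformed data. The data
# pairs, B = 1000, and q_star are illustrative assumptions (the study
# resampled 3000 times and used station-specific curve shapes).

random.seed(0)
data = [(5.0, 40.0), (12.0, 150.0), (20.0, 310.0), (35.0, 700.0),
        (50.0, 1200.0), (80.0, 2300.0), (110.0, 3600.0), (150.0, 5200.0)]

def fit_loglog(pairs):
    """OLS fit of log C on log Q; returns (a, b) for C = a * Q**b."""
    xs = [math.log(q) for q, _ in pairs]
    ys = [math.log(c) for _, c in pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return math.exp(my - b * mx), b

q_star, B = 60.0, 1000
preds = []
while len(preds) < B:
    sample = [random.choice(data) for _ in data]  # resample with replacement
    if len({q for q, _ in sample}) < 2:
        continue  # degenerate resample: slope undefined, draw again
    a, b = fit_loglog(sample)
    preds.append(a * q_star ** b)
preds.sort()
lo, hi = preds[int(0.025 * B)], preds[int(0.975 * B)]  # 95% percentile interval

a_full, b_full = fit_loglog(data)  # full-data fit for comparison
```

Each bootstrap realization refits the curve on a resampled data set; the spread of the B predictions at a given discharge gives the prediction interval, and denser sampling shrinks it, matching the abstract's observation.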
Empirical methods for estimating future climatic conditions
International Nuclear Information System (INIS)
Anon.
1990-01-01
Applying the empirical approach permits the derivation of estimates of the future climate that are nearly independent of conclusions based on theoretical (model) estimates. This creates an opportunity to compare these results with those derived from the model simulations of the forthcoming changes in climate, thus increasing confidence in areas of agreement and focusing research attention on areas of disagreements. The premise underlying this approach for predicting anthropogenic climate change is based on associating the conditions of the climatic optimums of the Holocene, Eemian, and Pliocene with corresponding stages of the projected increase of mean global surface air temperature. Provided that certain assumptions are fulfilled in matching the value of the increased mean temperature for a certain epoch with the model-projected change in global mean temperature in the future, the empirical approach suggests that relationships leading to the regional variations in air temperature and other meteorological elements could be deduced and interpreted based on use of empirical data describing climatic conditions for past warm epochs. Considerable care must be taken, of course, in making use of these spatial relationships, especially in accounting for possible large-scale differences that might, in some cases, result from different factors contributing to past climate changes than future changes and, in other cases, might result from the possible influences of changes in orography and geography on regional climatic conditions over time
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads GrÃ¦sbÃ¸ll; Jensen, SÃ¸ren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods...
Automated voxelization of 3D atom probe data through kernel density estimation
International Nuclear Information System (INIS)
Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna
2015-01-01
Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjustment of voxel size as an input parameter, through visual supervision, making the final outcome user dependent, reliant on heuristic knowledge and potentially prone to error. This work utilizes kernel density estimators to select an optimal voxel size in an unsupervised manner to perform feature selection, in particular targeting resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ/γ′ interface in a Ni-Al-Cr superalloy. Highlights: • Develop an approach for standardizing aspects of atom probe reconstruction. • Use kernel density estimators to select optimal voxel sizes in an unsupervised manner. • Perform interfacial analysis of a Ni-Al-Cr superalloy using the new automated approach. • Optimize voxel size to preserve the feature of interest and minimize loss/noise.
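Voxel-size selection here plays the role that bandwidth selection plays in ordinary kernel density estimation. A minimal 1-D sketch of unsupervised bandwidth choice via Scott's rule follows; the synthetic "matrix plus interface" data are an assumption for illustration, not the authors' APT pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-D "atom coordinates": a broad matrix phase plus a narrow
# solute-enriched interface, standing in for APT data.
x = np.concatenate([rng.normal(0.0, 5.0, 900), rng.normal(10.0, 0.5, 100)])

# Scott's rule gives an unsupervised bandwidth, analogous in spirit to
# choosing a voxel size without visual supervision.
h = x.std(ddof=1) * x.size ** (-1 / 5)

grid = np.linspace(-15, 15, 601)
# Gaussian KDE evaluated on the grid (vectorized over all data points).
dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
dens /= x.size * h * np.sqrt(2 * np.pi)

peak = grid[np.argmax(dens)]
print(f"bandwidth h = {h:.2f}, density peak near x = {peak:.1f}")
```

The densest region of the smoothed estimate locates the dominant phase without any manually tuned smoothing parameter.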
portfolio optimization based on nonparametric estimation methods
Directory of Open Access Journals (Sweden)
mahsa ghandehari
2017-03-01
Full Text Available One of the major issues investors face in capital markets is deciding which stocks to invest in and selecting an optimal portfolio. This process is carried out through risk and expected-return assessment. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected, during the winter of 1392, from the top 50 companies of the Tehran Stock Exchange, with returns considered from April 1388 to June 1393. The results of this study show the superiority of the nonparametric method over the linear programming method; the nonparametric method is also much faster.
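The nonparametric (historical) CVaR used as the risk measure can be computed directly from a return sample. A minimal sketch with synthetic returns standing in for the Tehran Stock Exchange data; the return distribution and the equal-weight portfolio are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic monthly returns for 15 assets over 60 months (illustrative only).
returns = rng.normal(0.01, 0.05, size=(60, 15))

def cvar(portfolio_returns, alpha=0.95):
    """Nonparametric (historical) CVaR: mean loss in the worst (1-alpha) tail."""
    losses = -portfolio_returns
    var = np.quantile(losses, alpha)      # historical value at risk
    return losses[losses >= var].mean()   # average of the tail losses

w = np.full(15, 1 / 15)                   # equal-weight portfolio
port = returns @ w
print(f"95% historical CVaR: {cvar(port):.4f}")
```

In an optimization setting this sample CVaR would be minimized over the weights w, subject to a target expected return.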
Kernel density estimation and transition maps of Moldavian Neolithic and Eneolithic settlement
Directory of Open Access Journals (Sweden)
Robin Brigand
2018-04-01
Full Text Available The data presented in this article are related to the research article entitled "Neo-Eneolithic settlement pattern and salt exploitation in Romanian Moldavia" (Brigand and Weller, 2018) [1]. Kernel density estimation (KDE) is used in order to move beyond the discrete distribution of sites and to enable us to work on a continuous surface that reflects the intensity of the occupation in the space. Maps of density per period – Neolithic I (Cris), Neolithic II (LBK), Eneolithic I (Precucuteni), Eneolithic II (Cucuteni A), Eneolithic III-IV (Cucuteni A-B and B) – are used to create maps of density difference (Figs. 1–4) in order to analyse the dynamic (either non-existent, negative or positive) between two chronological sequences.
Moser , Gabriele; Zerubia , Josiane; Serpico , Sebastiano B.
2006-01-01
In remotely sensed data analysis, a crucial problem is the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cov...
On the estimation of the current density in space plasmas: Multi- versus single-point techniques
Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco
2017-06-01
Thanks to multi-spacecraft missions, it has recently become possible to directly estimate the current density in space plasmas, by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separations. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high-resolution measurements with inter-spacecraft separations down to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter corresponds to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton-scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft, with respect to multi-spacecraft missions, in the evaluation of the current density.
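The linear-gradient assumption behind the curlometer can be illustrated with a least-squares fit of the field gradient from four points. This is a generic sketch, not the exact reciprocal-vector formulation used with Cluster/MMS; the tetrahedron geometry and the test field are assumptions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def curlometer(positions, B):
    """Estimate J = (curl B) / mu0 from four-point measurements, assuming
    B varies linearly across the tetrahedron (the curlometer approximation)."""
    # Fit B_k(r) = c_k + g_k . (r - r_mean) for each component k by least squares.
    A = np.hstack([np.ones((4, 1)), positions - positions.mean(axis=0)])
    coef, *_ = np.linalg.lstsq(A, B, rcond=None)
    G = coef[1:]                          # G[j, k] = dB_k / dx_j
    curl = np.array([G[1, 2] - G[2, 1],
                     G[2, 0] - G[0, 2],
                     G[0, 1] - G[1, 0]])
    return curl / MU0

# Regular tetrahedron with ~100 km separation (Cluster-like scale) and a
# synthetic linear field B = (0, c*x, 0), whose curl is (0, 0, c).
pos = 100e3 * np.array([[0, 0, 0], [1, 0, 0],
                        [0.5, np.sqrt(3) / 2, 0],
                        [0.5, np.sqrt(3) / 6, np.sqrt(2 / 3)]])
c = 1e-12  # field gradient, T/m
B = np.column_stack([np.zeros(4), c * pos[:, 0], np.zeros(4)])
J = curlometer(pos, B)
print(J)
```

For a truly linear field the estimate is exact; the underestimation discussed in the paper appears when the field varies on scales comparable to the spacecraft separation.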
Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'
International Nuclear Information System (INIS)
Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi
1996-01-01
To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'indirect estimation method for calculation error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulsed neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulsed neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)
Modification of low-density lipoprotein by different radioiodination methods
International Nuclear Information System (INIS)
Sobal, G.; Resch, U.; Sinzinger, H.
2004-01-01
Scintigraphic imaging of radiolabeled low-density lipoproteins (LDL) is an interesting tool for understanding its role in the pathomechanism of atherosclerosis. The metabolism of native LDL shows quite a different pattern and kinetics compared to that of modified LDL, which is not mediated by the classical LDL receptor and accumulates in atherosclerotic lesions to form lipid-laden foam cells. Therefore we were interested in whether radiolabelling of LDL induces structural modifications. We performed iodine labeling of LDL for scintigraphic imaging of atherosclerosis by three different methods: chloramine-T (A), iodine monochloride (B) and iodogen (C). The highest radiolabelling yield of 125I was obtained by the iodogen method (75.44±13.52%) and the lowest (49.01±12.74%) by iodine monochloride. Chloramine-T showed a labeling yield of 62.82±6.17%. The stability of the tracer was very high with all the methods, persisting up to 6 h (98.83±1.2% - 91.38±4.7%, 15 min vs 6 h after labeling). For the first time we not only investigated the influence of radiolabelling on relative electrophoretic mobility (REM), but also measured various oxidation parameters such as baseline dienes (BD), thiobarbituric acid reactive substances (TBARS), endogenous peroxides (POX) and oxidation resistance in the copper-mediated oxidation system (expressed as lag time). Furthermore, oxidation-derived fragmentation of the lipoproteins was examined with SDS-PAGE electrophoresis. Data are expressed as % change compared to native LDL before radiolabeling. BD were reduced by 32% using method (A), but increased by 33% and 47% with the monochloride (B) and iodogen (C) methods, respectively. The effect on lag time was comparable for all three methods, ranging from a 25 to 36% reduction. TBARS were strongly increased 5-7 fold by all the methods. REM was changed by all three methods. While with methods A and C we found a moderate increase in REM by 1.75 and 2.0 fold
Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.
Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko
2017-06-01
Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
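The link between aggregation, sample number, and sampling precision can be explored by simulation. A sketch assuming negative binomial counts per bulk sample; the aggregation parameter k and the simulation sizes are assumptions, not values fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def sampling_cv(mean_count, k, n_reps, n_sim=2000):
    """cv of the plot mean when counts per bulk sample follow a negative
    binomial with mean `mean_count` and aggregation parameter `k`."""
    p = k / (k + mean_count)  # numpy's (n, p) parametrization: mean = n(1-p)/p
    counts = rng.negative_binomial(k, p, size=(n_sim, n_reps))
    means = counts.mean(axis=1)
    return means.std() / means.mean()

# Densely infested plot (~90 cysts/100 g) sampled five times versus a
# more sparsely infested plot sampled seven times.
print(f"cv, 5 reps at mean 90: {sampling_cv(90, 2.0, 5):.2f}")
print(f"cv, 7 reps at mean 50: {sampling_cv(50, 2.0, 7):.2f}")
```

Increasing the number of repetitions shrinks the cv roughly as 1/sqrt(n_reps), which is why low-density plots need more bulk samples to reach the same precision.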
Thermodynamic properties of organic compounds estimation methods, principles and practice
Janz, George J
1967-01-01
Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the
Methods for reconstruction of the density distribution of nuclear power
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2015-01-01
Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly smaller in the inner area of the core. - Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values: the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities in the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, form functions of power are used. The results show that the methods have good accuracy when compared with reference values and
Probability Density Function Method for Observing Reconstructed Attractor Structure
Institute of Scientific and Technical Information of China (English)
é™†å®ä¼Ÿ; é™ˆäºšç ; å«é’
2004-01-01
The probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time that the PDF method has been put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while the PDF satisfies a Gaussian distribution when the time delay is large enough. A cluster effect mechanism is presented to explain this phenomenon. The study of the shape of the PDFs clearly indicates that the role played by the time delay is more important than that of the embedding dimension in the reconstruction. The results demonstrate that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential for the analyzed cardiac system.
System and method for correcting attitude estimation
Josselson, Robert H. (Inventor)
2010-01-01
A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
Mesin, Luca
2015-02-01
Developing a real time method to estimate generation, extinction and propagation of muscle fibre action potentials from bi-dimensional and high density surface electromyogram (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high density surface EMG (inter-electrode distance equal to 5 mm) from finite length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an EMG epoch of duration 150 ms. A new real time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.
Control and estimation methods over communication networks
Mahmoud, Magdi S
2014-01-01
This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: • an overall assessment of recent and current fault-tolerant control algorithms; • treatment of several issues arising at the junction of control and communications; • key concepts followed by their proofs and efficient computational methods for their implementation; and • simulation examples (including TrueTime simulations) to...
Directory of Open Access Journals (Sweden)
FERNANDO CARBAYO
Full Text Available ABSTRACT Land planarians (Platyhelminthes) are likely important components of the soil cryptofauna, although relevant aspects of their ecology, such as their density, remain largely unstudied. We investigated absolute and relative densities of flatworms in three patches of secondary Brazilian Atlantic rainforest in an urban environment. Two methods of sampling were carried out: one consisting of 90 hours of active search in delimited plots covering 6,000 m² over a year, and the other consisting of leaf litter extraction from a 60 m² soil area, totaling 480-600 l of leaf litter. We found 288 specimens of 16 species belonging to the genera Geobia, Geoplana, Issoca, Luteostriata, Obama, Paraba, Pasipha, Rhynchodemus, Xerapoa, and the exotic species Bipalium kewense and Dolichoplana striata. Specimens up to 10 mm long were mostly sampled only with the leaf litter extraction method. Absolute densities, calculated from data obtained with leaf litter extraction, ranged between 1.25 and 2.10 individuals per m². These values are 30 to 161 times higher than the relative densities calculated from data obtained by active search. Since the most common sampling method used in land planarian studies on species composition and faunal inventories is active search for a few hours in a locality, our results suggest that small species might be overlooked. It remains to be tested whether similar densities of this cryptofauna are also found in primary forests.
Bayesian methods to estimate urban growth potential
Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.
2017-01-01
Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners' perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States' most rapidly urbanizing regions – the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region's development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners' intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region's socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region's historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.
Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid
2018-02-01
Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, which is characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, the unknown heat flux density received by the reactive flow is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. Thereafter, the measurements are injected into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared, and the heat flux density which crosses the reactor wall is determined.
Directory of Open Access Journals (Sweden)
Wenhao Yu
Full Text Available The urban facility, one of the most important service providers, is usually represented by sets of points in GIS applications using the POI (Point of Interest) model associated with certain human social activities. Knowledge about the distribution intensity and pattern of facility POIs is of great significance in spatial analysis, including urban planning, business location choice and social recommendations. Kernel Density Estimation (KDE), an efficient spatial statistics tool for facilitating the processes above, plays an important role in spatial density evaluation, because the KDE method considers the decay impact of services and allows the enrichment of the information from a very simple input scatter plot to a smooth output density surface. However, traditional KDE is mainly based on the Euclidean distance, ignoring the fact that in an urban street network the service function of a POI is carried out over a network-constrained structure, rather than in a Euclidean continuous space. To address this issue, this study proposes a computational method for KDE on a network and adopts a new visualization method using a 3-D "wall" surface. Some real conditional factors are also taken into account in this study, such as traffic capacity, road direction and facility difference. In practice the proposed method is implemented on real POI data in Shenzhen city, China, to depict the distribution characteristics of services under the impact of multiple factors.
Local linear density estimation for filtered survival data, with bias correction
DEFF Research Database (Denmark)
Nielsen, Jens Perch; Tanggaard, Carsten; Jones, M.C.
2009-01-01
...it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a 'pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias-correction methods within our framework. The multiplicative bias-correction method proves to be the best in a simulation study comparing the performance of the considered estimators. An example concerning old-age mortality demonstrates the importance of the improvements provided.
Energy Technology Data Exchange (ETDEWEB)
Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)
2015-10-15
Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors' experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (~0%). When evaluating change in sound speed between scans, 2.7% and ~0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater's repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.
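A one-way random-effects ICC of the kind reported above can be computed from a subjects-by-repeat-reads table. A sketch on synthetic sound-speed reads; the variance components and the seed are assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy repeat-read design: 22 subjects, 2 reads each, mimicking the
# reproducibility analysis with assumed variance components.
true_speed = rng.normal(1500, 30, 22)               # subject-level sound speed (m/s)
reads = true_speed[:, None] + rng.normal(0, 8, (22, 2))  # rater noise per read

k = 2  # reads per subject
MSW = reads.var(axis=1, ddof=1).mean()              # within-subject mean square
MSB = k * reads.mean(axis=1).var(ddof=1)            # between-subject mean square
icc = (MSB - MSW) / (MSB + (k - 1) * MSW)           # one-way random-effects ICC(1,1)
print(f"ICC: {icc:.2f}")
```

With between-subject variability much larger than read-to-read noise, the ICC approaches 1, matching the "excellent agreement" interpretation used in the abstract.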
Internal Dosimetry Intake Estimation using Bayesian Methods
International Nuclear Information System (INIS)
Miller, G.; Inkret, W.C.; Martz, H.F.
1999-01-01
New methods for the inverse problem of internal dosimetry are proposed based on evaluating expectations of the Bayesian posterior probability distribution of intake amounts, given bioassay measurements. These expectation integrals are normally of very high dimension and hence impractical to use. However, the expectations can be algebraically transformed into a sum of terms representing different numbers of intakes, with a Poisson distribution of the number of intakes. This sum often rapidly converges, when the average number of intakes for a population is small. A simplified algorithm using data unfolding is described (UF code). (author)
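The posterior-expectation idea can be illustrated for a single intake with a simple importance-sampling sketch. The exponential prior, retention fraction, and noise level below are assumptions for illustration; this is not the UF code:

```python
import numpy as np

rng = np.random.default_rng(4)

# One intake of unknown magnitude I with an exponential prior; the bioassay
# measurement is m = I * r + Gaussian noise, where r is an assumed retention
# fraction at the measurement time.
r = 0.2       # assumed retention fraction
sigma = 0.5   # measurement noise (same units as the bioassay result)
m = 2.0       # observed bioassay value

I = rng.exponential(scale=10.0, size=200_000)       # draws from the prior
w = np.exp(-0.5 * ((m - I * r) / sigma) ** 2)       # Gaussian likelihood weights
posterior_mean = np.sum(w * I) / np.sum(w)          # Monte Carlo estimate of E[I | m]
print(f"posterior mean intake: {posterior_mean:.2f}")
```

The same weighting gives other posterior expectations (variance, credible bounds) by replacing I in the numerator; the paper's algebraic decomposition over Poisson-distributed numbers of intakes is what keeps the analogous integrals tractable in higher dimension.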
International Nuclear Information System (INIS)
Tanaka, Toshiyuki; Nakajima, Makoto; Kobayashi, Ichizo; Toida, Masaru; Fukuda, Katsumi; Sato, Tatsuro; Nonaka, Katsumi; Gozu, Keisuke
2007-01-01
The authors have developed a new method of constructing high density bentonite barriers by means of a wet spraying method. Using this method, backfill material can be placed in narrow upper and side parts in a low-level radioactive waste disposal facility. Using a new supplying machine for bentonite, spraying tests were conducted to investigate the conditions during construction. On the basis of the test results, the various parameters for the spraying method were investigated. The test results are summarized as follows: 1. The new machine supplied about twice the weight of material supplied by a screw conveyor. A dry density of sprayed bentonite 0.05 Mg/m^3 higher than that of a screw conveyor with the same water content could be achieved. 2. The dry density of sprayed bentonite at a boundary with concrete was the same as that at the center of the cross section. 3. The variation in densities of bentonite sprayed in the vertical downward and horizontal directions was small. Also, density reduction due to rebound during spraying was not seen. 4. Bentonite controlled by water content could be sprayed smoothly in the horizontal direction by a small machine. Also rebound could be collected by a machine conveying air. (author)
Directory of Open Access Journals (Sweden)
Tara Chestnut
Full Text Available Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L^-1. The highest density observed was ~3 million zoospores L^-1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd.
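The reported sampling recommendation follows the standard occupancy-model relation between a per-sample detection probability p and the number of independent replicate samples n, P(detect) = 1 - (1 - p)^n. The sketch below back-solves an illustrative per-sample probability from the "95% chance with four samples" figure; that inversion is for illustration and is not a value reported in the study.

```python
import math

def cumulative_detection(p_per_sample, n_samples):
    """P(at least one detection), assuming independent replicate samples."""
    return 1.0 - (1.0 - p_per_sample) ** n_samples

def samples_needed(p_per_sample, target=0.95):
    """Smallest n whose cumulative detection probability reaches target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_sample))

# Back-solve the per-sample probability implied by a 95% chance of
# detection with four samples (illustrative inversion, ~0.53 per sample):
p = 1.0 - (1.0 - 0.95) ** (1.0 / 4.0)
```

With a weaker per-sample probability the required replicate count grows quickly, which is the practical reason the abstract quotes both a four-sample and a five-sample protocol.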
Comparison of methods for estimating carbon in harvested wood products
International Nuclear Information System (INIS)
Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel
2009-01-01
There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method) and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from the estimates obtained with the other methods, because this method tends to overestimate carbon accumulation with the stock-change and the production approaches and tends to underestimate carbon accumulation with the atmospheric-flow approach. The estimates of carbon accumulation provided by the GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in that order. A sensitivity analysis showed that using the "best" available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)
New methods of testing nonlinear hypothesis using iterative NLLS estimator
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses methods of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test a nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals is also proposed. In addition, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models, with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
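A generic Wald test for a nonlinear restriction on NLLS estimates can be sketched as follows. The model, data, and restriction h(theta) = a*b - 1 = 0 are synthetic, and this is the textbook delta-method form of the statistic, not the modified statistic derived in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

# Synthetic data from y = a * exp(b * x) with true a = 2, b = 0.5,
# so the (made-up) restriction a * b = 1 holds exactly.
rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 80)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0.0, 0.05, x.size)

def model(x, a, b):
    return a * np.exp(b * x)

theta, cov = curve_fit(model, x, y, p0=[1.0, 1.0])   # NLLS fit + covariance
a_hat, b_hat = theta

h = np.array([a_hat * b_hat - 1.0])      # restriction value h(theta_hat)
H = np.array([[b_hat, a_hat]])           # Jacobian dh/dtheta (delta method)
wald = float(h @ np.linalg.inv(H @ cov @ H.T) @ h)   # Wald statistic
p_value = float(chi2.sf(wald, df=1))     # chi-square with 1 restriction
```

Under the null, the statistic is asymptotically chi-square with degrees of freedom equal to the number of restrictions; the paper's variants replace this covariance estimate or the residuals used to build it.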
Novel method for quantitative estimation of biofilms
DEFF Research Database (Denmark)
Syal, Kirtimaan
2017-01-01
Biofilm protects bacteria from stress and hostile environments. The crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, the biofilm layer formed at the liquid-air interphase, known as a pellicle, is extremely sensitive to its washing...... and staining steps. Early phase biofilms are also prone to damage by the latter steps. In bacteria like mycobacteria, biofilm formation occurs largely at the liquid-air interphase, which is susceptible to loss. In the proposed protocol, loss of such a biofilm layer was prevented. In place of inverting...... and discarding the media, which can lead to the loss of the aerobic biofilm layer in the CV assay, media was removed from the formed biofilm with the help of a syringe and the biofilm layer was allowed to dry. The staining and washing steps were avoided, and an organic solvent, tetrahydrofuran (THF), was deployed
Vibrational Spectroscopic Studies of Tenofovir Using Density Functional Theory Method
Directory of Open Access Journals (Sweden)
G. R. Ramkumaar
2013-01-01
Full Text Available A systematic vibrational spectroscopic assignment and analysis of tenofovir has been carried out by using FTIR and FT-Raman spectral data. The vibrational analysis was aided by electronic structure calculations: hybrid density functional methods (B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p)). Molecular equilibrium geometries, electronic energies, IR intensities, and harmonic vibrational frequencies have been computed. The assignments proposed based on the experimental IR and Raman spectra have been reviewed and a complete assignment of the observed spectra has been proposed. The UV-visible spectrum of the compound was also recorded, and electronic properties such as the HOMO and LUMO energies were determined by the time-dependent DFT (TD-DFT) method. The geometrical and thermodynamical parameters and absorption wavelengths were compared with the experimental data. NMR calculations based on B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p) were also performed and used to assign the 13C and 1H NMR chemical shifts of tenofovir.
International Nuclear Information System (INIS)
Yamaguchi, T.; Hatono, T.; Izumi, S.; Nishihara, S.; Kimura, K.; Torigoe, H.; Tanaka, T.; Miyaji, K.; Hara, Y.; Ueda, A.; Shigei, F.
2008-01-01
The sweet potato weevil, Cylas formicarius (Fabricius) is a major insect pest of the sweet potato, Ipomoea batatas (L.) Lam. throughout the tropical and subtropical regions of the world. We estimated the entire adult male population of C. formicarius at its low-density period on Kikai Island, Kagoshima Pref., Japan. The population of adult males at the high-density period in September was about 5 times larger than that at its low-density period in May, both of which were estimated by Yamamura's method. Using this calculation in combination with an estimate of the maximal population size (4 × 10^6) by Sugimoto et al. in 1994, the total number of male weevils at their low-density period can be assumed to be less than 8 × 10^5
Andriopoulou, M.; Nakamura, R.; Torkar, K.; Baumjohann, W.; Torbert, R. B.; Lindqvist, P.-A.; Khotyaintsev, Y. V.; Dorelli, John Charles; Burch, J. L.; Russell, C. T.
2016-01-01
Each spacecraft of the recently launched Magnetospheric Multiscale (MMS) mission is equipped with Active Spacecraft Potential Control (ASPOC) instruments, which control the spacecraft potential in order to reduce spacecraft charging effects. ASPOC typically reduces the spacecraft potential to a few volts. On several occasions during the commissioning phase of the mission, the ASPOC instruments were operating only on one spacecraft at a time. Taking advantage of such intervals, we derive photoelectron curves and also perform reconstructions of the uncontrolled spacecraft potential for the spacecraft with active control and estimate the electron plasma density during those periods. We also establish the criteria under which our methods can be applied.
Novel Method for 5G Systems NLOS Channels Parameter Estimation
Directory of Open Access Journals (Sweden)
Vladeta Milenkovic
2017-01-01
Full Text Available For the development of new 5G systems to operate in mm bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach to NLOS channel parameter estimation is presented. Estimation is performed based on the level crossing rate (LCR) performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of ML and moment-method estimation approaches.
Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu
2017-08-01
Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which can extract the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized into a matrix of basis vectors with each column representing an activation pattern and a matrix of time-varying coefficients by a nonnegative matrix factorization (NMF) algorithm. The activation pattern with the highest activation intensity, which was defined as the sum of the absolute values of the time-varying coefficient curve, was considered as the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using the whole channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the electrode number. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
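The decomposition-and-selection step can be sketched with scikit-learn's NMF on synthetic envelope data: factorize a (channels x time) matrix, take the activation pattern whose time-varying coefficient has the largest summed absolute value, and keep its high-weight channels. The grid size, selection threshold, and synthetic signal model are illustrative assumptions, not the study's 128-channel setup.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for sEMG envelopes: channels 4-7 carry one smooth
# activation; the rest is small positive noise.
rng = np.random.default_rng(1)
n_channels, n_time = 16, 200
pattern = np.zeros(n_channels)
pattern[4:8] = 1.0
activation = np.abs(np.sin(np.linspace(0, np.pi, n_time)))
envelopes = np.outer(pattern, activation) + 0.01 * rng.random((n_channels, n_time))

nmf = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = nmf.fit_transform(envelopes)   # basis vectors: channel weights per pattern
Hc = nmf.components_               # time-varying coefficients per pattern

# "Activation intensity" as defined in the abstract: sum of the absolute
# values of each time-varying coefficient curve.
major = int(np.argmax(np.abs(Hc).sum(axis=1)))
weights = W[:, major]
selected = np.where(weights > 0.5 * weights.max())[0]   # high-weight channels
```

The selected channels would then feed the force-estimation model (a polynomial fit in the paper); the threshold of half the maximum weight here is an arbitrary placeholder.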
VHTRC experiment for verification test of H∞ reactivity estimation method
International Nuclear Information System (INIS)
Fujii, Yoshio; Suzuki, Katsuo; Akino, Fujiyoshi; Yamane, Tsuyoshi; Fujisaki, Shingo; Takeuchi, Motoyoshi; Ono, Toshihiko
1996-02-01
This experiment was performed at the VHTRC to acquire data for verifying the H∞ reactivity estimation method. In this report, the experimental method, the measuring circuits, and the data processing software are described in detail. (author)
Carbon footprint: current methods of estimation.
Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker
2011-07-01
Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing grievous global warming and associated consequences. Following the rule that only the measurable is manageable, measurement of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving, and the carbon footprint is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among the studies. There are disagreements in the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.
THE METHODS FOR ESTIMATING REGIONAL PROFESSIONAL MOBILE RADIO MARKET POTENTIAL
Directory of Open Access Journals (Sweden)
Y.A. Korobeynikov
2008-12-01
Full Text Available The paper presents the author's methods of estimating regional professional mobile radio market potential, which belongs to high-tech b2b markets. These methods take into consideration such market peculiarities as the great range and complexity of products, technological constraints, and the infrastructure development required for the operation of the technological systems. The paper gives an estimate of the professional mobile radio market potential in Perm region. This estimate is already used by one of the system integrators for its strategy development.
Evaluation and reliability of bone histological age estimation methods
African Journals Online (AJOL)
Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
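The state-vector augmentation idea can be sketched with a scalar toy problem: an unknown constant parameter r is appended to the state vector and estimated jointly by a Kalman filter from noisy measurements z = r * x. This is a minimal sketch of the technique, not the paper's distribution-system formulation or its measurement model.

```python
import numpy as np

rng = np.random.default_rng(2)
r_true = 0.8                                     # unknown "network" parameter
xs = rng.uniform(0.5, 1.5, 200)                  # known operating points
zs = r_true * xs + rng.normal(0.0, 0.05, 200)    # noisy measurement snapshots

state = np.array([0.0])    # augmented part of the state: the parameter estimate
P = np.array([[1.0]])      # its covariance (initial uncertainty)
R = 0.05 ** 2              # measurement noise variance

for x, z in zip(xs, zs):
    Hm = np.array([[x]])                      # measurement Jacobian for z = r * x
    S = Hm @ P @ Hm.T + R                     # innovation covariance
    K = P @ Hm.T / S                          # Kalman gain
    state = state + (K * (z - x * state[0])).ravel()
    P = (np.eye(1) - K @ Hm) @ P              # covariance update

r_hat = float(state[0])
```

Combining many snapshots is what drives P down; this mirrors the paper's point that combined measurement sets raise the effective redundancy for parameter estimation.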
A Fast LMMSE Channel Estimation Method for OFDM Systems
Directory of Open Access Journals (Sweden)
Zhou Wen
2009-01-01
Full Text Available A fast linear minimum mean square error (LMMSE channel estimation method has been proposed for Orthogonal Frequency Division Multiplexing (OFDM systems. In comparison with the conventional LMMSE channel estimation, the proposed channel estimation method does not require the statistic knowledge of the channel in advance and avoids the inverse operation of a large dimension matrix by using the fast Fourier transform (FFT operation. Therefore, the computational complexity can be reduced significantly. The normalized mean square errors (NMSEs of the proposed method and the conventional LMMSE estimation have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same with that of the conventional LMMSE method in terms of bit error rate (BER.
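The conventional LMMSE smoothing that the proposed method approximates can be sketched directly: smooth the least-squares channel estimate with the channel correlation matrix, h_lmmse = R (R + sigma^2 I)^(-1) h_ls. The exponential correlation model, noise level, and subcarrier count below are assumptions for illustration; the paper's contribution is replacing the matrix inverse with FFT operations, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64                                               # subcarriers
idx = np.arange(N)
R = 0.95 ** np.abs(idx[:, None] - idx[None, :])      # assumed channel correlation

# Draw a correlated complex channel realization consistent with R.
L = np.linalg.cholesky(R + 1e-9 * np.eye(N))
h = L @ (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

sigma2 = 0.1                                         # noise variance
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
h_ls = h + noise                                     # least-squares estimate

h_lmmse = R @ np.linalg.solve(R + sigma2 * np.eye(N), h_ls)

mse_ls = float(np.mean(np.abs(h_ls - h) ** 2))
mse_lmmse = float(np.mean(np.abs(h_lmmse - h) ** 2))
```

When the assumed correlation matches the true channel statistics, the LMMSE estimate strictly improves on LS, at the cost of the matrix solve that the paper's FFT-based method avoids.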
A Method for Estimation of Death Tolls in Disastrous Earthquake
Pai, C.; Tien, Y.; Teng, T.
2004-12-01
Fatality tolls caused by disastrous earthquakes are among the most important items of earthquake damage and losses. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, it not only makes emergency programs and disaster management more effective but also supplies critical information to plan and manage the disaster and the allotment of rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps, and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, types and usage habits of buildings, distribution of population, and socio-economic situations, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is currently the greatest in the world. In the meantime, it is easy to get complete seismic data from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake happens. Therefore, it becomes possible to estimate death tolls caused by an earthquake in Taiwan based on this preliminary information. Firstly, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give the PGA Index for each individual seismic station, according to the mainshock data of the Chi-Chi earthquake. To supply the distribution of iso-seismic intensity contours in all districts and resolve the problem of districts containing no seismic station, the Kriging interpolation method and GIS software are applied to the PGA Index and the geographical coordinates of the individual seismic stations. The population density depends on
Investigation of MLE in nonparametric estimation methods of reliability function
International Nuclear Information System (INIS)
Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo
2001-01-01
There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method, and the Product-Limit (PL) Method. These three methods have some limitations. So we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that uniquely exist. So, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL-estimator. The difference between the two is that in the new method the mass (or weight) of each observation has an influence on the others, whereas in the PL-estimator it does not
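Of the three classical methods named above, the Product-Limit (Kaplan-Meier) estimator is the easiest to sketch: at each observed failure time the survival estimate is multiplied by the fraction of at-risk units surviving, while censored units only shrink the risk set. The data below are made up, and tied failure/censoring times are assumed to list failures first.

```python
import numpy as np

def product_limit(times, observed):
    """Product-Limit (Kaplan-Meier) reliability estimate.

    times:    failure or censoring times
    observed: 1 = observed failure, 0 = right-censored
    Returns a list of (time, estimated reliability) at each failure time.
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=int)
    order = np.argsort(times, kind="stable")     # stable: keeps failures first at ties
    times, observed = times[order], observed[order]

    at_risk = len(times)
    surv = 1.0
    points = []
    for t, d in zip(times, observed):
        if d == 1:                               # failure: step the estimate down
            surv *= (at_risk - 1) / at_risk
            points.append((float(t), surv))
        at_risk -= 1                             # failure or censoring leaves the risk set
    return points

est = product_limit([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

In this example the estimate steps to 0.8 at t=2, 0.6 at t=3, and 0.3 at t=5; the censored observations at t=3 and t=8 contribute only through the shrinking risk set, which is the "censored information" the abstract's new method seeks to exploit more fully.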
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls which may be biased in terms of extrusion, avoidance, and gear-associated biases. Instead, location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m^3 and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques, density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
Density meters utilizing ionizing radiation: definitions and test methods
International Nuclear Information System (INIS)
Anon.
1981-01-01
This standard is applicable to density meters utilizing ionizing radiation, designed for the measurement of the density of liquids, slurries or fluidized solids. The standard applies to transmission-type instruments only. Reference to compliance with this standard shall identify any deviations and the reasons for such deviations. Safety aspects are not included but should fulfill the requirements of all relevant internationally accepted standards
Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data
Directory of Open Access Journals (Sweden)
Wei-Kuang Lai
2016-02-01
Full Text Available Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs), and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow in accordance with the amounts of HOs and NLUs and to estimate the traffic density in accordance with the amounts of CAs and PLUs. Then, the vehicle speeds can be estimated in accordance with the estimated traffic flows and estimated traffic densities. For vehicle speed forecasting, a back-propagation neural network algorithm is considered to predict the future vehicle speed in accordance with the current traffic information (i.e., the estimated vehicle speeds from CFVD). In the experimental environment, this study adopted the practical traffic information (i.e., traffic flow and vehicle speed) from Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program and referred to the mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results illustrated that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
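The estimation chain can be sketched in miniature: counts of HOs and NLUs map to a flow estimate, counts of CAs and PLUs map to a density estimate, and speed follows from the fundamental traffic relation v = q / k. The linear count-to-flow and count-to-density scalings (alpha, beta) are hypothetical placeholders, not the calibrated analytic models of the paper.

```python
def estimate_flow(n_handover, n_nlu, alpha=1.2):
    """Traffic flow q (veh/h) from HO and NLU counts; alpha is hypothetical."""
    return alpha * (n_handover + n_nlu)

def estimate_density(n_ca, n_plu, beta=0.05):
    """Traffic density k (veh/km) from CA and PLU counts; beta is hypothetical."""
    return beta * (n_ca + n_plu)

def estimate_speed(flow, density):
    """Fundamental relation v = q / k (km/h)."""
    return flow / density if density > 0 else 0.0

q = estimate_flow(800, 200)      # ~1200 veh/h with these placeholder counts
k = estimate_density(300, 100)   # ~20 veh/km
v = estimate_speed(q, k)         # ~60 km/h
```

The paper then feeds such speed estimates into a back-propagation neural network for forecasting; the sketch covers only the estimation half of the pipeline.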
The determination of bulk (apparent) density of plant fibres by density method
International Nuclear Information System (INIS)
Sharifah Hanisah Syed Abd Aziz; Raja Jamal Raja hedar; Zahid Abdullah
2004-01-01
The absolute density of plant fibres excludes all pores and lumen and is therefore a measure of the solid matter of the fibres. On the other hand, the bulk density, which is being discussed here, includes all the solid matter and the pores of the fibres. In this work, the apparent density of the fibre was measured by using the Archimedes principle, which involves the immersion of a known weight of fibre into a solvent of lower density than the fibre. Toluene, with a density of about 860 kg/m^3, was chosen as the solvent. A tuft of fibre was weighed and the weight recorded as W_fa. The fibre was then immersed in toluene, which wetted the fibre, and made to rest on the weighing pan submerged in the solvent, and the weight of the immersed fibre was recorded as W_fs. The apparent density was then calculated using the equation. All the measurements were taken at room temperature. The fibre samples were not oven dried prior to measurement. (Author)
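The abstract does not reproduce the equation it mentions; the standard Archimedes buoyancy balance for this setup is sketched below with made-up weights. Treat the formula as the generic relation implied by the procedure, not a quotation from the report.

```python
def apparent_density(w_fa, w_fs, rho_solvent=860.0):
    """Apparent (bulk) fibre density by Archimedes' principle.

    w_fa: fibre weight in air; w_fs: weight of the immersed fibre.
    The buoyant weight w_fa - w_fs equals rho_solvent * V_fibre * g,
    so rho_fibre = w_fa * rho_solvent / (w_fa - w_fs).
    Any consistent weight units cancel; result is in rho_solvent's units.
    """
    return w_fa * rho_solvent / (w_fa - w_fs)

# Hypothetical readings (grams-force): 1.50 in air, 0.55 submerged in toluene.
rho = apparent_density(w_fa=1.50, w_fs=0.55)
```

With these placeholder readings the fibre density comes out around 1360 kg/m^3, above the toluene density of 860 kg/m^3 as the method requires.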
International Nuclear Information System (INIS)
Sorel, C.; Moisy, Ph.; Dinh, B.; Blanc, P.
2000-01-01
In order to calculate criticality parameters of nuclear fuel solution systems, number densities of nuclides are needed, and they are generally estimated from density equations. Most of the relations allowing the calculation of the density of aqueous solutions containing the electrolytes HNO3-UO2(NO3)2-Pu(NO3)4, usually called 'nitrate dilution laws', are strictly empirical. They are obtained from a fit of assumed polynomial expressions to experimental density data. Outside their interpolation range, such mathematical expressions show discrepancies between calculated and experimental data, appearing in the high-concentration range. In this study, a physico-chemical approach based on the isopiestic mixtures rule is suggested. The behaviour followed by these mixtures was first observed in 1936 by Zdanovskii and expressed as: 'Binary solutions (i.e. one electrolyte in water) having the same water activity are mixed without variation of this water activity value'. With regard to this behaviour, a set of basic thermodynamic expressions was derived by Ryazanov and Vdovenko in 1965 concerning the enthalpy, entropy, volume of mixtures, activity, and osmotic coefficient of the components. In particular, a very simple relation for the density is obtained from the volume mixture expression, depending on only two physico-chemical variables: i) the concentration of each component in the mixture and in the respective binary solution having the same water activity as the mixture, and ii) the density of each component in the binary solution having the same water activity as the mixture. Therefore, the calculation needs the knowledge of binary data (water activity, density and concentration) of each component at the same temperature as the mixture. Such experimental data are largely published in the literature and are available for nitric acid and uranyl nitrate. Nevertheless, nitric acid binary data show large discrepancies between the authors and need to be
Henshall, John M; Dierens, Leanne; Sellars, Melony J
2014-09-02
While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are
A photometric method for the estimation of the oil yield of oil shale
Cuttitta, Frank
1951-01-01
A method is presented for the distillation and photometric estimation of the oil yield of oil-bearing shales. The oil shale is distilled in a closed test tube and the oil extracted with toluene. The optical density of the toluene extract is used in the estimation of oil content and is converted to percentage of oil by reference to a standard curve. This curve is obtained by relating the oil yields determined by the Fischer assay method to the optical density of the toluene extract of the oil evolved by the new procedure. The new method gives results similar to those obtained by the Fischer assay method in a much shorter time. The applicability of the new method to oil-bearing shale and phosphatic shale has been tested.
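The standard-curve conversion can be sketched with a linear calibration: fit optical density of the toluene extract against Fischer-assay oil yield for a set of standards, then convert new readings. The calibration points below are invented for illustration, not data from the report.

```python
import numpy as np

# Hypothetical standards: optical density of the toluene extract paired
# with the oil yield determined by the Fischer assay method.
od_std = np.array([0.10, 0.25, 0.40, 0.55, 0.70])     # optical density
yield_std = np.array([2.0, 5.1, 8.0, 11.2, 14.1])     # % oil by Fischer assay

slope, intercept = np.polyfit(od_std, yield_std, 1)   # linear standard curve

def oil_percent(optical_density):
    """Convert an optical-density reading to % oil via the standard curve."""
    return slope * optical_density + intercept

est = oil_percent(0.50)   # a new sample's reading, converted to % oil
```

A linear curve is the simplest Beer-Lambert-consistent choice; a real calibration would be built, and its linearity checked, from Fischer-assay standards as the report describes.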
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, comprising 1,340,362 women, we verified the distribution estimate by repeating a stratified-random-sampling simulation 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, allowed us to estimate the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
Humbert, L.; Hazrati Marangalou, J.; Del Río Barquero, L.M.; van Lenthe, G.H.; van Rietbergen, B.
2016-01-01
Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical
International Nuclear Information System (INIS)
Hay, R.V.; Ryan, J.W.; Williams, K.A.; Atcher, R.W.; Brechbiel, M.W.; Gansow, O.A.; Fleming, R.M.; Stark, V.J.; Lathrop, K.A.; Harper, P.V.
1992-01-01
The authors propose a model to generate radiation absorbed dose estimates for radiolabeled low density lipoprotein (LDL), based upon eight studies of LDL biodistribution in three adult human subjects. Autologous plasma LDL was labeled with Tc-99m, I-123, or In-111 and injected intravenously. Biodistribution of each LDL derivative was monitored by quantitative analysis of scintigrams and direct counting of excreta and of serial blood samples. Assuming that transhepatic flux accounts for the majority of LDL clearance from the bloodstream, they obtained values of cumulated activity (A) and of mean dose per unit administered activity (D) for each study. In each case highest D values were calculated for liver, with mean doses of 5 rads estimated at injected activities of 27 mCi, 9 mCi, and 0.9 mCi for Tc-99m-LDL, I-123-LDL, and In-111-LDL, respectively
Joint Pitch and DOA Estimation Using the ESPRIT method
DEFF Research Database (Denmark)
Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom
2015-01-01
In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method, based on subspace techniques that exploit the invariance property in the time domain, is first used to estimate the pitch frequencies of multiple harmonic signals. From the estimated pitch frequencies, DOA estimates based on the ESPRIT method are then obtained by using the shift-invariance structure in the spatial domain. Compared to existing state-of-the-art algorithms, the proposed method, which requires no 2-D search, is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed...
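The shift-invariance idea underlying ESPRIT can be shown in a deliberately minimal setting: a single noiseless complex sinusoid rather than the multi-channel, multi-pitch case treated in the paper. The dominant left singular vector of a Hankel data matrix inherits the signal's shift structure, and the phase of the shift factor recovers the frequency.

```python
import numpy as np

f_true = 0.123  # normalized frequency (cycles/sample)
n = np.arange(64)
x = np.exp(2j * np.pi * f_true * n)  # noiseless complex sinusoid

# Build a Hankel-structured data matrix and take its dominant left
# singular vector (the one-dimensional signal subspace).
L = 32
X = np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])
U, _, _ = np.linalg.svd(X, full_matrices=False)
us = U[:, 0]

# Shift invariance: us[1:] ~ phi * us[:-1]; the angle of phi is 2*pi*f.
phi = np.vdot(us[:-1], us[1:]) / np.vdot(us[:-1], us[:-1])
f_est = np.angle(phi) / (2 * np.pi)
```

In the multi-signal case the subspace has higher dimension and `phi` becomes a rotation matrix whose eigenvalue angles give the frequencies; the rank-one case above keeps the algebra visible.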
Comparison of precision orbit derived density estimates for CHAMP and GRACE satellites
Fattig, Eric Dale
Current atmospheric density models cannot adequately represent the density variations observed by satellites in Low Earth Orbit (LEO). Using an optimal orbit determination process, precision orbit ephemerides (POE) are used as measurement data to generate corrections to density values obtained from existing atmospheric models. Densities obtained using these corrections are then compared to density data derived from the onboard accelerometers of satellites, specifically the CHAMP and GRACE satellites. This comparison takes two forms, cross correlation analysis and root mean square analysis. The densities obtained from the POE method are nearly always superior to the empirical models, both in matching the trends observed by the accelerometer (cross correlation), and the magnitudes of the accelerometer derived density (root mean square). In addition, this method consistently produces better results than those achieved by the High Accuracy Satellite Drag Model (HASDM). For satellites orbiting Earth that pass through Earth's upper atmosphere, drag is the primary source of uncertainty in orbit determination and prediction. Variations in density, which are often not modeled or are inaccurately modeled, cause difficulty in properly calculating the drag acting on a satellite. These density variations are the result of many factors; however, the Sun is the main driver of upper atmospheric density changes. The Sun influences the densities in Earth's atmosphere through solar heating of the atmosphere, as well as through geomagnetic heating resulting from the solar wind. Data are examined for fourteen-hour time spans between November 2004 and July 2009 for both the CHAMP and GRACE satellites. These data span all available levels of solar and geomagnetic activity, although they do not include the elevated and high solar-activity bins due to the nature of the solar cycle. Density solutions are generated from corrections to five different baseline atmospheric models, as well as
Reverse survival method of fertility estimation: An evaluation
Directory of Open Access Journals (Sweden)
Thomas Spoorenberg
2014-07-01
Full Text Available Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth history to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates by using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method -- a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx, provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency to the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the difference implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that is 'reverse survived' determines the consistency of the estimates. The contribution of the method for the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data
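The core of reverse survival is to inflate the enumerated children at each age by the probability that they survived from birth to the census; a toy sketch with assumed counts and survival probabilities (not figures from the paper):

```python
# Hypothetical census counts and cohort survival probabilities.
children_by_age = {0: 9500, 1: 9300, 2: 9100}     # enumerated at the census
survival_to_age = {0: 0.970, 1: 0.955, 2: 0.945}  # P(surviving birth -> age)

# Reverse survive: estimated births occurring `age` years before the census.
births = {age: count / survival_to_age[age]
          for age, count in children_by_age.items()}

for age, b in sorted(births.items()):
    print(age, round(b))
```

Relating these reconstructed annual births to the reverse-survived numbers of women of reproductive age then yields the total fertility estimates that the paper evaluates.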
Analytical method for reconstruction pin to pin of the nuclear power density distribution
Energy Technology Data Exchange (ETDEWEB)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2013-07-01
An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional diffusion equation for two neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and four fluxes at the vertices of the node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, form functions for both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)
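The final reconstruction step, the product of the homogeneous flux distribution and a local heterogeneous form function, reduces to an elementwise multiplication; the arrays below are illustrative values, not results from the paper:

```python
import numpy as np

# Hypothetical 3x3 slice of a fuel assembly (illustrative values).
homogeneous_flux = np.array([[1.00, 1.05, 1.00],
                             [1.05, 1.10, 1.05],
                             [1.00, 1.05, 1.00]])
form_function = np.array([[0.95, 1.02, 0.95],
                          [1.02, 1.08, 1.02],
                          [0.95, 1.02, 0.95]])

# Heterogeneous pin-by-pin power: elementwise product, normalized so the
# assembly-average relative power equals 1.
pin_power = homogeneous_flux * form_function
pin_power /= pin_power.mean()
```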
Analytical method for reconstruction pin to pin of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2013-01-01
An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional diffusion equation for two neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and four fluxes at the vertices of the node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, form functions for both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)
Energy Technology Data Exchange (ETDEWEB)
Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)
2016-10-15
This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
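The per-cycle damage-parameter values can be turned into probability density functions with a kernel density estimate; a minimal numpy-only sketch, with synthetic values standing in for the 300-cycle data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic damage-parameter samples for a healthy and a leaking valve
# (illustrative distributions, not measurements from the paper).
healthy = rng.normal(0.02, 0.005, 300)
leaking = rng.normal(0.08, 0.015, 300)

def kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate evaluated on `grid`."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(0.0, 0.15, 400)
pdf_healthy = kde(healthy, grid, 0.003)
pdf_leaking = kde(leaking, grid, 0.008)
# Well-separated densities are what make the damage parameter usable
# for diagnosis and prognosis.
```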
Directory of Open Access Journals (Sweden)
M. Abdul Aziz
2017-10-01
Full Text Available Population density is a key parameter for monitoring endangered carnivores in the wild. The photographic capture-recapture method has been widely used for decades to monitor tigers, Panthera tigris; however, the application of this method in the Sundarbans tiger landscape is challenging due to logistical difficulties. Therefore, we carried out molecular analyses of DNA contained in non-invasively collected genetic samples to assess the tiger population in the Bangladesh Sundarbans within a spatially explicit capture-recapture (SECR) framework. By surveying four representative sample areas totalling 1,994 km2 of the Bangladesh Sundarbans, we collected 440 suspected tiger scat and hair samples. Genetic screening of these samples provided 233 authenticated tiger samples, which we attempted to amplify at 10 highly polymorphic microsatellite loci. Of these, 105 samples were successfully amplified, representing 45 unique genotype profiles. The capture-recapture analyses of these unique genotypes within the SECR model provided a density estimate of 2.85 ± SE 0.44 tigers/100 km2 (95% CI: 1.99–3.71 tigers/100 km2) for the area sampled, and an estimate of 121 tigers (95% CI: 84–158 tigers) for the total area of the Bangladesh Sundarbans. We demonstrate that this non-invasive genetic surveillance can be an additional approach for monitoring tiger populations in a landscape where camera-trapping is challenging.
International Nuclear Information System (INIS)
Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo
2016-01-01
This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage
Directory of Open Access Journals (Sweden)
Jong Kyeom Lee
2016-10-01
Full Text Available This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
A comparison of five methods of measuring mammographic density: a case-control study.
Astley, Susan M; Harkness, Elaine F; Sergeant, Jamie C; Warwick, Jane; Stavrinos, Paula; Warren, Ruth; Wilson, Mary; Beetles, Ursula; Gadde, Soujanya; Lim, Yit; Jain, Anil; Bundred, Sara; Barr, Nicola; Reece, Valerie; Brentnall, Adam R; Cuzick, Jack; Howell, Tony; Evans, D Gareth
2018-02-05
High mammographic density is associated with both risk of cancers being missed at mammography, and increased risk of developing breast cancer. Stratification of breast cancer prevention and screening requires mammographic density measures predictive of cancer. This study compares five mammographic density measures to determine the association with subsequent diagnosis of breast cancer and the presence of breast cancer at screening. Women participating in the "Predicting Risk Of Cancer At Screening" (PROCAS) study, a study of cancer risk, completed questionnaires to provide personal information to enable computation of the Tyrer-Cuzick risk score. Mammographic density was assessed by visual analogue scale (VAS), thresholding (Cumulus) and fully-automated methods (Densitas, Quantra, Volpara) in contralateral breasts of 366 women with unilateral breast cancer (cases) detected at screening on entry to the study (Cumulus 311/366) and in 338 women with cancer detected subsequently. Three controls per case were matched using age, body mass index category, hormone replacement therapy use and menopausal status. Odds ratios (OR) between the highest and lowest quintile, based on the density distribution in controls, for each density measure were estimated by conditional logistic regression, adjusting for classic risk factors. The strongest predictor of screen-detected cancer at study entry was VAS, OR 4.37 (95% CI 2.72-7.03) in the highest vs lowest quintile of percent density after adjustment for classical risk factors. Volpara, Densitas and Cumulus gave ORs for the highest vs lowest quintile of 2.42 (95% CI 1.56-3.78), 2.17 (95% CI 1.41-3.33) and 2.12 (95% CI 1.30-3.45), respectively. Quantra was not significantly associated with breast cancer (OR 1.02, 95% CI 0.67-1.54). Similar results were found for subsequent cancers, with ORs of 4.48 (95% CI 2.79-7.18), 2.87 (95% CI 1.77-4.64) and 2.34 (95% CI 1.50-3.68) in highest vs lowest quintiles of VAS, Volpara and Densitas
Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar
2014-01-01
Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to be correctly identified. PMID:24586406
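The classification stage described above, class-conditional kernel density estimates combined with a MAP decision rule, can be sketched in one dimension; the feature values below are synthetic, not vocal measurements from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-D feature values for two classes (illustrative only).
healthy = rng.normal(-1.0, 0.5, 200)
patients = rng.normal(1.0, 0.7, 200)

def kde(samples, x, h=0.3):
    """Gaussian kernel density estimate at point(s) x."""
    d = (np.atleast_1d(x)[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def map_classify(x, prior_healthy=0.5):
    """MAP rule: pick the class with the larger prior * likelihood."""
    post_h = prior_healthy * kde(healthy, x)
    post_p = (1 - prior_healthy) * kde(patients, x)
    return np.where(post_h >= post_p, "healthy", "patient")
```

The paper's version works on KPCA-mapped bivariate features; the decision rule itself is unchanged, only the density estimate becomes two-dimensional.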
Combined backscatter and transmission method for nuclear density gauge
Directory of Open Access Journals (Sweden)
Golgoun Seyed Mohammad
2015-01-01
Full Text Available Nowadays, the use of nuclear density gauges, due to their ability to work in harsh industrial environments, is very common. In this study, to reduce the error in continuous measurement of the density ρ, backscatter and transmission are used simultaneously. For this purpose, a 137Cs source, for which Compton scattering dominates, and two detectors are simulated with the MCNP4C code for measuring the density of 3 materials. Important advantages of this combined radiometric gauge are a diminished influence of the attenuation coefficient μ and, therefore, an improved linear regression.
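In the transmission geometry, density follows from the Beer-Lambert attenuation law I = I0·exp(−μ·ρ·x); a sketch under assumed values for the mass attenuation coefficient and path length (illustrative, not from the paper):

```python
import math

def density_from_transmission(I, I0, mu_mass, thickness_cm):
    """Recover density (g/cm^3) from gamma transmission I = I0*exp(-mu*rho*x)."""
    return -math.log(I / I0) / (mu_mass * thickness_cm)

# Assumed values: mass attenuation ~0.077 cm^2/g near the 137Cs 662 keV
# line, and a 10 cm measurement path.
rho = density_from_transmission(I=4000.0, I0=10000.0,
                                mu_mass=0.077, thickness_cm=10.0)
```

The combined gauge in the paper adds a backscatter channel precisely because μ itself varies with material composition, which this single-channel inversion cannot correct for.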
A NEW METHOD FOR NON DESTRUCTIVE ESTIMATION OF Jc IN YBaCuO CERAMIC SAMPLES
Directory of Open Access Journals (Sweden)
Giancarlo Cordeiro Costa
2014-12-01
Full Text Available This work presents a new method for estimating Jc as a bulk characteristic of YBCO blocks. The experimental magnetic interaction force between a SmCo permanent magnet and a YBCO block was compared to finite element method (FEM) simulation results, allowing us to search for the best-fitting value of the critical current of the superconducting sample. As the FEM simulations were based on the Bean model, the critical current density was taken as an unknown parameter. This is a non-destructive estimation method, since there is no need to break off even a little piece of the sample for analysis.
Consumptive use of upland rice as estimated by different methods
International Nuclear Information System (INIS)
Chhabda, P.R.; Varade, S.B.
1985-01-01
The consumptive use of upland rice (Oryza sativa Linn.) grown during the wet season (kharif), as estimated by the modified Penman, radiation, pan-evaporation, and Hargreaves methods, showed variation from the consumptive use computed by the gravimetric method. The variability increased with an increase in the irrigation interval, and decreased with an increase in the level of N applied. The average variability was lowest for the pan-evaporation method, which could reliably be used for estimating the water requirement of upland rice if percolation losses are considered
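The pan-evaporation approach estimates consumptive use by scaling measured pan evaporation with pan and crop coefficients; the readings and coefficients below are assumed for illustration only:

```python
# Hypothetical daily pan readings (mm) over one irrigation interval.
pan_evaporation_mm = [6.2, 5.8, 7.1, 6.5, 6.0, 6.8, 7.3]

K_PAN = 0.7   # pan coefficient (assumed)
K_CROP = 1.1  # crop coefficient for rice at this stage (assumed)

# Consumptive use over the interval; percolation losses, which the study
# notes must be considered for rice, are handled separately.
consumptive_use_mm = sum(e * K_PAN * K_CROP for e in pan_evaporation_mm)
```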
A new empirical model to estimate hourly diffuse photosynthetic photon flux density
Foyo-Moreno, I.; Alados, I.; Alados-Arboledas, L.
2018-05-01
Knowledge of the photosynthetic photon flux density (Qp) is critical in different applications dealing with climate change, plant physiology, biomass production, and natural illumination in greenhouses. This is particularly true regarding its diffuse component (Qpd), which can enhance canopy light-use efficiency and thereby boost carbon uptake. Therefore, diffuse photosynthetic photon flux density is a key driving factor of ecosystem-productivity models. In this work, we propose a model to estimate this component, using a previous model to calculate Qp and further divide it into its components. We have used measurements in urban Granada (southern Spain) of global solar radiation (Rs) to study relationships between the ratio Qpd/Rs and different parameters accounting for solar position, water-vapour absorption, and sky conditions. The model performance has been validated with experimental measurements from sites having varied climatic conditions. The model provides acceptable results, with the mean bias error varying between −0.3% and −8.8% and the root mean square error between 9.6% and 20.4%. Direct measurements of this flux are very scarce, so modelling simulations are needed; this is particularly true for the diffuse component. We propose a new parameterization to estimate this component using only measured data of global solar irradiance, which facilitates its use for the construction of long-term data series of PAR in regions where continuous measurements of PAR are not yet performed.
Chestnut, Tara E.; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie
2014-01-01
Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure
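The sampling recommendation follows directly from the per-sample detection probability: with probability p of detecting Bd in one water sample, the chance of at least one detection in n independent samples is 1 − (1 − p)^n. A sketch (the value of p is chosen for illustration, not taken from the study):

```python
import math

def p_detect(p_per_sample, n_samples):
    """Probability of at least one detection in n independent samples."""
    return 1.0 - (1.0 - p_per_sample) ** n_samples

def samples_needed(p_per_sample, target=0.95):
    """Smallest n whose overall detection probability reaches the target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_sample))

# If a single sample detects Bd ~53% of the time, four samples push the
# overall detection probability past 95%.
p = 0.53
```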
Raleigh, M. S.; Smyth, E.; Small, E. E.
2017-12-01
The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to pre-established sampling criteria, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to
A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE
Directory of Open Access Journals (Sweden)
GEE-YONG PARK
2014-02-01
Full Text Available A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM, where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects based on very rare software failure data. The Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach of obtaining software reliability value is proposed in this paper.
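A standard SRGM of the NHPP family is the Goel-Okumoto model, whose mean value function m(t) = a(1 − e^(−bt)) gives the expected cumulative number of defects found by test time t; the parameter values below are illustrative, not estimates from the paper (which fits its parameters by Bayesian inference):

```python
import math

def mean_defects(t, a, b):
    """Goel-Okumoto mean value function: expected defects found by time t."""
    return a * (1.0 - math.exp(-b * t))

# Assumed parameters: a = total expected defects, b = per-defect
# detection rate.
a, b = 25.0, 0.05
t_end = 60.0  # total test time (arbitrary units)

found = mean_defects(t_end, a, b)
remaining = a - found  # expected residual defects after testing
```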
Population Estimation with Mark and Recapture Method Program
International Nuclear Information System (INIS)
Limohpasmanee, W.; Kaewchoung, W.
1998-01-01
Population estimation provides important information required for insect control planning, especially control using the sterile insect technique (SIT). Moreover, it can be used to evaluate the efficiency of a control method. Due to the complexity of the calculations, population estimation with mark-and-recapture methods has not been widely used. Therefore, this program was developed in QBasic with the aim of making the estimation accurate and easier. The program covers six methods: Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's, and Yamamura's. The results were compared with those of the original methods and found to be accurate and easier to apply
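The simplest estimator in this family is the two-sample Lincoln-Petersen estimate; a sketch using Chapman's bias-corrected form with made-up capture numbers (the program above implements more elaborate multi-sample variants):

```python
def chapman_estimate(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen population estimate."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical release-recapture numbers for an insect population:
# 100 marked and released, 80 caught later, 20 of them marked.
n_hat = chapman_estimate(marked=100, caught=80, recaptured=20)
```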
Ore reserve estimation: a summary of principles and methods
International Nuclear Information System (INIS)
Marques, J.P.M.
1985-01-01
The mining industry has experienced substantial improvements with the increasing utilization of computerized and electronic devices throughout the last few years. In the ore reserve estimation field, the main methods have undergone recent advances intended to improve their overall efficiency. This paper presents the three main groups of ore reserve estimation methods presently used worldwide: Conventional, Statistical, and Geostatistical, and gives a detailed description and comparative analysis of each. The Conventional Methods are the oldest, least complex, and most widely employed. The Geostatistical Methods are the most recent, most precise, and most complex. The Statistical Methods are intermediate to the others in complexity, diffusion, and chronological order. (D.J.M.) [pt
A deep learning method for classifying mammographic breast density categories.
Mohamed, Aly A; Berg, Wendie A; Peng, Hong; Luo, Yahong; Jankowitz, Rachel C; Wu, Shandong
2018-01-01
Mammographic breast density is an established risk marker for breast cancer and is visually assessed by radiologists in routine mammogram image reading, using four qualitative Breast Imaging and Reporting Data System (BI-RADS) breast density categories. It is particularly difficult for radiologists to consistently distinguish the two most common and most variably assigned BI-RADS categories, i.e., "scattered density" and "heterogeneously dense". The aim of this work was to investigate a deep learning-based breast density classifier to consistently distinguish these two categories, aiming at providing a potential computerized tool to assist radiologists in assigning a BI-RADS category in current clinical workflow. In this study, we constructed a convolutional neural network (CNN)-based model coupled with a large (i.e., 22,000 images) digital mammogram imaging dataset to evaluate the classification performance between the two aforementioned breast density categories. All images were collected from a cohort of 1,427 women who underwent standard digital mammography screening from 2005 to 2016 at our institution. The ground truth for the density categories was based on standard clinical assessment made by board-certified breast imaging radiologists. Effects of direct training from scratch solely using digital mammogram images and transfer learning of a pretrained model on a large nonmedical imaging dataset were evaluated for the specific task of breast density classification. In order to measure the classification performance, the CNN classifier was also tested on a refined version of the mammogram image dataset by removing some potentially inaccurately labeled images. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to measure the accuracy of the classifier. The AUC was 0.9421 when the CNN-model was trained from scratch on our own mammogram images, and the accuracy increased gradually along with an increased size of training samples
Methods for design flood estimation in South Africa | Smithers ...
African Journals Online (AJOL)
The estimation of design floods is necessary for the design of hydraulic structures and to quantify the risk of failure of the structures. Most of the methods used for design flood estimation in South Africa were developed in the late 1960s and early 1970s and are in need of updating with more than 40 years of additional data ...
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Performance of sampling methods to estimate log characteristics for wildlife.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton
2004-01-01
Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
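The abstract does not detail the adjustable pump model, but one standard ingredient of such model-based estimation is the affinity-law scaling of the nominal QH curve with rotational speed. A hedged sketch, assuming a quadratic nominal curve H(Q) = h0 - k·Q² (all names and parameter values are illustrative):

```python
def head_at_speed(q: float, n: float, n0: float, h0: float, k: float) -> float:
    """Pump head from an affinity-scaled quadratic curve.
    Nominal curve at speed n0: H(Q) = h0 - k*Q^2.  By the affinity laws
    (flow scales with n, head with n^2) the curve at speed n becomes
    H(Q) = h0*(n/n0)^2 - k*Q^2."""
    r = n / n0
    return h0 * r ** 2 - k * q ** 2

# Illustrative: nominal speed 1450 rpm, shut-off head 40 m, k = 0.25.
print(head_at_speed(10.0, 1450.0, 1450.0, 40.0, 0.25))  # 15.0 m
```

Given a speed and torque estimate from the frequency converter, inverting such a curve (together with a power-flow curve) yields the flow rate and head without external sensors, which is the general idea the paper pursues.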
How does spatial study design influence density estimates from spatial capture-recapture models?
Directory of Open Access Journals (Sweden)
Rahel Sollmann
Full Text Available When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares with an SCR model accounting for differences in detection and movement between sexes and across the trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated the analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array and 20% in the south of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km². Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible to design studies targeting multiple species.
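SCR models commonly describe detection with a half-normal function of the distance between a trap and an individual's activity centre; the sketch below illustrates that function generically and is not the exact model fitted in the bear study (p0 and sigma are hypothetical):

```python
import math

def scr_detection_prob(d: float, p0: float, sigma: float) -> float:
    """Half-normal SCR detection probability for a trap at distance d from
    an individual's activity centre: p(d) = p0 * exp(-d^2 / (2*sigma^2)).
    p0 is the baseline detection probability at the centre; sigma governs
    the spatial scale of movement."""
    return p0 * math.exp(-d * d / (2.0 * sigma * sigma))

# Illustrative values: baseline detection 0.2, movement scale 1000 m.
print(scr_detection_prob(0.0, 0.2, 1000.0))  # 0.2 at the activity centre
```

The paper's finding that the array extent should match or exceed the extent of movement corresponds to the array spanning several multiples of sigma.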
Comparison of breast percent density estimation from raw versus processed digital mammograms
Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina
2011-03-01
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
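The comparison statistics used in the study, a Pearson correlation and a paired t-test on the two PD% measurements, can be computed from first principles; a minimal stdlib sketch (the input arrays are illustrative, not the study data):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between paired lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def paired_t(x, y):
    """t statistic for the mean of the paired differences x_i - y_i."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    md = sum(d) / n
    sd = math.sqrt(sum((v - md) ** 2 for v in d) / (n - 1))
    return md / (sd / math.sqrt(n))
```

In the study, a high r with a small but significant paired difference is exactly the pattern reported: the raw and processed PD% values track each other closely while being offset by about 1.2%.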
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Methods of multicriterion estimations in system total quality management
Directory of Open Access Journals (Sweden)
Nikolay V. Diligenskiy
2011-05-01
Full Text Available In this article, the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a total quality management system are considered.
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance…
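The first approach, partitioning the state-space into a finite number of states, reduces filtering to HMM forward recursions; a generic 1-D grid-filter sketch (the transition and likelihood functions below are placeholders, not the theta-logistic model itself):

```python
def grid_filter(grid, prior, transition, lik, observations):
    """Discretized (HMM) filter for a 1-D state-space model.
    grid: list of state values; prior: probabilities over the grid;
    transition(x_prev, x_next): transition probability between grid points;
    lik(y, x): observation likelihood.  Returns the filtered state
    probabilities after processing all observations."""
    p = list(prior)
    m = len(grid)
    for y in observations:
        # Prediction: push the filtered distribution through the dynamics.
        pred = [sum(p[i] * transition(grid[i], grid[j]) for i in range(m))
                for j in range(m)]
        # Update: weight by the likelihood of y and renormalize.
        post = [pred[j] * lik(y, grid[j]) for j in range(m)]
        z = sum(post)
        p = [v / z for v in post]
    return p
```

For the theta-logistic model, `transition` would encode the stochastic population dynamics and `lik` the observation-error distribution; the grid resolution trades accuracy against cost.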
Methods for design flood estimation in South Africa
African Journals Online (AJOL)
2012-07-04
Jul 4, 2012 ... 1970s and are in need of updating with more than 40 years of additional data ... This paper reviews methods used for design flood estimation in South Africa and ... transposition of past experience, or a deterministic approach ...
A simple method for estimating the convection- dispersion equation ...
African Journals Online (AJOL)
Jane
2011-08-31
Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ... diffusion-type model for longitudinal mixing of fluids in flow.
Methods for the estimation of uranium ore reserves
International Nuclear Information System (INIS)
1985-01-01
The Manual is designed mainly to provide assistance in uranium ore reserve estimation methods to mining engineers and geologists with limited experience in estimating reserves, especially to those working in developing countries. This Manual deals with the general principles of evaluation of metalliferous deposits but also takes into account the radioactivity of uranium ores. The methods presented have been generally accepted in the international uranium industry
Evaluation of three paediatric weight estimation methods in Singapore.
Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong
2013-04-01
Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: Broselow-Luten (BL) tape, Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while patient's round-down age (in years) was used to derive estimated weights using APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
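The two age-based formulae and the mean percentage difference from the Bland-Altman analysis can be written directly; a minimal sketch (example values are illustrative, and the sign convention for the percentage difference is an assumption matching the abstract's "positive MPD = underestimation"):

```python
def apls_weight(age_years: int) -> float:
    """APLS formula: estimated weight (kg) = 2 * (age + 4)."""
    return 2.0 * (age_years + 4)

def luscombe_weight(age_years: int) -> float:
    """Luscombe formula: estimated weight (kg) = 3 * age + 7."""
    return 3.0 * age_years + 7

def mean_percentage_difference(true_w, est_w):
    """Mean of 100*(true - estimated)/true across patients, so a positive
    value indicates underestimation (assumed sign convention)."""
    diffs = [100.0 * (t - e) / t for t, e in zip(true_w, est_w)]
    return sum(diffs) / len(diffs)

print(apls_weight(6), luscombe_weight(6))  # 20.0 25.0 for a 6-year-old
```

The gap between the two formulae widens with age, which is consistent with one underestimating and the other overestimating in the study population.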
Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.
Gutiérrez, David; Ramírez-Moreno, Mauricio A
2016-04-01
We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated with the progression of the learning experience through the functional analysis-of-variances (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Hence, our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures the high engagement of the volunteer in the training. On this basis, a series of statistical tests are performed to determine the personalized frequencies and sensors at which changes in PSD occur, and the FANOVA estimates are then computed and analyzed. Our experimental results showed a significant decrease in the power of [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and this decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated with feature binding and memory encoding.
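The FANOVA estimators build on per-segment PSD estimates; as a hedged illustration of the underlying quantity only, here is a naive periodogram (not the paper's estimator, and an O(N²) DFT rather than an FFT-based or Welch-averaged one):

```python
import cmath
import math

def periodogram(x, fs):
    """Naive periodogram PSD estimate for a real signal x sampled at fs Hz:
    P(f_k) = |X_k|^2 / (fs * N) for k = 0..N//2, where X_k is the DFT.
    Real EEG analyses would use an FFT with windowing and averaging."""
    n = len(x)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                 for t in range(n))
        freqs.append(k * fs / n)
        psd.append(abs(xk) ** 2 / (fs * n))
    return freqs, psd
```

Band power for a rhythm (e.g. alpha, 8-13 Hz) is then the sum of PSD bins falling in that frequency range, which is the quantity whose training-related decrease the study tracks.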
A novel method for estimating soil precompression stress from uniaxial confined compression tests
DEFF Research Database (Denmark)
Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo
2017-01-01
The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shape stress… Stress-strain curves were obtained by performing uniaxial, confined compression tests on undisturbed soil cores for three soil types at three soil water potentials. The new method performed better than the Gompertz fitting method in estimating precompression stress. The values of precompression stress obtained from the new method were linearly related to the maximum stress experienced by the soil samples prior to the uniaxial, confined compression test at each soil condition, with a slope close to 1. Precompression stress determined with the new method was not related to soil type or dry bulk density…
A Channelization-Based DOA Estimation Method for Wideband Signals
Directory of Open Access Journals (Sweden)
Rui Guo
2016-07-01
Full Text Available In this paper, we propose a novel direction of arrival (DOA estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR using direct wideband radio frequency (RF digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
Recent advances in density functional methods, pt. 1-2
Chong, Delano P
1995-01-01
Of all the different areas in computational chemistry, density functional theory (DFT) enjoys the most rapid development. Even at the level of the local density approximation (LDA), which is computationally less demanding, DFT can usually provide better answers than Hartree-Fock formalism for large systems such as clusters and solids. For atoms and molecules, the results from DFT often rival those obtained by ab initio quantum chemistry, partly because larger basis sets can be used. Such encouraging results have in turn stimulated workers to further investigate the formal theory as well as the
Methods and apparatus for measuring the density of geological formations
International Nuclear Information System (INIS)
Seeman, B.
1975-01-01
A tool for measuring the density of the geological formations traversed by a borehole is described. The apparatus corrects for the effect of barite on the count rate of the pulses used for the density measurement (those with amplitudes above a given threshold): it determines the deformations in the amplitude spectrum of these pulses and adjusts the threshold so that the resulting change in the number of pulses counted compensates for the change caused by those deformations.
Density Functional Methods for Shock Physics and High Energy Density Science
Desjarlais, Michael
2017-06-01
Molecular dynamics with density functional theory has emerged over the last two decades as a powerful and accurate framework for calculating thermodynamic and transport properties with broad application to dynamic compression, high energy density science, and warm dense matter. These calculations have been extensively validated against shock and ramp wave experiments, are a principal component of high-fidelity equation of state generation, and are having wide-ranging impacts on inertial confinement fusion, planetary science, and shock physics research. In addition to thermodynamic properties, phase boundaries, and the equation of state, one also has access to electrical conductivity, thermal conductivity, and lower energy optical properties. Importantly, all these properties are obtained within the same theoretical framework and are manifestly consistent. In this talk I will give a brief history and overview of molecular dynamics with density functional theory and its use in calculating a wide variety of thermodynamic and transport properties for materials ranging from ambient to extreme conditions and with comparisons to experimental data. I will also discuss some of the limitations and difficulties, as well as active research areas. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Methods for Estimation of Market Power in Electric Power Industry
Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.
2012-01-01
The article addresses the topical issue of the newly arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, the work proposes a way of determining the relevant market place that takes into account the specific features of the power system, and gives a theoretical example of estimating the residual supply index (RSI) in the electricity market.
Stock price estimation using ensemble Kalman Filter square root method
Karya, D. F.; Katias, P.; Herlambang, T.
2018-04-01
Shares are securities serving as evidence of the ownership or equity of an individual or corporation in an enterprise, especially in public companies whose stock is traded. Investment in stock trading is a likely choice for investors, as it offers attractive profits. In choosing a safe investment in stocks, investors need a way of assessing which stocks to buy so as to help optimize their profits. An effective method of analysis that reduces the risk investors may bear is predicting or estimating the stock price, since such problems can often be solved using previous information or data related to the problem. The contribution of this paper is that the estimates of stock prices in the high, low, and close categories can be used to inform investors' decision making in investment. In this paper, stock prices were estimated using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimates obtained with the EnKF method were more accurate than those of the EnKF-SR, with an estimation error of about 0.2% for EnKF and 2.6% for EnKF-SR.
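The abstract does not give the filter equations; a minimal sketch of the scalar EnKF analysis step with perturbed observations (H = identity; this is a generic textbook form, not necessarily the paper's exact implementation):

```python
import random

def enkf_update(ensemble, y, obs_var, rng=random):
    """Scalar EnKF analysis step with perturbed observations (H = 1).
    Gain K = P/(P + R), with P the sample variance of the forecast
    ensemble and R the observation-error variance; each member is nudged
    toward its own perturbed copy of the observation y."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    k = p / (p + obs_var)
    return [x + k * (y + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]
```

The square-root variant (EnKF-SR) compared in the paper avoids the observation perturbations by transforming the ensemble anomalies deterministically instead.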
A numerical integration-based yield estimation method for integrated circuits
International Nuclear Information System (INIS)
Liang Tao; Jia Xinzhang
2011-01-01
A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
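Two of the ingredients, the Box-Cox transformation and the crude Monte Carlo yield used as the comparison baseline, can be sketched as follows (the OA-MLHS sampler itself is not reproduced; function names are illustrative):

```python
import math

def box_cox(x: float, lam: float) -> float:
    """Box-Cox power transformation for x > 0:
    (x^lam - 1)/lam if lam != 0, else ln(x).  Used in the paper to make
    simulated performance data approximately multivariate normal."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def mc_yield(samples, accept) -> float:
    """Crude Monte Carlo yield: the fraction of simulated performance
    samples that fall inside the acceptability region."""
    return sum(1 for s in samples if accept(s)) / len(samples)
```

The paper's method instead integrates the fitted (transformed) joint density over the acceptability region directly, which is why it needs fewer simulations than sampling-based estimators for the same accuracy.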
A simple method for estimating the size of nuclei on fractal surfaces
Zeng, Qiang
2017-10-01
Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach is based on the assumptions of contact-area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. Three different regimes govern the equations for estimating the nucleation site density. Nuclei large enough eliminate the effect of the fractal structure, while nuclei small enough make the nucleation site density independent of the fractal parameters. Only when the nuclei match the fractal scales is the nucleation site density associated with the fractal parameters and the size of the nuclei in a coupled pattern. The method was validated against experimental data reported in the literature. It may provide an effective way to estimate the size of nuclei on fractal surfaces, through which a number of promising applications in related fields can be envisioned.
Energy Technology Data Exchange (ETDEWEB)
Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife
2001-07-01
Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.
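The advantage of permanent plots comes from the positive covariance between occasions: under a simple two-occasion model, the variance of the estimated change shrinks with the between-occasion correlation. A textbook sketch of that formula (an illustration of the principle, not the simulation code used in the study):

```python
def var_change(sigma1: float, sigma2: float, n: int, rho: float = 0.0) -> float:
    """Variance of the estimated change in means between two sampling
    occasions with n plots each:
    Var = (sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2) / n.
    Permanent plots re-measure the same units, so rho > 0 shrinks the
    variance; independent temporary plots imply rho = 0."""
    return (sigma1 ** 2 + sigma2 ** 2 - 2.0 * rho * sigma1 * sigma2) / n

# Same plot-level spread on both occasions; rho = 0.5 halves the variance.
print(var_change(2.0, 2.0, 4, rho=0.0), var_change(2.0, 2.0, 4, rho=0.5))
```

When populations shift spatially, the effective rho for permanent plots drops, which is consistent with the study's finding that partial replacement catches up under large shifts.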
A Computationally Efficient Method for Polyphonic Pitch Estimation
Directory of Open Access Journals (Sweden)
Ruohua Zhou
2009-01-01
Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
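The harmonic grouping step can be illustrated by a simple harmonic-sum salience over spectrum bins; a hedged sketch (not the RTFI-based computation described in the paper; the spectrum here is a toy array):

```python
def pitch_salience(spectrum, f0_bin: int, n_harmonics: int) -> float:
    """Harmonic-grouping salience of a candidate fundamental: the summed
    spectral energy at integer multiples of its bin index.  Peaks of this
    salience over f0 candidates give preliminary pitch estimates."""
    return sum(spectrum[f0_bin * h]
               for h in range(1, n_harmonics + 1)
               if f0_bin * h < len(spectrum))
```

The paper's second stage then prunes candidates whose harmonic patterns are irregular compared with those of real instrument notes.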
Nishimoto, Yoshio
2015-09-07
We develop a formalism for the calculation of excitation energies and excited state gradients for the self-consistent-charge density-functional tight-binding method with the third-order contributions of a Taylor series of the density functional theory energy with respect to the fluctuation of electron density (time-dependent density-functional tight-binding (TD-DFTB3)). The formulation of the excitation energy is based on the existing time-dependent density functional theory and the older TD-DFTB2 formulae. The analytical gradient is computed by solving Z-vector equations, and it requires one to calculate the third-order derivative of the total energy with respect to density matrix elements due to the inclusion of the third-order contributions. The comparison of adiabatic excitation energies for selected small and medium-size molecules using the TD-DFTB2 and TD-DFTB3 methods shows that the inclusion of the third-order contributions does not affect excitation energies significantly. A different set of parameters, which are optimized for DFTB3, slightly improves the prediction of adiabatic excitation energies statistically. The application of TD-DFTB for the prediction of absorption and fluorescence energies of cresyl violet demonstrates that TD-DFTB3 reproduced the experimental fluorescence energy quite well.
Comparing Methods for Estimating Direct Costs of Adverse Drug Events.
Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas
2017-12-01
To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Comparison of volatility function technique for risk-neutral densities estimation
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
The volatility function technique, using an interpolation approach, plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches, namely a smoothing spline and a fourth order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that a fourth order polynomial is more appropriate than a smoothing spline for estimating the RND, as it gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
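The fit-then-differentiate route to an RND can be sketched with the Breeden-Litzenberger relation q(K) = e^{rT} ∂²C/∂K². The sketch below is a minimal reading of the approach, not the study's procedure: the strikes, implied volatilities, spot, and rate are made-up illustrative numbers, not the DJIA data.

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# hypothetical one-month smile: implied vols quoted at a few strikes
S0, T, r = 100.0, 1.0 / 12.0, 0.01
strikes = np.array([80.0, 90.0, 95.0, 100.0, 105.0, 110.0, 120.0])
ivols = np.array([0.32, 0.27, 0.25, 0.24, 0.245, 0.26, 0.30])

# fourth-order polynomial fit of the smile (the interpolation favored in the study)
coeffs = np.polyfit(strikes, ivols, 4)
K = np.linspace(82.0, 118.0, 361)
smile = np.polyval(coeffs, K)

# Breeden-Litzenberger: q(K) = e^{rT} d2C/dK2, here via finite differences
C = np.array([bs_call(S0, k, T, r, v) for k, v in zip(K, smile)])
dK = K[1] - K[0]
rnd = np.exp(r * T) * np.diff(C, 2) / dK ** 2  # density on K[1:-1]
```

The density integrates to slightly less than one because the strike grid truncates the tails; widening the grid recovers the missing mass.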
Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.
Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben
2018-02-22
This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many of the previous vision-based methods that have used SPAD as a reference device. Moreover, the accuracy reached 91% for crops such as Azadirachta indica, where the reference chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yields both the reflectance and transmittance information, while the required hardware is cheap.
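The regression step described above can be sketched as ordinary least squares on (reflectance, transmittance) pairs. The calibration data and coefficients below are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np

# hypothetical calibration set: (reflectance, transmittance) -> chlorophyll value
R = np.array([0.12, 0.15, 0.10, 0.20, 0.08, 0.18])
Tr = np.array([0.30, 0.35, 0.25, 0.45, 0.20, 0.40])
chl = 50.0 - 80.0 * R - 30.0 * Tr  # synthetic ground-truth relation

# linear regression: chl ~ b0 + b1*R + b2*Tr, solved by least squares
X = np.column_stack([np.ones_like(R), R, Tr])
beta, *_ = np.linalg.lstsq(X, chl, rcond=None)

def estimate_chlorophyll(reflectance, transmittance):
    # apply the fitted linear model to a new leaf measurement
    return beta[0] + beta[1] * reflectance + beta[2] * transmittance
```

In practice the calibration targets would come from a spectrophotometer or SPAD readings, as in the paper.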
Training Methods for Image Noise Level Estimation on Wavelet Components
Directory of Open Access Journals (Sweden)
A. De Stefano
2004-12-01
The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed that the training-based methods prevailed for the images and the range of noise levels considered.
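The MAD baseline against which the three methods are compared is commonly implemented as sigma = median(|HH|)/0.6745 on the diagonal wavelet detail coefficients. A minimal sketch on a synthetic image follows; note the median-based rule and the 0.6745 Gaussian constant are the common variant of the estimator, which may differ in detail from the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# smooth synthetic image plus Gaussian noise of known standard deviation
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) * 100.0
sigma_true = 5.0
img = clean + rng.normal(0.0, sigma_true, clean.shape)

# one-level Haar diagonal (HH) detail coefficients, computed directly
a = img[0::2, 0::2]; b = img[0::2, 1::2]
c = img[1::2, 0::2]; d = img[1::2, 1::2]
hh = (a - b - c + d) / 2.0  # orthonormal Haar: HH noise std equals image noise std

# MAD estimate of the noise standard deviation
sigma_hat = np.median(np.abs(hh)) / 0.6745
```

The HH subband is used because a smooth image contributes almost nothing there, so the coefficients are dominated by noise.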
A Group Contribution Method for Estimating Cetane and Octane Numbers
Energy Technology Data Exchange (ETDEWEB)
Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group
2016-07-28
Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
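A classical linear group-contribution estimate, which the paper's neural-network method generalizes, can be sketched as a weighted count of functional groups. The contribution values below are invented for illustration only, not fitted coefficients:

```python
# hypothetical group contributions to cetane number (illustrative values,
# not the paper's fitted parameters)
contributions = {"CH3": 5.0, "CH2": 8.0, "CH": -3.0, "OH": -10.0}
bias = 10.0

def cetane_estimate(groups):
    # groups: {group_name: count}; classical linear group-contribution form
    return bias + sum(contributions[g] * n for g, n in groups.items())

# n-heptane decomposed into 2 CH3 + 5 CH2 groups
cn_heptane = cetane_estimate({"CH3": 2, "CH2": 5})
```

An ANN-based method replaces the fixed linear sum with a learned nonlinear function of the same group counts, which is what lets it capture structural effects a linear model misses.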
Fiber density estimation from single q-shell diffusion imaging by tensor divergence.
Reisert, Marco; Mader, Irina; Umarova, Roza; Maier, Simon; Tebartz van Elst, Ludger; Kiselev, Valerij G
2013-08-15
Diffusion-weighted magnetic resonance imaging provides information about the nerve fiber bundle geometry of the human brain. While the inference of the underlying fiber bundle orientation only requires single q-shell measurements, the absolute determination of their volume fractions is much more challenging with respect to measurement techniques and analysis. Unfortunately, the usually employed multi-compartment models cannot be applied to single q-shell measurements, because the compartments' diffusivities cannot be resolved. This work proposes an equation for fiber orientation densities that can infer the absolute fraction up to a global factor. This equation, which is inspired by the classical mass preservation law in fluid dynamics, expresses the fiber conservation associated with the assumption that fibers do not terminate in white matter. Simulations on synthetic phantoms show that the approach is able to derive the densities correctly for various configurations. Experiments with a pseudo ground truth phantom show that even for complex, brain-like geometries the method is able to infer the densities correctly. In-vivo results with 81 healthy volunteers are plausible and consistent. A group analysis with respect to age and gender shows significant differences, such that the proposed maps can be used as a quantitative measure for group and longitudinal analysis. Copyright © 2013 Elsevier Inc. All rights reserved.
Estimation of arsenic in nail using silver diethyldithiocarbamate method
Directory of Open Access Journals (Sweden)
Habiba Akhter Bhuiyan
2015-08-01
The spectrophotometric method of arsenic estimation in nails has four steps: (a) washing of the nails, (b) digestion of the nails, (c) arsenic generation, and finally (d) reading the absorbance using a spectrophotometer. Although the method is one of the cheapest, widely used and effective, it is time consuming, laborious, and needs caution while using four acids.
Comparison of estimation methods for fitting Weibull distribution to ...
African Journals Online (AJOL)
Comparison of estimation methods for fitting Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
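Maximum likelihood fitting of a two-parameter Weibull, the method found most accurate here, reduces to one-dimensional root finding for the shape parameter followed by a closed form for the scale. A self-contained sketch on synthetic diameter data (the data, bracketing interval, and iteration count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic "tree diameters": Weibull with shape 2.5, scale 10
dbh = 10.0 * rng.weibull(2.5, size=2000)
logx = np.log(dbh)

def mle_score(k):
    # the MLE shape parameter is the zero of this monotone score function
    xk = dbh ** k
    return (xk * logx).sum() / xk.sum() - 1.0 / k - logx.mean()

# bisection for the shape parameter
lo, hi = 0.1, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mle_score(mid) > 0.0:
        hi = mid
    else:
        lo = mid
shape = 0.5 * (lo + hi)

# closed-form scale given the shape
scale = np.mean(dbh ** shape) ** (1.0 / shape)
```

With 2000 observations the recovered parameters land close to the generating values, which is the behavior the comparison study exploits when ranking estimators.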
A simple and rapid method to estimate radiocesium in man
International Nuclear Information System (INIS)
Kindl, P.; Steger, F.
1990-09-01
A simple and rapid method for monitoring internal contamination of radiocesium in man was developed. This method is based on measurements of the γ-rays emitted from the muscular parts between the thighs by a simple NaI(Tl) system. The experimental procedure, the calibration, the estimation of the body activity and the results are explained and discussed. (Authors)
On the Methods for Estimating the Corneoscleral Limbus.
Jesus, Danilo A; Iskander, D Robert
2017-08-01
The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric ANOVA (analysis of variance) test, p < 0.05). Precise topographical limbus demarcation is possible either from the frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
An improved method for estimating the frequency correlation function
Chelli, Ali; Pätzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design and optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
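The frequency averaging baseline can be sketched as an empirical correlation of the transfer function with a frequency-shifted copy of itself. The three-path channel below is synthetic, and the kernel refinement proposed in the paper is not included, so the estimate still carries cross-term error:

```python
import numpy as np

# synthetic multipath transfer function over a 10 MHz band
f = np.linspace(0.0, 10e6, 1024)
delays = np.array([0.1e-6, 0.4e-6, 0.9e-6])   # path delays in seconds
gains = np.array([1.0, 0.6, 0.3])             # path gains
H = (gains[None, :] * np.exp(-2j * np.pi * f[:, None] * delays)).sum(axis=1)

def fcf_frequency_averaging(H, max_lag):
    # r(k) = <H(f) H*(f + k*df)> averaged over the measured band
    n = len(H)
    return np.array([np.mean(H[: n - k] * np.conj(H[k:])) for k in range(max_lag)])

r = fcf_frequency_averaging(H, 200)

# 50%-coherence bandwidth: first lag where |r| falls below half of |r(0)|
coherence_bw_idx = int(np.argmax(np.abs(r) < 0.5 * np.abs(r[0])))
```

For this channel r(0) approximates the total path power (sum of squared gains, here 1.45), and the lag index converts to a coherence bandwidth via the frequency grid spacing.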
The estimation of the measurement results with using statistical methods
International Nuclear Information System (INIS)
Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)
2015-01-01
A number of international standards and guides describe various statistical methods that can be applied for the management, control and improvement of processes, with the purpose of analyzing technical measurement results. An analysis of the international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For the analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.
Evaluation of non cyanide methods for hemoglobin estimation
Directory of Open Access Journals (Sweden)
Vinaya B Shah
2011-01-01
Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the waste disposal of large volumes of reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a colorimeter which is light emitting diode (LED) based. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. Statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ± 0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation of these two methods with HiCN was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of data determination, and cost less than the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe and quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated in hemoglobinometers.
Davis, Amy J; Leland, Bruce; Bodenchuk, Michael; VerCauteren, Kurt C; Pepin, Kim M
2017-06-01
Population density is a key driver of disease dynamics in wildlife populations. Accurate disease risk assessment and determination of management impacts on wildlife populations require an ability to estimate population density alongside management actions. A common management technique for controlling wildlife populations to monitor and mitigate disease transmission risk is trapping (e.g., box traps, corral traps, drop nets). Although abundance can be estimated from trapping actions using a variety of analytical approaches, inference is limited by the spatial extent to which a trap attracts animals on the landscape. If the "area of influence" were known, abundance estimates could be converted to densities. In addition to being an important predictor of contact rate and thus disease spread, density is more informative because it is comparable across sites of different sizes. The goal of our study is to demonstrate the importance of determining the area sampled by traps (area of influence) so that density can be estimated from management-based trapping designs which do not employ a trapping grid. To provide one example of how area of influence could be calculated alongside management, we conducted a small pilot study on wild pigs (Sus scrofa) using two removal methods, 1) trapping followed by 2) aerial gunning, at three sites in northeast Texas in 2015. We estimated abundance from trapping data with a removal model. We calculated empirical densities as aerial counts divided by the area searched by air (based on aerial flight tracks). We inferred the area of influence of traps by assuming consistent densities across the larger spatial scale and then solving for the area impacted by the traps. Based on our pilot study we estimated the area of influence for corral traps in late summer in Texas to be ~8.6 km². Future work showing the effects of behavioral and environmental factors on area of influence will help managers obtain estimates of density from management data, and
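The density-from-removal logic can be sketched with a two-pass Zippin removal estimator standing in for the study's removal model. All numbers below are hypothetical, not the Texas pilot data:

```python
# Two-pass removal estimator (Zippin): with first- and second-pass catches
# c1 > c2 and a constant capture probability, N_hat = c1^2 / (c1 - c2).
c1, c2 = 40, 16
N_hat = c1 ** 2 / (c1 - c2)        # estimated abundance at the trap site

# empirical density from an aerial survey: count / area searched (km^2)
aerial_count, area_searched = 120, 55.8
density = aerial_count / area_searched

# assuming the aerial density also holds at the trap site, solve for the
# "area of influence" that the trapped abundance must have been drawn from
area_of_influence = N_hat / density
```

The key assumption, as in the paper, is that density is consistent across the two spatial scales; the area of influence is then just abundance divided by density.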
Adaptive Methods for Permeability Estimation and Smart Well Management
Energy Technology Data Exchange (ETDEWEB)
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
International Nuclear Information System (INIS)
Cortina, E.; D'Atellis, C.E.
1990-01-01
This paper addresses the problem of simultaneously estimating neutron density and reactivity while operating a nuclear reactor. It is solved by using a bank of Kalman filters as an estimator and applying a probabilistic test to determine which filter of the bank has the best performance
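The bank-of-Kalman-filters idea can be sketched in one dimension: run one filter per candidate model and score each by the Gaussian log-likelihood of its innovations, which is one common form of the probabilistic test. The scalar dynamics below are a toy stand-in, not the reactor model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

true_a = 0.95                      # true state-transition coefficient
candidates = [0.80, 0.95, 1.05]    # hypothesized models in the filter bank
q, r = 0.01, 0.04                  # process / measurement noise variances

# simulate a scalar linear-Gaussian system
x, ys = 1.0, []
for _ in range(200):
    x = true_a * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

def log_likelihood(a):
    # run a Kalman filter assuming transition coefficient a and accumulate
    # the log-likelihood of the innovations
    m, p, ll = 1.0, 1.0, 0.0
    for y in ys:
        m, p = a * m, a * a * p + q          # predict
        s = p + r                            # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (y - m) ** 2 / s)
        k = p / s                            # Kalman gain
        m, p = m + k * (y - m), (1 - k) * p  # update
    return ll

best = max(candidates, key=log_likelihood)   # the best-scoring model in the bank
```

The filter whose assumed dynamics match the data produces the whitest, best-calibrated innovations, so its likelihood dominates.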
Simple method for quick estimation of aquifer hydrogeological parameters
Ma, C.; Li, Y. Y.
2017-08-01
Development of simple and accurate methods to determine aquifer hydrogeological parameters is of importance for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
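Estimating transmissivity and storativity from Theis drawdowns, s = Q/(4πT)·W(u) with u = r²S/(4Tt) and W(u) the exponential integral E1(u), can be sketched as nonlinear least squares. This uses a generic curve fit rather than the paper's regression-equation shortcut, and all numbers are synthetic:

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

Q = 0.02   # pumping rate, m^3/s (hypothetical)
r = 50.0   # distance from pumping well to observation well, m

def theis_drawdown(t, logT, logS):
    # log-parameters keep T and S positive during the fit
    T, S = 10.0 ** logT, 10.0 ** logS
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)   # W(u) is the exponential integral E1

# synthetic pumping-test drawdowns from known T = 1e-3 m^2/s, S = 2e-4
t = np.logspace(2, 5, 30)                    # 100 s to ~28 h
s_obs = theis_drawdown(t, -3.0, np.log10(2e-4))

(logT_hat, logS_hat), _ = curve_fit(theis_drawdown, t, s_obs, p0=(-2.0, -3.0))
T_hat, S_hat = 10.0 ** logT_hat, 10.0 ** logS_hat
```

On noiseless synthetic data the fit recovers the generating parameters; with field data the same setup yields least-squares estimates of T and S directly from the observed drawdowns.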
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces results that are profound and far-reaching. Cybernetics is one often-quoted example. The mix of information theory, statistics and computing technology proves to be very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic safety analysis (PSA), machine learning, pattern recognition, image processing, and neural networks.
Assessment of Methods for Estimating Risk to Birds from ...
The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16
Plant-available soil water capacity: estimation methods and implications
Directory of Open Access Journals (Sweden)
Bruno Montoani Silva
2014-04-01
Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in planning land use. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated by the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than that obtained by the other methods, and that even at the inflection point the estimates differed according to the model used. The water content found with the WP4-T psychrometer was significantly lower than the estimate of the permanent wilting point. We conclude that the estimation of the available water capacity is markedly influenced by the estimation method, which has to be taken into consideration because of the practical importance of this parameter.
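The quantities compared above can be sketched with the van Genuchten retention model; the parameter values below are hypothetical, not those fitted for the studied Oxisol:

```python
def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention: h in kPa, alpha in 1/kPa, m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Hypothetical parameters for a clayey soil
theta_r, theta_s, alpha, n = 0.20, 0.60, 0.5, 1.6

# Candidate field-capacity estimates at 6, 10 and 33 kPa
fc_6, fc_10, fc_33 = (theta_vg(h, theta_r, theta_s, alpha, n) for h in (6.0, 10.0, 33.0))
pwp = theta_vg(1500.0, theta_r, theta_s, alpha, n)  # permanent wilting point, 1500 kPa

# The available water capacity changes with the field-capacity criterion chosen
awc = {h: fc - pwp for h, fc in zip((6, 10, 33), (fc_6, fc_10, fc_33))}
```

Because retention decreases monotonically with suction, the 6 kPa criterion always yields the largest available water capacity, which is exactly why the choice of estimation method matters in practice.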
Functional methods for arbitrary densities in curved spacetime
International Nuclear Information System (INIS)
Basler, M.
1993-01-01
This paper gives an introduction to the technique of functional differentiation and integration in curved spacetime, applied to examples from quantum field theory. Special attention is drawn to the choice of functional integral measure. Following a suggestion by Toms, fields are chosen as arbitrary scalar, spinorial or vectorial densities. The technique developed by Toms for a purely quadratic Lagrangian is extended to the calculation of the generating functional with external sources. Included are two examples of interacting theories, a self-interacting scalar field and a Yang-Mills theory. For these theories the complete set of Feynman graphs depending on the weight of the variables is derived. (orig.)
A second-order unconstrained optimization method for canonical-ensemble density-functional methods
Nygaard, Cecilie R.; Olsen, Jeppe
2013-03-01
A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
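The occupation-angle idea can be sketched as follows. The specific form n(θ) = 2 sin²θ is an assumption for illustration (the paper defines its own set of angles), but any such trigonometric parameterization keeps occupations in [0, 2] without explicit inequality constraints, which is what makes unconstrained Newton-Raphson steps safe:

```python
import numpy as np

def occupations(theta):
    """Map unconstrained angles to occupation numbers n in [0, 2] (assumed form)."""
    return 2.0 * np.sin(theta) ** 2

# Unconstrained optimization steps can move the angles anywhere...
theta = np.array([0.1, 0.7, 1.3, 2.9, -4.2])
n = occupations(theta)

# ...yet the occupations stay physical, and the map is smooth:
dn_dtheta = 2.0 * np.sin(2.0 * theta)   # d/dtheta of 2 sin^2(theta)
```

The smooth derivative is what lets both orbital rotations and occupation changes enter one second-order Newton-Raphson step.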
An Estimation Method for the Number of Carrier Frequencies
Directory of Open Access Journals (Sweden)
Xiong Peng
2015-01-01
Full Text Available This paper proposes a method that uses AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, radar pulse signal forms are complex and changeable, and the single pulse with multiple carrier frequencies is the most typical, such as the frequency shift keying (FSK) signal, the FSK with linear frequency modulation (FSK-LFM) hybrid modulation signal and the FSK with binary phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of single pulse with multiple carrier frequencies, the paper transforms the complex signal into an AR model and then computes the power spectrum based on the Burg algorithm. Experimental results show that the estimation method can determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
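A minimal sketch of the idea follows: a hand-rolled Burg recursion, an AR power spectrum, and a peak count. The model order, test signal and peak-selection rule are assumptions for illustration; the paper's detection logic is not reproduced in the abstract.

```python
import numpy as np
from scipy.signal import find_peaks

def burg(x, order):
    """Burg-method AR coefficients a (a[0] = 1) and residual power for signal x."""
    x = np.asarray(x, float)
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    a = np.array([1.0])
    e = np.mean(x ** 2)
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coeff.
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
        e *= 1.0 - k * k
    return a, e

# Two-tone pulse (normalized frequencies 0.1 and 0.3) in light noise
rng = np.random.default_rng(0)
t = np.arange(512)
x = np.sin(2 * np.pi * 0.1 * t) + np.sin(2 * np.pi * 0.3 * t) \
    + 0.05 * rng.standard_normal(512)

a, e = burg(x, order=8)
freqs = np.linspace(0.0, 0.5, 1024)
A = np.exp(-2j * np.pi * np.outer(freqs, np.arange(a.size))) @ a
psd = e / np.abs(A) ** 2                  # AR power spectrum

peaks, _ = find_peaks(psd)                # spectral peaks = carrier candidates
top2 = np.sort(freqs[peaks[np.argsort(psd[peaks])[-2:]]])
```

The two strongest spectral peaks land on the two carrier frequencies; counting peaks above a threshold gives the carrier-number estimate the paper is after.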
Validity of anthropometric procedures to estimate body density and body fat percent in military men
Directory of Open Access Journals (Sweden)
Ciro Romélio Rodriguez-Añez
1999-12-01
Full Text Available The objective of this study was to verify the validity of the Katch and McArdle equation (1973), which uses the circumferences of the arm, forearm and abdomen to estimate body density, and of the procedure of Cohen (1986), which uses the circumferences of the neck and abdomen to estimate body fat percent (%F), in military men. Data were collected from 50 military men, with a mean age of 20.26 ± 2.04 years, serving in Santa Maria, RS. The circumferences were measured according to the Katch and McArdle (1973) and Cohen (1986) procedures. The measured body density (Dm) obtained by underwater weighing was used as the criterion; its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using the Goldman and Becklake equation (1959). The %F was obtained with the Siri equation (1961); its mean value was 12.70 ± 4.71%. The validation criterion suggested by Lohman (1992) was followed. The analysis of the results indicated that the procedure developed by Cohen (1986) has concurrent validity to estimate %F in military men, or in other samples with similar characteristics, with a standard error of estimate of 3.45%.
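The Siri (1961) conversion used above is a standard closed form. As a quick worked check at the sample's mean density (the sample mean %F of 12.70 differs slightly because it averages individual values rather than densities):

```python
def siri_percent_fat(density_g_ml):
    """Siri (1961): %F = 495 / Db - 450, with body density Db in g/ml."""
    return 495.0 / density_g_ml - 450.0

print(siri_percent_fat(1.0706))   # about 12.36 %F at the mean measured density
```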
Improved computation method in residual life estimation of structural components
Directory of Open Access Journals (Sweden)
Maksimović Stevan M.
2013-01-01
Full Text Available This work considers numerical computation methods and procedures for predicting fatigue crack growth in cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on strain energy density (SED) theory, a fatigue crack growth model is developed to predict the fatigue crack growth lifetime for single-mode or mixed-mode cracks. The model is based on an equation expressed in terms of low-cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable-amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the crack front; to obtain an efficient computation model, the plasticity-induced crack-closure phenomenon is considered during fatigue crack growth. The strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components, and it is convenient for engineering applications since it does not require any additional determination of fatigue crack propagation parameters (these would need to be determined separately); low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years; the influence of this phenomenon can be considered by means of experimental and numerical methods, and both are considered here. Finite element analysis (FEA) has been shown to be a powerful and useful tool [1, 6] to analyze crack growth and crack-closure effects. Computation results are compared with available experimental results. [Project of the Ministry of Science of the Republic of Serbia, no. OI 174001]
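As a generic illustration of numerical crack-growth life integration, the sketch below uses the classical Paris law as a stand-in for the paper's strain-energy-density kinetics; all material and geometry values are hypothetical:

```python
import numpy as np

# Paris law da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a)
C, m = 1e-11, 3.0          # material constants (hypothetical; SI units)
Y, dS = 1.12, 100.0        # geometry factor (-), stress range (MPa)
a0, ac = 1e-3, 2e-2        # initial and critical crack lengths (m)

a, N, da = a0, 0.0, 1e-5   # march the crack forward in fixed length steps
while a < ac:
    dK = Y * dS * np.sqrt(np.pi * a)   # stress-intensity range at current length
    N += da / (C * dK ** m)            # cycles consumed growing by da
    a += da
# N is the estimated crack-growth life in cycles
```

An SED-based model replaces the `C * dK ** m` kinetics with a rate written in low-cycle fatigue parameters, but the cycle-counting integration loop is the same; variable-amplitude loading and closure corrections modify `dS` and the effective `dK` inside the loop.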
A New Method for Estimation of Velocity Vectors
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Munk, Peter
1998-01-01
The paper describes a new method for determining the velocity vector of a remotely sensed object using either sound or electromagnetic radiation. The movement of the object is determined from a field with spatial oscillations in both the axial direction of the transducer and in one or two directions transverse to the axial direction. By using a number of pulse emissions, the inter-pulse movement can be estimated and the velocity found from the estimated movement and the time between pulses. The method is based on the principle of using transverse spatial modulation for making the received…
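The inter-pulse movement estimation step can be illustrated with a simple cross-correlation between two successive received signals. This is a generic axial sketch with hypothetical parameters; the paper's transverse-modulation estimator extends the same principle to the transverse directions:

```python
import numpy as np
from scipy.signal import gausspulse

fs, fprf = 40e6, 5e3                   # sampling rate (Hz), pulse repetition freq. (Hz)
c = 1540.0                             # assumed speed of sound in tissue (m/s)

t = np.arange(-2e-6, 2e-6, 1 / fs)
pulse = gausspulse(t, fc=5e6)          # 5 MHz emitted pulse

shift = 7                              # true inter-pulse delay in samples
echo1 = np.concatenate([pulse, np.zeros(64)])
echo2 = np.roll(echo1, shift)          # second emission: echo delayed by the motion

# Lag of the cross-correlation peak gives the inter-pulse time shift
lag = np.argmax(np.correlate(echo2, echo1, mode="full")) - (len(echo1) - 1)
disp = lag * c / (2.0 * fs)            # axial displacement between emissions (m)
v = disp * fprf                        # axial velocity estimate (m/s)
```

A single correlation like this only recovers the axial component; the spatial oscillations in the transverse direction are what make the full velocity vector observable.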
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
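One simple index of contingency strength is the difference between the conditional probabilities of reinforcement given a response versus no response, computed over paired observation intervals. The study compares several more refined event- and interval-based variants; this sketch only illustrates the basic quantity:

```python
def contingency_strength(responses, reinforcers):
    """P(reinforcer | response) - P(reinforcer | no response) over paired intervals."""
    with_r = [sr for r, sr in zip(responses, reinforcers) if r]
    without_r = [sr for r, sr in zip(responses, reinforcers) if not r]
    p_given_r = sum(with_r) / len(with_r) if with_r else 0.0
    p_given_none = sum(without_r) / len(without_r) if without_r else 0.0
    return p_given_r - p_given_none

# Perfectly response-dependent schedule: reinforcer occurs iff a response occurred
resp = [1, 0, 1, 1, 0, 0, 1, 0]
sr   = [1, 0, 1, 1, 0, 0, 1, 0]
print(contingency_strength(resp, sr))   # 1.0 for a perfect contingency
```

Under a response-independent schedule the reinforcer is equally likely either way, so the index drops toward zero, which is the sensitivity the study probes parametrically.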