WorldWideScience

Sample records for copula density estimation

  1. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  2. On Structure, Family and Parameter Estimation of Hierarchical Archimedean Copulas

    Czech Academy of Sciences Publication Activity Database

    Górecki, J.; Hofert, M.; Holeňa, Martin

    2017-01-01

    Roč. 87, č. 17 (2017), s. 3261-3324 ISSN 0094-9655 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords : copula estimation * goodness-of-fit * Hierarchical Archimedean copula * structure determination Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016

  3. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    International Nuclear Information System (INIS)

    Rupšys, P.

    2015-01-01

    A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
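
    The two-step idea above (marginals first, copula parameter second) can be sketched as follows. This is an illustrative sketch only: it assumes a bivariate Gaussian copula with simple lognormal stand-in marginals in place of the SDE-derived conditional densities of the paper, and the diameter/height arrays are synthetic placeholders.

    ```python
    # Two-step ("margins first, copula second") fit of a bivariate Gaussian copula.
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize_scalar

    def gaussian_copula_logpdf(u, v, rho):
        """Log-density of the bivariate Gaussian copula at (u, v)."""
        x, y = stats.norm.ppf(u), stats.norm.ppf(v)
        return (-0.5 * np.log(1.0 - rho**2)
                + (2.0 * rho * x * y - rho**2 * (x**2 + y**2)) / (2.0 * (1.0 - rho**2)))

    def two_step_fit(d, h):
        # Step 1: fit each marginal separately (lognormal stand-ins for the SDE densities).
        pd_ = stats.lognorm.fit(d, floc=0)
        ph_ = stats.lognorm.fit(h, floc=0)
        u, v = stats.lognorm.cdf(d, *pd_), stats.lognorm.cdf(h, *ph_)
        # Step 2: maximize the copula log-likelihood in the dependence parameter.
        nll = lambda r: -np.sum(gaussian_copula_logpdf(u, v, r))
        rho = minimize_scalar(nll, bounds=(-0.99, 0.99), method="bounded").x
        return pd_, ph_, rho

    rng = np.random.default_rng(1)
    d = rng.lognormal(3.0, 0.3, 500)                         # synthetic diameters
    h = np.exp(0.6 * np.log(d) + rng.normal(2.0, 0.2, 500))  # synthetic heights
    print(two_step_fit(d, h)[2])                             # estimated copula correlation
    ```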

  4. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    Energy Technology Data Exchange (ETDEWEB)

    Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)]

    2015-10-28

    A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  5. Application of selection and estimation regular vine copula on go public company share

    Science.gov (United States)

    Hasna Afifah, R.; Noviyanti, Lienda; Bachrudin, Achmad

    2018-03-01

    The accuracy of financial risk management involving a large number of assets is needed, but information about dependencies among assets cannot be adequately analyzed. To analyze dependencies among a number of assets, several tools have been added to the standard multivariate copula. However, these tools have not been adequately used in applications with higher dimensions. The bivariate parametric copula families can be used to solve this. A multivariate copula can be built from bivariate parametric copulas connected by a graphical representation to become a Pair Copula Construction (PCC), or vine copula. C-vine and D-vine copulas have been applied in several studies, but their use is more limited than that of the R-vine copula. Therefore, this study uses the R-vine copula to provide flexibility for modeling complex dependencies in high dimensions. Since the copula is a static model while stock values change over time, the copula is combined with an ARMA-GARCH model to capture the movement of shares (volatility). The objective of this paper is to select and estimate an R-vine copula for analyzing PT Jasa Marga (Persero) Tbk (JSMR), PT Waskita Karya (Persero) Tbk (WSKT), and PT Bank Mandiri (Persero) Tbk (BMRI) from August 31, 2014 to August 31, 2017. The selected copulas for the two edges in the first tree are survival Gumbel copulas, and the copula for the edge in the second tree is Gaussian.
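
    As an illustration of the pair-copula construction used above, the sketch below fits a three-dimensional D-vine in which every pair copula is Gaussian, so that the h-functions needed for the second tree stay in closed form. The paper instead selects pair families (survival Gumbel, Gaussian) by model selection, and the pseudo-observations u1, u2, u3 here are synthetic stand-ins for ARMA-GARCH-filtered returns.

    ```python
    # Three-dimensional D-vine with Gaussian pair copulas only (illustrative).
    import numpy as np
    from scipy import stats

    def tau_to_rho(u, v):
        """Invert Kendall's tau for a Gaussian pair copula: rho = sin(pi * tau / 2)."""
        tau, _ = stats.kendalltau(u, v)
        return np.sin(0.5 * np.pi * tau)

    def h_gauss(u, v, rho):
        """h-function C(u | v) of the Gaussian pair copula."""
        x, y = stats.norm.ppf(u), stats.norm.ppf(v)
        return stats.norm.cdf((x - rho * y) / np.sqrt(1.0 - rho**2))

    def fit_dvine3(u1, u2, u3):
        # Tree 1: unconditional pairs (1,2) and (2,3).
        r12, r23 = tau_to_rho(u1, u2), tau_to_rho(u2, u3)
        # Tree 2: pair (1,3) conditioned on variable 2, via h-functions.
        r13_2 = tau_to_rho(h_gauss(u1, u2, r12), h_gauss(u3, u2, r23))
        return r12, r23, r13_2

    # Synthetic pseudo-observations standing in for filtered JSMR/WSKT/BMRI returns.
    rng = np.random.default_rng(0)
    z = rng.multivariate_normal([0, 0, 0], [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], 1000)
    u1, u2, u3 = stats.norm.cdf(z).T
    print(fit_dvine3(u1, u2, u3))
    ```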

  6. Estimating Drilling Cost and Duration Using Copulas Dependencies Models

    Directory of Open Access Journals (Sweden)

    M. Al Kindi

    2017-03-01

    Estimation of the drilling budget and duration is a high-level challenge for the oil and gas industry. This is due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. Therefore, it is essential to consider all these uncertain variables and the nature of the relationships between them. This ultimately minimizes the level of uncertainty and yields "good" point estimates of budget and duration given the well type. In this paper, copula probability theory is used to model the dependencies between cost/duration and the MRI (mechanical risk index). The MRI is a mathematical computation that relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of the MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between the MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, where the study simulates over 10,000 scenarios. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate to weak MRI dependence, which means that the variation in these wells can be related to the MRI, but not as its primary source.
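
    The scenario simulation step can be sketched as follows; the Gaussian copula and the input arrays `mri` and `cost` are assumptions made for illustration, since this record does not state which copula family was selected.

    ```python
    # Draw dependent (MRI, cost) scenarios from a copula fitted to historical wells,
    # using rank-based pseudo-observations and empirical quantile functions for the marginals.
    import numpy as np
    from scipy import stats

    def simulate_scenarios(mri, cost, n_scenarios=10_000, seed=0):
        n = len(mri)
        # Rank-based pseudo-observations and a normal-scores estimate of the copula correlation.
        u = stats.rankdata(mri) / (n + 1)
        v = stats.rankdata(cost) / (n + 1)
        rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]
        # Sample from the Gaussian copula, then map back through empirical quantiles.
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n_scenarios)
        uu = stats.norm.cdf(z)
        return np.quantile(mri, uu[:, 0]), np.quantile(cost, uu[:, 1])
    ```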

  7. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  8. A survey of kernel-type estimators for copula and their applications

    Science.gov (United States)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structures. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: the parametric, nonparametric, and semiparametric methods. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
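
    A minimal sketch of the mirror-reflection kernel estimator mentioned in the survey is given below: the pseudo-observations are reflected across the edges of the unit square (nine copies in total) before a product Gaussian kernel is applied, which reduces the boundary bias of a plain kernel estimate on [0, 1]^2. The bandwidth and grid are illustrative choices.

    ```python
    # Mirror-reflection kernel estimator of a bivariate copula density (illustrative).
    import numpy as np
    from scipy import stats

    def mirror_reflection_copula_density(x, y, grid_u, grid_v, h=0.05):
        n = len(x)
        u = stats.rankdata(x) / (n + 1)              # pseudo-observations
        v = stats.rankdata(y) / (n + 1)
        c = np.zeros((len(grid_u), len(grid_v)))
        for a in (u, -u, 2.0 - u):                   # reflections across u = 0 and u = 1
            for b in (v, -v, 2.0 - v):               # reflections across v = 0 and v = 1
                ku = stats.norm.pdf((grid_u[:, None, None] - a[None, None, :]) / h)
                kv = stats.norm.pdf((grid_v[None, :, None] - b[None, None, :]) / h)
                c += np.sum(ku * kv, axis=2)         # product Gaussian kernel
        return c / (n * h * h)
    ```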

  9. Nested and Hierarchical Archimax copulas

    KAUST Repository

    Hofert, Marius; Huser, Raphaël; Prasad, Avinash

    2017-01-01

    The class of Archimax copulas is generalized to nested and hierarchical Archimax copulas in several ways. First, nested extreme-value copulas or nested stable tail dependence functions are introduced to construct nested Archimax copulas based on a single frailty variable. Second, a hierarchical construction of d-norm generators is presented to construct hierarchical stable tail dependence functions and thus hierarchical extreme-value copulas. Moreover, one can, by itself or additionally, introduce nested frailties to extend Archimax copulas to nested Archimax copulas in a similar way as nested Archimedean copulas extend Archimedean copulas. Further results include a general formula for the density of Archimax copulas.

  10. Nested and Hierarchical Archimax copulas

    KAUST Repository

    Hofert, Marius

    2017-07-03

    The class of Archimax copulas is generalized to nested and hierarchical Archimax copulas in several ways. First, nested extreme-value copulas or nested stable tail dependence functions are introduced to construct nested Archimax copulas based on a single frailty variable. Second, a hierarchical construction of d-norm generators is presented to construct hierarchical stable tail dependence functions and thus hierarchical extreme-value copulas. Moreover, one can, by itself or additionally, introduce nested frailties to extend Archimax copulas to nested Archimax copulas in a similar way as nested Archimedean copulas extend Archimedean copulas. Further results include a general formula for the density of Archimax copulas.

  11. Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model.

    Science.gov (United States)

    Tang, Jiechen; Zhou, Chao; Yuan, Xinyu; Sriboonchitta, Songsak

    2015-01-01

    This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.

  12. Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model

    Directory of Open Access Journals (Sweden)

    Jiechen Tang

    2015-01-01

    This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.
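
    The final simulation step can be sketched as follows, with Student t marginals standing in for the ARMA-GARCH-EVT marginals of the paper; `returns` is a hypothetical n-by-5 array of gas return series, and the degrees of freedom, simulation count, and confidence level are illustrative.

    ```python
    # Portfolio VaR and CVaR from a Student t copula (illustrative).
    import numpy as np
    from scipy import stats

    def t_copula_var_cvar(returns, nu=5, n_sim=100_000, alpha=0.95, seed=0):
        n, d = returns.shape
        # Pseudo-observations and a normal-scores estimate of the correlation matrix.
        u = stats.rankdata(returns, axis=0) / (n + 1)
        corr = np.corrcoef(stats.norm.ppf(u), rowvar=False)
        # Simulate from the t copula: multivariate normal scaled by a chi-square mixing variable.
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal(np.zeros(d), corr, n_sim)
        t = z / np.sqrt(rng.chisquare(nu, n_sim) / nu)[:, None]
        u_sim = stats.t.cdf(t, df=nu)
        # Map back through Student t marginals fitted to each series (EVT stand-in).
        sim = np.empty_like(u_sim)
        for j in range(d):
            params = stats.t.fit(returns[:, j])
            sim[:, j] = stats.t.ppf(u_sim[:, j], *params)
        port = sim.mean(axis=1)                      # equally weighted portfolio return
        var = -np.quantile(port, 1 - alpha)          # value at risk
        cvar = -port[port <= -var].mean()            # conditional value at risk
        return var, cvar
    ```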

  13. Comparing the Accuracy of Copula-Based Multivariate Density Forecasts in Selected Regions of Support

    NARCIS (Netherlands)

    C.G.H. Diks (Cees); V. Panchenko (Valentyn); O. Sokolinskiy (Oleg); D.J.C. van Dijk (Dick)

    2013-01-01

    This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample)

  14. Comparing the accuracy of copula-based multivariate density forecasts in selected regions of support

    NARCIS (Netherlands)

    Diks, C.; Panchenko, V.; Sokolinskiy, O.; van Dijk, D.

    2013-01-01

    This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but using (out-of-sample) conditional

  15. An Approach to Structure Determination and Estimation of Hierarchical Archimedean Copulas and its Application to Bayesian Classification

    Czech Academy of Sciences Publication Activity Database

    Górecki, J.; Hofert, M.; Holeňa, Martin

    2016-01-01

    Roč. 46, č. 1 (2016), s. 21-59 ISSN 0925-9902 R&D Projects: GA ČR GA13-17187S Grant - others:Slezská univerzita v Opavě(CZ) SGS/21/2014 Institutional support: RVO:67985807 Keywords : Copula * Hierarchical archimedean copula * Copula estimation * Structure determination * Kendall’s tau * Bayesian classification Subject RIV: IN - Informatics, Computer Science Impact factor: 1.294, year: 2016

  16. A Copula-Based Method for Estimating Shear Strength Parameters of Rock Mass

    Directory of Open Access Journals (Sweden)

    Da Huang

    2014-01-01

    The shear strength parameters (i.e., the internal friction coefficient f and cohesion c) are very important in rock engineering, especially for the stability analysis and reinforcement design of slopes and underground caverns. In this paper, a probabilistic, copula-based method is proposed for estimating the shear strength parameters of rock mass. The optimal copula functions between rock mass quality Q and f, and between Q and c, for marbles are established based on correlation analyses of the results of 12 sets of in situ tests in the exploration adits of the Jinping I-Stage Hydropower Station. Although the copula functions are derived from the in situ tests for marbles, they can be extended to other types of rock mass with similar geological and mechanical properties. For another 9 sets of in situ tests, used as an extended application, the estimated values of f and c from the copula-based method achieve better accuracy than the results from the Hoek-Brown criterion. Therefore, the proposed copula-based method is an effective tool for estimating rock strength parameters.

  17. Estimation of value at risk in currency exchange rate portfolio using asymmetric GJR-GARCH Copula

    Science.gov (United States)

    Nurrahmat, Mohamad Husein; Noviyanti, Lienda; Bachrudin, Achmad

    2017-03-01

    In this study, we discuss the problem of measuring the risk of a portfolio based on value at risk (VaR) using an asymmetric GJR-GARCH copula. The approach is based on the consideration that the assumption of normality of returns over time cannot be fulfilled, and that there is non-linear correlation in the dependence structure among the variables, which leads to inaccurate VaR estimates. Moreover, the leverage effect causes an asymmetric effect on the dynamic variance and exposes a weakness of standard GARCH models, whose effect on the conditional variance is symmetric. Asymmetric GJR-GARCH models are used to filter the margins, while copulas are used to link them together into a multivariate distribution. Copulas are then used to construct flexible multivariate distributions with different marginal and dependence structures, so that the joint portfolio distribution does not depend on assumptions of normality and linear correlation. The VaR obtained by the analysis at the 95% confidence level is 0.005586. This VaR is derived from the best copula model, the Student's t copula with Student's t marginal distributions.
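
    The asymmetric variance recursion that motivates the GJR-GARCH choice can be written as sigma_t^2 = omega + (alpha + gamma * 1{eps_{t-1} < 0}) * eps_{t-1}^2 + beta * sigma_{t-1}^2; a minimal filter is sketched below with placeholder parameter values, not estimates from the paper.

    ```python
    # GJR-GARCH(1,1) conditional-variance filter (placeholder parameters).
    import numpy as np

    def gjr_garch_variance(eps, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90):
        sigma2 = np.empty_like(eps, dtype=float)
        sigma2[0] = np.var(eps)                       # initialize with the sample variance
        for t in range(1, len(eps)):
            neg = 1.0 if eps[t - 1] < 0 else 0.0      # leverage indicator, active after negative shocks
            sigma2[t] = omega + (alpha + gamma * neg) * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        return sigma2
    ```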

  18. A method of moments to estimate bivariate survival functions: the copula approach

    Directory of Open Access Journals (Sweden)

    Silvia Angela Osmetti

    2013-05-01

    In this paper we discuss the problem of parametric and nonparametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model through the copula and different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous, and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions for which it is difficult to find an explicit expression of the mixed moments. Moreover, it is preferred to the maximum likelihood method for its simpler mathematical form, in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.
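
    The two-step logic can be sketched as below: marginal parameters from marginal moments, then the copula parameter from a rank-based moment (Kendall's tau). The Marshall-Olkin copula of the paper requires its own inversion formula; a Gumbel copula is used here only because its tau inversion, theta = 1/(1 - tau), has a closed form, and exponential marginals are assumed for illustration.

    ```python
    # Two-step moment estimation: marginal moments first, copula moment second (illustrative).
    import numpy as np
    from scipy import stats

    def two_step_moments(x, y):
        # Step 1: marginal parameters by the method of moments (exponential marginals assumed).
        rate_x, rate_y = 1.0 / np.mean(x), 1.0 / np.mean(y)
        # Step 2: copula parameter by inverting Kendall's tau (Gumbel copula, needs tau >= 0).
        tau, _ = stats.kendalltau(x, y)
        theta = 1.0 / (1.0 - tau)
        return rate_x, rate_y, theta
    ```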

  19. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and search precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA, called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  20. Factor Copula Models for Replicated Spatial Data

    KAUST Repository

    Krupskii, Pavel

    2016-12-19

    We propose a new copula model that can be used with replicated spatial data. Unlike the multivariate normal copula, the proposed copula is based on the assumption that a common factor exists and affects the joint dependence of all measurements of the process. Moreover, the proposed copula can model tail dependence and tail asymmetry. The model is parameterized in terms of a covariance function that may be chosen from the many models proposed in the literature, such as the Matérn model. For some choice of common factors, the joint copula density is given in closed form and therefore likelihood estimation is very fast. In the general case, one-dimensional numerical integration is needed to calculate the likelihood, but estimation is still reasonably fast even with large data sets. We use simulation studies to show the wide range of dependence structures that can be generated by the proposed model with different choices of common factors. We apply the proposed model to spatial temperature data and compare its performance with some popular geostatistics models.

  1. Factor Copula Models for Replicated Spatial Data

    KAUST Repository

    Krupskii, Pavel; Huser, Raphaël; Genton, Marc G.

    2016-01-01

    We propose a new copula model that can be used with replicated spatial data. Unlike the multivariate normal copula, the proposed copula is based on the assumption that a common factor exists and affects the joint dependence of all measurements of the process. Moreover, the proposed copula can model tail dependence and tail asymmetry. The model is parameterized in terms of a covariance function that may be chosen from the many models proposed in the literature, such as the Matérn model. For some choice of common factors, the joint copula density is given in closed form and therefore likelihood estimation is very fast. In the general case, one-dimensional numerical integration is needed to calculate the likelihood, but estimation is still reasonably fast even with large data sets. We use simulation studies to show the wide range of dependence structures that can be generated by the proposed model with different choices of common factors. We apply the proposed model to spatial temperature data and compare its performance with some popular geostatistics models.

  2. Understanding similarity of groundwater systems with empirical copulas

    Science.gov (United States)

    Haaf, Ezra; Kumar, Rohini; Samaniego, Luis; Barthel, Roland

    2016-04-01

    Within the classification framework for groundwater systems that aims to identify similarity of hydrogeological systems and to transfer information from a well-observed to an ungauged system (Haaf and Barthel, 2015; Haaf and Barthel, 2016), we propose a copula-based method for describing groundwater-system similarity. Copulas are an emerging method in the hydrological sciences that make it possible to model the dependence structure of two groundwater level time series independently of the effects of their marginal distributions. This study is based on Samaniego et al. (2010), which described an approach for calculating dissimilarity measures from bivariate empirical copula densities of streamflow time series; streamflow is subsequently predicted in ungauged basins by transferring properties from similar catchments. The proposed approach is innovative because copula-based similarity has not yet been applied to groundwater systems. Here we estimate the pairwise dependence structure of 600 wells in Southern Germany using 10 years of weekly groundwater level observations. Based on these empirical copulas, dissimilarity measures are estimated, such as the copula's lower and upper corner cumulated probabilities and the copula-based Spearman's rank correlation, as proposed by Samaniego et al. (2010). For the characterization of groundwater systems, copula-based metrics are compared with dissimilarities obtained from precipitation signals corresponding to the presumed area of influence of each groundwater well. This promising approach provides a new tool for advancing similarity-based classification of groundwater system dynamics. Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria. Haaf, E., Barthel, R., 2016. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs, EGU General Assembly
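
    The copula-based (dis)similarity measures mentioned above can be sketched from pseudo-observations of two groundwater-level series; the corner level p is an illustrative choice.

    ```python
    # Copula-based similarity measures for two groundwater-level series (illustrative).
    import numpy as np
    from scipy import stats

    def copula_similarity(g1, g2, p=0.1):
        n = len(g1)
        u = stats.rankdata(g1) / (n + 1)                 # pseudo-observations
        v = stats.rankdata(g2) / (n + 1)
        rho_s, _ = stats.spearmanr(g1, g2)               # copula-based Spearman rank correlation
        lower = np.mean((u <= p) & (v <= p))             # lower-corner cumulated probability
        upper = np.mean((u > 1 - p) & (v > 1 - p))       # upper-corner cumulated probability
        return rho_s, lower, upper
    ```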

  3. Using Copulas in the Estimation of the Economic Project Value in the Mining Industry, Including Geological Variability

    Science.gov (United States)

    Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal

    2017-12-01

    Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields have been performed for the KGHM Polska Miedź S.A. underground copper mine at the Fore Sudetic Monocline under the assumption of a constant, averaged content of useful elements. The research presented in this article aims to verify the value of production from copper and silver ore for the same economic background using variable cash flows resulting from the local variability of useful elements. Furthermore, the economic model of the ore is examined for significant differences between the model value estimated using a linear correlation between useful element content and mine face height, and the approach in which the correlation of model parameters is based on the copula best matching the information capacity criterion. The use of a copula allows the simulation to account for dependencies between multiple variables simultaneously, thereby giving a better reflection of the dependency structure than linear correlation. The calculation results of the economic model used for deposit value estimation indicate that the copper-silver correlation estimated with the copula generates a higher variation of possible project values than modelling the correlation with a linear correlation coefficient. The average deposit value remains unchanged.

  4. OPTIMUM VOLUME OF BANK RESERVE: FORECASTING OF OVERDUE CREDIT INDEBTEDNESS USING COPULA MODELS

    Directory of Open Access Journals (Sweden)

    Kazakova K. A.

    2015-12-01

    The article considers the application of RLUF-copulas to the construction of joint distributions of overdue credit indebtedness ranks with macroeconomic indicators, both for forecasting indebtedness and for defining optimum volumes of reserve requirements for the corresponding losses. The study compares multivariate distributions estimated with the RLUF-copula against classical copulas such as the FGM copula, the Frank copula, and the Gaussian copula. Maximum likelihood is used to obtain estimates of the model parameters; in the case of the RLUF-copula, Bayesian parameter estimates are obtained using the Metropolis algorithm with random volatility. Bank reserve volumes are forecast for all fitted models by generating random samples from the corresponding joint distribution with an acceptance-rejection algorithm applied to the copula density function. Playing out one hundred possible scenarios of indebtedness volumes yields a 95% confidence level for the possible volume of credit indebtedness, which can act as the optimum volume of reserve requirements for the corresponding credit losses.
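
    The acceptance-rejection step can be illustrated with the FGM copula mentioned above, whose density c(u, v) = 1 + theta(1 - 2u)(1 - 2v), |theta| <= 1, is bounded by 1 + |theta|, so a uniform proposal on the unit square serves as the envelope; the RLUF-copula itself would need its own density and bound.

    ```python
    # Acceptance-rejection sampling from the FGM copula density (illustrative).
    import numpy as np

    def sample_fgm(theta, n, seed=0):
        rng = np.random.default_rng(seed)
        m = 1.0 + abs(theta)                        # envelope constant bounding the density
        out = []
        while len(out) < n:
            u, v, w = rng.uniform(size=3)
            dens = 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)
            if w * m <= dens:                       # accept with probability dens / m
                out.append((u, v))
        return np.array(out)
    ```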

  5. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  6. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel

    2018-04-25

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  7. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    Science.gov (United States)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of the overall performance of a diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while accounting for the dependence structure using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula, in turn, is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density parametrically using the maximum likelihood estimator (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss the results to show how our method performs for different copula families. Finally, we briefly outline related challenges and opportunities for future research.

  8. Supplementary Material for: Factor Copula Models for Replicated Spatial Data

    KAUST Repository

    Krupskii, Pavel; Huser, Raphaël; Genton, Marc G.

    2016-01-01

    We propose a new copula model that can be used with replicated spatial data. Unlike the multivariate normal copula, the proposed copula is based on the assumption that a common factor exists and affects the joint dependence of all measurements of the process. Moreover, the proposed copula can model tail dependence and tail asymmetry. The model is parameterized in terms of a covariance function that may be chosen from the many models proposed in the literature, such as the Matérn model. For some choice of common factors, the joint copula density is given in closed form and therefore likelihood estimation is very fast. In the general case, one-dimensional numerical integration is needed to calculate the likelihood, but estimation is still reasonably fast even with large data sets. We use simulation studies to show the wide range of dependence structures that can be generated by the proposed model with different choices of common factors. We apply the proposed model to spatial temperature data and compare its performance with some popular geostatistics models.

  9. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
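
    A minimal sketch of the second approach (varying the window by point of the sample observation) is an Abramson-style adaptive kernel estimator in one dimension, where a fixed-bandwidth pilot sets local bandwidths proportional to f_pilot(x_i)^(-1/2); the pilot bandwidth rule below is an illustrative choice, not the estimator studied in the paper.

    ```python
    # Sample-point adaptive (variable-bandwidth) kernel density estimate (illustrative).
    import numpy as np
    from scipy import stats

    def adaptive_kde(x, grid, h0=None):
        n = len(x)
        if h0 is None:
            h0 = 1.06 * np.std(x) * n ** (-0.2)                  # Silverman-type pilot bandwidth
        pilot = stats.gaussian_kde(x, bw_method=h0 / np.std(x, ddof=1))(x)
        g = np.exp(np.mean(np.log(pilot)))                       # geometric mean of pilot values
        h = h0 * np.sqrt(g / pilot)                              # local bandwidths, wider where data are sparse
        k = stats.norm.pdf((grid[:, None] - x[None, :]) / h[None, :]) / h[None, :]
        return k.mean(axis=1)
    ```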

  10. Goodness-of-fit test for copulas

    Science.gov (United States)

    Panchenko, Valentyn

    2005-09-01

    Copulas are often used in finance to characterize the dependence between assets. However, the choice of the functional form for the copula is an open question in the literature. This paper develops a goodness-of-fit test for copulas based on positive definite bilinear forms. The suggested test avoids the use of plug-in estimators, which is the common practice in the literature. The test statistics can be consistently computed on the basis of V-estimators even in the case of large dimensions. The test is applied to a dataset of US large cap stocks to assess the performance of the Gaussian copula for portfolios of assets of various dimensions. The Gaussian copula appears to be inadequate to characterize the dependence between assets.

  11. Application of Vine Copulas to Credit Portfolio Risk Modeling

    Directory of Open Access Journals (Sweden)

    Marco Geidosch

    2016-06-01

    In this paper, we demonstrate the superiority of vine copulas over conventional copulas when modeling the dependence structure of a credit portfolio. We show statistical and economic implications of replacing conventional copulas by vine copulas for a subportfolio of the Euro Stoxx 50 and the S&P 500 companies, respectively. Our study includes D-vines and R-vines where the bivariate building blocks are chosen from the Gaussian, the t and the Clayton family. Our findings are: (i) the conventional Gauss copula is deficient in modeling the dependence structure of a credit portfolio and economic capital is seriously underestimated; (ii) D-vine structures offer a better statistical fit to the data than classical copulas, but underestimate economic capital compared to R-vines; (iii) when mixing different copula families in an R-vine structure, the best statistical fit to the data can be achieved, which corresponds to the most reliable estimate for economic capital.

  12. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  13. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    by Shih and Louis (Biometrics vol. 51, pp. 1384-1399, 1995b) and Glidden (Lifetime Data Analysis vol. 6, pp. 141-156, 2000). Because register-based family studies often involve very large cohorts, a method for analysing a sampled cohort is also derived together with the asymptotic properties...... of the estimators. The proposed methods are studied in simulations and the estimators are found to be highly efficient. Finally, the methods are applied to a study of mortality in twins.

  14. A Note on Upper Tail Behavior of Liouville Copulas

    Directory of Open Access Journals (Sweden)

    Lei Hua

    2016-11-01

    The family of Liouville copulas is defined as the survival copulas of multivariate Liouville distributions, and it covers the Archimedean copulas constructed by Williamson's d-transform. Liouville copulas provide a very wide range of dependence, from positive to negative dependence in the upper tails, and they can be useful in modeling tail risks. In this article, we study the upper tail behavior of Liouville copulas through their upper tail orders. Tail orders of a more general scale mixture model that covers Liouville distributions are first derived, and then tail order functions and tail order density functions of Liouville copulas are derived. Concrete examples are given after the main results.

  15. A Theoretical Argument Why the t-Copula Explains Credit Risk Contagion Better than the Gaussian Copula

    Directory of Open Access Journals (Sweden)

    Didier Cossin

    2010-01-01

    One of the key questions in credit dependence modelling is the specification of the copula function linking the marginals of default variables. Copula functions are important because they allow one to decouple statistical inference into two parts: inference of the marginals and inference of the dependence. This is particularly important in the area of credit risk, where information on dependence is scant. Whereas the techniques to estimate the parameters of the copula function seem to be fairly well established, the choice of the copula function is still an open problem. We find out by simulation that the t-copula naturally arises from a structural model of credit risk proposed by Cossin and Schellhorn (2007). If revenues are linked by a Gaussian copula, we demonstrate that the t-copula provides a better fit to simulations than does a Gaussian copula. This is done under various specifications of the marginals and various configurations of the network. Beyond its quantitative importance, this result is qualitatively intriguing. Student's t-copulae induce fatter (joint) tails than Gaussian copulae, ceteris paribus. On the other hand, observed credit spreads generally have fatter joint tails than the ones implied by the Gaussian distribution. We thus provide a new statistical explanation why (i) credit spreads have fat joint tails, and (ii) financial crises are amplified by network effects.

  16. Estimating risk of foreign exchange portfolio: Using VaR and CVaR based on GARCH-EVT-Copula model

    Science.gov (United States)

    Wang, Zong-Run; Chen, Xiao-Hong; Jin, Yan-Bo; Zhou, Yan-Ju

    2010-11-01

    This paper introduces the GARCH-EVT-Copula model and applies it to study the risk of a foreign exchange portfolio. Multivariate copulas, including the Gaussian, t and Clayton ones, were used to describe the portfolio risk structure, and to extend the analysis from a bivariate to an n-dimensional asset allocation problem. We apply this methodology to study the returns of a portfolio of four major foreign currencies in China, including USD, EUR, JPY and HKD. Our results suggest that the optimal investment allocations are similar across different copulas and confidence levels. In addition, we find that the optimal investment concentrates on the USD investment. Generally speaking, the t copula and Clayton copula better portray the correlation structure of multiple assets than the Normal copula.

  17. Optimization of Barron density estimates

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2001-01-01

    Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others:Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords : Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001

  18. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)

  19. ESTIMATION OF PORTFOLIO VaR USING ARCHIMEDEAN COPULA FUNCTIONS

    Directory of Open Access Journals (Sweden)

    AULIA ATIKA PRAWIBTA SUHARTO

    2017-01-01

    Value at Risk explains the magnitude of the worst losses incurred in financial product investments at a certain confidence level and time interval. The purpose of this study is to estimate the VaR of a portfolio using the Archimedean copula family. The steps for calculating the VaR are as follows: (1) calculating the stock returns; (2) calculating descriptive statistics of the returns; (3) checking for autocorrelation and heteroscedasticity effects in the stock return data; (4) checking for the presence of extreme values using the Pareto tail; (5) estimating the parameters of the Archimedean copula family; (6) conducting simulations of the Archimedean copulas; (7) estimating the VaR of the stock portfolio. This study uses the closing prices of TLKM and GGRM. At 90%, the VaR values obtained using the Clayton, Gumbel, and Frank copulas are 0.9562%, 1.0189%, and 0.9827%, respectively. At 95%, the VaR values obtained using the Clayton, Gumbel, and Frank copulas are 1.2930%, 1.2522%, and 1.3152%, respectively. At 99%, the VaR values obtained using the Clayton, Gumbel, and Frank copulas are 2.0327%, 1.9164%, and 1.8678%, respectively. In conclusion, VaR estimation using the Clayton copula yields the highest VaR.
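
    The Clayton branch of the procedure can be sketched as follows: simulate (u1, u2) by conditional inversion, map back through empirical quantiles of the two return series (instead of the Pareto-tail marginals used in the study), and read off the portfolio VaR; the copula parameter theta and the equal weights are illustrative inputs.

    ```python
    # Portfolio VaR from a simulated Clayton copula (illustrative).
    import numpy as np

    def clayton_portfolio_var(r1, r2, theta, alpha=0.95, n_sim=50_000, seed=0):
        rng = np.random.default_rng(seed)
        u1 = rng.uniform(size=n_sim)
        w = rng.uniform(size=n_sim)
        # Conditional inversion for the Clayton copula.
        u2 = (u1 ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        sim1 = np.quantile(r1, u1)                # empirical marginal quantiles
        sim2 = np.quantile(r2, u2)
        port = 0.5 * sim1 + 0.5 * sim2            # equally weighted two-asset portfolio
        return -np.quantile(port, 1.0 - alpha)    # VaR at the given confidence level
    ```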

  20. Competing Risks Copula Models for Unemployment Duration

    DEFF Research Database (Denmark)

    Lo, Simon M. S.; Stephan, Gesine; Wilke, Ralf

    2017-01-01

    The copula graphic estimator (CGE) for competing risks models has received little attention in empirical research, despite having been developed into a comprehensive research method. In this paper, we bridge the gap between theoretical developments and applied research by considering a general...... class of competing risks copula models, which nests popular models such as the Cox proportional hazards model, the semiparametric multivariate mixed proportional hazards model (MMPHM), and the CGE as special cases. Analyzing the effects of a German Hartz reform on unemployment duration, we illustrate...

  1. Convolution copula econometrics

    CERN Document Server

    Cherubini, Umberto; Mulinacci, Sabrina

    2016-01-01

    This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and provides a generalization with respect to the standard linear independent increments assumption of classical time series models. The book offers a solution to the problem of a general semiparametric approach, which is given by a concept called C-convolution (convolution of dependent variables), and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions wanting to learn about the latest research developments in the field.

  2. On the censored cost-effectiveness analysis using copula information

    Directory of Open Access Journals (Sweden)

    Charles Fontaine

    2017-02-01

    Background: Information and theory behind copula concepts are essential to understand the dependence relationship between several marginal covariate distributions. In therapeutic trial data, censoring frequently occurs, which can lead to a biased interpretation of the dependence relationship between marginal distributions and to biased inference of the joint probability distribution function. A particular case is cost-effectiveness analysis (CEA), which has shown its utility in many medico-economic studies and where censoring often occurs. Methods: This paper discusses copula-based modeling of the joint density and an estimation method for the costs and quality-adjusted life years (QALY) in a cost-effectiveness analysis under censoring. This method is not based on any linearity assumption on the inferred variables, but on a point estimate obtained from the marginal distributions together with their dependence link. Results: The proposed methodology retains only the bias resulting from statistical inference and no longer carries a bias due to an unverified linearity assumption. An acupuncture study for chronic headache in primary care was used to show the applicability of the method, and the obtained ICER remains within the confidence interval of the standard regression methodology. Conclusion: For the cost-effectiveness literature, such a technique without any linearity assumption is progress, since it does not need the specification of a global linear regression model. Hence, estimating the marginal distributions for each therapeutic arm, the concordance measures between these populations, and the right copula families is now sufficient to carry out the whole CEA.

  3. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    Science.gov (United States)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration purposes, the copula models are fitted to Malaysian motor insurance claims data. In this study, we consider copula models from the Archimedean and Elliptical families, and marginal distributions from Gamma and Inverse Gaussian regression models. The simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under this model most closely approximate the actual claims experience relative to the other copula models.

  4. Selecting Copulas for Risk Management

    NARCIS (Netherlands)

    H.J.W.G. Kole (Erik); C.G. Koedijk (Kees); M.J.C.M. Verbeek (Marno)

    2006-01-01

    Copulas offer financial risk managers a powerful tool to model the dependence between the different elements of a portfolio and are preferable to the traditional, correlation-based approach. In this paper we show the importance of selecting an accurate copula for risk management. We

  5. Modeling stochastic frontier based on vine copulas

    Science.gov (United States)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data is multivariate normally distributed and there is no source of asymmetry. The proposed method based on vine copulas allow us to explore different types of asymmetry and multivariate distribution. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated and the results showed discrepancies between the coefficients obtained by the application of the two methods, traditional and frontier-vine, opening new paths of non-linear research.

  6. A COMPARATIVE ANALYSIS OF ASEAN CURRENCIES USING A COPULA APPROACH AND A DYNAMIC COPULA APPROACH

    Directory of Open Access Journals (Sweden)

    CHUKIAT CHAIBOONSRI

    2012-12-01

    The ASEAN Economic Community (AEC) is being shaped and developed into a single market and production base in 2015, moving towards regional economic integration. These developments in international financial markets lead to some adverse costs for AEC country borrowers. The specific objective is to investigate dependence measures and co-movement among selected ASEAN currencies. A copula approach was used to examine dependence measures between the Thai Baht exchange rate and selected ASEAN currencies during the period 2008-2011. A dynamic copula approach was also applied to investigate the co-movement of the Thai Baht exchange rate with the selected ASEAN currencies over the same period. The results based on the Pearson linear correlation coefficient confirmed that the Thai Baht exchange rate and each of the selected ASEAN currencies were linearly correlated during the period, with the exception of the Vietnam exchange rate. Furthermore, based on the empirical copula approach, the Thai Baht exchange rate had a dependence structure with each of the selected ASEAN currencies, namely the Brunei, Singapore, Malaysia, Indonesia, Philippine, and Vietnam exchange rates. The results of the dynamic copula estimation indicated that the Thai Baht exchange rate co-moved with the selected ASEAN currencies. The research results provide an informative picture of the ASEAN financial market for all users, including the global financial market.

  7. Global Population Density Grid Time Series Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...

  8. Nonparametric identification of copula structures

    KAUST Repository

    Li, Bo

    2013-06-01

    We propose a unified framework for testing a variety of assumptions commonly made about the structure of copulas, including symmetry, radial symmetry, joint symmetry, associativity and Archimedeanity, and max-stability. Our test is nonparametric and based on the asymptotic distribution of the empirical copula process. We perform simulation experiments to evaluate our test and conclude that our method is reliable and powerful for assessing common assumptions on the structure of copulas, particularly when the sample size is moderately large. We illustrate our testing approach on two datasets. © 2013 American Statistical Association.
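
    One of the listed hypotheses, exchange symmetry C(u, v) = C(v, u), can be inspected through the empirical copula as sketched below; only the raw distance statistic is shown, whereas the paper calibrates such tests via the asymptotic distribution of the empirical copula process, which is omitted here.

    ```python
    # Raw symmetry statistic from the empirical copula (illustrative, no calibration).
    import numpy as np
    from scipy import stats

    def empirical_copula(u, v, a, b):
        """Empirical copula C_n(a, b) from pseudo-observations u, v."""
        return np.mean((u <= a) & (v <= b))

    def symmetry_statistic(x, y, m=50):
        n = len(x)
        u = stats.rankdata(x) / (n + 1)
        v = stats.rankdata(y) / (n + 1)
        grid = np.linspace(0.02, 0.98, m)
        diffs = [abs(empirical_copula(u, v, a, b) - empirical_copula(u, v, b, a))
                 for a in grid for b in grid]
        return np.sqrt(n) * max(diffs)
    ```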

  9. Density estimation from local structure

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2009-11-01

    Mixture Model (GMM) density function of the data and the log-likelihood scores are compared to the scores of a GMM trained with the expectation maximization (EM) algorithm on 5 real-world classification datasets (from the UCI collection). They show...

  10. COMPARISON OF LAST SURVIVOR INSURANCE WITH RETURN OF PREMIUM USING THE FRANK COPULA, CLAYTON COPULA, AND GUMBEL COPULA METHODS

    Directory of Open Access Journals (Sweden)

    I GEDE DICKY ARYA BRAMANTA

    2017-10-01

    This study examines last survivor life insurance with return of premium for married couples under independent and dependent mortality models. Using the Frank copula, Clayton copula, Gumbel copula and the Indonesian Mortality Table 2011, the impact of future lifetime dependence on single and annual premiums is evaluated. Based on the premium calculation for a 10-year contract with insured parties aged 58 and 55 years and an interest rate of 6.5%, the last survivor insurance premium with return of premium is more expensive than without return of premium. The greater the dependency, the more expensive the premium.

  11. Nonparametric identification of copula structures

    KAUST Repository

    Li, Bo; Genton, Marc G.

    2013-01-01

    We propose a unified framework for testing a variety of assumptions commonly made about the structure of copulas, including symmetry, radial symmetry, joint symmetry, associativity and Archimedeanity, and max-stability. Our test is nonparametric

  12. The dependence of Islamic and conventional stocks: A copula approach

    Science.gov (United States)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-09-01

    Recent studies have found that Islamic stocks are dependent on conventional stocks and appear to be more risky. In Asia, particularly in Islamic countries, research on dependence involving Islamic and non-Islamic stock markets is limited. The objective of this study is to investigate the dependence between the FTSE Hijrah Shariah index and conventional stocks (the EMAS and KLCI indices). Using the copula approach and a time series model for each marginal distribution function, the copula parameters were estimated. The Elliptical copula was selected to represent the dependence structure of each pairing of the Islamic stock and a conventional stock. Specifically, the Islamic-conventional pairs (Shariah-EMAS and Shariah-KLCI) had lower dependence than the conventional-conventional pair (EMAS-KLCI). These findings suggest that the occurrence of shocks in a conventional stock will not have a strong impact on the Islamic stock.

  13. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  14. ADN* Density log estimation Using Rockcell*

    International Nuclear Information System (INIS)

    Okuku, C.; Iloghalu, Emeka. M.; Omotayo, O.

    2003-01-01

    This work is intended to inform on the possibilities of estimating good density data in zones associated with sliding in a reservoir with the ADN* tool, with or without ADOS in the string, in cases where repeat sections were not run, possibly due to hole stability or directional concerns. The procedure has equally been used to obtain better density data in corkscrew holes. Density data (ROBB) were recomputed using a neural network in RockCell* to estimate the density over zones of interest. RockCell* is Schlumberger software with neural network functionality that can be used to estimate missing logs from the combined responses of other log curves over intervals that are not affected by sliding. In this work, an interval was selected and, within this interval, twelve litho zones were defined using the unsupervised neural network. From this, a training set was selected based on intervals of very good log responses outside the sliding zones. This training set was used to train and run the neural network for a specific lithostratigraphic interval, and the results matched the known good density curve. After this, the density curve was estimated using the supervised neural network. The output from this estimation matched very closely in the good portions of the log, thus providing some density measurements in the sliding zone. This methodology provides a scientific solution to missing data during the process of formation evaluation.

  15. Local fluctuations of the signed traded volumes and the dependencies of demands: a copula analysis

    Science.gov (United States)

    Wang, Shanshan; Guhr, Thomas

    2018-03-01

    We investigate how the local fluctuations of the signed traded volumes affect the dependence of demands between stocks. We analyze the empirical dependence of demands using copulas and show that it is well described by a bivariate K copula density function. We find that large local fluctuations strongly increase the positive dependence but slightly lower the negative one in the copula density. This interesting feature is due to cross-correlations of volume imbalances between stocks. We also explore the asymmetries of the tail dependencies of the copula density, which are moderate for the negative dependencies but strong for the positive ones. For the latter, we reveal that large local fluctuations of the signed traded volumes trigger stronger dependencies of demands than of supplies, probably indicating a bull market with a persistent rise in prices.

  16. Copula Theory and Its Applications

    CERN Document Server

    Jaworski, Piotr; Hardle, Wolfgang Karl; Rychlik, Tomasz

    2010-01-01

    Copulas are mathematical objects that fully capture the dependence structure among random variables and hence offer great flexibility in building multivariate stochastic models. Since their introduction in the early 50's, copulas have gained considerable popularity in several fields of applied mathematics, such as finance, insurance and reliability theory. Today, they represent a well-recognized tool for market and credit models, aggregation of risks, portfolio selection, etc. This book is divided into two main parts: Part I - 'Surveys' contains 11 chapters that provide an up-to-date account o

  17. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  18. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi

    2017-04-12

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at

  19. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi; Sun, Ying; Chen, Tianbo

    2017-01-01

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at

  20. Universal integrals based on copulas

    Czech Academy of Sciences Publication Activity Database

    Klement, E.P.; Mesiar, Radko; Spizzichino, F.; Stupňanová, A.

    2014-01-01

    Roč. 13, č. 3 (2014), s. 273-286 ISSN 1568-4539 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * copula * universal integral Subject RIV: BA - General Mathematics Impact factor: 2.163, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0432228.pdf

  1. Estimation and display of beam density profiles

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, S; Mukhopadhyay, T; Roy, A; Mallik, C

    1989-03-15

    A setup in which wire-scanner-type beam-profile monitor data are collected on-line in a nuclear data-acquisition system has been used and a simple algorithm for estimation and display of the current density distribution in a particle beam is described.

  2. Anisotropic Density Estimation in Global Illumination

    DEFF Research Database (Denmark)

    Schjøth, Lars

    2009-01-01

    Density estimation employed in multi-pass global illumination algorithms gives cause to a trade-off problem between bias and noise. The problem is seen most evident as blurring of strong illumination features. This thesis addresses the problem, presenting four methods that reduce both noise...

  3. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  4. ESTIMASI NILAI CONDITIONAL VALUE AT RISK MENGGUNAKAN FUNGSI GAUSSIAN COPULA

    Directory of Open Access Journals (Sweden)

    HERLINA HIDAYATI

    2015-11-01

    Full Text Available Copulas are already widely used for financial assets, especially in risk management, owing to their ability to capture nonlinear dependence structures in multivariate assets. In addition, using a copula function does not require the assumption of a normal distribution, so it is well suited to financial data. To manage risk, appropriate measurement tools are needed to help mitigate it. One measure that can be used to measure risk is Value at Risk (VaR). Although VaR is very popular, it has several weaknesses; to overcome them, an alternative risk measure called CVaR can be used. The purpose of this study is to estimate CVaR using a Gaussian copula. The data used are the closing prices of Facebook and Twitter stocks. The results show that the risk that may be experienced is 4.7% at the 90% confidence level, 6.1% at the 95% confidence level, and 10.6% at the 99% confidence level.
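
    A minimal sketch of the estimation idea, under the assumption that dependence is captured by a Gaussian copula with empirical marginals: simulate the two-stock portfolio, then read VaR and CVaR off the simulated loss distribution. The return series below are synthetic placeholders, not the Facebook and Twitter data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder return series standing in for the two stocks' daily returns
r1 = rng.normal(0.0005, 0.020, 1000)
r2 = rng.normal(0.0003, 0.025, 1000)

# Gaussian-copula parameter from Kendall's tau
tau, _ = stats.kendalltau(r1, r2)
rho = np.sin(np.pi * tau / 2)

# Simulate dependent uniforms from the Gaussian copula, then map them back
# through the empirical marginals (inverse-CDF step)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
u = stats.norm.cdf(z)
sim1 = np.quantile(r1, u[:, 0])
sim2 = np.quantile(r2, u[:, 1])
loss = -(0.5 * sim1 + 0.5 * sim2)            # equally weighted portfolio loss

for level in (0.90, 0.95, 0.99):
    var = np.quantile(loss, level)
    cvar = loss[loss >= var].mean()          # CVaR = expected loss beyond VaR
    print(f"{level:.0%} confidence: VaR = {var:.4f}, CVaR = {cvar:.4f}")
```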

  5. Risk of portfolio with simulated returns based on copula model

    Science.gov (United States)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-02-01

    The commonly used tool for measuring the risk of a portfolio of equally weighted stocks is the variance-covariance method. Under extreme circumstances, this method leads to significant underestimation of the actual risk due to its assumption of multivariate normality for the joint distribution of stocks. The purpose of this research is to compare the actual risk of a portfolio with the simulated risk of a portfolio in which the joint distribution of two return series is predetermined. The data used are daily stock prices from the ASEAN market for the period January 2000 to December 2012. The copula approach is applied to capture the time-varying dependence among the return series. The results show that the chosen copula families are not suitable to represent the dependence structures of each bivariate return series, except for the Philippines-Thailand pair, for which the t copula distribution appears to be the appropriate choice to depict its dependence. Assuming that the t copula distribution is the joint distribution of each paired series, simulated returns are generated and value-at-risk (VaR) is then applied to evaluate the risk of each portfolio consisting of two simulated return series. The VaR estimates were found to be symmetrical due to the simulation of returns via the elliptical copula-GARCH approach. By comparison, it is found that the actual risks are underestimated for all pairs of portfolios except Philippines-Thailand. This study shows that disregarding the non-normal dependence structure of two series will result in underestimation of the actual risk of the portfolio.

  6. Estimating snowpack density from Albedo measurement

    Science.gov (United States)

    James L. Smith; Howard G. Halverson

    1979-01-01

    Snow is a major source of water in Western United States. Data on snow depth and average snowpack density are used in mathematical models to predict water supply. In California, about 75 percent of the snow survey sites above 2750-meter elevation now used to collect data are in statutory wilderness areas. There is need for a method of estimating the water content of a...

  7. Infrared thermography for wood density estimation

    Science.gov (United States)

    López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis

    2018-03-01

    Infrared thermography (IRT) is becoming a commonly used technique to non-destructively inspect and evaluate wood structures. Based on the radiation emitted by all objects, this technique enables the remote visualization of the surface temperature without making contact using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated. However, some works have analyzed the operation of IRT and expanded its applications, as found in the latest literature. This work analyzes the effect of density on the thermodynamic behavior of timber to be determined by IRT. The cooling of various wood samples has been registered, and a statistical procedure that enables one to quantitatively estimate the density of timber has been designed. This procedure represents a new method to physically characterize this material.

  8. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  9. IDF relationships using bivariate copula for storm events in Peninsular Malaysia

    Science.gov (United States)

    Ariff, N. M.; Jemain, A. A.; Ibrahim, K.; Wan Zin, W. Z.

    2012-11-01

    Intensity-duration-frequency (IDF) curves are used in many hydrologic designs for the purposes of water management and flood prevention. The IDF curves available in Malaysia are those obtained from the univariate analysis approach, which only considers the intensity of rainfall at fixed time intervals. As several rainfall variables, such as intensity and duration, are correlated with each other, this paper aims to derive IDF points for storm events in Peninsular Malaysia by means of bivariate frequency analysis. This is achieved by modelling the relationship between storm intensities and durations using the copula method. Four types of copulas, namely the Ali-Mikhail-Haq (AMH), Frank, Gaussian and Farlie-Gumbel-Morgenstern (FGM) copulas, are considered because the correlation between storm intensity, I, and duration, D, is negative and these copulas are appropriate when the relationship between the variables is negative. The correlations are obtained by means of Kendall's τ estimation. The analysis was performed on twenty rainfall stations with hourly data across Peninsular Malaysia. Using Akaike's Information Criterion (AIC) for testing goodness-of-fit, both the Frank and Gaussian copulas are found to be suitable to represent the relationship between I and D. The IDF points found by the copula method are compared to the IDF curves yielded by the typical empirical IDF formula of the univariate approach. This study indicates that storm intensities obtained from both methods are in agreement with each other for any given storm duration and for various return periods.
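
    For the negative dependence between storm intensity and duration, the Frank copula parameter can be recovered from the estimated Kendall's τ through the first Debye function. The sketch below illustrates this inversion with an assumed τ value; it is not the paper's code.

```python
import numpy as np
from scipy import integrate, optimize

def debye1(theta):
    """First Debye function D1(theta) = (1/theta) * integral_0^theta t / (exp(t) - 1) dt."""
    integrand = lambda t: t / np.expm1(t) if t != 0.0 else 1.0
    value, _ = integrate.quad(integrand, 0.0, theta)
    return value / theta

def frank_tau(theta):
    """Kendall's tau implied by a Frank copula with parameter theta (theta != 0)."""
    return 1.0 - 4.0 / theta * (1.0 - debye1(theta))

# Placeholder: tau estimated from observed (intensity, duration) pairs
tau_hat = -0.35
theta_hat = optimize.brentq(lambda th: frank_tau(th) - tau_hat, -50.0, -1e-6)
print(f"Kendall tau = {tau_hat:.2f}  ->  Frank copula theta = {theta_hat:.3f}")
```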

  10. Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method

    Science.gov (United States)

    Lazoglou, Georgia; Anagnostopoulou, Christina

    2018-03-01

    This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) group includes Malaga and Barcelona. The data were organized into two time periods, the hot-dry period and the cold-wet one, each composed of 5 months. The analysis is based on a new statistical technique in climatology: the copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both time periods, whereas precipitation presents dependency only between the stations located in the EM or the WM and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study models the dependence of the main climate parameters (temperature and precipitation) with the copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.

  11. Forecasting VaR and ES of stock index portfolio: A Vine copula method

    Science.gov (United States)

    Zhang, Bangzheng; Wei, Yu; Yu, Jiang; Lai, Xiaodong; Peng, Zhenfeng

    2014-12-01

    Risk measurement has both theoretical and practical significance in risk management. Using a daily sample of 10 international stock indices, this paper first models the internal structures among different stock markets with C-Vine, D-Vine and R-Vine copula models. Secondly, the Value-at-Risk (VaR) and Expected Shortfall (ES) of the international stock market portfolio are forecasted using a Monte Carlo method based on the dependence estimated with the different Vine copulas. Finally, the accuracy of the VaR and ES measurements obtained from the different statistical models is evaluated by UC, IND, CC and posterior analysis. The empirical results show that the VaR forecasts at the quantile levels of 0.9, 0.95, 0.975 and 0.99 with the three kinds of Vine copula models are sufficiently accurate. Several traditional methods, such as historical simulation, mean-variance and DCC-GARCH models, fail to pass the CC backtesting. The Vine copula methods can accurately forecast the ES of the portfolio on the basis of the VaR measurement, and the D-Vine copula model is superior to the other Vine copulas.
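
    The UC and IND/CC evaluations mentioned above are standard likelihood-ratio backtests. The following sketch shows a plain implementation of the Kupiec unconditional-coverage test and the Christoffersen independence test applied to a synthetic series of VaR violations; it is illustrative only.

```python
import numpy as np
from scipy import stats
from scipy.special import xlogy

def kupiec_uc(hits, alpha):
    """Unconditional coverage LR test: observed violation rate vs. nominal 1 - alpha."""
    n, x = len(hits), int(np.sum(hits))
    p, phat = 1.0 - alpha, x / n
    lr = -2.0 * (xlogy(x, p) + xlogy(n - x, 1 - p)
                 - xlogy(x, phat) - xlogy(n - x, 1 - phat))
    return lr, stats.chi2.sf(lr, df=1)

def christoffersen_ind(hits):
    """Independence LR test based on first-order transition counts of the hit series."""
    h = np.asarray(hits, dtype=int)
    n00 = np.sum((h[:-1] == 0) & (h[1:] == 0))
    n01 = np.sum((h[:-1] == 0) & (h[1:] == 1))
    n10 = np.sum((h[:-1] == 1) & (h[1:] == 0))
    n11 = np.sum((h[:-1] == 1) & (h[1:] == 1))
    p01, p11 = n01 / (n00 + n01), n11 / (n10 + n11)
    p1 = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_constant = xlogy(n01 + n11, p1) + xlogy(n00 + n10, 1 - p1)
    ll_markov = (xlogy(n00, 1 - p01) + xlogy(n01, p01)
                 + xlogy(n10, 1 - p11) + xlogy(n11, p11))
    lr = -2.0 * (ll_constant - ll_markov)
    return lr, stats.chi2.sf(lr, df=1)

# Toy usage: a synthetic hit series at the 95% VaR level
rng = np.random.default_rng(1)
hits = rng.random(1000) < 0.05
print(kupiec_uc(hits, alpha=0.95))
print(christoffersen_ind(hits))
```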

  12. Multivariate density estimation theory, practice, and visualization

    CERN Document Server

    Scott, David W

    2015-01-01

    David W. Scott, PhD, is Noah Harding Professor in the Department of Statistics at Rice University. The author of over 100 published articles, papers, and book chapters, Dr. Scott is also Fellow of the American Statistical Association (ASA) and the Institute of Mathematical Statistics. He is recipient of the ASA Founder's Award and the Army Wilks Award. His research interests include computational statistics, data visualization, and density estimation. Dr. Scott is also Coeditor of Wiley Interdisciplinary Reviews: Computational Statistics and previous Editor of the Journal of Computational and

  13. Some aspects of Lévy copulas

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Lindner, A.M

    describe multivariate Lévy measures on R^m_+. In this paper, we show that any such Lévy copula itself defines a Lévy measure with 1-stable margins, in a canonical way. A limit theorem is obtained, characterising convergence of Lévy measures with the aid of Lévy copulas. Homogeneous Lévy copulas...... are considered in detail. They correspond to Lévy processes which have a time-constant Lévy copula. Furthermore, we show how the Lévy copula concept can be used to construct multivariate distributions in the Bondesson class with prescribed margins in the Bondesson class. The construction depends on a mapping ϒ...

  14. Bivariate copula in fitting rainfall data

    Science.gov (United States)

    Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui

    2014-07-01

    The use of copulas to determine the joint distribution between two variables is widespread in various areas. The joint distribution of rainfall characteristics obtained using a copula model is more suitable than standard bivariate modelling, as copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations. The copula models are the Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett copulas. The rainfall data used in the study are selected from rain gauge stations located in the southern part of Peninsular Malaysia, for the period from 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).

  15. The Realized Hierarchical Archimedean Copula in Risk Modelling

    Directory of Open Access Journals (Sweden)

    Ostap Okhrin

    2017-06-01

    Full Text Available This paper introduces the concept of the realized hierarchical Archimedean copula (rHAC. The proposed approach inherits the ability of the copula to capture the dependencies among financial time series, and combines it with additional information contained in high-frequency data. The considered model does not suffer from the curse of dimensionality, and is able to accurately predict high-dimensional distributions. This flexibility is obtained by using a hierarchical structure in the copula. The time variability of the model is provided by daily forecasts of the realized correlation matrix, which is used to estimate the structure and the parameters of the rHAC. Extensive simulation studies show the validity of the estimator based on this realized correlation matrix, and its performance, in comparison to the benchmark models. The application of the estimator to one-day-ahead Value at Risk (VaR prediction using high-frequency data exhibits good forecasting properties for a multivariate portfolio.

  16. Factor copula models for data with spatio-temporal dependence

    KAUST Repository

    Krupskii, Pavel

    2017-10-13

    We propose a new copula model for spatial data that are observed repeatedly in time. The model is based on the assumption that there exists a common factor that affects the measurements of a process in space and in time. Unlike models based on multivariate normality, our model can handle data with tail dependence and asymmetry. The likelihood for the proposed model can be obtained in a simple form and therefore parameter estimation is quite fast. Simulation from this model is straightforward and data can be predicted at any spatial location and time point. We use simulation studies to show different types of dependencies, both in space and in time, that can be generated by this model. We apply the proposed copula model to hourly wind data and compare its performance with some classical models for spatio-temporal data.

  17. Factor copula models for data with spatio-temporal dependence

    KAUST Repository

    Krupskii, Pavel; Genton, Marc G.

    2017-01-01

    We propose a new copula model for spatial data that are observed repeatedly in time. The model is based on the assumption that there exists a common factor that affects the measurements of a process in space and in time. Unlike models based on multivariate normality, our model can handle data with tail dependence and asymmetry. The likelihood for the proposed model can be obtained in a simple form and therefore parameter estimation is quite fast. Simulation from this model is straightforward and data can be predicted at any spatial location and time point. We use simulation studies to show different types of dependencies, both in space and in time, that can be generated by this model. We apply the proposed copula model to hourly wind data and compare its performance with some classical models for spatio-temporal data.

  18. Financial market volatility and contagion effect: A copula-multifractal volatility approach

    Science.gov (United States)

    Chen, Wang; Wei, Yu; Lang, Qiaoqi; Lin, Yu; Liu, Maojuan

    2014-03-01

    In this paper, we propose a new approach based on the multifractal volatility method (MFV) to study the contagion effect between the U.S. and Chinese stock markets. Recent studies in the econophysics literature reveal that multifractal characteristics exist in both developed and emerging financial markets, and our conclusions are as follows. Firstly, we estimate volatility using the multifractal volatility method and find that the MFV method performs best among the volatility models considered, such as GARCH-type and realized volatility models. Secondly, we analyze the tail dependence structure between the U.S. and Chinese stock markets. The estimated static copula results for the entire period show that the SJC copula performs best, indicating asymmetric characteristics of the tail dependence structure. The estimated dynamic copula results show that the time-varying t copula achieves the best performance, which means the symmetric dynamic t copula is also a good choice, as it is easy to estimate and is able to depict both the upper and lower tail dependence structure. Finally, with the results of the previous two steps, we analyze the contagion effect between the U.S. and Chinese stock markets during the subprime mortgage crisis. The empirical results show that the subprime mortgage crisis started in the U.S. and that its stock market has had an obvious contagion effect on the Chinese stock market. Our empirical results may be useful for investors allocating their portfolios.

  19. Mammography density estimation with automated volumetric breast density measurement

    International Nuclear Information System (INIS)

    Ko, Su Yeon; Kim, Eun Kyung; Kim, Min Jung; Moon, Hee Jung

    2014-01-01

    To compare automated volumetric breast density measurement (VBDM) with radiologists' evaluations based on the Breast Imaging Reporting and Data System (BI-RADS), and to identify the factors associated with technical failure of VBDM. In this study, 1129 women aged 19-82 years who underwent mammography from December 2011 to January 2012 were included. Breast density evaluations by radiologists based on BI-RADS and by VBDM (Volpara Version 1.5.1) were compared. The agreement in interpreting breast density between radiologists and VBDM was determined based on four density grades (D1, D2, D3, and D4) and a binary classification of fatty (D1-2) vs. dense (D3-4) breast using kappa statistics. The association between technical failure of VBDM and patient age, total breast volume, fibroglandular tissue volume, history of partial mastectomy, the frequency of mass > 3 cm, and breast density was analyzed. The agreement between breast density evaluations by radiologists and VBDM was fair (k value = 0.26) when the four density grades (D1/D2/D3/D4) were used and moderate (k value = 0.47) for the binary classification (D1-2/D3-4). Twenty-seven women (2.4%) showed failure of VBDM. Small total breast volume, history of partial mastectomy, and high breast density were significantly associated with technical failure of VBDM (p = 0.001 to 0.015). There is fair or moderate agreement in breast density evaluation between radiologists and VBDM. Technical failure of VBDM may be related to small total breast volume, a history of partial mastectomy, and high breast density.

  20. Conference on Copulas and Their Applications

    CERN Document Server

    Artero, Enrique; Durante, Fabrizio; Sánchez, Juan

    2017-01-01

    This book presents contributions and review articles on the theory of copulas and their applications. The authoritative and refereed contributions review the latest findings in the area with emphasis on “classical” topics like distributions with fixed marginals, measures of association, construction of copulas with given additional information, etc. The book celebrates the 75th birthday of Professor Roger B. Nelsen and his outstanding contribution to the development of copula theory. Most of the book’s contributions were presented at the conference “Copulas and Their Applications” held in his honor in Almería, Spain, July 3-5, 2017. The chapter 'When Gumbel met Galambos' is published open access under a CC BY 4.0 license.

  1. A two-phase copula entropy-based multiobjective optimization approach to hydrometeorological gauge network design

    Science.gov (United States)

    Xu, Pengcheng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Liu, Jiufu; Zou, Ying; He, Ruimin

    2017-12-01

    Hydrometeorological data are needed for obtaining point and areal means, quantifying the spatial variability of hydrometeorological variables, and calibration and verification of hydrometeorological models. Hydrometeorological networks are utilized to collect such data. Since data collection is expensive, it is essential to design an optimal network based on the minimal number of hydrometeorological stations in order to reduce costs. This study proposes a two-phase copula entropy-based multiobjective optimization approach that includes: (1) copula entropy-based directional information transfer (CDIT) for clustering the potential hydrometeorological gauges into several groups, and (2) a multiobjective method for selecting the optimal combination of gauges for regionalized groups. Although entropy theory has been employed for network design before, the joint histogram method used for mutual information estimation has several limitations. The copula entropy-based mutual information (MI) estimation method is shown to be more effective for quantifying the uncertainty of redundant information than the joint histogram (JH) method. The effectiveness of this approach is verified by applying it to one type of hydrometeorological gauge network, with the use of three model evaluation measures, including the Nash-Sutcliffe Coefficient (NSC), the arithmetic mean of the negative copula entropy (MNCE), and MNCE/NSC. Results indicate that the two-phase copula entropy-based multiobjective technique is capable of evaluating the performance of regional hydrometeorological networks and can enable decision makers to develop strategies for water resources management.
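
    A much simplified stand-in for the copula-entropy MI estimator: if the dependence is assumed Gaussian in the copula, the mutual information reduces to -0.5 * ln(1 - r^2), with r the correlation of the normal scores. The sketch below applies this shortcut to synthetic gauge series; the paper's nonparametric copula-entropy estimator is more general.

```python
import numpy as np
from scipy import stats

def gaussian_copula_mi(x, y):
    """MI (in nats) under a Gaussian-copula assumption: -0.5 * log(1 - r^2)."""
    u = stats.rankdata(x) / (len(x) + 1)      # pseudo-observations
    v = stats.rankdata(y) / (len(y) + 1)
    zx, zy = stats.norm.ppf(u), stats.norm.ppf(v)
    r = np.corrcoef(zx, zy)[0, 1]             # normal-scores correlation
    return -0.5 * np.log(1.0 - r**2)

# Toy usage with two synthetic, correlated gauge series
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(scale=0.7, size=500)
print(f"estimated mutual information = {gaussian_copula_mi(x, y):.3f} nats")
```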

  2. Copula-based prediction of economic movements

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Hirsh, I. D.

    2016-06-01

    In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and the dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain, and in this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.

  3. Gaussian copula as a likelihood function for environmental models

    Science.gov (United States)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. (2) Based on the results from a didactic example of predicting rainfall runoff, we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
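
    A rough sketch of what such a Gaussian-copula likelihood could look like: the marginal error distribution is learned semiparametrically from past errors (here with a KDE), and the dependence between errors is expressed through a Gaussian copula with an assumed AR(1) correlation structure. All names and the AR(1) choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglik(errors, past_errors, phi):
    """Log-likelihood of an error vector: KDE marginals + Gaussian copula with AR(1) correlation."""
    kde = stats.gaussian_kde(past_errors)                 # semiparametric marginal
    log_marginal = np.log(kde(errors)).sum()
    # Transform to normal scores via the marginal CDF implied by the KDE
    u = np.array([kde.integrate_box_1d(-np.inf, e) for e in errors])
    z = stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
    # AR(1)-structured correlation matrix models the autocorrelation of the errors
    n = len(errors)
    R = phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    _, logdet = np.linalg.slogdet(R)
    quad = z @ (np.linalg.solve(R, z) - z)                # z'(R^-1 - I)z
    return log_marginal - 0.5 * (logdet + quad)

# Toy usage: heavy-tailed "past" errors define the marginal; score new errors
rng = np.random.default_rng(3)
past = rng.standard_t(df=4, size=500)
new_errors = rng.standard_t(df=4, size=50)
print(gaussian_copula_loglik(new_errors, past, phi=0.5))
```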

  4. Global Risk Evolution and Diversification: a Copula-DCC-GARCH Model Approach

    Directory of Open Access Journals (Sweden)

    Marcelo Brutti Righi

    2012-12-01

    Full Text Available In this paper we estimate a dynamic portfolio composed of the U.S., German, British, Brazilian, Hong Kong and Australian markets; the period considered started in September 2001 and finished in September 2011. We ran the Copula-DCC-GARCH model on the conditional covariance matrix of the daily returns. The results allow us to conclude that there were changes in portfolio composition, occasioned by modifications in volatility and dependence between markets. The dynamic approach significantly reduced the portfolio risk compared to the traditional static approach, especially in turbulent periods. Furthermore, we verified that the estimated copula model outperformed the conventional DCC model for the sample studied.

  5. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting

    2011-03-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.

  6. Approximation of bivariate copulas by patched bivariate Fréchet copulas

    KAUST Repository

    Zheng, Yanting; Yang, Jingping; Huang, Jianhua Z.

    2011-01-01

    Bivariate Fréchet (BF) copulas characterize dependence as a mixture of three simple structures: comonotonicity, independence and countermonotonicity. They are easily interpretable but have limitations when used as approximations to general dependence structures. To improve the approximation property of the BF copulas and keep the advantage of easy interpretation, we develop a new copula approximation scheme by using BF copulas locally and patching the local pieces together. Error bounds and a probabilistic interpretation of this approximation scheme are developed. The new approximation scheme is compared with several existing copula approximations, including shuffle of min, checkmin, checkerboard and Bernstein approximations and exhibits better performance, especially in characterizing the local dependence. The utility of the new approximation scheme in insurance and finance is illustrated in the computation of the rainbow option prices and stop-loss premiums. © 2010 Elsevier B.V.

  7. Concrete density estimation by rebound hammer method

    International Nuclear Information System (INIS)

    Ismail, Mohamad Pauzi bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin

    2016-01-01

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked for determining concrete quality; however, for shielding purposes density is the parameter that needs to be considered. X- and gamma-radiation are effectively absorbed by a material with a high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite

  8. Snow-melt flood frequency analysis by means of copula based 2D probability distributions for the Narew River in Poland

    Directory of Open Access Journals (Sweden)

    Bogdan Ozga-Zielinski

    2016-06-01

    New hydrological insights for the region: The results indicated that the 2D normal probability distribution model gives a better probabilistic description of snowmelt floods characterized by the 2-dimensional random variable (Qmax,f, Vf) compared to the elliptical Gaussian copula and Archimedean 1-parameter Gumbel–Hougaard copula models, in particular from the viewpoint of probability of exceedance as well as complexity and time of computation. Nevertheless, the copula approach offers a new perspective in estimating the 2D probability distribution for multidimensional random variables. Results showed that the 2D model for snowmelt floods built using the Gumbel–Hougaard copula is much better than the model built using the Gaussian copula.

  9. The Gaussian copula model for the joint deficit index for droughts

    Science.gov (United States)

    Van de Vyver, H.; Van den Bergh, J.

    2018-06-01

    The characterization of droughts and their impacts is very dependent on the time scale that is involved. In order to obtain an overall drought assessment, the cumulative effects of water deficits over different times need to be examined together. For example, the recently developed joint deficit index (JDI) is based on multivariate probabilities of precipitation over various time scales from 1 to 12 months, and was constructed from empirical copulas. In this paper, we examine the Gaussian copula model for the JDI. We model the covariance across the temporal scales with a two-parameter function that is commonly used in the specific context of spatial statistics or geostatistics. The validity of the covariance models is demonstrated with long-term precipitation series. Bootstrap experiments indicate that the Gaussian copula model has advantages over the empirical copula method in the context of drought severity assessment: (i) it is able to quantify droughts outside the range of the empirical copula, (ii) it provides adequate drought quantification, and (iii) it provides a better understanding of the uncertainty in the estimation.
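
    The Gaussian-copula step can be sketched as follows: given marginal non-exceedance probabilities of the 1- to 12-month precipitation totals and a two-parameter correlation model across the scales, the joint probability is a multivariate normal CDF. The parameters and probabilities below are placeholders, and the final mapping to an index value is only analogous to, not identical with, the JDI construction.

```python
import numpy as np
from scipy import stats

scales = np.arange(1, 13)                      # 1- to 12-month aggregation windows
# Two-parameter powered-exponential correlation across time scales (placeholder a, c)
a, c = 4.0, 1.0
lag = np.abs(np.subtract.outer(scales, scales))
R = np.exp(-((lag / a) ** c))

# Marginal non-exceedance probabilities of the current 1..12-month precipitation
# totals (e.g. from fitted gamma marginals); the values below are placeholders
u = np.array([0.30, 0.25, 0.22, 0.20, 0.24, 0.28,
              0.31, 0.35, 0.33, 0.30, 0.29, 0.27])
z = stats.norm.ppf(u)

# Joint non-exceedance probability under the Gaussian copula, then a z-score-like index
mvn = stats.multivariate_normal(mean=np.zeros(len(scales)), cov=R)
p_joint = mvn.cdf(z)
index_value = stats.norm.ppf(p_joint)
print(f"joint probability = {p_joint:.4f}, standardized index = {index_value:.2f}")
```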

  10. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions are used to specify the dependence between random variables, with the dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
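
    A toy Monte Carlo sketch of the ARL computation when paired exponential observations are generated through a Gaussian copula (a stand-in for the four copula families studied); the chart settings and dependence level are arbitrary placeholders.

```python
import numpy as np
from scipy import stats

def run_length(rho, lam=1.0, smooth=0.1, L=2.7, max_steps=10_000, rng=None):
    """Steps until the EWMA of the first exponential component leaves its limits."""
    if rng is None:
        rng = np.random.default_rng()
    mu = 1.0 / lam
    sigma_ewma = (1.0 / lam) * np.sqrt(smooth / (2.0 - smooth))   # asymptotic EWMA std dev
    ucl, lcl = mu + L * sigma_ewma, max(mu - L * sigma_ewma, 0.0)
    # Dependent pairs: Gaussian copula pushed through exponential marginals
    g = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=max_steps)
    x = stats.expon.ppf(stats.norm.cdf(g), scale=1.0 / lam)[:, 0]
    z = mu
    for t, obs in enumerate(x, start=1):
        z = smooth * obs + (1.0 - smooth) * z
        if z > ucl or z < lcl:
            return t
    return max_steps

rng = np.random.default_rng(4)
arl = np.mean([run_length(rho=0.5, rng=rng) for _ in range(500)])
print(f"estimated in-control ARL: {arl:.1f}")
```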

  11. Impact of copula directional specification on multi-trial evaluation of surrogate endpoints

    Science.gov (United States)

    Renfro, Lindsay A.; Shang, Hongwei; Sargent, Daniel J.

    2014-01-01

    Evaluation of surrogate endpoints using patient-level data from multiple trials is the gold standard, where multi-trial copula models are used to quantify both patient-level and trial-level surrogacy. While limited consideration has been given in the literature to copula choice (e.g., Clayton), no prior consideration has been given to direction of implementation (via survival versus distribution functions). We demonstrate that even with the "correct" copula family, directional misspecification leads to biased estimates of patient-level and trial-level surrogacy. We illustrate with a simulation study and a re-analysis of disease-free survival as a surrogate for overall survival in early stage colon cancer. PMID:24905465

  12. Pair copula constructions to determine the dependence structure of Treasury bond yields

    Directory of Open Access Journals (Sweden)

    Marcelo Brutti Righi

    2015-12-01

    Full Text Available We estimated the dependence structure of US Treasury bonds through a pair copula construction. As a result, we verified that the variability of the yields decreases with a longer time of maturity of the bond. The yields presented strong dependence with past values, strongly positive bivariate associations between the daily variations, and prevalence of the Student's t copula in the relationships between the bonds. Furthermore, in tail associations, we identified relevant values in most of the relationships, which highlights the importance of risk management in the context of bonds diversification.

  13. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of the integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve a similar improvement by relaxing the integral constraint only. This is important in appl...

  14. Probabilistic modelling of drought events in China via 2-dimensional joint copula

    Science.gov (United States)

    Ayantobo, Olusola O.; Li, Yi; Song, Songbai; Javed, Tehseen; Yao, Ning

    2018-04-01

    Probabilistic modelling of drought events is a significant aspect of water resources management and planning. In this study, popularly applied and several relatively new bivariate Archimedean copulas were employed to derive regional and spatially based copula models to appraise drought risk in mainland China over 1961-2013. Drought duration (Dd), severity (Ds), and peak (Dp), as indicated by the Standardized Precipitation Evapotranspiration Index (SPEI), were extracted according to run theory and fitted with suitable marginal distributions. Maximum likelihood estimation (MLE) and the curve fitting method (CFM) were used to estimate the copula parameters of nineteen bivariate Archimedean copulas. Drought probabilities and return periods were analysed based on the appropriate bivariate copula in sub-regions I-VII and in entire mainland China. The goodness-of-fit tests as indicated by the CFM showed that copula NN19 in sub-regions III, IV, V, VI and mainland China, NN20 in sub-region I and NN13 in sub-region VII are the best for modelling the drought variables. The bivariate drought probability across mainland China is relatively high, and the highest drought probabilities are found mainly in Northwestern and Southwestern China. The results also showed that different sub-regions may suffer varying drought risks; the drought risks observed in sub-regions III, VI and VII are significantly greater than in the other sub-regions. A higher probability of droughts of longer durations in the sub-regions also corresponds to shorter return periods with greater drought severity. These results may imply tremendous challenges for water resources management in the different sub-regions, particularly Northwestern and Southwestern China.
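
    Once marginals and a copula are fitted, joint return periods follow directly; e.g. for the 'and' case (duration and severity both exceeded), T = mu / (1 - F_D(d) - F_S(s) + C(F_D(d), F_S(s))), with mu the mean interarrival time of drought events. The sketch below evaluates this with a Gumbel-Hougaard copula and illustrative values, not the fitted NN copulas of the study.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v; theta), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_and(Fd, Fs, theta, mu=1.5):
    """Return period (years) of D >= d AND S >= s; mu is the mean drought interarrival time."""
    return mu / (1.0 - Fd - Fs + gumbel_copula(Fd, Fs, theta))

# Example with illustrative non-exceedance probabilities and copula parameter
print(f"{joint_return_period_and(Fd=0.90, Fs=0.85, theta=2.0):.1f} years")
```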

  15. Uncertainty of Hydrological Drought Characteristics with Copula Functions and Probability Distributions: A Case Study of Weihe River, China

    Directory of Open Access Journals (Sweden)

    Panpan Zhao

    2017-05-01

    Full Text Available This study investigates the sensitivity and uncertainty of hydrological drought frequencies and severity in the Weihe Basin, China during 1960–2012, by using six commonly used univariate probability distributions and three Archimedean copulas to fit the marginal and joint distributions of drought characteristics. The Anderson-Darling method is used for testing the goodness-of-fit of the univariate model, and the Akaike information criterion (AIC) is applied to select the best distribution and copula functions. The results demonstrate that there is a very strong correlation between drought duration and drought severity at the three stations. The drought return period varies depending on the selected marginal distributions and copula functions and, with an increase of the return period, the differences become larger. In addition, the estimated return periods (both co-occurrence and joint) from the best-fitted copulas are the closest to those from the empirical distribution. Therefore, it is critical to select the appropriate marginal distribution and copula function to model hydrological drought frequency and severity. The results of this study can not only help drought investigations to select a suitable probability distribution and copula function, but are also useful for regional water resource management. However, a few limitations remain in this study, such as the assumption of stationarity of the runoff series.

  16. Estimating diurnal primate densities using distance sampling ...

    African Journals Online (AJOL)

    SARAH

    2016-03-31

    Mar 31, 2016 ... In the second session, we used 10 transect adjusted to transect (Grid 17 ... session transect was visited 20 times while at the second session transect ... probability, the density of the group and the group size of each species ...

  17. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  18. Ant-inspired density estimation via random walks.

    Science.gov (United States)

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
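
    The estimator density ≈ encounters per step can be reproduced in a toy simulation: agents random-walk on a torus grid and count how many other agents share their cell at each step. The parameters below are arbitrary and only meant to show the per-agent estimates concentrating near the true density.

```python
import numpy as np

rng = np.random.default_rng(5)
side, n_agents, n_steps = 50, 125, 2000
true_density = n_agents / side**2                     # agents per grid cell

pos = rng.integers(0, side, size=(n_agents, 2))
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
encounters = np.zeros(n_agents)

for _ in range(n_steps):
    pos = (pos + moves[rng.integers(0, 4, size=n_agents)]) % side   # torus random walk
    cell_id = pos[:, 0] * side + pos[:, 1]
    _, inverse, counts = np.unique(cell_id, return_inverse=True, return_counts=True)
    encounters += counts[inverse] - 1                 # other agents sharing the cell

estimates = encounters / n_steps                      # per-agent encounter-rate estimates
print(f"true density {true_density:.3f}, "
      f"mean estimate {estimates.mean():.3f} +/- {estimates.std():.3f}")
```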

  19. Information geometry of density matrices and state estimation

    International Nuclear Information System (INIS)

    Brody, Dorje C

    2011-01-01

    Given a pure state vector |x) and a density matrix ρ̂, the function p(x|ρ̂) = (x|ρ̂|x) defines a probability density on the space of pure states parameterised by density matrices. The associated Fisher-Rao information measure is used to define a unitary invariant Riemannian metric on the space of density matrices. An alternative derivation of the metric, based on square-root density matrices and trace norms, is provided. This is applied to the problem of quantum-state estimation. In the simplest case of unitary parameter estimation, new higher-order corrections to the uncertainty relations, applicable to general mixed states, are derived. (fast track communication)

  20. USING COPULAS TO MODEL DEPENDENCE IN SIMULATION RISK ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    Dana L. Kelly

    2007-11-01

    Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
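
    The sampling idea can be sketched by drawing dependent failure times for two redundant trains from a Gaussian copula with Weibull marginals and comparing the joint failure probability with the independence assumption. The marginal parameters, dependence level and mission time are illustrative placeholders, not plant data or the report's values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
rho = 0.6                                   # copula dependence between the two trains
shape, scale = 1.5, 8760.0                  # Weibull failure-time marginals (hours)

# Dependent uniforms from a Gaussian copula, pushed through the Weibull marginals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
u = stats.norm.cdf(z)
t_fail = stats.weibull_min.ppf(u, c=shape, scale=scale)

# Probability that both redundant trains fail before a 1000-hour mission time
p_dependent = np.mean((t_fail < 1000.0).all(axis=1))
p_independent = stats.weibull_min.cdf(1000.0, c=shape, scale=scale) ** 2
print(f"dependent: {p_dependent:.5f}  vs  independence assumption: {p_independent:.5f}")
```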

  1. Current Source Density Estimation for Single Neurons

    Directory of Open Access Journals (Sweden)

    Dorottya Cserpán

    2014-03-01

    Full Text Available Recent developments in multielectrode technology have made it possible to measure the extracellular potential generated in neural tissue with spatial precision on the order of tens of micrometers and on a submillisecond time scale. Combining such measurements with imaging of single neurons within the studied tissue opens up new experimental possibilities for estimating the distribution of current sources along a dendritic tree. In this work we show that if we are able to relate part of the recording of the extracellular potential to a specific cell of known morphology, we can estimate the spatiotemporal distribution of transmembrane currents along it. We present here an extension of the kernel CSD method (Potworowski et al., 2012) applicable in such a case. We test it on several model neurons of progressively complicated morphologies, from ball-and-stick to realistic, up to analysis of simulated neuron activity embedded in a substantial working network (Traub et al., 2005). We discuss the caveats and possibilities of this new approach.

  2. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
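
    As a small illustration in the spirit of such comparisons, the sketch below contrasts Silverman's rule-of-thumb bandwidth with a likelihood cross-validated bandwidth for a one-dimensional kernel density estimate on a bimodal sample; the benchmark tasks of the study are not reproduced.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(7)
# Bimodal sample, a case where the rule of thumb tends to oversmooth
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 1.0, 300)])[:, None]

# Silverman's rule-of-thumb bandwidth
h_silverman = 1.06 * x.std() * len(x) ** (-1 / 5)

# Likelihood cross-validated bandwidth (KernelDensity.score is the held-out log-likelihood)
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(0.05, 1.5, 30)}, cv=5)
grid.fit(x)
h_cv = grid.best_params_["bandwidth"]

print(f"Silverman bandwidth: {h_silverman:.3f}, cross-validated bandwidth: {h_cv:.3f}")
```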

  3. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old per 100 km², and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  4. Breast density estimation from high spectral and spatial resolution MRI

    Science.gov (United States)

    Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.

    2016-01-01

    Abstract. A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board--approved Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generating, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists’ breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 (pdensity estimations. An interclass correlation coefficient of 0.99 (pdensity estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists’ BI-RADS. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and low intra-user variabilities shown in this preliminary study suggest that such a HiSS-based density metric may be potentially beneficial in programs requiring breast density such as in breast cancer risk assessment and monitoring effects of therapy. PMID:28042590

  5. Multivariate Option Pricing Using Dynamic Copula Models

    NARCIS (Netherlands)

    van den Goorbergh, R.W.J.; Genest, C.; Werker, B.J.M.

    2003-01-01

    This paper examines the behavior of multivariate option prices in the presence of association between the underlying assets.Parametric families of copulas offering various alternatives to the normal dependence structure are used to model this association, which is explicitly assumed to vary over

  6. The utilization of copula in hydrology

    OpenAIRE

    Trandafir, Romica; Ciuiu, Daniel; Drobot, Radu

    2010-01-01

    In this paper the parameters of the generalized Pareto cumulative distribution functions of the marginals and the parameter θ of the connecting copula for the maximum water discharges and water volumes are obtained. The isolines for C(F(x), G(y)) = 1 − ε and for C*(F(x), G(y)) = ε will be drawn.

  7. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = - 0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fractions measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0

  8. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    The variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis.

  9. Improved Variable Window Kernel Estimates of Probability Densities

    OpenAIRE

    Hall, Peter; Hu, Tien Chung; Marron, J. S.

    1995-01-01

    Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...

  10. On Improving Density Estimators which are not Bona Fide Functions

    OpenAIRE

    Gajek, Leslaw

    1986-01-01

    In order to improve the rate of decrease of the IMSE for nonparametric kernel density estimators with nonrandom bandwidth beyond $O(n^{-4/5})$ all current methods must relax the constraint that the density estimate be a bona fide function, that is, be nonnegative and integrate to one. In this paper we show how to achieve similar improvement without relaxing any of these constraints. The method can also be applied for orthogonal series, adaptive orthogonal series, spline, jackknife, and other ...

  11. The Structure of the Class of Maximum Tsallis–Havrda–Charvát Entropy Copulas

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2016-07-01

    Full Text Available A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Charvát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula.

  12. 'The formula that killed Wall Street': the Gaussian copula and modelling practices in investment banking.

    Science.gov (United States)

    MacKenzie, Donald; Spears, Taylor

    2014-06-01

    Drawing on documentary sources and 114 interviews with market participants, this and a companion article discuss the development and use in finance of the Gaussian copula family of models, which are employed to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. This article, which explores how and why the Gaussian copula family developed in the way it did, employs the concept of 'evaluation culture', a set of practices, preferences and beliefs concerning how to determine the economic value of financial instruments that is shared by members of multiple organizations. We identify an evaluation culture, dominant within the derivatives departments of investment banks, which we call the 'culture of no-arbitrage modelling', and explore its relation to the development of Gaussian copula models. The article suggests that two themes from the science and technology studies literature on models (modelling as 'impure' bricolage, and modelling as articulating with heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models in finance.

  13. Density estimates of monarch butterflies overwintering in central Mexico

    Directory of Open Access Journals (Sweden)

    Wayne E. Thogmartin

    2017-04-01

    Full Text Available Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  14. Density estimates of monarch butterflies overwintering in central Mexico

    Science.gov (United States)

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  15. Bivariate Kumaraswamy Models via Modified FGM Copulas: Properties and Applications

    Directory of Open Access Journals (Sweden)

    Indranil Ghosh

    2017-11-01

    Full Text Available A copula is a useful tool for constructing bivariate and/or multivariate distributions. In this article, we consider a new modified class of FGM (Farlie–Gumbel–Morgenstern) bivariate copulas for constructing several different bivariate Kumaraswamy type copulas and discuss their structural properties, including dependence structures. It is established that construction of bivariate distributions by this method allows for greater flexibility in the values of Spearman’s correlation coefficient ρ and Kendall’s τ.
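
    As a point of reference for the FGM family discussed above, the short Python sketch below implements the standard (unmodified) FGM copula, its density, and the closed-form Spearman's ρ and Kendall's τ, whose narrow attainable range (|ρ| ≤ 1/3, |τ| ≤ 2/9) is what motivates modified constructions like the one in this record. The function names are illustrative, not taken from the paper.

      def fgm_cdf(u, v, theta):
          # Standard FGM copula C(u, v) = u*v*(1 + theta*(1-u)*(1-v)), |theta| <= 1.
          return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

      def fgm_pdf(u, v, theta):
          # FGM copula density c(u, v) = 1 + theta*(1-2u)*(1-2v).
          return 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

      def fgm_spearman_rho(theta):
          # Closed form: rho = theta / 3, so |rho| never exceeds 1/3.
          return theta / 3.0

      def fgm_kendall_tau(theta):
          # Closed form: tau = 2*theta / 9, so |tau| never exceeds 2/9.
          return 2.0 * theta / 9.0

      for theta in (-1.0, 0.0, 1.0):
          print(theta, fgm_cdf(0.3, 0.7, theta), fgm_spearman_rho(theta), fgm_kendall_tau(theta))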

  16. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    Full Text Available The choice of bandwidth is crucial to the kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least square regression have been developed in the past decade. It has been known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with appropriate choice of γ(x), and furthermore equality of scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, the “Rule of Thumb”) and direct plug-in bandwidth selection. Simulation studies display that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
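
    The quantity ∫f²(x)dx is the simplest example of the density functionals above (γ(x) = 1), and it can be estimated by averaging a kernel density estimate over the sample points. The sketch below is a minimal illustration rather than the bandwidth selectors proposed in the paper; it uses a generic Silverman-type normal-scale bandwidth, and the function names are hypothetical.

      import numpy as np

      def normal_scale_bandwidth(x):
          # Generic "rule of thumb" bandwidth for a univariate sample.
          n = x.size
          iqr = np.subtract(*np.percentile(x, [75, 25]))
          sigma = min(np.std(x, ddof=1), iqr / 1.349)
          return 0.9 * sigma * n ** (-0.2)

      def integral_f_squared(x, h=None):
          # Estimate of the functional \int f^2(x) dx: the average of a Gaussian KDE at the data points.
          if h is None:
              h = normal_scale_bandwidth(x)
          u = (x[:, None] - x[None, :]) / h
          kde_at_points = np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))
          return kde_at_points.mean()

      rng = np.random.default_rng(0)
      sample = rng.normal(size=2000)
      # True value for a standard normal density is 1 / (2 * sqrt(pi)) ≈ 0.2821.
      print(integral_f_squared(sample))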

  17. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this

  18. Probability Density Estimation Using Neural Networks in Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo

    2008-01-01

    The Monte Carlo neutronics analysis requires the capability for a tally distribution estimation like an axial power distribution or a flux gradient in a fuel rod, etc. This problem can be regarded as a probability density function estimation from an observation set. We apply the neural network based density estimation method to an observation and sampling weight set produced by the Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally method for estimating a non-smooth density, a fission source distribution, and an absorption rate's gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)

  19. A two-component copula with links to insurance

    Directory of Open Access Journals (Sweden)

    Ismail S.

    2017-12-01

    Full Text Available This paper presents a new copula to model dependencies between insurance entities, by considering how insurance entities are affected by both macro and micro factors. The model used to build the copula assumes that the insurance losses of two companies or lines of business are related through a random common loss factor which is then multiplied by an individual random company factor to get the total loss amounts. The new two-component copula is not Archimedean and it extends the toolkit of copulas for the insurance industry.

  20. Computerized image analysis: estimation of breast density on mammograms

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    2000-06-01

    An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.

  1. Joint survival probability via truncated invariant copula

    International Nuclear Information System (INIS)

    Kim, Jeong-Hoon; Ma, Yong-Ki; Park, Chan Yeol

    2016-01-01

    Highlights: • We have studied an issue of dependence structure between default intensities. • We use a multivariate shot noise intensity process, where jumps occur simultaneously and their sizes are correlated. • We obtain the joint survival probability of the integrated intensities by using a copula. • We apply our theoretical result to pricing basket default swap spread. - Abstract: Given an intensity-based credit risk model, this paper studies dependence structure between default intensities. To model this structure, we use a multivariate shot noise intensity process, where jumps occur simultaneously and their sizes are correlated. Through very lengthy algebra, we obtain explicitly the joint survival probability of the integrated intensities by using the truncated invariant Farlie–Gumbel–Morgenstern copula with exponential marginal distributions. We also apply our theoretical result to pricing basket default swap spreads. This result can provide a useful guide for credit risk management.

  2. Gradient-based stochastic estimation of the density matrix

    Science.gov (United States)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.

  3. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.

  4. Density Estimation in Several Populations With Uncertain Population Membership

    KAUST Repository

    Ma, Yanyuan

    2011-09-01

    We devise methods to estimate probability density functions of several populations using observations with uncertain population membership, meaning from which population an observation comes is unknown. The probability of an observation being sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate our methods with data from a nutrition study.

  5. Assessing the copula selection for bivariate frequency analysis ...

    Indian Academy of Sciences (India)


    Copulas are applied to overcome the restriction of traditional bivariate frequency ... frequency analysis methods cannot describe the random variable properties that ... In order to overcome the limitation of multivariate distributions, a copula is a ..... The Mann-Kendall (M-K) test is a non-parametric statistical test which is used ...

  6. Lower Tail Dependence for Archimedean Copulas : Characterizations and Pitfalls

    NARCIS (Netherlands)

    Charpentier, A.; Segers, J.J.J.

    2006-01-01

    Tail dependence copulas provide a natural perspective from which one can study the dependence in the tail of a multivariate distribution.For Archimedean copulas with continuously differentiable generators, regular variation of the generator near the origin is known to be closely connected to

  7. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    Directory of Open Access Journals (Sweden)

    Justine S Alexander

    Full Text Available When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km² study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km². Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.

  8. Assessing Efficiency of D-Vine Copula ARMA-GARCH Method in Value at Risk Forecasting: Evidence from PSE Listed Companies

    Directory of Open Access Journals (Sweden)

    Václav Klepáč

    2015-01-01

    Full Text Available The article points out the possibilities of using a static D-Vine copula ARMA-GARCH model for estimating 1-day-ahead market Value at Risk. For illustration we use data on four companies listed on the Prague Stock Exchange from 2010 to 2014. The vine copula approach allows us to construct a high-dimensional copula, i.e. a multivariate probability distribution for the process innovations, from both elliptical and Archimedean bivariate copulas. Given the shortage of existing domestic results and comparison studies involving advanced volatility-driven VaR forecasts, we backtested the D-Vine copula ARMA-GARCH model against rolling out-of-sample VaR forecasts, from October 2012 to April 2014, of chosen benchmark models, e.g. multivariate VAR-GO-GARCH, VAR-DCC-GARCH and univariate ARMA-GARCH type models. Standard backtesting via the Kupiec and Christoffersen procedures suggests that the technical sophistication of a model supports accuracy only in the univariate case, i.e. when working with non-basic GARCH models and innovations with leptokurtic distributions. The multivariate VAR-type models and the static copula vines performed worse in this backtesting comparison than the selected univariate ARMA-GARCH models, i.e. they overestimated the level of actual market risk, probably due to a hardly tractable time-varying dependence structure.
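
    The Kupiec procedure mentioned above is a proportion-of-failures likelihood-ratio test on the number of VaR exceptions. A minimal sketch is given below; the exception count and sample size are hypothetical, and the implementation is a generic textbook version rather than the authors' code.

      import numpy as np
      from scipy.stats import chi2

      def kupiec_pof(exceptions, n_obs, alpha=0.01):
          # Likelihood of x exceptions in n Bernoulli(p) trials, evaluated at the
          # nominal coverage alpha and at the observed exception rate.
          def loglik(p):
              p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the 0/1 edge cases
              return (n_obs - exceptions) * np.log(1.0 - p) + exceptions * np.log(p)
          lr = -2.0 * (loglik(alpha) - loglik(exceptions / n_obs))
          return lr, chi2.sf(lr, df=1)              # reject coverage if the p-value is small

      # Hypothetical backtest: 9 exceptions over 380 out-of-sample days at 1% VaR.
      print(kupiec_pof(9, 380, alpha=0.01))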

  9. Bayesian error estimation in density-functional theory

    DEFF Research Database (Denmark)

    Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund

    2005-01-01

    We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...

  10. Estimate of energy density on CYCLOPS spatial filter pinhole structure

    International Nuclear Information System (INIS)

    Guch, S. Jr.

    1974-01-01

    The inclusion of a spatial filter between the B and C stages in CYCLOPS to reduce the effects of small-scale beam self-focusing is discussed. An estimate is made of the energy density to which the pinhole will be subjected, and the survivability of various pinhole materials and designs is discussed

  11. State of the Art in Photon-Density Estimation

    DEFF Research Database (Denmark)

    Hachisuka, Toshiya; Jarosz, Wojciech; Georgiev, Iliyan

    2013-01-01

    scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error...

  12. State of the Art in Photon Density Estimation

    DEFF Research Database (Denmark)

    Hachisuka, Toshiya; Jarosz, Wojciech; Bouchard, Guillaume

    2012-01-01

    scattering. Since its introduction, photon-density estimation has been significantly extended in computer graphics with the introduction of: specialized techniques that intelligently modify the positions or bandwidths to reduce visual error using a small number of photons, approaches that eliminate error...

  13. Estimation of larval density of Liriomyza sativae Blanchard (Diptera ...

    African Journals Online (AJOL)

    This study was conducted to develop sequential sampling plans to estimate larval density of Liriomyza sativae Blanchard (Diptera: Agromyzidae) at three precision levels in cucumber greenhouse. The within- greenhouse spatial patterns of larvae were aggregated. The slopes and intercepts of both Iwao's patchiness ...

  14. Estimating forest canopy bulk density using six indirect methods

    Science.gov (United States)

    Robert E. Keane; Elizabeth D. Reinhardt; Joe Scott; Kathy Gray; James Reardon

    2005-01-01

    Canopy bulk density (CBD) is an important crown characteristic needed to predict crown fire spread, yet it is difficult to measure in the field. Presented here is a comprehensive research effort to evaluate six indirect sampling techniques for estimating CBD. As reference data, detailed crown fuel biomass measurements were taken on each tree within fixed-area plots...

  15. Estimating Soil Bulk Density and Total Nitrogen from Catchment ...

    African Journals Online (AJOL)

    Even though data on soil bulk density (BD) and total nitrogen (TN) are essential for planning modern farming techniques, their availability is limited for many applications in the developing world. This study is designed to estimate BD and TN from soil properties, land-use systems, soil types and landforms in the ...

  16. Density estimation in tiger populations: combining information for strong inference

    Science.gov (United States)

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  17. Corruption clubs: empirical evidence from kernel density estimates

    NARCIS (Netherlands)

    Herzfeld, T.; Weiss, Ch.

    2007-01-01

    A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to

  18. A Balanced Approach to Adaptive Probability Density Estimation

    Directory of Open Access Journals (Sweden)

    Julio A. Kovacs

    2017-04-01

    Full Text Available Our development of a Fast (Mutual Information) Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
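
    The BADE algorithm itself is not reproduced in this record, but the idea of adapting the amount of smoothing to local sample density can be illustrated with a simple balloon-type estimator in which the bandwidth at each evaluation point is the distance to its k-th nearest sample point. The sketch below is a generic illustration under that assumption, not the authors' method.

      import numpy as np

      def knn_adaptive_kde(grid, sample, k=40):
          # Balloon estimator: wider Gaussian kernels in sparse regions, narrower in dense ones.
          sample = np.asarray(sample, dtype=float)
          out = np.empty(grid.size)
          for i, x in enumerate(grid):
              d = np.abs(sample - x)
              h = np.partition(d, k)[k]              # distance to the k-th nearest neighbour
              u = (x - sample) / h
              out[i] = np.exp(-0.5 * u**2).sum() / (sample.size * h * np.sqrt(2.0 * np.pi))
          return out

      rng = np.random.default_rng(1)
      # A very uneven sample: a tight mode plus a sparse, broad component.
      data = np.concatenate([rng.normal(0.0, 0.2, 1800), rng.normal(4.0, 1.5, 200)])
      grid = np.linspace(-2.0, 10.0, 400)
      density = knn_adaptive_kde(grid, data)
      print(density.max(), np.trapz(density, grid))  # peak height and approximate total mass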

  19. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  20. Simplified large African carnivore density estimators from track indices

    Directory of Open Access Journals (Sweden)

    Christiaan W. Winterbach

    2016-12-01

    Full Text Available Background The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. Methods We did simple linear regression with intercept analysis and simple linear regression through the origin and used the confidence interval for β in the linear model y = αx + β, Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. Results The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Discussion Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26
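
    The comparison described above amounts to fitting y = αx + β and y = αx to the same calibration data and comparing the standard errors, residuals and AIC. The sketch below shows the core of such a comparison on made-up track-index/density pairs; the data and the simple Gaussian AIC are illustrative assumptions only, not values from the study.

      import numpy as np

      def fit_line(x, y, with_intercept=True):
          # Ordinary least squares with or without an intercept term.
          X = np.column_stack([x, np.ones_like(x)]) if with_intercept else x[:, None]
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = float(np.sum((y - X @ coef) ** 2))
          n, k = y.size, X.shape[1]
          aic = n * np.log(rss / n) + 2.0 * (k + 1)   # +1 for the error variance
          return coef, rss, aic

      # Hypothetical calibration pairs: observed track density vs. true carnivore density.
      tracks = np.array([0.4, 0.9, 1.3, 2.1, 2.8, 3.5])
      density = np.array([1.2, 3.1, 4.0, 7.2, 9.0, 11.6])
      print("with intercept    :", fit_line(tracks, density, True))
      print("through the origin:", fit_line(tracks, density, False))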

  1. Evaluating lidar point densities for effective estimation of aboveground biomass

    Science.gov (United States)

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating above ground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data, were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m2, corresponding to the point density range of 3DEP to provide national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than the lowest lidar point density of 0.5 point/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.

  2. Estimating the effect of urban density on fuel demand

    Energy Technology Data Exchange (ETDEWEB)

    Karathodorou, Niovi; Graham, Daniel J. [Imperial College London, London, SW7 2AZ (United Kingdom); Noland, Robert B. [Rutgers University, New Brunswick, NJ 08901 (United States)

    2010-01-15

    Much of the empirical literature on fuel demand presents estimates derived from national data which do not permit any explicit consideration of the spatial structure of the economy. Intuitively we would expect the degree of spatial concentration of activities to have a strong link with transport fuel consumption. The present paper addresses this theme by estimating a fuel demand model for urban areas to provide a direct estimate of the elasticity of demand with respect to urban density. Fuel demand per capita is decomposed into car stock per capita, fuel consumption per kilometre and annual distance driven per car per year. Urban density is found to affect fuel consumption, mostly through variations in the car stock and in the distances travelled, rather than through fuel consumption per kilometre. (author)

  3. Automated mammographic breast density estimation using a fully convolutional network.

    Science.gov (United States)

    Lee, Juhun; Nishikawa, Robert M

    2018-03-01

    The purpose of this study was to develop a fully automated algorithm for mammographic breast density estimation using deep learning. Our algorithm used a fully convolutional network, which is a deep learning framework for image segmentation, to segment both the breast and the dense fibroglandular areas on mammographic images. Using the segmented breast and dense areas, our algorithm computed the breast percent density (PD), which is the fraction of dense area in a breast. Our dataset included full-field digital screening mammograms of 604 women, which included 1208 mediolateral oblique (MLO) and 1208 craniocaudal (CC) views. We allocated 455, 58, and 91 of 604 women and their exams into training, testing, and validation datasets, respectively. We established ground truth for the breast and the dense fibroglandular areas via manual segmentation and segmentation using a simple thresholding based on BI-RADS density assessments by radiologists, respectively. Using the mammograms and ground truth, we fine-tuned a pretrained deep learning network to train the network to segment both the breast and the fibroglandular areas. Using the validation dataset, we evaluated the performance of the proposed algorithm against radiologists' BI-RADS density assessments. Specifically, we conducted a correlation analysis between a BI-RADS density assessment of a given breast and its corresponding PD estimate by the proposed algorithm. In addition, we evaluated our algorithm in terms of its ability to classify the BI-RADS density using PD estimates, and its ability to provide consistent PD estimates for the left and the right breast and the MLO and CC views of the same women. To show the effectiveness of our algorithm, we compared the performance of our algorithm against a state-of-the-art algorithm, laboratory for individualized breast radiodensity assessment (LIBRA). The PD estimated by our algorithm correlated well with BI-RADS density ratings by radiologists. Pearson's rho values of
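
    Once the breast and the dense fibroglandular tissue have been segmented, the percent density is simply the ratio of the two areas. The fragment below illustrates that final step on toy binary masks; it is not the network or the LIBRA software described in the abstract, and the mask shapes are made up.

      import numpy as np

      def percent_density(breast_mask, dense_mask):
          # PD = fraction of the segmented breast area classified as dense tissue.
          breast = breast_mask.astype(bool)
          dense = dense_mask.astype(bool) & breast      # dense tissue must lie inside the breast
          area = breast.sum()
          return 100.0 * dense.sum() / area if area else float("nan")

      # Toy masks standing in for the predicted segmentations of one mammographic view.
      breast = np.zeros((100, 100), dtype=bool)
      breast[10:90, 20:80] = True
      dense = np.zeros((100, 100), dtype=bool)
      dense[30:60, 35:65] = True
      print(percent_density(breast, dense))             # -> 18.75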

  4. Dual-Layer Density Estimation for Multiple Object Instance Detection

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2016-01-01

    Full Text Available This paper introduces a dual-layer density estimation-based architecture for multiple object instance detection in robot inventory management applications. The approach consists of raw scale-invariant feature transform (SIFT) feature matching and key point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of the density estimation. A cascade of filters is applied after feature template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirical coefficient. The coefficient is identified experimentally. Adaptive threshold-based grid voting is applied to find all candidate object instances. Detection errors are eliminated using final geometric verification in accordance with Random Sample Consensus (RANSAC). The detection results of the proposed approach are evaluated on a self-built dataset collected in a supermarket. The results demonstrate that the approach provides high robustness and low latency for inventory management applications.

  5. Value at risk using financial copulas: Application to the Mexican exchange rate (2002-2011

    Directory of Open Access Journals (Sweden)

    Tania Nadiezhda Plascencia Cuevas

    2012-12-01

    Full Text Available Nowadays, exchange rate volatility is a crucial issue for all transactions, negotiations and operations taking place in foreign currency, and an objective, accurate prediction is the cornerstone. The main objective of this research is therefore to analyze, for the Mexican exchange rate market, whether risk assessments using traditional VaR and copula-based VaR methodologies are more accurate when the estimates are made over a long historical time series or over certain shorter periods, helping to predict the maximum losses that may occur, with the main motivation of building an efficient hedging strategy. The principal conclusion is that, when assessing risk with these methodologies, the series does not necessarily have to include more than five years of data, since the use of copulas as a dependence measure makes the prediction fit the movements of actual returns better.

  6. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Full Text Available Light emitting diode (LED lamp has attracted increasing interest in the field of lighting systems due to its low energy and long lifetime. For different functions (i.e., illumination and color, it may have two or more performance characteristics. When the multiple performance characteristics are dependent, it creates a challenging problem to accurately analyze the system reliability. In this paper, we assume that the system has two performance characteristics, and each performance characteristic is governed by a random effects Gamma process where the random effects can capture the unit to unit differences. The dependency of performance characteristics is described by a Frank copula function. Via the copula function, the reliability assessment model is proposed. Considering the model is so complicated and analytically intractable, the Markov chain Monte Carlo (MCMC method is used to estimate the unknown parameters. A numerical example about actual LED lamps data is given to demonstrate the usefulness and validity of the proposed model and method.
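
    To make the role of the Frank copula concrete, the sketch below evaluates the joint probability that both performance characteristics stay within their failure thresholds, given the two marginal failure probabilities and a dependence parameter θ. It is a generic illustration of the copula step only; the random effects Gamma-process marginals and MCMC estimation of the paper are not reproduced, and the parameter values are hypothetical.

      import numpy as np

      def frank_cdf(u, v, theta):
          # Frank copula C(u, v; theta); theta -> 0 recovers independence.
          if abs(theta) < 1e-9:
              return u * v
          num = np.expm1(-theta * u) * np.expm1(-theta * v)
          return -np.log1p(num / np.expm1(-theta)) / theta

      def joint_reliability(p_fail_1, p_fail_2, theta):
          # P(neither characteristic has failed) = 1 - F1 - F2 + C(F1, F2).
          return 1.0 - p_fail_1 - p_fail_2 + frank_cdf(p_fail_1, p_fail_2, theta)

      # Marginal failure probabilities of 10% and 15% at some fixed time.
      print(joint_reliability(0.10, 0.15, theta=0.0))   # independence: 0.765
      print(joint_reliability(0.10, 0.15, theta=8.0))   # positive dependence raises joint reliability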

  7. Semiautomatic estimation of breast density with DM-Scan software.

    Science.gov (United States)

    Martínez Gómez, I; Casals El Busto, M; Antón Guirao, J; Ruiz Perales, F; Llobet Azpitarte, R

    2014-01-01

    To evaluate the reproducibility of the calculation of breast density with DM-Scan software, which is based on the semiautomatic segmentation of fibroglandular tissue, and to compare it with the reproducibility of estimation by visual inspection. The study included 655 direct digital mammograms acquired using craniocaudal projections. Three experienced radiologists analyzed the density of the mammograms using DM-Scan, and the inter- and intra-observer agreements between pairs of radiologists for the Boyd and BI-RADS® scales were calculated using the intraclass correlation coefficient. The Kappa index was used to compare the inter- and intra-observer agreements with those obtained previously for visual inspection in the same set of images. For visual inspection, the mean interobserver agreement was 0.876 (95% CI: 0.873-0.879) on the Boyd scale and 0.823 (95% CI: 0.818-0.829) on the BI-RADS® scale. The mean intraobserver agreement was 0.813 (95% CI: 0.796-0.829) on the Boyd scale and 0.770 (95% CI: 0.742-0.797) on the BI-RADS® scale. For DM-Scan, the mean inter- and intra-observer agreement was 0.92, considerably higher than the agreement for visual inspection. The semiautomatic calculation of breast density using DM-Scan software is more reliable and reproducible than visual estimation and reduces the subjectivity and variability in determining breast density. Copyright © 2012 SERAM. Published by Elsevier España. All rights reserved.

  8. Covariance and correlation estimation in electron-density maps.

    Science.gov (United States)

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  9. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Daily Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in

  10. Estimating black bear density using DNA data from hair snares

    Science.gov (United States)

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture–recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km². A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
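
    The detection model sketched below shows the general form used in spatially explicit capture-recapture: the encounter rate at a trap decays with the distance between the trap and an individual's activity centre, and a behavioural-response covariate shifts it after a first capture (e.g. attraction to bait). The half-normal form and the parameter values are generic illustrations, not the fitted values from this study.

      import numpy as np

      def detection_prob(dist_m, lam0, sigma_m, prev_capture=False, beta=0.0):
          # Half-normal encounter rate with an optional behavioural-response term,
          # converted to the probability of at least one detection in an occasion.
          log_lam = np.log(lam0) - dist_m**2 / (2.0 * sigma_m**2) + beta * float(prev_capture)
          return 1.0 - np.exp(-np.exp(log_lam))

      # Activity centre 300 m from a hair snare, movement scale sigma = 500 m.
      print(detection_prob(300.0, lam0=0.05, sigma_m=500.0))
      print(detection_prob(300.0, lam0=0.05, sigma_m=500.0, prev_capture=True, beta=0.7))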

  11. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been

  12. Copula-based model for rainfall and El- Niño in Banyuwangi Indonesia

    Science.gov (United States)

    Caraka, R. E.; Supari; Tahmid, M.

    2018-04-01

    Modelling, describing and measuring the dependence structure between different random events is at the very heart of statistics. Therefore, a broad variety of dependence concepts has been developed in the past. Most often, practitioners rely only on the linear correlation to describe the degree of dependence between two or more variables, an approach that can lead to quite misleading conclusions as this measure is only capable of capturing linear relationships. Copulas go beyond such dependence measures and provide a sound framework for general dependence modelling. This paper introduces an application of copulas to estimate, understand, and interpret the dependence structure in a set of El Niño data for Banyuwangi, Indonesia. In a nutshell, we demonstrate the flexibility of Archimedean copulas in rainfall modelling and in capturing the El Niño phenomenon in Banyuwangi, East Java, Indonesia. It was also found that the SSTs of the Niño 3, Niño 4, and Niño 3.4 regions are the most appropriate ENSO indicators for identifying the relationship between El Niño and rainfall.
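
    A common and very simple way to fit a one-parameter Archimedean copula to a rainfall series and an SST index is to invert the Kendall's τ–θ relationship of the chosen family, as sketched below. The synthetic data, the choice of the Clayton family and the function names are illustrative assumptions, not the estimates reported in this record.

      import numpy as np
      from scipy.stats import kendalltau

      def theta_from_tau(tau, family="clayton"):
          # Moment-style inversion of the tau-theta relationship.
          if family == "clayton":               # tau = theta / (theta + 2)
              return 2.0 * tau / (1.0 - tau)
          if family == "gumbel":                # tau = 1 - 1/theta (positive dependence only)
              return 1.0 / (1.0 - tau)
          raise ValueError("unknown family")

      rng = np.random.default_rng(2)
      # Hypothetical monthly series: an SST anomaly index and station rainfall anomalies.
      sst = rng.normal(size=240)
      rain = -0.6 * sst + rng.normal(scale=0.8, size=240)   # drier months under warm SST
      tau, _ = kendalltau(sst, rain)
      print(tau, theta_from_tau(tau, "clayton"))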

  13. Ambit determination method in estimating rice plant population density

    Directory of Open Access Journals (Sweden)

    Abu Bakar, B.,

    2017-11-01

    Full Text Available Rice plant population density is a key indicator in determining the crop setting and fertilizer application rate. It is therefore essential that the population density is monitored to ensure that a correct crop management decision is taken. The conventional method of determining plant population is by manually counting the total number of rice plant tillers in a 25 cm x 25 cm square frame. Sampling is done by randomly choosing several different locations within a plot to perform tiller counting. This sampling method is time consuming, labour intensive and costly. An alternative fast estimation method was developed to overcome this issue. The method relies on measuring the outer circumference or ambit of the contained rice plants in a 25 cm x 25 cm square frame to determine the number of tillers within that square frame. Data samples of rice variety MR219 were collected from rice plots in the Muda granary area, Sungai Limau Dalam, Kedah. The data were taken at 50 days and 70 days after seeding (DAS). A total of 100 data samples were collected for each sampling day. A good correlation was obtained for the variety at both 50 DAS and 70 DAS. The model was then verified by taking 100 samples with the latching strap for 50 DAS and 70 DAS. As a result, this technique can be used as a fast, economical and practical alternative to manual tiller counting. The technique can potentially be used in the development of an electronic sensing system to estimate paddy plant population density.

  14. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    Science.gov (United States)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between signal amplitude and noise variance. Accurately estimating this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. In the Pilot-Guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is better for faster-changing channels than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
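
    A minimal sketch of the Pilot-Guided estimator as described above, with a synthetic BPSK-plus-AWGN example; the frame length, amplitude and noise level are hypothetical, and the Blind and look-up-table variants are not shown.

        import numpy as np

        def pilot_guided_combining_ratio(received_asm, known_asm):
            # Amplitude estimate: mean inner product of the received samples with the known ASM.
            amplitude = np.mean(received_asm * known_asm)
            # Variance estimate: mean of the squared received samples minus the squared amplitude.
            variance = np.mean(received_asm ** 2) - amplitude ** 2
            return amplitude / variance   # combining ratio = signal amplitude / noise variance

        # Hypothetical usage: one ASM of 64 BPSK symbols in AWGN
        rng = np.random.default_rng(0)
        asm = rng.choice([-1.0, 1.0], size=64)
        A, sigma = 0.8, 0.5
        rx = A * asm + rng.normal(0.0, sigma, size=64)
        print(pilot_guided_combining_ratio(rx, asm), A / sigma ** 2)   # estimate vs. true ratio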

  15. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows one to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset, although it uses only a fraction of the training data and very simple features.

  16. Copula Regression Analysis of Simultaneously Recorded Frontal Eye Field and Inferotemporal Spiking Activity during Object-Based Working Memory

    Science.gov (United States)

    Hu, Meng; Clark, Kelsey L.; Gong, Xiajing; Noudoost, Behrad; Li, Mingyao; Moore, Tirin

    2015-01-01

    Inferotemporal (IT) neurons are known to exhibit persistent, stimulus-selective activity during the delay period of object-based working memory tasks. Frontal eye field (FEF) neurons show robust, spatially selective delay period activity during memory-guided saccade tasks. We present a copula regression paradigm to examine neural interaction of these two types of signals between areas IT and FEF of the monkey during a working memory task. This paradigm is based on copula models that can account for both marginal distribution over spiking activity of individual neurons within each area and joint distribution over ensemble activity of neurons between areas. Considering the popular GLMs as marginal models, we developed a general and flexible likelihood framework that uses the copula to integrate separate GLMs into a joint regression analysis. Such joint analysis essentially leads to a multivariate analog of the marginal GLM theory and hence efficient model estimation. In addition, we show that Granger causality between spike trains can be readily assessed via the likelihood ratio statistic. The performance of this method is validated by extensive simulations, and compared favorably to the widely used GLMs. When applied to spiking activity of simultaneously recorded FEF and IT neurons during working memory task, we observed significant Granger causality influence from FEF to IT, but not in the opposite direction, suggesting the role of the FEF in the selection and retention of visual information during working memory. The copula model has the potential to provide unique neurophysiological insights about network properties of the brain. PMID:26063909

  17. An Improved Convolutional Neural Network on Crowd Density Estimation

    Directory of Open Access Journals (Sweden)

    Pan Shao-Yun

    2016-01-01

    Full Text Available In this paper, a new method is proposed for crowd density estimation. An improved convolutional neural network is combined with traditional texture features. The data calculated by the convolutional layer can be treated as a new kind of feature, so more useful image information can be extracted by combining different features. In the meantime, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance to allow for its use in real-world applications.

  18. Using Copulas in Data Mining Based on the Observational Calculus

    Czech Academy of Sciences Publication Activity Database

    Holeňa, Martin; Bajer, L.; Ščavnický, M.

    2015-01-01

    Roč. 27, č. 10 (2015), s. 2851-2864 ISSN 1041-4347 R&D Projects: GA ČR GA13-17187S Grant - others:SLU(CZ) SGS/21/2014 Institutional support: RVO:67985807 Keywords : data mining * observational calculus * generalized quantifiers * joint probability distribution * copulas * hierarchical Archimedean copulas Subject RIV: IN - Informatics, Computer Science Impact factor: 2.476, year: 2015

  19. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    Science.gov (United States)

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  20. Review of methods for level density estimation from resonance parameters

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1983-01-01

    A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)

  1. HEDPIN: a computer program to estimate pinwise power density

    International Nuclear Information System (INIS)

    Cappiello, M.W.

    1976-05-01

    A description is given of the digital computer program, HEDPIN. This program, modeled after a previously developed program, POWPIN, provides a means of estimating the pinwise power density distribution in fast reactor triangular pitched pin bundles. The capability also exists for computing any reaction rate of interest at the respective pin positions within an assembly. HEDPIN was developed in support of FTR fuel and test management as well as fast reactor core design and core characterization planning and analysis. The results of a test devised to check out HEDPIN's computational method are given, and the realm of application is discussed. Nearly all programming is in FORTRAN IV. Variable dimensioning is employed to make efficient use of core memory and maintain short running time for small problems. Input instructions, sample problem, and a program listing are also given

  2. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    Science.gov (United States)

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm(2) of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates.

  3. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    Science.gov (United States)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.

  4. Surrogacy assessment using principal stratification and a Gaussian copula model.

    Science.gov (United States)

    Conlon, Asc; Taylor, Jmg; Elliott, M R

    2017-02-01

    In clinical trials, a surrogate outcome (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Many methods of surrogacy validation rely on models for the conditional distribution of T given Z and S. However, S is a post-randomization variable, and unobserved, simultaneous predictors of S and T may exist, resulting in a non-causal interpretation. Frangakis and Rubin developed the concept of principal surrogacy, stratifying on the joint distribution of the surrogate marker under treatment and control to assess the association between the causal effects of treatment on the marker and the causal effects of treatment on the clinical outcome. Working within the principal surrogacy framework, we address the scenario of an ordinal categorical variable as a surrogate for a censored failure time true endpoint. A Gaussian copula model is used to model the joint distribution of the potential outcomes of T, given the potential outcomes of S. Because the proposed model cannot be fully identified from the data, we use a Bayesian estimation approach with prior distributions consistent with reasonable assumptions in the surrogacy assessment setting. The method is applied to data from a colorectal cancer clinical trial, previously analyzed by Burzykowski et al.

  5. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    Science.gov (United States)

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  6. Risk Measurement and Risk Modelling Using Applications of Vine Copulas

    Directory of Open Access Journals (Sweden)

    David E. Allen

    2017-09-01

    Full Text Available This paper features an application of Regular Vine copulas, a novel and recently developed statistical and mathematical tool which can be applied in the assessment of composite financial risk. Copula-based dependence modelling is a popular tool in financial applications, but is usually applied to pairs of securities. By contrast, Vine copulas provide greater flexibility and permit the modelling of complex dependency patterns using the rich variety of bivariate copulas, which may be arranged and analysed in a tree structure to explore multiple dependencies. The paper features the use of Regular Vine copulas in an analysis of the co-dependencies of 10 major European stock markets, as represented by individual market indices and the composite STOXX 50 index. The sample runs from 2005 to the end of 2013 to permit an exploration of how correlations change in different economic circumstances, using three different sample periods: pre-GFC (January 2005–July 2007), GFC (July 2007–September 2009), and post-GFC (September 2009–December 2013). The empirical results suggest that the dependencies change in a complex manner and are subject to change in different economic circumstances. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co-dependencies. The practical application of Regular Vine metrics is demonstrated via an example of the calculation of the VaR of a portfolio made up of the indices.

  7. Data on copula modeling of mixed discrete and continuous neural time series.

    Science.gov (United States)

    Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou

    2016-06-01

    Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.

  8. Goodness-of-Fit Tests For Elliptical and Independent Copulas through Projection Pursuit

    Directory of Open Access Journals (Sweden)

    Jacques Touboul

    2011-04-01

    Full Text Available Two goodness-of-fit tests for copulas are investigated. The first deals with the case of elliptical copulas and the second with independent copulas. These tests result from the expansion of the projection pursuit methodology introduced in the present article. This method enables us to determine the axis system on which these copulas lie, as well as the exact values of these copulas in the basis formed by the previously determined axes, irrespective of their values in their canonical basis. Simulations are presented, as well as an application to real datasets.

  9. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  10. The Interdependence between Rainfall and Temperature: Copula Analyses

    DEFF Research Database (Denmark)

    Cong, Ronggang; Brady, Mark

    2012-01-01

    possible approach to this problem, five families of copula models are employed to model the interdependence between rainfall and temperature. Scania is a leading agricultural province in Sweden and is affected by a maritime climate. Historical climatic data for Scania is used to demonstrate the modeling...... process. Heteroscedasticity and autocorrelation of sample data are also considered to eliminate the possibility of observation error. The results indicate that for Scania there are negative correlations between rainfall and temperature for the months from April to July and September. The student copula...... is found to be most suitable to model the bivariate distribution of rainfall and temperature based on the Akaike information criterion (AIC) and Bayesian information criterion (BIC). Using the student copula, we simulate temperature and rainfall simultaneously. The resulting models can be integrated...

  11. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data, our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  12. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Science.gov (United States)

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...

  13. Multivariate density estimation using dimension reducing information and tail flattening transformations for truncated or censored data

    DEFF Research Database (Denmark)

    Buch-Kromann, Tine; Nielsen, Jens

    2012-01-01

    This paper introduces a multivariate density estimator for truncated and censored data with special emphasis on extreme values based on survival analysis. A local constant density estimator is considered. We extend this estimator by means of tail flattening transformation, dimension reducing prior...

  14. Automatic breast tissue density estimation scheme in digital mammography images

    Science.gov (United States)

    Menechelli, Renan C.; Pacheco, Ana Luisa V.; Schiabel, Homero

    2017-03-01

    Cases of breast cancer have increased substantially each year. However, radiologists are subject to subjectivity and interpretation failures, which may affect the final diagnosis of this examination. High-density features in breast tissue are important factors related to these failures. Thus, among their many functions, some CADx (Computer-Aided Diagnosis) schemes classify breasts according to the predominant density. In order to aid in such a procedure, this work describes automated software for classification and statistical information on the percentage change in breast tissue density, through analysis of subregions (ROIs) from the whole mammography image. Once the breast is segmented, the image is divided into regions from which texture features are extracted; an artificial neural network (MLP) is then used to categorize the ROIs. Experienced radiologists had previously determined the ROIs' density classification, which served as the reference for the software evaluation. From the test results, its average accuracy was 88.7% for ROI classification and 83.25% for classification of whole-breast density into the 4 BI-RADS density classes, taking into account a set of 400 images. Furthermore, when considering only a simplified two-class division (high and low densities), the classifier accuracy reached 93.5%, with AUC = 0.95.

  15. DBH Prediction Using Allometry Described by Bivariate Copula Distribution

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.

    2017-12-01

    Forest biomass mapping based on single-tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) with per-tree aboveground biomass. Because ALS technology cannot measure DBH directly, DBH must be predicted from other ALS-measured tree-level structural parameters. A copula-based method is proposed in this study to predict DBH from the ALS-measured tree height and crown diameter, using a dataset measured in the Lassen National Forest in California. Instead of exploring an explicit mathematical equation that explains the underlying relationship between DBH and other structural parameters, the copula-based prediction method utilizes the dependency between cumulative distributions of these variables, and solves for DBH based on the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that, compared with the benchmark least-squares linear regression and the k-MSN imputation, the copula-based method obtains better DBH accuracy for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and it contributes to the reduction of prediction uncertainty.
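
    The core idea, that a tree is assumed to sit at the same cumulative probability in every marginal distribution, can be sketched with a plain quantile-matching step; the lognormal margins and the synthetic height/DBH data below are illustrative assumptions, not the paper's fitted copula over height and crown diameter.

        import numpy as np
        from scipy.stats import lognorm

        # Hypothetical training data: field-measured DBH (cm) and ALS-derived height (m)
        rng = np.random.default_rng(7)
        height = rng.lognormal(mean=3.0, sigma=0.3, size=500)
        dbh = 0.9 * height * rng.lognormal(0.0, 0.1, size=500)

        # Fit parametric margins (lognormal chosen purely for illustration)
        h_params = lognorm.fit(height, floc=0)
        d_params = lognorm.fit(dbh, floc=0)

        def predict_dbh(new_height):
            # Assume identical cumulative probability in the height and DBH margins
            p = lognorm.cdf(new_height, *h_params)
            return lognorm.ppf(p, *d_params)

        print(predict_dbh(np.array([15.0, 25.0, 35.0])))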

  16. Multivariate operational risk: dependence modelling with Lévy copulas

    OpenAIRE

    Böcker, K. and Klüppelberg, C.

    2015-01-01

    Simultaneous modelling of operational risks occurring in different event type/business line cells poses a challenge for operational risk quantification. Invoking the new concept of Lévy copulas for dependence modelling yields simple, high-quality approximations of multivariate operational VaR.

  17. Estimates of high absolute densities and emergence rates of demersal zooplankton from the Agatti Atoll, laccadives

    Digital Repository Service at National Institute of Oceanography (India)

    Madhupratap, M.; Achuthankutty, C.T.; Nair, S.R.S.

    Direct sampling of the sandy substratum of the Agatti Lagoon with a corer showed the presence of very high densities of epibenthic forms. On average, densities were about 25 times higher than previously estimated with emergence traps. About 80...

  18. Automated volumetric breast density estimation: A comparison with visual assessment

    International Nuclear Information System (INIS)

    Seo, J.M.; Ko, E.S.; Han, B.-K.; Ko, E.Y.; Shin, J.H.; Hahn, S.Y.

    2013-01-01

    Aim: To compare automated volumetric breast density (VBD) measurement with visual assessment according to the Breast Imaging Reporting and Data System (BI-RADS), and to determine the factors influencing the agreement between them. Materials and methods: One hundred and ninety-three consecutive screening mammograms reported as negative were included in the study. Three radiologists assigned qualitative BI-RADS density categories to the mammograms. An automated volumetric breast-density method was used to measure VBD (% breast density) and density grade (VDG). Each case was classified into an agreement or disagreement group according to the comparison between visual assessment and VDG. The correlation between visual assessment and VDG was obtained. Various physical factors were compared between the two groups. Results: Agreement between visual assessment by the radiologists and VDG was good (ICC value = 0.757). VBD showed a highly significant positive correlation with visual assessment (Spearman's ρ = 0.754, p < 0.001). VBD and the x-ray tube target were significantly different between the agreement and disagreement groups (p = 0.02 and 0.04, respectively). Conclusion: Automated VBD is a reliable objective method to measure breast density. The agreement between VDG and visual assessment by radiologists might be influenced by physical factors.

  19. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    Directory of Open Access Journals (Sweden)

    Hendra Gunawan

    2014-06-01

    Full Text Available http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of Bouguer gravity anomaly and Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models, under the assumption that the free-air anomaly consists of a topographic effect, an intracrustal effect, and an isostatic compensation. Based on simulation results, Bouguer density estimates were then investigated for a 2005 gravity survey of the La Soufriere Volcano area, Guadeloupe (Antilles Islands). The Bouguer density based on the Parasnis approach is 2.71 g/cm3 for the whole area, except the edifice area, where the average topographic density estimate is 2.21 g/cm3, whereas Bouguer density estimates from the previous gravity survey of 1975 are 2.67 g/cm3. The Bouguer density at La Soufriere Volcano was estimated with an uncertainty of 0.1 g/cm3. For the studied area, the density deduced from refraction seismic data is consistent with the recent Bouguer density estimates. A new Bouguer anomaly map based on these Bouguer density values allows a better geological interpretation.
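
    A minimal sketch of the Nettleton idea, choosing the slab density that minimizes the correlation between the Bouguer anomaly and topography; the synthetic profile and the simple-slab correction constant are assumptions for illustration, not the survey's processing chain.

        import numpy as np

        # Hypothetical profile: station elevations (m) and free-air anomalies (mGal)
        rng = np.random.default_rng(3)
        h = 200.0 + 100.0 * np.sin(np.linspace(0.0, 3.0 * np.pi, 60)) + rng.normal(0.0, 5.0, 60)
        free_air = 0.0419 * 2.6 * h + rng.normal(0.0, 1.0, 60)   # terrain effect for a true density of 2.6 g/cm3

        def nettleton_density(free_air, h, densities=np.arange(1.8, 3.2, 0.01)):
            # Pick the density whose Bouguer anomaly is least correlated with topography
            best_rho, best_r = None, np.inf
            for rho in densities:
                bouguer = free_air - 0.0419 * rho * h   # simple Bouguer slab correction (mGal)
                r = abs(np.corrcoef(bouguer, h)[0, 1])
                if r < best_r:
                    best_rho, best_r = rho, r
            return best_rho

        print(nettleton_density(free_air, h))   # close to 2.6 for this synthetic profile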

  20. Estimation of current density distribution under electrodes for external defibrillation

    Directory of Open Access Journals (Sweden)

    Papazov Sava P

    2002-12-01

    Full Text Available Abstract Background Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators, presents new demanding requirements for the structure of electrodes. Method and Results Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise.

  1. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    Science.gov (United States)

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.

  2. Multisite stochastic simulation of daily precipitation from copula modeling with a gamma marginal distribution

    Science.gov (United States)

    Lee, Taesam

    2018-05-01

    Multisite stochastic simulations of daily precipitation have been widely employed in hydrologic analyses for climate change assessment and agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of the copula modeling. The results indicate that there is a significant underestimation of the correlation in the simulated data compared to the observed data. Therefore, we proposed an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations. We used the full relationship between the correlation of the observed data and the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, the method was not reliable in application. Therefore, we further improved a simulation-based method (SBM) that was developed to model the multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well. The SBM provides cross-correlations around 0.2 closer to the observed values than the direct method and around 0.1 closer than the indirect method. The three models were applied to the stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlation. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
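
    A minimal sketch of the direct approach discussed above, i.e. a Gaussian copula with gamma margins at three hypothetical stations; the parameters and target correlation matrix are invented, and the printout illustrates the cross-correlation shrinkage in the original (gamma) domain that the abstract refers to.

        import numpy as np
        from scipy.stats import norm, gamma

        # Hypothetical three-station setup: gamma margins and a normal-domain correlation matrix
        shapes = np.array([0.7, 0.8, 0.6])
        scales = np.array([12.0, 10.0, 15.0])
        R = np.array([[1.0, 0.6, 0.4],
                      [0.6, 1.0, 0.5],
                      [0.4, 0.5, 1.0]])

        rng = np.random.default_rng(11)
        z = rng.multivariate_normal(np.zeros(3), R, size=5000)   # correlated standard normals
        u = norm.cdf(z)                                          # Gaussian copula: uniform margins
        x = gamma.ppf(u, a=shapes, scale=scales)                 # daily amounts with gamma margins

        print(np.corrcoef(z, rowvar=False).round(2))   # correlations in the normal domain
        print(np.corrcoef(x, rowvar=False).round(2))   # smaller correlations in the gamma domain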

  3. Density Estimation in Several Populations With Uncertain Population Membership

    KAUST Repository

    Ma, Yanyuan; Hart, Jeffrey D.; Carroll, Raymond J.

    2011-01-01

    sampled from any given population can be calculated. We develop general estimation procedures and bandwidth selection methods for our setting. We establish large-sample properties and study finite-sample performance using simulation studies. We illustrate

  4. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
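
    A minimal sketch of the comparison described above, on a synthetic bimodal recovery-rate sample (the mixture parameters are invented stand-ins for the Moody's data): a single Beta fit cannot place two interior modes, whereas a Gaussian kernel density estimate can.

        import numpy as np
        from scipy.stats import beta, gaussian_kde

        # Hypothetical bimodal recovery rates on [0, 1]
        rng = np.random.default_rng(5)
        rr = np.concatenate([rng.beta(2, 8, 600), rng.beta(9, 2, 400)])

        a, b, loc, scale = beta.fit(rr, floc=0, fscale=1)   # single Beta fit on [0, 1]
        kde = gaussian_kde(rr)                              # Gaussian kernel density estimate

        grid = np.linspace(0.01, 0.99, 99)
        kde_pdf = kde(grid)
        interior_modes = (kde_pdf[1:-1] > kde_pdf[:-2]) & (kde_pdf[1:-1] > kde_pdf[2:])
        print("fitted Beta (a, b):", round(a, 2), round(b, 2))
        print("interior modes found by the KDE:", int(interior_modes.sum()))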

  5. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Full Text Available Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.

  6. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, it has a fatal defect that it can’t fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  7. Application of Density Estimation Methods to Datasets from a Glider

    Science.gov (United States)

    2014-09-30

    humpback and sperm whales as well as different dolphin species. OBJECTIVES The objective of this research is to extend existing methods for cetacean...estimation from single sensor datasets. Required steps for a cue counting approach, where a cue has been defined as a clicking event (Küsel et al., 2011), to

  8. Adjusting forest density estimates for surveyor bias in historical tree surveys

    Science.gov (United States)

    Brice B. Hanberry; Jian Yang; John M. Kabrick; Hong S. He

    2012-01-01

    The U.S. General Land Office surveys, conducted between the late 1700s to early 1900s, provide records of trees prior to widespread European and American colonial settlement. However, potential and documented surveyor bias raises questions about the reliability of historical tree density estimates and other metrics based on density estimated from these records. In this...

  9. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Science.gov (United States)

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  10. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. Highlights: state estimation using the maximum likelihood method was performed on an NMR quantum information processor; physically valid density matrices were obtained every time, in contrast to standard quantum state tomography; density matrices of several different entangled and separable states were reconstructed for two and three qubits.
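
    A minimal sketch of the positivity-preserving idea, assuming a Cholesky-style parameterization and a least-squares (Gaussian-noise) likelihood; the two-qubit observables and "measured" expectation values below are hypothetical stand-ins, not the NMR data of the paper.

        import numpy as np
        from scipy.optimize import minimize

        # rho = T^dagger T / Tr(T^dagger T) is positive semidefinite with unit trace for any
        # complex lower-triangular T, so the optimizer never leaves the physical set.
        d = 4                                   # two qubits
        tril = np.tril_indices(d)
        n_re = len(tril[0])

        def params_to_rho(p):
            T = np.zeros((d, d), dtype=complex)
            T[tril] = p[:n_re] + 1j * p[n_re:]
            rho = T.conj().T @ T
            return rho / np.trace(rho).real

        def neg_log_likelihood(p, observables, measured):
            rho = params_to_rho(p)
            predicted = np.array([np.trace(rho @ O).real for O in observables])
            return np.sum((predicted - measured) ** 2)   # Gaussian-noise likelihood, up to constants

        # Hypothetical data: expectation values of a few Pauli-product observables
        I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
        observables = [np.kron(X, X), np.kron(Z, Z), np.kron(Z, I2), np.kron(I2, Z)]
        measured = np.array([0.9, 0.9, 0.0, 0.0])        # roughly a noisy Bell-like state

        p0 = np.random.default_rng(0).normal(size=2 * n_re)
        res = minimize(neg_log_likelihood, p0, args=(observables, measured), method="BFGS")
        rho_ml = params_to_rho(res.x)
        print(np.linalg.eigvalsh(rho_ml).round(3))       # all eigenvalues >= 0 by construction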

  11. Volumetric breast density estimation from full-field digital mammograms.

    NARCIS (Netherlands)

    Engeland, S. van; Snoeren, P.R.; Huisman, H.J.; Boetes, C.; Karssemeijer, N.

    2006-01-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast

  12. Goodness-of-fit tests for multi-dimensional copulas: Expanding application to historical drought data

    Directory of Open Access Journals (Sweden)

    Ming-wei Ma

    2013-01-01

    Full Text Available The question of how to choose a copula model that best fits a given dataset is a predominant limitation of the copula approach, and the present study aims to investigate the techniques of goodness-of-fit tests for multi-dimensional copulas. A goodness-of-fit test based on Rosenblatt's transformation was mathematically expanded from two dimensions to three dimensions, and procedures for a bootstrap version of the test were provided. Through stochastic copula simulation, an empirical application to historical drought data at the Lintong Gauge Station shows that the goodness-of-fit tests perform well, revealing that both trivariate Gaussian and Student t copulas are acceptable for modeling the dependence structures of the observed drought duration, severity, and peak. The goodness-of-fit tests for multi-dimensional copulas can provide further support for the potential application of a wider range of copulas to describe the associations of correlated hydrological variables. However, applying copulas in more than three dimensions requires more complicated computational effort, as well as exploration and parameterization of the corresponding copulas.
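
    A minimal sketch of the Rosenblatt-transform idea for a trivariate Gaussian copula, with a synthetic "drought" sample and a simple chi-square check of the transformed data; the correlation matrix is invented, and the plain Kolmogorov-Smirnov p-value below ignores the effect of estimating the copula parameters, which is exactly why the paper uses a bootstrap version of the test.

        import numpy as np
        from scipy.stats import norm, chi2, kstest, rankdata

        def rosenblatt_gaussian(u, R):
            # Rosenblatt transform of pseudo-observations u under a Gaussian copula with
            # correlation matrix R; rows become (approximately) i.i.d. uniform under H0.
            z = norm.ppf(u)
            L = np.linalg.cholesky(R)
            e = np.linalg.solve(L, z.T).T    # sequential conditional standardization
            return norm.cdf(e)

        # Hypothetical trivariate sample (e.g. drought duration, severity, peak)
        rng = np.random.default_rng(4)
        R_true = np.array([[1.0, 0.7, 0.5], [0.7, 1.0, 0.6], [0.5, 0.6, 1.0]])
        x = rng.multivariate_normal(np.zeros(3), R_true, size=400)

        # Pseudo-observations from ranks, then fit R by normal scores
        u = np.column_stack([rankdata(col) / (len(col) + 1) for col in x.T])
        R_hat = np.corrcoef(norm.ppf(u), rowvar=False)

        # Under H0 the row-wise sums of squared normal scores of the transformed sample
        # follow a chi-square distribution with 3 degrees of freedom.
        v = rosenblatt_gaussian(u, R_hat)
        s = np.sum(norm.ppf(v) ** 2, axis=1)
        print(kstest(s, chi2(df=3).cdf))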

  13. EuroMInd-D: A Density Estimate of Monthly Gross Domestic Product for the Euro Area

    DEFF Research Database (Denmark)

    Proietti, Tommaso; Marczak, Martyna; Mazzi, Gianluigi

    EuroMInd-D is a density estimate of monthly gross domestic product (GDP) constructed according to a bottom-up approach, pooling the density estimates of eleven GDP components, by output and expenditure type. The component density estimates are obtained from a medium-size dynamic factor model...... of a set of coincident time series handling mixed frequencies of observation and ragged-edged data structures. They reflect both parameter and filtering uncertainty and are obtained by implementing a bootstrap algorithm for simulating from the distribution of the maximum likelihood estimators of the model...

  14. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    Science.gov (United States)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but allowing MLPQNA to be easily replaced with any other method to predict photo-z's and their PDFs. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  15. Volumetric breast density estimation from full-field digital mammograms.

    Science.gov (United States)

    van Engeland, Saskia; Snoeren, Peter R; Huisman, Henkjan; Boetes, Carla; Karssemeijer, Nico

    2006-03-01

    A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
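
    A simplified reading of the two-tissue model can be sketched as follows; the attenuation coefficients, the toy "image", and the use of the brightest in-breast pixel as the fatty reference are illustrative assumptions, not the calibrated values of the paper.

        import numpy as np

        # Two-component model: a pixel behind compressed thickness H, of which h_d is dense, records
        #   I = I0 * exp(-(mu_fat * (H - h_d) + mu_dense * h_d)).
        # Taking the brightest in-breast pixel as the all-fat reference I_fat gives
        #   h_d = ln(I_fat / I) / (mu_dense - mu_fat).
        def dense_thickness(pixel, fat_reference, mu_fat=0.45, mu_dense=0.80):
            # Dense-tissue thickness (cm) mapping to a pixel; mu values are illustrative, in 1/cm
            return np.log(fat_reference / pixel) / (mu_dense - mu_fat)

        # Hypothetical usage on a tiny "image" of transmitted intensities
        H = 5.0                                        # compressed breast thickness (cm)
        image = np.array([[0.105, 0.090], [0.072, 0.105]])
        fat_ref = image.max()                          # assume at least one pixel is entirely fat
        h_dense = dense_thickness(image, fat_ref)
        volume_fraction = np.clip(h_dense / H, 0, 1)
        print(h_dense.round(2))
        print(volume_fraction.round(2))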

  16. Effect of Broadband Nature of Marine Mammal Echolocation Clicks on Click-Based Population Density Estimates

    Science.gov (United States)

    2014-09-30

    ...will be applied also to other species such as sperm whale (Physeter macrocephalus) (whose high source level assures long range detection and amplifies...improve the accuracy of marine mammal density estimation based on counting echolocation clicks, and will be applicable to density estimates obtained

  17. Light element nucleosynthesis and estimates of the universal baryon density

    International Nuclear Information System (INIS)

    Mathews, G.J.; Viola, V.E.

    1978-01-01

    The present mean universal baryon density, ρ_b, is of interest because it and the Hubble constant determine the curvature of the Universe. The available indicators of ρ_b come from the present deuterium abundance, if it is assumed that "big-bang" nucleosynthesis must produce enough D to at least match the abundance of this nuclide in the interstellar medium. An alternative method utilizing the ⁷Li/D ratio is used to evaluate ρ_b. With this method the difficulty associated with the astration process can be essentially canceled from the problem. The results obtained indicate an open Universe with a best guess for ρ_b of 7.1 × 10⁻³¹ g/cm³. 1 figure, 1 table

  18. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    Directory of Open Access Journals (Sweden)

    Manan Gupta

    Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates

  19. Technical Note: Cortical thickness and density estimation from clinical CT using a prior thickness-density relationship

    International Nuclear Information System (INIS)

    Humbert, Ludovic; Hazrati Marangalou, Javad; Rietbergen, Bert van; Río Barquero, Luis Miguel del; Lenthe, G. Harry van

    2016-01-01

    Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the cortical thickness and

  20. Technical Note: Cortical thickness and density estimation from clinical CT using a prior thickness-density relationship

    Energy Technology Data Exchange (ETDEWEB)

    Humbert, Ludovic, E-mail: ludohumberto@gmail.com [Galgo Medical, Barcelona 08036 (Spain); Hazrati Marangalou, Javad; Rietbergen, Bert van [Orthopaedic Biomechanics, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB (Netherlands); Río Barquero, Luis Miguel del [CETIR Centre Medic, Barcelona 08029 (Spain); Lenthe, G. Harry van [Biomechanics Section, KU Leuven–University of Leuven, Leuven 3001 (Belgium)

    2016-04-15

    Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the
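
    The core of such model-based approaches is a blurred density profile across the cortex. The sketch below is a minimal illustration, not the authors' implementation: a slab of cortical bone between soft tissue and trabecular bone is convolved with a Gaussian imaging blur, giving an error-function profile whose thickness parameter can be fitted to a measured CT profile. The function name, the synthetic profile, and all numeric values are assumptions, and the prior thickness-density relationship described in the abstract is not included.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def blurred_cortex_profile(x, x0, t, rho_cort, rho_in, rho_out, sigma):
    """Density along a line crossing the cortex: a slab of cortical bone
    (density rho_cort, thickness t, starting at x0) between soft tissue
    (rho_out) and trabecular bone (rho_in), convolved with a Gaussian
    imaging blur of width sigma."""
    edge1 = 0.5 * (1 + erf((x - x0) / (np.sqrt(2) * sigma)))
    edge2 = 0.5 * (1 + erf((x - (x0 + t)) / (np.sqrt(2) * sigma)))
    return rho_out + (rho_cort - rho_out) * edge1 + (rho_in - rho_cort) * edge2

# Fit the model to a synthetic noisy CT profile (all values illustrative).
x = np.linspace(-5, 5, 101)                                  # mm along the profile
true_profile = blurred_cortex_profile(x, -1.0, 2.0, 1100, 300, 50, 0.8)
rng = np.random.default_rng(0)
observed = true_profile + rng.normal(0, 20, x.size)          # mg/cm^3 with noise
p0 = [0.0, 1.5, 1000, 250, 100, 1.0]                         # rough initial guess
popt, _ = curve_fit(blurred_cortex_profile, x, observed, p0=p0, maxfev=10000)
print("estimated cortical thickness (mm):", popt[1])
```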

  1. Efficient estimation of dynamic density functions with an application to outlier detection

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Zhang, Xiangliang; Wang, Suojin

    2012-01-01

    In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on a conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method Cluster-Kernels on estimation accuracy of the complex density structures in data streams, computing time and memory usage. KDE-Track is also demonstrated on timely catching the dynamic density of synthetic and real-world data. In addition, KDE-Track is used to accurately detect outliers in sensor data and compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
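
    The KDE-Track algorithm itself is not reproduced here, but the underlying idea of an incrementally updated kernel model evaluated by interpolation can be sketched as follows. This is a simplified one-dimensional illustration with a fixed grid and a fixed Gaussian bandwidth; the class name and all parameters are assumptions, and KDE-Track's adaptive resampling and bandwidth selection are omitted.

```python
import numpy as np

class GridKDE:
    """Streaming kernel density estimate maintained on a fixed grid.

    Density values are stored at grid points and updated incrementally as
    samples arrive; values between grid points come from linear interpolation,
    so evaluation cost does not grow with the number of samples seen."""

    def __init__(self, lo, hi, m=201, bandwidth=0.3):
        self.grid = np.linspace(lo, hi, m)
        self.density = np.zeros(m)
        self.h = bandwidth
        self.n = 0

    def update(self, x):
        # Running average of Gaussian kernels centred at each incoming sample.
        k = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (self.h * np.sqrt(2 * np.pi))
        self.n += 1
        self.density += (k - self.density) / self.n

    def evaluate(self, x):
        return np.interp(x, self.grid, self.density)

kde = GridKDE(-5, 5)
for sample in np.random.default_rng(1).normal(0, 1, 10_000):
    kde.update(sample)
print(kde.evaluate(0.0))   # roughly the N(0, 1) density at 0, smoothed by the bandwidth
```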

  2. Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador

    Science.gov (United States)

    Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew

    2017-01-01

    The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.

  3. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r² = 0.96), and using the model offers substantial cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.

  4. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will

  5. Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mohayai, Tanaz Angelina [IIT, Chicago; Snopok, Pavel [IIT, Chicago; Neuffer, David [Fermilab; Rogers, Chris [Rutherford

    2017-10-12

    The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.

  6. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    Science.gov (United States)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    The dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm) severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) was studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The densities of geometrically necessary dislocations (GNDs) and statistically stored dislocations (SSDs) were estimated, the total dislocation densities were then calculated, and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2×10¹² m⁻² in the annealed condition increases to 4×10¹³ m⁻² at the middle of the groove (135° from the entrance) and reaches 6.4×10¹³ m⁻² at the end of the groove just before the ECAP region. The calculated average dislocation density for the sample severely deformed by one pass reached 6.2×10¹⁴ m⁻². At the micrometer scale, the behavior of metals, especially their mechanical properties, depends largely on the dislocation density and dislocation distribution. Yield stresses at the different conditions were therefore estimated from the calculated dislocation densities; the estimated yield stresses were compared with experimental results and good agreement was found. Although the grain size of the material did not change appreciably, the yield stress increased markedly owing to the development of a cell structure. The considerable increase in dislocation density in this process is consistent with the formation of subgrains and cell structures during processing, which explains the increase in yield stress.
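
    The abstract does not state which hardening law was used to convert dislocation density into yield stress; a common choice, shown here purely as an assumption, is the Taylor relation σ_y = σ_0 + αMGb√ρ. The constants below are generic textbook values for aluminium, not values from the study.

```python
import numpy as np

def taylor_yield_stress(rho, sigma_0=20e6, alpha=0.3, M=3.06, G=26e9, b=2.86e-10):
    """Yield stress (Pa) from dislocation density rho (m^-2) via the Taylor
    hardening relation sigma = sigma_0 + alpha * M * G * b * sqrt(rho).
    The constants are generic values for aluminium, not taken from the study."""
    return sigma_0 + alpha * M * G * b * np.sqrt(rho)

# Dislocation densities in the range reported for the annealed and deformed states.
for rho in (2e12, 6.4e13, 6.2e14):
    print(f"rho = {rho:.1e} m^-2  ->  sigma_y ~ {taylor_yield_stress(rho) / 1e6:.0f} MPa")
```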

  7. A dependent stress-strength interference model based on mixed copula function

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Jian Xiong; An, Zong Wen; Liu, Bo [School of Mechatronics Engineering, Lanzhou University of Technology, Lanzhou (China)

    2016-10-15

    In the traditional Stress-strength interference (SSI) model, stress and strength must satisfy the basic assumption of mutual independence. However, a complex dependence between stress and strength exists in practical engineering. To evaluate structural reliability under the case that stress and strength are dependent, a mixed copula function is introduced to a new dependent SSI model. This model can fully characterize the dependence between stress and strength. The residual square sum method and genetic algorithm are also used to estimate the unknown parameters of the model. Finally, the validity of the proposed model is demonstrated via a practical case. Results show that traditional SSI model ignoring the dependence between stress and strength more easily overestimates product reliability than the new dependent SSI model.
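
    A minimal sketch of the general idea behind a dependent stress-strength interference calculation: sample (stress, strength) pairs whose dependence is induced by a copula and estimate R = P(strength > stress) by Monte Carlo. A single Gaussian copula with normal marginals is used here only for illustration; the paper's mixed copula and its residual-square-sum/genetic-algorithm parameter estimation are not reproduced, and all numbers are placeholders.

```python
import numpy as np
from scipy import stats

def ssi_reliability(n=200_000, rho=0.5, seed=0):
    """Monte Carlo estimate of R = P(strength > stress) when stress and strength
    are linked through a Gaussian copula with correlation rho. Marginals and all
    numbers are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)                                   # copula sample on (0, 1)^2
    stress = stats.norm.ppf(u[:, 0], loc=300, scale=40)     # MPa
    strength = stats.norm.ppf(u[:, 1], loc=450, scale=50)   # MPa
    return np.mean(strength > stress)

print("R with dependence   :", ssi_reliability(rho=0.5))
print("R under independence:", ssi_reliability(rho=0.0))
```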

  8. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    Science.gov (United States)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

    Artificial neural networks (ANNs) have been widely used in hydrological forecasting. In this paper an attempt has been made to find an alternative method for hydrological prediction by combining Copula Entropy (CE) with a Wavelet Neural Network (WNN). CE theory permits the calculation of mutual information (MI) to select input variables, which avoids the limitations of traditional linear correlation (LCC) analysis. Wavelet analysis can locate precisely any changes in the dynamical patterns of the sequence and, coupled with the strong nonlinear fitting ability of an ANN, the WNN model was able to provide a good fit to the hydrological data. Finally, the hybrid model (CE+WNN) was applied to daily water levels of the Taihu Lake Basin and compared with CE ANN, LCC WNN and LCC ANN. Results showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models.
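
    The copula-entropy input-selection step can be approximated by estimating mutual information on rank-transformed (copula-scale) data, since copula entropy equals the negative mutual information. The sketch below uses scikit-learn's k-nearest-neighbour MI estimator as a stand-in for the paper's estimator, and the lagged water-level predictors are synthetic.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.feature_selection import mutual_info_regression

def copula_entropy_scores(X, y):
    """Rank candidate predictors by mutual information estimated on
    rank-transformed (copula-scale) data. Copula entropy equals minus the
    mutual information, so more negative means more informative."""
    U = np.column_stack([rankdata(col) / (len(col) + 1) for col in X.T])
    v = rankdata(y) / (len(y) + 1)
    return -mutual_info_regression(U, v, random_state=0)

# Synthetic daily water levels; lagged values are the candidate inputs.
rng = np.random.default_rng(4)
level = np.cumsum(rng.normal(0, 1, 500))
X = np.column_stack([level[2:-1], level[1:-2], level[:-3]])   # lags 1, 2, 3
y = level[3:]
print(copula_entropy_scores(X, y))   # lag 1 should give the most negative score
```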

  9. Volatility spillover between crude oil and exchange rate: A copula-CARR approach

    Science.gov (United States)

    Pu, Y. J.; Guo, M. Y.

    2017-11-01

    Oil provides a powerful impetus for modern society's production and life. The influence of oil price fluctuations on socio-economic development is obvious and has drawn increasing attention from scholars. However, oil reserves are highly concentrated geographically, so the vast majority of oil is traded internationally; as a result, the exchange rate plays an important role in the oil business. The relationship between exchange rates and crude oil has therefore become a hot research topic in recent years. In this paper, we use copula and CARR models to study the correlation structure and relationship between the crude oil price and the exchange rate. We establish CARR models as marginal models and use five copulas (Gaussian, Student-t, Gumbel, Clayton and Frank) to study the correlation structure between the NYMEX crude oil price range and the U.S. Dollar Index range. Furthermore, we use a Copula-CARR model with structural breaks to detect change points in this correlation structure. Empirical results show that the change points are closely related to actual economic events.
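
    A minimal sketch of the copula-fitting step under stated assumptions: the price ranges are replaced by synthetic data, the marginal CARR models are skipped by working directly with rank-based pseudo-observations, and only two of the five candidate families (Gaussian and Clayton) are compared, with parameters set by Kendall's-tau inversion rather than full maximum likelihood.

```python
import numpy as np
from scipy import stats

def pseudo_obs(x):
    """Rank-transform a sample to (0, 1) pseudo-observations."""
    return stats.rankdata(x) / (len(x) + 1)

def gaussian_copula_loglik(u, v, rho):
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    return np.sum(-0.5 * np.log(1 - rho**2)
                  + (2 * rho * x * y - rho**2 * (x**2 + y**2)) / (2 * (1 - rho**2)))

def clayton_copula_loglik(u, v, theta):
    s = u**(-theta) + v**(-theta) - 1
    return np.sum(np.log(1 + theta) - (theta + 1) * (np.log(u) + np.log(v))
                  - (2 * theta + 1) / theta * np.log(s))

# Synthetic stand-ins for daily oil-price and dollar-index ranges.
rng = np.random.default_rng(2)
oil_range = rng.gamma(2.0, 1.0, 1000)
fx_range = 0.3 * oil_range + rng.gamma(2.0, 1.0, 1000)
u, v = pseudo_obs(oil_range), pseudo_obs(fx_range)

tau, _ = stats.kendalltau(u, v)
rho_hat = np.sin(np.pi * tau / 2)       # moment estimate for the Gaussian copula
theta_hat = 2 * tau / (1 - tau)         # moment estimate for the Clayton copula
print("Gaussian copula loglik:", gaussian_copula_loglik(u, v, rho_hat))
print("Clayton  copula loglik:", clayton_copula_loglik(u, v, theta_hat))
```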

  10. A note on the conditional density estimate in single functional index model

    OpenAIRE

    2010-01-01

    Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...

  11. EnviroAtlas - New Bedford, MA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  12. EnviroAtlas - Woodbine, IA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  13. EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  14. EnviroAtlas - Des Moines, IA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  15. EnviroAtlas - Durham, NC - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  16. EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  17. EnviroAtlas - Fresno, CA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  18. EnviroAtlas - Cleveland, OH - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  19. EnviroAtlas - Portland, ME - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  20. EnviroAtlas - New York, NY - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  1. EnviroAtlas - Memphis, TN - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  2. EnviroAtlas - Milwaukee, WI - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  3. EnviroAtlas Estimated Intersection Density of Walkable Roads Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in each EnviroAtlas community....

  4. EnviroAtlas - Portland, OR - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  5. EnviroAtlas - Tampa, FL - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  6. EnviroAtlas - Austin, TX - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  7. EnviroAtlas - Paterson, NJ - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  8. EnviroAtlas - Phoenix, AZ - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  9. EnviroAtlas - Pittsburgh, PA - Estimated Intersection Density of Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections...

  10. Density meter algorithm and system for estimating sampling/mixing uncertainty

    International Nuclear Information System (INIS)

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses

  11. Density meter algorithm and system for estimating sampling/mixing uncertainty

    International Nuclear Information System (INIS)

    Shine, E.P.

    1986-01-01

    The Laboratories Department at the Savannah River Plant (SRP) has installed a six-place density meter with an automatic sampling device. This paper describes the statistical software developed to analyze the density of uranyl nitrate solutions using this automated system. The purpose of this software is twofold: to estimate the sampling/mixing and measurement uncertainties in the process and to provide a measurement control program for the density meter. Non-uniformities in density are analyzed both analytically and graphically. The mean density and its limit of error are estimated. Quality control standards are analyzed concurrently with process samples and used to control the density meter measurement error. The analyses are corrected for concentration due to evaporation of samples waiting to be analyzed. The results of this program have been successful in identifying sampling/mixing problems and controlling the quality of analyses

  12. Analysis of survival data with dependent censoring copula-based approaches

    CERN Document Server

    Emura, Takeshi

    2018-01-01

    This book introduces readers to copula-based statistical methods for analyzing survival data involving dependent censoring. Primarily focusing on likelihood-based methods performed under copula models, it is the first book solely devoted to the problem of dependent censoring. The book demonstrates the advantages of the copula-based methods in the context of medical research, especially with regard to cancer patients’ survival data. Needless to say, the statistical methods presented here can also be applied to many other branches of science, especially in reliability, where survival analysis plays an important role. The book can be used as a textbook for graduate coursework or a short course aimed at (bio-) statisticians. To deepen readers’ understanding of copula-based approaches, the book provides an accessible introduction to basic survival analysis and explains the mathematical foundations of copula-based survival models.

  13. The Visualization and Analysis of POI Features under Network Space Supported by Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    YU Wenhao

    2015-01-01

    Full Text Available The distribution pattern and the distribution density of urban facility POIs are of great significance in the fields of infrastructure planning and urban spatial analysis. Kernel density estimation, which is commonly used to express these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis and Voronoi-based methods) because it accounts for regional influence in line with the first law of geography. However, traditional kernel density estimation is based on Euclidean space, ignoring the fact that the service functions and interrelations of urban facilities operate over network path distances rather than conventional Euclidean distances. Hence, this research proposed a computational model of network kernel density estimation, together with an extended form of the model for the case of added constraints. This work also discussed the impacts of the distance-attenuation threshold and the height extreme on the representation of kernel density. A large-scale experiment with real data, analyzing the distribution patterns of different POIs (random, sparse, regionally intensive, and linearly intensive), discusses the spatial distribution characteristics, influencing factors, and service functions of POI infrastructure in the city.
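
    A simplified sketch of kernel density estimation over a road network rather than Euclidean space, assuming a networkx graph with edge lengths; the kernel is a Gaussian in shortest-path distance, truncated at three bandwidths. The equal-split corrections at intersections used in rigorous network KDE, and the constrained extension discussed in the paper, are not included; all names and values are illustrative.

```python
import networkx as nx
import numpy as np

def network_kde(graph, poi_nodes, eval_nodes, bandwidth):
    """Kernel density over a road network: distances are shortest-path lengths
    along edges rather than Euclidean distances; the kernel is a Gaussian
    truncated at three bandwidths."""
    density = {}
    for node in eval_nodes:
        lengths = nx.single_source_dijkstra_path_length(
            graph, node, cutoff=3 * bandwidth, weight="length")
        total = sum(np.exp(-0.5 * (lengths[p] / bandwidth) ** 2)
                    for p in poi_nodes if p in lengths)
        density[node] = total / (len(poi_nodes) * bandwidth * np.sqrt(2 * np.pi))
    return density

# Toy road network: a 5x5 grid with 100 m edges and three POIs.
G = nx.grid_2d_graph(5, 5)
nx.set_edge_attributes(G, 100.0, "length")
pois = [(0, 0), (1, 1), (4, 4)]
dens = network_kde(G, pois, list(G.nodes), bandwidth=150.0)
print(max(dens, key=dens.get))   # intersection where the estimated density peaks
```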

  14. Effects of stand density on top height estimation for ponderosa pine

    Science.gov (United States)

    Martin Ritchie; Jianwei Zhang; Todd Hamilton

    2012-01-01

    Site index, estimated as a function of dominant-tree height and age, is often used as an expression of site quality. This expression is assumed to be effectively independent of stand density. Observation of dominant height at two different ponderosa pine levels-of-growing-stock studies revealed that top height stability with respect to stand density depends on the...

  15. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Zhang, Xiangliang

    2016-01-01

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.

  16. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-11-08

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.

  17. Modeling Tropical Cyclone Storm Surge and Wind Induced Risk Along the Bay of Bengal Coastline Using a Statistical Copula

    Science.gov (United States)

    Bushra, N.; Trepanier, J. C.; Rohli, R. V.

    2017-12-01

    High winds, torrential rain, and storm surges from tropical cyclones (TCs) cause massive destruction to property and cost the lives of many people. The coastline of the Bay of Bengal (BoB) ranks as one of the most susceptible to TC storm surges in the world due to low-lying elevation and a high frequency of occurrence. Bangladesh suffers the most due to its geographical setting and population density. Various models have been developed to predict storm surge in this region but none of them quantify statistical risk with empirical data. This study describes the relationship and dependency between empirical TC storm surge and peak reported wind speed at the BoB using a bivariate statistical copula and data from 1885-2011. An Archimedean, Gumbel copula with margins defined by the empirical distributions is specified as the most appropriate choice for the BoB. The model provides return periods for pairs of TC storm surge and peak wind along the BoB coastline. The BoB can expect a TC with peak reported winds of at least 24 m s-1 and surge heights of at least 4.0 m, on average, once every 3.2 years, with a quartile pointwise confidence interval of 2.7-3.8 years. In addition, the BoB can expect peak reported winds of 62 m s-1 and surge heights of at least 8.0 m, on average, once every 115.4 years, with a quartile pointwise confidence interval of 55.8-381.1 years. The purpose of the analysis is to increase the understanding of these dangerous TC characteristics to reduce fatalities and monetary losses into the future. Application of the copula will mitigate future threats of storm surge impacts on coastal communities of the BoB.
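
    Given marginal non-exceedance probabilities u and v for peak wind and surge and a fitted Gumbel copula, the "both thresholds exceeded" return period follows from T = μ/(1 − u − v + C(u, v)), with μ the mean interarrival time of tropical cyclones. The sketch below uses this formula with invented placeholder values, not the fitted Bay of Bengal parameters.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel (extreme-value) copula C(u, v), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_and(u, v, theta, mu_years):
    """Return period (years) of a cyclone exceeding BOTH marginal quantiles
    u (wind) and v (surge), with mu_years the mean interarrival time of storms."""
    prob_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu_years / prob_exceed_both

# Illustrative placeholders, not the fitted Bay of Bengal values.
u_wind, v_surge = 0.60, 0.65     # marginal non-exceedance probabilities
theta, mu = 2.0, 1.0             # copula parameter, mean interarrival time (years)
print(joint_return_period_and(u_wind, v_surge, theta, mu))
```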

  18. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    OpenAIRE

    Gunawan, Hendra; Micheldiament, Micheldiament; Mikhailov, Valentin

    2008-01-01

    http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of Bouguer gravity anomaly and Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under the assumption of a free-air anomaly consisting ...

  19. Investigating the impact of uneven magnetic flux density distribution on core loss estimation

    DEFF Research Database (Denmark)

    Niroumand, Farideh Javidi; Nymand, Morten; Wang, Yiren

    2017-01-01

    There are several approaches for loss estimation in magnetic cores, and all of them rely heavily on accurate information about the flux density distribution in the cores. It is often assumed that the magnetic flux density distributes evenly throughout the core and that the overall core loss is calculated according to an effective flux density value and the macroscopic dimensions of the cores. However, the flux distribution in the core can be altered by core shape and/or operating conditions owing to nonlinear material properties. This paper studies the element-wise estimation of the loss in magnetic...

  20. An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index

    DEFF Research Database (Denmark)

    Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle

    2013-01-01

    We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...

  1. Historical and future drought in Bangladesh using copula-based bivariate regional frequency analysis

    Science.gov (United States)

    Mortuza, Md Rubayet; Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2018-02-01

    The study aims at regional and probabilistic evaluation of bivariate drought characteristics to assess both the past and future drought duration and severity in Bangladesh. The procedures involve applying (1) the standardized precipitation index to identify drought duration and severity, (2) regional frequency analysis to determine the appropriate marginal distributions for both duration and severity, (3) a copula model to estimate the joint probability distribution of drought duration and severity, and (4) precipitation projections from multiple climate models to assess future drought trends. Since drought duration and severity in Bangladesh are often strongly correlated and do not follow the same marginal distributions, the joint and conditional return periods of droughts are characterized using the copula-based joint distribution. The country is divided into three homogeneous regions using Fuzzy clustering and multivariate discordancy and homogeneity measures. For given severity and duration values, the joint return periods for a drought to exceed both values are on average 45% larger, while those to exceed either value are 40% smaller, than the return periods from the univariate frequency analysis, which treats drought duration and severity independently. These results suggest that, compared with the bivariate drought frequency analysis, the standard univariate frequency analysis under- or overestimates the frequency and severity of droughts depending on how their duration and severity are related. Overall, more frequent and severe droughts are observed on the west side of the country. Future drought trends based on four climate models and two scenarios showed the possibility of less frequent drought in the future (2020-2100) than in the past (1961-2010).
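
    The joint and conditional return periods referred to in the abstract follow directly from the fitted copula. A minimal sketch with a Frank copula (chosen here only for illustration, not the family fitted in the study) computes the "AND" and "OR" return periods for drought duration and severity exceeding given marginal quantiles; all quantiles and parameters are placeholders.

```python
import numpy as np

def frank_copula(u, v, theta):
    """Frank copula C(u, v) for theta != 0."""
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

def drought_return_periods(u_dur, v_sev, theta, mu_years):
    """'AND' and 'OR' joint return periods for drought duration and severity
    exceeding given marginal quantiles; mu_years is the mean interarrival time
    between drought events."""
    c = frank_copula(u_dur, v_sev, theta)
    t_and = mu_years / (1 - u_dur - v_sev + c)   # both thresholds exceeded
    t_or = mu_years / (1 - c)                    # at least one threshold exceeded
    return t_and, t_or

# Illustrative quantiles and parameters, not the fitted Bangladesh values.
print(drought_return_periods(u_dur=0.9, v_sev=0.9, theta=5.0, mu_years=1.5))
```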

  2. Kernel and wavelet density estimators on manifolds and more general metric spaces

    DEFF Research Database (Denmark)

    Cleanthous, G.; Georgiadis, Athanasios; Kerkyacharian, G.

    We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich in allowing the development of smooth functional calculus with well-localized spectral kernels, Besov regularity spaces, and wavelet type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established, which are analogous to the existing results in the classical setting of real...

  3. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    Science.gov (United States)

    2015-09-30

    titled “Ocean Basin Impact of Ambient Noise on Marine Mammal Detectability, Distribution, and Acoustic Communication ”. Patterns and trends of ocean... mammals in response to potentially negative interactions with human activity requires knowledge of how many animals are present in an area during a...specific time period. Many marine mammal species are relatively hard to sight, making standard visual methods of density estimation difficult and

  4. The estimation of heavy metal concentration in FBR reprocessing solvent streams by density measurement

    International Nuclear Information System (INIS)

    Brown, M.L.; Savage, D.J.

    1986-04-01

    The application of density measurement to heavy metal monitoring in the solvent phase is described, including practical experience gained during three fast reactor fuel reprocessing campaigns. An experimental algorithm relating heavy metal concentration and sample density was generated from laboratory-measured density data, for uranyl nitrate dissolved in nitric acid loaded tri-butyl phosphate in odourless kerosene. Differences in odourless kerosene batch densities are mathematically interpolated, and the algorithm can be used to estimate heavy metal concentrations from the density to within ±1.5 g/l. An Anton Paar calculating digital densimeter with remote cell operation was used for all density measurements, but the algorithm will give similar accuracy with any density measuring device capable of a precision of better than 0.0005 g/cm³. For plant control purposes, the algorithm was simplified using a density referencing system, whereby the density of solvent not yet loaded with heavy metal is subtracted from the sample density. This simplified algorithm compares very favourably with empirical algorithms, derived from numerical analysis of density data and chemically measured uranium and plutonium data obtained during fuel reprocessing campaigns, particularly when differences in the acidity of the solvent are considered before and after loading with heavy metal. This simplified algorithm has been successfully used for plant control of heavy metal loaded solvent during four fast reactor fuel reprocessing campaigns. (author)
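
    The plant-control form of the algorithm reduces to a density-referencing calculation: the density of unloaded solvent is subtracted from the sample density and converted to a heavy-metal concentration through an empirically fitted coefficient. The coefficient and example densities below are invented placeholders, not the plant-fitted values, and the batch and acidity corrections described above are omitted.

```python
def heavy_metal_conc(sample_density, blank_density, coeff=7.5e-4):
    """Estimate heavy-metal loading (g/l) of TBP/OK solvent from the density
    difference between loaded solvent and an unloaded reference sample.
    The linear coefficient (g/cm^3 increase per g/l of heavy metal) is a
    made-up placeholder, not the plant-fitted value; the real algorithm also
    corrects for solvent batch and acidity differences."""
    return (sample_density - blank_density) / coeff

# Example: loaded solvent at 0.880 g/cm^3 against a 0.820 g/cm^3 reference.
print(heavy_metal_conc(0.880, 0.820))   # ~80 g/l
```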

  5. Dependent defaults and losses with factor copula models

    Directory of Open Access Journals (Sweden)

    Ackerer Damien

    2017-12-01

    Full Text Available We present a class of flexible and tractable static factor models for the term structure of joint default probabilities, the factor copula models. These high-dimensional models remain parsimonious with pair-copula constructions, and nest many standard models as special cases. The loss distribution of a portfolio of contingent claims can be exactly and efficiently computed when individual losses are discretely supported on a finite grid. Numerical examples study the key features affecting the loss distribution and multi-name credit derivatives prices. An empirical exercise illustrates the flexibility of our approach by fitting credit index tranche prices.
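
    The simplest member of the factor-copula family is the one-factor Gaussian copula with a homogeneous pool, for which the loss distribution on a finite grid can be computed exactly by mixing conditional binomial distributions over the common factor. The sketch below shows only that special case, not the paper's pair-copula constructions, and all parameters are illustrative.

```python
import numpy as np
from scipy import stats

def loss_distribution_one_factor(n_names, p, rho, lgd=0.6, n_nodes=101):
    """Loss distribution of a homogeneous pool under a one-factor Gaussian
    copula: n_names obligors, default probability p, asset correlation rho,
    loss given default lgd. Losses live on the grid k * lgd / n_names."""
    z, w = np.polynomial.hermite_e.hermegauss(n_nodes)   # Gauss-Hermite for N(0, 1)
    w = w / np.sqrt(2 * np.pi)
    c = stats.norm.ppf(p)
    p_z = stats.norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1 - rho))
    k = np.arange(n_names + 1)
    # Mix the conditional binomial distributions over the common factor.
    pmf = sum(wi * stats.binom.pmf(k, n_names, pi) for wi, pi in zip(w, p_z))
    return k * lgd / n_names, pmf

losses, pmf = loss_distribution_one_factor(n_names=100, p=0.02, rho=0.3)
print("expected loss:", np.dot(losses, pmf))   # close to p * lgd = 0.012
```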

  6. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability

  7. Estimating population density and connectivity of American mink using spatial capture-recapture.

    Science.gov (United States)

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
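
    The key ingredient of the non-Euclidean SCR model is replacing Euclidean distance in the encounter model with a least-cost (ecological) distance over a resistance surface. The toy sketch below, assuming a networkx grid graph and invented resistance values, computes least-cost distances and plugs them into a half-normal encounter probability; the full SCR likelihood and the scat-dog sampling design are not reproduced.

```python
import numpy as np
import networkx as nx

def least_cost_distances(resistance, source):
    """Least-cost (ecological) distances from a source cell over a resistance
    raster, using a 4-neighbour grid graph whose edge weights are the mean
    resistance of the two cells they join."""
    G = nx.grid_2d_graph(*resistance.shape)
    for a, b in G.edges():
        G.edges[a, b]["weight"] = 0.5 * (resistance[a] + resistance[b])
    return nx.single_source_dijkstra_path_length(G, source, weight="weight")

# Toy resistance surface: a river corridor (cheap for mink) through upland.
resistance = np.full((20, 20), 10.0)
resistance[:, 9:11] = 1.0
dists = least_cost_distances(resistance, source=(0, 10))

# Half-normal encounter probability as in SCR, but with least-cost distance
# replacing Euclidean distance (p0 and sigma are illustrative).
p0, sigma = 0.2, 15.0
d = np.array([dists[(19, 10)], dists[(19, 0)]])   # down the river vs. overland
print(p0 * np.exp(-d**2 / (2 * sigma**2)))
```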

  8. Estimating population density and connectivity of American mink using spatial capture-recapture

    Science.gov (United States)

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km2 area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  9. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    Science.gov (United States)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME for developing a coupled optimal moment-POME-copula framework to model hydrometeorological multivariate events. In this framework, margins (marginals, or marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined, based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and the corresponding copulas reflect a good statistical performance in correlation simulation. Also, the derived copulas, capturing more patterns which traditional correlation coefficients cannot reflect, provide an efficient way in other applied scenarios concerning hydrometeorological multivariate modelling.
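
    A minimal sketch of deriving a POME margin subject to moment constraints: the maximum-entropy density has the exponential-family form f(x) ∝ exp(−Σ_j λ_j x^j), and the multipliers are solved numerically so that the model moments match the imposed ones. The support, the two moment values, and the solver choice are assumptions; the paper's optimal selection of which moments to impose, and the subsequent copula construction, are not shown.

```python
import numpy as np
from scipy.optimize import fsolve

def pome_density(moments, grid):
    """Maximum-entropy (POME) density on a bounded grid subject to moment
    constraints E[x^j] = m_j, j = 1..k. The density has the form
    f(x) ~ exp(-sum_j lambda_j * x^j); the multipliers are solved numerically."""
    k = len(moments)
    dx = grid[1] - grid[0]
    powers = np.vstack([grid ** j for j in range(1, k + 1)])

    def model_moments(lam):
        w = np.exp(-lam @ powers)
        w /= w.sum() * dx                 # normalise to a density on the grid
        return powers @ w * dx            # E[x^j] under the current density

    lam = fsolve(lambda l: model_moments(l) - moments, np.zeros(k))
    f = np.exp(-lam @ powers)
    return f / (f.sum() * dx)

grid = np.linspace(0.0, 10.0, 1001)
f = pome_density(np.array([3.0, 11.0]), grid)    # constrain mean and E[x^2]
print((grid * f).sum() * (grid[1] - grid[0]))    # ~3.0, the imposed mean
```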

  10. Risk Management of Assets Dependency Based on Copulas Function

    Directory of Open Access Journals (Sweden)

    Cheng Lei

    2017-01-01

    Full Text Available As two important instruments of the financial market, the risk of financial securities such as stocks and bonds has been a hot topic in the financial field; at the same time, because financial assets are influenced by many factors, the correlation between portfolio returns has attracted growing research interest. This paper presents a Copula-SV-t model that uses the SV-t model for the marginal distributions and the Copula-t method to obtain the high-dimensional joint distribution. It not only addresses the deviation that arises when ARCH-family models are used to calculate portfolio risk, but also avoids the tendency of extreme value theory to overestimate financial risk. The empirical research shows that the model better describes the assets and their tail characteristics and is more in line with market reality. Furthermore, the empirical evidence also shows that when the assets in a portfolio are strongly correlated, the ability to diversify portfolio risk is relatively weak.
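
    A hedged sketch of the risk-aggregation step only: simulate dependent returns through a Student-t copula (which, unlike a Gaussian copula, has tail dependence) and read off a portfolio Value-at-Risk. The SV-t marginal estimation from the paper is replaced by fixed, illustrative t marginals, and all parameters are invented.

```python
import numpy as np
from scipy import stats

def t_copula_portfolio_var(n=100_000, rho=0.6, nu=5, alpha=0.99, seed=3):
    """Monte Carlo Value-at-Risk of an equally weighted two-asset portfolio
    whose returns are linked by a Student-t copula (tail dependent).
    All parameters and marginals are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    chi = rng.chisquare(nu, size=n) / nu
    t_samples = z / np.sqrt(chi)[:, None]          # bivariate Student-t draws
    u = stats.t.cdf(t_samples, df=nu)              # t-copula samples on (0, 1)^2
    returns = stats.t.ppf(u, df=4) * 0.01          # heavy-tailed daily returns
    portfolio = returns.mean(axis=1)
    return -np.quantile(portfolio, 1 - alpha)

print(f"99% VaR: {t_copula_portfolio_var():.4f}")
```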

  11. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    Science.gov (United States)

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. A sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was then employed to estimate the forest carbon density of Hunan Province. Results showed that 1) constrained linear spectral mixture analysis, with a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model with the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm⁻², ranging from 0.00 to 67.35 t·hm⁻². This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
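
    The constrained linear spectral mixture analysis step can be written as a small constrained least-squares problem per pixel: fractions are bounded to [0, 1] and forced to sum to one. The endmember spectra and the pixel below are toy values, and the sequential Gaussian co-simulation stage of the study is not reproduced.

```python
import numpy as np
from scipy.optimize import lsq_linear

def unmix_pixel(pixel, endmembers, weight=100.0):
    """Fully constrained linear unmixing of one pixel: fractions are bounded to
    [0, 1], and the sum-to-one constraint is imposed by appending a heavily
    weighted pseudo-band of ones."""
    n_classes = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones(n_classes)])
    b = np.concatenate([pixel, [weight]])
    return lsq_linear(A, b, bounds=(0.0, 1.0)).x

# Toy example: 3 spectral bands, 2 land-cover endmembers (forest, non-forest).
endmembers = np.array([[0.05, 0.30],
                       [0.45, 0.25],
                       [0.20, 0.40]])
pixel = 0.7 * endmembers[:, 0] + 0.3 * endmembers[:, 1]
print(unmix_pixel(pixel, endmembers))   # ~[0.7, 0.3]
```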

  12. The Kernel Mixture Network: A Nonparametric Method for Conditional Density Estimation of Continuous Random Variables

    OpenAIRE

    Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric

    2017-01-01

    This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...

  13. Bulk density estimation using a 3-dimensional image acquisition and analysis system

    Directory of Open Access Journals (Sweden)

    Heyduk Adam

    2016-01-01

    Full Text Available The paper presents a concept of dynamic bulk density estimation of a particulate matter stream using a 3-d image analysis system and a conveyor belt scale. A method of image acquisition should be adjusted to the type of scale. The paper presents some laboratory results of static bulk density measurements using the MS Kinect time-of-flight camera and OpenCV/Matlab software. Measurements were made for several different size classes.

  14. Trap array configuration influences estimates and precision of black bear density and abundance.

    Directory of Open Access Journals (Sweden)

    Clay M Wilton

    Full Text Available Spatial capture-recapture (SCR models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406 bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of

  15. Estimating peer density effects on oral health for community-based older adults.

    Science.gov (United States)

    Chakraborty, Bibhas; Widener, Michael J; Mirzaei Salehabadi, Sedigheh; Northridge, Mary E; Kum, Susan S; Jin, Zhu; Kunzel, Carol; Palmer, Harvey D; Metcalf, Sara S

    2017-12-29

    As part of a long-standing line of research regarding how peer density affects health, researchers have sought to understand the multifaceted ways that the density of contemporaries living and interacting in proximity to one another influences social networks and knowledge diffusion, and subsequently health and well-being. This study examined peer density effects on oral health for racial/ethnic minority older adults living in northern Manhattan and the Bronx, New York, NY. Peer age-group density was estimated by smoothing US Census data with 4 kernel bandwidths ranging from 0.25 to 1.50 mile. Logistic regression models were developed using these spatial measures and data from the ElderSmile oral and general health screening program that serves predominantly racial/ethnic minority older adults at community centers in northern Manhattan and the Bronx. The oral health outcomes modeled as dependent variables were ordinal dentition status and binary self-rated oral health. After construction of kernel density surfaces and multiple imputation of missing data, logistic regression analyses were performed to estimate the effects of peer density and other sociodemographic characteristics on the oral health outcomes of dentition status and self-rated oral health. Overall, higher peer density was associated with better oral health for older adults when estimated using smaller bandwidths (0.25 and 0.50 mile). That is, statistically significant relationships between peer density and improved dentition status were found when peer density was measured assuming a more local social network. As with dentition status, a positive significant association was found between peer density and fair or better self-rated oral health when peer density was measured assuming a more local social network. This study provides novel evidence that the oral health of community-based older adults is affected by peer density in an urban environment. To the extent that peer density signifies the potential for

  16. A citizen science based survey method for estimating the density of urban carnivores

    Science.gov (United States)

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of: red fox (Vulpes vulpes) social groups in 14, approximately 1 km2, suburban areas in 8 different towns and cities; and Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating unreliability of the national data to determine actual densities or to extrapolate a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However, this transferability is contingent on

  17. A copula method for modeling directional dependence of genes

    Directory of Open Access Journals (Sweden)

    Park Changyi

    2008-05-01

    Full Text Available Abstract Background Genes interact with each other as basic building blocks of life, forming a complicated network. The relationship between groups of genes with different functions can be represented as gene networks. With the deposition of huge microarray data sets in public domains, study on gene networking is now possible. In recent years, there has been an increasing interest in the reconstruction of gene networks from gene expression data. Recent work includes linear models, Boolean network models, and Bayesian networks. Among them, Bayesian networks seem to be the most effective in constructing gene networks. A major problem with the Bayesian network approach is the excessive computational time. This problem is due to the interactive feature of the method that requires a large search space. Since fitting a model by using the copulas does not require iterations, elicitation of the priors, and complicated calculations of posterior distributions, the need for reference to extensive search spaces can be eliminated, leading to manageable computational effort. The Bayesian network approach produces a discrete expression of conditional probabilities. Discreteness of the characteristics is not required in the copula approach, which involves use of a uniform representation of the continuous random variables. Our method is able to overcome the limitation of the Bayesian network method for gene-gene interaction, i.e. information loss due to binary transformation. Results We analyzed the gene interactions for two gene data sets (one group is eight histone genes and the other group is 19 genes which include DNA polymerases, DNA helicase, type B cyclin genes, DNA primases, radiation-sensitive genes, repair-related genes, replication protein A encoding gene, DNA replication initiation factor, securin gene, nucleosome assembly factor, and a subunit of the cohesin complex) by adopting a measure of directional dependence based on a copula function. We have compared
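
    As a hedged illustration of the copula step the abstract leans on (continuous expression values, no binary discretization), the sketch below rank-transforms a pair of gene expression vectors to uniform pseudo-observations and fits a one-parameter Gaussian copula. This shows only the shared preprocessing idea, not the authors' directional dependence measure, and the expression data are synthetic placeholders.

        # Rank-transform two expression vectors to (0,1) pseudo-observations and fit a
        # Gaussian copula by correlating their normal scores. Placeholder data only.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n = 500
        gene_x = rng.normal(size=n)
        gene_y = 0.6 * gene_x + 0.8 * rng.normal(size=n)   # correlated placeholder expression

        def pseudo_observations(x):
            """Map a sample to (0,1) by its ranks, avoiding exact 0 and 1."""
            return stats.rankdata(x) / (len(x) + 1)

        u, v = pseudo_observations(gene_x), pseudo_observations(gene_y)

        # Gaussian copula parameter: correlation of the normal scores of (u, v)
        z_u, z_v = stats.norm.ppf(u), stats.norm.ppf(v)
        rho = np.corrcoef(z_u, z_v)[0, 1]
        print(f"fitted Gaussian copula correlation: {rho:.3f}")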

  18. A group contribution method to estimate the densities of ionic liquids

    International Nuclear Information System (INIS)

    Qiao Yan; Ma Youguang; Huo Yan; Ma Peisheng; Xia Shuqian

    2010-01-01

    Densities of ionic liquids at different temperatures and pressures were collected from 84 references. The collection contains 7381 data points derived from 123 pure ionic liquids and 13 kinds of binary ionic liquid mixtures. Based on the collected database, a group contribution method based on 51 groups was used to predict the densities of ionic liquids. In the group partition, the effect of interaction among several substituents on the same center was considered; the same structure in different substituents may have different group values. For the estimation of pure ionic liquid densities, the results show that the average relative error is 0.88% and the standard deviation (S) is 0.0181. Using the set of group values, the densities of three pure ionic liquids were predicted; the average relative error is 0.27% and S is 0.0048. Ionic liquid mixtures are treated as ideal mixtures, so the group contribution method was also used to estimate their densities, giving an average relative error of 1.22% with S = 0.0607. The method can also be used to estimate the densities of MClx-type ionic liquids, which are produced by mixing an ionic liquid with a Cl- anion and a metal chloride.
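
    The core of a group contribution fit can be sketched as an ordinary least-squares problem: each ionic liquid is encoded as counts of structural groups and the measured density is regressed on those counts. The three hypothetical groups and the tiny data table below are placeholders, not the paper's 51-group scheme or its 7381-point database, and real schemes typically work through molar volume rather than density directly.

        # Schematic group-contribution fit by least squares; all data are placeholders.
        import numpy as np

        # rows: ionic liquids; columns: counts of hypothetical groups
        # [CH2 in the alkyl chain, anion A, anion B]
        group_counts = np.array([
            [2, 1, 0],
            [4, 1, 0],
            [6, 1, 0],
            [2, 0, 1],
            [4, 0, 1],
        ], dtype=float)
        measured_density = np.array([1.28, 1.24, 1.21, 1.43, 1.38])  # g cm-3, placeholder

        contrib, *_ = np.linalg.lstsq(group_counts, measured_density, rcond=None)
        predicted = group_counts @ contrib
        are = np.mean(np.abs(predicted - measured_density) / measured_density) * 100
        print("group contributions:", np.round(contrib, 4))
        print(f"average relative error: {are:.2f}%")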

  19. Estimating the amount and distribution of radon flux density from the soil surface in China

    International Nuclear Information System (INIS)

    Zhuo Weihai; Guo Qiuju; Chen Bo; Cheng Guan

    2008-01-01

    Based on an idealized model, both the annual and the seasonal radon (222Rn) flux densities from the soil surface at 1099 sites in China were estimated by linking a database of soil 226Ra content and a global ecosystems database. Digital maps of the 222Rn flux density in China were constructed at a spatial resolution of 25 km x 25 km by interpolation among the estimated data. An area-weighted annual average 222Rn flux density from the soil surface across China was estimated to be 29.7 ± 9.4 mBq m-2 s-1. Both regional and seasonal variations in the 222Rn flux densities are significant in China. Annual average flux densities in southeastern and northwestern China are generally higher than those in other regions of China, because of the high soil 226Ra content in the southeastern area and the high soil aridity in the northwestern one. The seasonal average flux density is generally higher in summer/spring than in winter, since relatively higher soil temperature and lower soil water saturation in summer/spring than in other seasons are common in China.

  20. PEDO-TRANSFER FUNCTIONS FOR ESTIMATING SOIL BULK DENSITY IN CENTRAL AMAZONIA

    Directory of Open Access Journals (Sweden)

    Henrique Seixas Barros

    2015-04-01

    Full Text Available Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density based on parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R2) and Akaike information criterion (AIC) and were validated with a dataset consisting of 125 plots different from those used to generate the models. The model with best performance in estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables and had R2 = 0.73 and AIC = -250.29. The performance of this model for predicting soil density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
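
    A minimal version of such a pedo-transfer regression, with R2 and AIC computed for model comparison, might look like the following. The data are synthetic placeholders, and the AIC uses the common Gaussian-likelihood form n*ln(RSS/n) + 2k, which may differ by additive constants from the criterion reported above.

        # Linear pedo-transfer regression: bulk density ~ clay content + pH in water.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 140
        clay = rng.uniform(10, 80, n)            # clay content (%), placeholder
        ph = rng.uniform(3.5, 5.5, n)            # pH in water, placeholder
        bulk_density = 1.6 - 0.008 * clay + 0.05 * ph + rng.normal(0, 0.05, n)  # g cm-3

        X = np.column_stack([np.ones(n), clay, ph])
        beta, *_ = np.linalg.lstsq(X, bulk_density, rcond=None)
        resid = bulk_density - X @ beta
        rss = float(resid @ resid)
        r2 = 1 - rss / float(((bulk_density - bulk_density.mean()) ** 2).sum())
        k = X.shape[1]
        aic = n * np.log(rss / n) + 2 * k        # one common Gaussian-likelihood form
        print(f"coefficients (intercept, clay, pH): {np.round(beta, 4)}")
        print(f"R2 = {r2:.3f}, AIC = {aic:.2f}")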

  1. Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery

    Directory of Open Access Journals (Sweden)

    Shouyang Liu

    2017-05-01

    Full Text Available Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine vision based method to automate the density survey of wheat at early stages. RGB images taken with a high resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasted conditions with sowing densities ranging from 100 to 600 seeds⋅m-2. Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages.
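
    A stripped-down version of the pipeline described above (green-pixel classification, connected components, plants-per-object estimation) is sketched below. The excess-green threshold and the area-based plants-per-object rule are placeholder assumptions standing in for the trained classifier and the neural network used in the study.

        # Segment green pixels with an excess-green index, label connected components,
        # and estimate plants per component from object area (stand-in for the NN).
        import numpy as np
        from scipy import ndimage

        def estimate_plant_count(rgb, exg_threshold=0.1, area_per_plant_px=400):
            rgb = rgb.astype(float) / 255.0
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            exg = 2 * g - r - b                  # excess-green vegetation index
            plant_mask = exg > exg_threshold
            labels, n_objects = ndimage.label(plant_mask)
            sizes = ndimage.sum(plant_mask, labels, index=np.arange(1, n_objects + 1))
            plants_per_object = np.maximum(1, np.round(sizes / area_per_plant_px))
            return int(plants_per_object.sum()), n_objects

        # Synthetic 200x200 image with two green blobs as a stand-in for a plot photo
        img = np.zeros((200, 200, 3), dtype=np.uint8)
        img[50:70, 40:80, 1] = 200
        img[120:160, 100:140, 1] = 200
        count, objects = estimate_plant_count(img)
        print(f"{objects} objects, estimated {count} plants")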

  2. Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

    Science.gov (United States)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

    The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from the massive data collection and the information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for the distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.

  3. Estimation of current density distribution of PAFC by analysis of cell exhaust gas

    Energy Technology Data Exchange (ETDEWEB)

    Kato, S.; Seya, A. [Fuji Electric Co., Ltd., Ichihara-shi (Japan); Asano, A. [Fuji Electric Corporate, Ltd., Yokosuka-shi (Japan)

    1996-12-31

    Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for obtaining fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in a cell plane calculated by the simulation was verified.

  4. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    Science.gov (United States)

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species.

  5. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14·3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.

  6. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid

    Science.gov (United States)

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  7. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    Directory of Open Access Journals (Sweden)

    Eléanor Brassine

    Full Text Available Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly, our approach can easily be applied to other rare predator species.

  8. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    Science.gov (United States)

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.

  9. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.
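
    As a hedged illustration of the role of J(f): under a Gaussian reference for the unknown pdf, the AMISE-optimal bandwidth for a Gaussian kernel reduces to the familiar normal-reference rule h = (4/(3n))^(1/5) * sigma. The sketch below plugs that reference value in place of the paper's analytical approximation of J(f), so it is simpler than the proposed procedure.

        # Plug-in bandwidth with J(f) replaced by its Gaussian-reference value.
        import numpy as np
        from scipy.stats import gaussian_kde

        def plugin_bandwidth_normal_reference(x):
            n = len(x)
            sigma = np.std(x, ddof=1)
            roughness_kernel = 1.0 / (2.0 * np.sqrt(np.pi))   # R(K) for the Gaussian kernel
            j_f = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)     # J(f) under a normal reference
            return (roughness_kernel / (j_f * n)) ** 0.2      # AMISE-optimal h (mu2(K) = 1)

        rng = np.random.default_rng(3)
        sample = rng.normal(size=1000)
        h = plugin_bandwidth_normal_reference(sample)
        # gaussian_kde scales its factor by the sample standard deviation
        kde = gaussian_kde(sample, bw_method=h / sample.std(ddof=1))
        print(f"plug-in bandwidth (normal reference): h = {h:.3f}")
        print(f"estimated density at 0: {kde(np.array([0.0]))[0]:.3f}")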

  10. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

    Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f) which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.

  11. Nonparametric Bayesian density estimation on manifolds with applications to planar shapes.

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David B

    2010-12-01

    Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback-Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.

  12. The importance of spatial models for estimating the strength of density dependence

    DEFF Research Database (Denmark)

    Thorson, James T.; Skaug, Hans J.; Kristensen, Kasper

    2014-01-01

    the California Coast. In this case, the nonspatial model estimates implausible oscillatory dynamics on an annual time scale, while the spatial model estimates strong autocorrelation and is supported by model selection tools. We conclude by discussing the importance of improved data archiving techniques, so that spatial models can be used to re-examine classic questions regarding the presence and strength of density dependence in wild populations.

  13. Features of the normal choriocapillaris with OCT-angiography: Density estimation and textural properties.

    Science.gov (United States)

    Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo

    2017-01-01

    The main objective of our work is to perform an in-depth analysis of the structural features of normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density and characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris densities with circular Regions of Interest (ROIs) of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure with different ROI radii. We then characterized the textural features of choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density respectively. Using the estimated parameters to synthetize new random textures via simulation we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
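
    The optimal-radius procedure described above can be sketched as follows: measure the density in a circular ROI of increasing radius, jittering the ROI centre around the FAZ centre each time, and stop at the first radius whose density standard deviation falls below 0.01. The binary image, pixel scale and jitter magnitude below are synthetic placeholders, not OCT Angiography data.

        # Stability-based choice of ROI radius on a synthetic binary flow map.
        import numpy as np

        rng = np.random.default_rng(5)
        px_per_um = 0.5                                    # placeholder pixel scale
        img = rng.random((600, 600)) < 0.52                # synthetic binary choriocapillaris map
        faz_centre = np.array([300.0, 300.0])
        yy, xx = np.mgrid[0:600, 0:600]

        def roi_density(centre, radius_um):
            r_px = radius_um * px_per_um
            mask = (xx - centre[1]) ** 2 + (yy - centre[0]) ** 2 <= r_px ** 2
            return img[mask].mean()

        def optimal_radius(radii_um, n_jitter=50, jitter_um=50, tol=0.01):
            for radius in radii_um:
                densities = [roi_density(faz_centre + rng.normal(0, jitter_um * px_per_um, 2),
                                         radius) for _ in range(n_jitter)]
                if np.std(densities) < tol:
                    return radius, float(np.mean(densities))
            return radii_um[-1], float(np.mean(densities))

        radius, density = optimal_radius(np.arange(100, 700, 50))
        print(f"optimal ROI radius ~ {radius} um, density ~ {density:.3f}")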

  14. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    Directory of Open Access Journals (Sweden)

    Alexander Richard Braczkowski

    Full Text Available Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km2) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100 km2)). The precision of estimates from the control and treatment surveys were also comparable and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  15. ALTERNATIVE METHODOLOGIES FOR THE ESTIMATION OF LOCAL POINT DENSITY INDEX: MOVING TOWARDS ADAPTIVE LIDAR DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-07-01

    Full Text Available Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds will be processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud that highly affects the performance of data processing techniques and the quality of extracted information from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied for laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take the 3D relationship among the points and the physical properties of the surfaces they belong to into account. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigen-value analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper will discuss these approaches and highlight their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
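
    In the spirit of the simplest approach mentioned above, a 3D local point density index can be sketched with a k-d tree: for each point, take the distance to its k-th nearest neighbour and divide k by the volume of the enclosing sphere. This spherical-neighbourhood variant is only a stand-in; it is not the adaptive-cylinder method discussed in the paper, and the point cloud and k are placeholders.

        # 3D local point density via k nearest neighbours in a k-d tree.
        import numpy as np
        from scipy.spatial import cKDTree

        def local_point_density_3d(points, k=10):
            """points: (n, 3) array of x, y, z; returns density in points per cubic metre."""
            tree = cKDTree(points)
            # query includes the point itself at distance 0, hence k + 1
            dists, _ = tree.query(points, k=k + 1)
            r_k = dists[:, -1]
            volume = (4.0 / 3.0) * np.pi * r_k ** 3
            return k / volume

        rng = np.random.default_rng(11)
        cloud = rng.uniform(0, 20, size=(5000, 3))      # placeholder point cloud (metres)
        density = local_point_density_3d(cloud)
        print(f"median local density: {np.median(density):.2f} points/m^3")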

  16. Cost-offsets of prescription drug expenditures: data analysis via a copula-based bivariate dynamic hurdle model.

    Science.gov (United States)

    Deb, Partha; Trivedi, Pravin K; Zimmer, David M

    2014-10-01

    In this paper, we estimate a copula-based bivariate dynamic hurdle model of prescription drug and nondrug expenditures to test the cost-offset hypothesis, which posits that increased expenditures on prescription drugs are offset by reductions in other nondrug expenditures. We apply the proposed methodology to data from the Medical Expenditure Panel Survey, which have the following features: (i) the observed bivariate outcomes are a mixture of zeros and continuously measured positives; (ii) both the zero and positive outcomes show state dependence and inter-temporal interdependence; and (iii) the zeros and the positives display contemporaneous association. The point mass at zero is accommodated using a hurdle or a two-part approach. The copula-based approach to generating joint distributions is appealing because the contemporaneous association involves asymmetric dependence. The paper studies samples categorized by four health conditions: arthritis, diabetes, heart disease, and mental illness. There is evidence of greater than dollar-for-dollar cost-offsets of expenditures on prescribed drugs for relatively low levels of spending on drugs and less than dollar-for-dollar cost-offsets at higher levels of drug expenditures. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Water shortage risk assessment considering large-scale regional transfers: a copula-based uncertainty case study in Lunan, China.

    Science.gov (United States)

    Gao, Xueping; Liu, Yinzhu; Sun, Bowen

    2018-06-05

    The risk of water shortage caused by uncertainties, such as frequent drought, varied precipitation, multiple water resources, and different water demands, brings new challenges to the water transfer projects. Uncertainties exist for transferring water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the shortage degree under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation and the chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferring water and local surface water and sampling from the multivariate probability distribution, which are used as inputs for the optimization model. The approach reveals the distribution of water shortage and is able to emphasize the importance of improving and updating transferring water and local surface water management, and examine their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated applying the UWSRAM, also with the corresponding allocation measures under different water availability levels and violating probabilities. The UWSRAM is valuable for mastering the overall multi-water resource and water shortage degree, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers and achieving sustainable development.
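
    The copula-based Monte Carlo step of such an assessment can be sketched as follows: sample correlated transferred water and local surface water from a Gaussian copula with gamma marginals, then summarize the shortfall against a fixed demand. The marginal parameters, correlation, demand and units are placeholders, not values from the Lunan case study, and the Gaussian copula stands in for whatever copula family the UWSRAM actually selects.

        # Gaussian-copula Monte Carlo for joint water availability and shortage risk.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2024)
        n_sim = 100_000
        rho = 0.4                                       # dependence between the two sources

        # Gaussian copula: correlated normals -> uniforms -> gamma marginals
        cov = np.array([[1.0, rho], [rho, 1.0]])
        z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_sim)
        u = stats.norm.cdf(z)
        transfer = stats.gamma.ppf(u[:, 0], a=4.0, scale=50.0)   # transferred water, placeholder
        local = stats.gamma.ppf(u[:, 1], a=2.5, scale=80.0)      # local surface water, placeholder

        demand = 400.0
        shortage = np.maximum(demand - (transfer + local), 0.0)
        print(f"P(shortage > 0) = {np.mean(shortage > 0):.3f}")
        print(f"mean shortage given shortage: {shortage[shortage > 0].mean():.1f}")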

  18. Estimation of Mesospheric Densities at Low Latitudes Using the Kunming Meteor Radar Together With SABER Temperatures

    Science.gov (United States)

    Yi, Wen; Xue, Xianghui; Reid, Iain M.; Younger, Joel P.; Chen, Jinsong; Chen, Tingdi; Li, Na

    2018-04-01

    Neutral mesospheric densities at a low latitude have been derived during April 2011 to December 2014 using data from the Kunming meteor radar in China (25.6°N, 103.8°E). The daily mean density at 90 km was estimated using the ambipolar diffusion coefficients from the meteor radar and temperatures from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument. The seasonal variations of the meteor radar-derived density are consistent with the density from the Mass Spectrometer and Incoherent Scatter (MSIS) model and show a dominant annual variation, with a maximum during winter and a minimum during summer. A simple linear model was used to separate the effects of atmospheric density and the meteor velocity on the meteor radar peak detection height. We find that a 1 km/s difference in the vertical meteor velocity yields a change of approximately 0.42 km in peak height. The strong correlation between the meteor radar density and the velocity-corrected peak height indicates that the meteor radar density estimates accurately reflect changes in neutral atmospheric density and that meteor peak detection heights, when adjusted for meteoroid velocity, can serve as a convenient tool for measuring density variations around the mesopause. A comparison of the ambipolar diffusion coefficient and peak height observed simultaneously by two co-located meteor radars indicates that the relative errors of the daily mean ambipolar diffusion coefficient and peak height should be less than 5% and 6%, respectively, and that the absolute error of the peak height is less than 0.2 km.

  19. Assessing different parameters estimation methods of Weibull distribution to compute wind power density

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi

    2016-01-01

    Highlights: • Effectiveness of six numerical methods is evaluated to determine wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. Four methods, EMJ, EML, EPF and ML, present very favorable efficiency while the GP method shows weak ability for all stations. However, it is found that the more effective method is not the same among stations owing to the differences in wind characteristics.
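
    One of the six approaches above, the maximum likelihood (ML) method, can be sketched directly with scipy: fit the Weibull shape k and scale c to a wind-speed sample and compute the wind power density 0.5 * rho * c^3 * Gamma(1 + 3/k). The wind-speed sample and air density below are placeholders, not data from the Alberta stations.

        # ML fit of Weibull parameters and the resulting wind power density.
        import numpy as np
        from scipy import stats
        from scipy.special import gamma as gamma_fn

        # Placeholder wind-speed sample (m/s); replace with station observations
        wind_speed = stats.weibull_min.rvs(c=2.0, scale=7.0, size=2000, random_state=42)

        # Maximum likelihood fit with the location fixed at zero
        k, loc, c = stats.weibull_min.fit(wind_speed, floc=0)
        rho_air = 1.225                                  # kg/m^3, assumed standard air density
        wpd = 0.5 * rho_air * c**3 * gamma_fn(1 + 3 / k) # W/m^2, mean of 0.5*rho*v^3
        print(f"Weibull shape k = {k:.2f}, scale c = {c:.2f} m/s")
        print(f"wind power density = {wpd:.1f} W/m^2")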

  20. Urban birds in the Sonoran Desert: estimating population density from point counts

    Directory of Open Access Journals (Sweden)

    Karina Johnston López

    2015-01-01

    Full Text Available We conducted bird surveys in Hermosillo, Sonora using distance sampling to characterize detection functions at point-transects for native and non-native urban birds in a desert environment. From March to August 2013 we sampled 240 plots in the city and its surroundings; each plot was visited three times. Our purpose was to provide information for a rapid assessment of bird density in this region by using point counts. We identified 72 species, including six non-native species. Sixteen species had sufficient detections to accurately estimate the parameters of the detection functions. To illustrate the estimation of density from bird count data using our inferred detection functions, we estimated the density of the Eurasian Collared-Dove (Streptopelia decaocto) under two different levels of urbanization: highly urbanized (90-100% of urban impact) and moderately urbanized zones (39-50% of urban impact). Density of S. decaocto in the highly-urbanized and moderately-urbanized zones was 3.97±0.52 and 2.92±0.52 individuals/ha, respectively. By using our detection functions, avian ecologists can efficiently reallocate time and effort that is regularly used for the estimation of detection distances, to increase the number of sites surveyed and to collect other relevant ecological information.
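
    For a point-transect design like the one above, a half-normal detection function g(r) = exp(-r^2/(2*sigma^2)) gives a closed-form route from detection distances to density: the maximum-likelihood estimate of sigma^2 is mean(r^2)/2 and the effective detection area per point visit is 2*pi*sigma^2. The sketch below uses simulated distances together with the plot and visit counts from the abstract; it is a schematic, not the authors' distance-sampling analysis.

        # Half-normal point-transect distance sampling: sigma MLE and density estimate.
        import numpy as np

        rng = np.random.default_rng(17)
        # Detection distances (m) pooled over all point counts; placeholder sample
        # drawn from f(r) proportional to r * exp(-r^2 / (2*sigma^2))
        sigma_true = 60.0
        u = rng.random(250)
        r = sigma_true * np.sqrt(-2.0 * np.log1p(-u))

        n_points, visits = 240, 3                        # from the survey design above
        sigma2_hat = np.mean(r**2) / 2.0                 # ML estimate of sigma^2
        effective_area_ha = 2 * np.pi * sigma2_hat / 10_000   # m^2 -> hectares
        density_per_ha = len(r) / (n_points * visits * effective_area_ha)
        print(f"sigma = {np.sqrt(sigma2_hat):.1f} m, effective area = {effective_area_ha:.2f} ha")
        print(f"density = {density_per_ha:.2f} birds/ha")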

  1. DNA-based population density estimation of black bear at northern ...

    African Journals Online (AJOL)

    The analysis of deoxyribonucleic acid (DNA) microsatellites from hair samples obtained by the non-invasive method of traps was used to estimate the population density of black bears (Ursus americanus eremicus) in a mountain located at the county of Lampazos, Nuevo Leon, Mexico. The genotyping of bears was ...

  2. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model building evolutionary algorithm that constructs at each generation a mixture of factorized probability

  3. Eurasian otter (Lutra lutra) density estimate based on radio tracking and other data sources

    Czech Academy of Sciences Publication Activity Database

    Quaglietta, L.; Hájková, Petra; Mira, A.; Boitani, L.

    2015-01-01

    Roč. 60, č. 2 (2015), s. 127-137 ISSN 2199-2401 R&D Projects: GA AV ČR KJB600930804 Institutional support: RVO:68081766 Keywords : Lutra lutra * Density estimation * Edge effect * Known-to-be-alive * Linear habitats * Sampling scale Subject RIV: EG - Zoology

  4. The Wegner Estimate and the Integrated Density of States for some ...

    Indian Academy of Sciences (India)

    The integrated density of states (IDS) for random operators is an important function describing many physical characteristics of a random system. Properties of the IDS are derived from the Wegner estimate that describes the influence of finite-volume perturbations on a background system. In this paper, we present a simple ...

  5. 3D depth-to-basement and density contrast estimates using gravity and borehole data

    Science.gov (United States)

    Barbosa, V. C.; Martins, C. M.; Silva, J. B.

    2009-05-01

    We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada's basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding

  6. Estimating abundance and density of Amur tigers along the Sino-Russian border.

    Science.gov (United States)

    Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping

    2016-07-01

    As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11 400 km2 state space, density estimates were 0.33 and 0.40 individuals/100 km2 in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km2 corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  7. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.

    Science.gov (United States)

    Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li

    2017-05-03

    In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark-channel, saturation-value, and chroma which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images.

  8. NEW CONCEPTS AND TEST METHODS OF CURVE PROFILE AREA DENSITY IN SURFACE: ESTIMATION OF AREAL DENSITY ON CURVED SPATIAL SURFACE

    OpenAIRE

    Hong Shen

    2011-01-01

    The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in containing intersection (or intersection density relied on intersection reference), curve profile intersection density in surface (or curve intercept intersection density relied on intersection of containing curve), and curve profile area density in surface (AS) were defined. AS expressed the amount of curve profile area of Y phase in the unit containing surface area, S...

  9. An analytical framework for estimating aquatic species density from environmental DNA

    Science.gov (United States)

    Chambert, Thierry; Pilliod, David S.; Goldberg, Caren S.; Doi, Hideyuki; Takahara, Teruhiko

    2018-01-01

    Environmental DNA (eDNA) analysis of water samples is on the brink of becoming a standard monitoring method for aquatic species. This method has improved detection rates over conventional survey methods and thus has demonstrated effectiveness for estimation of site occupancy and species distribution. The frontier of eDNA applications, however, is to infer species density. Building upon previous studies, we present and assess a modeling approach that aims at inferring animal density from eDNA. The modeling combines eDNA and animal count data from a subset of sites to estimate species density (and associated uncertainties) at other sites where only eDNA data are available. As a proof of concept, we first perform a cross-validation study using experimental data on carp in mesocosms. In these data, fish densities are known without error, which allows us to test the performance of the method with known data. We then evaluate the model using field data from a study on a stream salamander species to assess the potential of this method to work in natural settings, where density can never be known with absolute certainty. Two alternative distributions (Normal and Negative Binomial) to model variability in eDNA concentration data are assessed. Assessment based on the proof of concept data (carp) revealed that the Negative Binomial model provided much more accurate estimates than the model based on a Normal distribution, likely because eDNA data tend to be overdispersed. Greater imprecision was found when we applied the method to the field data, but the Negative Binomial model still provided useful density estimates. We call for further model development in this direction, as well as further research targeted at sampling design optimization. It will be important to assess these approaches on a broad range of study systems.

  10. Modelling the joint distribution of competing risks survival times using copula functions

    OpenAIRE

    Kaishev, V. K.; Haberman, S.; Dimitrova, D. S.

    2005-01-01

    The problem of modelling the joint distribution of survival times in a competing risks model, using copula functions is considered. In order to evaluate this joint distribution and the related overall survival function, a system of non-linear differential equations is solved, which relates the crude and net survival functions of the modelled competing risks, through the copula. A similar approach to modelling dependent multiple decrements was applied by Carriere (1994) who used a Gaussian cop...

  11. Is the Potential for International Diversification Disappearing? A Dynamic Copula Approach

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Errunza, Vihang; Jacobs, Kris

    International equity markets are characterized by nonlinear dependence and asymmetries. We propose a new dynamic asymmetric copula model to capture long-run and short-run dependence, multivariate nonnormality, and asymmetries in large cross-sections. We find that copula correlations have increased...... and nonlinear dependence. The benefits from international diversification have reduced over time, drastically so for DMs. EMs still offer significant diversification benefits, especially during large market downturns....

  12. Using bremsstrahlung for electron density estimation and correction in EAST tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yingjie, E-mail: bestfaye@gmail.com; Wu, Zhenwei; Gao, Wei; Jie, Yinxian; Zhang, Jizong; Huang, Juan; Zhang, Ling; Zhao, Junyu

    2013-11-15

    Highlights: • The visible bremsstrahlung diagnostic provides a simple and effective tool for electron density estimation in steady state discharges. • This method can make up for some disadvantages of the present FIR and TS diagnostics in the EAST tokamak. • The line-averaged electron density has been deduced from the central VB signal. The results can also be used for FIR n_e correction. • Typical n_e profiles have been obtained with T_e and reconstructed bremsstrahlung profiles. -- Abstract: In EAST, the electron density (n_e) is measured by the multi-channel far-infrared (FIR) hydrogen cyanide (HCN) interferometer and Thomson scattering (TS) diagnostics. However, it is difficult to obtain an accurate n_e profile because of problems existing in the current electron density diagnostics. Since the visible bremsstrahlung (VB) emission coefficient has a strong dependence on electron density, the visible bremsstrahlung measurement system developed to determine the ion effective charge (Z_eff) may also be used for n_e estimation via inverse operations. With the assumption that Z_eff has a flat profile and does not change significantly in steady state discharges, the line-averaged electron density (n̄_e) has been deduced from VB signals in L-mode and H-mode discharges in EAST. The results are in good agreement with n̄_e from FIR, which proves that VB measurement is an effective tool for n_e estimation. The VB diagnostic is also applied to n̄_e correction when the FIR n̄_e is wrong, because the laser phase shift reversal together with noise causes errors when the electron density changes rapidly in H-mode discharges. Typical n_e profiles in the L-mode and H-mode phases are also deduced with reconstructed bremsstrahlung profiles.

  13. Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

    Science.gov (United States)

    Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

    2009-04-01

    Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.

  14. Importance of tree basic density in biomass estimation and associated uncertainties

    DEFF Research Database (Denmark)

    Njana, Marco Andrew; Meilby, Henrik; Eid, Tron

    2016-01-01

    Key message Aboveground and belowground tree basic densities varied between and within the three mangrove species. If appropriately determined and applied, basic density may be useful in estimation of tree biomass. Predictive accuracy of the common (i.e. multi-species) models including aboveground...... of sustainable forest management, conservation and enhancement of carbon stocks (REDD+) initiatives offer an opportunity for sustainable management of forests including mangroves. In carbon accounting for REDD+, it is required that carbon estimates prepared for monitoring reporting and verification schemes...... and examine uncertainties in estimation of tree biomass using indirect methods. Methods This study focused on three dominant mangrove species (Avicennia marina (Forssk.) Vierh, Sonneratia alba J. Smith and Rhizophora mucronata Lam.) in Tanzania. A total of 120 trees were destructively sampled for aboveground...

  15. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
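
    The building blocks named above can be written down compactly: the bivariate Frank copula and a simple mixture with the independence copula. The full Frank mixture family of the paper assigns separate parameters to all pairwise and higher-order terms; the two-dimensional sketch below, with an illustrative theta and mixture weight, only shows the components.

        # Bivariate Frank copula CDF and a two-component mixture with independence.
        import numpy as np

        def frank_copula_cdf(u, v, theta):
            """Bivariate Frank copula C(u, v; theta), theta != 0."""
            num = np.expm1(-theta * u) * np.expm1(-theta * v)
            return -np.log1p(num / np.expm1(-theta)) / theta

        def mixture_copula_cdf(u, v, theta, weight):
            """Mixture of a Frank copula and the independence copula Pi(u, v) = u*v."""
            return weight * frank_copula_cdf(u, v, theta) + (1 - weight) * u * v

        u, v = np.meshgrid(np.linspace(0.05, 0.95, 5), np.linspace(0.05, 0.95, 5))
        c = mixture_copula_cdf(u, v, theta=5.0, weight=0.7)   # illustrative parameters
        print(np.round(c, 3))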

  16. Large portfolio risk management and optimal portfolio allocation with dynamic elliptical copulas

    Directory of Open Access Journals (Sweden)

    Jin Xisong

    2018-02-01

    Full Text Available Previous research has focused on the importance of modeling the multivariate distribution for optimal portfolio allocation and active risk management. However, existing dynamic models are not easily applied to high-dimensional problems due to the curse of dimensionality. In this paper, we extend the framework of the Dynamic Conditional Correlation/Equicorrelation and an extreme value approach into a series of Dynamic Conditional Elliptical Copulas. We investigate risk measures such as Value at Risk (VaR) and Expected Shortfall (ES) for passive portfolios and dynamic optimal portfolios using Mean-Variance and ES criteria for a sample of US stocks over a period of 10 years. Our results suggest that (1) Modeling the marginal distribution is important for dynamic high-dimensional multivariate models. (2) Neglecting the dynamic dependence in the copula causes over-aggressive risk management. (3) The DCC/DECO Gaussian copula and t-copula work very well for both VaR and ES. (4) Grouped t-copulas and t-copulas with dynamic degrees of freedom further match the fat tail. (5) Correctly modeling the dependence structure makes an improvement in portfolio optimization with respect to tail risk. (6) Models driven by multivariate t innovations with exogenously given degrees of freedom provide a flexible and applicable alternative for optimal portfolio risk management.
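
    A static, simplified stand-in for the risk-measure computation described above: simulate returns from an equicorrelated Gaussian copula with Student-t marginals and read off VaR and ES at the 99% level for an equal-weight portfolio. Asset count, correlation, degrees of freedom and return scale are placeholders, not the paper's dynamic conditional elliptical copula estimates.

        # Gaussian copula + t marginals: Monte Carlo VaR and ES for an equal-weight portfolio.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(99)
        n_assets, n_sim, alpha = 10, 200_000, 0.99

        # Equicorrelated Gaussian copula
        rho = 0.3
        corr = np.full((n_assets, n_assets), rho) + (1 - rho) * np.eye(n_assets)
        z = rng.multivariate_normal(np.zeros(n_assets), corr, size=n_sim)
        u = stats.norm.cdf(z)

        # Student-t marginals (placeholder daily-return scale), equal-weight portfolio
        returns = stats.t.ppf(u, df=5) * 0.01
        portfolio = returns.mean(axis=1)

        var = -np.quantile(portfolio, 1 - alpha)
        es = -portfolio[portfolio <= -var].mean()
        print(f"99% VaR = {var:.4f}, 99% ES = {es:.4f}")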

  17. Estimation of energy density of Li-S batteries with liquid and solid electrolytes

    Science.gov (United States)

    Li, Chunmei; Zhang, Heng; Otaegui, Laida; Singh, Gurpreet; Armand, Michel; Rodriguez-Martinez, Lide M.

    2016-09-01

    With the exponential growth of technology in mobile devices and the rapid expansion of electric vehicles into the market, it appears that the energy density of the state-of-the-art Li-ion batteries (LIBs) cannot satisfy the practical requirements. Sulfur has been one of the best cathode material choices due to its high charge storage (1675 mAh g-1), natural abundance and easy accessibility. In this paper, calculations are performed for different cell design parameters such as the active material loading, the amount/thickness of electrolyte, the sulfur utilization, etc. to predict the energy density of Li-S cells based on liquid, polymeric and ceramic electrolytes. It demonstrates that Li-S battery is most likely to be competitive in gravimetric energy density, but not volumetric energy density, with current technology, when comparing with LIBs. Furthermore, the cells with polymer and thin ceramic electrolytes show promising potential in terms of high gravimetric energy density, especially the cells with the polymer electrolyte. This estimation study of Li-S energy density can be used as a good guidance for controlling the key design parameters in order to get desirable energy density at cell-level.
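
    The cell-level estimate reduces to bookkeeping: usable capacity times average discharge voltage divided by total cell mass. The numbers below are invented round values, not design parameters from the paper, and serve only to show the arithmetic.

      # Gravimetric energy density of a hypothetical Li-S cell (illustrative values only).
      sulfur_mass_g = 2.0        # sulfur loading per cell
      utilization = 0.7          # fraction of the 1675 mAh/g theoretical capacity actually delivered
      avg_voltage_V = 2.1        # average discharge voltage of the Li-S couple
      other_mass_g = 8.0         # lithium anode, electrolyte, separator, collectors, packaging

      capacity_Ah = sulfur_mass_g * 1.675 * utilization        # 1675 mAh/g = 1.675 Ah/g
      energy_Wh = capacity_Ah * avg_voltage_V
      cell_mass_kg = (sulfur_mass_g + other_mass_g) / 1000.0
      print(f"{energy_Wh / cell_mass_kg:.0f} Wh/kg at cell level")   # about 490 Wh/kg with these inputs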

  18. Copula-based modeling of degree-correlated networks

    International Nuclear Information System (INIS)

    Raschke, Mathias; Schläpfer, Markus; Trantopoulos, Konstantinos

    2014-01-01

    Dynamical processes on complex networks such as information exchange, innovation diffusion, cascades in financial networks or epidemic spreading are highly affected by their underlying topologies as characterized by, for instance, degree–degree correlations. Here, we introduce the concept of copulas in order to generate random networks with an arbitrary degree distribution and a rich a priori degree–degree correlation (or ‘association’) structure. The accuracy of the proposed formalism and corresponding algorithm is numerically confirmed, while the method is tested on a real-world network of yeast protein–protein interactions. The derived network ensembles can be systematically deployed as proper null models, in order to unfold the complex interplay between the topology of real-world networks and the dynamics on top of them. (paper)
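
    A minimal sketch of the underlying idea, under simplifying assumptions: a Gaussian copula ties together the degrees found at the two ends of an edge, while the marginal degree distribution (here a Zipf law with an arbitrary exponent) is prescribed separately. Building the full network ensemble as in the paper requires an additional wiring step that is omitted.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n_edges, rho = 10_000, 0.4                      # assumed number of edges and association strength

      # Gaussian copula: correlated standard normals mapped to uniforms on [0, 1].
      z = rng.multivariate_normal(np.zeros(2), [[1.0, rho], [rho, 1.0]], size=n_edges)
      u = stats.norm.cdf(z)

      # Prescribed marginal degree distribution at the edge end points (Zipf, chosen for illustration).
      k = stats.zipf.ppf(u, a=2.5).astype(int)

      print("degree-degree rank correlation:", stats.spearmanr(k[:, 0], k[:, 1])[0])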

  19. On the application of copula in modeling maintenance contract

    International Nuclear Information System (INIS)

    Iskandar, B P; Husniah, H

    2016-01-01

    This paper deals with the application of copulas in maintenance contracts for a nonrepayable item. Failures of the item are modeled using a two-dimensional approach based on the age and usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) is carried out as minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, whilst for the agent it is to determine the optimal price of the contract. We obtain the mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation. (paper)

  20. A novel deep learning-based approach to high accuracy breast density estimation in digital mammography

    Science.gov (United States)

    Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo

    2017-03-01

    Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable to learn the imaging characteristics from a multitude of sub-images and classify them into dense and fatty tissues. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. The image set was composed of the original mammogram and an eigen-image able to capture the X-ray characteristics, even though CNNs are well known to extract features effectively from the original image alone. The 100 test images, which were not used in training the CNN, were used to validate the performance. The correlation coefficient between the breast density estimates by the CNN and those by the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.

  1. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  2. Joint estimation of crown of thorns (Acanthaster planci) densities on the Great Barrier Reef

    Directory of Open Access Journals (Sweden)

    M. Aaron MacNeil

    2016-08-01

    Full Text Available Crown-of-thorns starfish (CoTS; Acanthaster spp.) are an outbreaking pest among many Indo-Pacific coral reefs that cause substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and [95% uncertainty intervals]), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014.

  3. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    Directory of Open Access Journals (Sweden)

    Marco Lombardo

    Full Text Available PURPOSE: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. METHODS: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL. The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr, the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. RESULTS: The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. CONCLUSIONS: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi

  4. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amount of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over the time in unpredicted scenarios. To reduce the computational cost, data streams are often studied in forms of condensed representation, e.g., Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of the data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling where more/less resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide up-to-date model of the data stream. Comparing with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime available PDF estimated by KDE-Track can be applied for visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York city. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow on real time without extra overhead and provides insight analysis of the pick up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
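
    The core trick of evaluating the estimator only at resampling points and interpolating elsewhere can be sketched in a few lines. The version below is static (fixed grid, one batch of data); KDE-Track's online updating and adaptive placement of the resampling points are not shown, and the grid, bandwidth and data are arbitrary choices for illustration.

      import numpy as np

      def kde_on_grid(samples, grid, bandwidth):
          # Gaussian kernel density estimate evaluated only at the resampling points.
          diffs = (grid[:, None] - samples[None, :]) / bandwidth
          return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

      rng = np.random.default_rng(3)
      stream = rng.normal(0.0, 1.0, size=5000)        # stand-in for a batch from the data stream
      grid = np.linspace(-4.0, 4.0, 41)               # resampling points
      pdf_grid = kde_on_grid(stream, grid, bandwidth=0.3)

      # Density at an arbitrary query point via linear interpolation between resampling points.
      print(np.interp(0.25, grid, pdf_grid))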

  5. A joint frailty-copula model between tumour progression and death for meta-analysis.

    Science.gov (United States)

    Emura, Takeshi; Nakatochi, Masahiro; Murotani, Kenta; Rondeau, Virginie

    2017-12-01

    Dependent censoring often arises in biomedical studies when time to tumour progression (e.g., relapse of cancer) is censored by an informative terminal event (e.g., death). For meta-analysis combining existing studies, a joint survival model between tumour progression and death has been considered under semicompeting risks, which induces dependence through the study-specific frailty. Our paper here utilizes copulas to generalize the joint frailty model by introducing additional source of dependence arising from intra-subject association between tumour progression and death. The practical value of the new model is particularly evident for meta-analyses in which only a few covariates are consistently measured across studies and hence there exist residual dependence. The covariate effects are formulated through the Cox proportional hazards model, and the baseline hazards are nonparametrically modeled on a basis of splines. The estimator is then obtained by maximizing a penalized log-likelihood function. We also show that the present methodologies are easily modified for the competing risks or recurrent event data, and are generalized to accommodate left-truncation. Simulations are performed to examine the performance of the proposed estimator. The method is applied to a meta-analysis for assessing a recently suggested biomarker CXCL12 for survival in ovarian cancer patients. We implement our proposed methods in R joint.Cox package.

  6. A new estimation method for nuclide number densities in equilibrium cycle

    International Nuclear Information System (INIS)

    Seino, Takeshi; Sekimoto, Hiroshi; Ando, Yoshihira.

    1997-01-01

    A new method is proposed for estimating nuclide number densities of an LWR equilibrium cycle by multi-recycling calculation. Conventionally, a large computation time is needed to attain the ultimate equilibrium state. Hence, a cycle with nearly constant fuel composition, reached after a few recycling calculations of a simulated cycle operation under a specific fuel core design, has been taken as the equilibrium state. The present method uses steady-state fuel nuclide number densities, obtained from a continuously fuel-supplied core model, as the initial guess for the multi-recycling burnup calculation. These number densities are then converted into initial number densities for the nuclides of a batch-supplied fuel. It was found that the calculated number densities could attain a more precise equilibrium state than a conventional multi-recycling calculation with a small number of recyclings. In particular, the present method could give the ultimate equilibrium number densities of nuclides with mass numbers higher than those of 245Cm and 244Pu, which could not attain the ultimate equilibrium state within a reasonable number of iterations using a conventional method. (author)

  7. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    Science.gov (United States)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  8. Crowd density estimation based on convolutional neural networks with mixed pooling

    Science.gov (United States)

    Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming

    2017-09-01

    Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty in adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs. It replaces deterministic pooling operations with a learned parameter that combines conventional max pooling with average pooling. Second, we present a classification strategy, in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and more easily than other methods.
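
    The mixed-pooling idea itself, blending max and average pooling through a mixing weight, can be sketched without a deep learning framework. In the paper the weight is learned during training; below it is a fixed constant, and the feature map is a toy array.

      import numpy as np

      def mixed_pool2d(x, size=2, alpha=0.5):
          # Blend max pooling and average pooling over non-overlapping size x size windows.
          h, w = x.shape
          x = x[:h - h % size, :w - w % size]                        # crop to a multiple of the window
          blocks = x.reshape(x.shape[0] // size, size, x.shape[1] // size, size)
          return alpha * blocks.max(axis=(1, 3)) + (1.0 - alpha) * blocks.mean(axis=(1, 3))

      feature_map = np.arange(16, dtype=float).reshape(4, 4)
      print(mixed_pool2d(feature_map, size=2, alpha=0.7))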

  9. Automated voxelization of 3D atom probe data through kernel density estimation

    International Nuclear Information System (INIS)

    Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna

    2015-01-01

    Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjustment of voxel size as an input parameter, through visual supervision, making the final outcome user dependent, reliant on heuristic knowledge and potentially prone to error. This work utilizes Kernel density estimators to select an optimal voxel size in an unsupervised manner to perform feature selection, in particular targeting resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ / γ’ interface in a Ni–Al–Cr superalloy. - Highlights: • Develop approach for standardizing aspects of atom probe reconstruction. • Use Kernel density estimators to select optimal voxel sizes in an unsupervised manner. • Perform interfacial analysis of Ni–Al–Cr superalloy, using new automated approach. • Optimize voxel size to preserve the feature of interest and minimizing loss / noise.

  10. Estimated refractive index and solid density of DT, with application to hollow-microsphere laser targets

    International Nuclear Information System (INIS)

    Briggs, C.K.; Tsugawa, R.T.; Hendricks, C.D.; Souers, P.C.

    1975-01-01

    The literature values for the 0.55-μm refractive index N of liquid and gaseous H2 and D2 are combined to yield the equation (N − 1) = [(3.15 ± 0.12) × 10⁻⁶]ρ, where ρ is the density in moles per cubic meter. This equation can be extrapolated to 300 K for use on DT in solid, liquid, and gas phases. The equation is based on a review of solid-hydrogen densities measured in bulk and also by diffraction methods. By extrapolation, the estimated densities and 0.55-μm refractive indices for DT are given. Radiation-induced point defects could possibly cause optical absorption and a resulting increased refractive index in solid DT and T2. The effect of the DT refractive index in measuring glass and cryogenic DT laser targets is also described
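
    A quick numerical check of the quoted relation, using an assumed round value for the solid DT mass density (not taken from the paper):

      # (N - 1) = 3.15e-6 * rho, with rho in mol/m^3
      rho_mass_g_cm3 = 0.25          # assumed solid DT mass density, for illustration only
      molar_mass_g_mol = 5.03        # D (2.014) + T (3.016)
      rho_mol_m3 = rho_mass_g_cm3 / molar_mass_g_mol * 1.0e6
      n_refr = 1.0 + 3.15e-6 * rho_mol_m3
      print(f"rho = {rho_mol_m3:.0f} mol/m^3  ->  N = {n_refr:.3f}")   # roughly 1.16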

  11. Kernel density estimation and transition maps of Moldavian Neolithic and Eneolithic settlement

    Directory of Open Access Journals (Sweden)

    Robin Brigand

    2018-04-01

    Full Text Available The data presented in this article are related to the research article entitled “Neo-Eneolithic settlement pattern and salt exploitation in Romanian Moldavia” (Brigand and Weller, 2018 [1]). Kernel density estimation (KDE) is used in order to move beyond the discrete distribution of sites and to enable us to work on a continuous surface that reflects the intensity of the occupation in the space. Maps of density per period – Neolithic I (Cris), Neolithic II (LBK), Eneolithic I (Precucuteni), Eneolithic II (Cucuteni A), Eneolithic III-IV (Cucuteni A-B and B) – are used to create maps of density difference (Figs. 1–4) in order to analyse the dynamic (either non-existent, negative or positive) between two chronological sequences.

  12. Application of Density Estimation Methods to Datasets Collected From a Glider

    Science.gov (United States)

    2015-09-30

    2832. PUBLICATIONS Küsel, E.T., Siderius, M., and Mellinger, D.K., “Single-sensor, cue-counting population density estimation: Average ...contained echolocation clicks of sperm whales (Physeter macrocephalus). This species is also known to occur in the Gulf of Mexico where data is... Because such an approach considers the entire click bandwidth, the average probability of detection of thousands of click realizations, and hence the

  13. Estimation of bone mineral density by digital X-ray radiogrammetry: theoretical background and clinical testing

    DEFF Research Database (Denmark)

    Rosholm, A; Hyldstrup, L; Backsgaard, L

    2002-01-01

    A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in radius, ulna and the three middle metacarpal bones are identified and approximately 1800 geometrical measurements from these bones......-ray absorptiometry (r = 0.86, p Relative to this age-related loss, the reported short...... sites and a precision that potentially allows for relatively short observation intervals.

  14. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    Science.gov (United States)

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  15. Power spectral density of velocity fluctuations estimated from phase Doppler data

    OpenAIRE

    Jicha Miroslav; Lizal Frantisek; Jedelsky Jan

    2012-01-01

    Laser Doppler Anemometry (LDA) and its modifications, such as Phase Doppler Particle Anemometry (P/DPA), is a point-wise method for optical non-intrusive measurement of particle velocity with high data rate. Conversion of the LDA velocity data from the temporal to the frequency domain – calculation of the power spectral density (PSD) of velocity fluctuations – is a non-trivial task due to non-equidistant data sampling in time. We briefly discuss possibilities for the PSD estimation and specify limitations caused...

  16. Dictionary-Based Stochastic Expectation–Maximization for SAR Amplitude Probability Density Function Estimation

    OpenAIRE

    Moser , Gabriele; Zerubia , Josiane; Serpico , Sebastiano B.

    2006-01-01

    In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have proved to be effective for different land-cov...

  17. Network Kernel Density Estimation for the Analysis of Facility POI Hotspots

    Directory of Open Access Journals (Sweden)

    YU Wenhao

    2015-12-01

    Full Text Available The distribution pattern of urban facility POIs (points of interest) usually forms clusters (i.e. "hotspots") in urban geographic space. To detect such hotspots, existing methods mostly employ spatial density estimation based on Euclidean distance, ignoring the fact that the service function and interrelation of urban facilities operate along network path distance rather than conventional Euclidean distance. With these methods it is difficult to delineate the shape and size of a hotspot exactly and objectively. Therefore, this research adopts kernel density estimation based on network distance to compute hotspot density and proposes a simple and efficient algorithm. The algorithm extends the 2D dilation operator to a 1D morphological operator, thus computing the density of each network unit. Evaluation experiments suggest that the algorithm is more efficient and scalable than existing algorithms. A case study on real POI data shows that the extent of a hotspot can highlight the spatial characteristics of urban functions along traffic routes, providing valuable spatial knowledge and information services for applications in regional planning, navigation and geographic information queries.

  18. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    Science.gov (United States)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

    Thanks to multi-spacecraft missions, it has recently become possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separation. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separation up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to multi-spacecraft missions in the current density evaluation.
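
    Under the same linear-variation assumption, a four-point current density estimate can be written compactly: fit the magnetic field gradient tensor exactly from the three separation vectors and take its curl. This barycentric variant is equivalent to the face-by-face curlometer when the field is exactly linear; the inputs and units below are assumptions for the sketch.

      import numpy as np

      MU0 = 4e-7 * np.pi   # vacuum permeability (T m / A)

      def current_density(positions, b_fields):
          # positions: (4, 3) spacecraft positions in metres; b_fields: (4, 3) field vectors in tesla.
          dr = positions[1:] - positions[0]        # separation vectors relative to spacecraft 1
          db = b_fields[1:] - b_fields[0]          # field differences relative to spacecraft 1
          grad_b = np.linalg.solve(dr, db)         # grad_b[i, j] = dB_j / dx_i (linear field assumed)
          curl = np.array([grad_b[1, 2] - grad_b[2, 1],
                           grad_b[2, 0] - grad_b[0, 2],
                           grad_b[0, 1] - grad_b[1, 0]])
          return curl / MU0                        # current density in A/m^2

      pos = 1.0e4 * np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
      b = 1.0e-9 * np.array([[5, 0, 0], [5, 0.2, 0], [5, 0, 0], [5, 0, 0]], dtype=float)
      print(current_density(pos, b))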

  19. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which could cause severe yield losses and extensive damage. Since there is still very little information about error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per each 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to a probability distribution tested with the χ² test confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17% expressed through the coefficient of variation (cv) was achieved if the plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave cv higher than 23%. This study indicates that more attention should be paid to estimation of sampling error in experimental field plots to ensure more reliable estimation of population density of cyst nematodes.

  20. A new approach on seismic mortality estimations based on average population density

    Science.gov (United States)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies established the association between mortality estimation and seismic intensity without considering the population density. In China, however, the data are not always available, especially when it comes to the very urgent relief situation in the disaster. And the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide the path to analyze the death tolls for earthquakes. The present paper employs the average population density to predict the final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case. A typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door for conducting final death forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.

  1. ESTIMATE OF STAND DENSITY INDEX FOR EUCALYPTUS UROPHYLLA USING DIFFERENT FIT METHODS

    Directory of Open Access Journals (Sweden)

    Ernani Lopes Possato

    Full Text Available ABSTRACT The Reineke stand density index (SDI) was introduced in 1933 and remains a research target due to its importance in supporting decisions about the management of population density. Part of this work focuses on how plots are selected and on methods for fitting the Reineke model parameters, in order to improve the definition of the SDI value for the genetic material evaluated. The present study aimed to estimate the SDI value for Eucalyptus urophylla using the Reineke model fitted by linear regression (LR) and stochastic frontier analysis (SFA). The database containing pairs of number of stems per hectare (N) and mean quadratic diameter (Dq) was selected at three intensities, containing the 8, 30 and 43 plots of greatest density, and models were fitted by LR and SFA at each selected intensity. The intensity of data selection slightly altered the estimates of the parameters and SDI when comparing the fits of each method. On the other hand, the fitting method influenced the mean estimated values of the slope and SDI, which corresponded to -1.863 and 740 for LR and -1.582 and 810 for SFA.
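
    For the LR variant, the fit is ordinary least squares on the log-log form of the Reineke model, with the SDI read off at a reference quadratic mean diameter (25 cm is a common convention, assumed here). The data points below are made up to keep the sketch self-contained.

      import numpy as np

      # Hypothetical (stems/ha, quadratic mean diameter in cm) pairs from the densest plots.
      N = np.array([2200.0, 1800.0, 1500.0, 1250.0, 1000.0, 850.0])
      Dq = np.array([10.0, 11.5, 13.0, 14.5, 16.5, 18.0])

      # Reineke model: ln N = a + b ln Dq   (b is the self-thinning slope, -1.605 in Reineke's original work).
      b, a = np.polyfit(np.log(Dq), np.log(N), 1)

      # SDI = expected number of stems per hectare at the reference diameter.
      sdi = np.exp(a + b * np.log(25.0))
      print(f"slope = {b:.3f}, SDI = {sdi:.0f} stems/ha")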

  2. Effect of catchment properties and flood generation regime on copula selection for bivariate flood frequency analysis

    Science.gov (United States)

    Filipova, Valeriya; Lawrence, Deborah; Klempe, Harald

    2018-02-01

    Applying copula-based bivariate flood frequency analysis is advantageous because the results provide information on both the flood peak and volume. More data are, however, required for such an analysis, and it is often the case that only data series with a limited record length are available. To overcome this issue of limited record length, data regarding climatic and geomorphological properties can be used to complement statistical methods. In this paper, we present a study of 27 catchments located throughout Norway, in which we assess whether catchment properties, flood generation processes and flood regime have an effect on the correlation between flood peak and volume and, in turn, on the selection of copulas. To achieve this, the annual maximum flood events were first classified into events generated primarily by rainfall, snowmelt or a combination of these. The catchments were then classified into flood regime, depending on the predominant flood generation process producing the annual maximum flood events. A contingency table and Fisher's exact test were used to determine the factors that affect the selection of copulas in the study area. The results show that the two-parameter copulas BB1 and BB7 are more commonly selected in catchments with high steepness, high mean annual runoff and rainfall flood regime. These findings suggest that in these types of catchments, the dependence structure between flood peak and volume is more complex and cannot be modeled effectively using a one-parameter copula. The results illustrate that by relating copula types to flood regime and catchment properties, additional information can be supplied for selecting copulas in catchments with limited data.

  3. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  4. Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models

    OpenAIRE

    Martin Burda; Artem Prokhorov

    2012-01-01

    Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...

  5. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America.

    Directory of Open Access Journals (Sweden)

    Tara Chestnut

    Full Text Available Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L(-1). The highest density observed was ∼3 million zoospores L(-1). We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd.

  6. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Directory of Open Access Journals (Sweden)

    Yongjun Ahn

    Full Text Available The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive

  7. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Science.gov (United States)

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric

  8. Estimation of Nanodiamond Surface Charge Density from Zeta Potential and Molecular Dynamics Simulations.

    Science.gov (United States)

    Ge, Zhenpeng; Wang, Yi

    2017-04-20

    Molecular dynamics simulations of nanoparticles (NPs) are increasingly used to study their interactions with various biological macromolecules. Such simulations generally require detailed knowledge of the surface composition of the NP under investigation. Even for some well-characterized nanoparticles, however, this knowledge is not always available. An example is nanodiamond, a nanoscale diamond particle with a surface dominated by oxygen-containing functional groups. In this work, we explore using the harmonic restraint method developed by Venable et al. to estimate the surface charge density (σ) of nanodiamonds. Based on the Gouy-Chapman theory, we convert the experimentally determined zeta potential of a nanodiamond to an effective charge density (σeff), and then use the latter to estimate σ via molecular dynamics simulations. Through scanning a series of nanodiamond models, we show that the above method provides a straightforward protocol to determine the surface charge density of relatively large (> ∼100 nm) NPs. Overall, our results suggest that despite certain limitations, the above protocol can be readily employed to guide the model construction for MD simulations, which is particularly useful when only limited experimental information on the NP surface composition is available to a modeler.
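
    A generic version of the zeta-potential-to-charge-density step can be written with the Grahame equation for a symmetric electrolyte, treating the measured zeta potential as the diffuse-layer potential. This is a textbook approximation sketched here for illustration; the electrolyte concentration, temperature and valence are assumed values, and the paper's own conversion and harmonic-restraint simulations are not reproduced.

      import numpy as np

      EPS0, KB, QE, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23   # SI constants

      def grahame_sigma(zeta_mV, c_molar=0.01, eps_r=78.5, T=298.0, z=1):
          # Surface charge density (C/m^2) of a flat diffuse layer in a z:z electrolyte,
          # with the zeta potential used in place of the surface potential (an approximation).
          n0 = 1000.0 * NA * c_molar                    # bulk ion number density (1/m^3)
          psi = zeta_mV * 1.0e-3
          return np.sqrt(8.0 * EPS0 * eps_r * KB * T * n0) * np.sinh(z * QE * psi / (2.0 * KB * T))

      print(grahame_sigma(-40.0))   # a -40 mV zeta potential gives roughly -0.01 C/m^2 here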

  9. Estimating reliability of degraded system based on the probability density evolution with multi-parameter

    Directory of Open Access Journals (Sweden)

    Jiang Ge

    2017-01-01

    Full Text Available System degradation is usually caused by the degradation of multiple parameters. Assessing system reliability with the universal generating function is less accurate than Monte Carlo simulation, and it does not yield the probability density function of the system output performance. Therefore, a reliability assessment method based on multi-parameter probability density evolution is presented for complex degraded systems. Firstly, the system output function is formulated from the relation between the component parameters and the system output performance. Then, the probability density evolution equation is established from the probability conservation principle and the system output function. Furthermore, the probability distribution characteristics of the system output performance are obtained by solving the differential equation. Finally, the reliability of the degraded system is estimated. This method does not need to discretize the performance parameters and can establish a continuous probability density function of the system output performance with high calculation efficiency and low cost. A numerical example shows that the method is applicable to evaluating the reliability of multi-parameter degraded systems.

  10. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be obtained by integration over the failure domain. However, efficiently estimating the PDF remains an open problem. The existing fractional-moment-based maximum entropy approach provides an advanced method for PDF estimation, but its main shortcoming is that it restricts the reliability analysis to structures with independent inputs. In fact, structures with correlated inputs are common in engineering, so this paper improves the maximum entropy method and applies the Unscented Transformation (UT) technique, a very efficient moment estimation method for models with arbitrary inputs, to compute the fractional moments of the performance function for structures with correlations. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required in reliability analysis, which is determined by the UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.

  11. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.

  12. A simple method for estimating the length density of convoluted tubular systems.

    Science.gov (United States)

    Ferraz de Carvalho, Cláudio A; de Campos Boldrini, Silvia; Nishimaru, Flávio; Liberti, Edson A

    2008-10-01

    We present a new method for estimating the length density (Lv) of convoluted tubular structures exhibiting an isotropic distribution. Although the traditional equation Lv=2Q/A is used, the parameter Q is obtained by considering the collective perimeters of tubular sections. This measurement is converted to a standard model of the structure, assuming that all cross-sections are approximately circular and have an average perimeter similar to that of actual circular cross-sections observed in the same material. The accuracy of this method was tested in eight experiments using hollow macaroni bent into helical shapes. After measuring the length of the macaroni segments, they were boiled and randomly packed into cylindrical volumes along with an aqueous suspension of gelatin and India ink. The solidified blocks were cut into slices 1.0 cm thick and 33.2 cm2 in area (A). The total perimeter of the macaroni cross-sections so revealed was stereologically estimated using a test system of straight parallel lines. Given Lv and the reference volume, the total length of macaroni in each section could be estimated. Additional corrections were made for the changes induced by boiling, and the off-axis position of the thread used to measure length. No statistical difference was observed between the corrected estimated values and the actual lengths. This technique is useful for estimating the length of capillaries, renal tubules, and seminiferous tubules.
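
    The conversion at the heart of the method is short enough to show numerically. The measurements below are invented, but they follow the recipe in the abstract: the collective perimeter of the tubular profiles is divided by the mean perimeter of a truly circular cross-section to give an equivalent profile count Q, which then enters Lv = 2Q/A.

      # Hypothetical stereological measurements on one section (lengths in cm, areas in cm^2).
      total_profile_perimeter = 94.0     # summed perimeter of all tubular profiles in the section
      mean_circular_perimeter = 3.1      # mean perimeter of the truly circular cross-sections
      section_area = 33.2                # reference area A of the section

      Q = total_profile_perimeter / mean_circular_perimeter   # equivalent number of transects
      Lv = 2.0 * Q / section_area                              # length density (cm of tubule per cm^3)
      print(f"Q = {Q:.1f}, Lv = {Lv:.2f} cm/cm^3")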

  13. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfit...... the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error...... sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this....

  14. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    Science.gov (United States)

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds requested to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness in detection of the proposed method both in terms of detection promptness and accuracy.

  15. The use of copula functions for modeling the risk of investment in shares traded on the Warsaw Stock Exchange

    Science.gov (United States)

    Domino, Krzysztof; Błachowicz, Tomasz

    2014-11-01

    In our work, copula functions and the Hurst exponent calculated using local Detrended Fluctuation Analysis (DFA) were used to investigate the risk of investment in shares traded on the Warsaw Stock Exchange. The combination of copula functions and the Hurst exponent calculated using local DFA is a new approach. For the copula analysis, bivariate variables were composed of share prices of the PEKAO bank (a big bank with high capitalization) and other banks (PKOBP, BZ WBK, MBANK and HANDLOWY, in decreasing capitalization order) as well as companies from other branches (KGHM-mining industry, PKNORLEN-petrol industry and ASSECO-software industry). Hurst exponents were calculated for daily share prices and used to predict large drops in those prices. The Hurst exponent proved to be a valuable indicator in the copula selection procedure, since low values pointed to heavy-tailed copulas, e.g. the Clayton copula.
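
    For reference, a plain (global, first-order) DFA estimate of the Hurst exponent looks as follows; the paper uses a local, moving-window variant on share prices, which this sketch does not reproduce. White noise should give a value near 0.5.

      import numpy as np

      def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
          # First-order Detrended Fluctuation Analysis: slope of log F(s) versus log s.
          y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
          flucts = []
          for s in scales:
              f2 = []
              for i in range(len(y) // s):
                  seg = y[i * s:(i + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)
                  f2.append(np.mean((seg - trend) ** 2))
              flucts.append(np.sqrt(np.mean(f2)))
          return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

      rng = np.random.default_rng(4)
      print(round(dfa_hurst(rng.normal(size=2000)), 2))   # uncorrelated returns -> close to 0.5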

  16. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
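
    The conditional-mean special case of this construction is the classical kernel (Nadaraya-Watson) regression of the next sample on the last few observed samples. The sketch below uses a synthetic sinusoidal trace and arbitrary lag and bandwidth; the paper's full treatment, which keeps the entire conditional distribution rather than only its mean, is not reproduced.

      import numpy as np

      def kde_predict(history, query, lag=5, bandwidth=0.05):
          # Kernel-weighted mean of past responses, i.e. the mean of p(next sample | last `lag` samples).
          X = np.array([history[i:i + lag] for i in range(len(history) - lag)])   # covariate windows
          y = np.array(history[lag:])                                             # responses
          w = np.exp(-0.5 * np.sum((X - query) ** 2, axis=1) / bandwidth ** 2)    # Gaussian kernel weights
          return np.sum(w * y) / np.sum(w)

      t = np.arange(0.0, 60.0, 0.2)
      trace = np.sin(2 * np.pi * t / 4.0)              # stand-in for a breathing trace
      print(kde_predict(trace[:-1], trace[-6:-1]), trace[-1])   # one-step-ahead prediction vs. truth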

  17. Estimating the mass density in the thermosphere with the CYGNSS mission.

    Science.gov (United States)

    Bussy-Virat, C.; Ridley, A. J.

    2017-12-01

    The Cyclone Global Navigation Satellite System (CYGNSS) mission, launched in December 2016, is a constellation of eight satellites orbiting the Earth at 510 km. Its goal is to improve our understanding of rapid hurricane wind intensification. Each CYGNSS satellite uses GPS signals reflected off the ocean's surface to measure the wind. The GPS signals can also be used to specify the orbits of the satellites quite precisely. The motion of satellites in low Earth orbit is strongly influenced by the neutral density of the surrounding atmosphere through drag. Modeling the neutral density in the upper atmosphere is a major challenge, as it involves a comprehensive understanding of the complex coupling between the thermosphere and the ionosphere, the magnetosphere, and the Sun. This is why thermospheric models (such as NRLMSIS, Jacchia-Bowman, HASDM, GITM, or TIEGCM) can only approximate it with limited accuracy, which decreases during strong geomagnetic events. Because atmospheric drag directly depends on the thermospheric density, the density can be estimated by applying filtering methods to the trajectories of the CYGNSS observatories. The CYGNSS mission can provide unique results since the constellation of eight satellites enables multiple measurements of the same region at close intervals (~10 minutes), which can be used to detect short-time-scale features. Moreover, the CYGNSS spacecraft can be pitched from a low- to a high-drag attitude configuration, which can be used in the filtering methods to improve the accuracy of the atmospheric density estimation. The methodology and the results of this approach applied to the CYGNSS mission will be presented.

  18. Deep sea animal density and size estimated using a Dual-frequency IDentification SONar (DIDSON) offshore the island of Hawaii

    Science.gov (United States)

    Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.

    2018-01-01

    Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of DSLs is important for understanding mesopelagic ecosystem dynamics and for predicting top predators' distribution, DSL composition and density are often estimated from trawls, which may be biased in terms of extrusion, avoidance, and gear-associated effects. Instead, location and biomass of DSLs can be estimated with active acoustic techniques, though estimates are often aggregate, without size- or taxon-specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3, and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and average animal sizes were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and to density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.

  19. A model to optimize trap systems used for small mammal (Rodentia, Insectivora) density estimates

    Directory of Open Access Journals (Sweden)

    Damiano Preatoni

    1997-12-01

    Full Text Available The environment found in the upper and lower Padane Plain and the adjoining hills is not very homogeneous. In fact, it is impossible to find biotopes extensive enough to satisfy the criteria required for small mammal density estimation based on the removal method. This limitation has been partially overcome by adopting a reduced grid of 39 traps whose spacing depends on the studied species. The aim of this work was to verify, and where possible quantify, the efficiency of a sampling method based on a "reduced" number of catch points. The efficiency of 18 trapping cycles, carried out from 1991 to 1993, was evaluated as percent bias. For each trapping cycle, 100 computer simulations were performed, thus obtaining a Monte Carlo estimate of the bias in density values. The efficiency of different trap arrangements was then examined by varying the design criteria: the number of traps ranged from 9 to 49, trap spacing from 5 to 15 m, and trapping period duration from 5 to 9 nights. In this way an optimal grid system was found both in dimensions and in duration. The simulation process involved, as a whole, 1511 different grid types, for 11347 virtual trapping cycles. Our results indicate that density estimates based on "reduced" grids are affected by an average -16% bias (an underestimate), and that an optimally sized grid should consist of a 6x6 trap square with about 8.7 m spacing, operated for 7 nights.
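
    The percent-bias evaluation described above can be illustrated with a toy Monte Carlo experiment; the two-occasion removal estimator below is a textbook simplification (the study used multi-night trapping on a spatial grid), and the population size and capture probability are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def removal_estimate(c1, c2):
    """Two-occasion removal estimator N = c1^2 / (c1 - c2); undefined when c1 <= c2."""
    return c1 ** 2 / (c1 - c2) if c1 > c2 else np.nan

def percent_bias(true_n=60, capture_p=0.3, n_sim=1000):
    """Monte Carlo percent bias of the removal estimate for a closed population."""
    est = []
    for _ in range(n_sim):
        c1 = rng.binomial(true_n, capture_p)          # first-night captures (removed)
        c2 = rng.binomial(true_n - c1, capture_p)     # second-night captures
        est.append(removal_estimate(c1, c2))
    est = np.array(est)
    return 100.0 * (np.nanmean(est) - true_n) / true_n

print(percent_bias())   # negative values indicate underestimation, as reported above
```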

  20. New density estimation methods for charged particle beams with applications to microbunching instability

    International Nuclear Information System (INIS)

    Terzic, B.; Bassi, G.

    2011-01-01

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. (G. Bassi, J.A. Ellison, K. Heinemann and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzic, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043), designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The particle distribution is first binned onto a finite grid, after which two grid-based methods are employed to approximate the distribution: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a substantial upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code (G. Bassi, J.A. Ellison, K. Heinemann and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)) and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
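
    A minimal sketch of a thresholded-wavelet-transform (TWT) density estimate on a 2D grid is shown below, using PyWavelets; the wavelet family, decomposition level and threshold rule are illustrative assumptions rather than the exact choices of the paper.

```python
import numpy as np
import pywt  # PyWavelets

def twt_density(x, y, grid=128, wavelet="db4", level=3, k=3.0):
    """Grid-based density estimate with thresholded wavelet transform (TWT) denoising:
    bin the particles, wavelet-transform the histogram, soft-threshold the detail
    coefficients to suppress sampling noise, and reconstruct."""
    hist, _, _ = np.histogram2d(x, y, bins=grid, density=True)
    coeffs = pywt.wavedec2(hist, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # threshold scaled by a robust noise estimate from the finest diagonal details
    sigma = np.median(np.abs(details[-1][-1])) / 0.6745
    thr = k * sigma
    denoised = [approx] + [tuple(pywt.threshold(d, thr, mode="soft") for d in lvl)
                           for lvl in details]
    dens = pywt.waverec2(denoised, wavelet)[:hist.shape[0], :hist.shape[1]]
    return np.clip(dens, 0.0, None)  # densities cannot be negative

# x, y: macro-particle phase-space coordinates from the simulation (hypothetical arrays)
```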

  1. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    Science.gov (United States)

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
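
    The record contains no code; as a simple point of reference, a basic (non-movement-based) 3D kernel density estimate of space use from GPS fixes can be computed with SciPy as sketched below. The movement-based kernels and the HPC optimizations described above are not reproduced, and the example data are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

# fixes: N x 3 array of GPS locations (x, y, altitude); synthetic data for the example
fixes = np.random.default_rng(0).normal(size=(500, 3)) * [100.0, 100.0, 10.0]

kde = gaussian_kde(fixes.T)            # Gaussian product kernel, Scott's rule bandwidth

# evaluate the utilization density on a coarse 3D grid
gx, gy, gz = [np.linspace(c.min(), c.max(), 20) for c in fixes.T]
X, Y, Z = np.meshgrid(gx, gy, gz, indexing="ij")
density = kde(np.vstack([X.ravel(), Y.ravel(), Z.ravel()])).reshape(X.shape)

# a 95% utilization volume can be taken as the highest-density cells holding 95% of mass
```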

  2. Contribution of the 'simple solutions' concept to estimate density of actinides concentrated solutions

    International Nuclear Information System (INIS)

    Sorel, C.; Moisy, Ph.; Dinh, B.; Blanc, P.

    2000-01-01

    In order to calculate criticality parameters of nuclear fuel solution systems, number densities of nuclides are needed; they are generally estimated from density equations. Most of the relations allowing the calculation of the density of aqueous solutions containing the electrolytes HNO3-UO2(NO3)2-Pu(NO3)4, usually called 'nitrate dilution laws', are strictly empirical. They are obtained by fitting assumed polynomial expressions to experimental density data. Outside their interpolation range, such mathematical expressions show discrepancies between calculated and experimental data, which appear in the high-concentration range. In this study, a physico-chemical approach based on the isopiestic mixtures rule is suggested. The behaviour followed by these mixtures was first observed in 1936 by Zdanovskii and expressed as: 'Binary solutions (i.e. one electrolyte in water) having the same water activity mix without variation of this water activity value'. With regard to this behaviour, a set of basic thermodynamic expressions was pointed out by Ryazanov and Vdovenko in 1965 concerning enthalpy, entropy, volume of mixtures, activity and osmotic coefficient of the components. In particular, a very simple relation for the density is obtained from the volume mixture expression, depending on only two types of physico-chemical variables: (i) the concentration of each component in the mixture and in its binary solution having the same water activity as the mixture, and (ii) the density of each component in the binary solution having the same water activity as the mixture. Therefore, the calculation requires knowledge of binary data (water activity, density and concentration) for each component at the same temperature as the mixture. Such experimental data are widely published in the literature and are available for nitric acid and uranyl nitrate. Nevertheless, nitric acid binary data show large discrepancies between the authors and need to be

  3. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  4. Technical note: cortical thickness and density estimation from clinical CT using a prior thickness-density relationship

    NARCIS (Netherlands)

    Humbert, L.; Hazrati Marangalou, J.; Del Río Barquero, L.M.; van Lenthe, G.H.; van Rietbergen, B.

    2016-01-01

    Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical

  5. Biodistribution parameters and radiation absorbed dose estimates for radiolabeled human low density lipoprotein

    International Nuclear Information System (INIS)

    Hay, R.V.; Ryan, J.W.; Williams, K.A.; Atcher, R.W.; Brechbiel, M.W.; Gansow, O.A.; Fleming, R.M.; Stark, V.J.; Lathrop, K.A.; Harper, P.V.

    1992-01-01

    The authors propose a model to generate radiation absorbed dose estimates for radiolabeled low density lipoprotein (LDL), based upon eight studies of LDL biodistribution in three adult human subjects. Autologous plasma LDL was labeled with Tc-99m, I-123, or In-111 and injected intravenously. Biodistribution of each LDL derivative was monitored by quantitative analysis of scintigrams and direct counting of excreta and of serial blood samples. Assuming that transhepatic flux accounts for the majority of LDL clearance from the bloodstream, they obtained values of cumulated activity (A) and of mean dose per unit administered activity (D) for each study. In each case highest D values were calculated for liver, with mean doses of 5 rads estimated at injected activities of 27 mCi, 9 mCi, and 0.9 mCi for Tc-99m-LDL, I-123-LDL, and In-111-LDL, respectively

  6. Obtaining DDF Curves of Extreme Rainfall Data Using Bivariate Copula and Frequency Analysis

    DEFF Research Database (Denmark)

    Sadri, Sara; Madsen, Henrik; Mikkelsen, Peter Steen

    2009-01-01

    Rainfall series from a location situated near Copenhagen in Denmark are extracted in three ways: (1) by maximum mean intensity; (2) by depth and duration … Rainfall depth is related to duration for a given return period, giving DDF (depth-duration-frequency) curves; the copula approach does not assume the rainfall variables are independent or jointly normally distributed. For rainfall extracted using method 2, the marginal distribution of depth was found to fit the Generalized Pareto distribution while duration was found to fit the Gamma distribution, using the method of L-moments. For rainfall extracted using method 3, the volume was fit with a Generalized Pareto distribution and the duration with a Pearson type III distribution. The Clayton copula was found to be appropriate for bivariate analysis of rainfall depth and duration for both methods 2 and 3, and DDF curves were derived using the Clayton copula for depth and duration …
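
    A compact sketch of the copula step is given below: the Clayton dependence parameter is fitted by inverting Kendall's tau, and the copula then combines the fitted marginal CDFs of depth and duration into a joint non-exceedance probability from which DDF curves can be read off. The rank-based pseudo-observations and the tau-inversion estimator are generic choices, not necessarily those of the paper.

```python
import numpy as np
from scipy.stats import kendalltau

def fit_clayton(u, v):
    """Fit a Clayton copula to pseudo-observations (u, v) in (0,1) by inverting
    Kendall's tau: for Clayton, tau = theta / (theta + 2)."""
    tau, _ = kendalltau(u, v)
    return 2.0 * tau / (1.0 - tau)

def clayton_cdf(u, v, theta):
    """C(u, v) = P(U <= u, V <= v) under a Clayton copula; combined with the marginal
    CDFs of depth and duration this gives the joint non-exceedance probability."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

# depth, duration: extreme-rainfall samples (hypothetical arrays)
# u = rank(depth)/(n+1), v = rank(duration)/(n+1); theta = fit_clayton(u, v)
```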

  7. A new empirical model to estimate hourly diffuse photosynthetic photon flux density

    Science.gov (United States)

    Foyo-Moreno, I.; Alados, I.; Alados-Arboledas, L.

    2018-05-01

    Knowledge of the photosynthetic photon flux density (Qp) is critical in different applications dealing with climate change, plant physiology, biomass production, and natural illumination in greenhouses. This is particularly true regarding its diffuse component (Qpd), which can enhance canopy light-use efficiency and thereby boost carbon uptake. Diffuse photosynthetic photon flux density is therefore a key driving factor of ecosystem-productivity models. In this work, we propose a model to estimate this component, using a previous model to calculate Qp and then partitioning it into its components. We used measurements of global solar radiation (Rs) in urban Granada (southern Spain) to study relationships between the ratio Qpd/Rs and different parameters accounting for solar position, water-vapour absorption and sky conditions. The model performance has been validated with experimental measurements from sites with varied climatic conditions. The model provides acceptable results, with the mean bias error and root mean square error varying between -0.3% and -8.8% and between 9.6% and 20.4%, respectively. Direct measurements of this flux are very scarce, so modelled estimates are needed; this is particularly true for the diffuse component. We propose a new parameterization to estimate this component using only measured data of global solar irradiance, which facilitates the construction of long-term data series of PAR in regions where continuous measurements of PAR are not yet performed.

  8. Power spectral density of velocity fluctuations estimated from phase Doppler data

    Science.gov (United States)

    Jedelsky, Jan; Lizal, Frantisek; Jicha, Miroslav

    2012-04-01

    Laser Doppler Anemometry (LDA) and its modifications, such as Phase Doppler Particle Anemometry (P/DPA), are point-wise methods for optical non-intrusive measurement of particle velocity with a high data rate. Conversion of LDA velocity data from the temporal to the frequency domain, i.e. calculation of the power spectral density (PSD) of velocity fluctuations, is a non-trivial task due to non-equidistant data sampling in time. We briefly discuss possibilities for the PSD estimation and specify limitations caused by seeding density and other factors of the flow and the LDA setup. Selected results of LDA measurements are compared with corresponding Hot Wire Anemometry (HWA) data in the frequency domain. The slot correlation (SC) method, implemented in the software program Kern by Nobach (2006), is used for the PSD estimation. The influence of several input parameters on the resulting PSDs is described, and the optimum setup of the software for our data of particle-laden air flow in a realistic human airway model is documented. The typical character of the flow is described using PSD plots of velocity fluctuations, with comments on specific properties of the flow. Some recommendations for improving future experiments to acquire better PSD results are given.
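
    A bare-bones illustration of the slotting idea behind the SC method is sketched below: products of all sample pairs are binned by time lag to form an autocorrelation estimate, from which a one-sided PSD is obtained by a cosine transform. Refinements implemented in Kern (such as local normalisation) are not reproduced, and the slot width and frequency grid are placeholder choices.

```python
import numpy as np

def slotted_psd(t, u, max_lag, n_slots, n_freq=128):
    """Estimate the autocorrelation of irregularly sampled velocity data by slot
    correlation (products of all sample pairs binned by their time lag), then obtain
    a one-sided PSD as the cosine transform of the autocorrelation."""
    u = u - u.mean()
    dt = np.abs(t[:, None] - t[None, :])           # pairwise time lags
    prod = u[:, None] * u[None, :]
    edges = np.linspace(0.0, max_lag, n_slots + 1)
    idx = np.digitize(dt, edges) - 1
    acf = np.array([prod[idx == k].mean() if np.any(idx == k) else 0.0
                    for k in range(n_slots)])
    acf /= acf[0]                                   # normalise by the zero-lag slot
    tau = 0.5 * (edges[:-1] + edges[1:])
    freqs = np.linspace(0.0, 0.5 * n_slots / max_lag, n_freq)
    psd = np.array([2.0 * np.trapz(acf * np.cos(2 * np.pi * f * tau), tau)
                    for f in freqs])
    return freqs, psd

# t: particle arrival times, u: corresponding velocities (hypothetical arrays)
```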

  9. Power spectral density of velocity fluctuations estimated from phase Doppler data

    Directory of Open Access Journals (Sweden)

    Jicha Miroslav

    2012-04-01

    Full Text Available Laser Doppler Anemometry (LDA) and its modifications, such as Phase Doppler Particle Anemometry (P/DPA), are point-wise methods for optical non-intrusive measurement of particle velocity with a high data rate. Conversion of LDA velocity data from the temporal to the frequency domain, i.e. calculation of the power spectral density (PSD) of velocity fluctuations, is a non-trivial task due to non-equidistant data sampling in time. We briefly discuss possibilities for the PSD estimation and specify limitations caused by seeding density and other factors of the flow and the LDA setup. Selected results of LDA measurements are compared with corresponding Hot Wire Anemometry (HWA) data in the frequency domain. The slot correlation (SC) method, implemented in the software program Kern by Nobach (2006), is used for the PSD estimation. The influence of several input parameters on the resulting PSDs is described, and the optimum setup of the software for our data of particle-laden air flow in a realistic human airway model is documented. The typical character of the flow is described using PSD plots of velocity fluctuations, with comments on specific properties of the flow. Some recommendations for improving future experiments to acquire better PSD results are given.

  10. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid it is important to predict the solar radiation accurately, since forecast errors can lead to significant costs. Recently, the growing number of statistical approaches that cope with this problem has yielded a prolific literature. In general terms, the main research discussion is centred on selecting the “best” forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about the variability of those forecasts in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models and combinations of both in order to improve prediction interval performance. The results show that an optimal combination in terms of prediction interval statistical tests can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
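
    As a hedged illustration of combining a volatility model with a kernel density estimate of forecast errors, the sketch below forms a prediction interval from an EWMA variance recursion and KDE quantiles of standardized errors; the EWMA decay factor and the standardization step are simplifying assumptions, not the combination scheme evaluated in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def prediction_interval(errors, point_forecast, alpha=0.1, lam=0.94):
    """Prediction interval for the next observation: an EWMA volatility forecast of
    past forecast errors is combined with a kernel density estimate of the
    standardized errors, whose quantiles set the interval width."""
    errors = np.asarray(errors, dtype=float)
    # EWMA (RiskMetrics-style) variance recursion as a simple volatility model
    var = errors[0] ** 2
    for e in errors[1:]:
        var = lam * var + (1.0 - lam) * e ** 2
    sigma = np.sqrt(var)

    z = errors / errors.std()                      # crude standardization of the errors
    kde = gaussian_kde(z)
    grid = np.linspace(z.min() - 1, z.max() + 1, 512)
    cdf = np.cumsum(kde(grid)); cdf /= cdf[-1]
    q_lo = grid[np.searchsorted(cdf, alpha / 2)]
    q_hi = grid[np.searchsorted(cdf, 1 - alpha / 2)]
    return point_forecast + sigma * q_lo, point_forecast + sigma * q_hi
```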

  11. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America

    Science.gov (United States)

    Chestnut, Tara E.; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ~3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure

  12. Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Shanshan Yang

    Full Text Available Detection of dysphonia is useful for monitoring the progression of phonatory impairment in patients with Parkinson's disease (PD), and also helps assess disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto a bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform linear classification of voice records from healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machines (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of the voice records, with a sensitivity of 0.986, a specificity of 0.708, and an area under the receiver operating characteristic (ROC) curve of 0.94. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly.
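
    A minimal sketch of the KPCA-plus-KDE classification pipeline with the MAP rule is given below using scikit-learn; the SFS feature-selection step is omitted, and the kernel, number of components and bandwidth are placeholder settings rather than the values tuned in the study.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KernelDensity

def fit_map_classifier(X, y, n_components=2, bandwidth=0.5):
    """Project vocal features with kernel PCA, then model each class in the reduced
    space with a kernel density estimate; prediction uses the MAP rule
    argmax_c [ log p(x | c) + log P(c) ]."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf")
    Z = kpca.fit_transform(X)
    classes = np.unique(y)
    kdes = {c: KernelDensity(bandwidth=bandwidth).fit(Z[y == c]) for c in classes}
    priors = {c: np.log(np.mean(y == c)) for c in classes}

    def predict(X_new):
        Z_new = kpca.transform(X_new)
        scores = np.column_stack([kdes[c].score_samples(Z_new) + priors[c]
                                  for c in classes])
        return classes[np.argmax(scores, axis=1)]

    return predict

# X: matrix of selected vocal measures, y: labels (0 = healthy, 1 = PD); hypothetical data
```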

  13. Estimation of Engine Intake Air Mass Flow using a generic Speed-Density method

    Directory of Open Access Journals (Sweden)

    Vojtíšek Michal

    2014-10-01

    Full Text Available Measurement of real driving emissions (RDE) from internal combustion engines under real-world operation using portable onboard monitoring systems (PEMS) is becoming an increasingly important tool aiding the assessment of the effects of new fuels and technologies on the environment and human health. Knowledge of the exhaust flow is one of the prerequisites for successful RDE measurement with PEMS. One of the simplest approaches for estimating the exhaust flow from virtually any engine is to compute it from the intake air flow, which is calculated from measured engine rpm and intake manifold charge pressure and temperature using a generic speed-density algorithm applicable to most contemporary four-cycle engines. In this work, a generic speed-density algorithm was compared against several reference methods on representative European production engines - a gasoline port-injected automobile engine, two turbocharged diesel automobile engines, and a heavy-duty turbocharged diesel engine. The overall results suggest that the uncertainty of the generic speed-density method is on the order of 10% throughout most of the engine operating range, increasing to tens of percent where high-volume exhaust gas recirculation is used. For non-EGR engines, such uncertainty is acceptable for many simpler and screening measurements, and may, where desired, be reduced by engine-specific calibration.
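
    A generic speed-density calculation of intake air mass flow can be sketched as follows; the constant volumetric efficiency is a placeholder (practical implementations map it against speed and load), and no EGR correction is included.

```python
R_AIR = 287.05   # specific gas constant of dry air [J/(kg*K)]

def intake_air_mass_flow(rpm, map_kpa, iat_k, displacement_l, vol_eff=0.85):
    """Generic speed-density estimate of intake air mass flow for a four-stroke engine:
    charge density from the ideal gas law at manifold pressure/temperature, multiplied
    by the displacement swept per unit time and a volumetric efficiency factor."""
    rho = (map_kpa * 1000.0) / (R_AIR * iat_k)                 # charge density [kg/m^3]
    v_per_s = (displacement_l / 1000.0) * rpm / (2.0 * 60.0)   # [m^3/s]; 2 revs per cycle
    return vol_eff * rho * v_per_s                             # [kg/s]

# Example: 2.0 L engine at 2500 rpm, 95 kPa manifold pressure, 310 K charge temperature
# m_dot = intake_air_mass_flow(2500, 95.0, 310.0, 2.0)
```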

  14. Novel method for the simultaneous estimation of density and surface tension of liquids

    International Nuclear Information System (INIS)

    Thirunavukkarasu, G.; Srinivasan, G.J.

    2003-01-01

    The conventional Hare's apparatus generally used for the determination of the density of liquids has been modified by replacing its vertical arms (glass tubes) with capillary tubes of 30 cm length and 0.072 cm diameter. When the liquid columns are drawn up through the capillary tubes by reduced pressure at the top of the columns and held in equilibrium with the atmospheric pressure acting on the liquid surface outside the capillary tubes, the downward pressure due to gravity of the liquid columns has to be coupled with the pressure arising from the surface tension of the liquids. A new expression for the density and surface tension of liquids is obtained by equating the pressure balance for the two individual liquid columns of the modified Hare's apparatus. The experimental results showed that the proposed method is precise and accurate in the simultaneous estimation of density and surface tension of liquids, with an error of less than 5%.

  15. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    Science.gov (United States)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  16. Stochastic estimation of nuclear level density in the nuclear shell model: An application to parity-dependent level density in 58Ni

    Directory of Open Access Journals (Sweden)

    Noritaka Shimizu

    2016-02-01

    Full Text Available We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ=2+ and 2− states in 58Ni in a unified manner.

  17. How does spatial study design influence density estimates from spatial capture-recapture models?

    Directory of Open Access Journals (Sweden)

    Rahel Sollmann

    Full Text Available When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares using an SCR model accounting for differences in detection and movement between sexes and across trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated the analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array, and 20% in the south of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km2. Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible for designing studies targeting multiple species.

  18. Comparison of breast percent density estimation from raw versus processed digital mammograms

    Science.gov (United States)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.

  19. Assessing the impacts of climate change on future water resources: a methodological approach based on equiratio CDF-matching and vine copula

    Science.gov (United States)

    Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko E. C.

    2016-04-01

    In this study, the impacts of climate change on future river discharge are evaluated using equiratio CDF-matching and a stochastic copula-based evapotranspiration generator. In recent years, much effort has been dedicated to improving the performance of RCM outputs, i.e. the downscaled precipitation and temperature, for use in regional studies. However, these outputs usually suffer from bias because many important small-scale processes, e.g. the representation of clouds and convection, are not represented explicitly within the models. To address this problem, several bias correction techniques have been developed. In this study, an advanced quantile-based bias correction approach called equiratio cumulative distribution function matching (EQCDF) is applied to the outputs of three RCMs for central Belgium, i.e. daily precipitation, temperature and evapotranspiration, for the current (1961-1990) and future climate (2071-2100). The rescaled precipitation and temperature are then used to simulate evapotranspiration via a stochastic copula-based model in which the statistical dependence between evapotranspiration, temperature and precipitation is described by a three-dimensional vine copula. The simulated precipitation and stochastic evapotranspiration are then used to model discharge under present and future climate. For validation, observations of daily precipitation, temperature and evapotranspiration during 1961-1990 in Uccle, Belgium, are used. It is found that under the current climate the basic properties of discharge, e.g. mean and frequency distribution, are well modelled; however, there is an overestimation of the extreme discharges with return periods higher than 10 years. Under the future climate, compared with historical events, a considerable increase in discharge magnitude and in the number of extreme events is estimated for the study area in the period 2071-2100.
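
    A compact sketch of equiratio CDF matching with empirical quantiles is given below; it follows the general equiratio idea (scaling by the ratio of observed to modelled historical quantiles) and omits refinements such as wet-day frequency handling, so it should be read as an assumption-laden illustration rather than the exact procedure of the study.

```python
import numpy as np

def equiratio_cdf_matching(obs_hist, mod_hist, mod_fut):
    """Equiratio CDF matching: each future model value is scaled by the ratio of the
    observed to the modelled historical quantile, both evaluated at that value's
    non-exceedance probability within the future model distribution."""
    mod_fut = np.asarray(mod_fut, dtype=float)
    # empirical non-exceedance probability of each future value within its own series
    ranks = np.searchsorted(np.sort(mod_fut), mod_fut, side="right")
    p = ranks / (len(mod_fut) + 1.0)
    q_obs = np.quantile(obs_hist, p)       # observed historical quantiles
    q_mod = np.quantile(mod_hist, p)       # modelled historical quantiles
    return mod_fut * q_obs / np.maximum(q_mod, 1e-9)

# obs_hist, mod_hist: daily observed / modelled series for 1961-1990;
# mod_fut: modelled series for 2071-2100 (all hypothetical arrays here).
```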

  20. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  1. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach.

    Directory of Open Access Journals (Sweden)

    Jan Horbowy

    Full Text Available Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. Methods for the backward extension of analytical biomass assessments to years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock-density dependent if stock dynamics are governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
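
    One way to step a surplus production model backwards in time is sketched below for the classical Schaefer dynamics; given next year's biomass and this year's catch, the current biomass solves a quadratic equation. The parameters r and K are assumed inputs, and this generic construction is not necessarily the modified formulation used in the paper.

```python
import numpy as np

def backward_schaefer(b_next, catches, r, K):
    """Reconstruct past biomass under the Schaefer surplus production model
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t], stepping backwards in time:
    given B[t+1] and the catch C[t], B[t] solves a quadratic equation."""
    biomass = [b_next]
    b = b_next
    for c in reversed(catches):
        a2 = r / K                     # quadratic coefficients for B[t]
        a1 = -(1.0 + r)
        a0 = c + b
        disc = a1 ** 2 - 4.0 * a2 * a0
        if disc < 0:
            raise ValueError("no real solution; check r, K or the catch series")
        b = (-a1 - np.sqrt(disc)) / (2.0 * a2)   # smaller root is the feasible biomass
        biomass.append(b)
    return np.array(biomass[::-1])    # oldest year first

# Example (hypothetical values): reconstruct 1950-1969 biomass from a 1970 estimate
# b_hist = backward_schaefer(b_1970, catches_1950_1969, r=0.4, K=5.0e6)
```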

  2. Remote estimation of crown size and tree density in snowy areas

    Science.gov (United States)

    Kishi, R.; Ito, A.; Kamada, K.; Fukita, T.; Lahrita, L.; Kawase, Y.; Murahashi, K.; Kawamata, H.; Naruse, N.; Takahashi, Y.

    2017-12-01

    Precise estimation of tree density in forests helps us understand the amount of carbon dioxide fixed by plants. Aerial photographs have been used to count trees; aircraft campaigns, however, are expensive (~$50,000 per campaign flight), and the area that can be covered by drones is limited. In addition, previous studies estimating tree density from aerial photographs were performed in summer, which introduced a discrepancy of about 15% in the estimates due to overlapping leaves. Here, we propose a method to accurately estimate the number of forest trees from satellite images of snow-covered deciduous forest areas, using the ratio of branches to snow. The advantages of our method are as follows: 1) snow areas can be excluded easily due to their high reflectance; 2) tree branches overlap much less than leaves. Although our method can be used only in regions with snowfall, the snow-covered area of the world exceeds 12,800,000 km2, so our approach should play an important role in discussions of global warming. As a test area, we chose the forest near Mt. Amano in Iwate prefecture, Japan. First, we defined a new index, (Band1-Band5)/(Band1+Band5), suitable for distinguishing snow from tree trunks using the corresponding spectral reflectance data. Next, index values were tabulated while changing the ratio in 1% increments. From the satellite image analysis at 4 points, the ratio of snow to tree trunk was I: 61%, II: 65%, III: 66% and IV: 65%. To check the estimation, we used aerial photographs from Google Earth; the corresponding rates were I: 42.05%, II: 48.89%, III: 50.64% and IV: 49.05%. The two sets of values are correlated but differ; we will discuss this point in detail, focusing on the effect of shadows.
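
    The band-ratio step can be written in a few lines; the threshold separating snow from branch pixels is an assumption to be calibrated against reference imagery, and the band names simply mirror the index defined above.

```python
import numpy as np

def snow_branch_index(band1, band5):
    """Normalized-difference index (Band1 - Band5) / (Band1 + Band5), used here to
    separate snow from branches/trunks in winter satellite scenes."""
    band1 = band1.astype(float)
    band5 = band5.astype(float)
    return (band1 - band5) / np.maximum(band1 + band5, 1e-9)

def snow_fraction(band1, band5, threshold):
    """Fraction of pixels classified as snow; `threshold` is a placeholder value that
    would need calibration against reference imagery."""
    idx = snow_branch_index(band1, band5)
    return float(np.mean(idx > threshold))
```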

  3. Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.

    Science.gov (United States)

    Gutiérrez, David; Ramírez-Moreno, Mauricio A

    2016-04-01

    We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated with the progression of the learning experience through functional analysis-of-variance (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures that the volunteer is highly engaged in the training. Based on this, a series of statistical tests is performed to determine the personalized frequencies and sensors at which changes in PSD occur; the FANOVA estimates are then computed and analyzed. Our experimental results showed a significant decrease in the power of the [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and this decrease occurs regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated with feature binding and memory encoding.

  4. Computing Conditional VaR using Time-varying Copulas

    Directory of Open Access Journals (Sweden)

    Beatriz Vaz de Melo Mendes

    2005-12-01

    Full Text Available The use of Value-at-Risk (VaR) as a canonical risk measure is now widespread. Most accurate VaR measures make use of some volatility model, such as GARCH-type models. However, the volatility dynamics of a portfolio follow from the (univariate) behavior of the risky assets, as well as from the type and strength of the associations among them. Moreover, the dependence structure among the components may change conditionally on past observations. Some papers have attempted to model this characteristic by assuming a multivariate GARCH model, by considering the conditional correlation coefficient, or by incorporating some possibility for switches in regimes. In this paper we address this problem using time-varying copulas. Our modeling strategy allows the margins to follow a FIGARCH-type model while the copula dependence structure changes over time.

  5. Breast Density Estimation with Fully Automated Volumetric Method: Comparison to Radiologists' Assessment by BI-RADS Categories.

    Science.gov (United States)

    Singh, Tulika; Sharma, Madhurima; Singla, Veenu; Khandelwal, Niranjan

    2016-01-01

    The objective of our study was to calculate mammographic breast density with a fully automated volumetric breast density measurement method and to compare it to breast imaging reporting and data system (BI-RADS) breast density categories assigned by two radiologists. A total of 476 full-field digital mammography examinations with standard mediolateral oblique and craniocaudal views were evaluated by two blinded radiologists and BI-RADS density categories were assigned. Using fully automated software, mean fibroglandular tissue volume, mean breast volume, and mean volumetric breast density were calculated. Based on percentage volumetric breast density, a volumetric density grade was assigned from 1 to 4. The weighted overall kappa was 0.895 (almost perfect agreement) for the two radiologists' BI-RADS density estimates. A statistically significant difference was seen in mean volumetric breast density among the BI-RADS density categories, with mean volumetric breast density increasing with BI-RADS density category. Volumetric density grading by the fully automated software correlated well with the BI-RADS categories (ρ = 0.728), while agreement between the volumetric density grade and the BI-RADS density category assigned by the two observers was fair (κ = 0.398 and 0.388, respectively). In our study, a good correlation was seen between density grading using the fully automated volumetric method and BI-RADS density categories assigned by the two radiologists. Thus, the fully automated volumetric method may be used to quantify breast density on routine mammography. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  6. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.

  7. Comparison of volatility function technique for risk-neutral densities estimation

    Science.gov (United States)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, which uses an interpolation approach, plays an important role in extracting the risk-neutral density (RND) from options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatilities of options are interpolated with respect to strike price/delta to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using the moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth-order polynomial is more appropriate than using a smoothing spline, as the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of the future developments of the underlying asset.

  8. Asymptotically Distribution-Free Goodness-of-Fit Testing for Copulas

    NARCIS (Netherlands)

    Can, S.U.; Einmahl, John; Laeven, R.J.A.

    2017-01-01

    Consider a random sample from a continuous multivariate distribution function F with copula C. In order to test the null hypothesis that C belongs to a certain parametric family, we construct a process that is asymptotically distribution-free under H0 and serves as a test generator. The process is a

  9. Kendall’s tau and agglomerative clustering for structure determination of hierarchical Archimedean copulas

    Czech Academy of Sciences Publication Activity Database

    Górecki, J.; Hofert, M.; Holeňa, Martin

    2017-01-01

    Roč. 5, č. 1 (2017), s. 75-87 ISSN 2300-2298 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords : structure determination * agglomerative clustering * Kendall’s tau * Archimedean copula Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability

  10. A model based on Copula Theory for sustainable and social responsible investments

    Directory of Open Access Journals (Sweden)

    Amelia Bilbao-Terol

    2016-01-01

    Full Text Available In this paper, a model is proposed that allows us to obtain a portfolio made up of sustainable and socially responsible (SR) investment funds. This portfolio tracks the one that investors might have chosen if they had not taken into account social, ethical and ecological (SEE) issues in their investment decisions. Therefore, in the first stage, a reference portfolio made up exclusively of conventional funds is obtained. For the construction of the conventional portfolio, Prospect Theory is used: net profits as the financial objective and the error function as the utility function. In the second stage, a portfolio consisting exclusively of SR funds is built. To do so, the reference portfolio is used as an ideal point, with the objectives of the SR investor being the relative wealth with respect to the reference portfolio and the SEE quality of the portfolio. The relative wealth is handled through a downside-risk measure, the Conditional Value at Risk (CVaR), and the periodic values of the portfolio. The second objective is the SR quality of the portfolio, taking into account the personal values of a particular investor; this is built using Fuzzy Set Theory tools. The result is a multi-objective problem, which is solved using Goal Programming methodology. The estimation of both the conventional and the SR markets is carried out with a semi-parametric approach, using copula theory to model the dependence structure of the assets' returns. The approach has been applied to a set of 38 conventional and 12 ethical funds domiciled in Spain.

  11. Uncertainty Visualization Using Copula-Based Analysis in Mixed Distribution Models.

    Science.gov (United States)

    Hazarika, Subhashis; Biswas, Ayan; Shen, Han-Wei

    2018-01-01

    Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in the visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called the copula, which makes it possible to separate the process of estimating the univariate marginals from the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, histogram, KDE etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in more cost-efficient modeling without significantly sacrificing analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real-world datasets. We also study various modeling criteria to help users in the task of univariate model selection.
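
    A small sketch of the core copula idea, i.e. a shared Gaussian dependence structure combined with per-location marginals of different types, is given below; the correlation matrix and marginal choices are illustrative assumptions, not taken from the paper's datasets.

```python
import numpy as np
from scipy import stats

def sample_mixed_marginals(corr, marginal_ppfs, n=1000, seed=0):
    """Sample a multivariate field with a Gaussian copula: correlated standard normals
    are mapped to uniforms via the normal CDF, then each dimension is pushed through
    its own inverse CDF (ppf), so every grid location can keep a different marginal."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(marginal_ppfs))) @ L.T
    u = stats.norm.cdf(z)                                # copula (uniform) samples
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

# Example: three grid locations with Gaussian, Gamma and empirical-histogram marginals
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])
ppfs = [stats.norm(0, 1).ppf,
        stats.gamma(2.0, scale=1.5).ppf,
        lambda u: np.quantile(np.random.default_rng(1).normal(5, 2, 500), u)]
samples = sample_mixed_marginals(corr, ppfs)
```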

  12. Influence of Sky Conditions on Estimation of Photosynthetic Photon Flux Density for Agricultural Ecosystem

    Science.gov (United States)

    Yamashita, M.; Yoshimura, M.

    2018-04-01

    Photosynthetic photon flux density (PPFD, µmol m-2 s-1) is indispensable for plant physiological processes in photosynthesis. However, PPFD is seldom measured, so it is usually estimated from solar radiation (SR, W m-2), which is measured worldwide. The SR-based method has two steps: first, photosynthetically active radiation (PAR, W m-2) is estimated from the fraction of PAR in SR (PF); second, PAR is converted to PPFD using the ratio of quanta to energy (Q/E, µmol J-1). PF and Q/E have usually been treated as constants; however, recent studies point out that they are not constant under varying sky conditions. In this study, we use numeric sky-condition factors such as cloud cover, sun appearance/occlusion and relative sky brightness derived from whole-sky image processing, and examine their influence on PF and Q/E for global and diffuse PAR. Furthermore, we discuss our results by comparison with existing methods.
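
    The two-step conversion can be sketched as follows; the constants are typical literature values (PF around 0.45-0.50 and Q/E around 4.57 µmol/J for PAR), used here only as placeholders for the sky-condition-dependent values investigated in the study.

```python
def ppfd_from_sr(sr_w_m2, pf=0.46, q_per_e=4.57):
    """Two-step estimate of PPFD from global solar radiation:
    PAR [W m-2] = PF * SR, then PPFD [umol m-2 s-1] = (Q/E) * PAR.
    pf and q_per_e are assumed typical values, not those derived in the study."""
    par = pf * sr_w_m2
    return q_per_e * par

# Example: SR of 600 W m-2 gives roughly 0.46 * 600 * 4.57 ≈ 1260 umol m-2 s-1
```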

  13. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    Science.gov (United States)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  14. Simultaneous estimation of neutron density and reactivity in a nuclear reactor using a bank of Kalman filters

    International Nuclear Information System (INIS)

    Cortina, E.; D'Atellis, C.E.

    1990-01-01

    This paper reports on the problem of simultaneously estimating neutron density and reactivity while operating a nuclear reactor. It is solved by using a bank of Kalman filters as an estimator and applying a probabilistic test to determine which filter of the bank has the best performance.

  15. A Simultaneous Density-Integral System for Estimating Stem Profile and Biomass: Slash Pine and Willow Oak

    Science.gov (United States)

    Bernard R. Parresol; Charles E. Thomas

    1996-01-01

    In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...

  16. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A. [Physics and Astronomy Department, University of California, Los Angeles, California 90095 (United States); Bobrek, M. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6006 (United States)

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  17. MR Imaging-based Estimation of Upper Motor Neuron Density in Patients with Amyotrophic Lateral Sclerosis: A Feasibility Study.

    Science.gov (United States)

    Chen, Jacqueline; Kostenko, Volodymyr; Pioro, Erik P; Trapp, Bruce D

    2018-01-23

    Purpose To determine if magnetic resonance (MR) imaging metrics can estimate primary motor cortex (PMC) motor neuron (MN) density in patients with amyotrophic lateral sclerosis (ALS). Materials and Methods Between 2012 and 2014, in situ brain MR imaging was performed in 11 patients with ALS (age range, 35-81 years; seven women and four men) soon after death (mean, 5.5 hours after death; range, 3.2-9.6 hours). The brain was removed, right PMC (RPMC) was excised, and MN density was quantified. RPMC metrics (thickness, volume, and magnetization transfer ratio) were calculated from MR images. Regression modeling was used to estimate MN density by using RPMC and global MR imaging metrics (brain and tissue volumes); clinical variables were subsequently evaluated as additional estimators. Models were tested at in vivo MR imaging by using the same imaging protocol (six patients with ALS; age range, 54-66 years; three women and three men). Results RPMC mean MN density varied over a greater than threefold range across patients and was estimated by a linear function of normalized gray matter volume (adjusted R² = 0.51; P = .008; <10% error in most patients). When considering only sporadic ALS, a linear function of normalized RPMC and white matter volumes estimated MN density (adjusted R² = 0.98; P = .01; <10% error in all patients). In vivo data analyses detected decreases in MN density over time. Conclusion PMC mean MN density varies widely in end-stage ALS possibly because of disease heterogeneity. MN density can potentially be estimated by MR imaging metrics. © RSNA, 2018 Online supplemental material is available for this article.

  18. Validity of anthropometric procedures to estimate body density and body fat percent in military men

    Directory of Open Access Journals (Sweden)

    Ciro Romélio Rodriguez-Añez

    1999-12-01

    Full Text Available The objective of this study was to verify the validity of the Katch and McArdle equation (1973), which uses the circumferences of the arm, forearm and abdomen to estimate body density, and of the procedure of Cohen (1986), which uses the circumferences of the neck and abdomen to estimate body fat percentage (%F), in military men. Data were collected from 50 military men, with a mean age of 20.26 ± 2.04 years, serving in Santa Maria, RS. The circumferences were measured according to the Katch and McArdle (1973) and Cohen (1986) procedures. The body density measured (Dm) by underwater weighing was used as the criterion, and its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using the Goldman and Becklake equation (1959). The %F was obtained with the Siri equation (1961), and its mean value was 12.70 ± 4.71%. The validation criterion suggested by Lohman (1992) was followed. The analysis of the results indicated that the procedure developed by Cohen (1986) has concurrent validity to estimate %F in military men, or in other samples with similar characteristics, with a standard error of estimate of 3.45%.
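
    The Siri (1961) conversion from body density to percent body fat used above is %F = (4.95/D − 4.50) × 100. A minimal sketch follows; the Katch and McArdle circumference equation itself is not reproduced because its coefficients are not given in this record.

        def siri_percent_fat(body_density_g_ml):
            """Siri (1961): percent body fat from body density (g/ml)."""
            return (4.95 / body_density_g_ml - 4.50) * 100.0

        print(round(siri_percent_fat(1.0706), 1))   # ~12.4%, close to the 12.70% sample mean above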

  19. Impacts of Airborne Lidar Pulse Density on Estimating Biomass Stocks and Changes in a Selectively Logged Tropical Forest

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Silva

    2017-10-01

    Full Text Available Airborne lidar is a technology well-suited for mapping many forest attributes, including aboveground biomass (AGB) stocks and changes in selective logging in tropical forests. However, trade-offs still exist between lidar pulse density and the accuracy of AGB estimates. We assessed the impacts of lidar pulse density on the estimation of AGB stocks and changes using airborne lidar and field plot data in a selectively logged tropical forest located near Paragominas, Pará, Brazil. Field-derived AGB was computed at 85 square 50 × 50 m plots in 2014. Lidar data were acquired in 2012 and 2014, and for each dataset the pulse density was subsampled from its original density of 13.8 and 37.5 pulses·m−2 to lower densities of 12, 10, 8, 6, 4, 2, 0.8, 0.6, 0.4 and 0.2 pulses·m−2. For each pulse density dataset, a power-law model was developed to estimate AGB stocks from lidar-derived mean height, and the corresponding changes between the years 2012 and 2014. We found that AGB change estimates at the plot level were only slightly affected by pulse density. However, at the landscape level we observed differences in estimated AGB change of >20 Mg·ha−1 when pulse density decreased from 12 to 0.2 pulses·m−2. The effects of pulse density were more pronounced in areas of steep slope, especially when the digital terrain models (DTMs) used to derive lidar forest height were created from reduced-pulse-density data. In particular, when the DTM from the high pulse density in 2014 was used to derive the forest height for both years, the effects on forest height and on the estimated AGB stocks and changes did not exceed 20 Mg·ha−1. The results suggest that AGB change can be monitored in selective logging in tropical forests with reasonable accuracy and low cost with low-pulse-density lidar surveys if a baseline high-quality DTM is available from at least one lidar survey. We recommend the results of this study to be considered in developing projects and national...

  20. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.

    Science.gov (United States)

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-03-29

    In this paper, in order to describe complex network systems, we first propose a general modeling framework by combining a dynamic graph with hybrid automata, and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of the dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze the types and number of modes in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented through a decentralized modeling approach and distributed observer design in future research.

  1. Scattered image artifacts from cone beam computed tomography and its clinical potential in bone mineral density estimation.

    Science.gov (United States)

    Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of using information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest, bone-equivalent solid material containing 50 mg HA per cm³ and water, to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those of osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.

  2. Measuring Value-at-Risk and Expected Shortfall of crude oil portfolio using extreme value theory and vine copula

    Science.gov (United States)

    Yu, Wenhua; Yang, Kun; Wei, Yu; Lei, Likun

    2018-01-01

    Volatilities of crude oil prices have important impacts on the steady and sustainable development of the world real economy. Thus it is of great academic and practical significance to model and measure the volatility and risk of crude oil markets accurately. This paper aims to measure the Value-at-Risk (VaR) and Expected Shortfall (ES) of a portfolio consisting of four crude oil assets by using GARCH-type models, extreme value theory (EVT) and vine copulas. The backtesting results show that the combination of GARCH-type-EVT models and vine copula methods can produce accurate risk measures of the oil portfolio. The mixed R-vine copula is more flexible and superior to the other vine copulas. Different GARCH-type models, which can depict the long-memory and/or leverage effects of oil price volatility, however, offer similar marginal distributions of the oil returns.
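
    A simplified sketch of the general workflow (model the marginals, model the dependence with a copula, simulate, then read VaR and ES off the simulated loss distribution). For brevity it substitutes empirical marginals for the GARCH-type-EVT marginals and a Gaussian copula for the vine copulas used in the paper; the return data and portfolio weights are synthetic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Illustrative return histories for four crude oil assets (synthetic stand-ins).
        returns = rng.standard_t(5, size=(1500, 4)) * 0.02
        weights = np.full(4, 0.25)

        # 1) Marginals: plain empirical CDFs here (the paper uses GARCH-type filters + EVT tails).
        u_hist = stats.rankdata(returns, axis=0) / (len(returns) + 1)

        # 2) Dependence: a Gaussian copula fitted by moments (the paper uses vine copulas).
        corr = np.corrcoef(stats.norm.ppf(u_hist), rowvar=False)
        z = rng.multivariate_normal(np.zeros(4), corr, size=100_000)
        u_sim = stats.norm.cdf(z)

        # 3) Map simulated uniforms back through the empirical marginals and aggregate.
        sim_returns = np.column_stack(
            [np.quantile(returns[:, j], u_sim[:, j]) for j in range(4)]
        )
        losses = -(sim_returns @ weights)

        alpha = 0.99
        var = np.quantile(losses, alpha)
        es = losses[losses >= var].mean()
        print(f"VaR(99%) = {var:.4f}, ES(99%) = {es:.4f}")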

  3. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to the data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  4. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Shangjie [Tianjin Key Laboratory of Process Measurement and Control, School of Electrical Engineering and Automation, Tianjin University, Tianjin (China); Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Hara, Wendy; Wang, Lei; Buyyounouski, Mark K.; Le, Quynh-Thu; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Li, Ruijiang, E-mail: rli2@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States)

    2017-03-15

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.

  5. Improving incidence estimation in practice-based sentinel surveillance networks using spatial variation in general practitioner density

    Directory of Open Access Journals (Sweden)

    Cécile Souty

    2016-11-01

    Full Text Available Abstract Background In surveillance networks based on voluntary participation of health-care professionals, there is little choice regarding the selection of participants' characteristics. External information about participants, for example local physician density, can help reduce bias in incidence estimates reported by the surveillance network. Methods There is an inverse association between the number of reported influenza-like illness (ILI) cases and local general practitioner (GP) density. We formulated and compared estimates of ILI incidence using this relationship. To compare estimates, we simulated epidemics using a spatially explicit disease model and their observation by surveillance networks with different characteristics: random, maximum coverage, largest cities, etc. Results In the French practice-based surveillance network – the "Sentinelles" network – GPs reported 3.6% (95% CI [3; 4]) fewer ILI cases for each increase of 1 GP per 10,000 inhabitants in local GP density. Incidence estimates varied markedly depending on the scenario for participant selection in surveillance, yet accounting for the GP density of participants allowed the bias to be reduced. Applied to data from the Sentinelles network, changes in overall incidence ranged between 1.6 and 9.9%. Conclusions Local GP density is a simple measure that provides a way to reduce bias in estimating disease incidence in general practice. It can contribute to improving disease monitoring when it is not possible to choose the characteristics of participants.
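
    A hypothetical sketch of how the reported association could be used as a correction: reported counts are rescaled to a reference GP density using the 3.6% per-GP effect quoted above. The multiplicative form, the function name and the numbers are assumptions for illustration, not the estimator actually used in the paper.

        def adjust_reported_cases(reported, gp_density, reference_density, effect=0.036):
            """Rescale reported ILI counts to a reference GP density.

            effect: proportional drop in reports per additional GP per 10,000 inhabitants
                    (0.036 corresponds to the 3.6% association quoted above).
            A multiplicative form is assumed here purely for illustration.
            """
            factor = (1.0 - effect) ** (gp_density - reference_density)
            return reported / factor

        # A practice in an area with 9 GPs per 10,000 inhabitants vs. a reference of 7:
        print(round(adjust_reported_cases(120, gp_density=9, reference_density=7), 1))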

  6. Estimating density of a rare and cryptic high-mountain Galliform species, the Buff-throated Partridge Tetraophasis szechenyii

    Directory of Open Access Journals (Sweden)

    Yu Xu

    2016-06-01

    Full Text Available Estimates of abundance or density are essential for wildlife management and conservation. There are few effective density estimates for the Buff-throated Partridge Tetraophasis szechenyii, a rare and elusive high-mountain Galliform species endemic to western China. In this study, we used the temporary emigration N-mixture model to estimate the density of this species, with data acquired from playback point-count surveys around a sacred area, protected under the indigenous Tibetan culture of wildlife protection, in Yajiang County, Sichuan, China, during April-June 2009. Within 84 points of 125-m radius, we recorded 53 partridge groups during three repeat visits. The best model indicated that detection probability was described by covariates of vegetation cover type, week of visit, time of day, and weather, with weak effects, and that a partridge group was present during a sampling period with a constant probability. The abundance component was accounted for by vegetation association. Abundance was substantially higher in rhododendron shrubs, fir-larch forests, mixed spruce-larch-birch forests, and especially oak thickets than in pine forests. The model predicted a density of 5.14 groups/km², which is similar to an estimate of 4.7 - 5.3 groups/km² quantified via an intensive spot-mapping effort. The post-hoc estimate of individual density was 14.44 individuals/km², based on the estimated mean group size of 2.81. We suggest that the method we employed is applicable for estimating densities of Buff-throated Partridges over large areas. Given the importance of a mosaic habitat for this species, local logging should be regulated. Despite finding no effect of the (sacred) conservation area on the abundance of Buff-throated Partridges, we suggest regulations linking the sacred mountain conservation area with the official conservation system because of the strong local participation in land conservation facilitated by sacred mountains.

  7. Estimation of immune cell densities in immune cell conglomerates: an approach for high-throughput quantification.

    Directory of Open Access Journals (Sweden)

    Niels Halama

    2009-11-01

    Full Text Available Determining the correct number of positive immune cells in immunohistological sections of colorectal cancer and other tumor entities is emerging as an important clinical predictor and therapy selector for an individual patient. This task is usually obstructed by cell conglomerates of various sizes. We here show that, at least in colorectal cancer, the inclusion of immune cell conglomerates is indispensable for estimating reliable patient cell counts. Integrating virtual microscopy and image processing in principle allows the high-throughput evaluation of complete tissue slides. For such large-scale systems we demonstrate a robust quantitative image processing algorithm for the reproducible quantification of cell conglomerates on CD3 positive T cells in colorectal cancer. While isolated cells (28 to 80 µm²) are counted directly, the number of cells contained in a conglomerate is estimated by dividing the area of the conglomerate in thin tissue sections (≤6 µm) by the median area covered by an isolated T cell, which we determined to be 58 µm². We applied our algorithm to large numbers of CD3 positive T cell conglomerates and compared the results to cell counts obtained manually by two independent observers. While, especially for high cell counts, the manual counting showed a deviation of up to 400 cells/mm² (41% variation), algorithm-determined T cell numbers generally lay in between the manually observed cell numbers, but with perfect reproducibility. In summary, we recommend our approach as an objective and robust strategy for quantifying immune cell densities in immunohistological sections which can be directly implemented into automated full slide image processing systems.
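
    The counting rule described above is simple enough to state directly: isolated cells are counted one by one, and each conglomerate contributes its area divided by the median isolated T-cell area of 58 µm². The function and the example numbers below are illustrative only.

        def estimate_cell_count(isolated_count, conglomerate_areas_um2, median_cell_area_um2=58.0):
            """Isolated cells are counted directly; each conglomerate contributes its
            area divided by the median area of an isolated T cell (58 um^2 above)."""
            from_conglomerates = sum(a / median_cell_area_um2 for a in conglomerate_areas_um2)
            return isolated_count + from_conglomerates

        print(estimate_cell_count(isolated_count=240, conglomerate_areas_um2=[350.0, 1200.0, 90.0]))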

  8. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Science.gov (United States)

    Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry

    2011-10-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  9. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Directory of Open Access Journals (Sweden)

    Hyun Joo

    2011-10-01

    Full Text Available Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM. Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å, this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  10. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity

    Science.gov (United States)

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct test of sampling to the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638

  11. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
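
    A generic rejection-ABC loop, shown only to make the overall fitting scheme concrete: draw parameters from a prior, simulate data, and accept draws whose summary statistics fall within a tolerance of the observed summaries. The toy model, summaries, prior ranges and fixed tolerance are assumptions; the paper's adaptive tolerance scheme and kernel-density-based comparison are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)

        def simulate(params, n=200):
            """Toy overdispersed count model (illustrative only, not the lymphatic filariasis model)."""
            mean, dispersion = params
            p = dispersion / (dispersion + mean)
            return rng.negative_binomial(dispersion, p, size=n)

        observed = simulate((8.0, 2.0))
        obs_summary = np.array([observed.mean(), observed.var()])

        accepted = []
        tolerance = 0.3                      # a fixed tolerance; the paper adapts this over iterations
        for _ in range(20_000):
            theta = (rng.uniform(1, 20), rng.uniform(0.5, 5))    # draws from a uniform prior
            sim = simulate(theta)
            summary = np.array([sim.mean(), sim.var()])
            if np.linalg.norm((summary - obs_summary) / obs_summary) < tolerance:
                accepted.append(theta)

        accepted = np.array(accepted)
        print("approximate posterior mean:", accepted.mean(axis=0))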

  12. Semivariogram models for estimating fig fly population density throughout the year

    Directory of Open Access Journals (Sweden)

    Mauricio Paulo Batistella Pasini

    2014-07-01

    Full Text Available The objective of this work was to select semivariogram models to estimate the population density of the fig fly (Zaprionus indianus; Diptera: Drosophilidae) throughout the year, using ordinary kriging. Nineteen monitoring sites were demarcated in an area of 8,200 m², cropped with six fruit tree species: persimmon, citrus, fig, guava, apple, and peach. During a 24-month period, 106 weekly evaluations were done at these sites. The average number of adult fig flies captured weekly per trap, during each month, was subjected to the circular, spherical, pentaspherical, exponential, Gaussian, rational quadratic, hole effect, K-Bessel, J-Bessel, and stable semivariogram models, using ordinary kriging interpolation. The models with the best fit were selected by cross-validation. Each data set (month) has a particular spatial dependence structure, which makes it necessary to define specific semivariogram models in order to enhance the adjustment to the experimental semivariogram. Therefore, it was not possible to determine a standard semivariogram model; instead, six theoretical models were selected: circular, Gaussian, hole effect, K-Bessel, J-Bessel, and stable.

  13. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    Science.gov (United States)

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  14. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
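
    To make the copula ingredient concrete, the sketch below evaluates a joint ("OR") exceedance probability and return period for two flood variables under a Gumbel copula, using the closed form C(u, v) = exp(−[(−ln u)^θ + (−ln v)^θ]^(1/θ)). The copula family, parameter and marginal probabilities are illustrative; the non-stationary, cost-benefit machinery of NSCOBE is not reproduced here.

        import numpy as np

        def gumbel_copula_cdf(u, v, theta):
            """Gumbel copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1."""
            return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta))

        # Marginal non-exceedance probabilities of a flood peak and a flood volume
        # (illustrative values, e.g. their 50-year design quantiles).
        u, v = 0.98, 0.98
        theta = 2.0                                  # illustrative dependence strength

        # "OR" joint exceedance: at least one variable exceeds its design value.
        p_or = 1.0 - gumbel_copula_cdf(u, v, theta)
        print("joint exceedance prob.:", round(p_or, 4), " return period:", round(1.0 / p_or, 1), "years")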

  15. Body fat assessed from body density and estimated from skinfold thickness in normal children and children with cystic fibrosis.

    Science.gov (United States)

    Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A

    1988-12-01

    Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thickness. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density even though their large residual volume was corrected for. However the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus body fat percent of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.

  16. Optical Depth Estimates and Effective Critical Densities of Dense Gas Tracers in the Inner Parts of Nearby Galaxy Discs

    OpenAIRE

    Jimenez-Donaire, M. J.; Bigiel, F.; Leroy, A. K.; Cormier, D.; Gallagher, M.; Usero, A.; Bolatto, A.; Colombo, D.; Garcia-Burillo, S.; Hughes, A.; Kramer, C.; Krumholz, M. R.; Meier, D. S.; Murphy, E.; Pety, J.

    2016-01-01

    High critical density molecular lines like HCN(1-0) or HCO+(1-0) represent our best tool to study currently star-forming, dense molecular gas at extragalactic distances. The optical depth of these lines is a key ingredient to estimate the effective density required to excite emission. However, constraints on this quantity are even scarcer in the literature than measurements of the high density tracers themselves. Here, we combine new observations of HCN, HCO+ and HNC(1-0) and their optically ...

  17. Improving the peak power density estimation for the DNBR trip signal

    International Nuclear Information System (INIS)

    Moreira, Joao M. L.; Souza, Rose Mary G.P.

    2002-01-01

    The departure from nucleate boiling (DNB) core protection in PWR reactors is usually carried out through the overtemperature trip or the instantaneous minimum DNB ratio (DNBR) trip. The protection is obtained through specialized correlations or fast digital computer simulators that infer the core power level and local coolant thermal and flow conditions from process variables furnished by the instrumentation. The power density distribution information is usually expressed in terms of F_q, the power peak factor, and its location. F_q, in turn, can be determined through the control rod positions or, more often, through the power axial offset (AO): F_q = f(AO, control rod positions). The AO, defined from the difference between the upper and lower long ion chamber signals, is supplied for each channel by separate sets of out-of-core detectors positioned 90 or 120 degrees apart in plan. The AO is given by AO = (S_t - S_b)/(S_t + S_b), where S_t and S_b are the out-of-core signals from the top and bottom sections, respectively. In current PWRs a large penalty is imposed on the result of the first equation because of the difficulty of inferring the peak factor with good accuracy from the AO obtained from the out-of-core instrumentation. This ends up reducing the plant capacity factor. In this work, the f function in the first equation, which correlates the power peak factor with the axial offset yielded by the out-of-core detectors and the control rod positions, is obtained through a combination of specific experiments in the IPEN/MB-01 zero-power reactor and calculation results. To improve the peak factor estimation, it is necessary to consider accurately the response of the out-of-core detectors to different power density distributions in the core. This task is not easily accomplished through calculation due to the difficulties involved in the neutron transport treatment necessary for the out-of-core detector responses.

  18. Estimation of the exchange current density and comparative analysis of morphology of electrochemically produced lead and zinc deposits

    Directory of Open Access Journals (Sweden)

    Nikolić Nebojša D.

    2017-01-01

    Full Text Available The processes of lead and zinc electrodeposition from very dilute electrolytes were compared by analysis of the polarization characteristics and by scanning electron microscopic (SEM) analysis of the morphology of the deposits obtained in the galvanostatic regime of electrolysis. The exchange current densities for lead and zinc were estimated by comparing the experimentally obtained polarization curves with simulated ones obtained for different ratios of the exchange current density to the limiting diffusion current density. Using this approach, it is shown that the exchange current density for Pb is more than 1300 times higher than that for Zn. In this way, it is confirmed that the Pb electrodeposition processes are considerably faster than the Zn electrodeposition processes. The difference in the rate of the electrochemical processes was confirmed by a comparison of the morphologies of lead and zinc deposits obtained at current densities corresponding to 0.25 and 0.50 of the limiting diffusion current densities. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 172046]

  19. On the expected value and variance for an estimator of the spatio-temporal product density function

    DEFF Research Database (Denmark)

    Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge

    Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed-form expressions are derived under the Poisson case. A detailed simulation study is presented to compare our closed-form expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...

  20. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic based classifiers such as decision trees, whose constructed models are interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to that delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE.

  1. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM Using Monte Carlo Simulation.

    Directory of Open Access Journals (Sweden)

    Md Nabiul Islam Khan

    Full Text Available In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and in empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²) and not 12N/(π ∑ R²), that of PCQM2 is 4(8N - 1)/(π ∑ R²) and not 28N/(π ∑ R²), and that of PCQM3 is 4(12N - 1)/(π ∑ R²) and not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator equation and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages, including those with a repulsion process.
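
    The corrected estimators quoted above translate directly into code. In the sketch below, distances is the pooled list of quadrant distances (4 per sample point) to the order-th nearest plant; the function name and usage are illustrative.

        import math

        def pcqm_density(distances, order=1):
            """Corrected PCQM density estimators: 4(kN - 1) / (pi * sum(R^2)),
            with k = 4, 8, 12 for PCQM1, PCQM2, PCQM3 respectively."""
            n_points = len(distances) / 4.0          # N sample points, 4 quadrants each
            k = {1: 4, 2: 8, 3: 12}[order]
            return 4.0 * (k * n_points - 1.0) / (math.pi * sum(r ** 2 for r in distances))

        # e.g. 50 sample points -> 200 nearest-plant distances (in metres), PCQM1:
        # density_per_m2 = pcqm_density(measured_distances, order=1)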

  2. A regime-switching copula approach to modeling day-ahead prices in coupled electricity markets

    DEFF Research Database (Denmark)

    Pircalabu, Anca; Benth, Fred Espen

    2017-01-01

    The recent price coupling of many European electricity markets has triggered a fundamental change in the interaction of day-ahead prices, additionally challenging the modeling of the joint behavior of prices in interconnected markets. In this paper we propose a regime-switching AR–GARCH copula to model pairs of day-ahead electricity prices in coupled European markets. While capturing key stylized facts empirically substantiated in the literature, this model easily allows us to 1) deviate from the assumption of normal margins and 2) include a more detailed description of the dependence between prices. We find significant evidence of tail dependence in all pairs of interconnected areas we consider. As a first application of the proposed model, we consider the pricing of financial transmission rights, and highlight how the choice of marginal distributions and copula impacts prices. As a second application we...

  3. A state-space modeling approach to estimating canopy conductance and associated uncertainties from sap flux density data

    Science.gov (United States)

    David M. Bell; Eric J. Ward; A. Christopher Oishi; Ram Oren; Paul G. Flikkema; James S. Clark; David Whitehead

    2015-01-01

    Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as...

  4. Large Portfolio Risk Management and Optimal Portfolio Allocation with Dynamic Copulas

    OpenAIRE

    Thorsten Lehnert; Xisong Jin

    2011-01-01

    Previous research focuses on the importance of modeling the multivariate distribution for optimal portfolio allocation and active risk management. However, available dynamic models are not easily applied to high-dimensional problems due to the curse of dimensionality. In this paper, we extend the framework of the Dynamic Conditional Correlation/Equicorrelation and an extreme value approach into a series of Dynamic Conditional Elliptical Copulas. We investigate risk measures like Value at Risk...

  5. A simple non-parametric goodness-of-fit test for elliptical copulas

    Directory of Open Access Journals (Sweden)

    Jaser Miriam

    2017-12-01

    Full Text Available In this paper, we propose a simple non-parametric goodness-of-fit test for elliptical copulas of any dimension. It is based on the equality of Kendall’s tau and Blomqvist’s beta for all bivariate margins. Nominal level and power of the proposed test are investigated in a Monte Carlo study. An empirical application illustrates our goodness-of-fit test at work.
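
    The two quantities whose equality the test exploits are easy to compute from a bivariate sample, as in the sketch below; for an elliptical copula with correlation parameter ρ both equal (2/π)·arcsin(ρ). The simulated data are illustrative, and the formal test statistic and critical values of the paper are not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Bivariate sample from a Gaussian copula (an elliptical copula), rho = 0.6.
        rho = 0.6
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
        x, y = z[:, 0], z[:, 1]

        tau = stats.kendalltau(x, y)[0]

        # Sample Blomqvist's beta: concordance with respect to the componentwise medians.
        sign_x = np.sign(x - np.median(x))
        sign_y = np.sign(y - np.median(y))
        beta = np.mean(sign_x * sign_y)

        # Under an elliptical copula both should be close to (2/pi) * arcsin(rho).
        print(round(tau, 3), round(beta, 3), round(2 / np.pi * np.arcsin(rho), 3))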

  6. Kendall’s tau and agglomerative clustering for structure determination of hierarchical Archimedean copulas

    Directory of Open Access Journals (Sweden)

    Górecki J.

    2017-01-01

    Full Text Available Several successful approaches to structure determination of hierarchical Archimedean copulas (HACs proposed in the literature rely on agglomerative clustering and Kendall’s correlation coefficient. However, there has not been presented any theoretical proof justifying such approaches. This work fills this gap and introduces a theorem showing that, given the matrix of the pairwise Kendall correlation coefficients corresponding to a HAC, its structure can be recovered by an agglomerative clustering technique.

  7. C-Vine copula mixture model for clustering of residential electrical load pattern data

    OpenAIRE

    Sun, M; Konstantelos, I; Strbac, G

    2016-01-01

    The ongoing deployment of residential smart meters in numerous jurisdictions has led to an influx of electricity consumption data. This information presents a valuable opportunity to suppliers for better understanding their customer base and designing more effective tariff structures. In the past, various clustering methods have been proposed for meaningful customer partitioning. This paper presents a novel finite mixture modeling framework based on C-vine copulas (CVMM) for carrying out cons...

  8. Estimating black bear density in New Mexico using noninvasive genetic sampling coupled with spatially explicit capture-recapture methods

    Science.gov (United States)

    Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.

    2016-01-01

    During the 2004–2005 to 2015–2016 hunting seasons, the New Mexico Department of Game and Fish (NMDGF) estimated black bear (Ursus americanus) abundance across the state by coupling density estimates with the distribution of primary habitat generated by Costello et al. (2001). These estimates have been used to set harvest limits. For example, a density of 17 bears/100 km² for the Sangre de Cristo and Sacramento Mountains and 13.2 bears/100 km² for the Sandia Mountains were used to set harvest levels. The advancement and widespread acceptance of non-invasive sampling and mark-recapture methods prompted the NMDGF to collaborate with the New Mexico Cooperative Fish and Wildlife Research Unit and New Mexico State University to update their density estimates for black bear populations in select mountain ranges across the state. We established 5 study areas in 3 mountain ranges: the northern (NSC; sampled in 2012) and southern Sangre de Cristo Mountains (SSC; sampled in 2013), the Sandia Mountains (Sandias; sampled in 2014), and the northern (NSacs) and southern Sacramento Mountains (SSacs; both sampled in 2014). We collected hair samples from black bears using two concurrent non-invasive sampling methods, hair traps and bear rubs. We used a gender marker and a suite of microsatellite loci to determine the individual identification of hair samples that were suitable for genetic analysis. We used these data to generate mark-recapture encounter histories for each bear and estimated density in a spatially explicit capture-recapture (SECR) framework. We constructed a suite of SECR candidate models using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We used Akaike's Information Criterion corrected for small sample size (AICc) to rank and select the most supported model from which we estimated density. We set 554 hair traps and 117 bear rubs and collected 4,083 hair samples.

  9. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper, register-based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived.

  10. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  11. Use of ordinary kriging and Gaussian conditional simulation to interpolate airborne fire radiative energy density estimates

    Science.gov (United States)

    C. Klauberg; A. T. Hudak; B. C. Bright; L. Boschetti; M. B. Dickinson; R. L. Kremens; C. A. Silva

    2018-01-01

    Fire radiative energy density (FRED, J m⁻²) integrated from fire radiative power density (FRPD, W m⁻²) observations of landscape-level fires can present an undersampling problem when collected from fixed-wing aircraft. In the present study, the aircraft made multiple passes over the fire at ~3 min intervals, thus failing to observe most of the FRPD emitted as the flame...

  12. Estimating tree bole and log weights from green densities measured with the Bergstrom Xylodensimeter.

    Science.gov (United States)

    Dale R. Waddell; Michael B. Lambert; W.Y. Pong

    1984-01-01

    The performance of the Bergstrom xylodensimeter, designed to measure the green density of wood, was investigated and compared with a technique that derived green densities from wood disk samples. In addition, log and bole weights of old-growth Douglas-fir and western hemlock were calculated by various formulas and compared with lifted weights measured with a load cell...

  13. A copula-based sampling method for data-driven prognostics

    International Nuclear Information System (INIS)

    Xi, Zhimin; Jing, Rong; Wang, Pingfeng; Hu, Chao

    2014-01-01

    This paper develops a Copula-based sampling method for data-driven prognostics. The method essentially consists of an offline training process and an online prediction process: (i) the offline training process builds a statistical relationship between the failure time and the time realizations at specified degradation levels on the basis of off-line training data sets; and (ii) the online prediction process identifies probable failure times for online testing units based on the statistical model constructed in the offline process and the online testing data. Our contributions in this paper are three-fold, namely the definition of a generic health index system to quantify the health degradation of an engineering system, the construction of a Copula-based statistical model to learn the statistical relationship between the failure time and the time realizations at specified degradation levels, and the development of a simulation-based approach for the prediction of remaining useful life (RUL). Two engineering case studies, namely the electric cooling fan health prognostics and the 2008 IEEE PHM challenge problem, are employed to demonstrate the effectiveness of the proposed methodology. - Highlights: • We develop a novel mechanism for data-driven prognostics. • A generic health index system quantifies health degradation of engineering systems. • Off-line training model is constructed based on the Bayesian Copula model. • Remaining useful life is predicted from a simulation-based approach
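
    The offline/online split described in this record rests on sampling failure times from a fitted copula. The sketch below is a minimal illustration of that idea using a Gaussian copula with made-up Weibull marginals, not the authors' Bayesian copula model; the degradation level, correlation and parameter values are assumptions chosen only to make the example run:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical dependence between the time to reach a fixed degradation level
# (t_deg) and the failure time (t_fail), expressed through a Gaussian copula.
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]

# Step 1: correlated standard normals mapped to uniforms (the copula sample).
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = stats.norm.cdf(z)

# Step 2: push the uniforms through made-up marginal distributions (hours).
t_deg = stats.weibull_min(c=2.0, scale=120.0).ppf(u[:, 0])
t_fail = stats.weibull_min(c=2.5, scale=200.0).ppf(u[:, 1])

# Step 3: given an observed degradation time for an online test unit, predict
# the remaining useful life (RUL) from the conditional failure-time sample.
observed = 110.0
rul = t_fail[np.abs(t_deg - observed) < 5.0] - observed
rul = rul[rul > 0]  # keep physically meaningful RUL values
print("median RUL estimate:", np.median(rul))
```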

  14. Quantification of the Water-Energy Nexus in Beijing City Based on Copula Analysis

    Science.gov (United States)

    Cai, J.; Cai, Y.

    2017-12-01

    Water resources and energy resources are intimately and highly interwoven, a coupling called the "water-energy nexus", which poses challenges for the sustainable management of both. In this research, copula analysis, a favorable tool for exploring dependence among random variables, is proposed for the first time in the "water-energy nexus" field to clarify the internal relationship between water and energy resources. Beijing City, the capital of China, is chosen as a case study. The marginal distribution functions of water resources and energy resources are analyzed first. Then a bivariate copula function is employed to construct the joint distribution function of the "water-energy nexus" and quantify the inherent relationship between water and energy resources. The results show that the lognormal distribution is more appropriate for the marginal distribution of water resources, while the Weibull distribution is more feasible for the marginal distribution of energy resources. Furthermore, the bivariate normal copula function is the most suitable for constructing the joint distribution function of the "water-energy nexus" in Beijing City. The findings can help to identify and quantify the "water-energy nexus" and can provide reasonable policy recommendations for the sustainable management of water and energy resources to promote coordinated regional development.
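
    A minimal sketch of the modelling steps named in this record (lognormal marginal for water, Weibull marginal for energy, bivariate normal copula for the joint distribution) is given below. The annual figures are invented placeholders, not Beijing data, and the fitting is deliberately simplistic:

```python
import numpy as np
from scipy import stats

# Hypothetical annual series (water use and energy use in arbitrary units).
water = np.array([35.1, 34.5, 36.0, 35.9, 38.2, 38.9, 39.5, 38.4])
energy = np.array([58.0, 60.2, 63.1, 65.5, 68.0, 69.3, 71.2, 69.9])

# Marginals suggested by the abstract: lognormal for water, Weibull for energy.
water_marg = stats.lognorm(*stats.lognorm.fit(water, floc=0))
energy_marg = stats.weibull_min(*stats.weibull_min.fit(energy, floc=0))

# Bivariate normal (Gaussian) copula, with its correlation estimated from the
# normal scores of the fitted marginal CDF values.
u = np.clip(water_marg.cdf(water), 1e-6, 1 - 1e-6)
v = np.clip(energy_marg.cdf(energy), 1e-6, 1 - 1e-6)
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]
copula = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def joint_cdf(w, e):
    """P(water <= w, energy <= e) under the Gaussian copula model."""
    z = stats.norm.ppf([water_marg.cdf(w), energy_marg.cdf(e)])
    return copula.cdf(z)

print("P(water <= 38, energy <= 68) ~", round(joint_cdf(38.0, 68.0), 3))
```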

  15. A Copula Based Approach for Design of Multivariate Random Forests for Drug Sensitivity Prediction.

    Science.gov (United States)

    Haider, Saad; Rahman, Raziur; Ghosh, Souparno; Pal, Ranadip

    2015-01-01

    Modeling sensitivity to drugs based on genetic characterizations is a significant challenge in the area of systems medicine. Ensemble-based approaches such as Random Forests have been shown to perform well in both individual sensitivity prediction studies and team-science-based prediction challenges. However, Random Forests generate a deterministic predictive model for each drug based on the genetic characterization of the cell lines and ignore the relationship between different drug sensitivities during model generation. This application motivates the need for multivariate ensemble learning techniques that can increase prediction accuracy and improve variable importance ranking by incorporating the relationships between different output responses. In this article, we propose a novel cost criterion that captures the dissimilarity in the output response structure between the training data and node samples as the difference between the two empirical copulas. We illustrate that copulas are suitable for capturing the multivariate structure of output responses independent of the marginal distributions and that the copula-based multivariate random forest framework can provide higher prediction accuracy and improved variable selection. The proposed framework has been validated on the Genomics of Drug Sensitivity in Cancer and the Cancer Cell Line Encyclopedia databases.
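
    The core of the proposed cost criterion is a comparison of empirical copulas between the parent sample and a candidate node sample. The sketch below shows one straightforward way such a comparison could be computed; the random-grid distance and the toy data are illustrative assumptions, not the authors' exact splitting rule:

```python
import numpy as np

def empirical_copula(Y, grid):
    """Empirical copula of a multivariate response Y (n samples x d outputs),
    evaluated at a set of grid points in [0, 1]^d."""
    n, d = Y.shape
    # Pseudo-observations: rescaled ranks in (0, 1).
    U = (np.argsort(np.argsort(Y, axis=0), axis=0) + 1) / (n + 1)
    # C_n(u) = (1/n) * #{i : U_i1 <= u_1, ..., U_id <= u_d}
    return np.array([(U <= u).all(axis=1).mean() for u in grid])

def copula_distance(Y_parent, Y_node, n_grid=200, seed=0):
    """Dissimilarity between the output-dependence structure of the parent
    sample and a candidate node sample, as the average absolute difference
    of their empirical copulas over random grid points."""
    rng = np.random.default_rng(seed)
    grid = rng.uniform(size=(n_grid, Y_parent.shape[1]))
    return np.abs(empirical_copula(Y_parent, grid)
                  - empirical_copula(Y_node, grid)).mean()

# Toy example with two correlated drug-sensitivity outputs.
rng = np.random.default_rng(1)
Y = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=300)
left, right = Y[:150], Y[150:]
print("dissimilarity (left split):", copula_distance(Y, left))
```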

  16. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  17. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  18. Direct estimation of functionals of density operators by local operations and classical communication

    International Nuclear Information System (INIS)

    Alves, Carolina Moura; Horodecki, Pawel; Oi, Daniel K. L.; Kwek, L. C.; Ekert, Artur K.

    2003-01-01

    We present a method of direct estimation of important properties of a shared bipartite quantum state, within the "distant laboratories" paradigm, using only local operations and classical communication. We apply this procedure to spectrum estimation of shared states, and locally implementable structural physical approximations to incompletely positive maps. This procedure can also be applied to the estimation of channel capacity and measures of entanglement.

  19. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2011-01-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment

  20. PDE-Foam - a probability-density estimation method using self-adapting phase-space binning

    CERN Document Server

    Dannheim, Dominik; Voigt, Alexander; Grahn, Karl-Johan; Speckmayer, Peter

    2009-01-01

    Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. To efficiently use large event samples to estimate the probability density, a binary search tree (range searching) is used in the PDE-RS implementation. It is a generalisation of standard likelihood methods and a powerful classification tool for problems with highly non-linearly correlated observables. In this paper, we present an innovative improvement of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space in a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multidimensional phase space, minimizing the variance of the signal and background densities inside the cells. The binned density information is stored in binary trees, allowing for a very ...

  1. Near surface bulk density estimates of NEAs from radar observations and permittivity measurements of powdered geologic material

    Science.gov (United States)

    Hickson, Dylan; Boivin, Alexandre; Daly, Michael G.; Ghent, Rebecca; Nolan, Michael C.; Tait, Kimberly; Cunje, Alister; Tsai, Chun An

    2018-05-01

    The variations in near-surface properties and regolith structure of asteroids are currently not well constrained by remote sensing techniques. Radar is a useful tool for such determinations of Near-Earth Asteroids (NEAs), as the power of the signal reflected from the surface depends on the bulk density, ρ_bd, and the dielectric permittivity. In this study, high precision complex permittivity measurements of powdered aluminum oxide and dunite samples are used to characterize the change in the real part of the permittivity with the bulk density of the sample. In this work, we use silica aerogel for the first time to increase the void space in the samples (and decrease the bulk density) without significantly altering the electrical properties. We fit various mixing equations to the experimental results. The Looyenga-Landau-Lifshitz mixing formula has the best fit, and the Lichtenecker mixing formula, which is typically used to approximate planetary regolith, does not model the results well. We find that the Looyenga-Landau-Lifshitz formula adequately matches lunar regolith permittivity measurements, and we incorporate it into an existing model for obtaining asteroid regolith bulk density from radar returns, which is then used to estimate the bulk density in the near surface of the NEAs (101955) Bennu and (25143) Itokawa. Constraints on the material properties appropriate for either asteroid give average estimates of ρ_bd = 1.27 ± 0.33 g/cm³ for Bennu and ρ_bd = 1.68 ± 0.53 g/cm³ for Itokawa. We conclude that our data suggest that the Looyenga-Landau-Lifshitz mixing model, in tandem with an appropriate radar scattering model, is the best method for estimating bulk densities of regoliths from radar observations of airless bodies.
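
    For readers unfamiliar with the Looyenga-Landau-Lifshitz rule, the sketch below shows the mixing formula and its inversion from an effective permittivity to a bulk density for solid grains in vacuum. The grain density, solid permittivity and radar-derived permittivity used here are hypothetical, not the values adopted for Bennu or Itokawa:

```python
import numpy as np

def looyenga_permittivity(phi, eps_solid, eps_host=1.0):
    """Looyenga-Landau-Lifshitz mixing rule for a two-phase mixture:
    eps_eff**(1/3) = phi*eps_solid**(1/3) + (1 - phi)*eps_host**(1/3)."""
    return (phi * eps_solid**(1/3) + (1 - phi) * eps_host**(1/3))**3

def bulk_density_from_permittivity(eps_eff, eps_solid, rho_solid):
    """Invert the Looyenga rule (solid grains in vacuum) to get bulk density:
    volume filling fraction phi = rho_bd / rho_solid."""
    phi = (eps_eff**(1/3) - 1.0) / (eps_solid**(1/3) - 1.0)
    return phi * rho_solid

# Hypothetical numbers: a regolith-like material with grain density 3.3 g/cm^3
# and solid permittivity 7.0, and a radar-derived effective permittivity of 2.4.
rho_bd = bulk_density_from_permittivity(eps_eff=2.4, eps_solid=7.0, rho_solid=3.3)
print(f"estimated near-surface bulk density ~ {rho_bd:.2f} g/cm^3")
```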

  2. Variability of footprint ridge density and its use in estimation of sex in forensic examinations.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Pathania, Annu; Sharma, Ruchika; DiMaggio, John A

    2015-10-01

    The present study deals with a comparatively new biometric parameter of footprints called footprint ridge density. The study attempts to evaluate sex-dependent variations in ridge density in different areas of the footprint and its usefulness in discriminating sex in the young adult population of north India. The sample for the study consisted of 160 young adults (121 females) from north India. The left and right footprints were taken from each subject according to the standard procedures. The footprints were analysed using a 5 mm × 5 mm square and the ridge density was calculated in four different well-defined areas of the footprints. These were: F1 - the great toe on its proximal and medial side; F2 - the medial ball of the footprint, below the triradius (the triradius is a Y-shaped group of ridges on finger balls, palms and soles which forms the basis of ridge counting in identification); F3 - the lateral ball of the footprint, towards the most lateral part; and F4 - the heel in its central part where the maximum breadth at heel is cut by a perpendicular line drawn from the most posterior point on heel. This value represents the number of ridges in a 25 mm² area and reflects the ridge density value. Ridge densities analysed on different areas of footprints were compared with each other using the Friedman test for related samples. The total footprint ridge density was calculated as the sum of the ridge density in the four areas of footprints included in the study (F1 + F2 + F3 + F4). The results show that the mean footprint ridge density was higher in females than males in all the designated areas of the footprints. The sex differences in footprint ridge density were observed to be statistically significant in the analysed areas of the footprint, except for the heel region of the left footprint. The total footprint ridge density was also observed to be significantly higher among females than males. A statistically significant correlation

  3. Correlation for the estimation of the density of fatty acid esters fuels and its implications. A proposed Biodiesel Cetane Index.

    Science.gov (United States)

    Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

    2010-09-01

    Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties are different from those of traditional fossil fuels, which makes their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, because governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones; in consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, because an accurate knowledge of biodiesel density permits the estimation of other properties such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 °C and to predict the biodiesel density based on its chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  4. RELATIVE AND ABSOLUTE DENSITY ESTIMATES OF LAND PLANARIANS (PLATYHELMINTHES, TRICLADIDA) IN URBAN RAINFOREST PATCHES

    Directory of Open Access Journals (Sweden)

    FERNANDO CARBAYO

    Full Text Available ABSTRACT Land planarians (Platyhelminthes) are likely important components of the soil cryptofauna, although relevant aspects of their ecology such as their density remain largely unstudied. We investigated absolute and relative densities of flatworms in three patches of secondary Brazilian Atlantic rainforest in an urban environment. Two methods of sampling were carried out, one consisting of 90 hours of active search in delimited plots covering 6,000 m² over a year, and the other consisting of leaf litter extraction from a 60 m² soil area, totaling 480–600 l of leaf litter. We found 288 specimens of 16 species belonging to the genera Geobia, Geoplana, Issoca, Luteostriata, Obama, Paraba, Pasipha, Rhynchodemus, Xerapoa, and the exotic species Bipalium kewense and Dolichoplana striata. Specimens up to 10 mm long were mostly sampled only with the leaf litter extraction method. Absolute densities, calculated from data obtained with leaf litter extraction, ranged between 1.25 and 2.10 individuals m⁻². These values are 30 to 161 times higher than the relative densities calculated from data obtained by active search. Since the most common sampling method used in land planarian studies on species composition and faunal inventories is active search for a few hours in a locality, our results suggest that small species might be overlooked. It remains to be tested whether similar densities of this cryptofauna are also found in primary forests.

  5. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

    The adaptive finite element method based on 'a posteriori error estimation' is known to be a powerful technique for analyzing practical engineering problems, since it removes the subjective element from mesh subdivision and gives high accuracy with relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on the control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, degrees of freedom and accuracy of the solution of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flows around a cylinder have shown the very high performance of the proposed adaptive procedure. (author)

  6. Comparison of precision orbit derived density estimates for CHAMP and GRACE satellites

    Science.gov (United States)

    Fattig, Eric Dale

    Current atmospheric density models cannot adequately represent the density variations observed by satellites in Low Earth Orbit (LEO). Using an optimal orbit determination process, precision orbit ephemerides (POE) are used as measurement data to generate corrections to density values obtained from existing atmospheric models. Densities obtained using these corrections are then compared to density data derived from the onboard accelerometers of satellites, specifically the CHAMP and GRACE satellites. This comparison takes two forms: cross correlation analysis and root mean square analysis. The densities obtained from the POE method are nearly always superior to the empirical models, both in matching the trends observed by the accelerometer (cross correlation) and in the magnitudes of the accelerometer-derived density (root mean square). In addition, this method consistently produces better results than those achieved by the High Accuracy Satellite Drag Model (HASDM). For satellites orbiting Earth that pass through Earth's upper atmosphere, drag is the primary source of uncertainty in orbit determination and prediction. Variations in density, which are often not modeled or are inaccurately modeled, cause difficulty in properly calculating the drag acting on a satellite. These density variations are the result of many factors; however, the Sun is the main driver in upper atmospheric density changes. The Sun influences the densities in Earth's atmosphere through solar heating of the atmosphere, as well as through geomagnetic heating resulting from the solar wind. Data are examined for fourteen-hour time spans between November 2004 and July 2009 for both the CHAMP and GRACE satellites. These data span all available levels of solar and geomagnetic activity, which does not include data in the elevated and high solar activity bins due to the nature of the solar cycle. Density solutions are generated from corrections to five different baseline atmospheric models, as well as
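
    The two comparison metrics named in this record (cross correlation of trends and root mean square of magnitudes) can be illustrated in a few lines of code. The series below are synthetic stand-ins, and the RMS ratio is just one plausible reading of the "root mean square analysis" described in the abstract:

```python
import numpy as np

def compare_density_series(rho_est, rho_ref):
    """Zero-lag cross correlation (trend agreement) and ratio of RMS values
    (magnitude agreement) between an estimated density series and an
    accelerometer-derived reference series."""
    cc = np.corrcoef(rho_est, rho_ref)[0, 1]
    rms_ratio = np.sqrt(np.mean(rho_est**2)) / np.sqrt(np.mean(rho_ref**2))
    return cc, rms_ratio

# Toy fourteen-hour series sampled every 30 s; values and noise are made up.
t = np.arange(0, 14 * 3600, 30)
rho_ref = 4e-13 * (1 + 0.3 * np.sin(2 * np.pi * t / 5400))  # kg/m^3
rho_est = 1.05 * rho_ref + 1e-14 * np.random.default_rng(0).standard_normal(t.size)

cc, rms_ratio = compare_density_series(rho_est, rho_ref)
print(f"cross correlation = {cc:.3f}, RMS ratio = {rms_ratio:.3f}")
```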

  7. Estimating the density-scaling exponent of a monatomic liquid from its pair potential

    DEFF Research Database (Denmark)

    Bøhling, Lasse; Bailey, Nicholas; Schrøder, Thomas

    2014-01-01

    This paper investigates two conjectures for calculating the density dependence of the density-scaling exponent γ of a single-component, pair-potential liquid with strong virial potential-energy correlations. The first conjecture gives an analytical expression for γ directly in terms of the pair...... potential. The second conjecture is a refined version of this involving the most likely nearest-neighbor distance determined from the pair-correlation function. The conjectures are tested by simulations of three systems, one of which is the standard Lennard-Jones liquid. While both expressions give...

  8. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds.

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David B

    2012-08-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels.

  9. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    Science.gov (United States)

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  10. Decentralized State-Observer-Based Traffic Density Estimation of Large-Scale Urban Freeway Network by Dynamic Model

    Directory of Open Access Journals (Sweden)

    Yuqi Guo

    2017-08-01

    Full Text Available In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the freeway network completely and thus only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, using the well-known cell transmission model (CTM), the urban freeway network is modeled as a distributed system. Secondly, based on this model, a decentralized observer is designed. With the help of a Lyapunov function and the S-procedure, the observer gains are computed using the linear matrix inequality (LMI) technique, so that the traffic densities of the whole road network can be estimated by the designed observer. Finally, this method is applied to the outer ring of Beijing's Second Ring Road, and experimental results demonstrate the effectiveness and applicability of the proposed approach.
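
    The observer in this record is built on the cell transmission model. As background, a bare-bones CTM density update for a single freeway stretch might look like the following; the triangular fundamental-diagram parameters and boundary flows are arbitrary illustrative values, and no observer or LMI design is included:

```python
import numpy as np

def ctm_step(rho, T, L, v_f, w, rho_jam, q_max, inflow, outflow_cap):
    """One update of the cell transmission model (CTM) on a freeway stretch.
    rho: cell densities (veh/km); T: time step (h); L: cell length (km);
    v_f: free-flow speed; w: congestion wave speed; rho_jam: jam density;
    q_max: capacity. All values here are illustrative."""
    demand = np.minimum(v_f * rho, q_max)             # what each cell can send
    supply = np.minimum(w * (rho_jam - rho), q_max)   # what each cell can receive
    # Flow across each interior boundary, plus network inflow/outflow.
    q = np.minimum(demand[:-1], supply[1:])
    q = np.concatenate(([min(inflow, supply[0])], q,
                        [min(demand[-1], outflow_cap)]))
    return rho + (T / L) * (q[:-1] - q[1:])

rho = np.full(10, 20.0)                               # initial densities (veh/km)
for _ in range(360):                                  # one hour at T = 10 s
    rho = ctm_step(rho, T=10/3600, L=0.5, v_f=100.0, w=20.0,
                   rho_jam=160.0, q_max=2000.0, inflow=1800.0, outflow_cap=2000.0)
print(np.round(rho, 1))
```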

  11. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.; Claudel, Christian G.

    2012-01-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation are linear, albeit with unknown integers. We then pose the problem of estimating the density at the initial time, given incomplete and inaccurate traffic data, as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.

  12. Estimation of Low Concentration Magnetic Fluid Weight Density and Detection inside an Artificial Medium Using a Novel GMR Sensor

    Directory of Open Access Journals (Sweden)

    Chinthaka GOONERATNE

    2008-04-01

    Full Text Available Hyperthermia treatment has been gaining momentum in the past few years as a possible method of managing cancer. Cancer cells differ from normal cells in many ways, including how they react to heat. Due to this difference, it is possible for hyperthermia treatment to destroy cancer cells without harming the healthy normal cells surrounding the tumor. Magnetic particles injected into the body generate heat by hysteresis loss, and the temperature is increased when a time-varying external magnetic field is applied. Successful treatment depends on how efficiently the heat is controlled. Thus, it is very important to estimate the magnetic fluid density in the body. We report the experimental apparatus designed for testing, the numerical analysis, and the results obtained using a simple yet novel and minimally invasive needle-type spin-valve giant magnetoresistance (SV-GMR) sensor to estimate low-concentration magnetic fluid weight density and to detect magnetic fluid in a reference medium.

  13. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.

    2012-09-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation are linear, albeit with unknown integers. We then pose the problem of estimating the density at the initial time, given incomplete and inaccurate traffic data, as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.

  14. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and it is an indispensable step in many processing pipelines. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, using a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of the prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  15. Local linear density estimation for filtered survival data, with bias correction

    DEFF Research Database (Denmark)

    Nielsen, Jens Perch; Tanggaard, Carsten; Jones, M.C.

    2009-01-01

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a 'pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias-correction methods...... within our framework. The multiplicative bias-correction method proves to be the best in a simulation study comparing the performance of the considered estimators. An example concerning old-age mortality demonstrates the importance of the improvements provided....

  16. Local Linear Density Estimation for Filtered Survival Data with Bias Correction

    DEFF Research Database (Denmark)

    Tanggaard, Carsten; Nielsen, Jens Perch; Jones, M.C.

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a ‘pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias correction methods...... within our framework. The multiplicative bias correction method proves to be best in a simulation study comparing the performance of the considered estimators. An example concerning old age mortality demonstrates the importance of the improvements provided....

  17. Accessible light detection and ranging: estimating large tree density for habitat identification

    Science.gov (United States)

    Heather A. Kramer; Brandon M. Collins; Claire V. Gallagher; John Keane; Scott L. Stephens; Maggi Kelly

    2016-01-01

    Large trees are important to a wide variety of wildlife, including many species of conservation concern, such as the California spotted owl (Strix occidentalis occidentalis). Light detection and ranging (LiDAR) has been successfully utilized to identify the density of large-diameter trees, either by segmenting the LiDAR point cloud into...

  18. Excess Molar Volumes of (Propiophenone + Toluene) and Estimated Density of Liquid Propiophenone below Its Melting Temperature

    Czech Academy of Sciences Publication Activity Database

    Morávková, Lenka; Linek, Jan

    2006-01-01

    Roč. 38, č. 10 (2006), s. 1240-1244 ISSN 0021-9614 R&D Projects: GA ČR(CZ) GA203/02/1098 Institutional research plan: CEZ:AV0Z40720504 Keywords : density * excess volume * temperature dependence Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.842, year: 2006

  19. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    Science.gov (United States)

    The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...

  20. Construction of New Electronic Density Functionals with Error Estimation Through Fitting

    DEFF Research Database (Denmark)

    Petzold, V.; Bligaard, T.; Jacobsen, K. W.

    2012-01-01

    We investigate the possibilities and limitations for the development of new electronic density functionals through large-scale fitting to databases of binding energies obtained experimentally or through high-quality calculations. We show that databases with up to a few hundred entries allow for u...

  1. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-01-01

    application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The third application

  2. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    Science.gov (United States)

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

    The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.

  3. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    Science.gov (United States)

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  4. Contactless estimation of critical current density and its temperature dependence using magnetic measurements

    Czech Academy of Sciences Publication Activity Database

    Youssef, A.; Baničová, L.; Švindrych, Zdeněk; Janů, Zdeněk

    2010-01-01

    Roč. 118, č. 5 (2010), s. 1036-1037 ISSN 0587-4246. [Czech and Slovak Conference on Magnetism /14./. Košice, 06.07.2010-09.07.2010] R&D Projects: GA MŠk(CZ) ME10069 Institutional research plan: CEZ:AV0Z10100520 Keywords : superconductivity * critical state * Bean model * critical current density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.467, year: 2010

  5. Real time estimation of generation, extinction and flow of muscle fibre action potentials in high density surface EMG.

    Science.gov (United States)

    Mesin, Luca

    2015-02-01

    Developing a real-time method to estimate generation, extinction and propagation of muscle fibre action potentials from bi-dimensional, high-density surface electromyograms (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fitted to epochs of surface EMG. The algorithm is tested on simulations of high-density surface EMG (inter-electrode distance equal to 5 mm) from finite-length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an EMG epoch of duration 150 ms. A new real-time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction, and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Estimating Brownian motion dispersal rate, longevity and population density from spatially explicit mark-recapture data on tropical butterflies.

    Science.gov (United States)

    Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J

    2012-07-01

    1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and time of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
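
    Point 3 of this record reduces to dividing capture counts by the total attraction area of the traps. One simple way to approximate that area and form the density estimate is sketched below; the trap layout, attraction radius and counts are invented, and the Monte Carlo union-of-discs step is merely one convenient approximation:

```python
import numpy as np

def density_from_counts(counts, trap_xy, attraction_radius):
    """Estimate absolute density (individuals per unit area) by dividing the
    mean count of captured individuals per sampling occasion by the total
    attraction area of all traps (union of discs of the estimated attraction
    radius around each trap), approximated here by Monte Carlo integration."""
    rng = np.random.default_rng(0)
    xmin, ymin = trap_xy.min(axis=0) - attraction_radius
    xmax, ymax = trap_xy.max(axis=0) + attraction_radius
    pts = rng.uniform([xmin, ymin], [xmax, ymax], size=(200_000, 2))
    dists = np.linalg.norm(pts[:, None, :] - trap_xy[None, :, :], axis=2)
    inside = (dists <= attraction_radius).any(axis=1).mean()
    area = inside * (xmax - xmin) * (ymax - ymin)
    return np.mean(counts) / area

# Hypothetical grid of 9 traps (coordinates in metres), an estimated
# attraction radius of 15 m, and capture counts on 6 occasions.
traps = np.array([[x, y] for x in (0, 50, 100) for y in (0, 50, 100)], float)
counts = [7, 5, 9, 6, 8, 7]
print("density ~", density_from_counts(counts, traps, 15.0), "individuals / m^2")
```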

  7. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    International Nuclear Information System (INIS)

    Sorini, D.

    2017-01-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of the data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method of Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, and it is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator recovers the fiducial linear power spectrum at the present time within 5% accuracy up to k ∼ 0.80 h Mpc⁻¹ and within 10% up to k ∼ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of data from future large-scale surveys like Euclid.

  8. Fiber density estimation from single q-shell diffusion imaging by tensor divergence.

    Science.gov (United States)

    Reisert, Marco; Mader, Irina; Umarova, Roza; Maier, Simon; Tebartz van Elst, Ludger; Kiselev, Valerij G

    2013-08-15

    Diffusion-weighted magnetic resonance imaging provides information about the nerve fiber bundle geometry of the human brain. While the inference of the underlying fiber bundle orientation only requires single q-shell measurements, the absolute determination of their volume fractions is much more challenging with respect to measurement techniques and analysis. Unfortunately, the usually employed multi-compartment models cannot be applied to single q-shell measurements, because the compartments' diffusivities cannot be resolved. This work proposes an equation for fiber orientation densities that can infer the absolute fraction up to a global factor. This equation, which is inspired by the classical mass preservation law in fluid dynamics, expresses the fiber conservation associated with the assumption that fibers do not terminate in white matter. Simulations on synthetic phantoms show that the approach is able to derive the densities correctly for various configurations. Experiments with a pseudo ground truth phantom show that even for complex, brain-like geometries the method is able to infer the densities correctly. In-vivo results with 81 healthy volunteers are plausible and consistent. A group analysis with respect to age and gender shows significant differences, such that the proposed maps can be used as a quantitative measure for group and longitudinal analysis. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Estimation of Neutral Density in Edge Plasma with Double Null Configuration in EAST

    International Nuclear Information System (INIS)

    Zhang Ling; Xu Guosheng; Ding Siye; Gao Wei; Wu Zhenwei; Chen Yingjie; Huang Juan; Liu Xiaoju; Zang Qing; Chang Jiafeng; Zhang Wei; Li Yingying; Qian Jinping

    2011-01-01

    In this work, population coefficients of hydrogen's n = 3 excited state from the hydrogen collisional-radiative (CR) model, from the data file of DEGAS 2, are used to calculate the photon emissivity coefficients (PECs) of hydrogen Balmer-α (n = 3 → n = 2) (Hα). The results are compared with the PECs from the Atomic Data and Analysis Structure (ADAS) database, and a good agreement is found. A magnetic surface-averaged neutral density profile of typical double-null (DN) plasma in EAST is obtained by using FRANTIC, the 1.5-D fluid transport code. It is found that the sum of the integral Dα and Hα emission intensity calculated via the neutral density agrees with the measured results obtained by using the absolutely calibrated multi-channel poloidal photodiode array systems viewing the lower divertor at the last closed flux surface (LCFS). It is revealed that the typical magnetic surface-averaged neutral density at the LCFS is about 3.5 × 10¹⁶ m⁻³. (magnetically confined plasma)

  10. Estimation of winter wheat canopy nitrogen density at different growth stages based on Multi-LUT approach

    Science.gov (United States)

    Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang

    2017-10-01

    Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach based on N-PROSAIL model parameters set at different growth stages was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with the measured CND, with a determination coefficient (R²) of 0.80 and a corresponding root mean square error (RMSE) of 1.16 g m⁻². The computation time for a single sample estimation was only 6 ms on the test machine, with a CPU configuration of an Intel(R) Core(TM) i5-2430 @2.40GHz quad-core. These results confirm the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
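
    A lookup-table retrieval of this kind boils down to matching a measured spectrum against pre-simulated entries. The sketch below shows only that matching step; the forward model is a toy stand-in for N-PROSAIL, and all parameter values are fabricated for illustration:

```python
import numpy as np

def build_lut(simulate, cnd_grid, stage_params):
    """Precompute a lookup table of simulated canopy reflectance spectra for a
    grid of candidate canopy N density (CND) values at one growth stage.
    `simulate` stands in for the N-PROSAIL forward model (not shown here)."""
    return np.array([simulate(cnd, **stage_params) for cnd in cnd_grid])

def retrieve_cnd(measured, lut, cnd_grid):
    """Retrieve CND as the table entry whose simulated spectrum is closest
    to the measured spectrum in the root-mean-square sense."""
    rmse = np.sqrt(np.mean((lut - measured) ** 2, axis=1))
    return cnd_grid[np.argmin(rmse)]

# Toy stand-in forward model: reflectance in 4 bands decreasing with CND.
def toy_model(cnd, lai):
    base = np.array([0.05, 0.08, 0.30, 0.45]) * (1 + 0.02 * lai)
    return base * np.exp(-0.05 * cnd)

cnd_grid = np.linspace(0.5, 12.0, 200)            # g m^-2
lut_jointing = build_lut(toy_model, cnd_grid, {"lai": 3.0})
measured = toy_model(4.8, lai=3.0) * 1.01         # pseudo-measurement
print("retrieved CND ~", round(retrieve_cnd(measured, lut_jointing, cnd_grid), 2), "g m^-2")
```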

  11. mBEEF-vdW: Robust fitting of error estimation density functionals

    DEFF Research Database (Denmark)

    Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes

    2016-01-01

    . The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012); J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014)]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function...... catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show...

  12. An Evaluation of Population Density Mapping and Built up Area Estimates in Sri Lanka Using Multiple Methodologies

    Science.gov (United States)

    Engstrom, R.; Soundararajan, V.; Newhouse, D.

    2017-12-01

    In this study we examine how well multiple population density and built-up area estimates that utilize satellite data compare in Sri Lanka. The population relationship is examined at the Gram Niladhari (GN) level, the lowest administrative unit in Sri Lanka, using the 2011 census. The study covers two spatial domains, the whole country and a 3,500 km² sub-sample for which we have complete high spatial resolution imagery coverage. For both the entire country and the sub-sample we examine how consistent the existing publicly available measures of population constructed from satellite imagery are at predicting population density. For just the sub-sample we examine how well a suite of values derived from high spatial resolution satellite imagery predicts population density and how our built-up area estimate compares to other publicly available estimates. Population measures were obtained from the Sri Lankan census, and were downloaded from Facebook, WorldPop, GPW, and Landscan. Percentage built-up area at the GN level was calculated from three sources: Facebook, Global Urban Footprint (GUF), and the Global Human Settlement Layer (GHSL). For the sub-sample we derived a variety of indicators from the high spatial resolution imagery using deep learning convolutional neural networks, an object-oriented approach, and a non-overlapping block spatial feature approach. Variables calculated include cars, shadows (a proxy for building height), built-up area, buildings, roof types, roads, type of agriculture, NDVI, Pantex, Histogram of Oriented Gradients (HOG), and others. Results indicate that population estimates are accurate at the higher, DS Division level but not necessarily at the GN level. Estimates from Facebook correlated well with census population (GN correlation of 0.91) but measures from GPW and WorldPop are more weakly correlated (0.64 and 0.34). Estimates of built-up area appear to be reliable. In the 32 DSD sub-sample, Facebook's built-up area measure

  13. Extensive spatio-temporal assessment of flood events by application of pair-copulas

    Directory of Open Access Journals (Sweden)

    M. Schulte

    2015-06-01

    Full Text Available Although the consequences of floods are strongly related to their peak discharges, a statistical classification of flood events that only depends on these peaks may not be sufficient for flood risk assessments. In many cases, the flood risk depends on a number of event characteristics. In case of an extreme flood, the whole river basin may be affected instead of a single watershed, and there will be superposition of peak discharges from adjoining catchments. These peaks differ in size and timing according to the spatial distribution of precipitation and watershed-specific processes of flood formation. Thus, the spatial characteristics of flood events should be considered as stochastic processes. Hence, there is a need for a multivariate statistical approach that represents the spatial interdependencies between floods from different watersheds and their coincidences. This paper addresses the question how these spatial interdependencies can be quantified. Each flood event is not only assessed with regard to its local conditions but also according to its spatio-temporal pattern within the river basin. In this paper we characterise the coincidence of floods by trivariate Joe-copula and pair-copulas. Their ability to link the marginal distributions of the variates while maintaining their dependence structure characterizes them as an adequate method. The results indicate that the trivariate copula model is able to represent the multivariate probabilities of the occurrence of simultaneous flood peaks well. It is suggested that the approach of this paper is very useful for the risk-based design of retention basins as it accounts for the complex spatio-temporal interactions of floods.
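
    As background on the pair-copula construction used in this record, a trivariate C-vine density factorises into two unconditional pair copulas and one conditional pair copula evaluated at h-function transforms. The sketch below writes this out with Gaussian pair copulas for simplicity (the study itself uses a Joe copula among other families), and the parameter values are purely illustrative:

```python
import numpy as np
from scipy import stats

def gauss_copula_pdf(u, v, rho):
    """Density of a bivariate Gaussian copula."""
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    return (1.0 / np.sqrt(1 - rho**2)
            * np.exp(-(rho**2 * (x**2 + y**2) - 2 * rho * x * y)
                     / (2 * (1 - rho**2))))

def h_func(u, v, rho):
    """Conditional distribution h(u | v) of a Gaussian pair copula."""
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    return stats.norm.cdf((x - rho * y) / np.sqrt(1 - rho**2))

def cvine_density(u1, u2, u3, r12, r13, r23_1):
    """Trivariate C-vine copula density with variable 1 as the root:
    c(u1,u2,u3) = c12(u1,u2) * c13(u1,u3) * c23|1(h(u2|u1), h(u3|u1))."""
    return (gauss_copula_pdf(u1, u2, r12)
            * gauss_copula_pdf(u1, u3, r13)
            * gauss_copula_pdf(h_func(u2, u1, r12), h_func(u3, u1, r13), r23_1))

# Hypothetical parameters for flood peaks at three gauges (root = gauge 1).
print(cvine_density(0.9, 0.85, 0.8, r12=0.7, r13=0.6, r23_1=0.3))
```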

  14. Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites

    NARCIS (Netherlands)

    Mitchard, Edward T. A.; Feldpausch, Ted R.; Brienen, Roel J. W.; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R.; Lewis, Simon L.; Lloyd, Jon; Quesada, Carlos A.; Gloor, Manuel; ter Steege, Hans|info:eu-repo/dai/nl/075217120; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragao, Luiz E. O. C.; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I.; Ceron, Carlos E.; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A.; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R. C.; Di Fiore, Anthony; Domingues, Tomas F.; Erwin, Terry L.; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N.; Levis, Carolina; Killeen, Tim J.; Laurance, William F.; Magnusson, William E.; Marimon, Beatriz S.; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T.; Neill, David; Nunez Vargas, Mario P.; Palacios, Walter A.; Parada, Alexander; Pardo Molina, Guido; Pena-Claros, Marielos; Pitman, Nigel; Peres, Carlos A.; Prieto, Adriana; Poorter, Lourens; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H.; Rudas, Agustin; Salomao, Rafael P.; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F.; Steininger, Marc K.; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R.|info:eu-repo/dai/nl/205284868; van der Heijden, Geertje M. F.; Vieira, Ima C. G.; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A.; Wang, Ophelia; Zartman, Charles E.; Malhi, Yadvinder; Phillips, Oliver L.; Cruz, A.P.; Cuenca, W.P.; Espejo, J.E.; Ferreira, L.; Germaine, A.; Penuela, M.C.; Silva, N.; Valenzuela Gamarra, L.

    Aim The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass

  15. Free-ranging domestic cats (Felis catus) on public lands: estimating density, activity, and diet in the Florida Keys

    Science.gov (United States)

    Cove, Michael V.; Gardner, Beth; Simons, Theodore R.; Kays, Roland; O'Connell, Allan F.

    2017-01-01

    Feral and free-ranging domestic cats (Felis catus) can have strong negative effects on small mammals and birds, particularly in island ecosystems. We deployed camera traps to study free-ranging cats in national wildlife refuges and state parks on Big Pine Key and Key Largo in the Florida Keys, USA, and used spatial capture–recapture models to estimate cat abundance, movement, and activities. We also used stable isotope analyses to examine the diet of cats captured on public lands. Top population models separated cats based on differences in movement and detection with three and two latent groups on Big Pine Key and Key Largo, respectively. We hypothesize that these latent groups represent feral, semi-feral, and indoor/outdoor house cats based on the estimated movement parameters of each group. Estimated cat densities and activity varied between the two islands, with relatively high densities (~4 cats/km2) exhibiting crepuscular diel patterns on Big Pine Key and lower densities (~1 cat/km2) exhibiting nocturnal diel patterns on Key Largo. These differences are most likely related to the higher proportion of house cats on Big Pine relative to Key Largo. Carbon and nitrogen isotope ratios from hair samples of free-ranging cats (n = 43) provided estimates of the proportion of wild and anthropogenic foods in cat diets. At the population level, cats on both islands consumed mostly anthropogenic foods (>80% of the diet), but eight individuals were effective predators of wildlife (>50% of the diet). We provide evidence that cat groups within a population move different distances, exhibit different activity patterns, and that individuals consume wildlife at different rates, which all have implications for managing this invasive predator.

  16. Estimation of cluster stability using the theory of electron density functional

    International Nuclear Information System (INIS)

    Borisov, Yu.A.

    1985-01-01

    Prospects of using simple versions of the electron density functional for studying the energy characteristics of cluster compounds were discussed. The following types of cluster compounds were considered: clusters of the elements Cs, Be, B, Sr, Cd, Sc, In, V, Tl and I as intermediate forms between molecules and solids; metalloorganic Mo, W, Tc, Re and Rn clusters; and elementoorganic compounds of the nido-cluster type. The problem of how the binding energy of homoatomic clusters changes with their size and three-dimensional structure was also analysed.

  17. Pricing Equity-Indexed Annuities under Stochastic Interest Rates Using Copulas

    Directory of Open Access Journals (Sweden)

    Patrice Gaillardetz

    2010-01-01

    We develop a consistent evaluation approach for equity-linked insurance products under stochastic interest rates. This pricing approach requires that the premium information of standard insurance products is given exogenously. In order to evaluate equity-linked products, we derive three martingale probability measures that reproduce the information from standard insurance products, interest rates, and equity index. These risk adjusted martingale probability measures are determined using copula theory and evolve with the stochastic interest rate process. A detailed numerical analysis is performed for existing equity-indexed annuities in the North American market.

  18. On Value at Risk for Foreign Exchange Rates --- the Copula Approach

    Science.gov (United States)

    Jaworski, P.

    2006-11-01

    The aim of this paper is to determine the Value at Risk (VaR) of a portfolio consisting of long positions in foreign currencies on an emerging market. Based on empirical data, we restrict ourselves to the case when the tail parts of the distributions of logarithmic returns of these assets follow power laws and the lower tail of the associated copula C follows a power law of degree 1. We illustrate the practical usefulness of this approach by analysing the exchange rates of EUR and CHF on the Polish forex market.
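
    One way to turn a tail-dependent copula into a portfolio VaR figure is by Monte Carlo: sample uniforms from a copula with lower tail dependence, push them through heavy-tailed marginals, and read off an empirical quantile of the portfolio return. The sketch below uses a Clayton copula and Student-t marginals with assumed parameters; it is not the specific power-law tail model fitted in the paper.

    ```python
    import numpy as np
    from scipy.stats import t

    rng = np.random.default_rng(0)

    def sample_clayton(n, theta, rng):
        """Marshall-Olkin sampling of a bivariate Clayton copula (theta > 0)."""
        v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)       # latent frailty
        e = rng.exponential(size=(n, 2))
        return (1.0 + e / v[:, None]) ** (-1.0 / theta)           # uniforms U1, U2

    # Illustrative setup: two FX log-return series with heavy-tailed t marginals.
    u = sample_clayton(100_000, theta=2.0, rng=rng)
    returns = t.ppf(u, df=4) * 0.01                # map uniforms to daily log returns
    portfolio = 0.5 * returns[:, 0] + 0.5 * returns[:, 1]

    var_99 = -np.quantile(portfolio, 0.01)         # 99% Value at Risk (loss as positive)
    print(f"99% VaR of the equally weighted portfolio: {var_99:.4f}")
    ```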

  19. Covered Interest-Rate Parity Revisited: an Extreme Value Copula Analysis

    Directory of Open Access Journals (Sweden)

    Mikel Ugando-Peñate

    2015-11-01

    This article studied the covered interest-rate parity (CIP) condition under extreme market movements, using extreme value theory and extreme value copulas to characterize dependence between extreme interest rate differentials and the forward premium. The empirical analysis of CIP between interest rates for the US dollar and the British pound indicates strong co-movement between interest rate differentials and the forward premium at different maturities and in both the upper and lower tails. This conclusion would support the existence of the CIP condition under extreme market movements.
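
    For an extreme value copula such as the Gumbel, the strength of joint extremes can be summarized by the upper tail dependence coefficient, lambda_U = 2 - 2^(1/theta). The parameter values below are assumed illustrations, not coefficients estimated from the dollar-pound data.

    ```python
    # Upper tail dependence of a Gumbel copula: theta = 1 gives independence
    # (lambda_U = 0); larger theta gives stronger co-movement of extremes.
    def gumbel_upper_tail_dependence(theta: float) -> float:
        if theta < 1.0:
            raise ValueError("Gumbel copula requires theta >= 1")
        return 2.0 - 2.0 ** (1.0 / theta)

    for theta in (1.0, 1.5, 2.0, 4.0):
        print(theta, round(gumbel_upper_tail_dependence(theta), 3))
    # 1.0 -> 0.0, 1.5 -> 0.413, 2.0 -> 0.586, 4.0 -> 0.811
    ```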

  20. Time Series Model of Wind Speed for Multi Wind Turbines based on Mixed Copula

    Directory of Open Access Journals (Sweden)

    Nie Dan

    2016-01-01

    Because wind power is intermittent and random, its large-scale grid integration directly affects the safe and stable operation of the power grid. To study the wind speed characteristics of wind turbines quantitatively, this paper constructs a wind speed time series model for multiple wind turbine generators using a mixed Copula-ARMA function, and a numerical example is given. The results show that the model can effectively predict wind speed, help ensure the efficient operation of the wind turbines, and provide a theoretical basis for the stability of grid-connected wind power operation.
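
    A "mixed copula" of this kind is usually a convex combination of standard Archimedean families. The sketch below evaluates such a mixture CDF from the closed-form Clayton, Gumbel and Frank expressions; the weights and parameters are assumed for illustration and are not the values identified in the paper, and the ARMA marginal modelling step is omitted.

    ```python
    import numpy as np

    def clayton_cdf(u, v, theta):
        return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

    def gumbel_cdf(u, v, theta):
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    def frank_cdf(u, v, theta):
        num = np.expm1(-theta * u) * np.expm1(-theta * v)
        return -np.log1p(num / np.expm1(-theta)) / theta

    def mixed_copula_cdf(u, v, weights=(0.3, 0.4, 0.3), thetas=(2.0, 1.8, 4.0)):
        """Convex combination of Clayton, Gumbel and Frank copulas (assumed values)."""
        w1, w2, w3 = weights
        return (w1 * clayton_cdf(u, v, thetas[0])
                + w2 * gumbel_cdf(u, v, thetas[1])
                + w3 * frank_cdf(u, v, thetas[2]))

    # Joint probability that probability-transformed wind speeds at two turbines
    # both fall below their 30th percentiles (0.09 under independence).
    print(mixed_copula_cdf(0.3, 0.3))
    ```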

  1. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are

  2. Fishery-independent surface abundance and density estimates of swordfish (Xiphias gladius) from aerial surveys in the Central Mediterranean Sea

    Science.gov (United States)

    Lauriano, Giancarlo; Pierantonio, Nino; Kell, Laurence; Cañadas, Ana; Donovan, Gregory; Panigada, Simone

    2017-07-01

    Fishery-independent surface density and abundance estimates for swordfish were obtained through aerial surveys carried out over a large portion of the Central Mediterranean, implementing distance sampling methodologies. Both design- and model-based abundance and density estimates showed an uneven occurrence of the species throughout the study area, with clusters of higher density occurring near converging fronts, strong thermoclines and/or underwater features. Surface abundance was estimated for the Pelagos Sanctuary for Mediterranean Marine Mammals in the summer of 2009 (n=1152; 95%CI=669.0-1981.0; %CV=27.64), for the Sea of Sardinia, the Pelagos Sanctuary and the Central Tyrrhenian Sea for the summer of 2010 (n=3401; 95%CI=2067.0-5596.0; %CV=25.51), and for the Southern Tyrrhenian Sea during the winter months of 2010-2011 (n=1228; 95%CI=578-2605; %CV=38.59). The Mediterranean swordfish stock deserves special attention in light of heavy fishing pressure. Furthermore, the unreliability of fishery-related data has, to date, hampered our ability to effectively inform long-term conservation in the Mediterranean Region. Considering that the European countries have committed to protecting these resources and all the marine-related economic and social dynamics that depend upon them, the information presented here constitutes useful data towards meeting the international legal requirements under the Marine Strategy Framework Directive, the Common Fisheries Policy, the Habitats and Species Directive and the Directive on Maritime Spatial Planning, among others.
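
    Aerial line-transect surveys of this kind typically fit a detection function to the perpendicular sighting distances and convert the encounter rate into a density. A minimal sketch with a half-normal detection function is given below; the distances, transect length and resulting density are synthetic, not the survey data summarized above.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    # Synthetic perpendicular sighting distances (km) from aerial transects.
    rng = np.random.default_rng(1)
    distances = np.abs(rng.normal(0.0, 0.2, size=150))
    total_transect_length_km = 4000.0

    # Fit the half-normal detection scale sigma by maximum likelihood.
    def negloglik(sigma):
        return -np.sum(np.log(2.0 * norm.pdf(distances, scale=sigma)))

    sigma_hat = minimize_scalar(negloglik, bounds=(1e-3, 2.0), method="bounded").x

    # Effective strip half-width = integral of the half-normal detection function.
    mu = sigma_hat * np.sqrt(np.pi / 2.0)

    # Line-transect density estimate: n detections over a surveyed area of 2 * mu * L.
    density = len(distances) / (2.0 * mu * total_transect_length_km)
    print(f"sigma = {sigma_hat:.3f} km, density = {density:.3f} animals/km^2")
    ```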

  3. Activity pattern of medium and large sized mammals and density estimates of Cuniculus paca (Rodentia: Cuniculidae) in the Brazilian Pampa

    Directory of Open Access Journals (Sweden)

    C. Leuchtenberger

    2018-02-01

    Between July 2014 and April 2015, we conducted weekly inventories of the circadian activity patterns of mammals in the Passo Novo locality, municipality of Alegrete, southern Brazil. The vegetation comprises a grassy-woody steppe (grassland). We used two camera traps alternately located on one of four 1 km transects, each separated by 1 km. We classified the activity pattern of each species by the percentage of photographic records taken in each daily period. We identified Cuniculus paca individuals by differences in the patterns of flank spots. We then estimated the density 1) considering the area of riparian forest present in the sampling area, and 2) through capture/recapture analysis. Cuniculus paca, Conepatus chinga and Hydrochoerus hydrochaeris were nocturnal, Cerdocyon thous had a crepuscular/nocturnal pattern, while Mazama gouazoubira was cathemeral. The patterns of circadian activity observed for medium and large mammals in this Pampa region (southern grasslands) may reflect not only evolutionary, biological and ecological effects, but also human impacts not assessed in this study. We identified ten individuals of C. paca through skin spot patterns during the study period, which were recorded in different transects and months. The minimum population density of C. paca was 3.5 individuals per km2 (resident animals only) and the total density estimates varied from 7.1 to 11.8 individuals per km2, when considering all individuals recorded or the result of the capture/recapture analysis, respectively.

  4. Activity pattern of medium and large sized mammals and density estimates of Cuniculus paca (Rodentia: Cuniculidae) in the Brazilian Pampa.

    Science.gov (United States)

    Leuchtenberger, C; de Oliveira, Ê S; Cariolatto, L P; Kasper, C B

    2018-02-22

    Between July 2014 and April 2015, we conducted weekly inventories of the circadian activity patterns of mammals in the Passo Novo locality, municipality of Alegrete, southern Brazil. The vegetation comprises a grassy-woody steppe (grassland). We used two camera traps alternately located on one of four 1 km transects, each separated by 1 km. We classified the activity pattern of each species by the percentage of photographic records taken in each daily period. We identified Cuniculus paca individuals by differences in the patterns of flank spots. We then estimated the density 1) considering the area of riparian forest present in the sampling area, and 2) through capture/recapture analysis. Cuniculus paca, Conepatus chinga and Hydrochoerus hydrochaeris were nocturnal, Cerdocyon thous had a crepuscular/nocturnal pattern, while Mazama gouazoubira was cathemeral. The patterns of circadian activity observed for medium and large mammals in this Pampa region (southern grasslands) may reflect not only evolutionary, biological and ecological effects, but also human impacts not assessed in this study. We identified ten individuals of C. paca through skin spot patterns during the study period, which were recorded in different transects and months. The minimum population density of C. paca was 3.5 individuals per km2 (resident animals only) and the total density estimates varied from 7.1 to 11.8 individuals per km2, when considering all individuals recorded or the result of the capture/recapture analysis, respectively.

  5. Geometric contribution leading to anomalous estimation of two-dimensional electron gas density in GaN based heterostructures

    Science.gov (United States)

    Upadhyay, Bhanu B.; Jha, Jaya; Takhar, Kuldeep; Ganguly, Swaroop; Saha, Dipankar

    2018-05-01

    We have observed that the estimation of two-dimensional electron gas density is dependent on the device geometry. This geometric contribution leads to anomalous estimates of the GaN based heterostructure properties. The observed discrepancy is found to originate from the anomalous area-dependent capacitance of GaN based Schottky diodes, which are an integral part of high electron mobility transistors. The areal capacitance density is found to increase for smaller-radius Schottky diodes, contrary to the constant value expected intuitively. The capacitance is found to follow a second-order polynomial in the radius for all the bias voltages and frequencies considered here. In addition to the quadratic dependency corresponding to the areal component, the linear dependency indicates a peripheral component. It is further observed that the peripheral-to-areal contribution is inversely proportional to the radius, confirming the periphery as the location of the additional capacitance. The peripheral component is found to be frequency dependent and tends to saturate to a lower value for measurements at high frequency. In addition, the peripheral component is found to vanish when the surface is passivated by a combination of N2 and O2 plasma treatments. The cumulative surface state density per unit length of the perimeter of the Schottky diodes, as obtained by integrating the response over the distance between the ohmic and Schottky contacts, is found to be 2.75 × 10^10 cm^-1.
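
    The quadratic-plus-linear radius dependence described above can be separated by fitting C(r) = a*r^2 + b*r to the measured capacitances, with a capturing the areal part and b the peripheral part. The capacitance values below are made up to show the fit and the 1/r decay of the peripheral-to-areal ratio.

    ```python
    import numpy as np

    # Hypothetical capacitance (pF) of circular Schottky diodes of different radii (um).
    radii = np.array([25.0, 50.0, 100.0, 200.0])
    cap = np.array([0.9, 2.8, 9.5, 34.0])

    # Fit C(r) = a*r^2 + b*r with no constant term: areal term a, peripheral term b.
    design = np.column_stack([radii ** 2, radii])
    (a, b), *_ = np.linalg.lstsq(design, cap, rcond=None)

    print(f"areal coefficient a  = {a:.4e} pF/um^2")
    print(f"peripheral coeff.  b = {b:.4e} pF/um")
    # The peripheral-to-areal contribution falls off as 1/r, consistent with a
    # periphery-located excess capacitance.
    for r in radii:
        print(f"r = {r:5.0f} um   peripheral/areal = {b / (a * r):.2f}")
    ```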

  6. Using Clinical Factors and Mammographic Breast Density to Estimate Breast Cancer Risk: Development and Validation of a New Predictive Model

    Science.gov (United States)

    Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla

    2009-01-01

    Background Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752

  7. Challenges of Estimating Fracture Risk with DXA: Changing Concepts About Bone Strength and Bone Density.

    Science.gov (United States)

    Licata, Angelo A

    2015-07-01

    Bone loss due to weightlessness is a significant concern for astronauts' mission safety and health upon return to Earth. This problem is monitored with bone densitometry (DXA), the clinical tool used to assess skeletal strength. DXA has served clinicians well in assessing fracture risk and has been particularly useful in diagnosing osteoporosis in the elderly postmenopausal population for which it was originally developed. Over the past 1-2 decades, however, paradoxical and contradictory findings have emerged when this technology was widely employed in caring for diverse populations unlike those for which it was developed. Although DXA was originally considered the surrogate marker for bone strength, it is now considered one part of a constellation of factors-described collectively as bone quality-that makes bone strong and resists fracturing, independent of bone density. These characteristics are beyond the capability of routine DXA to identify, and as a result, DXA can be a poor prognosticator of bone health in many clinical scenarios. New clinical tools are emerging to make measurement of bone strength more accurate. This article reviews the historical timeline of bone density measurement (dual X-ray absorptiometry), expands upon the clinical observations that modified the relationship of DXA and bone strength, discusses some of the new clinical tools to predict fracture risk, and highlights the challenges DXA poses in the assessment of fracture risk in astronauts.

  8. Improved estimation of receptor density and binding rate constants using a single tracer injection and displacement

    International Nuclear Information System (INIS)

    Syrota, A.; Delforge, J.; Mazoyer, B.M.

    1988-01-01

    The possibility of improving receptor model parameter estimation using a displacement experiment, in which an excess of an unlabeled ligand (J) is injected after a delay (t_D) following injection of trace amounts of the β+-labeled ligand (J*), is investigated. The effects of varying t_D and J/J* on parameter uncertainties are studied in the case of 11C-MQNB binding to the myocardial acetylcholine receptor, using parameters identified in a dog experiment.

  9. Estimation of Engine Intake Air Mass Flow using a generic Speed-Density method

    OpenAIRE

    Vojtíšek Michal; Kotek Martin

    2014-01-01

    Measurement of real driving emissions (RDE) from internal combustion engines under real-world operation using portable, onboard monitoring systems (PEMS) is becoming an increasingly important tool aiding the assessment of the effects of new fuels and technologies on environment and human health. The knowledge of exhaust flow is one of the prerequisites for successful RDE measurement with PEMS. One of the simplest approaches for estimating the exhaust flow from virtually any engine is its comp...
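
    The generic speed-density approach named in the title estimates intake air mass flow from engine displacement, speed, manifold pressure and temperature, and an assumed volumetric efficiency. The sketch below is the textbook form of that calculation with illustrative numbers; it is not code or calibration data from the paper.

    ```python
    def speed_density_air_mass_flow(rpm, displacement_l, map_kpa, iat_k, vol_eff=0.85):
        """Generic speed-density estimate of intake air mass flow for a 4-stroke engine.

        A four-stroke engine fills its displacement once every two revolutions, so the
        volumetric flow is displacement * rpm / 2; air density follows from the ideal
        gas law using manifold absolute pressure (MAP) and intake air temperature (IAT).
        The volumetric efficiency is an assumed calibration value.
        """
        R_AIR = 287.05                                          # J/(kg*K)
        air_density = (map_kpa * 1000.0) / (R_AIR * iat_k)      # kg/m^3
        volumetric_flow = (displacement_l / 1000.0) * (rpm / 60.0) / 2.0   # m^3/s
        return vol_eff * volumetric_flow * air_density          # kg/s

    # Example: 2.0 L engine at 3000 rpm, 95 kPa MAP, 310 K intake air temperature.
    print(f"{speed_density_air_mass_flow(3000, 2.0, 95.0, 310.0):.4f} kg/s")
    ```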

  10. A NEW ELECTRON-DENSITY MODEL FOR ESTIMATION OF PULSAR AND FRB DISTANCES

    International Nuclear Information System (INIS)

    Yao, J. M.; Wang, N.; Manchester, R. N.

    2017-01-01

    We present a new model for the distribution of free electrons in the Galaxy, the Magellanic Clouds, and the intergalactic medium (IGM) that can be used to estimate distances to real or simulated pulsars and fast radio bursts (FRBs) based on their dispersion measure (DM). The Galactic model has an extended thick disk representing the so-called warm interstellar medium, a thin disk representing the Galactic molecular ring, spiral arms based on a recent fit to Galactic H ii regions, a Galactic Center disk, and seven local features including the Gum Nebula, Galactic Loop I, and the Local Bubble. An offset of the Sun from the Galactic plane and a warp of the outer Galactic disk are included in the model. Parameters of the Galactic model are determined by fitting to 189 pulsars with independently determined distances and DMs. Simple models are used for the Magellanic Clouds and the IGM. Galactic model distances are within the uncertainty range for 86 of the 189 independently determined distances and within 20% of the nearest limit for a further 38 pulsars. We estimate that 95% of predicted Galactic pulsar distances will have a relative error of less than a factor of 0.9. The predictions of YMW16 are compared to those of the TC93 and NE2001 models showing that YMW16 performs significantly better on all measures. Timescales for pulse broadening due to interstellar scattering are estimated for (real or simulated) Galactic and Magellanic Cloud pulsars and FRBs.
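
    The dispersion measure is the line-of-sight integral of the free electron density, DM = integral of n_e dl, so a distance estimate amounts to inverting that integral along a density model. The sketch below performs the inversion for a toy constant-density "model" purely to show the mechanics; it does not reproduce the YMW16 density structure.

    ```python
    import numpy as np

    def distance_from_dm(dm_pc_cm3, ne_of_d, d_max_kpc=30.0, n_steps=30000):
        """Invert DM = integral(n_e dl) for distance, given a density model ne_of_d.

        dm_pc_cm3 : dispersion measure in pc cm^-3
        ne_of_d   : callable giving electron density (cm^-3) at distance d (kpc)
        """
        d_grid = np.linspace(0.0, d_max_kpc, n_steps)                 # kpc
        ne = np.array([ne_of_d(d) for d in d_grid])
        dm_grid = np.cumsum(ne) * (d_grid[1] - d_grid[0]) * 1000.0    # kpc -> pc
        return float(np.interp(dm_pc_cm3, dm_grid, d_grid))

    def toy_model(d_kpc):
        """Constant warm-ionised-medium-like density; a placeholder, not YMW16."""
        return 0.03   # cm^-3

    print(distance_from_dm(60.0, toy_model))   # ~2 kpc for DM = 60 pc cm^-3
    ```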

  11. A NEW ELECTRON-DENSITY MODEL FOR ESTIMATION OF PULSAR AND FRB DISTANCES

    Energy Technology Data Exchange (ETDEWEB)

    Yao, J. M.; Wang, N. [Xinjiang Astronomical Observatory, Chinese Academy of Sciences, 150, Science 1-Street, Urumqi, Xinjiang 830011 (China); Manchester, R. N. [CSIRO Astronomy and Space Science, Australia Telescope National Facility, P.O. Box 76, Epping NSW 1710 (Australia)

    2017-01-20

    We present a new model for the distribution of free electrons in the Galaxy, the Magellanic Clouds, and the intergalactic medium (IGM) that can be used to estimate distances to real or simulated pulsars and fast radio bursts (FRBs) based on their dispersion measure (DM). The Galactic model has an extended thick disk representing the so-called warm interstellar medium, a thin disk representing the Galactic molecular ring, spiral arms based on a recent fit to Galactic H ii regions, a Galactic Center disk, and seven local features including the Gum Nebula, Galactic Loop I, and the Local Bubble. An offset of the Sun from the Galactic plane and a warp of the outer Galactic disk are included in the model. Parameters of the Galactic model are determined by fitting to 189 pulsars with independently determined distances and DMs. Simple models are used for the Magellanic Clouds and the IGM. Galactic model distances are within the uncertainty range for 86 of the 189 independently determined distances and within 20% of the nearest limit for a further 38 pulsars. We estimate that 95% of predicted Galactic pulsar distances will have a relative error of less than a factor of 0.9. The predictions of YMW16 are compared to those of the TC93 and NE2001 models showing that YMW16 performs significantly better on all measures. Timescales for pulse broadening due to interstellar scattering are estimated for (real or simulated) Galactic and Magellanic Cloud pulsars and FRBs.

  12. Estimation of transient heat flux density during the heat supply of a catalytic wall steam methane reformer

    Science.gov (United States)

    Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid

    2018-02-01

    Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, which is characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, the unknown heat flux density received by the reactive flow is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. The measurements are then fed into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared and the heat flux density which crosses the reactor wall is determined.

  13. Accurate estimate of the relic density and the kinetic decoupling in nonthermal dark matter models

    International Nuclear Information System (INIS)

    Arcadi, Giorgio; Ullio, Piero

    2011-01-01

    Nonthermal dark matter generation is an appealing alternative to the standard paradigm of thermal WIMP dark matter. We reconsider nonthermal production mechanisms in a systematic way, and develop a numerical code for accurate computations of the dark matter relic density. We discuss, in particular, scenarios with long-lived massive states decaying into dark matter particles, appearing naturally in several beyond the standard model theories, such as supergravity and superstring frameworks. Since nonthermal production favors dark matter candidates with large pair annihilation rates, we analyze the possible connection with the anomalies detected in the lepton cosmic-ray flux by Pamela and Fermi. Concentrating on supersymmetric models, we consider the effect of these nonstandard cosmologies in selecting a preferred mass scale for the lightest supersymmetric particle as a dark matter candidate, and the consequent impact on the interpretation of new physics discovered or excluded at the LHC. Finally, we examine a rather predictive model, the G2-MSSM, investigating some of the standard assumptions usually implemented in the solution of the Boltzmann equation for the dark matter component, including coannihilations. We question the hypothesis that kinetic equilibrium holds along the whole phase of dark matter generation, and the validity of the factorization usually implemented to rewrite the system of a coupled Boltzmann equation for each coannihilating species as a single equation for the sum of all the number densities. As a byproduct we develop here a formalism to compute the kinetic decoupling temperature in case of coannihilating particles, which can also be applied to other particle physics frameworks, and also to standard thermal relics within a standard cosmology.
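
    The single-species freeze-out calculation that these nonthermal scenarios generalise is usually written as a Boltzmann equation for the comoving yield Y(x), with x = m/T: dY/dx = -(lambda/x^2) (Y^2 - Y_eq^2). The sketch below integrates that standard form with a toy, constant effective cross-section term; it is a qualitative illustration, not a physical relic-density computation and not the coupled coannihilation system discussed in the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy normalisation bundling the annihilation cross-section and cosmology;
    # an arbitrary illustration, not a physical value.
    lam = 1.0e8

    def y_eq(x):
        # Non-relativistic equilibrium yield, up to an O(1) toy prefactor.
        return 0.145 * x ** 1.5 * np.exp(-x)

    def rhs(x, y):
        return -(lam / x ** 2) * (y[0] ** 2 - y_eq(x) ** 2)

    sol = solve_ivp(rhs, (1.0, 1000.0), [y_eq(1.0)],
                    method="Radau", rtol=1e-8, atol=1e-30)
    print(f"asymptotic yield Y_inf ~ {sol.y[0, -1]:.3e}")
    # A larger 'lam' (stronger annihilation) gives a smaller relic yield, which is
    # why nonthermal production favours candidates with large annihilation rates.
    ```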

  14. Determinants of the reliability of ultrasound tomography sound speed estimates as a surrogate for volumetric breast density

    Energy Technology Data Exchange (ETDEWEB)

    Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)

    2015-10-15

    Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.

  15. Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil

    Science.gov (United States)

    Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz

    2012-01-01

    After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and positive filariasis cases identified in this survey were georeferenced in point form, using the GPS. A kernel intensity estimator was applied to identify clusters with greater intensity of cases. We examined 23 673 individuals and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0–5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest intensity and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
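
    A kernel intensity surface of the kind used above can be built directly from georeferenced case points. The sketch below uses synthetic coordinates and SciPy's Gaussian kernel density estimator, not the Jaboatão dos Guararapes survey data.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Synthetic projected case locations (km); stand-ins for GPS-referenced cases.
    rng = np.random.default_rng(42)
    cluster_a = rng.normal(loc=(2.0, 3.0), scale=0.5, size=(200, 2))
    cluster_b = rng.normal(loc=(7.0, 6.0), scale=1.0, size=(100, 2))
    cases = np.vstack([cluster_a, cluster_b]).T       # shape (2, n) for gaussian_kde

    kde = gaussian_kde(cases, bw_method="scott")

    # Evaluate the intensity surface on a grid to locate high-intensity areas.
    xx, yy = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
    density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

    peak = np.unravel_index(np.argmax(density), density.shape)
    print("highest-intensity grid cell (x, y):", xx[peak], yy[peak])
    ```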

  16. A first estimate of the structure and density of the populations of pet cats and dogs across Great Britain.

    Science.gov (United States)

    Aegerter, James; Fouracre, David; Smith, Graham C

    2017-01-01

    Policy development, implementation, and effective contingency response rely on a strong evidence base to ensure success and cost-effectiveness. Where this includes preventing the establishment or spread of zoonotic or veterinary diseases infecting companion cats and dogs, descriptions of the structure and density of the populations of these pets are useful. Similarly, such descriptions may help in supporting diverse fields of study such as evidence-based veterinary practice, veterinary epidemiology, public health and ecology. As well as maps of where pets are, estimates of how many may rarely, or never, be seen by veterinarians and might not be appropriately managed in the event of a disease outbreak are also important. Unfortunately both sources of evidence are absent from the scientific and regulatory literatures. We make this first estimate of the structure and density of pet populations by using the most recent national population estimates of cats and dogs across Great Britain and subdividing these spatially, and categorically across ownership classes. For the spatial model we used the location and size of veterinary practices across GB to predict the local density of pets, using client travel time to define catchments around practices, and combined this with residential address data to estimate the rate of ownership. For the estimates of pets which may provoke problems in managing a veterinary or zoonotic disease we reviewed the literature and defined a comprehensive suite of ownership classes for cats and dogs, collated estimates of the sub-populations for each ownership class as well as their rates of interaction, and produced a coherent scaled description of the structure of the national population. The predicted density of pets varied substantially, with the lowest densities in rural areas, and the highest in the centres of large cities where each species could exceed 2500 animals/km2. Conversely, the number of pets per household showed the opposite

  17. Innovative Methods for Estimating Densities and Detection Probabilities of Secretive Reptiles Including Invasive Constrictors and Rare Upland Snakes

    Science.gov (United States)

    2018-01-30

    home range maintenance or attraction to or avoidance of landscape features, including roads (Morales et al. 2004, McClintock et al. 2012). For example... radiotelemetry and extensive road survey data are used to generate the first density estimates available for the species. The results show that southern... secretive snakes that combines behavioral observations of snake road crossing speed, systematic road survey data, and simulations of spatial

  18. Study of the Spacecraft Potential Under Active Control and Plasma Density Estimates During the MMS Commissioning Phase

    Science.gov (United States)

    Andriopoulou, M.; Nakamura, R.; Torkar, K.; Baumjohann, W.; Torbert, R. B.; Lindqvist, P.-A.; Khotyaintsev, Y. V.; Dorelli, John Charles; Burch, J. L.; Russell, C. T.

    2016-01-01

    Each spacecraft of the recently launched Magnetospheric Multiscale (MMS) mission is equipped with Active Spacecraft Potential Control (ASPOC) instruments, which control the spacecraft potential in order to reduce spacecraft charging effects. ASPOC typically reduces the spacecraft potential to a few volts. On several occasions during the commissioning phase of the mission, the ASPOC instruments were operating on only one spacecraft at a time. Taking advantage of such intervals, we derive photoelectron curves, reconstruct the uncontrolled spacecraft potential for the spacecraft with active control, and estimate the electron plasma density during those periods. We also establish the criteria under which our methods can be applied.

  19. Management of Water Quantity and Quality Based on Copula for a Tributary to Miyun Reservoir, Beijing

    Science.gov (United States)

    Zang, N.; Wang, X.; Liang, P.

    2017-12-01

    Due to the complex mutual influence between the water quantity and water quality of a river, it is difficult to reflect the actual characteristics of the tributaries flowing into a reservoir. In this study, acceptable marginal probability distributions for the water quantity and quality of reservoir inflow were calculated. A bivariate Archimedean copula was then applied to establish their joint distribution function. Multiple combination scenarios of water quantity and water quality were designed to analyze their coexistence relationship and reservoir management strategies. The Bai River, an important tributary of the Miyun Reservoir, was taken as a case study. The results showed that it is feasible to apply the Frank copula function to describe the joint distribution of water quality and water quantity for the Bai River. Furthermore, the monitoring of TP concentration needs to be strengthened in the Bai River. This methodology can be extended to larger dimensions and is transferable to other reservoirs via the establishment of models with relevant data for a particular area. Our findings help to better analyze the coexistence relationship and degree of influence between the water quantity and quality of tributaries to a reservoir for the purpose of water resources protection.
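
    A common way to fit the Frank copula is to invert the relationship between its parameter theta and Kendall's tau, tau(theta) = 1 - (4/theta) * (1 - D1(theta)), where D1 is the first Debye function. The sketch below performs that moment-type fit on synthetic discharge and concentration series; the data and the bracketing interval are assumptions, not the Bai River observations.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq
    from scipy.stats import kendalltau

    def debye1(theta):
        """First Debye function: D1(theta) = (1/theta) * int_0^theta t/(e^t - 1) dt."""
        integrand = lambda t: t / np.expm1(t) if t > 1e-12 else 1.0
        val, _ = quad(integrand, 0.0, theta)
        return val / theta

    def frank_tau(theta):
        """Kendall's tau implied by a Frank copula with parameter theta > 0."""
        return 1.0 - 4.0 / theta * (1.0 - debye1(theta))

    def fit_frank_by_tau(x, y):
        """Moment-type fit: choose theta so the model tau matches the sample tau."""
        tau_hat, _ = kendalltau(x, y)
        return brentq(lambda th: frank_tau(th) - tau_hat, 0.1, 50.0)

    # Synthetic, positively dependent "discharge" and "concentration" series.
    rng = np.random.default_rng(3)
    z = rng.normal(size=500)
    discharge = np.exp(z + 0.3 * rng.normal(size=500))
    concentration = 5.0 + z + rng.normal(scale=0.8, size=500)

    print(f"fitted Frank theta: {fit_frank_by_tau(discharge, concentration):.2f}")
    ```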

  20. Spatial dependence in wind and optimal wind power allocation: A copula-based analysis

    International Nuclear Information System (INIS)

    Grothe, Oliver; Schnieders, Julius

    2011-01-01

    The investment decision on the placement of wind turbines is, neglecting legal formalities, mainly driven by the aim to maximize the expected annual energy production of single turbines. The result is a concentration of wind farms at locations with high average wind speed. While this strategy may be optimal for single investors maximizing their own return on investment, the resulting overall allocation of wind turbines may be unfavorable for energy suppliers and the economy because of large fluctuations in the overall wind power output. This paper investigates to what extent optimal allocation of wind farms in Germany can reduce these fluctuations. We analyze stochastic dependencies of wind speed for a large data set of German on- and offshore weather stations and find that these dependencies turn out to be highly nonlinear but constant over time. Using copula theory we determine the value at risk of energy production for given allocation sets of wind farms and derive optimal allocation plans. We find that the optimized allocation of wind farms may substantially stabilize the overall wind energy supply on daily as well as hourly frequency. - Highlights: → Spatial modeling of wind forces in Germany. → A novel way to assess nonlinear dependencies of wind forces by copulas. → Wind turbine allocation by maximizing lower quantiles of energy production. → Optimal results show major increase in reliable part of wind energy.

  1. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    Science.gov (United States)

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
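
    A much-simplified version of this processing chain can be scripted as: estimate PSDs of gated backscatter segments (Welch's periodogram here), track the sample-to-reference log spectral ratio over depth to get the attenuation at each frequency, and then fit the power law alpha(f) = alpha0 * f^beta in log-log space. The sketch below uses placeholder signals and a synthetic attenuation curve; it only shows the shape of the calculation, not the paper's reference phantom processing.

    ```python
    import numpy as np
    from scipy.signal import welch

    # --- Step 1: PSDs of gated RF echo segments (placeholders, assumed 40 MHz fs) ---
    fs = 40e6
    rng = np.random.default_rng(7)
    rf_sample = rng.normal(size=4096)       # stand-in for a sample-phantom RF segment
    rf_reference = rng.normal(size=4096)    # stand-in for a reference-phantom RF segment

    f, psd_sample = welch(rf_sample, fs=fs, nperseg=512)
    _, psd_reference = welch(rf_reference, fs=fs, nperseg=512)

    # --- Step 2 (schematic): the depth slope of log(psd_sample / psd_reference)
    # is proportional to the attenuation difference at each frequency. Here we jump
    # straight to a synthetic attenuation curve to demonstrate the power-law fit.
    freq_mhz = np.linspace(2.0, 8.0, 30)
    alpha_db_cm = 0.55 * freq_mhz ** 1.05 + rng.normal(scale=0.03, size=freq_mhz.size)

    # --- Step 3: fit alpha(f) = alpha0 * f**beta in log-log space ---
    beta, log_alpha0 = np.polyfit(np.log(freq_mhz), np.log(alpha_db_cm), deg=1)
    print(f"alpha0 = {np.exp(log_alpha0):.3f} dB/cm/MHz^beta, beta = {beta:.3f}")
    ```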

  2. Comparing Johnson’s SBB, Weibull and Logit-Logistic bivariate distributions for modeling tree diameters and heights using copulas

    Energy Technology Data Exchange (ETDEWEB)

    Cardil Forradellas, A.; Molina Terrén, D.M.; Oliveres, J.; Castellnou, M.

    2016-07-01

    Aim of study: In this study we compare the accuracy of three bivariate distributions: Johnson’s SBB, Weibull-2P and LL-2P functions for characterizing the joint distribution of tree diameters and heights. Area of study: North-West of Spain. Material and methods: Diameter and height measurements of 128 plots of pure and even-aged Tasmanian blue gum (Eucalyptus globulus Labill.) stands located in the North-west of Spain were considered in the present study. The SBB bivariate distribution was obtained from SB marginal distributions using a Normal Copula based on a four-parameter logistic transformation. The Plackett Copula was used to obtain the bivariate models from the Weibull and Logit-logistic univariate marginal distributions. The negative logarithm of the maximum likelihood function was used to compare the results and the Wilcoxon signed-rank test was used to compare the related samples of these logarithms calculated for each sample plot and each distribution. Main results: The best results were obtained by using the Plackett copula and the best marginal distribution was the Logit-logistic. Research highlights: The copulas used in this study have shown a good performance for modeling the joint distribution of tree diameters and heights. They could be easily extended for modelling multivariate distributions involving other tree variables, such as tree volume or biomass. (Author)

  3. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    Science.gov (United States)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events we can prevent or reduce the associated threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which may not be independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals using copulas. This document presents a general framework for bivariate and multivariate frequency analysis of extreme hydroclimatological events, such as severe storms, using Archimedean copulas. The analysis was conducted for precipitation events in the lower Tunjuelo River basin in Colombia. The results show that for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas, providing more accurate and reliable information on design storms and the associated risks. They also show how the use of copulas greatly simplifies the study of multivariate distributions and introduces the concept of the joint return period, which properly represents the needs of hydrological design in frequency analysis.
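
    The joint return period mentioned above follows directly from the copula: for exceeding both thresholds (the "AND" case) T = mu / (1 - u - v + C(u, v)), and for exceeding either (the "OR" case) T = mu / (1 - C(u, v)), with mu the mean interarrival time of events. The sketch below evaluates both with a Gumbel copula and assumed marginal probabilities, not parameters fitted to the Tunjuelo River data.

    ```python
    import numpy as np

    def gumbel_cdf(u, v, theta):
        """Gumbel (extreme value) Archimedean copula CDF, theta >= 1."""
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    def joint_return_periods(u, v, theta, mu_years=1.0):
        """Return (T_or, T_and): either-variable and both-variables exceedance."""
        c = gumbel_cdf(u, v, theta)
        t_or = mu_years / (1.0 - c)                     # X > x OR  Y > y
        t_and = mu_years / (1.0 - u - v + c)            # X > x AND Y > y
        return t_or, t_and

    # Assumed example: intensity and duration each at their 0.98 quantile,
    # moderately dependent (theta = 2); events occur on average once a year.
    t_or, t_and = joint_return_periods(0.98, 0.98, theta=2.0)
    print(f"OR return period  ~ {t_or:6.1f} years")
    print(f"AND return period ~ {t_and:6.1f} years")
    ```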

  4. 'A device for being able to book P&L': the organizational embedding of the Gaussian copula.

    Science.gov (United States)

    MacKenzie, Donald; Spears, Taylor

    2014-06-01

    This article, the second of two articles on the Gaussian copula family of models, discusses the attitude of 'quants' (modellers) to these models, showing that contrary to some accounts, those quants were not 'model dopes' who uncritically accepted the outputs of the models. Although sometimes highly critical of Gaussian copulas, even 'othering' them as not really being models, they nevertheless nearly all kept using them, an outcome we explain with reference to the embedding of these models in inter- and intra-organizational processes: communication, risk control and especially the setting of bonuses. The article also examines the role of Gaussian copula models in the 2007-2008 global crisis and in a 2005 episode known as 'the correlation crisis'. We end with the speculation that all widely used derivatives models (and indeed the evaluation culture in which they are embedded) help generate inter-organizational co-ordination, and all that is special in this respect about the Gaussian copula is that its status as 'other' makes this role evident.

  5. Carbon pool densities and a first estimate of the total carbon pool in the Mongolian forest-steppe.

    Science.gov (United States)

    Dulamsuren, Choimaa; Klinge, Michael; Degener, Jan; Khishigjargal, Mookhor; Chenlemuge, Tselmeg; Bat-Enerel, Banzragch; Yeruult, Yolk; Saindovdon, Davaadorj; Ganbaatar, Kherlenchimeg; Tsogtbaatar, Jamsran; Leuschner, Christoph; Hauck, Markus

    2016-02-01

    The boreal forest biome represents one of the most important terrestrial carbon stores, which has motivated intensive research on carbon stock densities. However, such an analysis does not yet exist for the southernmost Eurosiberian boreal forests in Inner Asia. Most of these forests are located in the Mongolian forest-steppe, which is largely dominated by Larix sibirica. We quantified the carbon stock density and total carbon pool of Mongolia's boreal forests and adjacent grasslands and draw conclusions on possible future change. Mean aboveground carbon stock density in the interior of L. sibirica forests was 66 Mg C ha^-1, which is in the upper range of values reported from boreal forests and is probably due to the comparably long growing season. The density of soil organic carbon (SOC, 108 Mg C ha^-1) and the total belowground carbon density (149 Mg C ha^-1) are at the lower end of the range known from boreal forests, which might be the result of higher soil temperatures and a thinner permafrost layer than in the central and northern boreal forest belt. Land use effects are especially relevant at forest edges, where mean carbon stock density was 188 Mg C ha^-1, compared with 215 Mg C ha^-1 in the forest interior. Carbon stock density in grasslands was 144 Mg C ha^-1. Analysis of satellite imagery of the highly fragmented forest area in the forest-steppe zone showed that Mongolia's total boreal forest area is currently 73 818 km^2, and 22% of this area refers to forest edges (defined as the first 30 m from the edge). The total forest carbon pool of Mongolia was estimated at ~1.5-1.7 Pg C, a value which is likely to decrease in the future with increasing deforestation and fire frequency, and global warming. © 2015 John Wiley & Sons Ltd.

  6. An isometric muscle force estimation framework based on a high-density surface EMG array and an NMF algorithm

    Science.gov (United States)

    Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu

    2017-08-01

    Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper that extracts the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized by a nonnegative matrix factorization (NMF) algorithm into a matrix of basis vectors, with each column representing an activation pattern, and a matrix of time-varying coefficients. The activation pattern with the highest activation intensity, defined as the sum of the absolute values of the time-varying coefficient curve, was considered the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using all channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the number of electrodes. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
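
    The channel-selection step described above (factorise the envelope matrix, keep the activation pattern with the largest summed coefficient curve, then use its heavily weighted channels) can be sketched with an off-the-shelf NMF. The matrix sizes, component count and threshold below are assumptions for illustration, not the settings used in the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Synthetic stand-in for an sEMG envelope matrix: 128 channels x 2000 samples.
    rng = np.random.default_rng(0)
    envelopes = np.abs(rng.normal(size=(128, 2000)))

    # Factorise envelopes ~ W @ H: columns of W are activation patterns across
    # channels, rows of H are their time-varying coefficients.
    model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(envelopes)        # (128, 4) channel weights
    H = model.components_                     # (4, 2000) time-varying coefficients

    # Major activation pattern = highest activation intensity (summed coefficients).
    major = np.argmax(np.abs(H).sum(axis=1))

    # Keep channels with high weighting factors for that pattern (assumed threshold).
    weights = W[:, major]
    selected = np.where(weights > weights.mean() + weights.std())[0]
    activation_signal = envelopes[selected].mean(axis=0)   # input to the force model

    # A polynomial force model would then be fitted on this signal, e.g.
    # np.polyfit(activation_signal, measured_force, deg=2) given synchronous force data.
    print(f"major pattern: {major}, channels kept: {selected.size}")
    ```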

  7. Assessment of the reliability of human corneal endothelial cell-density estimates using a noncontact specular microscope.

    Science.gov (United States)

    Doughty, M J; Müller, A; Zaman, M L

    2000-03-01

    We sought to determine the variance in endothelial cell density (ECD) estimates for human corneal endothelia. Noncontact specular micrographs were obtained from white subjects without any history of contact lens wear or major eye disease or surgery; subjects were within four age groups (children, young adults, older adults, senior citizens). The endothelial image was scanned, and the areas of ≥75 cells were measured from an overlay by planimetry. The cell-area values were used to calculate the ECD repeatedly, so that the intra- and intersubject variation in an average ECD estimate could be assessed using different numbers of cells (5, 10, 15, etc.). An average ECD of 3,519 cells/mm2 (range, 2,598-5,312 cells/mm2) was obtained from counts of 75 cells/endothelium from individuals aged 6-83 years. Average ECD estimates in each age group were 4,124, 3,457, 3,360, and 3,113 cells/mm2, respectively. Analysis of intersubject variance revealed that ECD estimates would be expected to be no better than +/-10% if only 25 cells were measured per endothelium, but approach +/-2% if 75 cells are measured. In assessing the corneal endothelium by noncontact specular microscopy, the cell count should be given, and this should be ≥75/endothelium for the expected variance to be at a level close to that recommended for monitoring age-, stress-, or surgery-related changes.
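
    How the precision of an ECD estimate depends on the number of cells measured can be checked by resampling: draw subsets of k cell areas, recompute ECD = 10^6 / mean cell area (areas in um^2), and look at the spread. The sketch below uses simulated cell areas rather than the specular micrograph measurements, so the percentages differ from those reported above, but the 1/sqrt(k) shrinkage of the spread is the same qualitative behaviour.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulated cell areas (um^2); a mean near 286 um^2 corresponds to ~3500 cells/mm^2.
    cell_areas = rng.gamma(shape=10.0, scale=286.0 / 10.0, size=75)

    def ecd_from_areas(areas_um2):
        """Endothelial cell density in cells/mm^2 from measured cell areas."""
        return 1.0e6 / np.mean(areas_um2)

    # Spread of the ECD estimate when only k of the cells are measured.
    for k in (5, 15, 30, 50, 75):
        estimates = [ecd_from_areas(rng.choice(cell_areas, size=k, replace=True))
                     for _ in range(2000)]
        spread = 100.0 * np.std(estimates) / np.mean(estimates)
        print(f"{k:2d} cells measured -> ECD spread ~ +/-{spread:.1f}%")
    ```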

  8. Axonal diameter and density estimated with 7-Tesla hybrid diffusion imaging in transgenic Alzheimer rats

    Science.gov (United States)

    Daianu, Madelaine; Jacobs, Russell E.; Town, Terrence; Thompson, Paul M.

    2016-03-01

    Diffusion-weighted MR imaging (DWI) is a powerful tool to study brain tissue microstructure. DWI is sensitive to subtle changes in the white matter (WM), and can provide insight into abnormal brain changes in diseases such as Alzheimer's disease (AD). In this study, we used 7-Tesla hybrid diffusion imaging (HYDI) to scan 3 transgenic rats (line TgF344-AD, which models the full clinico-pathological spectrum of the human disease) ex vivo at 10, 15 and 24 months. We acquired 300 DWI volumes across 5 q-sampling shells (b=1000, 3000, 4000, 8000, 12000 s/mm2). From the top three b-value shells with highest signal-to-noise ratios, we reconstructed markers of WM disease, including indices of axon density and diameter in the corpus callosum (CC) - directly quantifying processes that occur in AD. As expected, apparent anisotropy progressively decreased with age; there were also decreases in the intra- and extra-axonal MR signal along axons. Axonal diameters were larger in segments of the CC (splenium and body, but not genu), possibly indicating neuritic dystrophy - characterized by enlarged axons and dendrites as previously observed at the ultrastructural level (see Cohen et al., J. Neurosci. 2013). This was further supported by increases in MR signals trapped in glial cells, CSF and possibly other small compartments in WM structures. Finally, tractography detected fewer fibers in the CC at 10 versus 24 months of age. These novel findings offer great potential to provide technical and scientific insight into the biology of brain disease.

  9. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.

  10. Using non-invasively collected genetic data to estimate density and population size of tigers in the Bangladesh Sundarbans

    Directory of Open Access Journals (Sweden)

    M. Abdul Aziz

    2017-10-01

    Population density is a key parameter for monitoring endangered carnivores in the wild. The photographic capture-recapture method has been widely used for decades to monitor tigers, Panthera tigris; however, the application of this method in the Sundarbans tiger landscape is challenging due to logistical difficulties. We therefore carried out molecular analyses of DNA contained in non-invasively collected genetic samples to assess the tiger population in the Bangladesh Sundarbans within a spatially explicit capture-recapture (SECR) framework. By surveying four representative sample areas totalling 1994 km2 of the Bangladesh Sundarbans, we collected 440 suspected tiger scat and hair samples. Genetic screening of these samples provided 233 authenticated tiger samples, which we attempted to amplify at 10 highly polymorphic microsatellite loci. Of these, 105 samples were successfully amplified, representing 45 unique genotype profiles. The capture-recapture analyses of these unique genotypes within the SECR model provided a density estimate of 2.85 ± SE 0.44 tigers/100 km2 (95% CI: 1.99–3.71 tigers/100 km2) for the area sampled, and an estimate of 121 tigers (95% CI: 84–158 tigers) for the total area of the Bangladesh Sundarbans. We demonstrate that this non-invasive genetic surveillance can be an additional approach for monitoring tiger populations in a landscape where camera-trapping is challenging.

  11. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo

    2016-01-01

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage

  12. Estimation of Probability Density Functions of Damage Parameter for Valve Leakage Detection in Reciprocating Pump Used in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Jong Kyeom Lee

    2016-10-01

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.

  13. Core Power Control of the fast nuclear reactors with estimation of the delayed neutron precursor density using Sliding Mode method

    International Nuclear Information System (INIS)

    Ansarifar, G.R.; Nasrabadi, M.N.; Hassanvand, R.

    2016-01-01

    Highlights: • We present an S.M.C. system based on an S.M.O. for control of fast reactor power. • An S.M.O. has been developed to estimate the delayed neutron precursor density. • The stability analysis has been given by means of the Lyapunov approach. • The control system is guaranteed to be stable within a large range. • The comparison between the S.M.C. and a conventional PID controller has been done. - Abstract: In this paper, a nonlinear controller based on the sliding mode method, a robust nonlinear control technique, is designed to control a fast nuclear reactor. The reactor core is simulated based on the point kinetics equations with one delayed neutron group. Considering the limitations of delayed neutron precursor density measurement, a sliding mode observer is designed to estimate it, and finally a sliding mode control based on the sliding mode observer is presented. The stability analysis is given by means of the Lyapunov approach, so the control system is guaranteed to be stable within a large range. Sliding Mode Control (SMC) is a robust nonlinear method with several advantages, such as robustness against matched external disturbances and parameter uncertainties. The employed method is easy to implement in practical applications and, moreover, the sliding mode control exhibits the desired dynamic properties during the entire output-tracking process, independent of perturbations. Simulation results are presented to demonstrate the effectiveness of the proposed controller in terms of performance, robustness and stability.
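
    A minimal sketch of the observer idea, under assumed (illustrative) point-kinetics parameters and switching gains: one-delayed-group point kinetics are integrated, and a sliding mode observer driven only by the measured neutron density reconstructs the unmeasured precursor density. This is not the paper's controller design, only the estimation step it relies on.

        import numpy as np

        # Illustrative point-kinetics parameters (assumed, not from the paper).
        beta, Lambda, lam = 0.0065, 1e-5, 0.08  # delayed fraction, generation time, decay constant
        rho = 0.0005                            # small constant reactivity insertion
        dt, steps = 1e-4, 50000                 # Euler step and horizon (5 s)

        # True plant state: neutron density n and precursor density c (normalised units).
        n, c = 1.0, beta / (Lambda * lam)       # equilibrium precursor level for n = 1
        # Observer state and switching gains (assumed values).
        n_hat, c_hat = 1.0, 0.5 * c
        k1, k2 = 50.0, 5e3

        for _ in range(steps):
            # One-delayed-group point kinetics.
            dn = ((rho - beta) / Lambda) * n + lam * c
            dc = (beta / Lambda) * n - lam * c
            # Sliding mode observer driven by the measurable output error (n - n_hat).
            e = n - n_hat
            dn_hat = ((rho - beta) / Lambda) * n_hat + lam * c_hat + k1 * np.sign(e)
            dc_hat = (beta / Lambda) * n_hat - lam * c_hat + k2 * np.sign(e)
            n, c = n + dt * dn, c + dt * dc
            n_hat, c_hat = n_hat + dt * dn_hat, c_hat + dt * dc_hat

        print(f"true precursor density:      {c:.1f}")
        print(f"estimated precursor density: {c_hat:.1f}")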

  14. The attraction of the pyramids: virtual realization of Hutton's suggestion to improve Maskelyne's 1774 Earth density estimate

    Directory of Open Access Journals (Sweden)

    J. R. Smallwood

    2018-01-01

    Full Text Available Charles Hutton suggested in 1821 that the pyramids of Egypt be used to site an experiment to measure the deflection of the vertical by a large mass. The suggestion arose as he had estimated the attraction of a Scottish mountain as part of Nevil Maskelyne's (1774) "Schiehallion Experiment", a demonstration of Isaac Newton's law of gravitational attraction and the earliest reasonable quantitative estimate of Earth's mean density. I present a virtual realization of an experiment at the Giza pyramids to investigate how Hutton's concept might have emerged had it been undertaken as he suggested. The attraction of the Great Pyramid would have led to inward north–south deflections of the vertical totalling 1.8 arcsec (0.0005°), and east–west deflections totalling 2.0 arcsec (0.0006°), which although small, would have been within the contemporaneous detectable range, and potentially given, as Hutton wished, a more accurate Earth density measurement than he reported from the Schiehallion experiment.

  15. The attraction of the pyramids: virtual realization of Hutton's suggestion to improve Maskelyne's 1774 Earth density estimate

    Science.gov (United States)

    Smallwood, John R.

    2018-01-01

    Charles Hutton suggested in 1821 that the pyramids of Egypt be used to site an experiment to measure the deflection of the vertical by a large mass. The suggestion arose as he had estimated the attraction of a Scottish mountain as part of Nevil Maskelyne's (1774) "Schiehallion Experiment", a demonstration of Isaac Newton's law of gravitational attraction and the earliest reasonable quantitative estimate of Earth's mean density. I present a virtual realization of an experiment at the Giza pyramids to investigate how Hutton's concept might have emerged had it been undertaken as he suggested. The attraction of the Great Pyramid would have led to inward north-south deflections of the vertical totalling 1.8 arcsec (0.0005°), and east-west deflections totalling 2.0 arcsec (0.0006°), which although small, would have been within the contemporaneous detectable range, and potentially given, as Hutton wished, a more accurate Earth density measurement than he reported from the Schiehallion experiment.

  16. The visualization and analysis of urban facility pois using network kernel density estimation constrained by multi-factors

    Directory of Open Access Journals (Sweden)

    Wenhao Yu

    Full Text Available Urban facilities, among the most important service providers, are usually represented in GIS applications by sets of points using the POI (Point of Interest) model, associated with particular human social activities. Knowledge about the distribution intensity and pattern of facility POIs is of great significance in spatial analysis, including urban planning, business location choice and social recommendation. Kernel Density Estimation (KDE), an efficient spatial-statistics tool for facilitating such analyses, plays an important role in spatial density evaluation, because the KDE method accounts for the distance decay of services and enriches a simple input scatter of points into a smooth output density surface. However, traditional KDE is based mainly on Euclidean distance, ignoring the fact that in an urban street network the service function of a POI operates over a network-constrained structure rather than in a continuous Euclidean space. To address this issue, this study proposes a computational method for KDE on a network and adopts a new visualization approach using a 3-D "wall" surface. Several real-world conditioning factors are also taken into account, such as traffic capacity, road direction and differences between facilities. The proposed method is applied to real POI data for Shenzhen, China, to depict the distribution characteristics of services under the impact of these multiple factors.
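
    As a sketch of the network-constrained idea (not the authors' implementation), the kernel can be evaluated over shortest-path distances on a street graph instead of Euclidean distances. The toy graph, POI placement, bandwidth, quartic kernel and crude normalization below are all assumptions; proper normalization on networks is more involved.

        import networkx as nx

        # Toy street network: nodes are intersections, edge weights are segment lengths (m).
        G = nx.Graph()
        edges = [("A", "B", 120), ("B", "C", 80), ("C", "D", 150),
                 ("B", "E", 60), ("E", "F", 90), ("C", "F", 70)]
        G.add_weighted_edges_from(edges)

        pois = ["B", "F", "F", "C"]   # facility POIs snapped to network nodes
        bandwidth = 200.0             # kernel bandwidth in metres

        def network_kde(node):
            """Kernel density at a node using shortest-path (network) distances to POIs."""
            dist = nx.single_source_dijkstra_path_length(G, node, weight="weight")
            density = 0.0
            for p in pois:
                d = dist.get(p, float("inf"))
                if d < bandwidth:
                    u = d / bandwidth
                    density += (15 / 16) * (1 - u**2) ** 2  # quartic (biweight) kernel
            # Crude per-metre scaling; network KDE normalization is more subtle in practice.
            return density / (len(pois) * bandwidth)

        for node in G.nodes:
            print(f"density at {node}: {network_kde(node):.5f}")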

  17. Effective Dysphonia Detection Using Feature Dimension Reduction and Kernel Density Estimation for Patients with Parkinson’s Disease

    Science.gov (United States)

    Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar

    2014-01-01

    Detection of dysphonia is useful for monitoring the progression of phonatory impairment in patients with Parkinson's disease (PD), and also helps assess disease severity. This paper describes statistical pattern analysis methods for studying different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto a bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform linear classification of voice records from healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machines (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier correctly distinguished 91.8% of the voice records, with a sensitivity of 0.986, a specificity of 0.708, and an area of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance of the MAP classifier was superior to that of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly. PMID:24586406
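
    The pipeline described above (KPCA projection, class-conditional kernel density estimates, MAP decision rule) can be sketched with scikit-learn as below; the synthetic features, bandwidth and kernel settings are placeholders rather than the study's actual vocal measures and tuning.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KernelDensity

        # Synthetic stand-in for the selected vocal measures (dimensions are arbitrary).
        rng = np.random.default_rng(1)
        X_healthy = rng.normal(0.0, 1.0, size=(60, 4))
        X_pd = rng.normal(1.0, 1.2, size=(60, 4))
        X = np.vstack([X_healthy, X_pd])
        y = np.array([0] * 60 + [1] * 60)

        # Step 1: project the features onto a bivariate space with kernel PCA.
        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
        Z = kpca.fit_transform(X)

        # Step 2: nonparametric class-conditional densities via kernel density estimation.
        kde = {c: KernelDensity(bandwidth=0.5).fit(Z[y == c]) for c in (0, 1)}
        log_prior = {c: np.log(np.mean(y == c)) for c in (0, 1)}

        # Step 3: MAP decision rule, i.e. pick the class with the highest posterior.
        def map_classify(z):
            scores = {c: kde[c].score_samples(z.reshape(1, -1))[0] + log_prior[c]
                      for c in (0, 1)}
            return max(scores, key=scores.get)

        pred = np.array([map_classify(z) for z in Z])
        print(f"training accuracy of the MAP classifier: {np.mean(pred == y):.2f}")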

  18. Mobile sailing robot for automatic estimation of fish density and monitoring water quality.

    Science.gov (United States)

    Koprowski, Robert; Wróbel, Zygmunt; Kleszcz, Agnieszka; Wilczyński, Sławomir; Woźnica, Andrzej; Łozowski, Bartosz; Pilarczyk, Maciej; Karczewski, Jerzy; Migula, Paweł

    2013-07-01

    The paper presents the methodology and the algorithm developed to analyze sonar images for fish detection in small water bodies and to measure their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. The measurement method for estimating the number of fish with the automatic robot is based on sequentially counting the occurrences of fish along a set trajectory. The analysis of the sonar data concerned automatic recognition of fish using image analysis and processing methods. An image analysis algorithm, a mobile robot with remote control in the 2.4 GHz band, and fully encrypted communication with the data archiving station were developed as part of this study. For the three model fish ponds where verification against fish catches was carried out (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. The robot, together with the developed software, can operate remotely in a variety of harsh weather and environmental conditions, is fully automated, and can be controlled remotely over the Internet. The system records the spatial location of fish (GPS coordinates and depth). The purpose of the robot is non-invasive measurement of the number of fish in water reservoirs and of the quality of drinking water consumed by humans, especially where local sources of pollution could significantly affect the quality of water collected for treatment and where access is difficult. Used systematically and equipped with the appropriate sensors, the robot can form part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) that could endanger their health.
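
    A minimal sketch of the sequential-counting step only, assuming hypothetical detection records and an assumed effective sonar swath width; the actual system derives detections from sonar image analysis and logs GPS position and depth for each fish.

        # Summing detections logged along the robot's trajectory and relating the
        # count to the surveyed strip area (all values below are illustrative).
        detections = [
            {"lat": 50.2871, "lon": 19.0401, "depth_m": 2.1},
            {"lat": 50.2873, "lon": 19.0404, "depth_m": 1.7},
            {"lat": 50.2876, "lon": 19.0409, "depth_m": 2.4},
        ]

        trajectory_length_m = 850.0   # length of the surveyed path (assumed)
        swath_width_m = 4.0           # effective sonar coverage across the path (assumed)

        surveyed_area_m2 = trajectory_length_m * swath_width_m
        fish_count = len(detections)
        density_per_100m2 = 100.0 * fish_count / surveyed_area_m2

        print(f"{fish_count} fish over {surveyed_area_m2:.0f} m^2 "
              f"-> {density_per_100m2:.2f} fish / 100 m^2")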

  19. Estimating canopy bulk density and canopy base height for conifer stands in the interior Western United States using the Forest Vegetation Simulator Fire and Fuels Extension.

    Science.gov (United States)

    Seth Ex; Frederick Smith; Tara Keyser; Stephanie Rebain

    2017-01-01

    The Forest Vegetation Simulator Fire and Fuels Extension (FFE-FVS) is often used to estimate canopy bulk density (CBD) and canopy base height (CBH), which are key indicators of crown fire hazard for conifer stands in the Western United States. Estimated CBD from FFE-FVS is calculated as the maximum 4 m running mean bulk density of predefined 0.3 m thick canopy layers (...
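
    The CBD rule quoted above (the maximum 4 m running mean of bulk density over 0.3 m thick layers) can be sketched directly; the layer profile below is made up, and the CBH threshold shown is only a commonly used convention, not necessarily the FFE-FVS default.

        import numpy as np

        layer_thickness_m = 0.3
        # Assumed vertical profile of canopy bulk density per 0.3 m layer (kg/m^3).
        profile = np.array([0.00, 0.01, 0.03, 0.06, 0.09, 0.11, 0.12, 0.12,
                            0.10, 0.08, 0.06, 0.04, 0.02, 0.01, 0.00])

        # Maximum 4 m running mean of the layer bulk densities (about 13 layers).
        window_layers = int(round(4.0 / layer_thickness_m))
        running_means = np.convolve(profile, np.ones(window_layers) / window_layers,
                                    mode="valid")
        cbd = running_means.max()

        # CBH is often taken as the lowest height where bulk density exceeds a small
        # threshold; 0.011 kg/m^3 here is an assumption, not a quoted FFE-FVS value.
        threshold = 0.011
        cbh = np.argmax(profile > threshold) * layer_thickness_m

        print(f"CBD = {cbd:.3f} kg/m^3, CBH = {cbh:.1f} m")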

  20. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    Science.gov (United States)

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This is complicated to answer due to many issues, one of which is the potential presence of correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models by analyzing the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in the residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data, where exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects the injury severity of the at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on the at-fault injury outcome. Conversely, and importantly, the effect of at-fault vehicle speed on the injury severity of the not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability
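
    To make the copula mechanics concrete, the sketch below joins two assumed ordinal injury-severity marginals with a Clayton (Archimedean) copula and forms the joint probability table by rectangle differences; the marginal probabilities and dependence parameter are illustrative, not estimates from the Virginia data, and the paper's full seemingly unrelated ordered response specification is not reproduced.

        import numpy as np

        def clayton_cdf(u, v, theta):
            """Clayton copula CDF; theta > 0 gives positive (lower-tail) dependence."""
            if u == 0.0 or v == 0.0:
                return 0.0
            return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

        # Assumed marginal distributions over ordered injury categories
        # (no injury, possible, evident, severe/fatal) for each driver.
        p_at_fault = [0.45, 0.30, 0.17, 0.08]
        p_not_at_fault = [0.50, 0.28, 0.14, 0.08]
        F1 = np.concatenate([[0.0], np.cumsum(p_at_fault)])
        F2 = np.concatenate([[0.0], np.cumsum(p_not_at_fault)])
        theta = 1.5   # assumed dependence parameter

        # Joint cell probabilities from rectangle (inclusion-exclusion) differences
        # of the copula evaluated at the marginal CDF cut points.
        joint = np.zeros((4, 4))
        for i in range(4):
            for j in range(4):
                joint[i, j] = (clayton_cdf(F1[i + 1], F2[j + 1], theta)
                               - clayton_cdf(F1[i], F2[j + 1], theta)
                               - clayton_cdf(F1[i + 1], F2[j], theta)
                               + clayton_cdf(F1[i], F2[j], theta))

        print(np.round(joint, 3))   # joint injury-severity probability table
        print(joint.sum())          # sanity check: cells sum to 1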