WorldWideScience

Sample records for estimation target distributions

  1. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters

    Energy Technology Data Exchange (ETDEWEB)

    Xu Huijun; Gordon, J. James; Siebers, Jeffrey V. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)]

    2011-02-15

    Purpose: A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D_v exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Methods: Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with a prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ...
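
    The Methods paragraph above is concrete enough to sketch. Below is a minimal Python sketch of the two sampling ingredients it describes: stepping a structure outward in radial increments δ until the prescription isodose is crossed, and drawing isotropic directions on the unit sphere. The dose interpolator dose_at and all default values are illustrative assumptions, not from the paper.

```python
import numpy as np

def dosimetric_margin(dose_at, surface_pt, direction, d_presc,
                      delta=0.1, max_mm=50.0):
    """Step outward from a structure surface point along `direction`
    in radial steps of `delta` mm until the d_presc isodose is crossed;
    the distance travelled is the dosimetric margin in that direction."""
    step = np.asarray(direction, float)
    step /= np.linalg.norm(step)
    inside = dose_at(surface_pt) >= d_presc
    r = 0.0
    while r < max_mm:
        r += delta
        if (dose_at(surface_pt + r * step) >= d_presc) != inside:
            return r                    # isodose surface crossed
    return max_mm                       # not crossed within the search range

def isotropic_directions(n, rng=np.random.default_rng(0)):
    """Directions uniformly distributed on the unit sphere (isotropic
    sampling); angular sampling would instead use a fixed grid in angle."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)
```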

  2. Development of distributed target

    CERN Document Server

    Yu Hai Jun; Li Qin; Zhou Fu Xin; Shi Jin Shui; Ma Bing; Chen Nan; Jing Xiao Bing

    2002-01-01

    Linear induction accelerators are expected to generate small-diameter X-ray spots with high intensity. The interaction of the electron beam with plasmas generated at the X-ray converter makes the spot on the target grow with time, degrading the X-ray dose and the imaging resolving power. A distributed target was developed which has about 24 thin 0.05 mm tantalum films distributed over 1 cm. Owing to this structure, spreading the target material over a large volume decreases the energy deposition per unit volume and hence reduces the temperature of the target surface, which in turn slows the initial plasma formation and its expansion velocity. A comparison and analysis of the two kinds of target structures are presented using numerical calculation and experiments; the results show that the X-ray dose and normalized angle distribution of the two are basically the same, while the surface of the distributed target is not destroyed like that of the previous block target.

  3. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland)]

    1996-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of the load models is limited in any given network because many local circumstances differ from utility to utility and from time to time. Therefore there is a need to improve the load models. Distribution load estimation (DLE) is the method developed here to improve the load estimates obtained from the load models. The method is also quite cheap to apply, as it utilises information that is already available in SCADA systems.

  4. Distribution load estimation (DLE)

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A.; Lehtonen, M. [VTT Energy, Espoo (Finland)]

    1998-08-01

    Load research has produced customer class load models to convert customers' annual energy consumption to hourly load values. The reliability of load models derived from a nation-wide sample is limited in any specific network because many local circumstances differ from utility to utility and from time to time. Therefore there is a need to find improvements to the load models or, more generally, to the load estimates. In Distribution Load Estimation (DLE), measurements from the network are utilized to improve the customer class load models. The results of DLE are new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented.
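
    The published DLE method combines the load models with network measurements statistically; as a minimal illustration of the data flow, the sketch below rescales hypothetical customer class load models so that each hour's aggregate matches the measured feeder load. All numbers are invented, and simple proportional scaling stands in for the actual estimator.

```python
import numpy as np

# Class load models: expected hourly load (kW) for classes A, B, C,
# already scaled by each customer group's annual energy consumption.
model = np.array([[40.0, 55.0, 70.0],    # hour 1
                  [35.0, 60.0, 80.0]])   # hour 2
measured = np.array([170.0, 180.0])      # SCADA feeder measurements (kW)

# Proportional correction: adjust class loads to sum to the measurement
k = measured / model.sum(axis=1)
improved = model * k[:, None]
print(improved)  # corrected class loads; each row sums to the measured total
```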

  5. Multisensor estimation: New distributed algorithms

    Directory of Open Access Journals (Sweden)

    Plataniotis K. N.

    1997-01-01

    The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to process the information locally and which deliver results identical to those generated by their centralized counterparts, are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  6. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology; therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper ...

  7. Poverty Targeting Classifications and Distributional Effects

    OpenAIRE

    Elio H Londero

    2004-01-01

    This paper reviews two common definitions of poverty targeted projects, discusses the limitations of poverty targeting classifications, calls for a poverty focused cost-benefit analysis that looks at the main policy constraints affecting the distribution of project benefits, and argues for looking at the distribution of net benefits. Finally, it offers some conclusions for the distributionally-minded applied economists.

  8. Estimator design for re-entry targets.

    Science.gov (United States)

    Huang, Chun-Wei; Lin, Chun-Liang; Lin, Yu-Ping

    2014-03-01

    This study proposes a trajectory estimation scheme for tactical ballistic missiles (TBMs). Target information acquired from the ground-based radar system is investigated by incorporating input estimation (IE) and extended Kalman filtering techniques. In addition to estimating the missile's position and velocity, particular focus is placed on estimating the TBM's evasive acceleration and ballistic coefficient. In the demonstrative example, radar measurement errors serve as specifications while characterizing the acquirable zone of the ground-based radar system. The effectiveness of the proposed design is fully verified by examining the estimation performance.

  9. Estimating Loan-to-value Distributions

    DEFF Research Database (Denmark)

    Korteweg, Arthur; Sørensen, Morten

    2016-01-01

    ... procedure to recover the price path for individual properties and produce selection-corrected estimates of historical CLTV distributions. Estimating our model with transactions of residential properties in Alameda, California, we find that 35% of single-family homes are underwater, compared to 19% estimated ... by existing approaches. Our results reduce the index revision problem and have applications for pricing mortgage-backed securities ...

  10. Estimation accuracy of exponential distribution parameters

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods, and determined the best method for estimation using different values of the parameters and different sample sizes.
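
    Two of the compared estimators are simple enough to state exactly, and a small Monte Carlo experiment reproduces the MSE comparison protocol. This is a sketch with invented sample sizes and parameter values, not the paper's study design.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 2.0, 1.5, 30, 5000   # location, scale, sample size
mse = {"MLE": np.zeros(2), "ME": np.zeros(2)}

for _ in range(reps):
    x = mu + rng.exponential(sigma, n)
    # MLE: location = sample minimum, scale = mean excess over the minimum
    mle = np.array([x.min(), x.mean() - x.min()])
    # Moment estimators: E[X] = mu + sigma and SD[X] = sigma
    s = x.std(ddof=1)
    me = np.array([x.mean() - s, s])
    mse["MLE"] += (mle - [mu, sigma]) ** 2
    mse["ME"] += (me - [mu, sigma]) ** 2

for name, v in mse.items():
    print(name, v / reps)   # MSE of (location, scale) estimates
```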

  11. Adaptive link selection algorithms for distributed estimation

    Science.gov (United States)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state and tracking performance, and computational complexity. In comparison with existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
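
    The adapt-then-combine structure underlying such algorithms is easy to simulate. The sketch below runs plain diffusion LMS over a random network; where the paper selects links, this sketch simply averages over all neighbors, with a comment marking where an error-minimizing link-selection rule would act. The topology and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu_step = 10, 4, 0.05               # nodes, filter length, LMS step
w_true = rng.normal(size=M)
A = (rng.random((N, N)) < 0.3) | np.eye(N, dtype=bool)
A |= A.T                                  # symmetric random topology
w = np.zeros((N, M))

for _ in range(2000):
    # Adapt: each node runs one LMS step on its own noisy measurement
    for k in range(N):
        u = rng.normal(size=M)
        d = u @ w_true + 0.1 * rng.normal()
        w[k] += mu_step * (d - u @ w[k]) * u
    # Combine: here a plain neighborhood average; a link-selection rule
    # would instead pick the subset of links minimizing the error
    w = np.array([w[A[k]].mean(axis=0) for k in range(N)])

print(np.linalg.norm(w - w_true, axis=1).mean())   # network-average error
```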

  12. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given ...

  13. Distributed Estimation using Bayesian Consensus Filtering

    Science.gov (United States)

    2014-06-06

    "Convergence rate analysis of distributed gossip (linear parameter) estimation: Fundamental limits and tradeoffs," IEEE J. Sel. Topics Signal Process. ... A. Dimakis, S. Kar, J. Moura, M. Rabbat, and A. Scaglione, "Gossip algorithms for distributed signal processing," Proc. of the IEEE, vol. 98, no. 11, pp. ...

  14. Score matching estimators for directional distributions

    OpenAIRE

    Mardia, Kanti V.; Kent, John T.; Laha, Arnab K

    2016-01-01

    One of the major problems for maximum likelihood estimation in the well-established directional models is that the normalising constants can be difficult to evaluate. A new general method of "score matching estimation" is presented here on a compact oriented Riemannian manifold. Important applications include von Mises-Fisher, Bingham and joint models on the sphere and related spaces. The estimator is consistent and asymptotically normally distributed under mild regularity conditions. Further...

  15. Statistical distributions: applications and parameter estimates

    CERN Document Server

    Thomopoulos, Nick T

    2017-01-01

    This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability.  Understanding statistical distributions is fundamental for researchers in almost all disciplines.  The informed researcher will select the statistical distribution that best fits the data in the study at hand.  Some of the distributions are well known to the general researcher and are in use in a wide variety of ways.  Other useful distributions are less understood and are not in common use.  The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study.  The distributions are for continuous, discrete, and bivariate random variables.  In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values.  In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...

  16. Auditory risk estimates for youth target shooting.

    Science.gov (United States)

    Meinke, Deanna K; Murphy, William J; Finan, Donald S; Lankford, James E; Flamme, Gregory A; Stewart, Michael; Soendergaard, Jacob; Jerome, Trevor W

    2014-03-01

    To characterize the impulse noise exposure and auditory risk for youth recreational firearm users engaged in outdoor target shooting events. The youth shooting positions are typically standing or sitting at a table, which places the firearm closer to the ground or a reflective surface compared to adult shooters. Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit suggested by the World Health Organization (1999) for children. Impulses were generated by 26 firearm/ammunition configurations representing rifles, shotguns, and pistols used by youth. Measurements were obtained relative to a youth shooter's left ear. All firearms generated peak levels that exceeded the 120-dB peak limit suggested by the WHO for children. In general, shooting from the seated position over a tabletop increases the peak levels and LAeq8 and reduces the unprotected maximum permissible exposures (MPEs) for both rifles and pistols. Pistols pose the greatest auditory risk when fired over a tabletop. Youth should use smaller caliber weapons, preferably from the standing position, and always wear hearing protection whenever engaging in shooting activities to reduce the risk of auditory damage.

  17. Distributed estimation for adaptive sensor selection in wireless sensor networks

    Science.gov (United States)

    Mahmoud, Magdi S.; Hassan Hamid, Matasm M.

    2014-05-01

    Wireless sensor networks (WSNs) are usually deployed for monitoring systems with the distributed detection and estimation of sensors. Sensor selection in WSNs is considered for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.

  18. Wireless sensor networks: distributed consensus estimation

    CERN Document Server

    Chen, Cailian; Guan, Xinping

    2014-01-01

    This SpringerBrief evaluates the cooperative effort of sensor nodes to accomplish high-level tasks with sensing, data processing and communication. The metrics of network-wide convergence, unbiasedness, consistency and optimality are discussed through network topology, distributed estimation algorithms and consensus strategy. Systematic analysis reveals that proper deployment of sensor nodes and a small number of low-cost relays (without sensing function) can speed up the information fusion and thus improve the estimation capability of wireless sensor networks (WSNs). This brief also investigates ...

  19. Extended Target Shape Estimation by Fitting B-Spline Curve

    Directory of Open Access Journals (Sweden)

    Jin-long Yang

    2014-01-01

    Taking into account the difficulty of shape estimation for extended targets, a novel algorithm is proposed based on fitting a B-spline curve. For single extended target tracking, a multiple-frame statistics technique is introduced to construct the pseudomeasurement sets, and control points are selected to form the B-spline curve. The shapes of the extended targets are then extracted under the Bayes framework. Furthermore, the proposed shape estimation algorithm is suitably modified and combined with the probability hypothesis density (PHD) filter for multiple extended target tracking. Simulations show that the proposed algorithm performs well in estimating the shapes of arbitrary extended targets.
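
    Below is a sketch of the curve-fitting step using SciPy's parametric smoothing splines on noisy boundary pseudomeasurements of an elliptical target. It illustrates B-spline shape extraction generically; it is not the paper's exact control-point selection procedure.

```python
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 2 * np.pi, 60))
x = 3 * np.cos(t) + 0.1 * rng.normal(size=t.size)   # noisy boundary points
y = 2 * np.sin(t) + 0.1 * rng.normal(size=t.size)
x, y = np.append(x, x[0]), np.append(y, y[0])       # close the contour

# Periodic smoothing B-spline; its coefficients play the role of the
# control points that define the extracted target shape
tck, _ = splprep([x, y], s=1.0, per=True)
xs, ys = splev(np.linspace(0, 1, 200), tck)         # reconstructed shape
```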

  20. Distributed Particle Filter for Target Tracking: With Reduced Sensor Communications

    Directory of Open Access Journals (Sweden)

    Tadesse Ghirmai

    2016-09-01

    For efficient and accurate estimation of the location of objects, a network of sensors can be used to detect and track targets in a distributed manner. In nonlinear and/or non-Gaussian dynamic models, distributed particle filtering methods are commonly applied to develop target tracking algorithms. An important consideration in developing a distributed particle filtering algorithm in wireless sensor networks is reducing the size of data exchanged among the sensors because of power and bandwidth constraints. In this paper, we propose a distributed particle filtering algorithm with the objective of reducing the overhead data that is communicated among the sensors. In our algorithm, the sensors exchange information to collaboratively compute the global likelihood function that encompasses the contribution of the measurements towards building the global posterior density of the unknown location parameters. Each sensor, using its own measurement, computes its local likelihood function and approximates it using a Gaussian function. The sensors then propagate only the mean and the covariance of their approximated likelihood functions to other sensors, reducing the communication overhead. The global likelihood function is computed collaboratively from the parameters of the local likelihood functions using an average consensus filter or a forward-backward propagation information exchange strategy.
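
    Because each local likelihood is approximated by a Gaussian, the global (product) likelihood follows from network averages of the Gaussians' information-form parameters, which is exactly what an average consensus filter computes. The sketch below replaces the consensus iterations with exact means for brevity; all values are invented.

```python
import numpy as np

def fuse_gaussian_likelihoods(means, covs):
    """Product of local Gaussian likelihood approximations N(m_k, C_k),
    recovered from simple averages as a consensus filter would."""
    infos = [np.linalg.inv(C) for C in covs]
    info_avg = np.mean(infos, axis=0)                # consensus average
    vec_avg = np.mean([J @ m for J, m in zip(infos, means)], axis=0)
    n = len(means)
    C_glob = np.linalg.inv(n * info_avg)             # sum of informations
    return C_glob @ (n * vec_avg), C_glob            # global mean, covariance

m, C = fuse_gaussian_likelihoods(
    [np.array([1.0, 2.0]), np.array([1.2, 1.8])],
    [np.eye(2), 2 * np.eye(2)])
```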

  21. Estimating the Distribution of Dietary Consumption Patterns

    KAUST Repository

    Carroll, Raymond J.

    2014-02-01

    In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. We were interested in estimating the population distribution of the Healthy Eating Index-2005 (HEI-2005), a multi-component dietary quality index involving ratios of interrelated dietary components to energy, among children aged 2-8 in the United States, using a national survey and incorporating survey weights. We developed a highly nonlinear, multivariate zero-inflated data model with measurement error to address this question. Standard nonlinear mixed model software such as SAS NLMIXED cannot handle this problem. We found that taking a Bayesian approach, and using MCMC, resolved the computational issues and doing so enabled us to provide a realistic distribution estimate for the HEI-2005 total score. While our computation and thinking in solving this problem was Bayesian, we relied on the well-known close relationship between Bayesian posterior means and maximum likelihood, the latter not computationally feasible, and thus were able to develop standard errors using balanced repeated replication, a survey-sampling approach.

  22. Kalman filter data assimilation: targeting observations and parameter estimation.

    Science.gov (United States)

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  23. Low Complexity Parameter Estimation For Off-the-Grid Targets

    KAUST Repository

    Jardak, Seifallah

    2015-10-05

    In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced-complexity super-resolution algorithm is proposed. For off-the-grid targets, it uses a low-order two-dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation errors of the proposed estimators achieve the Cramér-Rao lower bound.
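
    The coarse-FFT-then-refine structure can be illustrated in one dimension (the paper works jointly over spatial location and Doppler; this single-frequency sketch with invented numbers shows only the two-stage idea):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n = 64
f_true = 0.2137                               # off-grid normalized frequency
x = np.exp(2j * np.pi * f_true * np.arange(n)) + 0.1 * rng.normal(size=n)

# Stage 1: low-order FFT gives a suboptimal on-grid starting point
f0 = np.argmax(np.abs(np.fft.fft(x))) / n

# Stage 2: iterative refinement of the cost around the coarse estimate
def neg_power(f):
    return -np.abs(x @ np.exp(-2j * np.pi * f * np.arange(n))) ** 2

res = minimize_scalar(neg_power, bounds=(f0 - 1 / n, f0 + 1 / n),
                      method="bounded")
print(res.x)   # resolves f_true well below the FFT grid spacing
```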

  24. Odds Ratios Estimation of Rare Event in Binomial Distribution

    Directory of Open Access Journals (Sweden)

    Kobkun Raweesawat

    2016-01-01

    We introduce a new estimator of odds ratios for rare events using an empirical Bayes method for two independent binomial distributions. We compare the proposed estimator with two existing estimators, the modified maximum likelihood estimator (MMLE) and the modified median unbiased estimator (MMUE), using the estimated relative error (ERE) as the criterion of comparison. The new estimator is found to be more efficient than the other methods.
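
    For context, the modified (0.5-corrected) odds-ratio estimator that the paper compares against fits in two lines; the proposed empirical Bayes estimator shrinks further toward a prior and is not reproduced here:

```python
# Corrected odds ratio for two independent binomials (x1/n1 vs x2/n2);
# the 0.5 terms keep the estimate finite when a rare-event count is zero.
def modified_or(x1, n1, x2, n2):
    return ((x1 + 0.5) * (n2 - x2 + 0.5)) / ((x2 + 0.5) * (n1 - x1 + 0.5))

print(modified_or(0, 1000, 3, 1000))   # defined even with a zero count
```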

  25. Joint DOA and DOD Estimation in Bistatic MIMO Radar without Estimating the Number of Targets

    Directory of Open Access Journals (Sweden)

    Zaifang Xi

    2014-01-01

    ... established without prior knowledge of the signal environment. In this paper, an efficient method for joint DOA and DOD estimation in bistatic MIMO radar without estimating the number of targets is presented. The proposed method computes an estimate of the noise subspace using the power of R (POR) technique. Then the two-dimensional (2D) direction finding problem is decoupled into two successive one-dimensional (1D) angle estimation problems by employing the rank reduction (RARE) estimator.

  26. Multivariate phase type distributions - Applications and parameter estimation

    DEFF Research Database (Denmark)

    Meisch, David

    ..., allowing for different estimation methods for the whole class or subclasses of phase type distributions. These attributes make this class of distributions an interesting alternative to the normal distribution. When facing multivariate problems, the only general distribution that allows for estimation ... and statistical inference is the multivariate normal distribution. Unfortunately, only little is known about the general class of multivariate phase type distributions. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate ... and reducing model uncertainties. Research has shown that the errors on cost estimates for infrastructure projects clearly do not follow a normal distribution but are skewed towards cost overruns. This skewness can be described using phase type distributions. Cost benefit analysis assesses potential future ...

  27. Interferometric Calibration with Natural Distributed Targets

    DEFF Research Database (Denmark)

    Dall, Jørgen; Christensen, Erik Lintz

    2002-01-01

    Cross-calibration is a fully automated algorithm for calibration of interferometric synthetic aperture radar (IFSAR) data. It has been developed for single-pass interferometry, but the principles may be applicable to multi-pass interferometry, too. The algorithm is based on natural distributed targets ... The algorithm appears to be fairly robust with respect to the terrain type. However, the result of the calibration may deteriorate if the terrain elevation, as measured with the SAR, changes systematically with the incidence angle or the aspect angle.

  28. Optimal regionalization of extreme value distributions for flood estimation

    Science.gov (United States)

    Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.

    2018-01-01

    Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.

  29. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications ...

  30. Estimating the relationship between abundance and distribution

    DEFF Research Database (Denmark)

    Rindorf, Anna; Lewy, Peter

    2012-01-01

    Numerous studies investigate the relationship between abundance and distribution using indices reflecting one of the three aspects of distribution: proportion of area occupied, aggregation, and geographical range. Using simulations and analytical derivations, we examine whether these indices ... based on Euclidean distance to the centre of gravity of the spatial distribution. Only the proportion of structurally empty areas, Lloyd's index, and indices of the distance to the centre of gravity of the spatial distribution are unbiased at all levels of abundance. The remaining indices generate relationships between abundance and distribution even in cases where no underlying relationship exists, although the problem decreases for measures derived from Lorenz curves when samples contain more than four individuals on average. To illustrate the problem, the indices are applied to juvenile North Sea cod ...

  31. Comparison of estimation methods for fitting Weibull distribution to the natural ...

    African Journals Online (AJOL)

    Tersor

    ... Quercus robur L.) stands in northwest Spain with the beta distribution. Investigación Agraria: Sistemas y Recursos Forestales 17(3): 271-281.

  32. Estimation of signal parameters for multiple target localization

    Directory of Open Access Journals (Sweden)

    X. Susan Christina

    2014-12-01

    Target detection and localization is an active research area due to its importance in a wide range of applications, such as biomedical and military applications. In this paper, a novel method is proposed for the detection and estimation of signal parameters such as range and direction of arrival for multiple far-field targets using wideband echo chirp signals. Sonar and radar are active detection systems that transmit well-defined signals into a region of interest. A model preprocessing procedure is designed for the echo signal. The parameter estimation method for multiple targets is developed based on the linear canonical transform and the fast Root-MUSIC algorithm, a high-resolution DOA estimation method originally proposed for arbitrary arrays, to reduce the computational complexity of existing systems. The proposed method provides high detection accuracy and resolution even at very low SNR values.

  33. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...

  34. Self organizing distributed state-estimators

    NARCIS (Netherlands)

    Sijs, J.; Papp, Z.

    2012-01-01

    Distributed solutions for signal processing techniques are important for establishing large-scale monitoring and control applications. They enable the deployment of scalable sensor networks for particular application areas. Typically, such networks consist of a large number of vulnerable components ...

  35. Efficient estimation of smooth distributions from coarsely grouped data.

    Science.gov (United States)

    Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C

    2015-07-15

    Ungrouping binned data can be desirable for many reasons: bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes only that the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most applications. The method is based on the composite link model, with a penalty added to ensure the smoothness of the target distribution. Estimates are obtained by maximizing a penalized likelihood. This maximization is performed efficiently by a version of the iteratively reweighted least-squares algorithm. Optimal values of the smoothing parameter are chosen by minimizing Akaike's Information Criterion. We demonstrate the performance of this method in a simulation study and provide several examples that illustrate the approach. Wide, open-ended intervals can be handled properly. The method can be extended to the estimation of rates when both the event counts and the exposures to risk are grouped.
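
    The penalized composite link model admits a compact implementation. The sketch below is a simplified reading of the approach (toy data and a fixed smoothing parameter; the paper selects the smoothing parameter by AIC):

```python
import numpy as np

def pclm(y, C, lam=100.0, n_iter=50):
    """Penalized composite link model fitted by iteratively reweighted
    least squares. y: grouped counts; C: (bins x fine cells) composition
    matrix; the penalty enforces smoothness of the latent distribution."""
    J, I = C.shape
    D = np.diff(np.eye(I), n=2, axis=0)       # second-order differences
    P = lam * D.T @ D
    eta = np.full(I, np.log(y.sum() / I))
    for _ in range(n_iter):
        gamma = np.exp(eta)                   # latent fine-cell values
        mu = C @ gamma                        # implied bin expectations
        X = C * gamma / mu[:, None]           # working design matrix
        W = np.diag(mu)
        z = X @ eta + (y - mu) / mu           # working response
        eta = np.linalg.solve(X.T @ W @ X + P, X.T @ W @ z)
    return np.exp(eta)

C = np.kron(np.eye(4), np.ones((1, 5)))       # four 5-unit bins, 20 cells
print(pclm(np.array([10., 60., 25., 5.]), C).round(2))
```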

  36. Distribution system intelligent state estimation through minimal meter placement

    Energy Technology Data Exchange (ETDEWEB)

    Ramesh, L. [Jadavpur Univ., Kolkata (India). Dept. of Electrical Engineering]; Chowdhury, S.; Chowdhury, S.P.; Gaunt, C.T. [Cape Town Univ. (South Africa)]

    2009-03-11

    A method of accurately estimating electrical parameters for distribution automation systems in power distribution networks was presented. A particle swarm optimization (PSO) algorithm was used to identify meter locations as well as to monitor and estimate parameters in the distribution system. The determined locations were then used to monitor and estimate parameters using a SCADA system arrangement. Parameter values were estimated using a hybrid artificial neural network (ANN) based estimation technique where pseudo-measurements were injected at un-metered buses. An Institute of Electrical and Electronic Engineers (IEEE) 13-node system with a distribution system used by the Tamil Nadu Electrical Board in India was used to verify the method. Results of the study showed that the proposed algorithm can be used to accurately estimate the electrical parameters of distribution automation systems. The method will be used to determine the position of future switch position and transformer locations. 22 refs., 1 tab., 6 figs.

  37. Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies.

    Science.gov (United States)

    Schuler, Megan S; Rose, Sherri

    2017-01-01

    Estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or G-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties. TMLE is a doubly robust maximum-likelihood-based approach that includes a secondary "targeting" step that optimizes the bias-variance tradeoff for the target parameter. Under standard causal assumptions, estimates can be interpreted as causal effects. Because TMLE has not been as widely implemented in epidemiologic research, we aim to provide an accessible presentation of TMLE for applied researchers. We give step-by-step instructions for using TMLE to estimate the average treatment effect in the context of an observational study. We discuss conceptual similarities and differences between TMLE and 2 common estimation approaches (G-computation and inverse probability weighting) and present findings on their relative performance using simulated data. Our simulation study compares methods under parametric regression misspecification; our results highlight TMLE's property of double robustness. Additionally, we discuss best practices for TMLE implementation, particularly the use of ensembled machine learning algorithms. Our simulation study demonstrates all methods using super learning, highlighting that incorporation of machine learning may outperform parametric regression in observational data settings.
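
    The targeting step is easiest to see in code. Below is a minimal TMLE sketch for the average treatment effect on simulated data; plain logistic regressions stand in for the recommended super learner, and all variable names and numbers are invented.

```python
import numpy as np
from scipy.special import logit, expit
from scipy.optimize import brentq
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
W = rng.normal(size=(n, 2))                       # covariates
A = rng.binomial(1, expit(0.5 * W[:, 0]))         # treatment
Y = rng.binomial(1, expit(0.8 * A + W[:, 1]))     # outcome

# Initial outcome regression Q(A, W) and propensity score g(W)
Qm = LogisticRegression().fit(np.column_stack([A, W]), Y)
gm = LogisticRegression().fit(W, A)
gh = np.clip(gm.predict_proba(W)[:, 1], 0.025, 0.975)
Q1 = Qm.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q0 = Qm.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]
QA = np.where(A == 1, Q1, Q0)

# Targeting: one-parameter fluctuation along the "clever covariate" H
H = A / gh - (1 - A) / (1 - gh)
eps = brentq(lambda e: np.sum(H * (Y - expit(logit(QA) + e * H))), -5, 5)

Q1s = expit(logit(Q1) + eps / gh)                 # updated predictions
Q0s = expit(logit(Q0) - eps / (1 - gh))
print((Q1s - Q0s).mean())                         # targeted ATE estimate
```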

  38. Targeting estimation of CCC-GARCH models with infinite fourth moments

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    ... In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results ...

  39. Probability distribution functions of echo signals from meteorological targets

    Science.gov (United States)

    Vasilyev, G. V.

    1975-01-01

    Simple expressions are obtained for the laws and moments of the probability distributions of averaged echo signals from meteorological targets at the output of a logarithmic radar receiver. Here, the distribution function is assumed to be represented in the form of an Edgeworth series.

  40. Maximum likelihood estimation of exponential distribution under ...

    African Journals Online (AJOL)

    In addition, a new numerical method for parameter estimation is provided. Using the parametric bootstrap method, the construction of confidence intervals for the mean parameter is discussed. Monte Carlo simulations are performed to investigate performance of the different methods. Finally, an illustrative example is also ...

  41. Target Doppler Estimation Using Wideband Frequency Modulated Signals

    NARCIS (Netherlands)

    Doisy, Y.; Deruaz, L.; Beerens, S.P.; Been, R.

    2000-01-01

    The topic of this paper is the design and performance analysis of wideband sonar waveforms capable of estimating both target range and Doppler using as few replicas in the processing as possible. First, it is shown that for conventional Doppler-sensitive waveforms, for which the Doppler and delay ...

  42. Variance targeting estimation of the BEKK-X model

    OpenAIRE

    Thieu, Le Quyen

    2016-01-01

    This paper studies the BEKK model with exogenous variables (BEKK-X), which intends to take into account the influence of explanatory variables on the conditional covariance of the asset returns. Strong consistency and asymptotic normality of a variance targeting estimator (VTE) is proved. Monte Carlo experiments and an application to financial series illustrate the asymptotic results.

  43. Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations

    Science.gov (United States)

    Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.

    2016-01-01

    Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…

  44. Estimation of Log-Linear-Binomial Distribution with Applications

    Directory of Open Access Journals (Sweden)

    Elsayed Ali Habib

    2010-01-01

    The log-linear-binomial distribution was introduced for describing the behavior of the sum of dependent Bernoulli random variables. The distribution is a generalization of the binomial distribution that allows construction of a broad class of distributions. In this paper, we consider the problem of estimating the two parameters of the log-linear-binomial distribution by moment and maximum likelihood methods. The distribution is used to fit genetic data and to obtain the sampling distribution of the sign test under dependence among trials.

  45. A neural network applied to estimate Burr XII distribution parameters

    Energy Technology Data Exchange (ETDEWEB)

    Abbasi, B., E-mail: b.abbasi@gmail.co [Department of Industrial Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Hosseinifard, S.Z. [Department of Statistics and Operations Research, RMIT University, Melbourne (Australia); Coit, D.W. [Department of Industrial and System Engineering, Rutgers University, Piscataway, NJ (United States)

    2010-06-15

    The Burr XII distribution can closely approximate many other well-known probability density functions such as the normal, gamma, lognormal, exponential distributions as well as Pearson type I, II, V, VII, IX, X, XII families of distributions. Considering a wide range of shape and scale parameters of the Burr XII distribution, it can have an important role in reliability modeling, risk analysis and process capability estimation. However, estimating parameters of the Burr XII distribution can be a complicated task and the use of conventional methods such as maximum likelihood estimation (MLE) and moment method (MM) is not straightforward. Some tables to estimate Burr XII parameters have been provided by Burr (1942) but they are not adequate for many purposes or data sets. Burr tables contain specific values of skewness and kurtosis and their corresponding Burr XII parameters. Using interpolation or extrapolation to estimate other values may provide inappropriate estimations. In this paper, we present a neural network to estimate Burr XII parameters for different values of skewness and kurtosis as inputs. A trained network is presented, and one can use it without previous knowledge about neural networks to estimate Burr XII distribution parameters. Accurate estimation of the Burr parameters is an extension of simulation studies.

  46. Optimal Joint Target Detection and Parameter Estimation by MIMO Radar

    CERN Document Server

    Tajer, Ali; Wang, Xiaodong; Moustakides, George V

    2009-01-01

    We consider multiple-input multiple-output (MIMO) radar systems with widely-spaced antennas. Such an antenna configuration facilitates capturing the inherent diversity gain due to independent signal dispersion by the target scatterers. We consider a new MIMO radar framework for detecting a target that lies in an unknown location. This is in contrast with conventional MIMO radars, which break the space into small cells and aim at detecting the presence of a target in a specified cell. We treat this problem through offering a novel composite hypothesis testing framework for target detection when (i) one or more parameters of the target are unknown and we are interested in estimating them, and (ii) only a finite number of observations are available. The test offered optimizes a metric which accounts for both detection and estimation accuracies. In this paper, as the parameter of interest we focus on the vector of time-delays that the waveforms undergo from being emitted by the transmit antennas until being observed by ...

  47. Control and Estimation of Distributed Parameter Systems

    CERN Document Server

    Kappel, F; Kunisch, K

    1998-01-01

    Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to: control and stability of hyperbolic systems related to elasticity, linear and nonlinear; control and identification of nonlinear parabolic systems; exact and approximate controllability and observability; Pontryagin's maximum principle and dynamic programming in PDE; and numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.

  48. Blind Reverberation Time Estimation Based on Laplace Distribution

    OpenAIRE

    Jan, Tariqullah; Wang, Wenwu

    2012-01-01

    We propose an algorithm for the estimation of reverberation time (RT) from the reverberant speech signal by using a maximum likelihood (ML) estimator. Based on the analysis of an existing RT estimation method, which models the reverberation decay as a Gaussian random process modulated by a deterministic envelope, a Laplacian distribution based decay model is proposed in which an efficient procedure for locating free decay from reverberant speech is also incorporated. Then the RT is estimated ...

  49. A novel method for estimating distributions of body mass index.

    Science.gov (United States)

    Ng, Marie; Liu, Patrick; Thomson, Blake; Murray, Christopher J L

    2016-01-01

    Understanding trends in the distribution of body mass index (BMI) is a critical aspect of monitoring the global overweight and obesity epidemic. Conventional population health metrics often only focus on estimating and reporting the mean BMI and the prevalence of overweight and obesity, which do not fully characterize the distribution of BMI. In this study, we propose a novel method which allows for the estimation of the entire distribution. The proposed method utilizes the optimization algorithm, L-BFGS-B, to derive the distribution of BMI from three commonly available population health statistics: mean BMI, prevalence of overweight, and prevalence of obesity. We conducted a series of simulations to examine the properties, accuracy, and robustness of the method. We then illustrated the practical application of the method by applying it to the 2011-2012 US National Health and Nutrition Examination Survey (NHANES). Our method performed satisfactorily across various simulation scenarios yielding empirical (estimated) distributions which aligned closely with the true distributions. Application of the method to the NHANES data also showed a high level of consistency between the empirical and true distributions. In situations where there were considerable outliers, the method was less satisfactory at capturing the extreme values. Nevertheless, it remained accurate at estimating the central tendency and quintiles. The proposed method offers a tool that can efficiently estimate the entire distribution of BMI. The ability to track the distributions of BMI will improve our capacity to capture changes in the severity of overweight and obesity and enable us to better monitor the epidemic.
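
    A minimal sketch of the optimization idea, assuming a lognormal BMI model; the distributional family, the loss weighting and all input numbers below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

target_mean, prev_ow, prev_ob = 26.0, 0.55, 0.22   # published statistics

def loss(theta):
    mu, sig = theta
    mean = np.exp(mu + sig ** 2 / 2)                # lognormal mean
    ow = lognorm.sf(25.0, s=sig, scale=np.exp(mu))  # P(BMI >= 25)
    ob = lognorm.sf(30.0, s=sig, scale=np.exp(mu))  # P(BMI >= 30)
    return (((mean - target_mean) / target_mean) ** 2
            + (ow - prev_ow) ** 2 + (ob - prev_ob) ** 2)

res = minimize(loss, x0=[np.log(25.0), 0.15], method="L-BFGS-B",
               bounds=[(np.log(10), np.log(40)), (0.01, 1.0)])
mu, sig = res.x    # the recovered full distribution: lognormal(mu, sig)
```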

  50. Monopulse joint parameter estimation of multiple unresolved targets within the radar beam

    Science.gov (United States)

    Yuan, Hui; Wang, Chunyang; An, Lei; Li, Xin

    2017-06-01

    To address the problem of parameter estimation for multiple unresolved targets within the radar beam, a method of jointly estimating the number and positions of the targets, using the joint bin processing model, is proposed based on reversible jump Markov chain Monte Carlo (RJ-MCMC). Reasonable assumptions on the prior distributions and Bayesian theory are adopted to obtain the posterior probability density function of the estimated parameters from the conditional likelihood function of the observation, and the acceptance ratios of the birth, death and update moves are then given. During the update move, a hybrid Metropolis-Hastings (MH) sampling algorithm is used to better explore the parameter space. The simulation results show that this new method outperforms the ML-MLD method [11] proposed by X. Zhang: similar estimation accuracy is achieved while fewer sub-pulses are needed.

  51. Waveguide invariant broadband target detection and reverberation estimation.

    Science.gov (United States)

    Goldhahn, Ryan; Hickman, Granger; Krolik, Jeffrey

    2008-11-01

    Reverberation often limits the performance of active sonar systems. In particular, backscatter off of a rough ocean floor can obscure target returns and/or large bottom scatterers can be easily confused with water column targets of interest. Conventional active sonar detection involves constant false alarm rate (CFAR) normalization of the reverberation return which does not account for the frequency-selective fading caused by multipath propagation. This paper presents an alternative to conventional reverberation estimation motivated by striations observed in time-frequency analysis of active sonar data. A mathematical model for these reverberation striations is derived using waveguide invariant theory. This model is then used to motivate waveguide invariant reverberation estimation which involves averaging the time-frequency spectrum along these striations. An evaluation of this reverberation estimate using real Mediterranean data is given and its use in a generalized likelihood ratio test based CFAR detector is demonstrated. CFAR detection using waveguide invariant reverberation estimates is shown to outperform conventional cell-averaged and frequency-invariant CFAR detection methods in shallow water environments producing strong reverberation returns which exhibit the described striations.

  52. Body-size distribution, biomass estimates and life histories of ...

    African Journals Online (AJOL)

    The body-size distributions and biomass estimates of Caenis (Ephemeroptera: Caenidae), Cloeon (Ephemeroptera: Baetidae), Coenagrionidae (Odonata), Micronecta (Hemiptera: Corixidae), Chironominae (Diptera: Chironomidae) and Orthocladiinae (Diptera: Chironomidae), the most common and abundant insect taxa ...

  53. Convolutional neural networks for estimating spatially distributed evapotranspiration

    Science.gov (United States)

    García-Pedrero, Angel M.; Gonzalo-Martín, Consuelo; Lillo-Saavedra, Mario F.; Rodriguéz-Esparragón, Dionisio; Menasalvas, Ernestina

    2017-10-01

    Efficient water management in agriculture requires an accurate estimation of evapotranspiration (ET). Several surface energy balance models are available that provide a daily ET estimate (ETd), spatially and temporally distributed, for different crops over wide areas. These models need a thermal infrared spectral band (gathered from remote sensors) to estimate sensible heat flux from the surface temperature. However, this spectral band is not available for most current operational remote sensors. Despite the good results provided by machine learning (ML) methods in many different areas, few works have applied these approaches to forecasting spatially and temporally distributed ETd when the aforementioned information is missing. However, these methods do not exploit the land surface characteristics and the relationships among land covers, producing estimation errors. In this work, we have developed and evaluated a methodology that provides spatially distributed estimates of ETd without thermal information by means of convolutional neural networks.

  54. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  55. Capabilities of the Johnson SB distribution in estimating rain variables

    Science.gov (United States)

    D'Adderio, Leo Pio; Cugerone, Katia; Porcù, Federico; De Michele, Carlo; Tokay, Ali

    2016-11-01

    Numerous fields of atmospheric and hydrological sciences require a parametric form of the raindrop size distribution (DSD) to estimate the rainfall rate from radar observables, as well as in cloud-resolving and weather forecasting models. This study investigates the capability of the Johnson SB distribution (JSB) to estimate rain integral parameters. Specifically, rainfall rate (R), reflectivity factor (Z) and mean mass diameter (Dmass) estimated by JSB are compared with those estimated by a three-parameter Gamma distribution, widely used by radar meteorologists and atmospheric physicists to model natural DSDs. A large dataset consisting of more than 155,000 one-minute DSDs, from six field campaigns of the Ground Validation (GV) program of the NASA/JAXA Global Precipitation Measurement (GPM) mission, is used to test the performance of both the JSB and Gamma distributions. The available datasets cover a wide range of rain regimes because the field campaigns were carried out in different seasons and locations. The correlation coefficient, bias, root mean square error (RMSE) and fractional standard error (FSE) between estimated and measured integral parameters are calculated to compare the performance of the two distributions. The capability of JSB in estimating the integral parameters, especially R and Z, proves very close to that of the Gamma distribution. In particular, for light precipitation, JSB is superior to the Gamma distribution in estimating R, with an FSE of 11% compared with values ranging between about 25% and 37% for Gamma. Comparison of the estimated and measured DSDs shows that the JSB distribution reproduces the natural DSD quite accurately.
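
    The step from a fitted JSB distribution to the rain integral parameters is direct numerical integration. In this sketch the JSB parameters, the drop concentration and the power-law fall speed v(D) = 3.78 D^0.67 m/s are common assumptions chosen for illustration, not values from the paper.

```python
import numpy as np
from scipy.stats import johnsonsb
from scipy.integrate import quad

Nt = 500.0                                     # total concentration (m^-3)
f = johnsonsb(a=0.5, b=1.2, loc=0.0, scale=6.0).pdf   # DSD shape, (0, 6) mm

moment = lambda p: Nt * quad(lambda D: D ** p * f(D), 0.0, 6.0)[0]
Z = moment(6)                                  # reflectivity, mm^6 m^-3
Dmass = moment(4) / moment(3)                  # mean mass diameter, mm
# Rain rate (mm/h) with fall speed v(D) = 3.78 D^0.67 (m/s)
R = 6e-4 * np.pi * Nt * quad(
    lambda D: 3.78 * D ** 0.67 * D ** 3 * f(D), 0.0, 6.0)[0]
```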

  56. Information-theoretic methods for estimating complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is one often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is the fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neural ...

  57. The duplicate method of uncertainty estimation: are eight targets enough?

    Science.gov (United States)

    Lyn, Jennifer A; Ramsey, Michael H; Coad, D Stephen; Damant, Andrew P; Wood, Roger; Boon, Katy A

    2007-11-01

    This paper presents methods for calculating confidence intervals for estimates of sampling uncertainty (s_samp) and analytical uncertainty (s_anal) using the chi-squared distribution. These uncertainty estimates are derived from application of the duplicate method, which recommends a minimum of eight duplicate samples. The methods are applied to two case studies: moisture in butter and nitrate in lettuce. Use of the recommended minimum of eight duplicate samples is justified for both case studies, as the confidence intervals calculated using more than eight duplicates did not show any appreciable reduction in width. It is considered that eight duplicates provide estimates of uncertainty that are both acceptably accurate and cost effective.
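
    The chi-squared construction can be written directly. Assuming the uncertainty estimate behaves like a standard deviation with a known number of degrees of freedom (roughly eight for eight duplicates; the effective degrees of freedom depend on the ANOVA design), the interval follows from the usual chi-squared bounds:

```python
from scipy.stats import chi2

def sd_confidence_interval(s, dof, alpha=0.05):
    """Chi-squared confidence interval for a standard deviation estimate,
    e.g. s_samp or s_anal obtained from the duplicate method."""
    lo = s * (dof / chi2.ppf(1 - alpha / 2, dof)) ** 0.5
    hi = s * (dof / chi2.ppf(alpha / 2, dof)) ** 0.5
    return lo, hi

print(sd_confidence_interval(1.0, 8))   # wide interval even at eight targets
```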

  58. Distributive estimation of frequency selective channels for massive MIMO systems

    KAUST Repository

    Zaib, Alam

    2015-12-28

    We consider frequency-selective channel estimation in the uplink of massive MIMO-OFDM systems, where our major concern is complexity. A low-complexity distributed LMMSE algorithm is proposed that attains near-optimal channel impulse response (CIR) estimates from noisy observations at the receive antenna array. In the proposed method, every antenna estimates the CIRs of its neighborhood, followed by recursive sharing of estimates with immediate neighbors. At each step, every antenna calculates the weighted average of the shared estimates, which converges to the near-optimal LMMSE solution. The simulation results validate the near-optimal performance of the proposed algorithm in terms of mean square error (MSE).

  59. Statistical motor number estimation assuming a binomial distribution

    NARCIS (Netherlands)

    Blok, J.H.; Visser, G.H.; de Graaf, S.; Zwarts, M.J.; Stegeman, D.F.

    2005-01-01

    The statistical method of motor unit number estimation (MUNE) uses the natural stochastic variation in a muscle's compound response to electrical stimulation to obtain an estimate of the number of recruitable motor units. The current method assumes that this variation follows a Poisson distribution.

  1. Linear Estimation of Standard Deviation of Logistic Distribution ...

    African Journals Online (AJOL)

    Linear Estimation of Standard Deviation of Logistic Distribution: Theory and Algorithm. … of the standard deviation of the logistic population with respect to the Cramér-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is unknown.

  2. On optimality of the empirical distribution function for the estimation ...

    African Journals Online (AJOL)

    In this work we present some results on the optimality of the empirical distribution function as an estimator of the invariant distribution function of an ergodic diffusion process. The results presented were obtained in different previous works under conditions that are rewritten in a unified form that makes those results comparable …

  3. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  4. tmle : An R Package for Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Susan Gruber

    2012-11-01

    Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user supplied covariates; it covers the additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.

  5. Load research and load estimation in electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland). Energy Systems

    1996-12-31

    The topics introduced in this thesis are: the Finnish load research project, a simple form customer class load model, analysis of the origins of customers' load distribution, a method for the estimation of the confidence interval of customer loads, and Distribution Load Estimation (DLE), which utilises both the load models and measurements from distribution networks. The Finnish load research project started in 1983. The project was initially coordinated by the Association of Finnish Electric Utilities and 40 utilities joined the project. Now there are over 1000 customer hourly load recordings in a database. A simple form customer class load model is introduced. The model is designed to be practical for most utility applications and has been used by the Finnish utilities for several years. The only variable of the model is the customer's annual energy consumption. The model gives the customer's average hourly load and standard deviation for a selected month, day and hour. The statistical distribution of customer loads is studied and a model for customer electric load variation is developed. The model results in a lognormal distribution as an extreme case. Using the simple form load model, a method for estimating confidence intervals (confidence limits) of customer hourly load is developed. The two methods selected for final analysis are based on normal and lognormal distributions estimated in a simplified manner. The estimation of several cumulated customer class loads is also analysed. Customer class load estimation which combines the information from load models and distribution network load measurements is developed. This method, called Distribution Load Estimation (DLE), utilises information already available in the utilities' databases and is thus easy to apply.
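
    A hedged sketch of what such a "simple form" model can look like in code: per-unit hourly mean and standard deviation profiles, indexed by month, day type and hour, scaled by the customer's annual energy. The profile values below are placeholders, not the Finnish load research data.

      import numpy as np

      # per-unit profiles: month x day type (workday/weekend) x hour;
      # placeholder values -- a real model would tabulate measured shapes
      pu_mean = np.full((12, 2, 24), 1.0 / 8760)
      pu_std = 0.3 * pu_mean

      def hourly_load(annual_kwh, month, daytype, hour):
          # returns (mean_kw, std_kw) for the selected month, day type, hour
          return (annual_kwh * pu_mean[month, daytype, hour],
                  annual_kwh * pu_std[month, daytype, hour])

      print(hourly_load(20000.0, month=0, daytype=0, hour=18))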

  6. State estimators for tracking sharply-maneuvering ground targets

    Science.gov (United States)

    Visina, Radu S.; Bar-Shalom, Yaakov; Willett, Peter

    2017-05-01

    This paper presents an algorithm, based on the Interacting Multiple Model Estimator, that can be used to track the state of kinematic point targets, moving in two dimensions, that are capable of making sharp heading maneuvers over short periods of time, such as certain ground vehicles moving in an open field. The targets are capable of up to 60 °/s turn rates, while polar measurements are received at 1 Hz. We introduce the Non-Zero Mean, White Noise Turn-Rate IMM (IMM-WNTR) that consists of 3 modes based on a White Noise Turn Rate (WNTR) kinematic model that contains additive, white, Gaussian turn rate process noises. Two of the modes are considered maneuvering modes, and they have opposite (left/right), non-zero mean turn rate input noise. The need for non-zero mean turn rate process noise is explained, and Monte Carlo simulations compare this novel design to the traditional (single-mode) White Noise Acceleration Kalman Filter (WNA KF) and the two-mode White Noise Acceleration/Nearly-Coordinated Turn Rate IMM (IMM-CT). Results show that the IMM-WNTR filter achieves better accuracy and real-time consistency between expected error and actual error as compared to the (single-mode) WNA KF and the IMM-CT in all simulated scenarios, making it a very accurate state estimator for targets with sharp coordinated turn capability in 2D.
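
    The mode-weighting machinery that an IMM estimator adds on top of its bank of Kalman filters can be shown compactly. The Python sketch below is a single, generic IMM mode-probability update with made-up numbers for three modes (straight, left turn, right turn); it is not the paper's full IMM-WNTR filter.

      import numpy as np

      # mode transition probabilities (rows sum to 1); modes are
      # straight, left turn, right turn -- all numbers are made up
      P = np.array([[0.90, 0.05, 0.05],
                    [0.10, 0.85, 0.05],
                    [0.10, 0.05, 0.85]])
      mu = np.array([0.8, 0.1, 0.1])        # prior mode probabilities

      # measurement likelihoods from the three mode-matched filters;
      # this measurement looks like a left turn
      lik = np.array([0.02, 0.30, 0.01])

      c = P.T @ mu                          # predicted mode probabilities
      mu = lik * c / np.sum(lik * c)        # posterior mode probabilities
      print(mu)                             # mass shifts to the left-turn mode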

  7. Distributed Kalman-Consensus Filtering for Sparse Signal Estimation

    Directory of Open Access Journals (Sweden)

    Yisha Liu

    2014-01-01

    A Kalman filtering-based distributed algorithm is proposed to deal with the sparse signal estimation problem. The pseudomeasurement-embedded Kalman filter is rebuilt in the information form, and an improved parameter selection approach is discussed. By introducing the pseudomeasurement technology into the Kalman-consensus filter, a distributed estimation algorithm is developed to fuse the measurements from different nodes in the network, such that all filters can reach a consensus on the estimate of the sparse signals. Some numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  8. Distributed estimation based on observations prediction in wireless sensor networks

    KAUST Repository

    Bouchoucha, Taha

    2015-03-19

    We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
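
    The intuition, that a prediction error has a much narrower range than the raw observation, so the same number of quantization bits buys a finer step, can be reproduced with a DPCM-style toy in Python. This illustrates the general idea only, not the letter's estimator; the AR(1) model, ranges and bit budget are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n, rho, levels = 10000, 0.95, 8       # 3-bit quantizer

      def quantize(v, rng_max, levels):
          # mid-rise uniform quantizer on [-rng_max, rng_max]
          step = 2.0 * rng_max / levels
          v = np.clip(v, -rng_max, rng_max - 1e-9)
          return (np.floor(v / step) + 0.5) * step

      # AR(1) observations; stationary std is ~ 1/sqrt(1 - rho^2) ~ 3.2
      x = np.zeros(n)
      for t in range(1, n):
          x[t] = rho * x[t - 1] + rng.standard_normal()

      # scheme A: quantize raw observations (wide range -> coarse step)
      xa = quantize(x, 4.0 / np.sqrt(1 - rho ** 2), levels)

      # scheme B: quantize prediction errors (innovations) in a DPCM-style
      # loop; their narrow range allows a much finer step at equal bits
      xb = np.zeros(n)
      for t in range(1, n):
          pred = rho * xb[t - 1]
          xb[t] = pred + quantize(x[t] - pred, 4.0, levels)

      print("MSE, quantized observations:", np.mean((x - xa) ** 2))
      print("MSE, quantized innovations: ", np.mean((x - xb) ** 2))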

  9. USER STORY SOFTWARE ESTIMATION: A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Ridi Ferdiana

    2011-01-01

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through software estimation has the purpose of knowing the complexity of the software, estimating the human resources needed, and getting better visibility of execution and process model. There are many software estimation techniques that work sufficiently well in certain conditions or steps of software engineering, for example measuring lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP estimation provides a basic technique for teams that use the eXtreme Programming method in onsite or distributed development. To the writers' knowledge this is the first estimation technique applied to the agile method eXtreme Programming.

  10. Joint DOD and DOA Estimation for High Speed Target Using Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Jinli Chen

    2014-01-01

    In bistatic multiple-input multiple-output (MIMO) radar, range migration and an invalidly synthesized virtual array, resulting from the serious mismatch of the matched filter, make it difficult to estimate the direction of departure (DOD) and direction of arrival (DOA) of a high speed target using traditional superresolution algorithms. In this study, a method for joint DOD and DOA estimation of a high speed target using bistatic MIMO radar is proposed. After multiplying the received signals with the conjugate of the delayed versions of the transmitted signals, a Fourier transform (FT) of the multiplied signals over both fast time and slow time is employed. Then, the target components of the radar return corresponding to the different transmitted waveforms can be perfectly separated at the receivers by extracting the target frequency-domain data along the slow-time frequency dimension when the delay between the transmitted signals and their subsequent returns is timed. By splicing the separated target components distributed along several range cells, the virtual array can be formed, and then the DOD and DOA of the high speed target can be estimated using the superresolution algorithm, with the range migration and the mismatch of the matched filter properly removed. Simulation results have proved the validity of the proposed algorithm.

  11. Underwater Target Direction of Arrival Estimation by Small Acoustic Sensor Array Based on Sparse Bayesian Learning

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Assuming independent and identically distributed sources, traditional DOA (direction of arrival) estimation methods for underwater acoustic targets normally have poor estimation performance and provide inaccurate estimation results. To solve this problem, a new high-accuracy DOA algorithm based on sparse Bayesian learning is proposed in terms of temporally correlated source vectors. In the novel method, the underwater acoustic source is regarded as a first-order auto-regressive process. The multi-vector SBL algorithm is then used to reconstruct the signal spatial spectrum, and the CS-MMV model is used to estimate the DOA. The experiment results show that the novel algorithm has a higher spatial resolution and estimation accuracy than other DOA algorithms in cases of small element spacing and few snapshots.

  12. A targeted maximum likelihood estimator of a causal effect on a bounded continuous outcome.

    Science.gov (United States)

    Gruber, Susan; van der Laan, Mark J

    2010-01-01

    Targeted maximum likelihood estimation of a parameter of a data generating distribution, known to be an element of a semi-parametric model, involves constructing a parametric model through an initial density estimator with parameter ɛ representing an amount of fluctuation of the initial density estimator, where the score of this fluctuation model at ɛ = 0 equals the efficient influence curve/canonical gradient. The latter constraint can be satisfied by many parametric fluctuation models since it represents only a local constraint of its behavior at zero fluctuation. However, it is very important that the fluctuations stay within the semi-parametric model for the observed data distribution, even if the parameter can be defined on fluctuations that fall outside the assumed observed data model. In particular, in the context of sparse data, by which we mean situations where the Fisher information is low, a violation of this property can heavily affect the performance of the estimator. This paper presents a fluctuation approach that guarantees the fluctuated density estimator remains inside the bounds of the data model. We demonstrate this in the context of estimation of a causal effect of a binary treatment on a continuous outcome that is bounded. It results in a targeted maximum likelihood estimator that inherently respects known bounds, and consequently is more robust in sparse data situations than the targeted MLE using a naive fluctuation model. When an estimation procedure incorporates weights, observations having large weights relative to the rest heavily influence the point estimate and inflate the variance. Truncating these weights is a common approach to reducing the variance, but it can also introduce bias into the estimate. We present an alternative targeted maximum likelihood estimation (TMLE) approach that dampens the effect of these heavily weighted observations. As a substitution estimator, TMLE respects the global constraints of the observed data
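
    A minimal Python sketch of the logistic-fluctuation idea for a bounded outcome follows: the outcome is rescaled to [0, 1], the initial outcome estimate is fluctuated on the logit scale (so the fluctuated estimator cannot leave the bounds), and the fluctuation parameter is fit by maximizing a quasi-binomial log-likelihood. The toy data, the deliberately crude initial estimators and the known propensity are assumptions, not the paper's full procedure.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.special import expit, logit

      rng = np.random.default_rng(2)
      n, a, b = 500, 0.0, 10.0

      # toy data: confounder W, treatment A, outcome Y bounded in [a, b]
      W = rng.standard_normal(n)
      g = expit(0.5 * W)                        # true propensity P(A=1|W)
      A = rng.binomial(1, g)
      Y = np.clip(3 + A + W + rng.standard_normal(n), a, b)
      Ys = (Y - a) / (b - a)                    # rescale outcome to [0, 1]

      # deliberately crude initial outcome estimates on the [0, 1] scale
      Q1 = np.full(n, Ys[A == 1].mean())
      Q0 = np.full(n, Ys[A == 0].mean())
      QA = np.where(A == 1, Q1, Q0)
      gh = expit(0.5 * W)                       # propensity estimate (true here)

      # clever covariate; the logistic fluctuation keeps Q_eps in (0, 1)
      H = A / gh - (1 - A) / (1 - gh)

      def neg_loglik(eps):
          Qe = expit(logit(QA) + eps * H)
          return -np.mean(Ys * np.log(Qe) + (1 - Ys) * np.log(1 - Qe))

      eps = minimize_scalar(neg_loglik, bounds=(-1, 1), method="bounded").x

      # targeted estimates, mapped back to the original outcome scale
      Q1e = expit(logit(Q1) + eps / gh)
      Q0e = expit(logit(Q0) - eps / (1 - gh))
      print("targeted ATE estimate:", (b - a) * np.mean(Q1e - Q0e))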

  13. METHODS FOR ESTIMATING THE PARAMETERS OF THE POWER FUNCTION DISTRIBUTION.

    Directory of Open Access Journals (Sweden)

    azam zaka

    2013-10-01

    In this paper, we present some methods for estimating the parameters of the two-parameter power function distribution. We use the least squares method (LSM), the relative least squares method (RELS) and the ridge regression method (RR). The sampling behavior of the estimates is indicated by a Monte Carlo simulation. With the objective of identifying the best estimator among them, we use the total deviation (TD) and mean square error (MSE) as performance indices. We determine the best method for estimation using different values of the parameters and different sample sizes.
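
    For concreteness, the following Python sketch shows one common least-squares formulation for the power function distribution F(x) = (x/b)^a on [0, b]: regress the log of the empirical CDF plotting positions on the log of the order statistics. The paper's exact LSM/RELS/RR formulations may differ; this is only the basic idea.

      import numpy as np

      rng = np.random.default_rng(3)
      alpha_true, beta_true, n = 2.5, 4.0, 400

      # power function distribution: F(x) = (x / b)^a on [0, b];
      # sample by inverting the CDF
      x = beta_true * rng.uniform(size=n) ** (1.0 / alpha_true)

      # LSM idea: log F(x) = a * log x - a * log b is linear in log x
      xs = np.sort(x)
      p = np.arange(1, n + 1) / (n + 1.0)   # plotting positions
      slope, intercept = np.polyfit(np.log(xs), np.log(p), 1)

      alpha_hat = slope
      beta_hat = np.exp(-intercept / alpha_hat)
      print(alpha_hat, beta_hat)            # close to 2.5 and 4.0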

  14. Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation

    KAUST Repository

    Jardak, Seifallah

    2014-04-01

    Thanks to its improved capabilities, the multiple-input multiple-output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased array radar by providing better parametric identifiability, achieving higher spatial resolution, and allowing more complex beampattern designs. To avoid jamming and enhance the signal-to-noise ratio, it is often of interest to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem, and is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant- and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such a mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of alphabet symbols in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlations of the Gaussian and finite alphabet symbols is derived. The second part of this thesis covers target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance, but it requires a two-dimensional search, so its computational complexity is prohibitively high. We therefore propose a reduced-complexity, optimum-performance algorithm which uses the two-dimensional fast Fourier transform to jointly estimate the spatial location
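
    The Gaussian-to-finite-alphabet mapping can be sketched in a few lines of Python for the PSK case: the standard normal density is split into M equal-probability regions via its quantiles, and each region is mapped to one constant-envelope PSK phase. Feeding correlated Gaussians through the same map is what yields correlated waveforms; the parameters below are illustrative.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)
      M, n = 8, 100000                      # 8-PSK alphabet, n samples

      g = rng.standard_normal(n)            # Gaussians (could be correlated)
      edges = norm.ppf(np.arange(1, M) / M) # M equal-probability regions
      idx = np.searchsorted(edges, g)       # region index 0..M-1
      symbols = np.exp(1j * 2 * np.pi * idx / M)  # constant-envelope PSK

      # each alphabet point appears with probability ~ 1/M
      print(np.bincount(idx, minlength=M) / n)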

  15. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    Science.gov (United States)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  16. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  17. Impact of microbial count distributions on human health risk estimates.

    Science.gov (United States)

    Duarte, A S R; Nauta, M J

    2015-02-16

    Quantitative microbiological risk assessment (QMRA) is influenced by the choice of the probability distribution used to describe pathogen concentrations, as this may eventually have a large effect on the distribution of doses at exposure. When fitting a probability distribution to microbial enumeration data, several factors may have an impact on the accuracy of that fit. Analysis of the best statistical fits of different distributions alone does not provide a clear indication of the impact in terms of risk estimates. Thus, in this study we focus on the impact of fitting microbial distributions on risk estimates, at two different concentration scenarios and at a range of prevalence levels. By using five different parametric distributions, we investigate whether different characteristics of a good fit are crucial for an accurate risk estimate. Among the factors studied are the importance of accounting for the Poisson randomness in counts, the difference between treating "true" zeroes as such or as censored below a limit of quantification (LOQ) and the importance of making the correct assumption about the underlying distribution of concentrations. By running a simulation experiment with zero-inflated Poisson-lognormal distributed data and an existing QMRA model from retail to consumer level, it was possible to assess the difference between expected risk and the risk estimated using a lognormal, a zero-inflated lognormal, a Poisson-gamma, a zero-inflated Poisson-gamma and a zero-inflated Poisson-lognormal distribution. We show that the impact of the choice of different probability distributions to describe concentrations at retail on risk estimates is dependent on both concentration and prevalence levels. We also show that the use of an LOQ should be done consciously, especially when zero-inflation is not used. In general, zero-inflation does not necessarily improve the absolute risk estimation, but the performance of zero-inflated distributions in QMRA tends to be more robust to changes in prevalence and concentration levels, and to the use of an LOQ to interpret zero values, compared to that of their non-zero-inflated counterparts.
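
    The data-generating model used in the simulation experiment, a zero-inflated Poisson-lognormal, is easy to reproduce. The Python sketch below draws synthetic plate counts from it with made-up prevalence, concentration and test-portion parameters, and shows why observed zeroes mix "truly absent" units with contaminated-but-uncounted ones.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 10000

      # assumed parameters: prevalence, lognormal concentration (CFU/g),
      # and a 25 g test portion
      prevalence, mu_log, sd_log, mass_g = 0.3, 1.0, 1.2, 25.0

      contaminated = rng.uniform(size=n) < prevalence      # zero inflation
      conc = np.exp(rng.normal(mu_log, sd_log, size=n))    # CFU per gram
      lam = np.where(contaminated, conc * mass_g, 0.0)
      counts = rng.poisson(lam)                            # observed counts

      # observed zeroes mix truly absent units with Poisson zeroes
      print("zero fraction:", np.mean(counts == 0),
            "truly absent:", np.mean(~contaminated))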

  18. Nearest Neighbor Estimates of Entropy for Multivariate Circular Distributions

    Directory of Open Access Journals (Sweden)

    Neeraj Misra

    2010-05-01

    In molecular sciences, the estimation of entropies of molecules is important for the understanding of many chemical and biological processes. Motivated by these applications, we consider the problem of estimating the entropies of circular random vectors and introduce non-parametric estimators based on circular distances between n sample points and their kth nearest neighbors (NN), where k (≤ n − 1) is a fixed positive integer. The proposed NN estimators are based on two different circular distances, and are proven to be asymptotically unbiased and consistent. The performance of one of the circular-distance estimators is investigated and compared with that of the already established Euclidean-distance NN estimator using Monte Carlo samples from an analytic distribution of six circular variables of exactly known entropy and a large sample of seven internal-rotation angles in the molecule of tartaric acid, obtained by a realistic molecular-dynamics simulation.
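
    For the one-dimensional case, a kNN entropy estimator of the Kozachenko-Leonenko type with an arc-length (circular) distance fits in a few lines of Python. This is a generic sketch for a single angle, not the paper's multivariate estimators; the uniform test distribution is chosen because its entropy, log 2π, is known.

      import numpy as np
      from scipy.special import digamma

      def circular_knn_entropy(theta, k=3):
          # Kozachenko-Leonenko kNN entropy estimate (nats) for angles
          # in [0, 2*pi), using the arc-length (circular) distance
          n = len(theta)
          d = np.abs(theta[:, None] - theta[None, :])
          d = np.minimum(d, 2 * np.pi - d)  # circular distance
          d.sort(axis=1)
          eps = d[:, k]                     # kth neighbour (column 0 is self)
          return digamma(n) - digamma(k) + np.log(2.0) + np.mean(np.log(eps))

      rng = np.random.default_rng(6)
      theta = rng.uniform(0, 2 * np.pi, size=2000)
      print(circular_knn_entropy(theta), np.log(2 * np.pi))  # both ~ 1.84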

  19. Estimation of the target stem-cell population size in chronic myeloid leukemogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Radivoyevitch, T. [Department of Biometry and Epidemiology, Medical University of South Carolina, Charleston, SC 29425 (United States); Ramsey, M.J.; Tucker, J.D. [Biology and Biotechnology Research Program, L-452, Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States)

    1999-09-01

    Estimation of the number of hematopoietic stem cells capable of causing chronic myeloid leukemia (CML) is relevant to the development of biologically based risk models of radiation-induced CML. Through a comparison of the age structure of CML incidence data from the Surveillance, Epidemiology, and End Results (SEER) Program and the age structure of chromosomal translocations found in healthy subjects, the number of CML target stem cells is estimated for individuals above 20 years of age. The estimation involves three steps. First, CML incidence among adults is fit to an exponentially increasing function of age. Next, assuming a relatively short waiting time distribution between BCR-ABL induction and the appearance of CML, an exponential age function with rate constants fixed to the values found for CML is fitted to the translocation data. Finally, assuming that translocations are equally likely to occur between any two points in the genome, the parameter estimates found in the first two steps are used to estimate the number of target stem cells for CML. The population-averaged estimates of this number are found to be 1.86 x 10^8 for men and 1.21 x 10^8 for women; the 95% confidence intervals of these estimates are (1.34 x 10^8, 2.50 x 10^8) and (0.84 x 10^8, 1.83 x 10^8), respectively.

  20. Estimation of Parameters of the Beta-Extreme Value Distribution

    Directory of Open Access Journals (Sweden)

    Zafar Iqbal

    2008-09-01

    In this research paper the Beta-Extreme Value Type III distribution, developed by Zafar and Aleem (2007), is considered. Its parameters are estimated using the moments of the Beta-Extreme Value Type III distribution, both when the parameters 'm' and 'n' are real and when they are integers, and the rth moments about the origin are compared between the two cases. Finally, as a second method, the method of maximum likelihood is used to estimate the unknown parameters of the Beta-Extreme Value Type III distribution.

  1. Estimating probable flaw distributions in PWR steam generator tubes

    Energy Technology Data Exchange (ETDEWEB)

    Gorman, J.A.; Turner, A.P.L. [Dominion Engineering, Inc., McLean, VA (United States)

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  2. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-21

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of (1) generic and (2) sparse channels are considered. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.

  3. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters as well. Expected time on test and reliability characteristics are also analyzed. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.

  4. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Fei; Mather, Barry

    2017-07-01

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.
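
    The structure of a Monte Carlo hosting-capacity sweep is simple to sketch. In the Python toy below, random PV sitings are drawn at each penetration level and checked against a voltage limit; the max_voltage function is a crude monotone stand-in for a real power flow, and every number in it is made up.

      import numpy as np

      rng = np.random.default_rng(7)
      n_nodes, n_trials, v_limit = 10, 2000, 1.05
      load_kw = np.full(n_nodes, 5.0)
      r_pu = 0.004                          # toy per-segment sensitivity

      def max_voltage(pv_kw):
          # crude radial voltage-rise proxy, NOT a real power flow:
          # voltage grows with reverse power through upstream segments
          net = pv_kw - load_kw
          flow = np.cumsum(net[::-1])[::-1]
          return (1.0 + r_pu * np.cumsum(np.maximum(flow, 0.0) / 100.0)).max()

      for total_pv in range(0, 301, 50):
          violations = 0
          for _ in range(n_trials):
              pv = np.zeros(n_nodes)
              sites = rng.choice(n_nodes, size=3, replace=False)
              pv[sites] = total_pv / 3.0    # random siting of the PV
              violations += max_voltage(pv) > v_limit
          print(total_pv, "kW PV -> violation rate", violations / n_trials)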

  5. Estimation of Extreme Wind Speeds by Using Mixed Distributions

    Directory of Open Access Journals (Sweden)

    Escalante-Sandoval Carlos Agustín

    2013-04-01

    Structures are designed with the intention of safely withstanding ordinary and extreme wind loads over their entire intended economic lifetime. Because extreme wind speeds are essentially random, appropriate statistical procedures need to be applied in order to design wind-sensitive structures more accurately. Five mixed extreme value distributions, with Gumbel, reverse Weibull and General Extreme Value components, along with the Two Component Extreme Value distribution, were used to model extreme wind speeds. The general procedure to estimate their parameters based on the maximum likelihood method is presented in the paper. A total of 45 sets of largest annual wind speeds, with record lengths ranging from 9 to 56 years, gathered from stations located in The Netherlands were fitted to the mixed distributions. The best model was selected based on a goodness-of-fit test. The return levels were estimated and compared with those obtained by assuming the data arise from a single distribution. 87% of the analyzed samples were better fitted with a mixed distribution. The best mixed models were the mixed reverse Weibull distribution and the Gumbel-reverse Weibull mixture. The results suggest that it is very important to consider mixed distributions as an additional mathematical tool when analyzing extreme wind speeds.
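
    A two-component extreme value fit of the kind described here can be sketched with a mixture of two Gumbel components, maximum likelihood via a generic optimizer, and a return-level read-off from the fitted mixture CDF. The synthetic annual maxima and starting values below are assumptions for illustration.

      import numpy as np
      from scipy.stats import gumbel_r
      from scipy.optimize import minimize, brentq
      from scipy.special import expit

      # synthetic annual-maximum wind speeds from two regimes (m/s)
      x = np.concatenate([gumbel_r.rvs(20, 3, size=40, random_state=1),
                          gumbel_r.rvs(32, 4, size=15, random_state=2)])

      def nll(p):
          # p = [weight logit, loc1, log scale1, loc2, log scale2]
          w = expit(p[0])
          f = (w * gumbel_r.pdf(x, p[1], np.exp(p[2]))
               + (1 - w) * gumbel_r.pdf(x, p[3], np.exp(p[4])))
          return -np.sum(np.log(f + 1e-300))

      fit = minimize(nll, [0.0, 18.0, 1.0, 30.0, 1.2], method="Nelder-Mead",
                     options={"maxiter": 5000})
      w = expit(fit.x[0])

      # T-year return level: solve F(v) = 1 - 1/T for the fitted mixture
      F = lambda v: (w * gumbel_r.cdf(v, fit.x[1], np.exp(fit.x[2]))
                     + (1 - w) * gumbel_r.cdf(v, fit.x[3], np.exp(fit.x[4])))
      print("50-yr return level:", brentq(lambda v: F(v) - (1 - 1 / 50), 0, 100))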

  6. Estimation of thermochemical behavior of spallation products in mercury target

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Kaoru; Kaminaga, Masanori; Haga, Katsuhiro; Kinoshita, Hidetaka; Aso, Tomokazu; Teshigawara, Makoto; Hino, Ryutaro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-02-01

    In order to examine the radiation safety of a spallation mercury target system, especially for source term evaluation, it is necessary to clarify the chemical forms of the spallation products generated by the spallation reaction with the proton beam. The chemical forms of the spallation products in mercury, which involves large amounts of spallation products, were estimated using binary phase diagrams and thermochemical equilibrium calculations based on the amounts of spallation products. Calculation results showed that the mercury would dissolve Al, As, B, Be, Bi, C, Co, Cr, Fe, Ga, Ge, Ir, Mo, Nb, Os, Re, Ru, Sb, Si, Ta, Tc, V and W in the elemental state, and Ag, Au, Ba, Br, Ca, Cd, Ce, Cl, Cs, Cu, Dy, Er, Eu, F, Gd, Hf, Ho, I, In, K, La, Li, Lu, Mg, Mn, Na, Nd, Ni, O, Pb, Pd, Pr, Pt, Rb, Rh, S, Sc, Se, Sm, Sn, Sr, Tb, Te, Ti, Tl, Tm, Y, Yb, Zn and Zr in the form of inorganic mercury compounds. As for As, Be, Co, Cr, Fe, Ge, Ir, Mo, Nb, Os, Pt, Re, Ru, Se, Ta, V, W and Zr, precipitation could occur as the amounts of spallation products increase with the operation time of the spallation target system. On the other hand, beryllium-7 (Be-7), which is produced by the spallation reaction of oxygen in the cooling water of the safety hull, becomes the main source of external exposure during maintenance of the cooling loop. Based on a thermochemical equilibrium calculation for the Be-H2O binary system, the chemical forms of Be in the cooling water were estimated. The Be could exist in the form of cations such as BeOH^+, BeO^+ and Be^2+ under the condition of less than 10^-8 Be mole fraction in the cooling water.

  7. Estimating option-implied distributions in illiquid markets and ...

    African Journals Online (AJOL)

    Estimating option-implied distributions in illiquid markets and implementing the Ross recovery theorem. Emlyn Flint, Eben Maré. Abstract. In this research we describe how forward-looking information on the statistical properties of an asset can be extracted directly from options market data and demonstrate how this can be ...

  8. Voltage Estimation in Active Distribution Grids Using Neural Networks

    DEFF Research Database (Denmark)

    Pertl, Michael; Heussen, Kai; Gehrke, Oliver

    2016-01-01

    … the observability of distribution systems has to be improved. To increase the situational awareness of the power system operator, data-driven methods can be employed. These methods benefit from newly available data sources such as smart meters. This paper presents a voltage estimation method based on neural networks …

  9. Estimation of wind energy potential using finite mixture distribution models

    Energy Technology Data Exchange (ETDEWEB)

    Akpinar, Sinan [Physics Department, Firat University, 23279 Elazig (Turkey); Akpinar, Ebru Kavak [Mechanical Engineering Department, Firat University, 23279 Elazig (Turkey)

    2009-04-15

    This paper investigates the wind characteristics of four stations (Elazig, Elazig-Maden, Elazig-Keban, and Elazig-Agin) over a period of 8 years (1998-2005). The probabilistic distribution of wind speed is a critical piece of information needed in the assessment of wind energy potential, and has conventionally been described by various empirical correlations, among them the Weibull distribution and the Maximum Entropy Principle. These distributions cannot accurately represent all wind regimes observed in the region. This study therefore takes a theoretical approach to the wind speed frequency distributions observed in the region through application of a singly truncated-from-below Normal-Weibull mixture distribution and a two-component Weibull mixture distribution, which offer smaller relative errors in determining the annual mean wind power density. The parameters of the distributions are estimated using the least squares method and the Statistica software. The suitability of the distributions is judged from the probability plot correlation coefficient R^2, the RMSE and the χ^2 statistic. Based on the results obtained, we conclude that the two mixture distributions proposed here provide very flexible models for wind speed studies.

  10. Private and Secure Distribution of Targeted Advertisements to Mobile Phones

    Directory of Open Access Journals (Sweden)

    Stylianos S. Mamais

    2017-05-01

    Online Behavioural Advertising (OBA) enables promotion companies to effectively target users with ads that best satisfy their purchasing needs. This is highly beneficial for both vendors and publishers, who are the owners of the advertising platforms such as websites and app developers, but at the same time creates a serious privacy threat for users who expose their consumer interests. In this paper, we categorize the available ad-distribution methods and identify their limitations in terms of security, privacy, targeting effectiveness and practicality. We contribute our own system, which utilizes opportunistic networking in order to distribute targeted adverts within a social network. We improve upon previous work by eliminating the need for trust among the users (network nodes) while at the same time achieving low memory and bandwidth overhead, which are inherent problems of many opportunistic networks. Our protocol accomplishes this by identifying similarities between the consumer interests of users and then allowing them to share access to the same adverts, which need to be downloaded only once. Although the same ads may be viewed by multiple users, privacy is preserved as the users do not learn each other's advertising interests. An additional contribution is that malicious users cannot alter the ads in order to spread malicious content, and also cannot launch impersonation attacks.

  11. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of the distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient conditions for the identifiability of simultaneous input and parameter estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on a plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat transport model using simulated data.

  12. Angular Rate Estimation Using a Distributed Set of Accelerometers

    Directory of Open Access Journals (Sweden)

    Sung Kyung Hong

    2011-11-01

    A distributed set of accelerometers, based on the minimum number of 12 accelerometers, allows computation of the magnitude of the angular rate without using the integration operation. However, it is not easy to extract the magnitude of the angular rate in the presence of accelerometer noise, and it is even more difficult to determine the direction of rotation, because the angular rate appears in its quadratic form within the inertial measurement system equations. In this paper, an extended Kalman filter scheme is proposed to correctly estimate both the direction and magnitude of the angular rate through fusion of the angular acceleration and the quadratic form of the angular rate. We also provide an observability analysis for the general distributed accelerometers-based inertial measurement unit, and show that the angular rate can be correctly estimated by general nonlinear state estimators such as an extended Kalman filter, except under certain extreme conditions.

  13. Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dexin; Yang, Liuqing; Florita, Anthony; Alam, S.M. Shafiul; Elgindy, Tarek; Hodge, Bri-Mathias

    2017-04-24

    The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral clustering based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
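
    The core of a spectral-clustering regionalization can be sketched briefly: build a normalized graph Laplacian from a bus-coupling weight matrix, embed the buses with its smallest eigenvectors, and k-means the embedding. The 6-bus weight matrix below is a made-up toy, and the paper's three algorithms will differ in their details.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      def spectral_regions(weights, n_regions):
          # normalized spectral clustering of a bus-coupling graph
          deg = weights.sum(axis=1)
          dinv = np.diag(1.0 / np.sqrt(deg))
          lap = np.eye(len(deg)) - dinv @ weights @ dinv
          _, vecs = np.linalg.eigh(lap)             # ascending eigenvalues
          emb = vecs[:, :n_regions]                 # smallest eigenvectors
          emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
          _, labels = kmeans2(emb, n_regions, minit="++", seed=1)
          return labels

      # toy 6-bus system: two strongly coupled 3-bus groups, weakly tied
      A = np.array([[0, 5, 5, 1, 0, 0],
                    [5, 0, 5, 0, 0, 0],
                    [5, 5, 0, 0, 0, 0],
                    [1, 0, 0, 0, 5, 5],
                    [0, 0, 0, 5, 0, 5],
                    [0, 0, 0, 5, 5, 0]], dtype=float)
      print(spectral_regions(A, 2))                 # e.g. [0 0 0 1 1 1]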

  14. Distribution Line Parameter Estimation Under Consideration of Measurement Tolerances

    DEFF Research Database (Denmark)

    Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena

    2016-01-01

    State estimation and control approaches in electric distribution grids rely on precise electric models that may be inaccurate. This work presents a novel method of estimating distribution line parameters using only root mean square voltage and power measurements, under consideration of measurement tolerances, noise, and asynchronous timestamps. A measurement tolerance compensation model and an alternative representation of the power flow equations without voltage phase angles are introduced. The line parameters are obtained using numeric methods. The simulation demonstrates, in the case of the series conductance, that the absolute compensated error is −1.05% and −1.07% for the two representations, as opposed to the expected uncompensated error of −79.68%. Identification of a laboratory distribution line using real measurement data yields a deviation of 6.75% and 4.00%, respectively, from a calculation …

  16. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    Science.gov (United States)

    Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver L.

    2016-01-01

    Satellite constellations and Distributed Spacecraft Mission (DSM) architectures offer unique benefits to Earth observation scientists and unique challenges to cost estimators. The Cost and Risk (CR) module of the Tradespace Analysis Tool for Constellations (TAT-C) being developed by NASA Goddard seeks to address some of these challenges by providing a new approach to cost modeling, which aggregates existing Cost Estimating Relationships (CER) from respected sources, cost estimating best practices, and data from existing and proposed satellite designs. Cost estimation in this tool is approached from two perspectives: parametric cost estimating relationships and analogous cost estimation techniques. The dual approach utilized within the TAT-C CR module is intended to address prevailing concerns regarding early design stage cost estimates, and to offer increased transparency and fidelity by providing two preliminary perspectives on mission cost. This work outlines the existing cost model, details the assumptions built into the model, and explains what measures have been taken to address the particular challenges of constellation cost estimating. The risk estimation portion of the TAT-C CR module is still in development and will be presented in future work. The cost estimate produced by the CR module is not intended to be an exact mission valuation, but rather a comparative tool to assist in the exploration of the constellation design tradespace. Previous work has noted that estimating the cost of satellite constellations is difficult, given that no comprehensive model for constellation cost estimation has yet been developed, and as such, quantitative assessment of multiple spacecraft missions has many remaining areas of uncertainty. By incorporating well-established CERs with preliminary approaches to addressing these uncertainties, the CR module offers a more complete approach to constellation costing than has previously been available to mission architects or Earth …

  17. Optimal Node Grouping for Water Distribution System Demand Estimation

    Directory of Open Access Journals (Sweden)

    Donghwi Jung

    2016-04-01

    Real-time state estimation is defined as the process of calculating, in real time, a state variable of interest that is not directly measured. In a water distribution system (WDS), nodal demands are often considered as the state variables (i.e., unknown variables) and can be estimated using nodal pressures and pipe flow rates measured at sensors installed throughout the system. Nodes are often grouped for aggregation to decrease the number of unknowns (demands) in the WDS demand estimation problem. This study proposes an optimal node grouping model to maximize real-time WDS demand estimation accuracy. A Kalman filter-based demand estimation method is linked with a genetic algorithm for node group optimization. The modified Austin network demand is estimated to demonstrate the proposed model. True demands and field measurements are synthetically generated using a hydraulic model of the study network. The optimal node groups identified by the proposed model reduce the total root-mean-square error of the estimated node group demand by 24% compared to that determined by engineering knowledge. Based on the results, more pipe flow sensors should be installed to measure small flows and to further enhance demand estimation accuracy.

  18. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    Science.gov (United States)

    Ahmed, Mousumi

    Designing control techniques for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of backstepping based techniques is performed in this research, with the purpose of tracking a moving target autonomously. Our main motivation is to explore controllers for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller for a single UAV in a three-dimensional environment is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes, i.e. velocity, ground heading (or course angle), and flight path angle, for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of perfect and full state information of the target, to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus based cooperative control is studied; such consensus based control problems can be viewed through concepts of algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph, where UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented to account for the group, to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented, and simulation results for different communication topologies are shown. This research also investigates the cases where the communication

  19. Distributed Information Compression for Target Tracking in Cluster-Based Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shi-Kuan Liao

    2016-06-01

    Target tracking is a critical wireless sensor application, which involves signal and information processing technologies. In conventional target position estimation methods, an estimate is usually represented by an average target position. In contrast, this work proposes a distributed information compression method to describe the measurement uncertainty of tracking problems in cluster-based wireless sensor networks. A leader-based information processing scheme is applied to perform target positioning and energy conservation. A two-level hierarchical network topology is adopted for energy-efficient target tracking with information compression. The Level 1 network architecture is a cluster-based network topology for managing network operations. The Level 2 network architecture is an event-based and leader-based topology, utilizing the concept of information compression to process the estimates of sensor nodes. The simulation results show that, compared to conventional schemes, the proposed data processing scheme has a balanced system performance in terms of tracking accuracy, data size for transmission and energy consumption.

  20. Bayes Estimation of Change Point in Discrete Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    Mayuri Pandya

    2011-01-01

    A sequence of independent lifetimes X1,…,Xm,Xm+1,…,Xn was observed from a Maxwell distribution with reliability r1(t) at time t, but later it was found that there was a change in the system at some point of time m, reflected in the sequence after Xm by a change in reliability r2(t) at time t. The Bayes estimators of m, θ1, θ2 are derived under different asymmetric loss functions. The effects of correct and wrong prior information on the Bayes estimates are studied.

  1. Estimation of Nanoparticle Size Distributions by Image Analysis

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael; Hansen, Mikkel Fougt

    2000-01-01

    Knowledge of the nanoparticle size distribution is important for the interpretation of experimental results in many studies of nanoparticle properties. An automated method is needed for accurate and robust estimation of the particle size distribution from nanoparticle images with thousands of particles. In this paper, we present an automated image analysis technique based on a deformable ellipse model that can perform this task. Results of using this technique are shown for both nearly spherical particles and more irregularly shaped particles. The technique proves to be a very useful tool for nanoparticle …

  2. The influence of drug distribution and drug-target binding on target occupancy : The rate-limiting step approximation

    NARCIS (Netherlands)

    Witte, de W.E.A.; Vauquelin, G.; Graaf, van der P.H.; Lange, de E.C.M.

    2017-01-01

    The influence of drug-target binding kinetics on target occupancy can be influenced by drug distribution and diffusion around the target, often referred to as "rebinding" or "diffusion-limited binding". This gives rise to a decreased decline of the drug-target complex concentration as a result of a

  3. Structure Learning and Statistical Estimation in Distribution Networks - Part II

    Energy Technology Data Exchange (ETDEWEB)

    Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-13

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  4. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    Science.gov (United States)

    Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver

    2016-01-01

    Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one at a time, but much of the complexity in constellation and DSM cost modeling arises from constellation-level systems considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques, with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities

  5. Spherical Hamiltonian Monte Carlo for Constrained Target Distributions.

    Science.gov (United States)

    Lan, Shiwei; Zhou, Bo; Shahbaba, Babak

    2014-06-18

    Statistical models with constrained probability distributions are abundant in machine learning. Some examples include regression models with norm constraints (e.g., Lasso), probit models, many copula models, and Latent Dirichlet Allocation (LDA) models. Bayesian inference involving probability distributions confined to constrained domains can be quite challenging for commonly used sampling algorithms. For such problems, we propose a novel Markov Chain Monte Carlo (MCMC) method that provides a general and computationally efficient framework for handling boundary conditions. Our method first maps the D-dimensional constrained domain of parameters to the unit ball B^D, then augments it to a D-dimensional sphere S^D such that the original boundary corresponds to the equator of S^D. This way, our method handles the constraints implicitly by moving freely on the sphere, generating proposals that remain within boundaries when mapped back to the original space. To improve the computational efficiency of our algorithm, we divide the dynamics into several parts such that the resulting split dynamics has a partial analytical solution as a geodesic flow on the sphere. We apply our method to several examples including truncated Gaussian, Bayesian Lasso, Bayesian bridge regression, and a copula model for identifying synchrony among multiple neurons. Our results show that the proposed method can provide a natural and efficient framework for handling several types of constraints on target distributions.
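
    The ball-to-sphere augmentation at the heart of the method is compact enough to sketch. Assuming the constrained parameter has already been mapped into the unit ball, the extra coordinate is just the height that places the point on the upper hemisphere, with the boundary landing on the equator; a minimal Python illustration:

      import numpy as np

      def ball_to_sphere(theta):
          # append q_{D+1} = sqrt(1 - ||theta||^2): the ball's interior
          # maps to the upper hemisphere, its boundary to the equator
          return np.append(theta, np.sqrt(max(0.0, 1.0 - theta @ theta)))

      def sphere_to_ball(q):
          # inverse map: drop the last coordinate
          return q[:-1]

      theta = np.array([0.6, -0.3])         # constrained point, ||theta|| < 1
      q = ball_to_sphere(theta)
      print(q, np.linalg.norm(q))           # q lies on the unit sphere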

  6. Estimating the Age Distribution of Oceanic Dissolved Organic Carbon

    Science.gov (United States)

    Follett, C. L.; Forney, D. C.; Repeta, D.; Rothman, D.

    2010-12-01

    Dissolved organic carbon (DOC) is a large, ubiquitous component of open ocean water at all depths and impacts atmospheric carbon dioxide levels at both short and long timescales. It is currently believed that oceanic DOC contains a multi-thousand-year-old refractory deep-water component which is mixed with a young labile component in surface waters. Unfortunately, the only evidence for this comes from a few isolated depth profiles of both DOC concentration and bulk radiocarbon. Although the profile data is consistent with a two-component mixing model, directly separating the two components has proven to be a challenge. We explore the validity of the two component mixing model by directly estimating the age distribution of oceanic DOC. The two-component model suggests that the age distribution is composed of two distinct peaks. In order to obtain an estimate of the age distribution we first record changes in both concentration and percent radiocarbon as a sample is oxidized under ultra-violet radiation [1]. We formulate a mathematical model relating the age distribution to these changes, assuming that they result from components of different radiocarbon age and UV-reactivity. This allows us to numerically invert the data and estimate the age distribution. We apply our procedure to DOC samples collected from three distinct depths (50, 500, and 2000 meters) in the north-central Pacific Ocean. [1] S.R. Beaupre, E.R.M. Druffel, and S. Griffin. A low-blank photochemical extraction system for concentration and isotopic analyses of marine dissolved organic carbon. Limnol. Oceanogr. Methods, 5:174-184, 2007.
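
    The two-peak picture above can be made concrete with a back-of-the-envelope mixing calculation: the bulk fraction modern of a mixture is the carbon-weighted average of the components' fractions modern. The sketch below uses the standard Libby mean-life convention (8033 yr) for radiocarbon ages; the component ages and mixing fraction are hypothetical.

    # Sketch of the two-component mixing model referenced above (hypothetical ages).
    # Radiocarbon convention: fraction modern Fm = exp(-age / 8033), using the
    # Libby mean life of 8033 years; bulk Fm is the carbon-weighted mixture.
    import math

    LIBBY_MEAN_LIFE = 8033.0  # years

    def fraction_modern(age_yr: float) -> float:
        return math.exp(-age_yr / LIBBY_MEAN_LIFE)

    def bulk_age(frac_young: float, age_young: float, age_old: float) -> float:
        """Apparent radiocarbon age of a two-component mixture."""
        fm = frac_young * fraction_modern(age_young) \
             + (1.0 - frac_young) * fraction_modern(age_old)
        return -LIBBY_MEAN_LIFE * math.log(fm)

    # A 70/30 mix of modern (0 yr) and 6000-yr-old DOC looks about 1400 yr old in bulk:
    print(round(bulk_age(0.7, 0.0, 6000.0)), "years")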

  7. Polynomial probability distribution estimation using the method of moments

    Science.gov (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
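
    On a bounded interval the procedure reduces to a linear solve, since for p(x) = sum_k a_k x^k each moment is a linear function of the coefficients. The sketch below works this out on [0, 1] with illustrative data and degree; the paper's algorithmic setup (interval scaling, degree selection) is more general.

    # Sketch of polynomial PDF estimation by the method of moments on [0, 1].
    # For p(x) = sum_k a_k x^k, the m-th moment is sum_k a_k / (k + m + 1),
    # so matching moments 0..N (moment 0 = 1 enforces normalization) is a
    # linear system in the coefficients a_k. Data and degree are illustrative.
    import numpy as np

    def polynomial_pdf_mom(samples: np.ndarray, degree: int) -> np.ndarray:
        moments = np.array([np.mean(samples ** m) for m in range(degree + 1)])
        moments[0] = 1.0  # normalization constraint
        # A[m, k] = integral_0^1 x^(m+k) dx = 1 / (m + k + 1)
        A = np.array([[1.0 / (m + k + 1) for k in range(degree + 1)]
                      for m in range(degree + 1)])
        return np.linalg.solve(A, moments)

    rng = np.random.default_rng(0)
    x = rng.beta(2.0, 5.0, size=10_000)       # "unknown" distribution on [0, 1]
    coeffs = polynomial_pdf_mom(x, degree=4)  # a_0 ... a_4
    grid = np.linspace(0.0, 1.0, 5)
    print(np.polyval(coeffs[::-1], grid))     # approximate density values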

  8. Polynomial probability distribution estimation using the method of moments.

    Science.gov (United States)

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  9. ESTIMATING FIBRE DIRECTION DISTRIBUTIONS OF REINFORCED COMPOSITES FROM TOMOGRAPHIC IMAGES

    Directory of Open Access Journals (Sweden)

    Oliver Wirjadi

    2016-12-01

    Full Text Available Fibre reinforced composites constitute a relevant class of materials used chiefly in lightweight construction, for example in fuselages or car bodies. The spatial arrangement of the fibres, and in particular their direction distribution, has a huge impact on macroscopic properties and, thus, its determination is an important topic of material characterisation. The fibre direction distribution is defined on the unit sphere, and it is therefore preferable to work with fully three-dimensional images of the microstructure as obtained, e.g., by computed micro-tomography. A number of recent image analysis algorithms exploit local grey value variations to estimate a preferred direction in each fibre point. Averaging these local results leads to estimates of the volume-weighted fibre direction distribution. We show how the thus derived fibre direction distribution is related to quantities commonly used in engineering applications. Furthermore, we discuss four algorithms for local orientation analysis, namely those based on the response of anisotropic Gaussian filters, moments and axes of inertia derived from directed distance transforms, the structure tensor, or the Hessian matrix. Finally, the feasibility of these algorithms is demonstrated for application examples and some advantages and disadvantages of the underlying methods are pointed out.
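
    Of the four local orientation methods listed, the structure tensor is the easiest to sketch: smooth the outer product of the grey-value gradient with itself, then take the eigenvector of the smallest eigenvalue, along which grey values vary least. A minimal version, with illustrative smoothing scales and a random stand-in volume:

    # Sketch of structure-tensor orientation estimation in a 3D image (one of the
    # four local methods mentioned above). At each voxel the tensor is the
    # Gaussian-smoothed outer product of the grey-value gradient; the eigenvector
    # of its smallest eigenvalue is the local fibre direction, because grey values
    # vary least along the fibre axis. Sigmas are illustrative.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def structure_tensor_direction(vol: np.ndarray, sigma_g=1.0, sigma_t=2.0):
        grads = np.gradient(gaussian_filter(vol, sigma_g))
        # Assemble the 3x3 tensor field J[i, j] = smooth(g_i * g_j).
        J = np.empty(vol.shape + (3, 3))
        for i in range(3):
            for j in range(3):
                J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_t)
        w, v = np.linalg.eigh(J)          # eigenvalues in ascending order
        return v[..., :, 0]               # eigenvector of the smallest eigenvalue

    vol = np.random.rand(32, 32, 32)      # stand-in for a micro-CT volume
    directions = structure_tensor_direction(vol)
    print(directions.shape)               # (32, 32, 32, 3) unit direction vectors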

  10. Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2016-12-01

    Full Text Available The main problem faced by naval radars is the suppression of clutter, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter, has a low computational cost, and outperforms the classic method based on Maximum Likelihood Estimates (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses the knowledge of clutter statistics for improving the stability of the detection, among other applications.
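
    The classic MLE the new method is compared against has a closed form when the scale x_m (minimum clutter level) is known: alpha_hat = n / sum(ln(x_i / x_m)). A quick check on synthetic Pareto clutter:

    # The classical MLE baseline for the Pareto shape parameter: with known
    # scale x_m, alpha_hat = n / sum(log(x_i / x_m)). Data below is synthetic.
    import numpy as np

    def pareto_shape_mle(x: np.ndarray, x_m: float) -> float:
        return len(x) / np.sum(np.log(x / x_m))

    rng = np.random.default_rng(1)
    true_shape, x_m = 3.0, 1.0
    x = x_m * (1.0 + rng.pareto(true_shape, size=5_000))  # Pareto(x_m, alpha) samples
    print(pareto_shape_mle(x, x_m))                       # close to 3.0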

  11. Comparing four methods to estimate usual intake distributions.

    Science.gov (United States)

    Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P

    2011-07-01

    The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced

  12. Hypothesis likelihood function estimation for synthetic aperture radar targets

    Science.gov (United States)

    Fister, Thomas; Garber, Frederick D.; Sawtelle, Steven C.; Withman, Raymond L.

    1993-10-01

    The work described in this paper focuses on recent progress in radar signal processing and target recognition techniques developed in support of WL/AARA target recognition programs. The goal of the program is to develop evaluation methodologies for hypotheses in a model-based framework. In this paper, we describe a hypothesis evaluation strategy that is predicated on a generalized likelihood function framework and allows for incomplete or inaccurate descriptions of the observed unknown target. The target hypothesis evaluation procedure we have developed begins with a structural analysis by means of parametric modeling of the radar scattering centers. The energy, location, dispersion, and shape of all measured target scattering centers are parametrized. The resulting structural description is used to represent each target and, subsequently, to evaluate the hypotheses of each of the targets in the candidate set.

  13. Structure Learning and Statistical Estimation in Distribution Networks - Part I

    Energy Technology Data Exchange (ETDEWEB)

    Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-13

    Traditionally, power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two-part paper, inspired by the proliferation of metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient -- polynomial in time -- which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  14. P3T+: A Performance Estimator for Distributed and Parallel Programs

    Directory of Open Access Journals (Sweden)

    T. Fahringer

    2000-01-01

    Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of a communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both accuracy and usefulness of P3T+.

  15. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-09-30

    Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and improving insight into operations to detect failures or mis-operations before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected the majority of the time, and implementing and operating a microgrid is much different when islanded. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are extended to the distribution system.

  16. Cyber-EDA: Estimation of Distribution Algorithms with Adaptive Memory Programming

    Directory of Open Access Journals (Sweden)

    Peng-Yeng Yin

    2013-01-01

    Full Text Available The estimation of distribution algorithm (EDA) aims to explicitly model the probability distribution of the quality solutions to the underlying problem. By iteratively filtering quality solutions from competing ones, the probability model eventually approximates the distribution of globally optimal solutions. In contrast to classic evolutionary algorithms (EAs), the EDA framework is flexible and is able to handle inter-variable dependence, which usually imposes difficulties on classic EAs. The success of EDA relies on effective and efficient building of the probability model. This paper enhances EDA with strategies from the adaptive memory programming (AMP) domain, which has developed several improved forms of EAs under the Cyber-EA framework. The experimental results on benchmark TSP instances support our anticipation that AMP strategies can enhance the performance of classic EDA by deriving a better approximation of the true distribution of the target solutions.
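
    The plain EDA baseline that such AMP strategies would enhance is compact enough to sketch. Below, a univariate marginal model (UMDA-style) solves the toy OneMax problem; this is the classic scheme, not the paper's Cyber-EDA:

    # Minimal sketch of a classic EDA (univariate marginal model, UMDA) on the
    # toy OneMax problem -- the plain baseline, not the paper's Cyber-EDA.
    import numpy as np

    rng = np.random.default_rng(7)
    n_bits, pop_size, n_select, n_gens = 40, 100, 30, 50
    p = np.full(n_bits, 0.5)                      # probability model: one Bernoulli per bit

    for _ in range(n_gens):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)   # sample the model
        fitness = pop.sum(axis=1)                 # OneMax: count of ones
        elite = pop[np.argsort(fitness)[-n_select:]]             # keep the best
        p = 0.5 * p + 0.5 * elite.mean(axis=0)    # re-estimate model (with smoothing)
        p = p.clip(0.02, 0.98)                    # avoid premature convergence

    best = (rng.random(n_bits) < p).astype(int)   # sample the final model
    print(best.sum(), "of", n_bits, "bits set")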

  17. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.

  18. Estimating a distribution function of the tumor size at metastasis.

    Science.gov (United States)

    Xu, J L; Prorok, P C

    1998-09-01

    In studying the relationship between the size of primary cancers and the occurrence of metastases, two quantities are of prime importance. The first is the distribution of tumor size at the point of metastatic transition, while the second is the probability that detectable metastases are present when cancer comes to medical attention. Kimmel and Flehinger (1991, Biometrics 47, 987-1004) developed a general nonparametric model and studied its two limiting cases. Because of the unidentifiability of their general model, a new identifiable model is introduced by making the hazard function for detecting a metastatic cancer a constant. The new model includes Kimmel and Flehinger's (1991) second limiting model as a special case. An estimator of the tumor size distribution at metastases is proposed. The result is applied to a set of colorectal cancer data.

  19. Estimating maximum depth distribution of seagrass using underwater videography

    Energy Technology Data Exchange (ETDEWEB)

    Norris, J.G. [Marine Resources Consultants, Port Townsend, WA (United States); Wyllie-Echeverria, S.

    1997-06-01

    The maximum depth distribution of eelgrass (Zostera marina) beds in Willapa Bay, Washington appears to be limited by light penetration, which is likely related to water turbidity. Using underwater videographic techniques we estimated that the maximum depth penetration in the less turbid outer bay was -5.85 ft (MLLW) and in the more turbid inner bay was only -1.59 ft (MLLW). Eelgrass beds had well defined deepwater edges and no eelgrass was observed in the deep channels of the bay. The results from this study suggest that aerial photographs taken during low tide periods are capable of recording the majority of eelgrass beds in Willapa Bay.

  20. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar

    Directory of Open Access Journals (Sweden)

    Teng Long

    2016-09-01

    Full Text Available Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are directly used for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed to the extended Kalman filter (EKF) to perform the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
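
    The first-stage filtering the abstract describes is an EKF on the fused measurements; the linear special case is easy to sketch. Below, a constant-velocity Kalman filter tracks a synthetic one-dimensional target; the motion model, noise levels, and measurement stream are illustrative stand-ins, not the paper's radar models.

    # Constant-velocity Kalman filter on noisy position measurements: the linear
    # special case of the first-stage EKF described above. All models synthetic.
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])      # constant-velocity state transition
    H = np.array([[1.0, 0.0]])           # we observe position only
    Q = 1e-3 * np.eye(2)                 # process noise covariance
    R = np.array([[0.05]])               # measurement noise covariance

    x = np.zeros(2)                      # state: [position, velocity]
    P = np.eye(2)

    rng = np.random.default_rng(3)
    for k in range(100):
        z = 0.5 * k * dt + rng.normal(0.0, 0.2)   # noisy position, true velocity 0.5
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    print(x)    # estimated [position, velocity]; velocity should be near 0.5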

  1. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar

    Science.gov (United States)

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-01-01

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are directly used for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed to the extended Kalman filter (EKF) to perform the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058

  2. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  3. Estimation of Raindrop size Distribution over Darjeeling (India)

    Science.gov (United States)

    Mehta, Shyam; Mitra, Amitabha

    2016-07-01

    A study of the raindrop size distribution (DSD) over Darjeeling (27°01'N, 88°15'E), India, has been carried out using a Micro Rain Radar (MRR), which measures drop counts per size class and rain rates at one-minute intervals at a series of heights. Exponential, lognormal, and gamma DSD models were fitted using the general moment formulation of the gamma DSD, with both lower-order and higher-order moments used to check the estimation of the drop size distributions. DSDs were examined at altitudes from 150 m to 2000 m in vertical steps of about 500 m; the data allow the DSD to be characterised up to about 2 km of the 4.5 km range. (i) At the height of 150 m, most DSDs behave as gamma distributions according to both the lower-order and the higher-order moments, with low drop concentrations at all rain rates. (ii) At upper altitudes, from 450 m to 2000 m, most DSDs behave as gamma distributions according to the higher-order moments only, with high drop concentrations at all rain rates. Every height behaves in a broadly similar manner except 150 m. An empirical DSD model has been derived from the fit parameters evaluated from the experimental data, and it is observed that the data fit a gamma distribution well for Darjeeling. The relation between slope (Λ) and shape (μ) shows the best resemblance at the height of 150 m (near the ground surface) for the lower-order moments under a linear fit, at all rain rates; at higher altitudes the μ-Λ relation shows no such resemblance under either a linear or a polynomial fit.
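
    For the gamma DSD N(D) = N0 * D^mu * exp(-Lambda*D), the nth moment is M_n = N0 * Gamma(mu+n+1) / Lambda^(mu+n+1), so consecutive moment ratios give mu and Lambda in closed form. A generic method-of-moments sketch using moment orders 2-4 (one common choice; the study also compares lower- and higher-order variants):

    # Generic method-of-moments fit for a gamma DSD, N(D) = N0 * D^mu * exp(-Lambda*D).
    # From M_n = N0 * Gamma(mu+n+1) / Lambda^(mu+n+1), the ratios
    #   M3/M2 = (mu+3)/Lambda  and  M4/M3 = (mu+4)/Lambda
    # give Lambda = 1/(M4/M3 - M3/M2) and mu = Lambda*(M3/M2) - 3 in closed form.
    import numpy as np

    def gamma_dsd_mom(diams: np.ndarray, counts: np.ndarray):
        moments = {n: np.sum(counts * diams ** n) for n in (2, 3, 4)}
        r1 = moments[3] / moments[2]
        r2 = moments[4] / moments[3]
        lam = 1.0 / (r2 - r1)
        mu = lam * r1 - 3.0
        return mu, lam

    # Synthetic spectrum with mu = 2, Lambda = 3 mm^-1 on drop diameters 0.1-5 mm:
    D = np.linspace(0.1, 5.0, 50)
    N = 1e4 * D ** 2 * np.exp(-3.0 * D)
    print(gamma_dsd_mom(D, N))   # close to (2.0, 3.0)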

  4. Systematic procedure for generating operational policies to achieve target crystal size distribution (CSD) in batch cooling crystallization

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Singh, Ravendra; Sin, Gürkan

    2011-01-01

    A systematic procedure to achieve a target crystal size distribution (CSD) under generated operational policies in batch cooling crystallization is presented. An analytical CSD estimator has been employed in the systematic procedure to generate the necessary operational policies to achieve the target CSD.

  5. Wireless Power Transfer for Distributed Estimation in Sensor Networks

    Science.gov (United States)

    Mai, Vien V.; Shin, Won-Yong; Ishibashi, Koji

    2017-04-01

    This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where wireless sensors are equipped with radio-frequency based energy harvesting technology. The sensors' observations are locally processed by using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, and are coherently combined at the FC, at which the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of mean-squared error of the BLUE. Two iterative algorithms are developed to solve the non-convex problems, which converge at least to a local optimum. In particular, the above algorithms are designed to jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for colocated sensors, are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.
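
    The BLUE the abstract adopts has a standard closed form for a scalar source: with observations z = h*theta + n and noise covariance C, theta_hat = (h' C^-1 z) / (h' C^-1 h). A minimal numerical check, with illustrative gains and noise levels:

    # Sketch of the BLUE fusion step: for z = h*theta + n with noise covariance C,
    # theta_hat = (h' C^-1 z) / (h' C^-1 h). Gains and noise levels are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    n_sensors = 8
    theta = 2.0                                    # unknown scalar source
    h = rng.uniform(0.5, 1.5, n_sensors)           # effective channel/amplification gains
    noise_var = rng.uniform(0.01, 0.1, n_sensors)  # heterogeneous sensor noise
    z = h * theta + rng.normal(0.0, np.sqrt(noise_var))

    C_inv = 1.0 / noise_var                        # C is diagonal here
    theta_hat = (h * C_inv) @ z / ((h * C_inv) @ h)
    var_blue = 1.0 / ((h * C_inv) @ h)             # achieved MSE of the BLUE
    print(theta_hat, var_blue)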

  6. Multiple Model Adaptive Estimator Target Tracker for Maneuvering Targets in Clutter

    National Research Council Canada - National Science Library

    Smith, Brian D

    2005-01-01

    ...) to be implemented directly. Poorly known or varying target dynamics complicate the design of any tracking filter, and filters using only a single dynamics model can rarely handle anything beyond the most benign target maneuvers...

  7. Particle size distribution: A key factor in estimating powder dustiness.

    Science.gov (United States)

    López Lilao, Ana; Sanfélix Forner, Vicenta; Mallol Gasch, Gustavo; Monfort Gimeno, Eliseo

    2017-12-01

    A wide variety of raw materials, involving more than 20 samples of quartzes, feldspars, nephelines, carbonates, dolomites, sands, zircons, and alumina, were selected and characterised. These raw materials were selected to encompass a wide range of particle sizes (1.6-294 µm) and true densities (2650-4680 kg/m³). The dustiness of the raw materials, i.e., their tendency to generate dust on handling, was determined using the continuous drop method. The influence of some key material parameters (particle size distribution, flowability, and specific surface area) on dustiness was assessed. In this regard, dustiness was found to be significantly affected by particle size distribution. Data analysis enabled development of a model for predicting the dustiness of the studied materials, assuming that dustiness depends on the particle fraction susceptible to emission and on the bulk material's susceptibility to release these particles. On the one hand, the developed model allows the dustiness mechanisms to be better understood. In this regard, it may be noted that relative emission increased with mean particle size. However, this did not necessarily imply that dustiness did, because dustiness also depended on the fraction of particles susceptible to being emitted. On the other hand, the developed model enables dustiness to be estimated using just the particle size distribution data. The quality of the fits was quite good, and the fact that only particle size distribution data are needed facilitates industrial application, since these data are usually known by raw materials managers, making additional tests unnecessary. This model may therefore be deemed a key tool in drawing up efficient preventive and/or corrective measures to reduce dust emissions during bulk powder processing, both inside and outside industrial facilities. It is recommended, however

  8. Efficient Estimation of Smooth Distributions From Coarsely Grouped Data

    DEFF Research Database (Denmark)

    Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C

    2015-01-01

    Ungrouping binned data can be desirable for many reasons: Bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information… in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes that only the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most… to the estimation of rates when both the event counts and the exposures to risk are grouped.

  9. Fast Parabola Detection Using Estimation of Distribution Algorithms

    Science.gov (United States)

    Guerrero-Turrubiates, Jose de Jesus; Sierra-Hernandez, Juan Manuel; Avila-Garcia, Maria Susana; Rojas-Laguna, Roberto

    2017-01-01

    This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image by using the Hadamard product as fitness function. This proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in terms of execution time about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications. PMID:28321264

  10. Fast Parabola Detection Using Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Jose de Jesus Guerrero-Turrubiates

    2017-01-01

    Full Text Available This paper presents a new method based on Estimation of Distribution Algorithms (EDAs to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image by using the Hadamard product as fitness function. This proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in terms of execution time about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications.
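
    The candidate-generation step both records describe reduces to a 3x3 linear solve: three boundary pixels determine y = ax^2 + bx + c, and each candidate is scored against the edge image. A minimal sketch on a synthetic image, using an edge-pixel count as a stand-in for the Hadamard-product fitness (the EDA loop itself is omitted):

    # Sketch of the candidate-generation step: three boundary pixels determine
    # y = a*x^2 + b*x + c via a 3x3 Vandermonde solve; a candidate is scored by
    # how many edge pixels it reproduces (stand-in for the Hadamard-product fitness).
    import numpy as np

    def parabola_through(p1, p2, p3):
        xs = np.array([p1[0], p2[0], p3[0]], float)
        ys = np.array([p1[1], p2[1], p3[1]], float)
        V = np.vander(xs, 3)              # rows [x^2, x, 1]
        return np.linalg.solve(V, ys)     # coefficients (a, b, c)

    def fitness(coeffs, edge_img):
        h, w = edge_img.shape
        xs = np.arange(w)
        ys = np.rint(np.polyval(coeffs, xs)).astype(int)
        ok = (ys >= 0) & (ys < h)
        return edge_img[ys[ok], xs[ok]].sum()   # matched edge pixels

    # Synthetic edge image containing the parabola y = 0.05*(x-32)^2 + 5:
    img = np.zeros((64, 64), int)
    xs = np.arange(64)
    ys = np.rint(0.05 * (xs - 32) ** 2 + 5).astype(int)
    keep = ys < 64
    img[ys[keep], xs[keep]] = 1

    coeffs = parabola_through((20, 12.2), (32, 5.0), (50, 21.2))  # three "boundary" pixels
    print(coeffs, fitness(coeffs, img))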

  11. ESTIMATING SOIL PARTICLE-SIZE DISTRIBUTION FOR SICILIAN SOILS

    Directory of Open Access Journals (Sweden)

    Vincenzo Bagarello

    2009-09-01

    Full Text Available The soil particle-size distribution (PSD) is commonly used for soil classification and for estimating soil behavior. An accurate mathematical representation of the PSD is required to estimate soil hydraulic properties and to compare texture measurements from different classification systems. The objective of this study was to evaluate the ability of the Haverkamp and Parlange (HP) and Fredlund et al. (F) PSD models to fit 243 measured PSDs from a wide range of soil textures in Sicily and to test the effect of the number of measured particle diameters on the fitting of the theoretical PSD. For each soil textural class, the best fitting performance, established using three statistical indices (MXE, ME, RMSE), was obtained for the F model with three fitting parameters. In particular, this model performed better in the fine-textured soils than the coarse-textured ones, but a good performance (i.e., RMSE < 0.03) was detected for the majority of the investigated soil textural classes, i.e. the clay, silty-clay, silty-clay-loam, silt-loam, clay-loam, loamy-sand, and loam classes. Decreasing the number of measured data pairs from 14 to eight resulted in a worse fitting of the theoretical distribution to the measured one. It was concluded that the F model with three fitting parameters has a wide applicability for Sicilian soils and that the comparison of different PSD investigations can be affected by the number of measured data pairs.

  12. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Full Text Available Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two models of the distribution, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on the Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds, as Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
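
    The contrast the study draws can be reproduced on synthetic data: a Beta density has at most one interior mode, so a method-of-moments Beta fit cannot place two interior humps, while a Gaussian KDE recovers both. A sketch with an artificial bimodal recovery-rate sample:

    # Sketch contrasting the two estimators above on a synthetic bimodal
    # "recovery rate" sample: Beta fit by method of moments versus Gaussian KDE.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    # Bimodal sample on (0, 1): a mix of low and high recoveries.
    sample = np.concatenate([rng.beta(2, 8, 600), rng.beta(9, 2, 400)])

    # Beta(a, b) by method of moments.
    m, v = sample.mean(), sample.var()
    common = m * (1 - m) / v - 1
    a, b = m * common, (1 - m) * common

    kde = stats.gaussian_kde(sample)

    grid = np.linspace(0.01, 0.99, 5)
    print("beta:", stats.beta.pdf(grid, a, b))
    print("kde: ", kde(grid))   # shows the two interior humps the Beta fit misses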

  13. Fast Image Segmentation Using Two-Dimensional Otsu Based on Estimation of Distribution Algorithm

    Directory of Open Access Journals (Sweden)

    Wuli Wang

    2017-01-01

    Full Text Available The traditional two-dimensional Otsu algorithm has several drawbacks: the sum of the probabilities of target and background is only approximately 1, the details of the neighborhood image are not obvious, and the computational cost is high. In order to address these problems, a method for fast image segmentation using two-dimensional Otsu based on the estimation of distribution algorithm is proposed. Firstly, in order to enhance the performance of image segmentation, guided filtering is employed to improve the neighborhood image template instead of mean filtering. Additionally, the probabilities of target and background in the two-dimensional histogram are calculated exactly to get a more accurate threshold. Finally, the trace of the interclass dispersion matrix is taken as the fitness function of the estimation of distribution algorithm, and the optimal threshold is obtained by constructing and sampling the probability model. Extensive experimental results demonstrate that our method can effectively preserve details of the target, improve the segmentation precision, and reduce the running time of algorithms.

  14. Multiplicity distributions of shower particles and target fragments in 7 ...

    Indian Academy of Sciences (India)

    emulsion) collisions at 3 A GeV/c are experimentally studied. In the framework of the multisource thermal model, the multicomponent Erlang distribution is used to describe the experimental multiplicity distributions of shower particles, grey fragments ...

  15. Estimating investor preferences towards portfolio return distribution in investment funds

    Directory of Open Access Journals (Sweden)

    Margareta Gardijan

    2015-03-01

    Full Text Available Recent research in the field of investor preference has emphasised the need to go beyond simply analyzing the first two moments of a portfolio return distribution, as used in the MV (mean-variance) paradigm. The suggestion is to model an investor's utility function as an nth-order Taylor approximation. In such terms, the assumption is that investors prefer greater values of odd moments and smaller values of even moments. In order to investigate the preferences of Croatian investment funds, an analysis of the moments of their return distributions is conducted. The sample contains data on monthly returns of 30 investment funds in Croatia for the period from January 1999 to May 2014. Using the theoretical utility functions (DARA, CARA, CRRA), we compare changes in their preferences when higher moments are included. Moreover, we investigate an extension of the CAPM model in order to find out whether including higher moments can better explain the relationship between reward and risk premium, and whether we can apply these findings to estimate preferences of Croatian institutional investors. The results indicate that Croatian institutional investors do not seek compensation for bearing greater market risk.
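
    The nth-order Taylor idea can be made concrete: expanding u around the mean return gives E[u(R)] ~ u(m) + u''(m)*m2/2 + u'''(m)*m3/6 + u''''(m)*m4/24, so the sign pattern of the derivatives delivers the stated preference for odd moments and aversion to even ones. A sketch with CRRA utility and synthetic returns (all parameters illustrative):

    # 4th-order Taylor approximation of expected utility around the mean return:
    # E[u(R)] ~ u(m) + u''(m) m2/2 + u'''(m) m3/6 + u''''(m) m4/24,
    # making preference for odd moments (m3) and aversion to even ones (m2, m4)
    # explicit. CRRA utility u(w) = w^(1-g)/(1-g); returns are synthetic.
    import numpy as np

    def crra_derivs(w, g):
        """u, u'', u''', u'''' of CRRA utility at wealth w (risk aversion g)."""
        u = w ** (1 - g) / (1 - g)
        u2 = -g * w ** (-1 - g)
        u3 = g * (g + 1) * w ** (-2 - g)
        u4 = -g * (g + 1) * (g + 2) * w ** (-3 - g)
        return u, u2, u3, u4

    rng = np.random.default_rng(2)
    R = 1.0 + rng.normal(0.05, 0.1, 10_000) + 0.02 * rng.standard_t(5, 10_000)

    m = R.mean()
    m2, m3, m4 = ((R - m) ** 2).mean(), ((R - m) ** 3).mean(), ((R - m) ** 4).mean()
    u, u2, u3, u4 = crra_derivs(m, g=3.0)
    eu_taylor = u + u2 * m2 / 2 + u3 * m3 / 6 + u4 * m4 / 24
    print(eu_taylor, np.mean(R ** (1 - 3.0) / (1 - 3.0)))  # close agreement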

  16. Constant false alarm rate algorithm for the dim-small target detection based on the distribution characteristics of target coordinates

    Science.gov (United States)

    Fei, Xiao-Liang; Ren, Kan; Qian, Wei-xian; Wang, Peng-cheng

    2015-10-01

    CFAR (Constant False Alarm Rate) processing is a key technology in infrared dim-small target detection systems. Traditional CFAR detection algorithms estimate a probability density distribution from the pixel grey levels of each region of the whole image and calculate each region's target segmentation threshold from the CFAR equation; as a result, the probability distribution statistics are difficult to obtain, the computational load is large, and the delay time is long. In order to solve these problems effectively, a CFAR equation based on the distribution of target coordinates is presented. Firstly, this paper improves the traditional CFAR equation, which is based on the single grey-level distribution, by introducing the statistical distribution features of the targets, so that the false alarm rate is controlled more accurately according to the target distribution information, and the high false alarm rates caused by complex backgrounds in local regions, such as cloud reflections and ground clutter interference, are suppressed. At the same time, in order to reduce the amount of computation and improve the real-time behaviour of the algorithm, the CFAR statistical regions are divided adaptively according to the two-dimensional probability density of the target count, in contrast to the usual fixed partitioning of statistical regions. Finally, the target segmentation threshold of the next frame is calculated iteratively from the probability density of the target distribution over the image sequence, which drives the false alarm rate down to its upper limit. The experimental results show that the proposed method can significantly improve the operation time and meet real-time requirements.
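
    For contrast with the proposed coordinate-based scheme, the classic cell-averaging CFAR baseline is easy to state: the detection threshold is a scale factor times the average of the training cells around the cell under test. A one-dimensional sketch (this is the textbook baseline, not the paper's method):

    # Classic one-dimensional cell-averaging CFAR, shown as a baseline
    # illustration of CFAR thresholding (not the paper's coordinate method).
    # Threshold = alpha * mean(training cells) around each cell under test.
    import numpy as np

    def ca_cfar(x: np.ndarray, n_train=16, n_guard=2, pfa=1e-3) -> np.ndarray:
        n = len(x)
        # Scale factor for exponential (square-law) noise and N training cells:
        alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
        detections = np.zeros(n, dtype=bool)
        half = n_train // 2 + n_guard
        for i in range(half, n - half):
            lead = x[i - half : i - n_guard]
            lag = x[i + n_guard + 1 : i + half + 1]
            noise = np.concatenate([lead, lag]).mean()
            detections[i] = x[i] > alpha * noise
        return detections

    rng = np.random.default_rng(4)
    power = rng.exponential(1.0, 500)    # square-law detected noise
    power[250] += 30.0                   # inject a target
    hits = ca_cfar(power)
    print(np.flatnonzero(hits))          # should include index 250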

  17. UAV to UAV Target Detection and Pose Estimation

    Science.gov (United States)

    2012-06-01

    open computer vision) for real-time implementation and faster computation, since OpenCV has precompiled libraries that may work better for real image...affordable CCD cameras and open coding libraries. We accomplish this by reviewing past literature about UAV detection and pose estimation and exploring...capabilities suitable for the purpose of UAV to UAV detection and pose estimation using affordable CCD cameras and open coding libraries.

  18. [Assessment of accumulation of the shot products on the targets in shooting from a short-barreled gun with a muffler for estimation of shooting distance].

    Science.gov (United States)

    Demidov, I V; Luzanova, I S; Sonis, M A

    2006-01-01

    A practical expert task--to estimate the shot distance and the order of shots fired at two victims from a gun with a muffler--is described as an illustration of the possibilities of a complex investigation involving experimental shots and emission spectral analysis of the targets. The distribution of shot soot on the targets for shooting distances of up to 1 m is analyzed.

  19. Target similarity effects: support for the parallel distributed processing assumptions.

    Science.gov (United States)

    Humphreys, M S; Tehan, G; O'Shea, A; Bolland, S W

    2000-07-01

    Recent research has begun to provide support for the assumptions that memories are stored as a composite and are accessed in parallel (Tehan & Humphreys, 1998). New predictions derived from these assumptions and from the Chappell and Humphreys (1994) implementation of these assumptions were tested. In three experiments, subjects studied relatively short lists of words. Some of the lists contained two similar targets (thief and theft) or two dissimilar targets (thief and steal) associated with the same cue (robbery). As predicted, target similarity affected performance in cued recall but not free association. Contrary to predictions, two spaced presentations of a target did not improve performance in free association. Two additional experiments confirmed and extended this finding. Several alternative explanations for the target similarity effect, which incorporate assumptions about separate representations and sequential search, are rejected. The importance of the finding that, in at least one implicit memory paradigm, repetition does not improve performance is also discussed.

  20. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating unit parameters, and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based

  1. Cramér-Rao Bound Study of Multiple Scattering Effects in Target Separation Estimation

    Directory of Open Access Journals (Sweden)

    Edwin A. Marengo

    2013-01-01

    Full Text Available The information about the distance of separation between two point targets that is contained in scattering data is explored in the context of the scalar Helmholtz operator via the Fisher information and the associated Cramér-Rao bound (CRB) relevant to unbiased target separation estimation. The CRB results are obtained for the exact multiple scattering model and, for reference, also for the single scattering or Born approximation model applicable to weak scatterers. The effects of the sensing configuration and the scattering parameters on target separation estimation are analyzed. Conditions under which the targets' separation cannot be estimated are discussed for both models. Conditions for multiple scattering to be useful or detrimental to target separation estimation are discussed and illustrated.
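
    For reference, the generic form of the quantities the abstract works with, written for a deterministic signal s(d) in complex white Gaussian noise; this is the textbook definition, not the paper's exact multiple-scattering expressions:

    % For data y = s(d) + n with n ~ CN(0, sigma^2 I) and unknown separation d,
    % the Fisher information and the Cramer-Rao bound on any unbiased estimator
    % \hat{d} are
    \[
      I(d) \;=\; \frac{2}{\sigma^{2}}
      \left\lVert \frac{\partial s(d)}{\partial d} \right\rVert^{2},
      \qquad
      \operatorname{var}\bigl(\hat{d}\bigr) \;\ge\; \mathrm{CRB}(d) \;=\; I(d)^{-1}.
    \]
    % Multiple scattering enters through s(d): the exact model makes s depend on d
    % nonlinearly via inter-target interactions, which can raise or lower I(d)
    % relative to the Born (single-scattering) approximation.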

  2. Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials

    Science.gov (United States)

    Niu, Qifei; Zhang, Chi

    2018-03-01

    There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and their applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables. Therefore, simultaneous processing of multiple data sets could potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when the knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution of the samples, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample can be extracted with the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared with individual inversions, the joint inversion can improve the resolution of the estimated pore size distribution because of the addition of extra data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
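
    The Gauss-Newton iteration used above is generic and compact enough to sketch; the toy exponential-decay forward model below is a stand-in, not the paper's NMR/SIP physics.

    # Generic Gauss-Newton iteration for a nonlinear least-squares fit:
    # minimize ||y - f(m)||^2 by m <- m + (J'J)^-1 J' (y - f(m)).
    # The exponential-decay forward model is a toy stand-in.
    import numpy as np

    def gauss_newton(f, jac, y, m0, n_iter=20):
        m = np.array(m0, float)
        for _ in range(n_iter):
            r = y - f(m)                      # residual
            J = jac(m)                        # Jacobian of f at m
            m += np.linalg.solve(J.T @ J, J.T @ r)
        return m

    # Toy model y = a * exp(-t / T) with two parameters (a, T).
    t = np.linspace(0.0, 2.0, 40)
    f = lambda m: m[0] * np.exp(-t / m[1])
    jac = lambda m: np.column_stack([np.exp(-t / m[1]),
                                     m[0] * t / m[1] ** 2 * np.exp(-t / m[1])])
    rng = np.random.default_rng(6)
    y = f([2.0, 0.7]) + rng.normal(0.0, 0.01, t.size)
    print(gauss_newton(f, jac, y, [1.0, 1.0]))   # near (2.0, 0.7)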

  3. Consensus-based distributed estimation in multi-agent systems with time delay

    Science.gov (United States)

    Abdelmawgoud, Ahmed

    During the last years, research in the field of cooperative control of swarms of robots, especially Unmanned Aerial Vehicles (UAVs), has advanced as UAV applications have multiplied. The ability to track targets using UAVs has a wide range of applications, not only civilian but also military. For civilian applications, UAVs can perform tasks including, but not limited to: mapping an unknown area, weather forecasting, land survey, and search and rescue missions. On the other hand, for military personnel, UAVs can track and locate a variety of objects, including the movement of enemy vehicles. Consensus problems arise in a number of applications including coordination of UAVs, information processing in wireless sensor networks, and distributed multi-agent optimization. We consider widely studied consensus algorithms for processing data sensed by different sensors in wireless sensor networks of dynamic agents. Every agent involved in the network forms a weighted average of its own estimated value of some state with the values received from its neighboring agents. We introduce a novel consensus-based distributed estimation algorithm to reach a consensus under time delay constraints. The proposed algorithm's performance was observed in a scenario where a swarm of UAVs measures the location of a ground maneuvering target. We assume that each UAV computes its state prediction and shares it with its neighbors only; however, the shared information reaches different agents with varying time delays. The entire group of UAVs must reach a consensus on the target state. Different scenarios were also simulated to examine the effectiveness and performance in terms of overall estimation error, disagreement between delayed and non-delayed agents, and time to reach a consensus for each parameter contributing to the proposed algorithm.
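
    The weighted-average consensus step with link delays can be sketched generically; the equal weights, complete graph, and fixed per-link delays below are illustrative simplifications, not the paper's algorithm.

    # Sketch of weighted-average consensus with per-link delays: each agent
    # averages the (delayed) values it has received from its neighbors.
    # Delays, weights, and topology are illustrative simplifications.
    import numpy as np

    rng = np.random.default_rng(9)
    n, steps = 5, 60
    delay = rng.integers(0, 3, size=(n, n))       # per-link delays (in steps)
    x = rng.normal(10.0, 2.0, n)                  # initial local estimates
    history = [x.copy()]

    for k in range(steps):
        x_new = np.empty(n)
        for i in range(n):
            vals = [history[max(0, k - delay[i, j])][j] for j in range(n)]
            x_new[i] = np.mean(vals)              # equal-weight average of delayed values
        history.append(x_new)

    print(history[-1])   # all agents close to a common consensus value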

  4. Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs

    Science.gov (United States)

    Kar, Soummya; Moura, José M. F.

    2011-08-01

    The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a., large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms--one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we consider the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator remains consistent.

  5. Estimating Mandibular Motion Based on Chin Surface Targets During Speech

    Science.gov (United States)

    Green, Jordan R.; Wilson, Erin M.; Wang, Yu-Tsai; Moore, Christopher A.

    2009-01-01

    Purpose The movement of the jaw during speech and chewing has frequently been studied by tracking surface landmarks on the chin. However, the extent to which chin motions accurately represent those of the underlying mandible remains in question. In this investigation, the movements of a pellet attached to the incisor of the mandible were compared with those of pellets attached to different regions of the chin. Method Ten healthy talkers served as participants. Three speaking contexts were recorded from each participant: word, sentence, and paragraph. Chin position errors were estimated by computing the standard distance between the mandibular incisor pellet and the chin pellets. Results Relative to the underlying mandible, chin pellets moved with an average absolute and relative error of 0.81 mm and 7.30%, respectively. The movements of chin and mandibular pellets were tightly coupled in time. Conclusion The chin tracking errors observed in this investigation are considered acceptable for descriptive studies of oromotor behavior, particularly in situations where mandibular placements are not practical (e.g., young children or edentulous adults). The observed amount of error, however, may not be tolerable for fine-grained analyses of mandibular biomechanics. Several guidelines are provided for minimizing error associated with tracking surface landmarks on the chin. PMID:17675597

  6. LEADER: fast estimates of asteroid shape elongation and spin latitude distributions from scarce photometry

    Science.gov (United States)

    Nortunen, H.; Kaasalainen, M.

    2017-12-01

    Context. Many asteroid databases with lightcurve brightness measurements (e.g. WISE, Pan-STARRS1) contain enormous amounts of data for asteroid shape and spin modelling. While lightcurve inversion is not plausible for individual targets with scarce data, it is possible for large populations with thousands of asteroids, where the distributions of the shape and spin characteristics of the populations are obtainable. Aims: We aim to introduce a software implementation of a method that computes the joint shape elongation p and spin latitude β distributions for a population, with the brightness observations given in an asteroid database. Other main goals are to include a method for performing validity checks of the algorithm, and a tool for a statistical comparison of populations. Methods: The LEADER software package read the brightness measurement data for a user-defined subpopulation from a given database. The observations were used to compute estimates of the brightness variations of the population members. A cumulative distribution function (CDF) was constructed of these estimates. A superposition of known analytical basis functions yielded this CDF as a function of the (shape, spin) distribution. The joint distribution can be reconstructed by solving a linear constrained inverse problem. To test the validity of the method, the algorithm can be run with synthetic asteroid models, where the shape and spin characteristics are known, and by using the geometries taken from the examined database. Results: LEADER is a fast and robust software package for solving shape and spin distributions for large populations. There are major differences in the quality and coverage of measurements depending on the database used, so synthetic simulations are always necessary before a database can be reliably used. We show examples of differences in the results when switching to another database.
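
    The linear constrained inverse step can be illustrated with a non-negative least-squares toy problem (the Beta-CDF basis functions and bin weights below are stand-ins, not LEADER's actual basis):

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import beta as beta_dist

# Hypothetical basis: CDF of the brightness-variation statistic for each
# (elongation, spin) bin, here stand-in Beta CDFs with different shapes.
eta = np.linspace(1e-3, 1.0, 60)                 # variation statistic grid
params = [(2, 8), (4, 6), (6, 4), (8, 2)]        # one pair per population bin
A = np.column_stack([beta_dist.cdf(eta, a, b) for a, b in params])

w_true = np.array([0.1, 0.5, 0.3, 0.1])          # true bin occupation
cdf_obs = A @ w_true + 0.005 * np.random.default_rng(1).standard_normal(len(eta))

w_hat, _ = nnls(A, cdf_obs)                      # non-negativity constraint
w_hat /= w_hat.sum()                             # normalise to a distribution
print(np.round(w_hat, 2))                        # close to w_true
```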

  7. Mechanisms of distribution and targeting of neuronal ion channels.

    Science.gov (United States)

    Thayer, Desiree A; Jan, Lily Y

    2010-09-01

    The discovery and development of pharmaceutical drugs targeting ion channels is important for treating a variety of medical conditions and diseases. Ion channels are expressed ubiquitously throughout the body, and are involved in many basic physiological processes. Neuronal ion channels are particularly appealing drug targets, and recent advances in screening ion channel function using optical-based and electrophysiological technologies have improved drug development in this field. Moreover, methods for the discovery of peptide-based neurotoxins and other natural products have proven useful in the pharmacological assessment of ion channel structure and function, while also contributing to the identification of lead molecules for drug development.

  8. A Novel Target-Height Estimation Approach Using Radar-Wave Multipath Propagation for Automotive Applications

    Science.gov (United States)

    Laribi, Amir; Hahn, Markus; Dickmann, Jürgen; Waldschmidt, Christian

    2017-09-01

    This paper introduces a novel target height estimation approach using a Frequency Modulation Continuous Wave (FMCW) automotive radar. The presented algorithm takes advantage of radar wave multipath propagation to measure the height of objects in the vehicle surroundings. A multipath propagation model is presented first, then a target height is formulated using geometry, based on the presented propagation model. It is then shown from Sensor-Target geometry that height estimation of targets is highly dependent on the radar range resolution, target range and target height. The high resolution algorithm RELAX is discussed and applied to collected raw data to enhance the radar range resolution capability. This enables a more accurate height estimation especially for low targets. Finally, the results of a measurement campaign using corner reflectors at different heights are discussed to show that target heights can be very accurately resolved by the proposed algorithm and that for low targets an average mean height estimation error of 0.03 m has been achieved by the proposed height finding algorithm.

  9. A Novel Target-Height Estimation Approach Using Radar-Wave Multipath Propagation for Automotive Applications

    Directory of Open Access Journals (Sweden)

    A. Laribi

    2017-09-01

    This paper introduces a novel target height estimation approach using a Frequency Modulation Continuous Wave (FMCW) automotive radar. The presented algorithm takes advantage of radar wave multipath propagation to measure the height of objects in the vehicle surroundings. A multipath propagation model is presented first, then a target height is formulated using geometry, based on the presented propagation model. It is then shown from Sensor-Target geometry that height estimation of targets is highly dependent on the radar range resolution, target range and target height. The high resolution algorithm RELAX is discussed and applied to collected raw data to enhance the radar range resolution capability. This enables a more accurate height estimation especially for low targets. Finally, the results of a measurement campaign using corner reflectors at different heights are discussed to show that target heights can be very accurately resolved by the proposed algorithm and that for low targets an average mean height estimation error of 0.03 m has been achieved by the proposed height finding algorithm.
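
    Under a flat-ground image model, the height formula reduces to a one-line computation from the resolved direct and reflected path lengths (a simplification of the paper's propagation model; the geometry below assumes a point target and a known sensor height):

```python
import numpy as np

def target_height(r_direct, r_multipath, h_radar):
    """Flat-ground image model:
       r_direct^2    = d^2 + (h_t - h_radar)^2
       r_multipath^2 = d^2 + (h_t + h_radar)^2
    Subtracting eliminates the ground distance d:
       h_t = (r_multipath^2 - r_direct^2) / (4 h_radar)."""
    return (r_multipath**2 - r_direct**2) / (4.0 * h_radar)

# Round trip: synthesize path lengths for a 0.8 m target, 0.5 m sensor height
h_r, h_t, dist = 0.5, 0.8, 30.0
r_dd = np.hypot(dist, h_t - h_r)          # direct path
r_ind = np.hypot(dist, h_t + h_r)         # ground-reflected (image) path
print(target_height(r_dd, r_ind, h_r))    # -> 0.8
```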

  10. A comparison of parameter estimations of the Poisson-generalised Lindley distribution

    Science.gov (United States)

    Denthet, Sunthree

    2017-11-01

    In this paper, the Poisson-generalised Lindley distribution is presented. It is obtained by mixing the Poisson distribution with a generalised Lindley distribution, and it is an alternative distribution for count data with overdispersion. We apply two methods of parameter estimation, maximum likelihood estimation and the method of moments, to estimate the parameters. A Monte Carlo simulation study is conducted to compare the efficiency of the two estimation methods on the basis of root mean squared error. The study shows that the method of moments is as efficient as maximum likelihood estimation when the model is decreasing or bimodal. Finally, the proposed distribution is applied to real data sets; the p-values of the discrete Anderson-Darling test show that maximum likelihood estimation can fit the data sets efficiently.
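
    The Monte Carlo comparison protocol (simulate, estimate by both methods, compare RMSE) can be sketched as follows; since the record does not reproduce the Poisson-generalised Lindley pmf, a negative binomial stand-in is used purely to illustrate the harness:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
r_true, p_true = 3.0, 0.4                    # stand-in count model: NB(r, p)
n, reps = 150, 200

def mom(x):                                  # method of moments
    m, v = x.mean(), x.var(ddof=1)
    p = min(max(m / v, 1e-6), 1 - 1e-6)
    return m * p / (1 - p), p                # (r, p)

def mle(x):                                  # numerical maximum likelihood
    nll = lambda th: -stats.nbinom.logpmf(
        x, np.exp(th[0]), 1.0 / (1.0 + np.exp(-th[1]))).sum()
    th = optimize.minimize(nll, x0=[np.log(2.0), 0.0],
                           method="Nelder-Mead").x
    return np.exp(th[0]), 1.0 / (1.0 + np.exp(-th[1]))

for name, f in {"MoM": mom, "MLE": mle}.items():
    err = [f(stats.nbinom.rvs(r_true, p_true, size=n, random_state=rng))[0]
           - r_true for _ in range(reps)]
    print(name, "RMSE(r) =", round(float(np.sqrt(np.mean(np.square(err)))), 3))
```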

  11. An Iterated Local Search Algorithm for Estimating the Parameters of the Gamma/Gompertz Distribution

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2014-01-01

    Extensive research has been devoted to the estimation of the parameters of frequently used distributions. However, little attention has been paid to estimating the parameters of the Gamma/Gompertz distribution, which is often encountered in the customer lifetime and mortality risk literature. This distribution has three parameters. In this paper, we propose an algorithm for estimating the parameters of the Gamma/Gompertz distribution based on the maximum likelihood estimation method, with iterated local search (ILS) used to maximize the likelihood function. Finally, the proposed approach is computationally tested using some numerical examples and the results are analyzed.
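
    A compact ILS sketch for this kind of likelihood maximization; the Gamma/Gompertz density used below, f(x) = b s e^{bx} beta^s / (beta - 1 + e^{bx})^{s+1}, and all tuning constants are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from scipy.optimize import minimize

def gg_negloglik(theta, x):
    b, s, beta = np.exp(theta)                # log-parameters keep positivity
    return -np.sum(np.log(b) + np.log(s) + b * x + s * np.log(beta)
                   - (s + 1) * np.log(beta - 1 + np.exp(b * x)))

def ils_fit(x, n_iter=30, step=0.5, seed=0):
    """Iterated local search: perturb the incumbent, re-run a local
    optimizer, keep the candidate only if it improves the likelihood."""
    rng = np.random.default_rng(seed)
    best = minimize(gg_negloglik, np.zeros(3), args=(x,), method="Nelder-Mead")
    for _ in range(n_iter):
        start = best.x + step * rng.standard_normal(3)
        cand = minimize(gg_negloglik, start, args=(x,), method="Nelder-Mead")
        if cand.fun < best.fun:
            best = cand
    return np.exp(best.x)                     # (b, s, beta)

# Inverse-CDF sampling from F(x) = 1 - beta^s / (beta - 1 + e^(b x))^s
rng = np.random.default_rng(1)
b, s, beta = 0.3, 1.5, 2.0
u = rng.uniform(size=400)
x = np.log(beta * ((1 - u) ** (-1 / s) - 1) + 1) / b
print(np.round(ils_fit(x), 2))                # roughly recovers (b, s, beta)
```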

  12. Distribution functions to estimate radionuclide solid-liquid distribution coefficients in soils: the case of Cs

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)

    2014-07-01

    In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (K{sub d}) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The K{sub d} values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of K{sub d} ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The K{sub d} best estimates were defined as the calculated GM values, assuming that K{sub d} values were always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the K{sub d} parameter, a suitable continuous function which contains the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best explain the distribution among the K{sub d} values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not yet been proposed for radionuclide K{sub d} in soils. Therefore, the aim of this work is to create CDFs for

  13. The Most Likely Distribution of Target Echo Amplitudes

    NARCIS (Netherlands)

    Moll, C.A.M. van; Ainslie, M.A.; Janmaat, J.

    2007-01-01

    Whether for sonar performance modelling or for performance optimisation, the detection and false alarm probabilities of a sonar system must be determined. An accurate calculation of both probabilities requires knowledge of the distributions of signal and noise. The scope of this article is limited

  14. Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); D. Thierens (Dirk); D. Thierens (Dirk)

    2007-01-01

    Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that, when maximum-likelihood estimates are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we

  15. Risk of human helminthiases: geospatial distribution and targeted control.

    Science.gov (United States)

    Yu, Weiwei; Ross, Allen G; Olveda, Remigio M; Harn, Donald A; Li, Yuesheng; Chy, Delia; Williams, Gail M

    2017-02-01

    We conducted a cross-sectional survey in 2012 among 22 rural barangays in Northern Samar, the Philippines, in order to determine the prevalence of single- and multiple-species helminth infections, their geospatial distribution and underlying risk factors. A total of 10,434 individuals who had completed both a medical questionnaire and a stool examination were included in the analysis. Barangay-specific prevalence rates were displayed in ArcMap. The prevalence of Trichuris trichiura infection was found to be the highest at 62.4%, followed by Ascaris lumbricoides, hookworm and S. japonicum, with prevalence rates of 40.2%, 31.32% and 27.1%, respectively. Overall, 52.7% of people were infected with at least two parasites and 4.8% with all four parasites. Males aged 10-19 years were the most vulnerable to co-infection. Students, fishermen, farmers and housewives were the occupations most vulnerable to co-infection with A. lumbricoides and T. trichiura. Considerable heterogeneity in the spatial distribution was observed for the different parasite species. There was a considerably higher risk of A. lumbricoides and T. trichiura co-infection in villages with no schistosomiasis infection. Knowledge of the geospatial distribution of multi-parasitism will guide future integrated strategies leading to elimination. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  16. Distributed fusion estimation for sensor networks with communication constraints

    CERN Document Server

    Zhang, Wen-An; Song, Haiyu; Yu, Li

    2016-01-01

    This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties are presented, such as delays, packet losses, and asynchronous information... All the results are presented as algorithms, which are convenient for practical applications.

  17. Deepwater Horizon - Estimating surface oil volume distribution in real time

    Science.gov (United States)

    Lehr, B.; Simecek-Beatty, D.; Leifer, I.

    2011-12-01

    Spill responders to the Deepwater Horizon (DWH) oil spill required both the relative spatial distribution and the total volume of the surface oil. The former was needed on a daily basis to plan and direct local surface recovery and treatment operations. The latter was needed less frequently to provide information for strategic response planning. Unfortunately, the standard spill observation methods were inadequate for an oil spill this size, and new, experimental methods were not ready to meet the operational demands of near real-time results. Traditional surface oil estimation tools for large spills include satellite-based sensors to define the spatial extent (but not thickness) of the oil, complemented with trained observers in small aircraft, sometimes supplemented by active or passive remote sensing equipment, to determine surface percent coverage of the 'thick' part of the slick, where the vast majority of the surface oil exists. These tools were also applied to DWH in the early days of the spill, but the sheer size of the spill prevented synoptic information on the surface slick through the use of small aircraft. Also, satellite images of the spill, while large in number, varied considerably in image quality, requiring skilled interpretation to identify oil and eliminate false positives. Qualified staff to perform this task were soon in short supply. However, large spills are often events that overcome organizational inertia toward the use of new technology. Two prime examples in DWH were the application of hyper-spectral scans from a high-altitude aircraft and more traditional fixed-wing aircraft using multi-spectral scans processed by a neural network to determine, respectively, absolute or relative oil thickness. But with new technology come new challenges. The hyper-spectral instrument required special viewing conditions that were not present on a daily basis and analysis infrastructure to process the data that was not available at the command

  18. Identifying a common distribution for flood estimation in ungauged ...

    African Journals Online (AJOL)

    This paper attempts to identify a possible common statistical distribution to model the annual maximum floods observed at various rivers and streams of Botswana, using goodness-of-fit indices based on K-S statistics and L-moment ratios. Results from the two approaches suggest that the Log-Normal distribution adequately ...

  19. Fuzzy modeling, maximum likelihood estimation, and Kalman filtering for target tracking in NLOS scenarios

    Science.gov (United States)

    Yan, Jun; Yu, Kegen; Wu, Lenan

    2014-12-01

    To mitigate the non-line-of-sight (NLOS) effect, a three-step positioning approach is proposed in this article for target tracking. The possibility of each distance measurement under line-of-sight condition is first obtained by applying the truncated triangular probability-possibility transformation associated with fuzzy modeling. Based on the calculated possibilities, the measurements are utilized to obtain intermediate position estimates using the maximum likelihood estimation (MLE), according to identified measurement condition. These intermediate position estimates are then filtered using a linear Kalman filter (KF) to produce the final target position estimates. The target motion information and statistical characteristics of the MLE results are employed in updating the KF parameters. The KF position prediction is exploited for MLE parameter initialization and distance measurement selection. Simulation results demonstrate that the proposed approach outperforms the existing algorithms in the presence of unknown NLOS propagation conditions and achieves a performance close to that when propagation conditions are perfectly known.
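
    The third step, a linear Kalman filter over the intermediate MLE position fixes, can be sketched for a 1-D constant-velocity model (the fuzzy-possibility and MLE steps are omitted, and all noise parameters are illustrative):

```python
import numpy as np

def cv_kalman(zs, dt=1.0, q=0.1, r=4.0):
    """Linear constant-velocity Kalman filter over 1-D position fixes."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state: [position, velocity]
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P                  # update
        out.append(x[0])
    return np.array(out)

# Noisy intermediate position fixes along a straight 1-D track
t = np.arange(50)
z = 2.0 * t + np.random.default_rng(3).normal(0.0, 2.0, size=50)
print(np.round(cv_kalman(z)[-5:], 1))                # filtered track near 2*t
```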

  20. SAR target recognition and posture estimation using spatial pyramid pooling within CNN

    Science.gov (United States)

    Peng, Lijiang; Liu, Xiaohua; Liu, Ming; Dong, Liquan; Hui, Mei; Zhao, Yuejin

    2018-01-01

    Many convolutional neural network (CNN) architectures have been proposed to strengthen performance on synthetic aperture radar automatic target recognition (SAR-ATR) and have obtained state-of-the-art results on target classification on the MSTAR database, but few methods address the estimation of the depression angle and azimuth angle of targets. To better learn hierarchical feature representations for both the 10-class target classification task and target posture estimation tasks, we propose a new CNN architecture with spatial pyramid pooling (SPP), which builds a hierarchy of feature maps by dividing the convolved feature maps from finer to coarser levels to aggregate local features of SAR images. Experimental results on the MSTAR database show that the proposed architecture achieves a recognition accuracy of 99.57% on the 10-class target classification task, matching the most current state-of-the-art methods, and also performs well on target posture estimation tasks involving depression angle and azimuth angle variation. Moreover, the results point to the promise of deep learning for SAR target posture description.
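
    The SPP idea itself is compact; a hedged PyTorch sketch (the pooling levels are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(x, levels=(1, 2, 4)):
    """Pool a conv feature map (N, C, H, W) at several grid resolutions and
    concatenate, giving a fixed-length descriptor regardless of H and W."""
    return torch.cat(
        [F.adaptive_max_pool2d(x, l).flatten(start_dim=1) for l in levels],
        dim=1)

feat = torch.randn(8, 32, 17, 23)          # batch of SAR conv feature maps
desc = spatial_pyramid_pool(feat)
print(desc.shape)                           # (8, 32*(1+4+16)) = (8, 672)
```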

  1. Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

    Science.gov (United States)

    Sass, D. A.; Schmitt, T. A.; Walker, C. M.

    2008-01-01

    Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…

  2. Measurements of activation reaction rate distributions on a mercury target bombarded with high-energy protons at AGS

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi; Kasugai, Yoshimi; Nakashima, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ino, Takashi; Kawai, Masayoshi [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Jerde, Eric; Glasgow, David [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2000-02-01

    A neutronics experiment was carried out using a thick mercury target at the Alternating Gradient Synchrotron (AGS) facility of Brookhaven National Laboratory in the framework of the ASTE (AGS Spallation Target Experiment) collaboration. Reaction rate distributions around the target were measured by the activation technique at incident proton energies of 1.6, 12 and 24 GeV. Various activation detectors, such as the {sup 115}In(n,n'){sup 115m}In, {sup 93}Nb(n,2n){sup 92m}Nb, and {sup 209}Bi(n,xn) reactions with threshold energies ranging from 0.3 to 70.5 MeV, were employed to obtain the reaction rate data for estimating the spallation source neutron characteristics of the mercury target. It was found from the measured {sup 115}In(n,n'){sup 115m}In reaction rate distribution that the number of leakage neutrons reaches its maximum at about 11 cm from the top of the hemisphere of the mercury target for 1.6-GeV incident protons, and that the peak position moves forward as the incident proton energy increases. Similar results were observed in the reaction rate distributions of the other activation detectors. The experimental procedures and a full set of experimental data in numerical form are summarized in this report. (author)

  3. Design wave estimation considering directional distribution of waves

    Digital Repository Service at National Institute of Oceanography (India)

    SanilKumar, V.; Deo, M.C.

    The design of coastal and offshore structures requires a design significant wave height with a certain return period. The commonly followed procedure to estimate the design wave height does not give any consideration to the directions of waves...

  4. Velocity estimation of high-speed target for step frequency radar

    Science.gov (United States)

    Tian, Ruiqi; Lin, Caiyong; Bao, Qinglong; Chen, Zengping

    2016-04-01

    To precisely estimate the velocity of high-speed targets for step frequency (SF) radar, a positive-positive-negative SF waveform consisting of two continuous positive SF pulse trains and a negative one is designed, and a velocity estimation method is proposed based on two-dimensional time-domain cross correlation (2-D TDCC). Making full use of the characteristics of the designed waveform, a coarse velocity estimate is obtained by 2-D TDCC of the positive-positive SF pulse trains, and the Radon transform is then applied to resolve velocity ambiguity for high-speed targets. After velocity compensation of the positive-negative SF pulse trains, the velocity residual is estimated precisely by 2-D TDCC. Simulation results show that the proposed method performs well in terms of estimation accuracy, stability, computational complexity, and data rate.

  5. Hydroacoustic Estimates of Fish Density Distributions in Cougar Reservoir, 2011

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R.; Zimmerman, Shon A.; Hennen, Matthew J.; Batten, George W.; Mitchell, T. D.

    2012-09-01

    Day and night mobile hydroacoustic surveys were conducted once each month from April through December 2011 to quantify the horizontal and vertical distributions of fish throughout Cougar Reservoir, Lane County, Oregon.

  6. Low Complexity Moving Target Parameter Estimation for MIMO Radar using 2D-FFT

    KAUST Repository

    Jardak, Seifallah

    2017-06-16

    In multiple-input multiple-output radar, to localize a target and estimate its reflection coefficient, a given cost function is usually optimized over a grid of points. The performance of such algorithms is directly affected by the grid resolution: increasing the number of grid points enhances the resolution of the estimator but also increases its computational complexity exponentially. In this work, two reduced-complexity algorithms are derived, based on Capon and on amplitude and phase estimation (APES), to estimate the reflection coefficient, angular location, and Doppler shift of multiple moving targets. By exploiting the structure of the terms, the cost function is brought into a form that allows us to apply the two-dimensional fast Fourier transform (2D-FFT) and reduce the computational complexity of estimation. Using a low-resolution 2D-FFT, the proposed algorithm identifies sub-optimal estimates and feeds them as initial points to the derived Newton gradient algorithm. In contrast to grid-based search algorithms, the proposed algorithm can optimally estimate on- and off-grid targets with very low computational complexity. A new APES cost function with better estimation performance is also discussed. Generalized expressions of the Cramér-Rao lower bound are derived to assess the performance of the proposed algorithm.

  7. Estimation of percentage depth dose distributions for therapeutic machines

    Science.gov (United States)

    Pal, Surajit; Muthukrishnan, G.; Ravishankar, R.; Sharma, R. P.; Ghose, A. M.

    2002-12-01

    A mathematical formulation has been carried out to predict megavoltage photon depth dose distributions for different field sizes inside a water phantom. From the studies it is found that it is possible to predict depth dose distributions for different energies and different field sizes based on measurements carried out for a single energy and a single field size. The method has also been successfully applied to 60Co γ-rays.

  8. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid

    2017-05-12

    The estimation of the angular location and range of a target is a joint optimization problem. In this work, to estimate these parameters by meticulously evaluating the phase of the received samples, low-complexity sequential and joint estimation algorithms are proposed. We use a single-input multiple-output (SIMO) system and transmit a frequency-modulated continuous-wave signal. It is shown that, by ignoring very small terms in the phase of the received samples, the fast Fourier transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. The sequential estimation algorithm uses the FFT and requires only one received snapshot to estimate the angular location. The joint estimation algorithm uses the two-dimensional FFT to estimate the angular location and range of the target. Simulation results show that the joint estimation algorithm yields a better mean squared error (MSE) for the estimation of the angular location and a much lower run time compared to the conventional MUltiple SIgnal Classification (MUSIC) algorithm.
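
    The sequential angle step can be sketched for a hypothetical half-wavelength uniform linear array: the phase progression across elements is a single spatial tone, so a zero-padded FFT peak gives the angular location (a toy version of the idea, not the paper's full algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                   # array elements, half-wavelength apart
theta = np.deg2rad(17.0)                 # true angular location
n = np.arange(N)
snap = np.exp(1j * np.pi * n * np.sin(theta))          # one received snapshot
snap += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

K = 4096                                 # zero-padded FFT grid
spec = np.abs(np.fft.fft(snap, K))
f = np.fft.fftfreq(K)                    # cycles per element
sin_hat = 2.0 * f[np.argmax(spec)]       # phase step per element = pi*sin(theta)
print(np.rad2deg(np.arcsin(sin_hat)))    # close to 17 degrees
```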

  9. Molybdenum target specifications for cyclotron production of 99mTc based on patient dose estimates.

    Science.gov (United States)

    Hou, X; Tanguay, J; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2016-01-21

    In response to the recognized fragility of the reactor-produced (99)Mo supply, direct production of (99m)Tc via the (100)Mo(p,2n)(99m)Tc reaction using medical cyclotrons has been investigated. However, due to the presence of other molybdenum (Mo) isotopes in the target, other radioactive technetium (Tc) isotopes (impurities) will be produced in parallel with (99m)Tc. They will be incorporated into the labeled radiopharmaceuticals and result in increased patient dose. The isotopic composition of the target and the beam energy are the main factors that determine the production of impurities, and thus also the dose increase; both must therefore be considered when selecting targets for clinical (99m)Tc production. Although for any given Mo target the patient dose can be predicted from complicated calculations of the production yields of each Tc radioisotope, it would be very difficult to reverse these calculations to specify target composition based on dosimetry considerations. In this article, the relationship between patient dosimetry and Mo target composition is studied. A simple algorithm for dose estimation, based solely on knowledge of the target composition and beam energy, is described. Using this algorithm, the patient dose increase due to every Mo isotope that could be present in the target is estimated. Most importantly, a technique is proposed to determine Mo target composition thresholds that would meet any given dosimetry requirement.

  10. Depth-Dose and LET Distributions of Antiproton Beams in Various Target Materials

    DEFF Research Database (Denmark)

    Herrmann, Rochus; Olsen, Sune; Petersen, Jørgen B.B.

    ... depth-dose distributions and an increased biological effect in the target region from the production of secondary nuclear fragments with increased LET. Earlier it has been speculated how the target material will affect the depth-dose curve of antiprotons and secondary particle production. Intuitively, the presence ... unrestricted LET is calculated for all configurations. Finally, we investigate which concentrations of gadolinium and boron are needed in a water target in order to observe a significant change in the antiproton depth-dose distribution. Results indicate that there is no significant change ... Substituting a water target with the aforementioned ICRP tissues has only a minor effect on the depth-dose distribution and the LET distribution. However, ... therapy is unlikely to yield any clinical enhancement, since unrealistically high concentrations are required in order to observe a beneficial effect.

  11. Estimation of current density distribution under electrodes for external defibrillation

    Directory of Open Access Journals (Sweden)

    Papazov Sava P

    2002-12-01

    Background: Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient's chest to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators presents new, demanding requirements for the structure of electrodes. Methods and Results: Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element modeling, electrodes of various shapes and structures were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low-resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile; however, a number of small-size perforations may result in an acceptable current density distribution. Conclusion: The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of their number and size provides a reasonable compromise.

  12. Linear Estimation of Standard Deviation of Logistic Distribution ...

    African Journals Online (AJOL)

    The paper presents a theoretical method based on order statistics and a FORTRAN program for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramer-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...

  13. Estimating Functions of Distributions Defined over Spaces of Unknown Size

    Directory of Open Access Journals (Sweden)

    David H. Wolpert

    2013-10-01

    We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior's concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple "Irrelevance of Unseen Variables" (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With all hyperpriors (implicitly) used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations, like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
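
    For a fixed event-space size m and concentration c (the paper instead places the IUV hyperprior over both), the posterior expected entropy under a Dirichlet prior can be approximated by simple Monte Carlo:

```python
import numpy as np

def posterior_entropy(counts, c=1.0, n_draws=4000, seed=0):
    """Posterior mean and sd of Shannon entropy (nats) under a symmetric
    Dirichlet(c) prior: sample p ~ Dirichlet(counts + c), average H(p)."""
    rng = np.random.default_rng(seed)
    p = rng.dirichlet(np.asarray(counts) + c, size=n_draws)
    H = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=1)
    return H.mean(), H.std()

print(posterior_entropy([12, 7, 1, 0, 0]))   # entropy over a 5-event space
```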

  14. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    Science.gov (United States)

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  15. Estimation of snow cover distribution in Beas basin, Indian Himalaya ...

    Indian Academy of Sciences (India)

    The satellite-estimated snow or non-snow pixel information obtained using the proposed methodology was validated against the snow cover information collected at three observatory locations, and it was found that the algorithm classifies all the sample points correctly ...

  16. Asymptotically Distribution-Free (ADF) Interval Estimation of Coefficient Alpha

    Science.gov (United States)

    Maydeu-Olivares, Alberto; Coffman, Donna L.; Hartmann, Wolfgang M.

    2007-01-01

    The point estimate of sample coefficient alpha may provide a misleading impression of the reliability of the test score. Because sample coefficient alpha is consistently biased downward, it is more likely to yield a misleading impression of poor reliability. The magnitude of the bias is greatest precisely when the variability of sample alpha is…
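
    As a non-parametric point of comparison with the ADF interval (and not the article's method), a bootstrap percentile interval for coefficient alpha is easy to sketch:

```python
import numpy as np

def cronbach_alpha(X):
    """X: n_persons x k_items score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def alpha_ci(X, n_boot=2000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    boots = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [100 * (1 - level) / 2,
                                   100 * (1 + level) / 2])
    return cronbach_alpha(X), (lo, hi)

# Simulated 6-item test with a common factor
rng = np.random.default_rng(4)
f = rng.standard_normal((300, 1))
X = f + 0.8 * rng.standard_normal((300, 6))
print(alpha_ci(X))
```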

  17. A determination of parton distributions with faithful uncertainty estimation

    NARCIS (Netherlands)

    Ball, Richard D.; Debbio, Luigi Del; Forte, Stefano; Guffanti, Alberto; Latorre, Jose I.; Piccione, Andrea; Rojo, Juan; Ubiali, Maria

    2008-01-01

    We present the determination of a set of parton distributions of the nucleon, at next-to-leading order, from a global set of deep-inelastic scattering data: NNPDF1.0. The determination is based on a Monte Carlo approach, with neural networks used as unbiased interpolants. This method, previously

  18. Can anchovy age structure be estimated from length distribution ...

    African Journals Online (AJOL)

    The analysis provides a new time-series of proportions-at-age 1, together with associated standard errors, for input into assessments of the resource. The results also caution against the danger of scientists reading more information into data than is really there. Keywords: anchovy, effective sample size, length distribution, ...

  19. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  20. Learning Structure Illuminates Black Boxes: an Introduction into Estimation of Distribution Algorithms

    NARCIS (Netherlands)

    J. Grahl; S. Minner; P.A.N. Bosman (Peter); Z. Michalewicz; P. Siarry

    2008-01-01

    This chapter serves as an introduction to estimation of distribution algorithms (EDAs). Estimation of distribution algorithms are a new paradigm in evolutionary computation. They combine statistical learning with population-based search in order to automatically identify and exploit

  1. Recursive Estimation of π-Line Parameters for Electric Power Distribution Grids

    DEFF Research Database (Denmark)

    Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena

    2016-01-01

    Electrical models of power distribution grids are used in applications such as state estimation and Optimal Power Flow (OPF), the reliability of which depends on the accuracy of the model. This work presents an approach for estimating distribution line parameters from Remote Terminal Unit (RTU...
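
    The core parameter-estimation step can be sketched as a complex least-squares fit of the series impedance from RTU voltage/current phasors (a simplified short-line model that neglects the shunt branches of the full π equivalent; all numbers are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
Z_true = 0.4 + 0.9j                         # series line impedance (ohm)
I = rng.uniform(20, 80, 96) * np.exp(1j * rng.uniform(-0.3, 0.1, 96))
V_r = 230.0 * np.exp(1j * rng.uniform(-0.02, 0.02, 96))
V_s = V_r + Z_true * I
V_s += rng.standard_normal(96) * 0.2        # RTU measurement noise

# Least squares over the stacked snapshots: (V_s - V_r) = Z * I
dV = V_s - V_r
Z_hat = np.vdot(I, dV) / np.vdot(I, I)      # complex LS solution
print(np.round(Z_hat, 3))                   # close to 0.4+0.9j
```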

  2. On Robustness of the Normal-Theory Based Asymptotic Distributions of Three Reliability Coefficient Estimates.

    Science.gov (United States)

    Yuan, Ke-Hai; Bentler, Peter M.

    2002-01-01

    Examined the asymptotic distributions of three reliability coefficient estimates: (1) sample coefficient alpha; (2) reliability estimate of a composite score following factor analysis; and (3) maximal reliability of a linear combination of item scores after factor analysis. Findings show that normal theory based asymptotic distributions for these…

  3. Skew Generalized Extreme Value Distribution: Probability Weighted Moments Estimation and Application to Block Maxima Procedure

    OpenAIRE

    Ribereau, Pierre; Masiello, Esterina; Naveau, Philippe

    2014-01-01

    Following the work of Azzalini ([2] and [3]) on the skew normal distribution, we propose an extension of the Generalized Extreme Value (GEV) distribution, the SGEV. This new distribution allows for a better fit of maxima and can be interpreted as both the distribution of maxima when maxima are taken on dependent data and when maxima are taken over a random block size. We propose to estimate the parameters of the SGEV distribution via the Probability Weighted Moments meth...
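
    For the ordinary (non-skew) GEV, the probability-weighted-moments estimators of Hosking, Wallis and Wood (1985) give a closed-form baseline; a sketch, using Hosking's sign convention for the shape k, offered as background rather than as the SGEV method itself:

```python
import numpy as np
from math import gamma, log

def gev_pwm(x):
    """Hosking-Wallis-Wood (1985) PWM estimators for the GEV
    F(x) = exp(-(1 - k (x - mu)/sigma)^(1/k)); k -> 0 gives the Gumbel."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    c = (2 * b1 - b0) / (3 * b2 - b0) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    sigma = (2 * b1 - b0) * k / (gamma(1 + k) * (1 - 2 ** (-k)))
    mu = b0 + sigma * (gamma(1 + k) - 1) / k
    return mu, sigma, k

# Block maxima from a standard Gumbel (GEV with k near 0)
sample = -np.log(-np.log(np.random.default_rng(6).uniform(size=2000)))
print(np.round(gev_pwm(sample), 3))   # mu ~ 0, sigma ~ 1, k ~ 0
```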

  4. Estimating biomass, fishing mortality, and “total allowable discards” for surveyed non-target fish

    OpenAIRE

    Shephard, S.; Reid, D G; Gerritsen, H. D.; Farnsworth, K. D.

    2014-01-01

    Demersal fisheries targeting a few high-value species often catch and discard other “non-target” species. It is difficult to quantify the impact of this incidental mortality when population biomass of a non-target species is unknown. We calculate biomass for 14 demersal fish species in ICES Area VIIg (Celtic Sea) by applying species- and length-based catchability corrections to catch records from the Irish Groundfish Survey (IGFS). We then combine these biomass estimates with records of comme...

  5. Multiple Moving Targets Detection and Parameters Estimation in Strong Reverberation Environments

    Directory of Open Access Journals (Sweden)

    Ge Yu

    2016-01-01

    This paper considers the problem of detecting multiple moving targets and estimating their parameters (direction of arrival and range) in strong reverberation environments. As reverberation has a strong correlation with the target echo, the performance of target detection and parameter estimation is significantly degraded in practical underwater environments. In this paper, we utilize two uniform circular arrays to receive the plane waves of linear frequency modulation signals reflected from far-field targets. On the basis of the received signal, we build a variance matrix of multiple beams using modal decomposition, conventional beamforming, and the fractional Fourier transform (FrFT). We then propose a novel detection method and a parameter estimation method based on the constructed image. A significant feature of the proposed methods is that our design does not involve any a priori knowledge about the number of targets or the parameters of the marine environment. Finally, we demonstrate via numerical simulation examples that the detection probability and the accuracy of the estimated parameters of the proposed method are higher than those of existing methods in both low signal-to-reverberation-ratio and low signal-to-noise-ratio environments.

  6. Alternating Markov Chains for Distribution Estimation in the Presence of Errors

    CERN Document Server

    Farnoud, Farzad; Milenkovic, Olgica

    2012-01-01

    We consider a class of small-sample distribution estimators over noisy channels. Our estimators are designed for repetition channels, and rely on properties of the runs of the observed sequences. These runs are modeled via a special type of Markov chains, termed alternating Markov chains. We show that alternating chains have redundancy that scales sub-linearly with the lengths of the sequences, and describe how to use a distribution estimator for alternating chains for the purpose of distribution estimation over repetition channels.

  7. Entropy-Based Parameter Estimation for the Four-Parameter Exponential Gamma Distribution

    Directory of Open Access Journals (Sweden)

    Songbai Song

    2017-04-01

    Two methods based on the principle of maximum entropy (POME), the ordinary entropy method (ENT) and the parameter space expansion method (PSEM), are developed for estimating the parameters of a four-parameter exponential gamma distribution. Using six data sets for annual precipitation at the Weihe River basin in China, the PSEM was applied to estimate the parameters of the four-parameter exponential gamma distribution and was compared to the method of moments (MOM) and maximum likelihood estimation (MLE). It is shown that PSEM enables the four-parameter exponential gamma distribution to fit the data well and can further improve the estimation.

  8. Estimation of the Shape Parameter of Ged Distribution for a Small Sample Size

    Directory of Open Access Journals (Sweden)

    Purczyński Jan

    2014-06-01

    In this paper a new method of estimating the shape parameter of the generalized error distribution (GED), called the 'approximated moment method' (AMM), is proposed. The following estimators were considered: the one obtained through the maximum likelihood method (MLM), the approximated fast estimator (AFE), and the approximated moment method (AMM). The quality of each estimator was evaluated on the basis of the relative mean square error. Computer simulations were conducted using random number generators for the following shape parameters: s = 0.5, s = 1.0 (Laplace distribution), s = 2.0 (Gaussian distribution), and s = 3.0.
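
    Of the three estimators, only the MLM benchmark is reproducible from standard libraries; a small-sample Monte Carlo of its relative RMSE can be sketched with SciPy's gennorm (the GED), with the sample size and replication count chosen arbitrarily:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
for s_true in (0.5, 1.0, 2.0, 3.0):           # shapes studied in the paper
    sq_err = []
    for _ in range(100):                      # small-sample Monte Carlo
        x = stats.gennorm.rvs(s_true, size=50, random_state=rng)
        s_hat, _, _ = stats.gennorm.fit(x, floc=0)   # MLE, location fixed
        sq_err.append((s_hat - s_true) ** 2)
    print(f"s = {s_true}: relative RMSE = "
          f"{np.sqrt(np.mean(sq_err)) / s_true:.3f}")
```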

  9. Comparison of Estimation Techniques for the Four Parameter Beta Distribution.

    Science.gov (United States)

    1981-12-01

    ... that Fisher, the initial developer of the maximum likelihood method, "mathematically proved that the inherent variance of an ML estimator [of the ... Routines from the International Mathematical Statistics Library (IMSL) were widely used; the reader should refer to the IMSL manual (Ref 12) for specific information about these ... Journal of Statistical Computation and Simulation, 2: 253-258 (1978). 3. Conover, W. T. Practical Nonparametric Statistics (Second Edition). New York: John ...

  10. Marine biodiversity in the Caribbean: regional estimates and distribution patterns.

    Directory of Open Access Journals (Sweden)

    Patricia Miloslavich

    This paper provides an analysis of the distribution patterns of marine biodiversity and summarizes the major activities of the Census of Marine Life program in the Caribbean region. The coastal Caribbean region is a large marine ecosystem (LME) characterized by coral reefs, mangroves, and seagrasses, but including other environments, such as sandy beaches and rocky shores. These tropical ecosystems incorporate a high diversity of associated flora and fauna, and the nations that border the Caribbean collectively encompass a major global marine biodiversity hot spot. We analyze the state of knowledge of marine biodiversity based on the geographic distribution of georeferenced species records and regional taxonomic lists. A total of 12,046 marine species are reported in this paper for the Caribbean region. These include representatives from 31 animal phyla, two plant phyla, one group of Chromista, and three groups of Protoctista. Sampling effort has been greatest in shallow, nearshore waters, where there is relatively good coverage of species records; offshore and deep environments have been less studied. Additionally, we found that the currently accepted classification of marine ecoregions of the Caribbean did not apply for the benthic distributions of five relatively well known taxonomic groups. Coastal species richness tends to concentrate along the Antillean arc (Cuba to the southernmost Antilles) and the northern coast of South America (Venezuela-Colombia), while no pattern can be observed in the deep sea with the available data. Several factors make it impossible to determine the extent to which these distribution patterns accurately reflect the true situation for marine biodiversity in general: (1) highly localized concentrations of collecting effort and a lack of collecting in many areas and ecosystems, (2) high variability among collecting methods, (3) limited taxonomic expertise for many groups, and (4) differing levels of activity in the study

  11. Marine Biodiversity in the Caribbean: Regional Estimates and Distribution Patterns

    Science.gov (United States)

    Miloslavich, Patricia; Díaz, Juan Manuel; Klein, Eduardo; Alvarado, Juan José; Díaz, Cristina; Gobin, Judith; Escobar-Briones, Elva; Cruz-Motta, Juan José; Weil, Ernesto; Cortés, Jorge; Bastidas, Ana Carolina; Robertson, Ross; Zapata, Fernando; Martín, Alberto; Castillo, Julio; Kazandjian, Aniuska; Ortiz, Manuel

    2010-01-01

    This paper provides an analysis of the distribution patterns of marine biodiversity and summarizes the major activities of the Census of Marine Life program in the Caribbean region. The coastal Caribbean region is a large marine ecosystem (LME) characterized by coral reefs, mangroves, and seagrasses, but including other environments, such as sandy beaches and rocky shores. These tropical ecosystems incorporate a high diversity of associated flora and fauna, and the nations that border the Caribbean collectively encompass a major global marine biodiversity hot spot. We analyze the state of knowledge of marine biodiversity based on the geographic distribution of georeferenced species records and regional taxonomic lists. A total of 12,046 marine species are reported in this paper for the Caribbean region. These include representatives from 31 animal phyla, two plant phyla, one group of Chromista, and three groups of Protoctista. Sampling effort has been greatest in shallow, nearshore waters, where there is relatively good coverage of species records; offshore and deep environments have been less studied. Additionally, we found that the currently accepted classification of marine ecoregions of the Caribbean did not apply for the benthic distributions of five relatively well known taxonomic groups. Coastal species richness tends to concentrate along the Antillean arc (Cuba to the southernmost Antilles) and the northern coast of South America (Venezuela – Colombia), while no pattern can be observed in the deep sea with the available data. Several factors make it impossible to determine the extent to which these distribution patterns accurately reflect the true situation for marine biodiversity in general: (1) highly localized concentrations of collecting effort and a lack of collecting in many areas and ecosystems, (2) high variability among collecting methods, (3) limited taxonomic expertise for many groups, and (4) differing levels of activity in the study of

  13. Estimation of the location parameter of distributions with known coefficient of variation by record values

    Directory of Open Access Journals (Sweden)

    N. K. Sajeevkumar

    2014-09-01

    In this article, we derive the best linear unbiased estimator (BLUE) of the location parameter of certain distributions with known coefficient of variation from record values. Efficiency comparisons are also made between the proposed estimator and some of the usual estimators. Finally, we give real-life data to illustrate the utility of the results developed in this article.

  14. Quantification Model for Estimating Temperature Field Distributions of Apple Fruit

    Science.gov (United States)

    Zhang, Min; Yang, Le; Zhao, Huizhong; Zhang, Leijie; Zhong, Zhiyou; Liu, Yanling; Chen, Jianhua

    A quantification model of transient heat conduction was provided to simulate the apple fruit temperature distribution in the cooling process. The model was based on the energy variation at different points of the apple fruit, taking into account the heat exchange of a representative elemental volume, metabolic heat, and external heat. The following conclusions could be drawn. First, the quantification model satisfactorily describes the tendency of the apple fruit temperature distribution in the cooling process. Second, there was an obvious difference between the apple fruit temperature and the environment temperature: compared with the change in environment temperature, the temperature of the fruit body showed a long hysteresis, i.e., the fruit body temperature changed significantly only some time after the environment temperature dropped, and the change then became slower and slower, which can explain the time-delay phenomenon observed in biological tissue. Third, the temperature differences between layers increased gradually from the centre to the surface of the apple fruit, with the minimum differences close to the centre and the maximum differences close to the surface. Finally, the temperature of every part of the apple fruit body tended to become consistent and approach the environment temperature during cooling, a behaviour related to the metabolic heat of the plant body.

  15. Order Quantity Distributions: Estimating an Adequate Aggregation Horizon

    Directory of Open Access Journals (Sweden)

    Eriksen Poul Svante

    2016-09-01

    In this paper, the demand faced by a company in the form of customer orders is investigated from both an explorative numerical and an analytical perspective. The aim of the research is to establish the behavior of customer orders in first-come-first-serve (FCFS) systems and the impact of order quantity variation on the planning environment. A discussion of assumptions regarding demand from various planning and control perspectives underlines that most planning methods are based on the assumption that demand in the form of customer orders is independently and identically distributed and stems from symmetrical distributions. To investigate and illustrate the need to aggregate demand to live up to these assumptions, a simple methodological framework for investigating the validity of the assumptions and for analyzing the behavior of orders is developed. The paper also presents an analytical approach to identify the aggregation horizon needed to achieve a stable demand. Furthermore, a case-study application of the presented framework is presented and discussed.
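
    The aggregation-horizon idea can be sketched by summing demand over growing non-overlapping windows until the aggregate looks roughly symmetric (the lumpy-demand generator and the skewness tolerance are illustrative assumptions, not the paper's criterion):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Lumpy daily demand: frequent zeros, occasionally large order quantities
demand = rng.poisson(1.0, 2000) * rng.lognormal(3.0, 0.5, 2000)

def aggregation_horizon(x, max_h=60, skew_tol=0.5):
    """Smallest horizon h (periods) whose non-overlapping h-sums look
    roughly symmetric (|skewness| below skew_tol)."""
    for h in range(1, max_h + 1):
        sums = x[: len(x) // h * h].reshape(-1, h).sum(axis=1)
        if abs(stats.skew(sums)) < skew_tol:
            return h, float(stats.skew(sums))
    return max_h, None

print(aggregation_horizon(demand))
```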

  16. Real-time measurements and their effects on state estimation of distribution power system

    DEFF Research Database (Denmark)

    Han, Xue; You, Shi; Thordarson, Fannar

    2013-01-01

    This paper analyzes the potential value of using different real-time metering and measuring instruments in low-voltage distribution networks for state estimation. An algorithm is presented to evaluate different combinations of metering data using a tailored state estimator, followed by a case study based on the proposed algorithm. A real distribution grid feeder with different types of meters, installed either in the cabinets or at the customer side, is selected for simulation and analysis. Standard load templates are used to initialize the state estimation. The deviations between the estimated values (voltage and injected power) and the measurements are used to evaluate the accuracy of the estimated grid states. Finally, some suggestions are provided for distribution grid operators on placing real-time meters in the distribution grid.

  17. Studies on Properties and Estimation Problems for Modified Extension of Exponential Distribution

    Science.gov (United States)

    El-Damcese, M. A.; Dina., A.

    2015-09-01

    The present paper considers a modified extension of the exponential distribution with three parameters. We study the main properties of this new distribution, with special emphasis on its median, mode and moment functions and on characteristics related to reliability studies. For the modified extension of the exponential distribution (MEXED) we obtain the Bayes estimators of the scale and shape parameters using Lindley's approximation (L-approximation) under the squared error loss function. Because this approximation technique cannot provide interval estimates of the parameters, we also propose a Gibbs sampling method to generate samples from the posterior distribution. On the basis of the generated posterior sample we compute the Bayes estimates of the unknown parameters and construct 95% highest posterior density credible intervals. A Monte Carlo simulation study is carried out to compare the performance of the Bayes estimators with the corresponding classical estimators in terms of their simulated risk. A real data set is considered for illustrative purposes.
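
    As a small illustration of the interval-estimation step, the following sketch computes a 95% highest posterior density (HPD) credible interval from a vector of posterior draws; a gamma sample stands in for the output of the paper's Gibbs sampler for MEXED:

        import numpy as np

        def hpd_interval(draws, mass=0.95):
            """Shortest interval containing `mass` of the posterior draws."""
            x = np.sort(np.asarray(draws))
            n = len(x)
            k = int(np.floor(mass * n))
            widths = x[k:] - x[:n - k]
            i = np.argmin(widths)
            return x[i], x[i + k]

        rng = np.random.default_rng(0)
        draws = rng.gamma(shape=4.0, scale=0.5, size=20000)  # stand-in posterior sample
        lo, hi = hpd_interval(draws)
        print(f"95% HPD interval: ({lo:.3f}, {hi:.3f})")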

  18. Drug combinatorics and side effect estimation on the signed human drug-target network.

    Science.gov (United States)

    Torres, Núria Ballber; Altafini, Claudio

    2016-08-15

    The mode of action of a drug on its targets can often be classified as positive (activator, potentiator, agonist, etc.) or negative (inhibitor, blocker, antagonist, etc.). The signed edges of a drug-target network can be used to investigate the combined mechanisms of action of multiple drugs on the ensemble of common targets. In this paper it is shown that for the signed human drug-target network the majority of drug pairs tend to have synergistic effects on their common targets, i.e., drug pairs tend to have modes of action with the same sign on most of the shared targets, especially for the principal pharmacological targets of a drug. Methods are proposed to compute this synergism, as well as to estimate the influence of one drug on the side effects of another. Enriching a drug-target network with functional information such as the sign of the interactions makes it possible to explore systematically a series of network properties of key importance in the context of computational drug combinatorics.
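
    A minimal sketch of the sign-concordance idea on a toy signed drug-target network (all drug and target names are invented; edge signs are +1 for activation-like and -1 for inhibition-like actions):

        # Hypothetical signed drug-target network as a dict of dicts.
        drug_targets = {
            "drugA": {"T1": +1, "T2": -1, "T3": +1},
            "drugB": {"T1": +1, "T2": +1},
            "drugC": {"T2": -1, "T3": -1},
        }

        def synergy(d1, d2, net):
            """Fraction of shared targets on which two drugs act with the same sign."""
            shared = net[d1].keys() & net[d2].keys()
            if not shared:
                return None
            same = sum(net[d1][t] == net[d2][t] for t in shared)
            return same / len(shared)

        for pair in [("drugA", "drugB"), ("drugA", "drugC")]:
            print(pair, "->", synergy(*pair, drug_targets))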

  19. Iterative Diffusion-Based Distributed Cubature Gaussian Mixture Filter for Multisensor Estimation.

    Science.gov (United States)

    Jia, Bin; Sun, Tao; Xin, Ming

    2016-10-20

    In this paper, a distributed cubature Gaussian mixture filter (DCGMF) based on an iterative diffusion strategy (DCGMF-ID) is proposed for multisensor estimation and information fusion. The uncertainties are represented as Gaussian mixtures at each sensor node. A high-degree cubature Kalman filter provides accurate estimation of each Gaussian mixture component. An iterative diffusion scheme is utilized to fuse the mean and covariance of each Gaussian component obtained from each sensor node. The DCGMF-ID extends the conventional diffusion-based fusion strategy by using multiple iterative information exchanges among neighboring sensor nodes. The convergence property of the iterative diffusion is analyzed. In addition, it is shown that the convergence of the iterative diffusion can be interpreted from the information-theoretic perspective as minimization of the Kullback-Leibler divergence. The performance of the DCGMF-ID is compared with the DCGMF based on the average consensus (DCGMF-AC) and the DCGMF based on the iterative covariance intersection (DCGMF-ICI) via a maneuvering target-tracking problem using multiple sensors. The simulation results show that the DCGMF-ID has better performance than the DCGMF based on noniterative diffusion, which validates the benefit of iterative information exchanges. In addition, the DCGMF-ID outperforms the DCGMF-ICI and DCGMF-AC when the number of iterations is limited.
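
    The heart of the diffusion step, repeated convex re-weighting of neighbors' estimates, can be sketched in a few lines. Here each node holds a scalar mean and information value for one Gaussian component and exchanges them over a hypothetical four-node ring with Metropolis weights; the high-degree cubature update and the mixture management of the paper are omitted:

        import numpy as np

        # Hypothetical 4-node ring network with Metropolis weights.
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]])
        deg = A.sum(axis=1)
        W = np.zeros((4, 4))
        for i in range(4):
            for j in range(4):
                if A[i, j]:
                    W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
            W[i, i] = 1.0 - W[i].sum()

        rng = np.random.default_rng(1)
        means = rng.normal(0.0, 1.0, size=4)     # local component means
        infos = rng.uniform(0.5, 2.0, size=4)    # local information (1/variance)

        for _ in range(20):                      # iterative information exchanges
            means, infos = W @ (infos * means) / (W @ infos), W @ infos

        print("fused mean per node:", np.round(means, 4))  # nearly equal after fusion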

  20. Iterative Diffusion-Based Distributed Cubature Gaussian Mixture Filter for Multisensor Estimation

    Directory of Open Access Journals (Sweden)

    Bin Jia

    2016-10-01

    In this paper, a distributed cubature Gaussian mixture filter (DCGMF) based on an iterative diffusion strategy (DCGMF-ID) is proposed for multisensor estimation and information fusion. The uncertainties are represented as Gaussian mixtures at each sensor node. A high-degree cubature Kalman filter provides accurate estimation of each Gaussian mixture component. An iterative diffusion scheme is utilized to fuse the mean and covariance of each Gaussian component obtained from each sensor node. The DCGMF-ID extends the conventional diffusion-based fusion strategy by using multiple iterative information exchanges among neighboring sensor nodes. The convergence property of the iterative diffusion is analyzed. In addition, it is shown that the convergence of the iterative diffusion can be interpreted from the information-theoretic perspective as minimization of the Kullback–Leibler divergence. The performance of the DCGMF-ID is compared with the DCGMF based on the average consensus (DCGMF-AC) and the DCGMF based on the iterative covariance intersection (DCGMF-ICI) via a maneuvering target-tracking problem using multiple sensors. The simulation results show that the DCGMF-ID has better performance than the DCGMF based on noniterative diffusion, which validates the benefit of iterative information exchanges. In addition, the DCGMF-ID outperforms the DCGMF-ICI and DCGMF-AC when the number of iterations is limited.

  1. A simplified approach to estimating the distribution of occasionally-consumed dietary components, applied to alcohol intake

    Directory of Open Access Journals (Sweden)

    Julia Chernova

    2016-07-01

    Background Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out as non-significant at the 5% significance level (p-value 0.062) but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
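
    The computational contrast can be illustrated on a hypothetical two-part model with one shared random effect: the probability that usual intake exceeds a threshold is computed once by Gauss-Hermite quadrature and once by Monte Carlo. All parameter values below are invented:

        import numpy as np
        from numpy.polynomial.hermite import hermgauss
        from scipy.stats import norm

        # Hypothetical two-part model: individual effect u ~ N(0, su^2);
        # P(consume | u) = logistic(b0 + u); log amount | u ~ N(m0 + u, s^2).
        b0, m0, s, su, c = -0.5, 2.0, 0.8, 0.6, 15.0   # c: intake threshold

        def p_exceed_given_u(u):
            p_cons = 1.0 / (1.0 + np.exp(-(b0 + u)))
            return p_cons * norm.sf((np.log(c) - (m0 + u)) / s)

        # Gauss-Hermite quadrature over u (change of variable u = sqrt(2)*su*t).
        t, w = hermgauss(40)
        quad = np.sum(w * p_exceed_given_u(np.sqrt(2.0) * su * t)) / np.sqrt(np.pi)

        # Monte Carlo over the same random effect.
        rng = np.random.default_rng(3)
        mc = p_exceed_given_u(rng.normal(0.0, su, 200_000)).mean()

        print(f"P(intake > {c}): quadrature={quad:.5f}, Monte Carlo={mc:.5f}")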

  2. High-Speed Target Identification System Based on the Plume’s Spectral Distribution

    Directory of Open Access Journals (Sweden)

    Wenjie Lang

    2015-01-01

    In order to recognize high-speed targets quickly and accurately, an identification system was designed based on analysis of the distribution characteristics of the plume spectrum. In the system, the target is aligned with a visible-light tracking module, and the spectral analysis of the target's plume radiation is performed by an interference module. A distinguishing-factor recognition algorithm was designed on the basis of the ratio of multi-feature band peak values to valley mean values. Effective recognition of a high-speed moving target can be achieved after partitioning the active region; the influence of target motion on spectral acquisition was also analyzed. In the experiment a small rocket's combustion plume was used as the target, and spectral detection was conducted at different target speeds 2.0 km away from the detection system. Experimental results showed that, within the same sampling period, the spectra of targets moving at different speeds exhibited a significant spectral offset, while the overall spectral distribution remained essentially consistent. By testing whether the distinguishing factor, calculated from the peak and valley values at the corresponding wave bands, falls within the distinction interval, effective identification of the target can be achieved.

  3. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

    To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets, and to improve the efficiency of UAV searching, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution for the target transition probability density function is introduced to calculate the prediction probability of moving-target existence, so that the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem is thereby transformed into a central optimization problem. To improve computational efficiency, a distributed model predictive control method is presented, from which the control command of each UAV is obtained. The simulation results verify that the proposed method reduces the blindness of UAV searching and effectively improves the overall efficiency of the team.
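
    A minimal sketch of the probability-map recursion described above, with an assumed detection/false-alarm sensor model and a Gaussian transition kernel implemented as a blur (grid size and probabilities are invented):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Hypothetical 50x50 target probability map, uniform prior.
        P = np.full((50, 50), 1.0 / 2500)
        pd, pf = 0.9, 0.05            # assumed detection / false-alarm probabilities

        def measurement_update(P, cell, detected):
            """Bayes update of the map after one sensor look at `cell`."""
            like = np.full(P.shape, pf if detected else 1.0 - pf)   # target elsewhere
            like[cell] = pd if detected else 1.0 - pd               # target in cell
            post = like * P
            return post / post.sum()

        def prediction_update(P, sigma=1.0):
            """Gaussian transition model for the moving target."""
            pred = gaussian_filter(P, sigma=sigma, mode="constant")
            return pred / pred.sum()

        P = measurement_update(P, (10, 12), detected=False)  # one 'no detection' look
        P = prediction_update(P)
        print("most likely cell:", np.unravel_index(P.argmax(), P.shape))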

  4. Estimation of Backward Impedance on Low-Voltage Distribution System using Measured Resonant Current

    Science.gov (United States)

    Miki, Toru; Konishi, Kazuki; Morimoto, Koji; Nagaoka, Naoto; Ametani, Akihiro

    Two methods for estimating the backward impedance of a power distribution system are proposed in this paper. The frequency response of the transient current flowing into a capacitor connected to a distribution line contains information about the backward impedance. The impedance is obtained from an attenuation constant and a resonance frequency determined by the capacitance and the impedance of the power distribution system; these parameters are stably obtained from the frequency response of the transient current using a least-squares method. The accuracy of this method depends heavily on the choice of time origin for the Fourier transform, so an additional time-origin estimation is required for an accurate estimate of the backward impedance. The second method estimates the backward impedance from two transient current waveforms obtained by alternately connecting different capacitors to the distribution line; the backward impedance can then be represented as a function of the frequency responses of these currents. Because this method is independent of the time origin, it is suitable for automatic measurement of the backward impedance. The proposed methods are applicable to estimating harmonic currents in the distribution system: in this paper, the harmonic current flowing through a distribution wire is estimated from the estimated backward impedance and measured harmonic voltages obtained with an instrument developed by the authors.
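
    A worked sketch of the first method under a simple series-RLC assumption: with a known connected capacitance C and a measured attenuation constant alpha and damped resonance frequency f_d, the relations alpha = R/(2L) and omega_0^2 = 1/(L*C) = alpha^2 + omega_d^2 give the backward inductance and resistance. The numbers below are invented:

        import numpy as np

        C = 10e-6                       # known capacitor connected to the line [F]
        alpha = 350.0                   # measured attenuation constant [1/s]
        f_d = 820.0                     # measured damped resonance frequency [Hz]

        omega_d = 2.0 * np.pi * f_d
        omega0_sq = alpha**2 + omega_d**2   # undamped resonance: omega0^2 = 1/(L C)
        L = 1.0 / (C * omega0_sq)           # backward inductance [H]
        R = 2.0 * alpha * L                 # backward resistance [ohm]

        print(f"estimated backward impedance: R = {R:.3f} ohm, L = {L*1e3:.3f} mH")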

  5. Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Fengjun Hu

    2016-01-01

    To address the problem of easily losing the target when obstacles appear in intelligent-robot target tracking, this paper proposes a target tracking algorithm that integrates a reduced-dimension optimal Kalman filtering algorithm, based on a phase-path volume integral, with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, Kalman filtering is used for fusion and optimization aimed at those defects. To counter the increased amount of calculation in the integrated algorithm, the dimension is reduced by replacing the Gaussian integral in the Kalman algorithm with the phase-path volume integral, which lowers the number of sampling points in the filtering process without affecting the precision of the original algorithm. Finally, the target centroid position produced by each Camshift iteration is used as the observation value of the improved Kalman filtering algorithm to correct the predicted value, yielding an optimal estimate of the target centroid position and keeping the target in track, so that the robot can understand the environmental scene and react correctly and in time to changes. Experiments show that the proposed algorithm performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through the dimension reduction.
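
    A generic constant-velocity Kalman filter of the kind used to smooth and predict a tracker's centroid is sketched below; the paper's phase-path volume integral reduction is not reproduced, and all noise levels are assumed:

        import numpy as np

        dt = 1.0
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                      [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe the centroid
        Q = 0.01 * np.eye(4)            # assumed process noise
        R = 4.0 * np.eye(2)             # assumed centroid measurement noise

        x = np.zeros(4)                 # state: [px, py, vx, vy]
        P = 100.0 * np.eye(4)

        def kf_step(x, P, z):
            """One predict/update cycle given a tracker centroid measurement z."""
            x, P = F @ x, F @ P @ F.T + Q                    # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
            x = x + K @ (z - H @ x)                          # correct with centroid
            P = (np.eye(4) - K @ H) @ P
            return x, P

        for z in [np.array([10.0, 5.0]), np.array([11.2, 5.9]), np.array([12.1, 7.1])]:
            x, P = kf_step(x, P, z)
        print("estimated centroid and velocity:", np.round(x, 2))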

  6. A laser speckle sensor to measure the distribution of static torsion angles of twisted targets

    DEFF Research Database (Denmark)

    Rose, B.; Imam, H.; Hanson, Steen Grüner

    1998-01-01

    A novel method for measuring the distribution of static torsion angles of twisted targets is presented. The method is based on Fourier transforming the scattered field in the direction perpendicular to the twist axis, while performing an imaging operation in the direction parallel to the axis. A cylindrical lens serves to image the closely spaced lateral positions of the target along the twist axis onto corresponding lines of the two-dimensional image sensor; thus, every single line of the image sensor measures the torsion angle of the corresponding surface position along the twist axis of the target. Experimentally, we measure the distribution of torsion angles in both uniform and non-uniform deformation zones. It is demonstrated both theoretically and experimentally that the measurements are insensitive to object shape and target distance if the image sensor is placed in the Fourier plane. A straightforward...

  7. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimating the direction of arrival (DOA) of moving targets from their acoustic signatures. Three subspace-based approaches are considered: Incoherent Wideband Multiple Signal Classification (IWM), Least-Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT), and Total Least-Squares ESPRIT (TLS-ESPRIT). Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and the Average Square Difference Function (ASDF). The evaluation was conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving along defined geometric trajectories. Mean absolute error and standard deviation of the DOA estimates with respect to ground truth are used as performance evaluation metrics. Lower mean errors confirm the superiority of the subspace-based approaches over the TDE-based techniques; among the compared methods, LS-ESPRIT showed the best performance.
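
    As a compact stand-in for the wideband subspace processing evaluated above, the sketch below runs narrowband MUSIC on a simulated uniform linear array with one source; array geometry, source angle, and noise level are all invented:

        import numpy as np

        M, d, theta_true, snaps = 8, 0.5, 25.0, 400   # elements, spacing/wavelength, deg
        rng = np.random.default_rng(0)

        def steer(theta_deg):
            k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
            return np.exp(1j * k * np.arange(M))

        # Simulated snapshots: one narrowband source plus white noise.
        s = (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
        X = np.outer(steer(theta_true), s) + 0.1 * (
            rng.normal(size=(M, snaps)) + 1j * rng.normal(size=(M, snaps)))

        Rxx = X @ X.conj().T / snaps
        w, V = np.linalg.eigh(Rxx)            # eigenvalues in ascending order
        En = V[:, :-1]                        # noise subspace (one source assumed)

        grid = np.arange(-90.0, 90.0, 0.1)
        spec = [1.0 / np.linalg.norm(En.conj().T @ steer(t))**2 for t in grid]
        print("MUSIC DOA estimate: %.1f deg" % grid[int(np.argmax(spec))])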

  8. Distributions of secondary particles around various targets exposed to 50 MeV protons

    CERN Document Server

    Fassò, A

    1976-01-01

    The particle production in thick targets of carbon, aluminium, and copper hit by 50 MeV protons has been studied with activation detectors. The measured angular distribution of secondaries around the targets was compared with calculations using semi-empirical formulae proposed by Alsmiller et al. In general, the agreement between experiment and theory is good, except at large angles, where the experimentally found production of secondaries is greater than predicted by the calculation. (17 refs).

  9. Distribution of Young Forests and Estimated Stand Age across Russia, 2012

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides the distribution of young forests (forests less than 27 years of age) and their estimated stand ages across the full extent of Russia at 500-m...

  10. Distributed Formation State Estimation Algorithms Under Resource and Multi-Tasking Constraints Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Recent work on distributed multi-spacecraft systems has resulted in a number of architectures and algorithms for accurate estimation of spacecraft and formation...

  11. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
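
    The fusion step rests on Dempster-Shafer evidence theory; a minimal implementation of Dempster's combination rule over a small frame of travel-time categories is sketched below (the mass assignments are invented, and the paper's full link/path machinery is omitted):

        from itertools import product

        def dempster_combine(m1, m2):
            """Dempster's rule for mass functions with frozenset focal elements."""
            fused, conflict = {}, 0.0
            for (a, x), (b, y) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + x * y
                else:
                    conflict += x * y
            return {k: v / (1.0 - conflict) for k, v in fused.items()}

        # Hypothetical evidence about a path travel time: short / medium / long.
        S, M, L = frozenset({"short"}), frozenset({"medium"}), frozenset({"long"})
        point_detector = {S: 0.5, M: 0.3, S | M | L: 0.2}    # link-based evidence
        interval_detector = {M: 0.6, M | L: 0.4}             # path-based evidence
        print(dempster_combine(point_detector, interval_detector))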

  12. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the global optimal estimate; its convergence rate is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the global optimal estimate in a finite number of steps. Numerical experiments illustrate the performance of the proposed methods.
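
    A toy version of the fully distributed iteration: two sub-systems each own a block of the parameter vector and repeatedly apply a scaled gradient of the global weighted least-squares objective, which converges to the central WLS solution. Problem sizes and the step-size rule are assumptions; the paper's preconditioning is not reproduced:

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.normal(size=(6, 4))                 # global measurement matrix
        x_true = rng.normal(size=4)
        W = np.diag(rng.uniform(0.5, 2.0, size=6))  # measurement weights
        y = A @ x_true + 0.01 * rng.normal(size=6)

        blocks = [slice(0, 2), slice(2, 4)]         # parameters owned by node 1 / node 2
        x = np.zeros(4)
        alpha = 1.0 / np.linalg.norm(A.T @ W @ A, 2)   # step size from spectral norm

        for _ in range(3000):
            g = A.T @ W @ (y - A @ x)               # gradient pieces shared by neighbors
            for blk in blocks:                      # each node updates only its block
                x[blk] += alpha * g[blk]

        x_central = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        print("max deviation from central WLS:", np.abs(x - x_central).max())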

  13. Modelling complete particle-size distributions from operator estimates of particle-size

    Science.gov (United States)

    Roberson, Sam; Weltje, Gert Jan

    2014-05-01

    Estimates of particle-size made by operators in the field and laboratory represent a vast and relatively untapped data archive. The wide spatial distribution of particle-size estimates makes them ideal for constructing geological models and soil maps. This study uses a large data set from the Netherlands (n = 4837) containing both operator estimates of particle size and complete particle-size distributions measured by laser granulometry. This study introduces a logit-based constrained-cubic-spline (CCS) algorithm to interpolate complete particle-size distributions from operator estimates. The CCS model is compared to four other models: (i) a linear interpolation; (ii) a log-hyperbolic interpolation; (iii) an empirical logistic function; and (iv) an empirical arctan function. Operator estimates were found to be both inaccurate and imprecise; only 14% of samples were successfully classified using the Dutch classification scheme for fine sediment. Operator estimates of sediment particle-size encompass the same range of values as particle-size distributions measured by laser analysis. However, the distributions measured by laser analysis show that most of the sand percentage values lie between zero and one, so the majority of the variability in the data is lost because operator estimates are made to the nearest 1% at best, and more frequently to the nearest 5%. A method for constructing complete particle-size distributions from operator estimates of sediment texture using a logit constrained cubic spline (CCS) interpolation algorithm is presented. This model and four other previously published methods are compared to establish the best approach to modelling particle-size distributions. The logit-CCS model is the most accurate method, although both logit-linear and log-linear interpolation models provide reasonable alternatives. Models based on empirical distribution functions are less accurate than interpolation algorithms for modelling particle-size distributions in

  14. Square-Root Sigma-Point Information Consensus Filters for Distributed Nonlinear Estimation

    OpenAIRE

    Guoliang Liu; Guohui Tian

    2017-01-01

    This paper focuses on the convergence rate and numerical characteristics of the nonlinear information consensus filter for object tracking using a distributed sensor network. To avoid the Jacobian calculation, improve the numerical characteristic and achieve more accurate estimation results for nonlinear distributed estimation, we introduce square-root extensions of derivative-free information weighted consensus filters (IWCFs), which employ square-root versions of the unscented transform, Stirling's interpolation and cubature rules to linearize nonlinear models, respectively. ...

  15. Load Modeling and State Estimation Methods for Power Distribution Systems: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Tom McDermott

    2010-05-07

    The project objective was to provide robust state estimation for distribution systems, comparable to what has been available on transmission systems for decades. This project used an algorithm called Branch Current State Estimation (BCSE), which is more effective than classical methods because it decouples the three phases of a distribution system and uses branch current instead of node voltage as the state variable, a better match to current measurements.

  16. Approximate Bayes Estimators of the Logistic Distribution Parameters Based on Progressive Type-II Censoring Scheme

    Directory of Open Access Journals (Sweden)

    Mohamed Mahmoud Mohamed

    2016-09-01

    In this paper we develop approximate Bayes estimators of the parameters, reliability, and hazard rate functions of the Logistic distribution by using Lindley's approximation, based on progressively type-II censored samples. Non-informative prior distributions are used for the parameters. Quadratic, linex and general entropy loss functions are used. The statistical performance of the Bayes estimates under the quadratic, linex and general entropy loss functions is compared with that of the maximum likelihood estimates through a simulation study.

  17. Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates.

    Directory of Open Access Journals (Sweden)

    Caroline A Curtis

    Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the 'plant characteristics' information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Our results show that distribution data are consistently broader than USDA PLANTS experts' knowledge and likely provide more robust estimates of climatic tolerance, especially for

  18. Distributed Estimation in Sensor Networks with Imperfect Model Information: An Adaptive Learning-Based Approach

    Science.gov (United States)

    2012-05-01

    ...in particular, the mean-squared error (MSE) blows up with the SNR. Other than being inaccurate, since the SNR is unknown a priori, the estimate... requires perfect knowledge of a, which is unknown a priori. In Section 3, we will introduce a learning-based distributed estimation procedure, the MDE...

  19. ESTIMATIONS OF THE PARAMETERS OF THE WEIBULL DISTRIBUTION WITH PROGRESSIVELY CENSORED DATA

    OpenAIRE

    Shuo-Jye, Wu; Department of Statistics, Tamkang University

    2002-01-01

    We obtained estimation results concerning a progressively type-II censored sample from a two-parameter Weibull distribution. The maximum likelihood method is used to derive the point estimators of the parameters. An exact confidence interval and an exact joint confidence region for the parameters are constructed. A numerical example is presented to illustrate the methods proposed here.
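
    A sketch of the point-estimation step: maximizing the Weibull log-likelihood for a progressively type-II censored sample, in which R_i surviving units are withdrawn at the i-th observed failure. The data and censoring scheme below are invented:

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical progressively type-II censored sample: failure times x and
        # the number of surviving units R[i] withdrawn at each failure.
        x = np.array([0.4, 0.9, 1.3, 1.8, 2.6, 3.1])
        R = np.array([1, 0, 2, 0, 1, 3])

        def neg_loglik(p):
            k, lam = np.exp(p)                  # log-parametrization keeps k, lam > 0
            z = (x / lam) ** k
            log_f = np.log(k / lam) + (k - 1) * np.log(x / lam) - z   # log density
            log_S = -z                                                # log survival
            return -(log_f.sum() + (R * log_S).sum())

        res = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
        k_hat, lam_hat = np.exp(res.x)
        print(f"Weibull MLE: shape={k_hat:.3f}, scale={lam_hat:.3f}")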

  20. Nonparametric estimation of the stationary M/G/1 workload distribution function

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1 queue can be obtained by systematic sampling of the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary...

  1. Nonparametric estimation of the stationary M/G/1 workload distribution function

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    2005-01-01

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1 queue can be obtained by systematic sampling of the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary...

  2. The Spatial Distribution of Poverty in Vietnam and the Potential for Targeting

    OpenAIRE

    Minot, Nicholas; Baulch, Bob

    2002-01-01

    The authors combine household survey and census data to construct a provincial poverty map of Vietnam and evaluate the accuracy of geographically targeted antipoverty programs. First, they estimate per capita expenditure as a function of selected household and geographic characteristics using the 1998 Vietnam Living Standards Survey. Next, they combine the results with data on the same hou...

  3. Experimental verification of NOVICE transport code predictions of electron distributions from targets

    Energy Technology Data Exchange (ETDEWEB)

    Kronenberg, S.; Brucker, G.J.; Jordan, T.; Bechtel, E.; Gentner, F.; Groeber, E

    2002-04-01

    This paper reports the results of experiments that were designed to check the validity of the NOVICE Adjoint Monte Carlo Transport code in predicting emission-electron distributions from irradiated targets. Previous work demonstrated that the code accurately calculated total electron yields from irradiated targets. In this investigation, a gold target was irradiated by X-rays with effective quantum energies of 79, 127, 174, 216, and 250 keV. Spectra of electrons from the target were measured for an incident photon angle of 45 deg., an emission-electron polar angle of 45 deg., azimuthal angles of 0 deg. and 180 deg., and in both the forward and backward directions. NOVICE was used to predict those electron-energy-distributions for the same set of experimental conditions. The agreement in shape of the theoretical and experimental distributions was good, whereas the absolute agreement in amplitude was within about a factor of 2 over most of the energy range of the spectra. Previous experimental and theoretical comparisons together with these results show that the code can be used to simulate the generation physics of those distributions.

  4. Comparison of "E-Rater"[R] Automated Essay Scoring Model Calibration Methods Based on Distributional Targets

    Science.gov (United States)

    Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine

    2012-01-01

    This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…

  5. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Design floods are key to sizing new waterworks and to reviewing the hydrological safety of existing ones. The most reliable method for estimating their magnitudes associated with given return periods is to fit a probabilistic model to the available records of maximum annual flows. Since such a model is at first unknown, several models need to be tested in order to select the most appropriate one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records, and their application has therefore been established as a norm or precept. The Johnson system has three families of distributions, one of which is the Log-Normal model with three fit parameters; it also marks the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained with such a model. Having contrasted the three probability distributions established by precept on 31 historical records of hydrological events, the Johnson system was applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) were compared with the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models for low return periods (< 1000 years). Because of its theoretical support, the SJU model is recommended for flood estimation.
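
    A hedged sketch of the fitting step, using scipy's johnsonsu as the unbounded Johnson (SJU) model on a synthetic record of annual maxima, and reading off design quantiles for a few return periods:

        import numpy as np
        from scipy.stats import johnsonsu

        rng = np.random.default_rng(11)
        annual_max = rng.gumbel(loc=500.0, scale=120.0, size=60)   # synthetic record

        params = johnsonsu.fit(annual_max)       # a, b (shape), loc, scale by MLE
        for T in (10, 100, 1000):                # return periods in years
            q = johnsonsu.ppf(1.0 - 1.0 / T, *params)
            print(f"T = {T:4d} yr design flood = {q:7.1f} m^3/s")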

  6. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain

    2017-01-09

    Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithm framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations with an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in using a small number of snapshots and achieving better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  7. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and is therefore an attractive choice.
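
    A small simulation of that comparison for the scale parameter a of the Maxwell distribution: the maximum likelihood estimator is a_hat = sqrt(mean(x^2)/3) and the minimum variance bound is a^2/(6n), so the empirical MLE variance should come out close to the bound (true parameter and sample sizes are invented):

        import numpy as np
        from scipy.stats import maxwell

        a, n, reps = 2.0, 200, 20000
        rng = np.random.default_rng(2)
        x = maxwell.rvs(scale=a, size=(reps, n), random_state=rng)

        a_mle = np.sqrt((x**2).mean(axis=1) / 3.0)   # MLE of the scale parameter
        crlb = a**2 / (6.0 * n)                      # minimum variance bound

        print(f"empirical MLE variance: {a_mle.var():.3e}")
        print(f"Cramer-Rao bound:       {crlb:.3e}")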

  8. A Time Delay Estimation Method Based on Wavelet Transform and Speech Envelope for Distributed Microphone Arrays

    Directory of Open Access Journals (Sweden)

    YIN, F.

    2013-08-01

    A time delay estimation method based on the wavelet transform and the speech envelope is proposed for distributed microphone arrays. The method first extracts the speech envelopes of the signals processed with a multi-level discrete wavelet transform, and then uses the envelopes to estimate a coarse time delay. Finally, it searches for the accurate time delay near the coarse estimate using the cross-correlation function calculated in the time domain. The simulation results illustrate that the proposed method can accurately estimate the time delay between two distributed microphone array signals.
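
    A sketch of the coarse-to-fine idea, with a Hilbert-transform envelope standing in for the paper's wavelet-based envelope extraction (signals, noise level, and the true delay are synthetic):

        import numpy as np
        from scipy.signal import hilbert, correlate

        fs = 16000
        rng = np.random.default_rng(4)
        s = np.convolve(rng.normal(size=fs), np.ones(8) / 8, mode="same")  # smooth signal
        d = 219                                       # true delay in samples
        x1 = s + 0.05 * rng.normal(size=fs)
        x2 = np.r_[np.zeros(d), s[:-d]] + 0.05 * rng.normal(size=fs)

        def xcorr_lag(a, b):
            """Lag of b relative to a (in samples) from the full cross-correlation."""
            c = np.correlate(b, a, mode="full")
            return int(np.argmax(c)) - (len(a) - 1)

        # Coarse stage: envelopes (Hilbert magnitude), decimated by 16.
        env1, env2 = np.abs(hilbert(x1)), np.abs(hilbert(x2))
        coarse = xcorr_lag(env1[::16], env2[::16]) * 16

        # Fine stage: time-domain cross-correlation restricted to a window around coarse.
        c = correlate(x2, x1, mode="full", method="fft")
        win = np.arange(coarse - 32, coarse + 33)
        fine = win[np.argmax(c[(len(x1) - 1) + win])]
        print(f"true delay: {d}, coarse: {coarse}, fine: {fine} samples")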

  9. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    Science.gov (United States)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions for that parameter. A Monte Carlo procedure is implemented to enable an empirical mean-squared error comparison between the Bayes estimators and the existing minimum variance unbiased and maximum likelihood estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
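
    A quick empirical version of that comparison under a gamma prior: with x_1..x_n ~ Poisson(lambda) and lambda ~ Gamma(alpha, beta), the posterior is Gamma(alpha + sum x_i, beta + n), whose mean is the Bayes estimator under squared error loss. The hyperparameters below are invented:

        import numpy as np

        rng = np.random.default_rng(9)
        alpha, beta, n, reps = 2.0, 1.0, 10, 50000

        lam = rng.gamma(alpha, 1.0 / beta, size=reps)        # true rates from the prior
        x = rng.poisson(lam[:, None], size=(reps, n))

        mle = x.mean(axis=1)                                  # maximum likelihood
        bayes = (alpha + x.sum(axis=1)) / (beta + n)          # posterior mean

        print("MSE of MLE:  ", np.mean((mle - lam) ** 2))
        print("MSE of Bayes:", np.mean((bayes - lam) ** 2))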

  10. Reduced complexity FFT-based DOA and DOD estimation for moving target in bistatic MIMO radar

    KAUST Repository

    Ali, Hussain

    2016-06-24

    In this paper, we consider a bistatic multiple-input multiple-output (MIMO) radar. We propose a reduced-complexity algorithm to estimate the direction-of-arrival (DOA) and direction-of-departure (DOD) of a moving target. We show that the parameter estimates can be expressed in terms of one-dimensional fast Fourier transforms, which drastically reduces the complexity of the optimization algorithm. The performance of the proposed algorithm is compared with the two-dimensional multiple signal classification (2D-MUSIC) and reduced-dimension MUSIC (RD-MUSIC) algorithms. Simulations show that our proposed algorithm has better estimation performance and lower computational complexity than the 2D-MUSIC and RD-MUSIC algorithms. Moreover, the simulation results also show that the proposed algorithm achieves the Cramer-Rao lower bound.

  11. Square-Root Sigma-Point Information Consensus Filters for Distributed Nonlinear Estimation.

    Science.gov (United States)

    Liu, Guoliang; Tian, Guohui

    2017-04-08

    This paper focuses on the convergence rate and numerical characteristics of the nonlinear information consensus filter for object tracking using a distributed sensor network. To avoid the Jacobian calculation, improve the numerical characteristic and achieve more accurate estimation results for nonlinear distributed estimation, we introduce square-root extensions of derivative-free information weighted consensus filters (IWCFs), which employ square-root versions of unscented transform, Stirling's interpolation and cubature rules to linearize nonlinear models, respectively. In addition, to improve the convergence rate, we introduce the square-root dynamic hybrid consensus filters (DHCFs), which use an estimated factor to weight the information contributions and shows a faster convergence rate when the number of consensus iterations is limited. Finally, compared to the state of the art, the simulation shows that the proposed methods can improve the estimation results in the scenario of distributed camera networks.

  12. Square-Root Sigma-Point Information Consensus Filters for Distributed Nonlinear Estimation

    Directory of Open Access Journals (Sweden)

    Guoliang Liu

    2017-04-01

    This paper focuses on the convergence rate and numerical characteristics of the nonlinear information consensus filter for object tracking using a distributed sensor network. To avoid the Jacobian calculation, improve the numerical characteristic and achieve more accurate estimation results for nonlinear distributed estimation, we introduce square-root extensions of derivative-free information weighted consensus filters (IWCFs), which employ square-root versions of the unscented transform, Stirling’s interpolation and cubature rules to linearize nonlinear models, respectively. In addition, to improve the convergence rate, we introduce the square-root dynamic hybrid consensus filters (DHCFs), which use an estimated factor to weight the information contributions and show a faster convergence rate when the number of consensus iterations is limited. Finally, compared to the state of the art, the simulation shows that the proposed methods can improve the estimation results in the scenario of distributed camera networks.

  13. Hybrid fuzzy charged system search algorithm based state estimation in distribution networks

    Directory of Open Access Journals (Sweden)

    Sachidananda Prasad

    2017-06-01

    This paper proposes a new hybrid charged system search (CSS) algorithm for state estimation in radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm in order to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and a practical Indian 85-bus radial distribution system. The obtained results have been compared with those of the conventional CSS algorithm, the weighted least squares (WLS) algorithm and particle swarm optimization (PSO) to assess the feasibility of the algorithm.

  14. Parallel Interdigitated Distributed Networks within the Individual Estimated by Intrinsic Functional Connectivity.

    Science.gov (United States)

    Braga, Rodrigo M; Buckner, Randy L

    2017-07-19

    Certain organizational features of brain networks present in the individual are lost when central tendencies are examined in the group. Here we investigated the detailed network organization of four individuals each scanned 24 times using MRI. We discovered that the distributed network known as the default network is comprised of two separate networks possessing adjacent regions in eight or more cortical zones. A distinction between the networks is that one is coupled to the hippocampal formation while the other is not. Further exploration revealed that these two networks were juxtaposed with additional networks that themselves fractionate group-defined networks. The collective networks display a repeating spatial progression in multiple cortical zones, suggesting that they are embedded within a broad macroscale gradient. Regions contributing to the newly defined networks are spatially variable across individuals and adjacent to distinct networks, raising issues for network estimation in group-averaged data and applied endeavors, including targeted neuromodulation.

  15. Effects of Experimental Conditions on Estimation Uncertainty of Weibull Distribution: Applications for Crack Initiation Testing

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae Phil; Bahn, Chi Bum [Pusan National University, Busan (Korea, Republic of)

    2016-10-15

    It is well known that stress corrosion cracking (SCC) is one of the main material-related issues in operating nuclear reactors. To predict the initiation time of SCC, the Weibull distribution is widely used as a statistical model representing SCC reliability. The typical experimental procedure for an SCC initiation test involves an interval-censored cracking test with several specimens. From the result of the test, the experimenters can estimate the parameters of the Weibull distribution by maximum likelihood estimation (MLE) or median rank regression (MRR). However, it is hard for experimenters to determine the number of test specimens and the censoring intervals needed to obtain sufficient accuracy of the Weibull estimators. Therefore, in this work, the effects of several experimental conditions on the estimation uncertainties of the Weibull distribution were studied through Monte Carlo simulation. The main goal of this work is to provide quantitative estimation uncertainties for experimenters who want to develop a probabilistic SCC initiation model from a cracking test. The widely used MRR and MLE are considered as estimation methods for the Weibull distribution. Using Monte Carlo simulation, the uncertainties of the MRR and ML estimators were quantified in various experimental cases, and the uncertainties of the TDCI and TICI cases were compared.
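
    A sketch of the median rank regression step and its Monte Carlo spread, using complete samples rather than the paper's interval censoring (sample size, replication count, and true parameters are invented):

        import numpy as np

        def mrr_fit(x):
            """Median rank regression for a complete Weibull sample."""
            x = np.sort(x)
            n = len(x)
            F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # Bernard's median ranks
            slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
            return slope, np.exp(-intercept / slope)          # shape, scale

        rng = np.random.default_rng(6)
        shape_true, scale_true, n = 2.0, 100.0, 20
        fits = np.array([mrr_fit(scale_true * rng.weibull(shape_true, n))
                         for _ in range(5000)])
        print("MRR shape estimate: mean %.3f, std %.3f"
              % (fits[:, 0].mean(), fits[:, 0].std()))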

  16. Estimation of pore size distribution using concentric double pulsed-field gradient NMR.

    Science.gov (United States)

    Benjamini, Dan; Nevo, Uri

    2013-05-01

    Estimation of pore size distribution of well calibrated phantoms using NMR is demonstrated here for the first time. Porous materials are a central constituent in fields as diverse as biology, geology, and oil drilling. Noninvasive characterization of monodisperse porous samples using conventional pulsed-field gradient (PFG) NMR is a well-established method. However, estimation of pore size distribution of heterogeneous polydisperse systems, which comprise most of the materials found in nature, remains extremely challenging. Concentric double pulsed-field gradient (CDPFG) is a 2-D technique where both q (the amplitude of the diffusion gradient) and φ (the relative angle between the gradient pairs) are varied. A recent prediction indicates this method should produce a more accurate and robust estimation of pore size distribution than its conventional 1-D versions. Five well defined size distribution phantoms, consisting of 1-5 different pore sizes in the range of 5-25 μm were used. The estimated pore size distributions were all in good agreement with the known theoretical size distributions, and were obtained without any a priori assumption on the size distribution model. These findings support that in addition to its theoretical benefits, the CDPFG method is experimentally reliable. Furthermore, by adding the angle parameter, sensitivity to small compartment sizes is increased without the use of strong gradients, thus making CDPFG safe for biological applications.

  17. Quantitative estimation of plum pox virus targets acquired and transmitted by a single Myzus persicae.

    Science.gov (United States)

    Moreno, Aranzazu; Fereres, Alberto; Cambra, Mariano

    2009-01-01

    The viral charge acquired and inoculated by single aphids in a non-circulative transmission is estimated using plum pox virus (PPV). A combination of electrical penetration graph and TaqMan real-time RT-PCR techniques was used to establish the average number of PPV RNA targets inoculated by an aphid in a single probe (26,750), approximately half of the number acquired. This number of PPV targets produced systemic infection in 20% of the inoculated receptor plants. No significant differences were found between the number of PPV RNA targets acquired after one and after five intracellular punctures (pd), but the frequency of infected receptor plants was higher after 5 pd. The percentage of PPV-positive leaf discs after an inoculation probe of just 1 pd (28%; 4,603 targets) was lower than after 5 pd (45.8%; 135 × 10^6 targets). The methodology employed could easily be extended to other virus-vector-host combinations to improve the accuracy of models used in virus epidemiology.

  18. Estimating time and spatial distribution of snow water equivalent in the Hakusan area

    Science.gov (United States)

    Tanaka, K.; Matsui, Y.; Touge, Y.

    2015-12-01

    In the Sousei program, an ongoing Japanese research program on climate change risk information, the impact of climate change on water resources is assessed using an integrated water resources model consisting of a land surface model, an irrigation model, a river routing model, a reservoir operation model, and a crop growth model. Under climate change, reductions in snowfall amount and snow cover, changes in snowmelt timing, and the resulting changes in river discharge are of increasing concern, so the evaluation of snow water amounts is crucial for assessing the impact of climate change on water resources in Japan. To validate the snow simulation of the land surface model, the temporal and spatial distribution of snow water equivalent was estimated using observed surface meteorological data and Radar Analysis Precipitation (RAP) data. The target area is Hakusan; Hakusan means 'white mountain' in Japanese. The water balance of the Tedori River Dam catchment was checked against daily inflow data. The analyzed runoff agreed generally well with observations for the period from 2010 to 2012. The results for the 2010-2011 winter show that the maximum snow water equivalent in the headwater area of the Tedori River dam exceeded 2000 mm in early April. On the other hand, owing to the underestimation of the RAP data, the analyzed runoff was underestimated from 2006 to 2009. This underestimation probably stems not from deficiencies of the land surface model but from the quality of the input precipitation data. In the original RAP, only the rain gauge data of the Japan Meteorological Agency (JMA) were used in the analysis; recently, rain gauge data from the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) and local governments have been added, so the quality of the RAP data, especially in mountain regions, has been greatly improved. A "reanalysis" of the RAP precipitation using all available off-line rain gauge information is strongly recommended. High-quality precipitation data will contribute to validate

  19. Acoustic Estimates of Distribution and Biomass of Different Acoustic Scattering Types Between the New England Shelf Break and Slope Waters

    KAUST Repository

    McLaren, Alexander

    2011-11-01

    Due to their great ecological significance, mesopelagic fishes are attracting a wider audience on account of the large biomass they represent. Data from the National Marine Fisheries Service (NMFS) provided the opportunity to explore an unknown region of the North-West Atlantic, adjacent to one of the most productive fisheries in the world. Acoustic data collected during the cruise required the identification of acoustically distinct scattering types to make inferences about the migrations, distributions and biomass of mesopelagic scattering layers. Six scattering types were identified in our data by the proposed method, and their migrations and distributions were traced in the top 200 m of the water column. The method was able to detect and trace the movements of three scattering types to 1000 m depth, two of which can be further subdivided. This process of identification enabled the development of three physically derived target-strength models, adapted to traceable acoustic scattering types, for the analysis of biomass and length distribution to 1000 m depth. The abundance and distribution of acoustic targets varied closely with the varying physical environments associated with a warm core ring in the New England continental shelf break region. The continental shelf break produces biomass density estimates twice as high as the warm core ring, and the surrounding continental slope waters are an order of magnitude lower than either estimate. Biomass associated with distinct layers is assessed, and any benefits brought about by upwelling at the edge of the warm core ring are shown not to result in higher abundance of deepwater species. Finally, asymmetric diurnal migrations in shelf break waters contrast markedly with the symmetry of migrating layers within the warm core ring, both in structure and in density estimates, supporting a theory of predatorial and nutritional constraints on migrating pelagic species.

  20. Targeted Maximum Likelihood Estimation for Dynamic and Static Longitudinal Marginal Structural Working Models.

    Science.gov (United States)

    Petersen, Maya; Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark

    2014-06-18

    This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first-line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa, are analyzed to investigate this question using both TMLE and IPW estimators. Our results demonstrate practical advantages of the

  1. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    Science.gov (United States)

    Haroldson, Mark A.; Schwartz, Charles C.; , Daniel D. Bjornlie; , Daniel J. Thompson; , Kerry A. Gunther; , Steven L. Cain; , Daniel B. Tyers; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe the distribution. This method was complex and computationally time-consuming, and it excluded observations of unmarked bears. Our objective was to develop a technique for estimating grizzly bear distribution that would allow the use of all verified grizzly bear location data and be simple enough to update frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
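
    A minimal ordinary-kriging sketch of the interpolation step: a value at an unsampled grid cell is predicted from neighboring cell values under an assumed exponential covariance model (coordinates, values, and the covariance range are invented):

        import numpy as np

        # Hypothetical grid-cell centers (km) and an occurrence index per cell.
        pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0], [6.0, 3.0]])
        vals = np.array([0.9, 0.7, 0.6, 0.4, 0.1])

        def cov(h, sill=1.0, rng_km=5.0):
            return sill * np.exp(-h / rng_km)        # exponential covariance model

        def krige(x0):
            """Ordinary kriging: solve [K 1; 1' 0][w; mu] = [k; 1]."""
            n = len(pts)
            d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
            K = np.zeros((n + 1, n + 1))
            K[:n, :n] = cov(d)
            K[n, :n] = K[:n, n] = 1.0                # unbiasedness constraint
            rhs = np.append(cov(np.linalg.norm(pts - x0, axis=1)), 1.0)
            w = np.linalg.solve(K, rhs)[:n]
            return w @ vals

        print("kriged value at (1.5, 1.5):", round(krige(np.array([1.5, 1.5])), 3))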

  2. The current duration design for estimating the time to pregnancy distribution

    DEFF Research Database (Denmark)

    Gasbarra, Dario; Arjas, Elja; Vehtari, Aki

    2015-01-01

    This paper was inspired by the studies of Niels Keiding and co-authors on estimating the waiting time-to-pregnancy (TTP) distribution, and in particular on using the current duration design in that context. In this design, a cross-sectional sample of women is collected from those who are currently attempting to become pregnant, recording from each the time she has been attempting. Our aim here is to study the identifiability and the estimation of the waiting time distribution on the basis of current duration data. The main difficulty stems from the fact that very short waiting times are only rarely selected into the sample of current durations, and this renders their estimation unstable. We introduce here a Bayesian method for this estimation problem, prove its asymptotic consistency, and compare the method to some variants of the non-parametric maximum likelihood estimators.

  3. Estimation of current density distribution of PAFC by analysis of cell exhaust gas

    Energy Technology Data Exchange (ETDEWEB)

    Kato, S.; Seya, A. [Fuji Electric Co., Ltd., Ichihara-shi (Japan)]; Asano, A. [Fuji Electric Corporate, Ltd., Yokosuka-shi (Japan)]

    1996-12-31

    Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for producing fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out the distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in a cell plane calculated by the simulation was verified.

  4. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants.

    Science.gov (United States)

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared the distributions of the submerged species Hydrilla verticillata estimated by eDNA analysis, visual observation, and past distribution records. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys and has the potential to estimate the biomass of aquatic plants.

  5. Experimental verification of NOVICE transport code predictions of electron distributions from targets

    CERN Document Server

    Kronenberg, S; Jordan, T; Bechtel, E; Gentner, F; Groeber, E

    2002-01-01

    This paper reports the results of experiments that were designed to check the validity of the NOVICE Adjoint Monte Carlo Transport code in predicting emission-electron distributions from irradiated targets. Previous work demonstrated that the code accurately calculated total electron yields from irradiated targets. In this investigation, a gold target was irradiated by X-rays with effective quantum energies of 79, 127, 174, 216, and 250 keV. Spectra of electrons from the target were measured for an incident photon angle of 45 deg., an emission-electron polar angle of 45 deg., azimuthal angles of 0 deg. and 180 deg., and in both the forward and backward directions. NOVICE was used to predict those electron-energy-distributions for the same set of experimental conditions. The agreement in shape of the theoretical and experimental distributions was good, whereas the absolute agreement in amplitude was within about a factor of 2 over most of the energy range of the spectra. Previous experimental and theoretical c...

  6. Non-parametric estimation and model checking procedures for marginal gap time distributions for recurrent events.

    Science.gov (United States)

    Kvist, Kajsa; Gerster, Mette; Andersen, Per Kragh; Kessing, Lars Vedel

    2007-12-30

    For recurrent events there is evidence that misspecification of the frailty distribution can cause severe bias in estimated regression coefficients (Am. J. Epidemiol 1998; 149:404-411; Statist. Med. 2006; 25:1672-1684). In this paper we adapt a procedure originally suggested in (Biometrika 1999; 86:381-393) for parallel data for checking the gamma frailty to recurrent events. To apply the model checking procedure, a consistent non-parametric estimator for the marginal gap time distributions is needed. This is in general not possible due to induced dependent censoring in the recurrent events setting; however, in (Biometrika 1999; 86:59-70) a non-parametric estimator for the joint gap time distributions based on the principle of inverse probability of censoring weights is suggested. Here, we apply this estimator in the model checking procedure; the performance of the method is investigated with simulations and applied to Danish registry data. The method is further investigated using the usual Kaplan-Meier estimator and a marginalized estimator for the marginal gap time distributions. We conclude that the procedure only works when the recurrent event is common and when the intra-individual association between gap times is weak. Copyright (c) 2007 John Wiley & Sons, Ltd.

  7. Scan statistics with local vote for target detection in distributed system

    Science.gov (United States)

    Luo, Junhai; Wu, Qi

    2017-12-01

    Target detection occupies a pivotal position in distributed systems. Scan statistics, one of the most efficient detection methods, has been applied to a variety of anomaly detection problems and significantly improves the probability of detection. However, scan statistics cannot achieve the expected performance when the noise intensity is strong or the signal emitted by the target is weak. The local vote algorithm can also achieve a higher target detection rate. After the local vote, the counting rule is usually adopted for decision fusion. The counting rule does not use the information about the contiguity of sensors but takes all sensors' data into consideration, which makes the result undesirable. In this paper, we propose a scan statistics with local vote (SSLV) method, which combines scan statistics with local vote decisions. Before the scan statistic is computed, each sensor executes a local vote decision according to its neighbors' data and its own. By combining the advantages of both, our method obtains a higher detection rate in low signal-to-noise-ratio environments than scan statistics alone. After the local vote decision, the distribution of sensors which have detected the target becomes more concentrated. To make full use of the local vote decision, we introduce a variable step parameter for the SSLV, which significantly shortens the scan period, especially when the target is absent. Analysis and simulations are presented to demonstrate the performance of our method.
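
    A toy sketch of the two stages named above for a 1-D field of binary sensors: each sensor first re-decides by majority vote over its neighborhood, then a sliding-window scan statistic is computed over the voted decisions. All probabilities, window sizes, and the target location are hypothetical; the paper's variable-step-parameter refinement is omitted.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100                                  # sensors on a line
        p_false, p_detect = 0.1, 0.7             # per-sensor decision probabilities
        truth = np.zeros(n, dtype=bool)
        truth[40:60] = True                      # hypothetical target footprint
        decisions = rng.random(n) < np.where(truth, p_detect, p_false)

        def local_vote(d, k=1):
            """Each sensor re-decides by majority over itself and k neighbors per side."""
            out = np.empty_like(d)
            for i in range(len(d)):
                lo, hi = max(0, i - k), min(len(d), i + k + 1)
                out[i] = d[lo:hi].sum() * 2 > (hi - lo)
            return out

        def scan_statistic(d, w=10):
            """Maximum count of positive decisions over any window of w sensors."""
            counts = np.convolve(d.astype(int), np.ones(w, dtype=int), mode="valid")
            return counts.max(), counts.argmax()

        voted = local_vote(decisions, k=1)
        stat, where = scan_statistic(voted, w=10)
        print(stat, where)   # declare a target if stat exceeds a preset threshold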

  8. Estimation of T-cell repertoire diversity and clonal size distribution by Poisson abundance models.

    Science.gov (United States)

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Carneiro, Jorge

    2010-02-28

    The answer to many fundamental questions in Immunology requires the quantitative characterization of the T-cell repertoire, namely T cell receptor (TCR) diversity and clonal size distribution. An increasing number of repertoire studies are based on sequencing of the TCR variable regions in T-cell samples from which one tries to estimate the diversity of the original T-cell populations. Hitherto, estimation of TCR diversity was tackled either by a "standard" method that assumes a homogeneous clonal size distribution, or by non-parametric methods, such as the abundance-coverage and incidence-coverage estimators. However, both methods show caveats. On the one hand, the samples exhibit clonal size distributions with heavy right tails, a feature that is incompatible with the assumption of an equal frequency of every TCR sequence in the repertoire. Thus, this "standard" method produces inaccurate estimates. On the other hand, non-parametric estimators are robust in a wide range of situations, but per se provide no information about the clonal size distribution. This paper redeploys Poisson abundance models from Ecology to overcome the limitations of the above inferential procedures. These models assume that each TCR variant is sampled according to a Poisson distribution with a specific sampling rate, itself varying according to some Exponential, Gamma, or Lognormal distribution, or still an appropriate mixture of Exponential distributions. With these models, one can estimate the clonal size distribution in addition to TCR diversity of the repertoire. A procedure is suggested to evaluate robustness of diversity estimates with respect to the most abundant sampled TCR sequences. For illustrative purposes, previously published data on mice with limited TCR diversity are analyzed. Two of the presented models are more consistent with the data and give the most robust TCR diversity estimates. They suggest that clonal sizes follow either a Lognormal or an appropriate mixture of

  9. Target-object integration, attention distribution, and object orientation interactively modulate object-based selection.

    Science.gov (United States)

    Al-Janabi, Shahd; Greenberg, Adam S

    2016-10-01

    The representational basis of attentional selection can be object-based. Various studies have suggested, however, that object-based selection is less robust than spatial selection across experimental paradigms. We sought to examine the manner by which the following factors might explain this variation: Target-Object Integration (targets 'on' vs. part 'of' an object), Attention Distribution (narrow vs. wide), and Object Orientation (horizontal vs. vertical). In Experiment 1, participants discriminated between two targets presented 'on' an object in one session, or presented as a change 'of' an object in another session. There was no spatial cue (thus, attention was initially focused widely), and the objects were horizontal or vertical. We found evidence of object-based selection only when targets constituted a change 'of' an object. Additionally, object orientation modulated the sign of object-based selection: We observed a same-object advantage for horizontal objects, but a same-object cost for vertical objects. In Experiment 2, an informative cue preceded a single target presented 'on' an object or as a change 'of' an object (thus, attention was initially focused narrowly). Unlike in Experiment 1, we found evidence of object-based selection independent of target-object integration. We again found that the sign of selection was modulated by the objects' orientation. This result may reflect a meridian effect, which emerged due to anisotropies in the cortical representations when attention is oriented endogenously. Experiment 3 revealed that object orientation did not modulate object-based selection when attention was oriented exogenously. Our findings suggest that target-object integration, attention distribution, and object orientation modulate object-based selection, but only in combination.

  10. Robust modeling in screening studies: estimation of sensitivity and preclinical sojourn time distribution.

    Science.gov (United States)

    Shen, Yu; Zelen, Marvin

    2005-10-01

    In early-detection clinical trials, quantities such as the sensitivity of the screening modality and the preclinical duration of the disease are important to describe the natural history of the disease and its interaction with a screening program. Assume that the schedule of a screening program is periodic and that the sojourn time in the preclinical state has a piecewise density function. Modeling the preclinical sojourn time distribution as a piecewise density function results in robust estimation of the distribution function. Our aim is to estimate the piecewise density function and the examination sensitivity using both generalized least squares and maximum likelihood methods. We carried out extensive simulations to evaluate the performance of the methods of estimation. The different estimation methods provide complementary tools to obtain the unknown parameters. The methods are applied to three breast cancer early-detection trials.

  11. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing the expected grain size distribution on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in estimating and testing procedures enable grain size distributions to be unfolded more efficiently.

  12. Estimation of the Binomial Distribution Parameters Using the Method of Moments and Its Asymptotic Properties

    Directory of Open Access Journals (Sweden)

    A.N. Safiullina

    2016-06-01

    The problem of estimating the parameters m and p of the binomial distribution from a sample of fixed size n with the help of the method of moments is considered in this paper. Using the delta method, the joint asymptotic normality of the estimates is established and the parameters of the limit distribution are calculated. The moment estimates of the parameters m and p do not possess finite means and variances. An explanation is offered for the asymptotic normality parameters in terms of the accuracy properties of the estimates. On the basis of statistical modelling, the accuracy properties of the delta-method estimates and of their modifications which do not have the initial defects of the estimates (values of the estimates of p below zero and values of m smaller than the greatest value in the sample) are explored. An example of estimating the parameters m and p from observations of the number of responses in an experiment with a nervous synapse (m is the number of vesicles with acetylcholine in the vicinity of the synapse, p is the probability of acetylcholine release by each vesicle) is provided.
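
    For reference, the moment equations behind the record above are E[X] = mp and Var[X] = mp(1 - p), so the estimates follow directly from the sample mean and variance. A minimal sketch with hypothetical data; note that it can produce exactly the defects the abstract mentions (an estimate of p below zero, or of m below the sample maximum).

        import numpy as np

        def binom_mom(sample):
            """Method-of-moments estimates of (m, p) for Binomial(m, p) data."""
            xbar = np.mean(sample)
            s2 = np.var(sample)
            p_hat = 1.0 - s2 / xbar      # from Var[X]/E[X] = 1 - p
            m_hat = xbar / p_hat         # from E[X] = m p
            return m_hat, p_hat

        rng = np.random.default_rng(2)
        sample = rng.binomial(20, 0.3, size=500)   # hypothetical data: m = 20, p = 0.3
        print(binom_mom(sample))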

  13. A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2018-02-01

    Composite models have received much attention in the recent actuarial literature as descriptions of heavy-tailed insurance loss data. One model that performs well in describing this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, this distribution is revisited to carry out estimation of parameters via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.

  14. Study of QTL Effects Distribution on Accuracy of Genomic Breeding values Estimated Using Bayesian Method

    Directory of Open Access Journals (Sweden)

    Nazanin Mahmoudi

    2016-04-01

    Introduction: Genetic evaluation and estimation of breeding values are among the most fundamental elements of breeding programmes for genetic improvement. Recently, genomic selection has become an efficient method to approach this aim. The accuracy of the estimated genomic breeding value is the most important factor in genomic selection. Different studies have addressed the factors affecting the accuracy of estimated genomic breeding values. The aim of this study was to evaluate the effect of beta and gamma distributions of QTL effects on the accuracy of genetic evaluation. Materials and Methods: A genome consisting of 10 chromosomes of 200 cM length was simulated. Markers were spaced at 0.2 cM intervals and different numbers of QTL with random distribution were simulated. Only additive gene effects were considered. The base population was simulated with an effective size of 100 animals, and this structure was continued up to generation 50 to create linkage disequilibrium between the markers and QTL. The population size was increased to 1000 animals in generation 51 (reference generation). Marker effects were calculated from the genomic and phenotypic information. Genomic breeding values were computed in generations 52 to 57 (training generations). Effects of the gamma 1 distribution (shape = 0.4, scale = 1.66), gamma 2 distribution (shape = 0.4, scale = 1) and beta distribution (shape1 = 3.11, shape2 = 1.16) were studied in the reference and training groups. The heritability values were 0.2 and 0.05. Results and Discussion: The results showed that the accuracy of genomic breeding values decreased over generations (from 51 to 57) for the two gamma distributions and the beta distribution; this decrease may be due to two factors: recombination has a negative impact on the accuracy of genomic breeding values, and selection reduces genetic variance as the number of generations increases. The accuracy of genomic estimated breeding values increased as the heritability increased so that the high

  15. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit the model to real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
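
    A minimal sketch of the pipeline the abstract describes: fit a two-component normal mixture to a return series, then read VaR and CVaR off the fitted distribution. The returns here are simulated stand-ins for the FBMKLCI data, scikit-learn is assumed available, and the sampling-based quantile step is just one convenient way to invert the mixture CDF.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        returns = np.concatenate([rng.normal(0.01, 0.03, 180),    # calm regime
                                  rng.normal(-0.02, 0.08, 60)])   # turbulent regime

        gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))

        alpha = 0.05
        draws = gm.sample(200_000)[0].ravel()   # large sample from the fitted mixture
        var = -np.quantile(draws, alpha)        # value at risk, reported as a positive loss
        cvar = -draws[draws <= -var].mean()     # expected loss beyond the VaR
        print(f"VaR = {var:.4f}, CVaR = {cvar:.4f}")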

  16. Matched, mismatched, and robust scatter matrix estimation and hypothesis testing in complex t-distributed data

    Science.gov (United States)

    Fortunati, Stefano; Gini, Fulvio; Greco, Maria S.

    2016-12-01

    Scatter matrix estimation and hypothesis testing are fundamental inference problems in a wide variety of signal processing applications. In this paper, we investigate and compare the matched, mismatched, and robust approaches to solving these problems in the context of the complex elliptically symmetric (CES) distributions. The matched approach is when the estimation and detection algorithms are tailored to the correct data distribution, whereas the mismatched approach refers to the case when the scatter matrix estimator and the decision rule are derived under a model assumption that is not correct. The robust approach aims at providing good estimation and detection performance, even if suboptimal, over a large set of possible data models, irrespective of the actual data distribution. Specifically, due to its central importance in both statistical and engineering applications, we assume a complex t-distribution for the input data. We analyze scatter matrix estimators derived under the three different approaches and compare their mean square error (MSE) with the constrained Cramér-Rao bound (CCRB) and the constrained misspecified Cramér-Rao bound (CMCRB). In addition, the detection performance and false alarm rate (FAR) of the various detection algorithms are compared with those of the clairvoyant optimum detector.
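
    As one concrete instance of the robust approach discussed above, the sketch below implements Tyler's fixed-point scatter estimator, a classical distribution-free estimator for elliptical data; it is not the paper's matched or mismatched estimator. The heavy-tailed complex test data are hypothetical.

        import numpy as np

        def tyler_scatter(X, n_iter=50):
            """Tyler's fixed-point scatter estimator for n-by-p complex data,
            normalized so that trace(Sigma) = p."""
            n, p = X.shape
            sigma = np.eye(p, dtype=complex)
            for _ in range(n_iter):
                inv = np.linalg.inv(sigma)
                # quadratic forms x_i^H Sigma^{-1} x_i for all samples at once
                q = np.einsum("ij,jk,ik->i", X.conj(), inv, X).real
                sigma = (p / n) * (X.T * (1.0 / q)) @ X.conj()   # sum_i x_i x_i^H / q_i
                sigma = p * sigma / np.trace(sigma).real
            return sigma

        rng = np.random.default_rng(4)
        n, p = 1000, 4
        X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
        X *= rng.standard_gamma(1.0, (n, 1)) ** -0.5   # heavy-tailed, t-like scaling
        print(np.round(tyler_scatter(X), 2))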

  17. Distributed Estimation of Oscillations in Power Systems: an Extended Kalman Filtering Approach

    OpenAIRE

    Yu, Zhe; Shi, Di; Wang, Zhiwei; Zhang, Qibing; Huang, Junhui; Pan, Sen

    2017-01-01

    Online estimation of electromechanical oscillation parameters provides essential information to prevent system instability and blackout and helps to identify event categories and locations. We formulate the problem as a state space model and employ the extended Kalman filter to estimate oscillation frequencies and damping factors directly based on data from phasor measurement units. Due to considerations of communication burdens and privacy concerns, a fully distributed algorithm is proposed ...

  18. Space-time Coordinated Distributed Sensing Algorithms for Resource Efficient Narrowband Target Localization and Tracking

    OpenAIRE

    Shashi Phoha; John Koch; Eric Grele; Christopher Griffin; Bharat Madan

    2005-01-01

    Distributed sensing has been used for enhancing signal to noise ratios for space-time localization and tracking of remote objects using phased array antennas, sonar, and radio signals. The use of these technologies in identifying mobile targets in a field, emitting acoustic signals, using a network of low-cost narrow band acoustic micro-sensing devices randomly dispersed over the region of interest, presents unique challenges. The effects of wind, turbulence, and temperature gradients and oth...

  19. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution.

    Science.gov (United States)

    Han, Fang; Liu, Han

    2017-02-01

    The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we present for the first time a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
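
    A minimal sketch of the rank-based estimator studied above: compute pairwise Kendall's tau and apply the sine transformation R_jk = sin(pi * tau_jk / 2) to estimate the latent Pearson correlation matrix. The data are simulated under an arbitrary monotone marginal transformation, which is exactly the setting the transelliptical family allows.

        import numpy as np
        from scipy.stats import kendalltau

        def latent_correlation(X):
            """Transformed Kendall's tau estimate of the latent correlation matrix."""
            p = X.shape[1]
            R = np.eye(p)
            for j in range(p):
                for k in range(j + 1, p):
                    tau, _ = kendalltau(X[:, j], X[:, k])
                    R[j, k] = R[k, j] = np.sin(0.5 * np.pi * tau)
            return R

        rng = np.random.default_rng(5)
        cov = [[1.0, 0.5, 0.2], [0.5, 1.0, 0.4], [0.2, 0.4, 1.0]]
        Z = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=800)
        X = np.exp(Z)   # unspecified monotone marginal transformation
        print(np.round(latent_correlation(X), 2))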

  20. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions-of-arrival (DOAs) of coherently distributed (CD) sources, one that can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which describe these relations using the propagator technique. The central DOA estimates are then obtained from the principal diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method significantly reduces the computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is a robust estimator which does not depend on the angular distribution shape of the CD sources.

  1. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    Science.gov (United States)

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to the use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model based on non-parametric kernel density estimation was developed, and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC for the protection of aquatic species in China, which were then compared and contrasted with WQC from other jurisdictions. HC5 values for the protection of different types of species were derived for the three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and in assessing risks to ecosystems.
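
    A minimal sketch of deriving an HC5 from a kernel-density SSD. The toxicity values are hypothetical placeholders for curated ecotoxicity data, and scipy's default bandwidth stands in for the optimized bandwidths the paper proposes.

        import numpy as np
        from scipy.stats import gaussian_kde

        lc50 = np.array([12.0, 25.0, 40.0, 55.0, 80.0, 120.0, 300.0, 450.0, 900.0, 2500.0])
        log_tox = np.log10(lc50)                 # SSDs are usually built on the log scale

        kde = gaussian_kde(log_tox)              # non-parametric SSD
        grid = np.linspace(log_tox.min() - 2, log_tox.max() + 2, 2000)
        cdf = np.cumsum(kde(grid))
        cdf /= cdf[-1]                           # normalize the numerical CDF

        # HC5: concentration hazardous to no more than 5% of species.
        hc5 = 10 ** grid[np.searchsorted(cdf, 0.05)]
        print(f"HC5 = {hc5:.1f} ug/L")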

  2. Distributed Detection of Randomly Located Targets in Mobility-Assisted Sensor Networks with Node Mobility Management

    Directory of Open Access Journals (Sweden)

    Jayaweera, Sudharman K.

    2010-01-01

    Performance gain achieved by adding mobile nodes to a stationary sensor network for target detection depends on factors such as the number of mobile nodes deployed, mobility patterns, speed and energy constraints of mobile nodes, and the nature of the target locations (deterministic or random). In this paper, we address the problem of distributed detection of a randomly located target by a hybrid sensor network. Specifically, we develop two decision-fusion architectures for detection: in the first, the impact of node mobility is taken into account when updating decisions at the fusion center, while in the second the impact of node mobility is accounted for in the node-level decision updating. The cost of deploying mobile nodes is analyzed in terms of the minimum fraction of mobile nodes required to achieve the desired performance level within a desired delay constraint. Moreover, we consider managing node mobility under given constraints.

  3. Empirical Bayes Gaussian likelihood estimation of exposure distributions from pooled samples in human biomonitoring.

    Science.gov (United States)

    Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng

    2014-12-10

    Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Method to estimate position, motion and trajectory of a target with a single x-ray imager

    DEFF Research Database (Denmark)

    2010-01-01

    The present invention provides a method for estimation of retrospective and real-time 3D target position by a single imager. The invention includes imaging a target on at least one 2D plane to determine 2D position and/or position components of the target, and resolving a position and/or position component along at least one imager axis of the target using a spatial probability density. The present invention provides a probability-based method for accurate estimation of the mean position, motion magnitude, motion correlation, and trajectory of a tumor from CBCT projections. The applicability...

  5. Optimal Meter Placement for Distribution Network State Estimation: A Circuit Representation Based MILP Approach

    DEFF Research Database (Denmark)

    Chen, Xiaoshuang; Lin, Jin; Wan, Can

    2016-01-01

    State estimation (SE) in distribution networks is not as accurate as that in transmission networks. Traditionally, distribution networks (DNs) lack direct measurements due to limited investment and the difficulties of maintenance. Therefore, it is critical to improve the accuracy of SE in distribution networks by placing additional physical meters. For state-of-the-art SE models, it is difficult to clearly quantify measurements' influences on SE errors, so the problems of optimal meter placement for reducing SE errors are mostly solved by heuristic or suboptimal algorithms...

  6. Estimation of Bimodal Urban Link Travel Time Distribution and Its Applications in Traffic Analysis

    Directory of Open Access Journals (Sweden)

    Yuxiong Ji

    2015-01-01

    Vehicles travelling on urban streets are heavily influenced by traffic signal controls, pedestrian crossings, and conflicting traffic from cross streets, which can result in bimodal travel time distributions, with one mode corresponding to travel without delay and the other to travel with delay. A hierarchical Bayesian bimodal travel time model is proposed to capture the interrupted nature of urban traffic flows. The travel time distributions obtained from the proposed model are then used to analyze traffic operations and estimate travel time distributions in real time. The advantage of the proposed bimodal model is demonstrated using empirical data, and the results are encouraging.

  7. Estimation of the variation in target strength of objects in the air

    Science.gov (United States)

    Gudra, Tadeusz; Opielinski, Krzysztof J.; Jankowski, Jakub

    2010-01-01

    Target strength is one of the key quantities for sonar systems. The parameter is very useful in activities related to the detection and estimation of marine organisms, object identification, and sonar calibration. The essential difference between detection and ranging performed in water and in air results from the characteristics of those media. A feature common to both water and air is that only a longitudinal wave propagates in them. Ultrasonic wave propagation velocity in air changes when the medium's physical conditions alter (e.g. temperature, humidity, pressure, presence of other gases or pollution, and gas medium heterogeneity). In a water environment, not all solid objects can be assumed to be rigid and motionless; this approximation works better in air. This aspect is especially important when analyzing waves that penetrate the objects reflecting the ultrasonic wave. This paper presents calculation and measurement results for the target strength of objects, shaped as spheres and cylinders of infinite and finite length, placed in air. The difference between the calculated and measured target strength was analyzed, and the larger discrepancies observed for heterogeneous objects were pointed out.

  8. Estimation of subsurface dielectric target depth for GPR planetary exploration: Laboratory measurements and modeling

    Science.gov (United States)

    Lauro, Sebastian Emanuel; Mattei, Elisabetta; Barone, Pier Matteo; Pettinelli, Elena; Vannaroni, Giuliano; Valerio, Guido; Comite, Davide; Galli, Alessandro

    2013-06-01

    In order to test the accuracy of Ground Penetrating Radar (GPR) in the detection of subsurface targets for planetary exploration, a laboratory scale experiment is performed based on a 'sand box' setup using two different bistatic GPR commercial instruments. Specific attention is paid to the challenging case of buried dielectric scatterers whose location and dimensions are of the same order of magnitude of the GPR antenna separation and signal wavelengths. The target depth is evaluated by using the wave propagation velocity measured with Time Domain Reflectometry (TDR). By means of a proper modeling of the different wave-propagation contributions to the gathered signal, the position of buried targets is correctly estimated with both GPRs even for rather shallow and small-size scatterers in near-field conditions. In this frame, relevant results for a basalt block buried in a silica soil are discussed. The experimental configuration is also simulated with an ad-hoc numerical code, whose synthetic radar sections fully confirm the measured results. The acquired information is of paramount importance for the analysis of various scenarios involving GPR on-site application in future space missions.

  9. Distributional impact of rotavirus vaccination in 25 GAVI countries: estimating disparities in benefits and cost-effectiveness.

    Science.gov (United States)

    Rheingans, Richard; Atherly, Deborah; Anderson, John

    2012-04-27

    Other studies have demonstrated that the impact and cost effectiveness of rotavirus vaccination differs among countries, with greater mortality reduction benefits and lower cost-effectiveness ratios in low-income and high-mortality countries. This analysis combines the results of a country level model of rotavirus vaccination published elsewhere with data from Demographic and Health Surveys on within-country patterns of vaccine coverage and diarrhea mortality risk factors to estimate within-country distributional effects of rotavirus vaccination. The study examined 25 countries eligible for funding through the GAVI Alliance. For each country we estimate the benefits and cost-effectiveness of vaccination for each wealth quintile assuming current vaccination patterns and for a scenario where vaccine coverage is equalized to the highest quintile's coverage. In the case of India, variations in coverage and risk proxies by state were modeled to estimate geographic distributional effects. In all countries, rates of vaccination were highest and risks of mortality were lowest in the top two wealth quintiles. However countries differ greatly in the relative inequities in these two underlying variables. Similarly, in all countries examined, the cost-effectiveness ratio for vaccination ($/Disability-Adjusted Life Year averted, DALY) is substantially greater in the higher quintiles (ranging from 2-10 times higher). In all countries, the greatest potential benefit of vaccination was in the poorest quintiles. However, due to reduced vaccination coverage, projected benefits for these quintiles were often lower. Equitable coverage was estimated to result in an 89% increase in mortality reduction for the poorest quintile and a 38% increase overall. Rotavirus vaccination is most cost-effective in low-income groups and regions. However in many countries, simply adding new vaccines to existing systems targets investments to higher income children, due to disparities in vaccination

  10. Text-pose estimation in 3D using edge-direction distributions

    NARCIS (Netherlands)

    Bulacu, M.L.; Schomaker, L.R.B.; Kamel, M.; Campilho, A.

    2005-01-01

    This paper presents a method for estimating the orientation of planar text surfaces using the edge-direction distribution (EDD) extracted from the image as input to a neural network. We consider canonical rotations and we developed a mathematical model to analyze how the EDD changes with the

  11. Local distributed estimation. [for flexible spacecraft vibration mode optimal feedback control

    Science.gov (United States)

    Schaechter, D. B.

    1980-01-01

    Based on partial differential equations of motion, the closed-form solution for the optimal estimation of a spatially continuous state vector is derived, using a continuously distributed sensor. Local control is shown to be the feedback that minimizes a quadratic performance index of sensor and process disturbances. A detailed example of the control of a string in tension is presented.

  12. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    Directory of Open Access Journals (Sweden)

    Amad Hamza

    2016-10-01

    In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (reverberation time). For estimation, a well-known method is used: MLE (maximum likelihood estimation). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method, the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper derives from the fact that when the reverberant speech RT falls in a specific range, the signal's decay rate follows a Rayleigh distribution. On the basis of the results of experiments carried out for numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of other pre-existing methods.
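
    The Rayleigh MLE used above has a closed form: sigma_hat = sqrt(sum(x_i^2) / (2n)). A minimal sketch with simulated decay-rate data; the paper's spotting of free-decay regions in the reverberant signal is not reproduced.

        import numpy as np

        def rayleigh_mle(x):
            """Closed-form ML estimate of the Rayleigh scale parameter."""
            x = np.asarray(x, dtype=float)
            return np.sqrt(np.mean(x ** 2) / 2.0)

        rng = np.random.default_rng(6)
        rates = rng.rayleigh(scale=2.5, size=1000)   # hypothetical decay-rate samples
        print(rayleigh_mle(rates))                   # should be close to 2.5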

  13. Inequalities in cancer distribution in Tehran; A disaggregated estimation of 2007 incidence by 22 districts

    Directory of Open Access Journals (Sweden)

    Marzieh Rohani Rasaf

    2012-01-01

    Conclusion: This report provides an appropriate guide to estimating the cancer distribution within the districts of Tehran. The higher ASRs in districts 6, 1, 2, and 3 warrant further research to obtain robust population-based incidence data and to investigate the background predisposing factors in the specified districts.

  14. Estimating Traveler Populations at Airport and Cruise Terminals for Population Distribution and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Jochem, Warren C. [ORNL]; Sims, Kelly M. [ORNL]; Bright, Eddie A. [ORNL]; Urban, Marie L. [ORNL]; Rose, Amy N. [ORNL]; Coleman, Phil R. [ORNL]; Bhaduri, Budhendra L. [ORNL]

    2013-01-01

    In recent years, uses of high-resolution population distribution databases are increasing steadily for environmental, socioeconomic, public health, and disaster-related research and operations. With the development of daytime population distribution, temporal resolution of such databases has been improved. However, the lack of incorporation of transitional population, namely business and leisure travelers, leaves a significant population unaccounted for within the critical infrastructure networks, such as at transportation hubs. This paper presents two general methodologies for estimating passenger populations in airport and cruise port terminals at a high temporal resolution which can be incorporated into existing population distribution models. The methodologies are geographically scalable and are based on, and demonstrate how, two different transportation hubs with disparate temporal population dynamics can be modeled utilizing publicly available databases including novel data sources of flight activity from the Internet which are updated in near-real time. The airport population estimation model shows great potential for rapid implementation for a large collection of airports on a national scale, and the results suggest reasonable accuracy in the estimated passenger traffic. By incorporating population dynamics at high temporal resolutions into population distribution models, we hope to improve the estimates of populations exposed to or at risk to disasters, thereby improving emergency planning and response, and leading to more informed policy decisions.

  15. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    Science.gov (United States)

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km²), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate the SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.

  16. Spatial factor analysis: a new tool for estimating joint species distributions and correlations in species range

    DEFF Research Database (Denmark)

    Thorson, James T.; Scheuerell, Mark D.; Shelton, Andrew O.

    2015-01-01

    ...be imprecise for species with low densities or few observations. Additionally, simple geostatistical methods fail to account for correlations in distribution among species and generally estimate such cross-correlations as a post hoc exercise. 2. We therefore present spatial factor analysis (SFA), a spatial...

  17. Estimation and implications of random errors in whole-body dosimetry for targeted radionuclide therapy

    Science.gov (United States)

    Flux, Glenn D.; Guy, Matthew J.; Beddows, Ruth; Pryor, Matthew; Flower, Maggie A.

    2002-09-01

    For targeted radionuclide therapy, the level of activity to be administered is often determined from whole-body dosimetry performed on a pre-therapy tracer study. The largest potential source of error in this method is due to inconsistent or inaccurate activity retention measurements. The main aim of this study was to develop a simple method to quantify the uncertainty in the absorbed dose due to these inaccuracies. A secondary aim was to assess the effect of error propagation from the results of the tracer study to predictive absorbed dose estimates for the therapy as a result of using different radionuclides for each. Standard error analysis was applied to the MIRD schema for absorbed dose calculations. An equation was derived to describe the uncertainty in the absorbed dose estimate due solely to random errors in activity-time data, requiring only these data as input. Two illustrative examples are given. It is also shown that any errors present in the dosimetry calculations following the tracer study will propagate to errors in predictions made for the therapy study according to the ratio of the respective effective half-lives. If the therapy isotope has a much longer physical half-life than the tracer isotope (as is the case, for example, when using 123I as a tracer for 131I therapy) the propagation of errors can be significant. The equations derived provide a simple means to estimate two potentially large sources of error in whole-body absorbed dose calculations.
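
    A hedged numerical illustration of the idea: fit a mono-exponential retention curve to activity-time data, integrate it to get the time-integrated activity (proportional to the whole-body absorbed dose in the MIRD schema), and propagate random measurement errors by Monte Carlo. The retention values and the 5% error level are hypothetical, and the paper derives an analytical error formula rather than this simulation.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])            # hours post-administration
        retention = np.array([0.95, 0.88, 0.55, 0.33, 0.12])  # fraction of activity retained

        def mono_exp(t, a, lam):
            return a * np.exp(-lam * t)

        rng = np.random.default_rng(7)
        aucs = []
        for _ in range(2000):
            noisy = retention * rng.normal(1.0, 0.05, retention.size)  # 5% random error
            (a, lam), _ = curve_fit(mono_exp, t, noisy, p0=(1.0, 0.02))
            aucs.append(a / lam)   # integral of a*exp(-lam*t) over [0, infinity)
        aucs = np.array(aucs)
        print(f"relative dose uncertainty ~ {aucs.std() / aucs.mean():.1%}")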

  18. A modified weighted function method for parameter estimation of Pearson type three distribution

    Science.gov (United States)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called the Modified Weighted Function (MWF) method is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to reduce the estimation of the coefficient of variation (CV) and coefficient of skewness (CS) from higher-moment computations to first-order moment calculations. The estimators for the CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of the weight function, and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte-Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared in a Monte-Carlo experiment in which samples were drawn from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but were all treated as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.

  19. Site-occupancy distribution modeling to correct population-trend estimates derived from opportunistic observations

    Science.gov (United States)

    Kery, M.; Royle, J. Andrew; Schmid, Hans; Schaub, M.; Volet, B.; Hafliger, G.; Zbinden, N.

    2010-01-01

    Species' assessments must frequently be derived from opportunistic observations made by volunteers (i.e., citizen scientists). Interpretation of the resulting data to estimate population trends is plagued with problems, including teasing apart genuine population trends from variations in observation effort. We devised a way to correct for annual variation in effort when estimating trends in occupancy (species distribution) from faunal or floral databases of opportunistic observations. First, for all surveyed sites, detection histories (i.e., strings of detection-nondetection records) are generated. Within-season replicate surveys provide information on the detectability of an occupied site. Detectability directly represents observation effort; hence, estimating detectability means correcting for observation effort. Second, site-occupancy models are applied directly to the detection-history data set (i.e., without aggregation by site and year) to estimate detectability and species distribution (occupancy, i.e., the true proportion of sites where a species occurs). Site-occupancy models also provide unbiased estimators of components of distributional change (i.e., colonization and extinction rates). We illustrate our method with data from a large citizen-science project in Switzerland in which field ornithologists record opportunistic observations. We analyzed data collected on four species: the widespread Kingfisher (Alcedo atthis) and Sparrowhawk (Accipiter nisus) and the scarce Rock Thrush (Monticola saxatilis) and Wallcreeper (Tichodroma muraria). Our method requires that all observed species are recorded. Detectability was...
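
    A minimal sketch of the core single-season site-occupancy likelihood applied to detection-history data, with occupancy psi and per-survey detectability p estimated on the logit scale. Site counts and probabilities are hypothetical, and the colonization/extinction extensions mentioned above are omitted.

        import numpy as np
        from scipy.optimize import minimize

        def occupancy_nll(params, y, K):
            """Negative log likelihood; y = detections per site, K = surveys per site."""
            psi = 1.0 / (1.0 + np.exp(-params[0]))   # occupancy probability
            p = 1.0 / (1.0 + np.exp(-params[1]))     # detection probability per survey
            like_detected = psi * p ** y * (1.0 - p) ** (K - y)
            like_missed = psi * (1.0 - p) ** K + (1.0 - psi)  # occupied-but-missed or absent
            ll = np.where(y > 0, np.log(like_detected), np.log(like_missed))
            return -ll.sum()

        rng = np.random.default_rng(8)
        occupied = rng.random(200) < 0.6             # 200 sites, true psi = 0.6
        y = rng.binomial(3, 0.4, 200) * occupied     # 3 surveys per site, true p = 0.4
        fit = minimize(occupancy_nll, x0=[0.0, 0.0], args=(y, 3))
        psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
        print(psi_hat, p_hat)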

  20. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976
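
    A compact sketch of one simple iterative scheme consistent with the setup above: a Richardson-type iteration on the weighted least-squares normal equations, x <- x + a * H^T W (z - H x), in which each sub-system would only form its own block of the update and exchange boundary terms with neighbors. Here the network is simulated centrally, and the paper's scaling-parameter and preconditioning refinements are not reproduced.

        import numpy as np

        rng = np.random.default_rng(9)
        n, m = 9, 30                           # e.g., 3 sub-systems with 3 states each
        H = rng.standard_normal((m, n))        # stacked local measurement matrices
        W = np.diag(rng.uniform(0.5, 2.0, m))  # measurement weights (inverse noise variances)
        x_true = rng.standard_normal(n)
        z = H @ x_true + 0.05 * rng.standard_normal(m)

        A = H.T @ W @ H
        a = 1.0 / np.linalg.eigvalsh(A).max()  # step size small enough for convergence
        x = np.zeros(n)
        for _ in range(500):
            x = x + a * H.T @ W @ (z - H @ x)  # each block row is computable locally

        x_central = np.linalg.solve(A, H.T @ W @ z)
        print(np.max(np.abs(x - x_central)))   # iterates approach the centralized WLS optimum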

  1. Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation.

    Science.gov (United States)

    Zhao, Wei; Wang, Han

    2016-06-28

    Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It takes the age of each facial image instance as a label distribution over a series of age labels rather than the single chronological age label that is commonly used. However, this methodology is deficient in its simple decision-making criterion: the final predicted age is selected only as the label with the maximum description degree. In many cases, different age labels may have very similar description degrees. Consequently, blindly deciding the estimated age by virtue of the highest description degree would miss or neglect other valuable age labels that may contribute substantially to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results on the most popular aging face database, FG-NET, show the superiority and validity of all the proposed strategic decision-making learning algorithms over existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties and further advantages of SDM-LDL are also explored.

  2. System effectiveness of a targeted free mass distribution of long lasting insecticidal nets in Zanzibar, Tanzania

    Directory of Open Access Journals (Sweden)

    Abass, Ali K.

    2010-06-01

    Background: Insecticide-treated nets (ITN) and long-lasting insecticidal treated nets (LLIN) are important means of malaria prevention. Although there is consensus regarding their importance, there is uncertainty as to which delivery strategies are optimal for dispensing these life saving interventions. A targeted mass distribution of free LLINs to children under five and pregnant women was implemented in Zanzibar between August 2005 and January 2006. The outcomes of this distribution among children under five were evaluated, four to nine months after implementation. Methods: Two cross-sectional surveys were conducted in May 2006 in two districts of Zanzibar: Micheweni (MI) on Pemba Island and North A (NA) on Unguja Island. Household interviews were conducted with 509 caretakers of under-five children, who were surveyed for socio-economic status, the net distribution process, perceptions and use of bed nets. Each step in the distribution process was assessed in all children one to five years of age for unconditional and conditional proportion of success. System effectiveness (the accumulated proportion of success) and equity effectiveness were calculated, and predictors for LLIN use were identified. Results: The overall proportion of children under five sleeping under any type of treated net was 83.7% (318/380) in MI and 91.8% (357/389) in NA. The LLIN usage was 56.8% (216/380) in MI and 86.9% (338/389) in NA. Overall system effectiveness was 49% in MI and 87% in NA, and equity was found in the distribution scale-up in NA. In both districts, the predicting factor of a child sleeping under an LLIN was caretakers thinking that LLINs are better than conventional nets (OR = 2.8, p = 0.005 in MI and 2.5, p = 0.041 in NA), in addition to receiving an LLIN (OR = 4.9, p < 0.001 in MI and OR = 30.1, p = 0.001 in NA). Conclusions: Targeted free mass distribution of LLINs can result in high and equitable bed net coverage among children under five. However, in order to sustain high effective coverage, there

  3. System effectiveness of a targeted free mass distribution of long lasting insecticidal nets in Zanzibar, Tanzania.

    Science.gov (United States)

    Beer, Netta; Ali, Abdullah S; de Savigny, Don; Al-Mafazy, Abdul-Wahiyd H; Ramsan, Mahdi; Abass, Ali K; Omari, Rahila S; Björkman, Anders; Källander, Karin

    2010-06-18

    Insecticide-treated nets (ITN) and long-lasting insecticidal treated nets (LLIN) are important means of malaria prevention. Although there is consensus regarding their importance, there is uncertainty as to which delivery strategies are optimal for dispensing these life saving interventions. A targeted mass distribution of free LLINs to children under five and pregnant women was implemented in Zanzibar between August 2005 and January 2006. The outcomes of this distribution among children under five were evaluated, four to nine months after implementation. Two cross-sectional surveys were conducted in May 2006 in two districts of Zanzibar: Micheweni (MI) on Pemba Island and North A (NA) on Unguja Island. Household interviews were conducted with 509 caretakers of under-five children, who were surveyed for socio-economic status, the net distribution process, perceptions and use of bed nets. Each step in the distribution process was assessed in all children one to five years of age for unconditional and conditional proportion of success. System effectiveness (the accumulated proportion of success) and equity effectiveness were calculated, and predictors for LLIN use were identified. The overall proportion of children under five sleeping under any type of treated net was 83.7% (318/380) in MI and 91.8% (357/389) in NA. The LLIN usage was 56.8% (216/380) in MI and 86.9% (338/389) in NA. Overall system effectiveness was 49% in MI and 87% in NA, and equity was found in the distribution scale-up in NA. In both districts, the predicting factor of a child sleeping under an LLIN was caretakers thinking that LLINs are better than conventional nets (OR = 2.8, p = 0.005 in MI and 2.5, p = 0.041 in NA), in addition to receiving an LLIN (OR = 4.9, p < 0.001 in MI and in OR = 30.1, p = 0.001 in NA). Targeted free mass distribution of LLINs can result in high and equitable bed net coverage among children under five. However, in order to sustain high effective coverage, there is need

  4. Decoupled Estimation of 2D DOA for Coherently Distributed Sources Using 3D Matrix Pencil Method

    Directory of Open Access Journals (Sweden)

    Tang Bin

    2008-08-01

    Full Text Available A new 2D DOA estimation method for coherently distributed (CD) sources is proposed. The CD source model is constructed by applying a Taylor approximation to the generalized steering vector (GSV), whereby the angular and angular-spread information are separated in the signal pattern: the angular information lies in the phase part of the GSV and the angular-spread information in the modulus part, so that the estimation of the 2D DOA can be decoupled from that of the angular spread. The array output data are used to construct a three-dimensional (3D) enhanced data matrix, from which the 2D DOAs of the coherently distributed sources are estimated using the 3D matrix pencil method. Computer simulations validate the efficiency of the algorithm.
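
    To make the matrix pencil idea concrete, here is a minimal numpy sketch for the much simpler one-dimensional, point-source case (uniform linear array, single noiseless snapshot, no angular spread); the paper's 3D enhanced-matrix construction for coherently distributed sources is not reproduced, and the array geometry and angles below are illustrative assumptions.

        import numpy as np

        # Matrix pencil DOA estimation for point sources on a uniform
        # linear array (1D illustration of the pencil principle only).
        def matrix_pencil_doa(x, n_sources, d_over_lambda=0.5):
            m = len(x)
            L = m // 2                           # pencil parameter
            # Hankel data matrices, shifted by one element
            Y = np.array([x[i:i + L + 1] for i in range(m - L)])
            Y1, Y2 = Y[:, :-1], Y[:, 1:]
            # Nonzero eigenvalues of pinv(Y1) @ Y2 are the spatial poles
            poles = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
            # Keep the n_sources poles closest to the unit circle
            keep = np.argsort(np.abs(np.abs(poles) - 1))[:n_sources]
            phases = np.angle(poles[keep])
            return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

        # Two sources at -10 and 25 degrees, 16-element half-wavelength array
        angles = np.radians([-10.0, 25.0])
        n = np.arange(16)
        x = sum(np.exp(2j * np.pi * 0.5 * n * np.sin(a)) for a in angles)
        print(np.round(np.sort(matrix_pencil_doa(x, 2)), 1))   # [-10.  25.]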

  5. Estimation of fatigue and extreme load distributions from limited data with application to wind energy systems.

    Energy Technology Data Exchange (ETDEWEB)

    Fitzwater, LeRoy M. (Stanford University, Stanford, CA)

    2004-01-01

    An estimate of the distribution of fatigue ranges or extreme loads for wind turbines may be obtained by separating the problem into two uncoupled parts: (1) a turbine-specific portion, independent of the site, and (2) a site-specific description of environmental variables. We consider contextually appropriate probability models to describe the turbine-specific response for extreme loads or fatigue. The site-specific portion is described by a joint probability distribution of a vector of environmental variables, which characterize the wind process at the hub height of the wind turbine. Several approaches are considered for combining the two portions to obtain an estimate of the extreme load, e.g., the 50-year load or fatigue damage. We assess the efficacy of these models in obtaining accurate estimates of the turbine response, accounting for various levels of epistemic uncertainty.
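
    As a toy illustration of this two-part separation, the sketch below pairs an assumed site-specific Weibull wind climate with an assumed turbine-specific conditional Gumbel extreme-load model and solves for the 50-year load; every distribution, coefficient and unit is a made-up stand-in rather than a value from the report, and successive 10-minute extremes are treated as independent.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(1)

        # Site-specific part: 10-min mean hub-height wind speed, Weibull(k, c)
        k, c = 2.0, 8.5                                  # illustrative, m/s
        v = c * rng.weibull(k, size=200_000)

        # Turbine-specific part: 10-min extreme load given wind speed,
        # Gumbel with wind-dependent location and scale (invented forms)
        mu, beta = 100.0 + 12.0 * v, 5.0 + 0.8 * v       # kNm

        def longterm_cdf(load):
            # Long-term load CDF: Gumbel CDF averaged over the wind climate
            return np.mean(np.exp(-np.exp(-(load - mu) / beta)))

        # 50-year load: exceeded once, on average, per 50 years of 10-min blocks
        n_blocks = 50 * 365.25 * 24 * 6
        l50 = brentq(lambda l: longterm_cdf(l) - (1 - 1 / n_blocks), 100, 2000)
        print(f"50-year extreme load estimate: {l50:.0f} kNm")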

  6. Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dexin; Yang, Liuqing; Florita, Anthony; Alam, S.M. Shafiul; Elgindy, Tarek; Hodge, Bri-Mathias

    2016-08-01

    The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral-clustering-based AR algorithms. Simulations show that the proposed algorithms outperform the two investigated manual regionalization cases. With the help of the AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE, and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
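
    A minimal sketch of spectral-clustering-based regionalization on a toy six-bus network (scikit-learn is assumed available; the affinity matrix is an invented stand-in for electrical-distance weights that a real study would derive from the network admittance matrix):

        import numpy as np
        from sklearn.cluster import SpectralClustering

        # Toy 6-bus system: two tightly coupled groups joined by weak ties
        W = np.array([
            [0, 5, 4, 0, 0, 0],
            [5, 0, 6, 1, 0, 0],
            [4, 6, 0, 0, 1, 0],
            [0, 1, 0, 0, 5, 4],
            [0, 0, 1, 5, 0, 6],
            [0, 0, 0, 4, 6, 0],
        ], dtype=float)

        labels = SpectralClustering(
            n_clusters=2, affinity="precomputed", random_state=0
        ).fit_predict(W)
        print(labels)   # buses 0-2 and 3-5 land in separate regions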

  7. Multivariate analysis for the estimation of target localization errors in fiducial marker-based radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Takamiya, Masanori [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501, Japan and Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Nakamura, Mitsuhiro, E-mail: m-nkmr@kuhp.kyoto-u.ac.jp; Akimoto, Mami; Ueki, Nami; Yamada, Masahiro; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Tanabe, Hiroaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047 (Japan); Kokubo, Masaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047, Japan and Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe 650-0047 (Japan); Itoh, Akio [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501 (Japan)

    2016-04-15

    Purpose: To assess the target localization error (TLE) in terms of the distance between the target and the localization point estimated from the surrogates (|TMD|), the average respiratory motion of the surrogates and the target (|aRM|), and the number of fiducial markers used for estimating the target (n). Methods: This study enrolled 17 lung cancer patients who subsequently underwent four fractions of real-time tumor tracking irradiation. Four or five fiducial markers were implanted around the lung tumor. The three-dimensional (3D) distance between the tumor and markers was at maximum 58.7 mm. One of the markers was used as the target (P_t), and those markers with a 3D |TMD_n| ≤ 58.7 mm at end-exhalation were then selected. The estimated target position (P_e) was calculated from a localization point consisting of one to three markers other than P_t. Respiratory motion for P_t and P_e was defined as the root mean square of each displacement, and |aRM| was calculated from the mean value. TLE was defined as the root mean square of each difference between P_t and P_e during the monitoring of each fraction. These procedures were performed repeatedly using the remaining markers. To provide the best guidance on the choice of n and |TMD|, fiducial markers with a 3D |aRM| ≥ 10 mm were selected. Finally, a total of 205, 282, and 76 TLEs that fulfilled the 3D |TMD| and 3D |aRM| criteria were obtained for n = 1, 2, and 3, respectively. Multiple regression analysis (MRA) was used to evaluate TLE as a function of |TMD| and |aRM| for each n. Results: |TMD| for n = 1 was larger than that for n = 3. Moreover, |aRM| was almost constant for all n, indicating a similar scale of motion for the markers near the lung tumor. MRA showed that |aRM| in the left–right direction was the major cause of TLE; however, the contribution made little difference to the 3D TLE because of the small amount of motion in the left–right direction. The TLE

  8. Diffusion-based EM algorithm for distributed estimation of Gaussian mixtures in wireless sensor networks.

    Science.gov (United States)

    Weng, Yang; Xiao, Wendong; Xie, Lihua

    2011-01-01

    Distributed estimation of Gaussian mixtures has many applications in wireless sensor networks (WSN), and an energy-efficient solution is still challenging. This paper presents a novel diffusion-based EM algorithm for this problem. A diffusion strategy is introduced for acquiring the global statistics in the EM algorithm, in which each sensor node only needs to communicate its local statistics to its neighboring nodes at each iteration. This improves on the existing consensus-based distributed EM algorithm, which may require much more communication overhead to reach consensus, especially in large-scale networks. Robustness and scalability are achieved through distributed processing in the network. In addition, we show that the proposed approach can be regarded as a stochastic approximation method for finding the maximum likelihood estimate of a Gaussian mixture. Simulation results show the efficiency of this approach.
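
    The numpy sketch below conveys the flavor of such a scheme for a 1D two-component mixture with known unit variances: each node computes local sufficient statistics in its E-step, averages them with its neighbors only (the diffusion step), then runs a local M-step. The topology, data and initialization are illustrative assumptions, not the paper's protocol.

        import numpy as np

        rng = np.random.default_rng(0)

        # Four sensor nodes, each holding local readings from the same mixture
        data = [np.concatenate([rng.normal(-2, 1, 50), rng.normal(3, 1, 50)])
                for _ in range(4)]
        neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}

        mu = [np.array([-1.0, 1.0]) for _ in range(4)]   # per-node means
        pi = [np.array([0.5, 0.5]) for _ in range(4)]    # per-node weights

        for _ in range(40):
            S0, S1 = [], []
            for i, x in enumerate(data):                 # local E-step
                r = pi[i] * np.exp(-0.5 * (x[:, None] - mu[i]) ** 2)
                r /= r.sum(axis=1, keepdims=True)
                S0.append(r.sum(axis=0))                 # responsibility mass
                S1.append((r * x[:, None]).sum(axis=0))  # weighted data sums
            # Diffusion step: exchange statistics with neighbors only
            S0 = [np.mean([S0[j] for j in neighbors[i]], axis=0) for i in range(4)]
            S1 = [np.mean([S1[j] for j in neighbors[i]], axis=0) for i in range(4)]
            for i in range(4):                           # local M-step
                mu[i] = S1[i] / S0[i]
                pi[i] = S0[i] / S0[i].sum()

        print(np.round(mu, 2))   # every node's means approach (-2, 3)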

  9. An ML-Based Radial Velocity Estimation Algorithm for Moving Targets in Spaceborne High-Resolution and Wide-Swath SAR Systems

    Directory of Open Access Journals (Sweden)

    Tingting Jin

    2017-04-01

    Full Text Available Multichannel synthetic aperture radar (SAR) is a significant breakthrough that overcomes the inherent trade-off between high resolution and wide swath (HRWS) in conventional SAR. Moving target indication (MTI) is an important application of spaceborne HRWS SAR systems. In contrast to previous studies of SAR MTI, HRWS SAR mainly faces the problem of under-sampled data in each channel, which makes single-channel imaging and processing infeasible. In this study, the estimation of velocity is made equivalent to the estimation of the cone angle, according to their geometric relationship. A maximum likelihood (ML) based algorithm is proposed to estimate the radial velocity in the presence of Doppler ambiguities. After that, signal reconstruction and compensation for the phase offset caused by the radial velocity are performed for the moving target. Finally, a traditional imaging algorithm is applied to obtain a focused moving-target image. Experiments are conducted to evaluate the accuracy and effectiveness of the estimator under different signal-to-noise ratios (SNR). Furthermore, the performance is analyzed for a moving ship observed against sea clutter with different distributions. The results verify that the proposed algorithm is accurate and efficient, with low computational complexity. This paper aims at providing a solution to the velocity estimation problem in future HRWS SAR systems with multiple receive channels.

  10. Estimating distribution of hidden objects with drones: from tennis balls to manatees.

    Directory of Open Access Journals (Sweden)

    Julien Martin

    Full Text Available Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95% CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.

  11. Estimating distribution of hidden objects with drones: from tennis balls to manatees.

    Science.gov (United States)

    Martin, Julien; Edwards, Holly H; Burgess, Matthew A; Percival, H Franklin; Fagan, Daniel E; Gardner, Beth E; Ortega-Ortiz, Joel G; Ifju, Peter G; Evers, Brandon S; Rambo, Thomas J

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.
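
    The detection-corrected estimation idea can be sketched with a constant-probability occupancy model, far simpler than the spatially explicit model used in the study; the simulation below mimics three repeated surveys over a grid of cells and recovers the number of occupied cells by maximum likelihood (all numbers are invented):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)

        # 400 cells, true occupancy 0.6, per-visit detection probability 0.5
        n_sites, n_visits = 400, 3
        z = rng.uniform(size=n_sites) < 0.6              # latent occupancy
        y = (rng.uniform(size=(n_sites, n_visits)) < 0.5) & z[:, None]

        def nll(theta):
            psi, p = 1 / (1 + np.exp(-theta))            # logit-scale params
            d = y.sum(axis=1)
            seen = psi * p**d * (1 - p)**(n_visits - d)  # detected at least once
            never = psi * (1 - p)**n_visits + (1 - psi)  # all-zero history
            return -np.sum(np.where(d > 0, np.log(seen), np.log(never)))

        psi_hat, p_hat = 1 / (1 + np.exp(-minimize(nll, [0.0, 0.0]).x))
        print(f"estimated occupied cells: {psi_hat * n_sites:.0f} (true {z.sum()})")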

  12. Estimating the flood frequency distribution at seasonal and annual time scales

    Science.gov (United States)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Bezzi, A.

    2012-12-01

    We propose an original approach to infer the flood frequency distribution at seasonal and annual time scales. Our purpose is to estimate the peak flow that is expected for an assigned return period T, independently of the season in which it occurs (i.e. the annual flood frequency regime), as well as in different selected sub-yearly periods (i.e. the seasonal flood frequency regime). While a huge literature exists on annual flood frequency analysis, few studies have focused on the estimation of seasonal flood frequencies, despite the relevance of the issue, for instance when scheduling, across the months of the year, the construction phases of river engineering works that interact directly with the active river bed, such as dams. An approximate method for joint frequency analysis is presented here that guarantees consistency between fitted annual and seasonal distributions, i.e. the annual cumulative distribution is the product of the seasonal cumulative distribution functions, under the assumption of independence among floods in different seasons. In our method the parameters of the seasonal frequency distributions are fitted by maximising an objective function that accounts for the likelihoods of both seasonal and annual peaks. In contrast to previous studies, our procedure is conceived to allow the users to introduce subjective weights to the components of the objective function in order to emphasize the fitting of specific seasons or of the annual peak flow distribution. An application to the time series of the Blue Nile daily flows at the Sudan-Ethiopia border is presented.
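
    In notation assumed here for illustration, the consistency constraint and the weighted estimation criterion described above can be written as

        F_{\mathrm{ann}}(q) = \prod_{s=1}^{S} F_s(q; \theta_s),
        \qquad
        \hat{\theta} = \arg\max_{\theta} \Big[ \sum_{s=1}^{S} w_s \, \ell_s(\theta_s)
                       + w_{\mathrm{ann}} \, \ell_{\mathrm{ann}}(\theta_1, \dots, \theta_S) \Big],

    where F_s is the flood frequency distribution of season s with parameters \theta_s, \ell_s and \ell_{\mathrm{ann}} are the log-likelihoods of the seasonal and annual peaks (the latter evaluated under the product distribution), and the weights w_s and w_{\mathrm{ann}} are the user-chosen weights that emphasize particular seasons or the annual regime.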

  13. Estimating the flood frequency distribution at seasonal and annual time scale

    Science.gov (United States)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Bezzi, A.

    2012-06-01

    We propose an original approach to infer the flood frequency distribution at seasonal and annual time scales. Our purpose is to estimate the peak flow that is expected for an assigned return period T, independently of the season in which it occurs (i.e. the annual flood frequency regime), as well as in different selected sub-yearly periods (i.e. the seasonal flood frequency regime). While a huge literature exists on annual flood frequency analysis, few studies have focused on the estimation of seasonal flood frequencies, despite the relevance of the issue, for instance when scheduling, across the months of the year, the construction phases of river engineering works that interact directly with the active river bed, such as dams. An approximate method for joint frequency analysis is presented here that guarantees consistency between fitted annual and seasonal distributions, i.e. the annual cumulative distribution is the product of the seasonal cumulative distribution functions, under the assumption of independence among floods in different seasons. In our method the parameters of the seasonal frequency distributions are fitted by maximising an objective function that accounts for the likelihoods of both seasonal and annual peaks. In contrast to previous studies, our procedure is conceived to allow the users to introduce subjective weights to the components of the objective function in order to emphasize the fitting of specific seasons or of the annual peak flow distribution. An application to the time series of the Blue Nile daily flows at the Sudan-Ethiopia border is presented.

  14. Residence time dispersion as a general measure of drug distribution kinetics: estimation and physiological interpretation.

    Science.gov (United States)

    Weiss, Michael

    2007-11-01

    To evaluate distribution kinetics of drugs by the relative dispersion of disposition residence time and demonstrate its uses, interpretation and limitations. The relative dispersion was estimated from drug disposition data of inulin and digoxin fitted by three-exponential functions, and calculated from compartmental parameters published for fentanyl and alfentanil. An interpretation is given in terms of a lumped organs model and the distributional equilibration process in a noneliminating system. As a measure of the deviation from mono-exponential disposition (one-compartment behavior), the relative dispersion provides information on the distribution kinetics of drugs, i.e., diffusion-limited distribution or slow tissue binding, without assuming a specific structural model. It also defines the total distribution clearance which has a clear physical meaning. The residence time dispersion is a model-independent measure that can be used to characterize the distribution kinetics of drugs and to reveal the influence of disease states. It can be estimated with high precision from drug disposition data.

  15. Exchange rate and interest rate distribution and volatility under the Portuguese target zone

    Directory of Open Access Journals (Sweden)

    Portugal Duarte António

    2010-01-01

    Full Text Available The aim of this study is to analyse the exchange rate and interest rate distribution and volatility under the participation of the Portuguese economy in the Exchange Rate Mechanism (ERM) of the European Monetary System (EMS), based on some of the main predictions of the target zone literature. Portugal adopted this exchange rate target zone from April 6, 1992 until December 31, 1998. During this period, the exchange rate distribution reveals that the majority of the observations lie close to the central parity, thus rejecting one of the key predictions of the Paul Krugman (1991) model. The analysis of the data also shows that exchange rate volatility tended to increase as the exchange rate approached the edges of the band, contrary to the predictions of the basic model. Interest rate differential volatility, on the other hand, seemed to behave in line with theoretical predictions. This suggests an increase in the credibility of monetary policy, allowing us to conclude that the adoption of a target zone has contributed decisively to the creation of the macroeconomic stability conditions necessary for the participation in the European Monetary Union (EMU). The Portuguese integration process should therefore be considered as an example to be followed by other small open economies in transition to the euro area.

  16. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    Directory of Open Access Journals (Sweden)

    Álvaro Gutiérrez

    2011-11-01

    Full Text Available Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA's control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost, measured as the average distance traveled by the robots, is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robot distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.

  17. A revival of the autoregressive distributed lag model in estimating energy demand relationships

    Energy Technology Data Exchange (ETDEWEB)

    Bentzen, J.; Engsted, T.

    1999-07-01

    The findings in the recent energy economics literature that energy economic variables are non-stationary have led to an implicit or explicit dismissal of the standard autoregressive distributed lag (ARDL) model in estimating energy demand relationships. However, Pesaran and Shin (1997) show that the ARDL model remains valid when the underlying variables are non-stationary, provided the variables are co-integrated. In this paper we use the ARDL approach to estimate a demand relationship for Danish residential energy consumption, and the ARDL estimates are compared to the estimates obtained using co-integration techniques and error-correction models (ECMs). It turns out that both quantitatively and qualitatively, the ARDL approach and the co-integration/ECM approach give very similar results. (au)
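
    As a sketch of how such an ARDL estimation might look in code (the ARDL class in statsmodels, available from version 0.13 on, is assumed; the series below are synthetic stand-ins, not the Danish consumption data):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.ardl import ARDL

        rng = np.random.default_rng(7)

        # Synthetic stand-ins for log energy consumption, income and price:
        # I(1) regressors plus a cointegrating relation, as in the ARDL setting
        n = 120
        income = np.cumsum(rng.normal(0.01, 0.02, n))
        price = np.cumsum(rng.normal(0.00, 0.03, n))
        energy = 0.6 * income - 0.3 * price + rng.normal(0, 0.02, n)
        df = pd.DataFrame({"energy": energy, "income": income, "price": price})

        # ARDL(1; 1, 1): one lag of the dependent variable, one of each regressor
        res = ARDL(df["energy"], lags=1,
                   exog=df[["income", "price"]], order=1).fit()
        print(res.summary())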

  18. Mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer

    Directory of Open Access Journals (Sweden)

    Mirai Tanaka

    2017-01-01

    Full Text Available The convolution of a series of events is often observed for a variety of phenomena such as the oscillation of a string. A photochemical reaction of a molecule is characterized by a time constant, but materials in the real world contain several molecules with different time constants. Therefore, the kinetics of photochemical reactions of the materials are usually observed with a complexity comparable with those of theoretical kinetic equations. Analysis of the components of the kinetics is quite important for the development of advanced materials. However, with a limited number of exceptions, deconvolution of the observed kinetics has not yet been mathematically solved. In this study, we propose a mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer. In the proposed approach, time-series data of absorbances are acquired and an estimate of the quantum yield distribution is obtained. To estimate the distribution, we solve a mathematical optimization problem to minimize the difference between the input data and a model. This optimization problem involves a differential equation constrained on a functional space as the variable lies in the space of probability distribution functions and the constraints arise from reaction rate equations. This problem can be reformulated as a convex quadratic optimization problem and can be efficiently solved by discretization. Numerical results are also reported here, and they verify the effectiveness of our approach.
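
    A minimal sketch of the discretized convex problem, under assumed forms (single-exponential photoreaction kinetics on a grid of rate constants, with the quantum yield distribution represented by nonnegative weights that sum to one):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)

        # Grid of candidate rate constants (standing in for quantum yields
        # under fixed irradiation) and a model matrix of decay curves
        ks = np.linspace(0.05, 2.0, 40)
        t = np.linspace(0.0, 10.0, 80)
        A = np.exp(-np.outer(t, ks))

        # Synthetic "observed" absorbance from a two-peaked true distribution
        w_true = np.exp(-0.5 * ((ks - 0.3) / 0.05) ** 2)
        w_true += 0.5 * np.exp(-0.5 * ((ks - 1.2) / 0.1) ** 2)
        w_true /= w_true.sum()
        y = A @ w_true + rng.normal(0, 1e-3, t.size)

        # Convex QP: minimize ||A w - y||^2 over the probability simplex
        res = minimize(
            lambda w: np.sum((A @ w - y) ** 2),
            x0=np.full(ks.size, 1 / ks.size),
            jac=lambda w: 2 * A.T @ (A @ w - y),
            bounds=[(0, None)] * ks.size,
            constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
            method="SLSQP",
        )
        print("recovered mass in 0.2 < k < 0.4:",
              round(res.x[(ks > 0.2) & (ks < 0.4)].sum(), 2))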

  19. Parametric distributions of underdiagnosis parameters used to estimate annual burden of illness for five foodborne pathogens.

    Science.gov (United States)

    Ebel, Eric D; Williams, Michael S; Schlosser, Wayne D

    2012-04-01

    Estimates of the burden of bacterial foodborne illness are used in applications ranging from determining economic losses due to a particular pathogenic organism to improving our understanding of the effects of antimicrobial resistance or changes in pathogen serotype. Estimates of the total number of illnesses can be derived by multiplying the number of observed illnesses, as reported by a specific active surveillance system, by an underdiagnosis factor that describes the relationship between observed and unobserved cases. The underdiagnosis factor can be a fixed value, but recent research efforts have focused on characterizing the inherent uncertainty in the surveillance system with a computer simulation. Although the inclusion of uncertainty is beneficial, re-creating the simulation results for every application can be burdensome. An alternative approach is to describe the underdiagnosis factor and its uncertainty with a parametric distribution. The use of such a distribution simplifies analyses by providing a closed-form definition of the underdiagnosis factor and allows this factor to be easily incorporated into Bayesian models. In this article, we propose and estimate parametric distributions for the underdiagnosis multipliers developed for the FoodNet surveillance systems in the United States. Distributions are provided for the five foodborne pathogens deemed most relevant to meat and poultry.

  20. Mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer

    Science.gov (United States)

    Tanaka, Mirai; Yamashita, Takashi; Sano, Natsuki; Ishigaki, Aya; Suzuki, Tomomichi

    2017-01-01

    The convolution of a series of events is often observed for a variety of phenomena such as the oscillation of a string. A photochemical reaction of a molecule is characterized by a time constant, but materials in the real world contain several molecules with different time constants. Therefore, the kinetics of photochemical reactions of the materials are usually observed with a complexity comparable with those of theoretical kinetic equations. Analysis of the components of the kinetics is quite important for the development of advanced materials. However, with a limited number of exceptions, deconvolution of the observed kinetics has not yet been mathematically solved. In this study, we propose a mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer. In the proposed approach, time-series data of absorbances are acquired and an estimate of the quantum yield distribution is obtained. To estimate the distribution, we solve a mathematical optimization problem to minimize the difference between the input data and a model. This optimization problem involves a differential equation constrained on a functional space as the variable lies in the space of probability distribution functions and the constraints arise from reaction rate equations. This problem can be reformulated as a convex quadratic optimization problem and can be efficiently solved by discretization. Numerical results are also reported here, and they verify the effectiveness of our approach.

  1. Release the BEESTS: Bayesian Estimation of Ex-Gaussian STop-Signal Reaction Time Distributions

    Directory of Open Access Journals (Sweden)

    Dora Matzke

    2013-12-01

    Full Text Available The stop-signal paradigm is frequently used to study response inhibition. In this paradigm, participants perform a two-choice response time task where the primary task is occasionally interrupted by a stop-signal that prompts participants to withhold their response. The primary goal is to estimate the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Recently, Matzke, Dolan, Logan, Brown, and Wagenmakers (in press) have developed a Bayesian parametric approach that allows for the estimation of the entire distribution of SSRTs. The Bayesian parametric approach assumes that SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to estimate the parameters of the SSRT distribution. Here we present an efficient and user-friendly software implementation of the Bayesian parametric approach, BEESTS, that can be applied to individual as well as hierarchical stop-signal data. BEESTS comes with an easy-to-use graphical user interface and provides users with summary statistics of the posterior distribution of the parameters, as well as various diagnostic tools to assess the quality of the parameter estimates. The software is open source and runs on Windows and OS X operating systems. In sum, BEESTS allows experimental and clinical psychologists to estimate entire distributions of SSRTs and hence facilitates the more rigorous analysis of stop-signal data.
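
    BEESTS itself is Bayesian (MCMC-based), but the ex-Gaussian assumption is easy to illustrate with a plain maximum-likelihood fit using scipy's exponnorm distribution, which parameterizes the ex-Gaussian by the shape K = tau/sigma; the parameter values below are invented:

        import numpy as np
        from scipy.stats import exponnorm

        rng = np.random.default_rng(11)

        # Ex-Gaussian SSRTs: Normal(mu, sigma) plus independent Exponential(tau)
        mu, sigma, tau = 200.0, 30.0, 80.0       # ms, illustrative values
        ssrt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

        # scipy's exponnorm is the ex-Gaussian with shape K = tau / sigma
        K_hat, loc_hat, scale_hat = exponnorm.fit(ssrt)
        print(f"mu ~ {loc_hat:.0f}, sigma ~ {scale_hat:.0f}, "
              f"tau ~ {K_hat * scale_hat:.0f}")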

  2. Deuteration distribution estimation with improved sequence coverage for HX/MS experiments.

    Science.gov (United States)

    Lou, Xinghua; Kirchner, Marc; Renard, Bernhard Y; Köthe, Ullrich; Boppel, Sebastian; Graf, Christian; Lee, Chung-Tien; Steen, Judith A J; Steen, Hanno; Mayer, Matthias P; Hamprecht, Fred A

    2010-06-15

    Time-resolved hydrogen exchange (HX) followed by mass spectrometry (MS) is a key technology for studying protein structure, dynamics and interactions. HX experiments deliver a time-dependent distribution of deuteration levels of peptide sequences of the protein of interest. The robust and complete estimation of this distribution for as many peptide fragments as possible is instrumental to understanding dynamic protein-level HX behavior. Currently, this data interpretation step is still a bottleneck in the overall HX/MS workflow. We propose HeXicon, a novel algorithmic workflow for automatic deuteration distribution estimation at increased sequence coverage. Based on an L1-regularized feature extraction routine, HeXicon extracts the full deuteration distribution, which allows insight into possible bimodal exchange behavior of proteins, rather than just an average deuteration for each time point. Further, it is capable of addressing ill-posed estimation problems, yielding sparse and physically reasonable results. HeXicon makes use of existing peptide sequence information, which is augmented by an inferred list of peptide candidates derived from a known protein sequence. In conjunction with a supervised classification procedure that balances sensitivity and specificity, HeXicon can deliver results with increased sequence coverage. The entire HeXicon workflow has been implemented in C++ and includes a graphical user interface. It is available at http://hci.iwr.uni-heidelberg.de/software.php. Supplementary data are available at Bioinformatics online.

  3. Thermophysical Property Estimation by Transient Experiments: The Effect of a Biased Initial Temperature Distribution

    Directory of Open Access Journals (Sweden)

    Federico Scarpa

    2015-01-01

    Full Text Available The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP). The inverse technique demands knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensor at all) are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, will propagate through the estimation process, and the accuracy of the reconstructed thermophysical property values could deteriorate. In this work the effect of errors in the initial temperature distribution on the estimated thermophysical properties is investigated, along with a practical method to quantify this effect. Furthermore, a technique for compensating for this kind of bias is proposed. The method consists in including the initial temperature distribution among the unknown functions to be estimated. In this way the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is highly improved.

  4. Research on Key Technologies of Network Centric System Distributed Target Track Fusion

    Directory of Open Access Journals (Sweden)

    Yi Mao

    2017-01-01

    Full Text Available To realize a common tactical picture in a network-centered system, this paper proposes a layered architecture for distributed information processing and a method for distributed track fusion, based on an analysis of the characteristics of network-centered systems. Building on the noncorrelation of the three-dimensional measurements of surveillance and reconnaissance sensors in polar coordinates, it also puts forward an algorithm for evaluating track quality (TQ) using statistical decision theory. According to simulation results, the TQ value is associated with the measurement accuracy of the sensors and the motion state of the targets, which matches well with the convergence process of the tracking filters. Moreover, the proposed algorithm has good reliability and timeliness in track quality evaluation.

  5. Geographic differences in the target-controlled infusion estimated concentration of propofol: bispectral index response curves.

    Science.gov (United States)

    Dahaba, Ashraf A; Zhong, Taidi; Lu, Hui Shun; Bornemann, Helmar; Liebmann, Markus; Wilfinger, Georg; Reibnegger, Gilbert; Metzler, Helfried

    2011-04-01

    Variability in drug responses could result from both genetic and environmental factors. Thus, drug effect could depend on geographic location, although regional variation is not generally acknowledged as a basis for stratification. There is evidence that the pharmacokinetic set developed in a European population for the target-controlled infusion (TCI) of propofol does not apply in Chinese patients; however, we are not aware of previous studies comparing the estimated concentration-bispectral index (BIS) response of Caucasian patients in Europe with that of Chinese patients in China. The Diprifusor™ TCI pump, incorporating the pharmacokinetic model proposed by Marsh et al., was applied to 30 Caucasian patients in Austria and 30 Chinese patients in China. The estimated plasma concentration (C(p)) of propofol for the two groups was set at 1 μg·mL(-1) and increased by 1 μg·mL(-1) every minute to gradually reach 5 μg·mL(-1) after 5 min. The BIS values were fitted against the estimated C(p) and the predicted effect-site concentration (C(e)) in a sigmoid E(max) model. The sigmoid E(max) curves were shifted significantly to the left in the Chinese group compared with the Austrian group. After 5 min, the BIS value in the Chinese group was lower than in the Austrian group (mean ± standard deviation [SD], 47.2 ± 3.6 vs 63.6 ± 5.4, respectively; P = 0.0006). The estimated C(p) at loss of consciousness (LOC), predicted C(e) at LOC, and time to LOC, were lower in the Chinese group than in the Austrian group (3.3 ± 0.8 μg·mL(-1), 1.6 ± 0.4 μg·mL(-1), 2.8 ± 0.6 min, respectively, vs 4.6 ± 2.8 μg·mL(-1), 2.4 ± 1.5 μg·mL(-1), 3.9 ± 0.5 min, respectively; P < 0.0001). When propofol is given using the same TCI protocol, Chinese patients in China lost consciousness faster and at a lower estimated plasma concentration than Caucasians in Austria. Larger studies are needed to map geographically appropriate TCI infusion models.

  6. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures used to estimate the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  7. Estimation of Inflationary Expectations and the Effectiveness of Inflation Targeting Strategy

    Directory of Open Access Journals (Sweden)

    Amalia CRISTESCU

    2011-02-01

    Full Text Available The credibility and accountability of a central bank acting in an inflation targeting regime are essential, because they allow a sustainable anchoring of the inflationary anticipations of economic agents. The agents' decisions and behavior will increasingly be grounded in information provided by the central bank, especially if it is transparent in the process of communicating with the public. Inflationary anticipations are thus one of the most important channels through which monetary policy affects economic activity. They are crucial in the formation of consumer prices among producers and traders, especially since it is relatively expensive for economic agents to adjust their prices at short intervals. That is why many central banks use response functions containing inflationary anticipations in their inflation targeting models. The most frequent problem with these anticipations is that they are based on the assumption of optimal forecasts of future inflation, which are, implicitly, rational anticipations. In fact, the inflationary anticipations of economic agents are most often adaptive or even irrational. Thus, rational anticipations cannot be used to estimate equations for the Romanian economy, because the agents forming their expectations have neither sufficient information nor an inflationary environment stable enough to fully anticipate the evolution of inflation. The evolution of inflation in the Romanian economy supports the calculation of adaptive forecasts, in which the weight of the "forward looking" component has to be rather important. Economic agents form their inflation expectations for periods of time that usually coincide with a production cycle (one year) and consider the official and unofficial inflation forecasts present on the market in order to make strategic decisions. Thus, in recent research on inflation modeling, the actual inflationary anticipations of economic agents are revealed based on national

  8. Multiobjective Memetic Estimation of Distribution Algorithm Based on an Incremental Tournament Local Searcher

    Directory of Open Access Journals (Sweden)

    Kaifeng Yang

    2014-01-01

    Full Text Available A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher, and ε-dominance. In addition, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based approaches exploit this manifold property to build a probability distribution model from global statistical information on the population; however, the information carried by promising individuals is not well exploited, which is detrimental to the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Moreover, since ε-dominance is a strategy that helps a multiobjective algorithm obtain well-distributed solutions at low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The novel memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.

  9. Estimation of two-dimensional velocity distribution profile using General Index Entropy in open channels

    Science.gov (United States)

    Shojaeezadeh, Shahab Aldin; Amiri, Seyyed Mehrab

    2018-02-01

    Estimation of the velocity distribution profile is a challenging subject in open channel hydraulics. In this study, an entropy-based method is used to derive the two-dimensional velocity distribution profile. The General Index Entropy (GIE) can be considered a generalized form of the Shannon entropy that is suitable for combination with different forms of the Cumulative Distribution Function (CDF). Using the principle of maximum entropy (POME), the velocity distribution is derived by maximizing the GIE, treating the velocity as a random variable. The combination of the GIE with the CDF proposed by Marini et al. (2011) yields an efficient entropy model whose results compare well with several well-known experimental and field data sets. Consequently, despite the lower sensitivity of the model parameters to flow conditions and the lower complexity of application compared with other entropy-based methods, greater accuracy is obtained in estimating the velocity distribution profile, both near the boundaries and at the free surface of the flow.

  10. Multiobjective memetic estimation of distribution algorithm based on an incremental tournament local searcher.

    Science.gov (United States)

    Yang, Kaifeng; Mu, Li; Yang, Dongdong; Zou, Feng; Wang, Lei; Jiang, Qiaoyong

    2014-01-01

    A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher, and ε-dominance. In addition, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based approaches exploit this manifold property to build a probability distribution model from global statistical information on the population; however, the information carried by promising individuals is not well exploited, which is detrimental to the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Moreover, since ε-dominance is a strategy that helps a multiobjective algorithm obtain well-distributed solutions at low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The novel memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.

  11. Method for Estimating the Charge Density Distribution on a Dielectric Surface.

    Science.gov (United States)

    Nakashima, Takuya; Suhara, Hiroyuki; Murata, Hidekazu; Shimoyama, Hiroshi

    2017-06-01

    High-quality color output from digital photocopiers and laser printers is in strong demand, motivating attempts to achieve fine dot reproducibility and stability. The resolution of a digital photocopier depends on the charge density distribution on the organic photoconductor surface; however, directly measuring the charge density distribution is impossible. In this study, we propose a new electron optical instrument that can rapidly measure the electrostatic latent image on an organic photoconductor surface, which is a dielectric surface, as well as a novel method to quantitatively estimate the charge density distribution on a dielectric surface by combining experimental data obtained from the apparatus with a computer simulation. In the computer simulation, an improved three-dimensional boundary charge density method (BCM) is used for electric field analysis in the vicinity of the dielectric material with a charge density distribution. This method enables us to estimate the profile and quantity of the charge density distribution on a dielectric surface with a resolution of the order of microns. Furthermore, the surface potential on the dielectric surface can be immediately calculated using the obtained charge density. This method enables the relation between the charge pattern on the organic photoconductor surface and toner particle behavior to be studied, and this understanding may lead to the development of a new generation of higher-resolution photocopiers.

  12. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    Science.gov (United States)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will not only be sensitive to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volume and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model that is forced using rainfall and evaporation data from: the NIWA Virtual Climate Station Network (VCSN) data (considered the reference dataset); the ERA-Interim/WATCH dataset at 0.25 degree and 0.5 degree resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model
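
    A toy daily bucket model makes the point that the intensity distribution, not just the volume, of the rainfall forcing drives recharge: the two synthetic series below have identical annual totals but very different recharge because of the runoff intensity threshold (all thresholds, capacities and rates are invented):

        import numpy as np

        def recharge(rain, pet=3.0, capacity=100.0, runoff_thresh=15.0):
            # Daily bucket: rain above the threshold runs off, the rest fills
            # soil storage; storage above capacity drains to recharge (mm)
            store, total = 50.0, 0.0
            for p in rain:
                runoff = max(0.0, p - runoff_thresh)
                store = max(store + p - runoff - pet, 0.0)
                if store > capacity:
                    total += store - capacity
                    store = capacity
            return total

        drizzle = np.full(365, 4.0)               # 1460 mm/yr, low intensity
        storms = np.zeros(365)
        storms[::5] = 20.0                        # 1460 mm/yr, high intensity
        print(f"{recharge(drizzle):.0f} mm vs {recharge(storms):.0f} mm")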

  13. Parameter estimation for 3-parameter log-logistic distribution (LLD3) by Pome

    Science.gov (United States)

    Singh, V. P.; Guo, H.; Yu, F. X.

    1993-09-01

    The principle of maximum entropy (POME) was employed to derive a new method of parameter estimation for the 3-parameter log-logistic distribution (LLD3). Monte Carlo simulated data were used to evaluate this method and compare it with the methods of moments (MOM), probability weighted moments (PWM), and maximum likelihood estimation (MLE). Simulation results showed that POME's performance was superior in predicting quantiles of large recurrence intervals when population skew was greater than or equal to 2.0. In all other cases, POME's performance was comparable to other methods.
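
    For orientation, the general POME recipe behind such derivations (stated generically, not with the specific LLD3 constraint set) maximizes the Shannon entropy subject to moment constraints,

        \max_f \; H(f) = -\int f(x) \ln f(x) \, dx
        \quad \text{s.t.} \quad
        \int f(x) \, dx = 1,
        \qquad
        \int g_i(x) f(x) \, dx = \bar{g}_i, \quad i = 1, \dots, m,

    whose solution is f(x) = \exp(-\lambda_0 - \sum_{i=1}^{m} \lambda_i g_i(x)); equating the constraint expectations to their sample values then fixes the Lagrange multipliers and hence the distribution parameters.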

  14. Interval Estimation of Stress-Strength Reliability Based on Lower Record Values from Inverse Rayleigh Distribution

    Directory of Open Access Journals (Sweden)

    Bahman Tarvirdizade

    2014-01-01

    Full Text Available We consider the estimation of stress-strength reliability based on lower record values when X and Y are independent but not identically distributed inverse Rayleigh random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed in the previous sections. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.
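
    Under one common parameterization of the inverse Rayleigh, with CDF F(x) = exp(-a/x^2) (assumed here for illustration), the stress-strength reliability R = P(Y < X) has the closed form a_X / (a_X + a_Y), which a quick Monte Carlo check reproduces:

        import numpy as np

        rng = np.random.default_rng(8)

        # Inverse-transform sampling from F(x) = exp(-a / x**2)
        def rinv_rayleigh(a, size):
            return np.sqrt(-a / np.log(rng.uniform(size=size)))

        a_x, a_y = 2.0, 1.0                       # strength X, stress Y
        x = rinv_rayleigh(a_x, 1_000_000)
        y = rinv_rayleigh(a_y, 1_000_000)

        print("Monte Carlo R:", np.mean(y < x))         # ~0.6667
        print("closed form R:", a_x / (a_x + a_y))      # E[F_Y(X)]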

  15. Fitting statistical distributions to sea duck count data: implications for survey design and abundance estimation

    Science.gov (United States)

    Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.

    2014-01-01

    Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are

  16. Application of the Junge- and Pankow-equation for estimating indoor gas/particle distribution and exposure to SVOCs

    Science.gov (United States)

    Salthammer, Tunga; Schripp, Tobias

    2015-04-01

    In the indoor environment, the distribution and dynamics of an organic compound between the gas phase, the particle phase and settled dust must be known to estimate human exposure. This, however, requires a detailed understanding of the environmentally important compound parameters, their interrelations, and the algorithms for calculating partitioning coefficients. The parameters of major concern are: (I) the saturation vapor pressure (PS) (of the subcooled liquid); (II) the Henry's law constant (H); (III) the octanol/water partition coefficient (KOW); (IV) the octanol/air partition coefficient (KOA); (V) the air/water partition coefficient (KAW); and (VI) settled dust properties such as density and organic content. For most of the relevant compounds, reliable experimental data are not available, and calculated gas/particle distributions can differ widely due to the uncertainty in predicted PS and KOA values. This is not a big problem if the target compound is of low (<10^-2 Pa) volatility, but in the intermediate region even small changes in PS or KOA will have a strong impact on the result. Moreover, the related physical processes might bear large uncertainties. The KOA value can only be used for particle absorption from the gas phase if the organic portion of the particle or dust is high. The Junge and Pankow equations for calculating the gas/particle distribution coefficient KP do not consider the physical and chemical properties of the particle surface. It is demonstrated by error propagation theory and Monte Carlo simulations that parameter uncertainties from estimation methods for molecular properties, and variations in indoor conditions, can strongly influence the calculated distribution behavior of compounds in the indoor environment.
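
    For orientation, the two equations named in the title, in their commonly cited forms:

        \phi = \frac{c\,\theta}{p_L^{\circ} + c\,\theta} \quad \text{(Junge)},
        \qquad
        K_P = \frac{F/\mathrm{TSP}}{A} \quad \text{(Pankow)},

    where \phi is the particle-bound fraction, \theta the aerosol surface area per unit volume of air, c an empirical constant (about 17.2 Pa cm), p_L^{\circ} the subcooled-liquid vapor pressure, F and A the particle- and gas-phase concentrations, and TSP the total suspended particulate concentration; the particle-bound fraction then follows from K_P as \phi = K_P \cdot TSP / (1 + K_P \cdot TSP).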

  17. Linking occupancy surveys with habitat characteristics to estimate abundance and distribution in an endangered cryptic bird

    Science.gov (United States)

    Crampton, Lisa H.; Brinck, Kevin W.; Pias, Kyle E.; Heindl, Barbara A. P.; Savre, Thomas; Diegmann, Julia S.; Paxton, Eben H.

    2017-01-01

    Accurate estimates of the distribution and abundance of endangered species are crucial to determine their status and plan recovery options, but such estimates are often difficult to obtain for species with low detection probabilities or that occur in inaccessible habitats. The Puaiohi (Myadestes palmeri) is a cryptic species endemic to Kauaʻi, Hawai‘i, and restricted to high elevation ravines that are largely inaccessible. To improve current population estimates, we developed an approach to model distribution and abundance of Puaiohi across their range by linking occupancy surveys to habitat characteristics, territory density, and landscape attributes. Occupancy per station ranged from 0.17 to 0.82, and was best predicted by the number and vertical extent of cliffs, cliff slope, stream width, and elevation. To link occupancy estimates with abundance, we used territory mapping data to estimate the average number of territories per survey station (0.44 and 0.66 territories per station in low and high occupancy streams, respectively), and the average number of individuals per territory (1.9). We then modeled Puaiohi occupancy as a function of two remote-sensed measures of habitat (stream sinuosity and elevation) to predict occupancy across its entire range. We combined predicted occupancy with estimates of birds per station to produce a global population estimate of 494 (95% CI 414–580) individuals. Our approach is a model for using multiple independent sources of information to accurately track population trends, and we discuss future directions for modeling abundance of this, and other, rare species.

  18. Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina

    2014-01-01

    Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and compensation techniques. In a multiview scenario, the correlation between views can also be exploited to further enhance the overall Rate-Distortion (RD) performance. Thus, to generate SI in a multiview distributed coding scenario, a joint disparity and motion estimation technique is proposed, based on optical flow. The proposed SI generation algorithm allows for RD improvements of up to 10% (Bjøntegaard) in bit-rate savings, when compared with block-based SI generation algorithms leveraging temporal and inter-view redundancies.

  19. Estimation of Tendon Force Distribution in Prestressed Concrete Girders Using Smart Strand

    Directory of Open Access Journals (Sweden)

    Keunhee Cho

    2017-12-01

    Full Text Available The recently developed smart strand offers the possibility of measuring the prestress force of the tendon from jacking and all along its service life. In the present study, a method for estimating the force distribution in all the tendons of a prestressed concrete (PSC) girder equipped with one smart strand is proposed. The force distribution in the prestressed tendons is formulated in terms of friction and anchorage slip, and is obtained through an optimization process with respect to the compatibility conditions and the equilibrium of forces in the section of the PSC girder. Validation of the proposed method through a numerical example and an experiment shows that it can be used to estimate the force developed in the tendons.

  20. Assessment of the safety, targeting, and distribution characteristics of a novel pH-sensitive hydrogel.

    Science.gov (United States)

    Dong, Kai; Dong, Yalin; You, Cuiyu; Xu, Wei; Huang, Xiaoyan; Yan, Yan; Zhang, Lu; Wang, Ke; Xing, Jianfeng

    2014-11-01

    In our previous study, we synthesized a pH-sensitive hydrogel based on poly(ε-caprolactone) (PCL), Pluronic (Poloxamer), and methacrylic acid (MAA) using UV-initiated free-radical polymerization. In the present study, we evaluated the safety of the obtained GMA-PCFC-GMA copolymer and of a P(CFC-MAA-MEG) hydrogel both in vitro and in vivo. The pharmacokinetics and distribution characteristics of dexamethasone in rat blood and mouse colon were investigated in detail. The in vitro toxicity of the GMA-PCFC-GMA copolymer was evaluated using a cell viability assay with HEK293 cells. An acute oral toxicity test was conducted by orally administering a total of 10,000 mg/kg body weight of the P(CFC-MAA-MEG) hydrogel to mice, which were then observed continuously for 14 days before being sacrificed and their blood collected for routine blood and serum chemistry tests. Pharmacokinetic and colonic tissue distribution studies were conducted using high-performance liquid chromatography to detect the concentration of dexamethasone in rat blood and mouse colon tissue. All of the results indicated that both the GMA-PCFC-GMA copolymer and the P(CFC-MAA-MEG) hydrogel were nontoxic. Moreover, the hydrogel significantly enhanced the colon-targeting behavior of dexamethasone, suggesting that the novel hydrogel has great potential for colon-targeted drug delivery.

  1. Estimation of a fluorescent lamp spectral distribution for color image in machine vision

    OpenAIRE

    Corzo, Luis Galo; Penaranda, Jose Antonio; Peer, Peter

    2014-01-01

    We present a technique to quickly estimate the Illumination Spectral Distribution (ISD) in an image illuminated by a fluorescent lamp. It is assumed that the object colors are a set of colors for which spectral reflectances are available (in our experiments we use spectral measurements of a 12-color checker chart), that the sensitivities of the camera sensors are known, and that the camera response is linear. Thus, the ISD can be approximated by a finite linear combination of a small number of basis functions.
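    Under these assumptions the estimation reduces to ordinary least squares; a minimal NumPy sketch is given below (array shapes and names are hypothetical, not from the record):

    ```python
    import numpy as np

    def estimate_isd(R, S, B, y):
        """Least-squares estimate of the illumination spectral distribution
        E = B @ w from the linear imaging model
            y[i, c] = sum_l R[i, l] * E[l] * S[c, l].
        R: (n_colors, n_wavelengths) known chart reflectances
        S: (3, n_wavelengths)        known camera sensitivities
        B: (n_wavelengths, k)        illumination basis functions
        y: (n_colors, 3)             measured linear camera responses
        """
        n_colors = R.shape[0]
        A = np.zeros((n_colors * 3, B.shape[1]))
        for i in range(n_colors):
            for c in range(3):
                # Row maps the basis weights to one (color, channel) response.
                A[i * 3 + c] = (R[i] * S[c]) @ B
        w, *_ = np.linalg.lstsq(A, y.reshape(-1), rcond=None)
        return B @ w  # estimated ISD at the sampled wavelengths
    ```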

  2. Estimating fin whale distribution from ambient noise spectra using Bayesian inversion

    OpenAIRE

    Menze, Sebastian

    2015-01-01

    Passive acoustic monitoring is increasingly used to study the distribution and migration of marine mammals. Marine mammal vocalizations are transient sounds, but the combined sound energy of a population continuously repeating a vocalization adds up to a quasi-continuous chorus. Marine mammal choruses can be identified as peaks in ocean ambient noise spectra. In the North Atlantic, the fin whale chorus is commonly observed as a peak at 20 Hz. This thesis proposes a method to estimate the distribution of fin whales from ambient noise spectra using Bayesian inversion.

  3. Robust Minimum Distance Estimation of the Four-Parameter Generalized Gamma Distribution.

    Science.gov (United States)

    1982-09-01

    (Only OCR fragments of this thesis abstract survive. The recoverable content concerns the four-parameter generalized gamma distribution, whose maximum-likelihood equations cannot be solved in closed form and are instead solved by the iterative technique developed by Harter; the fragments also cite Mendenhall, Scheaffer, and Wackerly, Mathematical Statistics with Applications, 2nd ed., Duxbury Press, 1982.)

  4. Decomposable Problems, Niching, and Scalability of Multiobjective Estimation of Distribution Algorithms

    OpenAIRE

    Sastry, Kumara; Pelikan, Martin; Goldberg, David E.

    2005-01-01

    The paper analyzes the scalability of multiobjective estimation of distribution algorithms (MOEDAs) on a class of boundedly difficult, additively separable multiobjective optimization problems. The paper illustrates that even if the linkage is correctly identified, massive multimodality of the search problems can easily overwhelm the nicher and lead to exponential scale-up. Facetwise models are subsequently used to propose a growth rate of the number of differing substructures between the two objectives.

  5. Estimation of Shallow Groundwater Recharge Using a Gis-Based Distributed Water Balance Model

    OpenAIRE

    Graf Renata; Przybyłek Jan

    2014-01-01

    In the paper we present the results of shallow groundwater recharge estimation using WetSpass, a GIS-based distributed water balance model (WetSpass stands for Water and Energy Transfer between Soil, Plants and Atmosphere under quasi-Steady State). Applying the model to average conditions for the period 1961-2000, we assessed the spatial conditions of the groundwater infiltration recharge process of shallow circulation systems in the Poznan Plateau area (the Great Poland Lowland).

  6. Admissible and Minimax Estimators of a Lower Bounded Scale Parameter of a Gamma Distribution under the Entropy Loss Function

    Directory of Open Access Journals (Sweden)

    M. Nasr Esfahani

    2009-03-01

    This paper is concerned with admissible and minimax estimation of the scale parameter θ of a gamma distribution under the entropy loss function, when it is known that θ > a for some known a > 0. An admissible minimax estimator of θ, which is the pointwise limit of a sequence of Bayes estimators, is derived. Also, the admissible estimators and the only minimax estimator of θ in the class of truncated linear estimators are obtained. Finally, the results are extended to a subclass of the scale-parameter exponential family and to the family of transformed chi-square distributions.
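    The entropy loss is not spelled out in the record; the form commonly used in this literature for estimating a scale parameter θ by δ is

    ```latex
    L(\theta, \delta) \;=\; \frac{\delta}{\theta} \;-\; \log\frac{\delta}{\theta} \;-\; 1 ,
    ```

    which is nonnegative and vanishes only at δ = θ.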

  7. Equations for hydraulic conductivity estimation from particle size distribution: A dimensional analysis

    Science.gov (United States)

    Wang, Ji-Peng; François, Bertrand; Lambert, Pierre

    2017-09-01

    Estimating hydraulic conductivity from particle size distribution (PSD) is an important issue for various engineering problems. Classical models such as the Hazen, Beyer, and Kozeny-Carman models usually regard the grain diameter at 10% passing (d10) as an effective grain size, and the effects of particle size uniformity (in the Beyer model) or porosity (in the Kozeny-Carman model) are sometimes embedded. This technical note applies dimensional analysis (Buckingham's Π theorem) to the relationship between hydraulic conductivity and PSD. The porosity is regarded as a variable dependent on the grain size distribution in unconsolidated conditions. The analysis indicates that the coefficient of grain size uniformity and a dimensionless group representing the gravity effect, which is proportional to the mean grain volume, are the two main determining parameters for estimating hydraulic conductivity. Regression analysis is then carried out on a database comprising 431 samples collected from different depositional environments, and new equations are developed for hydraulic conductivity estimation. The new equation, validated on specimens beyond the database, shows improved prediction compared with the classic models.
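    Of the classical references, the Kozeny-Carman model is easily stated in code; a sketch is given below (fluid properties assumed for water at about 20 °C; values are illustrative):

    ```python
    def kozeny_carman_K(d10_m, porosity, rho=1000.0, g=9.81, mu=1.0e-3):
        """Hydraulic conductivity K (m/s) from the classic Kozeny-Carman model:
            k = (n^3 / (1 - n)^2) * d10^2 / 180   (intrinsic permeability, m^2)
            K = k * rho * g / mu                  (hydraulic conductivity, m/s)
        d10_m is the 10%-passing grain diameter in metres."""
        n = porosity
        k = (n**3 / (1.0 - n)**2) * d10_m**2 / 180.0
        return k * rho * g / mu

    # Example: medium sand, d10 = 0.2 mm, porosity 0.35 -> roughly 2e-4 m/s.
    print(kozeny_carman_K(0.2e-3, 0.35))
    ```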

  8. Cable Overheating Risk Warning Method Based on Impedance Parameter Estimation in Distribution Network

    Science.gov (United States)

    Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao

    2017-05-01

    Cable overheating reduces the cable insulation level, accelerates insulation aging, and can even cause short-circuit faults, so overheating risk identification and warning are necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safety and reliability of distribution network operation. First, a cable impedance estimation model is established using the least-squares method on data from the distribution SCADA system, to improve the accuracy of impedance parameter estimation. Second, the threshold value of cable impedance is calculated from historical data, and the forecast value of cable impedance is calculated from future forecasting data in the distribution SCADA system. Third, a cable overheating risk warning rules library is established; the cable impedance forecast value is computed, the rate of change of the impedance is analyzed, and the overheating risk of the cable line is then flagged according to the rules library, based on the relationship between impedance and line temperature rise. The method is simulated in the paper. The simulation results show that the method can accurately identify the impedance and forecast the temperature rise of cable lines in a distribution network, and the resulting overheating risk warnings can provide a decision basis for operation, maintenance, and repair.
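    The impedance estimation step amounts to linear least squares on measurement snapshots; a minimal sketch under assumed data follows (the paper's exact SCADA model is not shown in the record):

    ```python
    import numpy as np

    def estimate_line_impedance(V_send, V_recv, I):
        """Least-squares estimate of series impedance Z = R + jX of a cable
        section from complex phasor snapshots (hypothetical SCADA data):
            V_send[t] - V_recv[t] = Z * I[t]
        Stacking real and imaginary parts gives an overdetermined real system."""
        dV = np.asarray(V_send) - np.asarray(V_recv)   # complex voltage drops
        I = np.asarray(I)
        A = np.column_stack([
            np.concatenate([I.real, I.imag]),          # coefficient of R
            np.concatenate([-I.imag, I.real]),         # coefficient of X
        ])
        b = np.concatenate([dV.real, dV.imag])
        (R, X), *_ = np.linalg.lstsq(A, b, rcond=None)
        return R, X
    ```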

  9. Two-Dimensional DOA Estimation for Coherently Distributed Sources with Symmetric Properties in Crossed Arrays.

    Science.gov (United States)

    Dai, Zhengliang; Cui, Weijia; Ba, Bin; Wang, Daming; Sun, Youming

    2017-06-06

    In this paper, a novel algorithm is proposed for the two-dimensional (2D) central direction-of-arrival (DOA) estimation of coherently distributed (CD) sources. Specifically, we focus on a centro-symmetric crossed array consisting of two uniform linear arrays (ULAs). Unlike the conventional low-complexity methods, which use a first-order Taylor series approximation to obtain an approximate rotational invariance relation, we first prove the symmetric property of the angular signal distributed weight vectors of a CD source for an arbitrary centro-symmetric array, and then use this property to establish two generalized rotational invariance relations inside the array manifolds of the two ULAs. Making use of these relations, the central elevation and azimuth DOAs are obtained by a polynomial-root-based, search-free approach. Finally, simple parameter matching is accomplished by searching for the minima of the cost function of the estimated 2D angular parameters. Compared with the existing low-complexity methods, the proposed algorithm greatly improves estimation accuracy without a significant increase in computational complexity. Moreover, it performs independently of the deterministic angular distributed function. Simulation results are presented to illustrate the performance of the proposed algorithm.

  10. Adaptive distributed Kalman filtering with wind estimation for astronomical adaptive optics.

    Science.gov (United States)

    Massioni, Paolo; Gilles, Luc; Ellerbroek, Brent

    2015-12-01

    In the framework of adaptive optics (AO) for astronomy, it is a common assumption to consider the atmospheric turbulent layers as "frozen flows" sliding according to the wind velocity profile. For this reason, knowledge of this velocity profile is beneficial for AO control system performance. In this paper we show that it is possible to exploit the phase estimate from a Kalman filter running on an AO system in order to estimate the wind velocity. This allows the Kalman filter itself to be updated with such knowledge, making it adaptive. We have implemented such an adaptive controller based on the distributed version of the Kalman filter, for a realistic simulation of a multi-conjugate AO system with laser guide stars on a 30 m telescope. Simulation results show that this approach is effective and promising, and that the additional computational cost with respect to the distributed filter is negligible. Comparisons with a previously published slope detection and ranging wind profiler are made, and the impact of turbulence profile quantization is assessed. One of the main findings of the paper is that all flavors of the adaptive distributed Kalman filter are impacted more significantly by turbulence profile quantization than the static minimum mean square estimator, which does not incorporate wind profile information.
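    For reference, the textbook Kalman predict/update cycle that the distributed, wind-adaptive variant builds on can be sketched as follows (plain linear form; this is not the paper's distributed implementation):

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of a linear Kalman filter:
        x, P  state estimate and covariance; z measurement;
        F, H  state-transition and observation matrices;
        Q, R  process and measurement noise covariances."""
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measurement z.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P
    ```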

  11. Focal length estimation guided with object distribution on FocaLens dataset

    Science.gov (United States)

    Yan, Han; Zhang, Yu; Zhang, Shunli; Zhao, Sicong; Zhang, Li

    2017-05-01

    The focal length information of an image is indispensable for many computer vision tasks. In general, focal length can be obtained via camera calibration using specific planar patterns. However, for images taken by an unknown device, focal length can only be estimated from the image itself. Currently, most single-image focal length estimation methods make use of predefined geometric cues (such as vanishing points or parallel lines) to infer focal length, which constrains their application mainly to man-made scenes. Machine learning algorithms have demonstrated great performance in many computer vision tasks, but these methods are seldom used in the focal length estimation task, partially due to the shortage of labeled images for training the model. To bridge this gap, we first introduce a large-scale dataset, FocaLens, which is especially designed for single-image focal length estimation. Taking advantage of the FocaLens dataset, we also propose a new focal length estimation model, which exploits a multiscale detection architecture to encode object distributions in images to assist focal length estimation. Additionally, an online focal transformation approach is proposed to further promote the model's generalization ability. Experimental results demonstrate that the proposed model trained on FocaLens can not only achieve state-of-the-art results on scenes with distinct geometric cues but also obtain comparable results on scenes without them.

  12. Distributed Fusion Estimation for Multisensor Multirate Systems with Stochastic Observation Multiplicative Noises

    Directory of Open Access Journals (Sweden)

    Peng Fangfang

    2014-01-01

    This paper studies the fusion estimation problem for a class of multisensor multirate systems with observation multiplicative noises. The dynamic system is sampled uniformly. The sampling period of each sensor is uniform and an integer multiple of the state update period. Moreover, different sensors have different sampling rates, and the observations of the sensors are subject to the stochastic uncertainties of multiplicative noises. First, local filters at the observation sampling points are obtained based on the observations of each sensor. Further, local estimators at the state update points are obtained by prediction from the local filters at the observation sampling points; these have reduced computational cost and good real-time performance. Then, the cross-covariance matrices between any two local estimators are derived at the state update points. Finally, using the matrix-weighted optimal fusion estimation algorithm in the linear minimum variance sense, the distributed optimal fusion estimator is obtained from the local estimators and the cross-covariance matrices. An example shows the effectiveness of the proposed algorithms.

  13. Sampling-based correlation estimation for distributed source coding under rate and complexity constraints.

    Science.gov (United States)

    Cheung, Ngai-Man; Wang, Huisheng; Ortega, Antonio

    2008-11-01

    In many practical distributed source coding (DSC) applications, correlation information has to be estimated at the encoder in order to determine the encoding rate. Coding efficiency depends strongly on the accuracy of this correlation estimation. While error in estimation is inevitable, the impact of estimation error on compression efficiency has not been sufficiently studied for the DSC problem. In this paper, we study correlation estimation subject to rate and complexity constraints, and its impact on coding efficiency in a DSC framework for practical distributed image and video applications. We focus, in particular, on applications where binary correlation models are exploited for Slepian-Wolf coding and sampling techniques are used to estimate the correlation; extensions to other correlation models are also briefly discussed. In the first part of this paper, we investigate the compression of binary data. We first propose a model to characterize the relationship between the number of samples used in estimation and the coding rate penalty, in the case of encoding of a single binary source. The model is then extended to scenarios where multiple binary sources are compressed, and based on the model we propose an algorithm to determine the number of samples allocated to different sources so that the overall rate penalty can be minimized, subject to a constraint on the total number of samples. The second part of this paper studies compression of continuous-valued data. We propose a model-based estimation for the particular but important situations where binary bit-planes are extracted from a continuous-valued input source, and each bit-plane is compressed using DSC. The proposed model-based method first estimates the source and correlation noise models using continuous-valued samples, and then uses the models to derive the bit-plane statistics analytically. We also extend the model-based estimation to the cases when bit-planes are extracted based on the ...

  14. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion re-estimation (MORE) is integrated in the SING TDWZ codec, which uses side information and noise learning. For Wyner-Ziv frames using GOP size 2, the MORE codec significantly improves TDWZ coding efficiency, with an average (Bjøntegaard) PSNR improvement of 2.5 dB and up to 6 dB.

  15. On the effect of correlated measurements on the performance of distributed estimation

    KAUST Repository

    Ahmed, Mohammed

    2013-06-01

    We address the distributed estimation of an unknown scalar parameter in Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations over a multiple-access channel to a Fusion Center (FC) that reconstructs the source parameter. The received signal is corrupted by noise and channel fading, so the FC objective is to minimize the Mean-Square Error (MSE) of the estimate. In this paper, we assume the sensor node observations to be correlated with the source signal and with each other; the correlation coefficient between two observations decays exponentially with their distance separation. The effect of this distance-based correlation on the estimation quality is demonstrated and compared with the case of fully correlated observations. Moreover, a closed-form expression for the outage probability is derived and its dependence on the correlation coefficients is investigated. Numerical simulations are provided to verify our analytic results.

  16. An estimator-based distributed voltage-predictive control strategy for ac islanded microgrids

    DEFF Research Database (Denmark)

    Wang, Yanbo; Chen, Zhe; Wang, Xiongfei

    2015-01-01

    This paper presents an estimator-based voltage predictive control strategy for AC islanded microgrids, which is able to perform voltage control without any communication facilities. The proposed control strategy is composed of a network voltage estimator and a voltage predictive controller for each distributed generator, where the voltage estimator serves as an essential tool to obtain the network voltage response without using communication links, while the voltage predictive controller is able to implement offset-free voltage control for a specified bus. The dynamic performance of the proposed voltage control strategy is evaluated under various perturbations: load parameter variation, different disturbance locations, LC filter perturbation, output impedance perturbation, and DG unit fault. The simulation and experimental results show that the proposed control approach is able to perform offset-free voltage control without any communication links.

  17. Distributed Space-Time Block Coded Transmission with Imperfect Channel Estimation: Achievable Rate and Power Allocation

    Directory of Open Access Journals (Sweden)

    Sonia Aïssa

    2008-05-01

    This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider multiple transmitters cooperating to send a signal to the receiver, and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs) when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Assessing the gap between these two bounds, we provide a limiting value that upper-bounds the gap at any input transmit power, and also show that the gap is minimized if the receiver can estimate the channels of the different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to variations of the subchannel gains are maximized, as long as the sum of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.

  18. Parameter Estimation in Rainfall-Runoff Modelling Using Distributed Versions of Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2015-01-01

    This paper analyzes selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of PSO were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances global exploration and local exploitation in the parameter space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performance of the four selected PSO methods was tested on 11 benchmark optimization problems prepared for the CEC 2005 special session on single-objective real-parameter optimization. The results confirm that the new APartW variant is comparable with the existing distributed PSO versions AdaptW and LinTimeVarW. The distributed PSO versions were developed for solving inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, carried out on a set of 30 catchments obtained from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of the Bilan model parameters and thus can be used for solving the related inverse problems during calibration of the studied water balance hydrological model.
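    For reference, below is a minimal sketch of the canonical PSO update whose inertia weight w the variants discussed here (APartW, AdaptW, LinTimeVarW) adapt; parameter values are the usual illustrative defaults, not the paper's settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        """One canonical PSO iteration.
        x, v   particle positions and velocities, shape (n_particles, dim)
        pbest  best position found so far by each particle
        gbest  best position found by the whole swarm
        """
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v
    ```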

  19. Estimation of dose distribution in individuals occupationally exposed to FDG-¹⁸F

    Energy Technology Data Exchange (ETDEWEB)

    Lacerda, Isabelle V. Batista de; Cabral, Manuela O. Monteiro; Vieira, Jose Wilson, E-mail: ilacerda.bolsista@cnen.gov.br, E-mail: manuela.omc@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Oliveira, Mercia Liane de; Andrade Lima, Fernando R. de, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2014-07-01

    The use of unsealed radiation sources in nuclear medicine can lead to significant incorporation of radionuclides, especially for occupationally exposed individuals (OEIs) during the production and handling of radiopharmaceuticals. In this study, computer simulation was proposed as an alternative methodology for evaluating the absorbed dose distribution and the effective dose in OEIs. For this purpose, the Exposure Computational Model (ECM) named FSUP (Female Adult Mesh - supine) was used. This ECM is composed of the voxel phantom FASH (Female Adult MeSH) in the supine position, the MC code EGSnrc, and a general internal-source simulation algorithm. This algorithm was modified to meet the specific needs of positron emission from FDG-¹⁸F. The results are presented as absorbed dose per accumulated activity. To obtain the absorbed dose distribution, it was necessary to use accumulated activity data from in vivo bioassay. Neither the absorbed dose distribution nor the effective dose estimated in this study exceeded the limits for occupational exposure. Therefore, the creation of a database with the distribution of accumulated activity is suggested, in order to estimate the absorbed dose in radiosensitive organs and the effective dose for OEIs in similar environments. (author)

  20. Estimation of parameters in logistic and log-logistic distribution with grouped data.

    Science.gov (United States)

    Zhou, Yan Yan; Mi, Jie; Guo, Shengru

    2007-09-01

    In many situations, instead of a complete sample, data are available only in grouped form. For example, grouped failure time data occur in studies in which subjects are monitored periodically to determine whether failure has occurred in predetermined intervals. The model under consideration here is the log-logistic distribution. This paper demonstrates the existence and uniqueness of the MLEs of the parameters of the logistic distribution with grouped data, under mild conditions. The time with the maximum failure rate and the mode of the p.d.f. of the log-logistic distribution are also estimated based on the MLEs. The methodology is further studied with simulations and exemplified with a data set, from a locomotive life test study, with artificially introduced grouping.

  1. New method to estimate the sample size for calculation of a proportion assuming binomial distribution.

    Science.gov (United States)

    Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio

    2013-10-01

    Nowadays the formula used to calculate the sample size for estimating a proportion (such as a prevalence) is based on the Normal distribution; however, it should be based on the Binomial distribution, whose confidence interval can be calculated using the Wilson score method. Comparing the two formulae (Normal and Binomial distributions), the variation in the amplitude of the confidence intervals is most relevant in the tails and the center of the curves. In order to calculate the needed sample size, we simulated an iterative sampling procedure, which shows an underestimation of the sample size for prevalence values close to 0 or 1, and an overestimation for values close to 0.5. Based on these results, we propose an algorithm based on the Wilson score method that provides sample size values similar to those obtained empirically by simulation.
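    A sketch of the idea follows: iterate n until the Wilson score interval reaches the desired absolute precision. This is an illustration of the approach, not the authors' exact algorithm.

    ```python
    import math

    def wilson_halfwidth(p, n, z=1.96):
        """Half-width of the Wilson score interval for a proportion p
        observed in n trials (95% confidence by default)."""
        denom = 1.0 + z * z / n
        return (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))

    def sample_size_wilson(p_expected, precision, z=1.96):
        """Smallest n whose Wilson interval half-width meets the precision."""
        n = 1
        while wilson_halfwidth(p_expected, n, z) > precision:
            n += 1
        return n

    # Near p = 0.05 this returns a larger n than the Normal-based formula,
    # consistent with the underestimation reported in the paper.
    print(sample_size_wilson(0.05, 0.02))
    ```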

  2. DMPDS: A Fast Motion Estimation Algorithm Targeting High Resolution Videos and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Gustavo Sanchez

    2012-01-01

    This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos, together with its efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm that increases ME quality compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high-definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB compared with the well-known Diamond Search (DS) algorithm. Compared with the optimum results generated by the Full Search (FS) algorithm, the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS achieves a complexity reduction of more than 45 times compared with FS. The quality gains over DS come with an expected increase in complexity: the DMPDS uses 6.4 times more calculations than DS. The DMPDS architecture was designed for high performance and low cost, targeting the processing of Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized for Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared with related works in the literature; this high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles needed to process each block.

  3. Estimation of demographic measures for India, 1881-1961, based on census age distributions.

    Science.gov (United States)

    Das Gupta, P

    1971-11-01

    India is one of the very few developing countries which have a relatively long history of population censuses. The first census was taken in 1872, the second in 1881, and since then there has been a census every ten years, the latest in 1971. Yet the registration of births and deaths in India, even at the present time, is too inadequate to be of much help in estimating fertility and mortality conditions in the country. From time to time, Indian census actuaries have indirectly constructed life tables by comparing one census age distribution with the preceding one. Official life tables are available for all the decades from 1872-1881 to 1951-1961, except for 1911-1921 and 1931-1941. Kingsley Davis (1) filled in the gap by constructing life tables for the latter two decades. He also estimated the birth and death rates of India for the decades from 1881-1891 to 1931-1941. Estimates of these rates for the following two decades, 1941-1951 and 1951-1961, were made by Indian census actuaries. The birth rates of Davis and the Indian actuaries were obtained basically by the reverse survival method from the age distribution and the computed life table of the population. Coale and Hoover (2), however, estimated the birth and death rates and the life table of the Indian population in 1951 by applying stable population theory. The most recent estimates of the birth rate and death rate, for 1963-1964, are based on the results of the National Sample Survey. All these estimates are presented in summary form in Table 1.

  4. Green sturgeon distribution in the Pacific Ocean estimated from modeled oceanographic features and migration behavior.

    Science.gov (United States)

    Huff, David D; Lindley, Steven T; Wells, Brian K; Chai, Fei

    2012-01-01

    The green sturgeon (Acipenser medirostris), which is found in the eastern Pacific Ocean from Baja California to the Bering Sea, tends to be highly migratory, moving long distances among estuaries, spawning rivers, and distant coastal regions. Factors that determine the oceanic distribution of green sturgeon are unclear, but broad-scale physical conditions interacting with migration behavior may play an important role. We estimated the distribution of green sturgeon by modeling species-environment relationships using oceanographic and migration behavior covariates with maximum entropy modeling (MaxEnt) of species geographic distributions. The primary concentration of green sturgeon was estimated from approximately 41-51.5° N latitude in the coastal waters of Washington, Oregon, and Vancouver Island and in the vicinity of San Francisco and Monterey Bays from 36-37° N latitude. Unsuitably cold water temperatures in the far north and energetic efficiencies associated with prevailing water currents may provide the best explanation for the range-wide marine distribution of green sturgeon. Independent trawl records, fisheries observer records, and tagging studies corroborated our findings. However, our model also delineated patchily distributed habitat south of Monterey Bay, though there are few records of green sturgeon from this region. Green sturgeon are likely influenced by countervailing pressures governing their dispersal. They are behaviorally directed to revisit natal freshwater spawning rivers and persistent overwintering grounds in coastal marine habitats, yet they are likely physiologically bounded by abiotic and biotic environmental features. Impacts of human activities on green sturgeon or their habitat in coastal waters, such as bottom-disturbing trawl fisheries, may be minimized through marine spatial planning that makes use of high-quality species distribution information.

  5. Estimating the flood frequency distribution at seasonal and annual time scales

    Directory of Open Access Journals (Sweden)

    E. Baratti

    2012-12-01

    We propose an original approach to infer the flood frequency distribution at seasonal and annual time scales. Our purpose is to estimate the peak flow expected for an assigned return period T, independently of the season in which it occurs (the annual flood frequency regime), as well as in selected sub-yearly periods (the seasonal flood frequency regime). While a huge literature exists on annual flood frequency analysis, few studies have focused on the estimation of seasonal flood frequencies, despite the relevance of the issue, for instance when scheduling along the months of the year the construction phases of river engineering works directly interacting with the active river bed, such as dams. An approximate method for joint frequency analysis is presented here that guarantees consistency between fitted annual and seasonal distributions: the annual cumulative distribution is the product of the seasonal cumulative distribution functions, under the assumption of independence among floods in different seasons (see the expression below). In our method, the parameters of the seasonal frequency distributions are fitted by maximizing an objective function that accounts for the likelihoods of both seasonal and annual peaks. In contrast to previous studies, our procedure allows users to introduce subjective weights to the components of the objective function, in order to emphasize the fitting of specific seasons or of the annual peak flow distribution. An application to the time series of daily flows of the Blue Nile at the Sudan-Ethiopia border is presented.
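    The consistency constraint at the heart of the method, written out for S seasons:

    ```latex
    F_{\mathrm{ann}}(q) \;=\; \prod_{s=1}^{S} F_{s}(q) ,
    \qquad \text{so the } T\text{-year annual quantile } q_T \text{ solves } F_{\mathrm{ann}}(q_T) = 1 - \tfrac{1}{T}.
    ```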

  6. Estimating bisphenol A exposure levels using a questionnaire targeting known sources of exposure.

    Science.gov (United States)

    Nomura, Sarah Oppeneer; Harnack, Lisa; Robien, Kim

    2016-03-01

    Objective: to develop a BPA Exposure Assessment Module (BEAM) for use in large observational studies, and to evaluate the ability of the BEAM to estimate bisphenol A (BPA) exposure levels. Design: the BEAM was designed by modifying an FFQ with questions targeting known sources of BPA exposure. Frequency of intake of known dietary sources of BPA was assessed using the BEAM and three 24 h food records as the reference diet measurement tool; urinary BPA (uBPA) levels were measured as the criterion in a pooled urine sample (nine spot samples per participant). Spearman correlations, linear regression, and weighted kappa analysis were used to evaluate the ability of the BEAM and the food records to estimate BPA exposure levels. Setting: Minneapolis/Saint Paul, MN, USA. Subjects: sixty-eight healthy adult (20-59 years) volunteers. Results: dietary BPA intake assessed by the BEAM was not associated with uBPA levels and was unable to predict participants' rank by uBPA level; BEAM models with all a priori predictors explained 25% of the variability in uBPA levels. Canned food intake assessed by food records was associated with uBPA levels but was unable to rank participants by uBPA level; multivariable-adjusted food record models with a priori predictors explained 41% of the variability in uBPA levels. Conclusions: known dietary sources of BPA exposure explained less than half the variability in uBPA levels, regardless of diet assessment method, suggesting that a questionnaire approach may be insufficient for ranking BPA exposure and that additional important sources of BPA exposure likely exist.

  7. Use of motion tracking in stereotactic body radiotherapy: Evaluation of uncertainty in off-target dose distribution and optimization strategies.

    Science.gov (United States)

    Casamassima, F; Cavedon, C; Francescon, P; Stancanello, J; Avanzo, M; Cora, S; Scalchi, P

    2006-01-01

    Spatial accuracy in extracranial radiosurgery is affected by organ motion. Motion tracking systems may be able to avoid PTV enlargement while preserving treatment times; however, special attention is needed because the target, identified by fiducial markers, can move with respect to organs at risk (OARs). Ten patients treated by means of the Synchrony system were taken into account. The sparing of irradiated volume and of complication probability was estimated by calculating treatment plans with a motion tracking system (CyberKnife Synchrony, Sunnyvale, CA, USA) and with a PTV-enlargement strategy for the ten patients. Six patients were also evaluated for possible inaccuracy in the estimation of dose to OARs due to relative movement between PTV and OAR during respiration. Dose volume histograms (DVH) and Equivalent Uniform Dose (EUD) were calculated for the organs at risk. In the cases in which the target moved closer to the OAR (three cases of six), a small but significant increase was detected in the DVH and EUD of the OAR; in the three other cases no significant variation was detected. The mean reduction in PTV volume was 38% for liver cases, 44% for lung cases, and 8.5% for pancreas cases. NTCP for liver was reduced from 23.1 to 14.5% on average; for lung it was reduced from 2.5 to 0.1% on average. Significant uncertainty may arise from the use of a motion-tracking device in the determination of dose to organs at risk, due to the relative motion between PTV and OAR. However, it is possible to limit this uncertainty: the breathing phase in which the OAR is closest to the PTV should be selected for planning. A full understanding of the dose distribution would only be possible by means of a complete 4D-CT representation.

  9. Moving-Target Position Estimation Using GPU-Based Particle Filter for IoT Sensing Applications

    Directory of Open Access Journals (Sweden)

    Seongseop Kim

    2017-11-01

    A particle filter (PF) has been introduced for effective position estimation of moving targets in non-Gaussian and nonlinear systems. The time difference of arrival (TDOA) method using an acoustic sensor array has normally been used to estimate the position of a moving target that conceals its location, especially underwater. In this paper, we propose GPU-based acceleration of target position estimation using a PF, together with an efficient system and software architecture. The proposed graphics processing unit (GPU)-based algorithm is well suited to applying PF signal processing to a target system consisting of large-scale Internet of Things (IoT)-driven sensors, because its parallelization is scalable. For the TDOA measurement from the acoustic sensor array, we use the generalized cross-correlation phase transform (GCC-PHAT) method to obtain the correlation coefficient of the signal using the Fast Fourier Transform (FFT), and we accelerate the GCC-PHAT-based TDOA measurements using FFT with GPU Compute Unified Device Architecture (CUDA). The proposed approach utilizes a parallelization method in the target position estimation algorithm using GPU-based PF processing. In addition, it can efficiently estimate sudden movement changes of the target using GPU-based parallel computing, which can also be used for multiple-target tracking, and it provides scalability in extending the detection algorithm as the number of sensors increases. Therefore, the proposed architecture can be applied in IoT sensing applications with a large number of sensors. The target estimation algorithm was verified using MATLAB and implemented using GPU CUDA. We implemented the proposed signal processing acceleration system on the target GPU to analyze the execution time, which is reduced by 55% compared with standalone CPU operation on the target embedded board, an NVIDIA Jetson TX1. Also, to apply large...
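    The PF update is embarrassingly parallel across particles, which is the property the GPU implementation exploits. Below is a vectorized NumPy sketch of one filter iteration; the motion and measurement models are hypothetical stand-ins, not the paper's TDOA pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pf_step(particles, weights, measurement, step_std=0.5, meas_std=1.0):
        """One particle-filter iteration for a 2-D position target;
        every operation is array-parallel across particles."""
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0.0, step_std, particles.shape)
        # Update: Gaussian likelihood of a (hypothetical) position measurement.
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std**2)
        weights /= weights.sum()
        # Systematic resampling when the effective sample size collapses.
        n = len(weights)
        if 1.0 / np.sum(weights**2) < 0.5 * n:
            positions = (rng.random() + np.arange(n)) / n
            idx = np.searchsorted(np.cumsum(weights), positions)
            particles, weights = particles[idx], np.full(n, 1.0 / n)
        estimate = np.average(particles, weights=weights, axis=0)
        return particles, weights, estimate
    ```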

  10. Target height estimation in children with idiopathic short stature who are referred to the growth clinic.

    Science.gov (United States)

    Poyrazoglu, Sukran; Darendeliler, Feyza; Bas, Firdevs; Bundak, Ruveyde; Saka, Nurcin; Darcan, Sukran; Wit, Jan M; Gunoz, Hulya

    2009-01-01

    The aim of this study was to evaluate adult height (AH) and different methods used for the estimation of target height (TH) in children with idiopathic short stature (ISS). Eighty-five children with ISS (36 female, 49 male) who were followed until AH were evaluated retrospectively. TH was calculated according to the following 4 methods: (1) the mean of the parental heights +6.5 cm for boys or -6.5 cm for girls; (2) the mean standard deviation score (SDS) of the parents' heights; (3) the sum of the SDS of the parents' heights divided by 1.61; and (4) the mean SDS of the parents' heights multiplied by 0.72. ISS was classified as familial short stature (FSS) if the height was within the TH range and as nonfamilial short stature (NFSS) if it was below the TH range. The number of FSS and NFSS children differed by the method chosen. The mean AH SDS was lower than the TH SDS in FSS with all methods except method 3. NFSS children did not attain their TH with any of the methods. Classification of ISS thus depends on the TH method chosen. Children with ISS reach a mean AH SDS lower than the mean TH SDS; only FSS children classified by method 3 reached a mean AH SDS close to the mean TH SDS.
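    The four TH formulas are simple arithmetic; a sketch collecting them in one function follows (converting centimetres to SDS requires a population reference, so the parental SDS values are assumed to be computed upstream):

    ```python
    def target_heights(father_cm, mother_cm, child_is_male,
                       father_sds, mother_sds):
        """The four target-height (TH) estimates compared in the study.
        Method 1 returns cm; methods 2-4 return SDS values."""
        midparental = (father_cm + mother_cm) / 2.0
        th1_cm = midparental + (6.5 if child_is_male else -6.5)   # method 1
        th2_sds = (father_sds + mother_sds) / 2.0                 # method 2
        th3_sds = (father_sds + mother_sds) / 1.61                # method 3
        th4_sds = 0.72 * (father_sds + mother_sds) / 2.0          # method 4
        return th1_cm, th2_sds, th3_sds, th4_sds
    ```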

  11. Fast neutron distributions from Be and C thick targets bombarded with 80 and 160 MeV deuterons

    Energy Technology Data Exchange (ETDEWEB)

    Pauwels, N.; Laurent, H.; Clapier, F. [Institut de Physique Nucleaire, (IN2P3/CNRS) 91 - Orsay (France); Brandenburg, S.; Beijers, J.P.M.; Zegers, R.G.T. [Kernfysisch Versneller Institute, Groningen (Netherlands); Lebreton, L. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium); Mirea, M. [Institute of Nuclear Physics and Engineering, Bucarest (Romania); Saint-Laurent, M.G. [Grand Accelerateur National d' Ions Lourds (GANIL), 14 - Caen (France)

    2000-07-01

    Measured angular and energy distributions of neutrons obtained by bombarding thick Be and C targets with deuterons at 80 and 160 MeV incident energies are reported. The data were obtained using the time-of-flight method. The experimental values are compared with a model based on a stripping formalism extended to thick targets. (authors)

  12. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Hao

    2015-06-01

    The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for the Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. A novel MUSIC approach is therefore proposed, based on the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. Virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and to establish a full-order correlation matrix. ASGSO then optimizes the MUSIC cost function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely, without loss of effective radar aperture.
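    For context, here is a sketch of the conventional MUSIC pseudospectrum whose grid search the ASGSO replaces (a uniform linear array is assumed, and the paper's virtual spatial smoothing is omitted):

    ```python
    import numpy as np

    def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
        """Classical MUSIC pseudospectrum for a uniform linear array.
        X: (n_elements, n_snapshots) complex array data;
        d: element spacing in wavelengths."""
        M = X.shape[0]
        R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
        eigval, eigvec = np.linalg.eigh(R)     # eigenvalues in ascending order
        En = eigvec[:, : M - n_sources]        # noise subspace
        p = np.empty(len(angles))
        for i, th in enumerate(np.deg2rad(angles)):
            a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))  # steering vector
            p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
        return angles, p
    ```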

  13. Kullback-Leibler Divergence for fault estimation and isolation : Application to Gamma distributed data

    Science.gov (United States)

    Delpha, Claude; Diallo, Demba; Youssef, Abdulrahman

    2017-09-01

    In this paper we develop a fault detection, isolation, and estimation method based on a data-driven approach. Data-driven methods are effective for feature extraction and feature analysis using statistical techniques. In the proposal, the Principal Component Analysis (PCA) method is used to extract the features and to reduce the data dimension. The Kullback-Leibler Divergence (KLD) is then used to detect the fault occurrence by comparing the probability density functions of the latent scores. To estimate the fault amplitude in the case of Gamma-distributed data, we developed an analytical model that links the KLD to the fault severity, including the environmental noise conditions. In the Principal Component Analysis framework, the proposed model of the KLD has been analyzed and compared to a value of the KLD obtained with a Monte-Carlo estimator. The results show that for incipient faults (40 dB), the fault amplitude estimation is accurate enough, with a relative error of less than 1%. The proposed approach is experimentally verified with vibration signals used for monitoring bearings in electrical machines.
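    For Gamma-distributed data, the KLD between two densities has a closed form; a sketch of the standard expression is given below (a shape/rate parameterization is assumed; the paper's exact convention is not shown in the record):

    ```python
    import numpy as np
    from scipy.special import gammaln, digamma

    def kl_gamma(a1, b1, a2, b2):
        """Closed-form Kullback-Leibler divergence KL(p || q) between
        p = Gamma(shape=a1, rate=b1) and q = Gamma(shape=a2, rate=b2)."""
        return ((a1 - a2) * digamma(a1)
                - gammaln(a1) + gammaln(a2)
                + a2 * (np.log(b1) - np.log(b2))
                + a1 * (b2 - b1) / b1)

    print(kl_gamma(2.0, 1.0, 2.5, 1.2))  # example with illustrative parameters
    ```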

  14. Distributed Channel Estimation and Pilot Contamination Analysis for Massive MIMO-OFDM Systems

    KAUST Repository

    Zaib, Alam

    2016-07-22

    By virtue of large antenna arrays, massive MIMO systems have the potential to yield higher spectral and energy efficiency than conventional MIMO systems. This paper addresses uplink channel estimation in massive MIMO-OFDM systems with frequency-selective channels. We propose an efficient distributed minimum mean square error (MMSE) algorithm that can achieve near-optimal channel estimates at low complexity by exploiting the strong spatial correlation among antenna array elements. The proposed method involves solving a reduced-dimensional MMSE problem at each antenna, followed by repetitive sharing of information through collaboration among neighboring array elements. To further enhance the channel estimates and/or reduce the number of reserved pilot tones, we propose a data-aided estimation technique that relies on finding a set of most reliable data carriers. Furthermore, we use stochastic geometry to quantify the pilot contamination, and in turn use this information to analyze its effect on the channel MSE. The simulation results validate our analysis and show near-optimal performance of the proposed estimation algorithms.

  15. Estimating interevent time distributions from finite observation periods in communication networks

    Science.gov (United States)

    Kivelä, Mikko; Porter, Mason A.

    2015-11-01

    A diverse variety of processes—including recurrent disease episodes, neuron firing, and communication patterns among humans—can be described using interevent time (IET) distributions. Many such processes are ongoing, although event sequences are only available during a finite observation window. Because the observation time window is more likely to begin or end during long IETs than during short ones, the analysis of such data is susceptible to a bias induced by the finite observation period. In this paper, we illustrate how this length bias arises and how it can be corrected without assuming any particular shape for the IET distribution. To do this, we model event sequences using stationary renewal processes, and we formulate simple heuristics for determining the severity of the bias. To illustrate our results, we focus on the example of empirical communication networks, which are temporal networks constructed from communication events. The IET distributions of such systems guide efforts to build models of human behavior, and the variance of IETs is very important for estimating the spreading rate of information in networks of temporal interactions. We analyze several well-known data sets from the literature, and we find that the resulting bias can lead to systematic underestimates of the variance in the IET distributions and that correcting for the bias can lead to qualitatively different results for the tails of the IET distributions.
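    A small simulation makes the window bias visible; the sketch below (hypothetical parameters, heavy-tailed gamma IETs) compares the true mean IET with the mean of the IETs fully contained in a finite window:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def observed_iet_mean(mean_iet=1.0, shape=0.3, t_obs=100.0, n_events=10**6):
        """Draw a long stationary renewal sequence, keep only events inside
        an observation window, and compare IET means."""
        iets = rng.gamma(shape, mean_iet / shape, n_events)  # mean = mean_iet
        times = np.cumsum(iets)
        start = 0.5 * times[-1]                 # a window far from the origin
        inside = times[(times >= start) & (times <= start + t_obs)]
        return iets.mean(), np.diff(inside).mean()

    full_mean, window_mean = observed_iet_mean()
    # Long IETs straddling the window boundary are excluded, so the
    # windowed mean underestimates the contribution of the tail.
    print(full_mean, window_mean)
    ```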

  16. Estimating the ventilation-perfusion distribution: an ill-posed integral equation problem.

    Science.gov (United States)

    Lim, L L; Whitehead, J

    1992-03-01

    The distribution of ventilation-perfusion ratio over the lung is a useful indicator of the efficiency of lung function. Information about this distribution can be obtained by observing the retention in blood of inert gases passed through the lung. These retentions are related to the ventilation-perfusion distribution through an ill-posed integral equation. An unusual feature of this problem of estimating the ventilation-perfusion distribution is the small amount of data available; typically there are just six data points, as only six gases are used in the experiment. A nonparametric smoothing method is compared to a simpler method that models the distribution as a histogram with five classes. Results from the smoothing method are found to be very unstable. In contrast, the simpler method gives stable solutions with parameters that are physiologically meaningful. It is concluded that while such smoothing methods may be useful for solving some ill-posed integral equation problems, the simpler method is preferable when data are scarce.

  17. Estimating the geographical distribution of the prevalence of the metabolic syndrome in young Mexicans

    Directory of Open Access Journals (Sweden)

    Miguel Murguía-Romero

    2012-09-01

    The geographical distribution of metabolic syndrome (MetS) prevalence in young Mexicans (aged 17-24 years) was estimated stepwise, starting from its prevalence by body mass index (BMI) range in a study of 3,176 undergraduate students of this age group from Mexico City. To estimate the number of people with MetS by state, we multiplied the prevalence derived for each BMI range in the Mexico City sample by the BMI proportions (by range and state) obtained from the Mexico 2006 national survey on health and nutrition. Finally, to estimate the total number of young people with MetS in Mexico, the prevalence by state was multiplied by the young population of each state according to the National Population and Housing Census 2010. Based on these figures, we estimated the national prevalence of MetS at 15.8%, the average BMI at 24.1 (standard deviation = 4.2), and the prevalence of overweight (BMI ≥25) in this age group at 39.0%. These results imply that 2,588,414 young Mexicans suffered from MetS in 2010. The Yucatan peninsula in the south and the state of Sonora in the north showed the highest MetS prevalence. Calculating the MetS prevalence by BMI range in a sample of the population, and extrapolating it using the BMI proportions by range in the total population, was found to be a useful approach. We conclude that the BMI is a valuable public health tool for estimating MetS prevalence in the whole country, including its geographical distribution.

  18. An Ensemble Generator for Quantitative Precipitation Estimation Based on Censored Shifted Gamma Distributions

    Science.gov (United States)

    Wright, D.; Kirschbaum, D.; Yatheendradas, S.

    2016-12-01

    The considerable uncertainties associated with quantitative precipitation estimates (QPE), whether from satellite platforms, ground-based weather radar, or numerical weather models, suggest that such QPE should be expressed as distributions or ensembles of possible values, rather than as single values. In this research, we borrow a framework from the weather forecast verification community to "correct" satellite precipitation and generate ensemble QPE. This approach is based on the censored shifted gamma distribution (CSGD). The probability of precipitation, the central tendency (i.e., the mean), and the uncertainty can be captured by the three parameters of the CSGD. The CSGD can then be applied for simulation of rainfall ensembles using a flexible nonlinear regression framework, whereby the CSGD parameters can be conditioned on one or more reference rainfall datasets and on other time-varying covariates such as modeled or measured estimates of precipitable water and relative humidity. We present the framework and initial results by generating precipitation ensembles based on the Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA) dataset, using both the NLDAS and PERSIANN-CDR precipitation datasets as references. We also incorporate a number of covariates from the MERRA2 reanalysis, including model-estimated precipitation, precipitable water, relative humidity, and lifting condensation level. We explore the prospects for applying the framework and other ensemble error models globally, including in regions where high-quality "ground truth" rainfall estimates are lacking. We compare the ensemble outputs against those of an independent rain gage-based ensemble rainfall dataset. "Pooling" of regional rainfall observations is explored as one option for improving ensemble estimates of rainfall extremes. The approach has potential applications in near-real-time, retrospective, and scenario modeling of rainfall-driven hazards such as floods and landslides.
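    The CSGD itself is straightforward to sample, which is part of what makes it convenient as an ensemble generator; a minimal sketch follows (parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_csgd(shape, scale, shift, size=10000):
        """Draw from a censored shifted gamma distribution (CSGD): a gamma
        variate is shifted left and censored at zero, so the point mass at
        zero models the probability of no precipitation."""
        x = rng.gamma(shape, scale, size) - shift
        return np.maximum(x, 0.0)

    ens = sample_csgd(shape=0.8, scale=5.0, shift=2.0)
    print((ens == 0).mean(), ens.mean())  # P(zero rain), mean rain amount
    ```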

  19. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    Science.gov (United States)

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g., no-alibi intervals of suspects) within the large true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval increase with increasing input deviation; otherwise, the CPD-computed probabilities decrease. We therefore advise against using the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimates still overlap the true death time interval. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
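
    The paradox described above can be reproduced with a deliberately simplified stand-in for the CPD: treat the death time estimate as normally distributed, condition on the known true interval, and shift the estimate. The formula and numbers below are illustrative assumptions, not the authors' derived error-propagation formulae.

    from scipy.stats import norm

    def cpd_interval_prob(a, b, t1, t2, t_est, sigma):
        # P(a < T < b | t1 < T < t2) for T ~ N(t_est, sigma^2): the estimate's
        # distribution conditioned on the externally known true interval.
        num = norm.cdf(b, t_est, sigma) - norm.cdf(a, t_est, sigma)
        den = norm.cdf(t2, t_est, sigma) - norm.cdf(t1, t_est, sigma)
        return num / den

    # True interval [0, 10] h post-mortem; no-alibi interval [8, 10] at its boundary.
    for shift in (0.0, 1.0, 2.0, 3.0):        # growing error of the death time estimate
        p = cpd_interval_prob(8, 10, 0, 10, t_est=10 + shift, sigma=1.5)
        print(f"estimate shifted by {shift:.0f} h -> P(no-alibi) = {p:.3f}")

    With the estimate pushed past the boundary adjacent to the no-alibi interval, the conditional probability rises toward one, mirroring the paradox described above.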

  20. Estimating trends in data from the Weibull and a generalized extreme value distribution

    Science.gov (United States)

    Clarke, Robin T.

    2002-06-01

    Where changes in hydrologic regime occur, whether as a result of change in land use or climate, statistical procedures are needed to test for the existence of trend in hydrological data, particularly those expected to follow extreme value distributions such as annual peak discharges, annual minimum flows, and annual maximum rainfall intensities. Furthermore, where trend is detected, its magnitude must also be estimated. A later paper [Clarke, 2002] will consider the estimation of trends in Gumbel data; the present paper gives results on tests for the significance of trends in annual and minimum discharges, where these can be assumed to follow a Weibull distribution. The statistical procedures, already fully established in the statistical analysis of survival data, convert the problem into one in which a generalized linear model is fitted to a power-transformed variable having Poisson distribution and calculates the trend coefficients (as well as the parameter in the power transform) by maximum likelihood. The methods are used to test for trend in annual minimum flows over a 19-year period in the River Paraguay at Cáceres, Brazil, and in monthly flows at the same site. Extension of the procedure to testing for trend in data following a generalized extreme value distribution is also discussed. Although a test for time trend in Weibull-distributed hydrologic data is the motivation for this paper, the same approach can be applied in the analysis of data sequences that can be regarded as stationary in time, for which the objective is to explore relationships between a Weibull variate and other variables (covariates) that explain its behavior.
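
    A hedged sketch of the general idea (not Clarke's Poisson/GLM power-transform procedure): fit a Weibull model whose scale parameter depends log-linearly on time by direct maximum likelihood, so the trend coefficient b1 measures change per year. The synthetic data stand in for 19 years of annual minima.

    import numpy as np
    from scipy.optimize import minimize

    def negloglik(params, t, x):
        # Weibull(shape k, scale) log-likelihood with scale_t = exp(b0 + b1*t).
        log_k, b0, b1 = params
        k, scale = np.exp(log_k), np.exp(b0 + b1 * t)
        z = x / scale
        return -np.sum(np.log(k) - np.log(scale) + (k - 1) * np.log(z) - z**k)

    rng = np.random.default_rng(0)
    t = np.arange(19.0)                                  # 19 years, as in the Paraguay example
    x = rng.weibull(2.0, 19) * np.exp(3.0 - 0.02 * t)    # synthetic declining annual minima
    fit = minimize(negloglik, x0=[0.0, np.log(x.mean()), 0.0],
                   args=(t, x), method="Nelder-Mead")
    print("shape = %.2f, trend b1 = %.4f per year" % (np.exp(fit.x[0]), fit.x[2]))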

  1. A web service and android application for the distribution of rainfall estimates and Earth observation data

    Science.gov (United States)

    Mantas, V. M.; Liu, Z.; Pereira, A. J. S. C.

    2015-04-01

    The full potential of Satellite Rainfall Estimates (SRE) can only be realized if timely access to the datasets is possible. Existing data distribution web portals are often focused on global products and offer limited customization options, especially for the purpose of routine regional monitoring. Furthermore, most online systems are designed to meet the needs of desktop users, limiting the compatibility with mobile devices. In response to the growing demand for SRE and to address the current limitations of available web portals a project was devised to create a set of freely available applications and services, available at a common portal that can: (1) simplify cross-platform access to Tropical Rainfall Measuring Mission Online Visualization and Analysis System (TOVAS) data (including from Android mobile devices), (2) provide customized and continuous monitoring of SRE in response to user demands and (3) combine data from different online data distribution services, including rainfall estimates, river gauge measurements or imagery from Earth Observation missions at a single portal, known as the Tropical Rainfall Measuring Mission (TRMM) Explorer. The TRMM Explorer project suite includes a Python-based web service and Android applications capable of providing SRE and ancillary data in different intuitive formats with the focus on regional and continuous analysis. The outputs include dynamic plots, tables and data files that can also be used to feed downstream applications and services. A case study in Southern Angola is used to describe the potential of the TRMM Explorer for SRE distribution and analysis in the context of ungauged watersheds. The development of a collection of data distribution instances helped to validate the concept and identify the limitations of the program, in a real context and based on user feedback. The TRMM Explorer can successfully supplement existing web portals distributing SRE and provide a cost-efficient resource to small and medium

  2. Estimation of spatial distribution of t-year precipitation with 5 km resolution

    Science.gov (United States)

    Kuzuha, Y.

    2014-12-01

    We estimated the spatial distribution of t-year precipitation, such as 100-year precipitation, 50-year precipitation, and so on, in Japan. If the return period t of the t-year precipitation is greater than the data size (the number of data in the time series of annual maxima), then we use a traditional parametric method in which several probability distributions are fitted and their goodness-of-fit results are mutually compared. The goodness-of-fit criterion that we used is the Takara-Takasao criterion (1988), designated as the standard least squares criterion (SLSC) in Japan. We designate this case as the 'case of few samples'. However, if the number of data in the time series of annual maxima is greater than the return period t, then the case is that of numerous samples. For that case, we used Takara's method (2006), which uses a nonparametric probability distribution. For both the case of few samples and the case of numerous samples, the bootstrap method is applied to ascertain the variation of the estimated t-year precipitation obtained using the parametric or nonparametric probability distribution we chose. We emphasize that Monte-Carlo-like simulations are not necessary for the case with numerous samples: a theoretical solution exists for the bootstrap method, and we show the theoretical solutions. Furthermore, the data we used are solutions obtained using a CGCM (KAKUSHIN-5 km data). Therefore, data with a very high spatial resolution of 5 km can be used. Even if sparsely distributed precipitation data are used, high-resolution data can be obtained using the CGCM data.
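
    The bootstrap step can be illustrated with a resampling sketch for the 100-year value under an assumed Gumbel fit (the study's SLSC-based distribution selection and nonparametric variant are not reproduced here); all data below are synthetic.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    annmax = stats.gumbel_r.rvs(loc=100, scale=30, size=40, random_state=rng)  # synthetic annual maxima

    def t_year_value(sample, T=100):
        # Fit a Gumbel distribution and return the T-year return level.
        loc, scale = stats.gumbel_r.fit(sample)
        return stats.gumbel_r.ppf(1 - 1 / T, loc, scale)

    boot = [t_year_value(rng.choice(annmax, annmax.size, replace=True)) for _ in range(2000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"100-yr value {t_year_value(annmax):.1f} (95% bootstrap CI {lo:.1f}-{hi:.1f})")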

  3. Nonparametric estimation of the distribution of the autoregressive coefficient from panel random-coefficient AR(1) data

    OpenAIRE

    Leipus, Remigijus; Philippe, Anne; Pilipauskaitė, Vytautė; Surgailis, Donatas

    2015-01-01

    We discuss nonparametric estimation of the distribution function $G(x)$ of the autoregressive coefficient $a \in (-1,1)$ from a panel of $N$ random-coefficient AR(1) data, each of length $n$, by the empirical distribution function of lag 1 sample autocorrelations of individual AR(1) processes. Consistency and asymptotic normality of the empirical distribution function and a class of kernel density estimators is established under some regularity conditions on $G(x)$ as $N$ and $n$ increase to ...

  4. Time-dependent seismic hazard in Bobrek coal mine, Poland, assuming different magnitude distribution estimations

    Science.gov (United States)

    Leptokaropoulos, Konstantinos; Staszek, Monika; Cielesta, Szymon; Urban, Paweł; Olszewska, Dorota; Lizurek, Grzegorz

    2017-06-01

    The purpose of this study is to evaluate seismic hazard parameters in connection with the evolution of mining operations and seismic activity. The time-dependent hazard parameters to be estimated are the activity rate, the Gutenberg-Richter b-value, the mean return period, and the exceedance probability of a prescribed magnitude for selected time windows related to the advance of the mining front. Four magnitude distribution estimation methods are applied and the results obtained from each one are compared with each other. Those approaches are maximum likelihood using the unbounded and upper-bounded Gutenberg-Richter law, and the non-parametric unbounded and non-parametric upper-bounded kernel estimation of magnitude distribution. The method is applied to seismicity that occurred during longwall mining of panel 3 in coal seam 503 in the Bobrek colliery in the Upper Silesia Coal Basin, Poland, during 2009-2010. Applications are performed in the recently established Web-Platform for Anthropogenic Seismicity Research, available at https://tcs.ah-epos.eu/.

  5. Distributed Input and State Estimation Using Local Information in Heterogeneous Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dzung Tran

    2017-07-01

    Full Text Available A new distributed input and state estimation architecture is introduced and analyzed for heterogeneous sensor networks. Specifically, nodes of a given sensor network are allowed to have heterogeneous information roles in the sense that a subset of nodes can be active (that is, subject to observations of a process of interest) and the rest can be passive (that is, subject to no observation). Both fixed and varying active and passive roles of sensor nodes in the network are investigated. In addition, these nodes are allowed to have non-identical sensor modalities under the common underlying assumption that they have complementary properties distributed over the sensor network to achieve collective observability. The key feature of our framework is that it utilizes local information not only during the execution of the proposed distributed input and state estimation architecture but also in its design, in that global uniform ultimate boundedness of the error dynamics is guaranteed once each node satisfies given local stability conditions independent of the graph topology and neighboring information of these nodes. As a special case (e.g., when all nodes are active and a positive real condition is satisfied), asymptotic stability can be achieved with our algorithm. Several illustrative numerical examples are further provided to demonstrate the efficacy of the proposed architecture.

  6. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the robustness of the retrieval results against interference. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low sensitivity to the shape of the distribution. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
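
    A toy version of the linear retrieval step, assuming a synthetic stand-in kernel rather than the ADA forward model: damped LSQR from SciPy recovers the discretized distribution from noisy measurements.

    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    r = np.linspace(0.1, 2.0, 60)                       # particle radius bins (micrometres)
    lam = np.linspace(0.3, 1.1, 20)                     # measurement wavelengths (micrometres)
    A = np.exp(-(r[None, :] - lam[:, None])**2 / 0.1)   # synthetic stand-in kernel (not ADA)
    n_true = np.exp(-0.5 * ((r - 0.8) / 0.2)**2)        # monomodal "true" distribution
    b = A @ n_true + rng.normal(0, 1e-3, lam.size)      # noisy extinction "measurements"

    n_est = lsqr(A, b, damp=1e-2)[0]                    # damped LSQR = regularized retrieval
    print("relative error:", np.linalg.norm(n_est - n_true) / np.linalg.norm(n_true))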

  7. Inequitable distribution of general practitioners in Australia: estimating need through the Robin Hood Index.

    Science.gov (United States)

    Wilkinson, D; Symon, B

    2000-02-01

    Using census data, we document the distribution of general practitioners in Australia and estimate the number of general practitioners needed to achieve an equitable distribution accounting for community health need. Data on the location of general practitioners, population size, and crude mortality by statistical division (SD) were obtained from the Australian Bureau of Statistics. The number of patients per general practitioner by SD was calculated and plotted. Using crude mortality to estimate community health need, the ratio of the number of general practitioners per person to mortality was calculated for all of Australia and for each SD (the Robin Hood Index). From this, the number of general practitioners needed to achieve equity was calculated. In all, 26,290 general practitioners were identified in 57 SDs. The mean number of people per general practitioner is 707, ranging from 551 to 1887. Capital city SDs have the most favourable ratios. The Robin Hood Index for Australia is 1, and ranges from 0.32 (relatively under-served) to 2.46 (relatively over-served). Twelve SDs (21%), including all capital cities and 65% of all Australians, have a Robin Hood Index > 1. To achieve equity per capita, 2489 more general practitioners (10% of the current workforce) are needed. To achieve equity by the Robin Hood Index, 3351 (13% of the current workforce) are needed. The distribution of general practitioners in Australia is skewed. Nonmetropolitan areas are relatively under-served. Census data and the Robin Hood Index could provide a simple means of identifying areas of need in Australia.
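
    The index arithmetic, as inferred from the abstract (GPs per mortality-weighted person, normalized by the national ratio), can be sketched with hypothetical numbers:

    import numpy as np

    pop  = np.array([3_500_000, 250_000, 90_000])   # hypothetical SD populations
    gps  = np.array([5_600, 260, 55])               # hypothetical GP counts
    mort = np.array([6.0, 7.5, 9.0])                # crude deaths per 1,000 (need proxy)

    need = pop * mort                               # mortality-weighted need
    national = gps.sum() / need.sum()
    robin_hood = (gps / need) / national            # 1 = equitable; < 1 = under-served
    extra = need * national - gps                   # GPs required to reach an index of 1
    for i in range(len(pop)):
        print(f"SD{i}: index = {robin_hood[i]:.2f}, GPs to equity = {extra[i]:+.0f}")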

  8. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Full Text Available Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.

  9. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.

  10. Efficient sampling in fragment-based protein structure prediction using an estimation of distribution algorithm.

    Directory of Open Access Journals (Sweden)

    David Simoncini

    Full Text Available Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex energy landscape. In this paper we present EdaFold(AA), a fragment-based approach which shares information between the generated models and steers the search towards native-like regions. A distribution over fragments is estimated from a pool of low-energy all-atom models. This iteratively refined distribution is used to guide the selection of fragments during the building of models for subsequent rounds of structure prediction. The use of an estimation of distribution algorithm enabled EdaFold(AA) to reach lower energy levels and to generate a higher percentage of near-native models. EdaFold(AA) uses an all-atom energy function and produces models with atomic resolution. We observed an improvement in energy-driven blind selection of models on a benchmark of EdaFold(AA) in comparison with the Rosetta AbInitioRelax protocol.

  11. A literature review on optimum meter placement algorithms for distribution state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Ramesh, L. [Jadavpur Univ., Kolkata (India); Chowdhury, S.P.; Chowdhury, S.; Gaunt, C.T. [Cape Town Univ., (South Africa)

    2009-07-01

    A literature review of meter placement for the monitoring of power distribution systems was presented. The aim of the study was to compare different algorithms used for solving the optimum meter placement problem. The percentage of use of each algorithm and the number of studies conducted on optimal placement were plotted on graphs in order to compare the performance accuracy of different meter placement algorithms. Measurements used for state estimation were collected through SCADA systems. The data requirements for real-time monitoring and control of distribution systems were identified using a rule-based meter placement method. Rules included placing meters at all switch and fuse locations that require monitoring; placing additional meters along feeder line sections; and placing meters on open tie switches that are used for feeder switching. The genetic algorithm technique was used to consider both the investment costs and the real-time monitoring capability of the meters. It was concluded that the branch-current-based three-phase state estimation algorithm can be used to determine optimal meter placements for distribution systems. The method allowed for the placement of fewer meters. 24 refs., 1 tab., 3 figs.

  12. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    Science.gov (United States)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
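
    The regularization-parameter choice by the discrepancy principle can be sketched for a generic linear inverse problem (a stand-in for the W-phase kernel, not the authors' code): the misfit grows with the Tikhonov parameter, and one keeps the smallest value whose misfit reaches the expected noise norm.

    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.normal(size=(80, 40))                 # stand-in over-parametrized kernel
    m_true = np.zeros(40); m_true[10:20] = 1.0    # "slip" concentrated on one patch
    sigma = 0.5
    d = G @ m_true + rng.normal(0, sigma, 80)

    target = sigma * np.sqrt(d.size)              # expected noise norm (discrepancy level)
    chosen = None
    for lam in np.logspace(-3, 3, 61):            # grid search; misfit grows with lam
        m = np.linalg.solve(G.T @ G + lam * np.eye(40), G.T @ d)
        if np.linalg.norm(G @ m - d) >= target:
            chosen = lam
            break
    print("Tikhonov parameter by discrepancy principle:", chosen)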

  13. Hydraulic Conductivity Estimates from Particle Size Distributions of Sediments from the Los Alamos Chromium Plume

    Science.gov (United States)

    Harris, R.; Reimus, P. W.; Ding, M.

    2015-12-01

    Chromium used in Los Alamos National Laboratory cooling towers was released as effluent onto laboratory property between 1956 and 1972. As a result, the underlying regional aquifer is contaminated with chromium (VI), a toxin and carcinogen. The highest concentration of chromium is ~1 ppm in monitoring well R-42, exceeding the New Mexico drinking water standard of 50 ppb. The chromium plume is currently being investigated to identify an effective remediation method. Geologic heterogeneity within the aquifer causes the hydraulic conductivity within the plume to be spatially variable. This variability, particularly with depth, is crucial for predicting plume transport behavior. Though pump tests are useful for obtaining estimates of site-specific hydraulic conductivity, they tend to interrogate the hydraulic properties of only the most conductive strata. Variations in particle size distribution as a function of depth can complement pump test data by providing estimates of vertical variations in hydraulic conductivity. Samples were collected from five different sonically drilled core holes within the chromium plume at depths ranging from 732'-1125' below the surface. To obtain particle size distributions, the samples were sieved into six different fractions spanning the fine sand to gravel range (>4 mm, 2-4 mm, 1.4-2 mm, 0.355-1.4 mm, 180-355 µm, and smaller than 180 µm). The Kozeny-Carman equation, k = (ρg/μ)(dm²/180)(Φ³/(1−Φ)²), was used to estimate permeability from the particle size distribution data. Pump tests estimated a hydraulic conductivity varying between 1 and 50 feet per day. The Kozeny-Carman equation narrowed this estimate down to an average value of 2.635 feet per day for the samples analyzed, with a range of 0.971 ft/day to 6.069 ft/day. The results of this study show that the Kozeny-Carman equation provides quite specific estimates of hydraulic conductivity in the Los Alamos aquifer. More importantly, it provides pertinent information on the expected
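
    Evaluating the relation as written, with assumed properties of water at about 20 degrees C and an illustrative grain size and porosity:

    def kozeny_carman(d_m, phi, rho=998.0, mu=1.0e-3, g=9.81):
        # K = (rho*g/mu) * (d_m^2 / 180) * phi^3 / (1 - phi)^2, in m/s;
        # rho and mu assume water at roughly 20 degrees C; d_m in metres.
        return (rho * g / mu) * (d_m**2 / 180.0) * phi**3 / (1.0 - phi)**2

    K = kozeny_carman(d_m=0.1e-3, phi=0.30)       # illustrative fine sand, 30% porosity
    print(f"K = {K:.2e} m/s = {K * 86400 / 0.3048:.1f} ft/day")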

  14. FrFT-CSWSF: Estimating cross-range velocities of ground moving targets using multistatic synthetic aperture radar

    Directory of Open Access Journals (Sweden)

    Li Chenlei

    2014-10-01

    Full Text Available Estimating cross-range velocity is a challenging task for space-borne synthetic aperture radar (SAR), which is important for ground moving target indication (GMTI). Because the velocity of a target is very small compared with that of the satellite, it is difficult to correctly estimate it using a conventional monostatic platform algorithm. To overcome this problem, a novel method employing multistatic SAR is presented in this letter. The proposed hybrid method, which is based on an extended space-time model (ESTIM) of the azimuth signal, has two steps: first, a set of finite impulse response (FIR) filter banks based on a fractional Fourier transform (FrFT) is used to separate multiple targets within a range gate; second, a cross-correlation spectrum weighted subspace fitting (CSWSF) algorithm is applied to each of the separated signals in order to estimate their respective parameters. As verified through computer simulation with the constellations of Cartwheel, Pendulum and Helix, this proposed time-frequency-subspace method effectively improves the estimation precision of the cross-range velocities of multiple targets.

  15. Uniform brain tumor distribution and tumor associated macrophage targeting of systemically administered dendrimers.

    Science.gov (United States)

    Zhang, Fan; Mastorakos, Panagiotis; Mishra, Manoj K; Mangraviti, Antonella; Hwang, Lee; Zhou, Jinyuan; Hanes, Justin; Brem, Henry; Olivi, Alessandro; Tyler, Betty; Kannan, Rangaramanujam M

    2015-06-01

    Effective blood-brain tumor barrier penetration and uniform solid tumor distribution can significantly enhance therapeutic delivery to brain tumors. Hydroxyl-functionalized, generation-4 poly(amidoamine) (PAMAM) dendrimers, with their small size, near-neutral surface charge, and the ability to selectively localize in cells associated with neuroinflammation may offer new opportunities to address these challenges. In this study we characterized the intracranial tumor biodistribution of systemically delivered PAMAM dendrimers in an intracranial rodent gliosarcoma model using fluorescence-based quantification methods and high resolution confocal microscopy. We observed selective and homogeneous distribution of dendrimer throughout the solid tumor (∼6 mm) and peritumoral area within fifteen minutes after systemic administration, with subsequent accumulation and retention in tumor associated microglia/macrophages (TAMs). Neuroinflammation and TAMs have important growth promoting and pro-invasive effects in brain tumors. The rapid clearance of systemically administered dendrimers from major organs promises minimal off-target adverse effects of conjugated drugs. Therefore, selective delivery of immunomodulatory molecules to TAM, using hydroxyl PAMAM dendrimers, may hold promise for therapy of glioblastoma. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Directory of Open Access Journals (Sweden)

    E. E. Louvaris

    2017-10-01

    Full Text Available A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time, coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60–75 % of the cooking OA (COA) at concentrations around 500 µg m−3 consisted of low-volatility organic compounds (LVOCs), 20–30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol−1 and the effective accommodation coefficient was 0.06–0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  17. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Science.gov (United States)

    Louvaris, Evangelos E.; Karnezi, Eleni; Kostenidou, Evangelia; Kaltsonoudis, Christos; Pandis, Spyros N.

    2017-10-01

    A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60-75 % of the cooking OA (COA) at concentrations around 500 µg m-3 consisted of low-volatility organic compounds (LVOCs), 20-30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol-1 and the effective accommodation coefficient was 0.06-0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  18. A new method to model thickness distribution and estimate volume and extent of tephra fall deposits

    Science.gov (United States)

    Yang, Q.; Bursik, M. I.

    2016-12-01

    The most straightforward way to understand tephra fall deposits is through isopach maps. Hand-drawn mapping and interpolation are common tools in depicting the thickness distribution. Hand-drawn methods tend to increase the smoothness of the isopachs, while the local variations in the thickness measurements, which may be generated from important but subtle processes during and after eruptions, are neglected. Here we present a GIS-based method for modeling tephra thickness distribution with less subjectivity. This method assumes that under a log-scale transformation, the tephra thickness distribution is the sum of an exponential trend and local variations. The trend assumes a stable wind field during eruption, and is characterized by both distance and a measure of downwind distance, which is used to denote the influence of wind during tephra transport. The local variations are modeled through ordinary kriging, using the residuals from fitting the trend. This method has been applied to the published thickness datasets of Fogo Member A and Bed 1 of North Mono eruption (Fig. 1). The resultant contours and volume estimations are in general consistent with previous studies; differences between results from hand-drawn maps and model highlight inconsistencies in hand-drawing, and provide a quantitative basis for interpretation. Divergences from a stable wind field as reflected in isopach data are readily noticed. In this respect, wind direction was stable during North Mono Bed 1 deposition, and, although weak in the case of Fogo A, was not unidirectional. The multiple lobes of Fogo A are readily distinguished in the model isopachs, suggesting that separate lobes can in general be distinguished given sufficient data. A "plus-one" transformation based on this method is used to estimate fall deposit extent, which should prove useful in hypothesizing where one should find a particular tephra deposit. A limitation is that one must initialize the algorithm with an estimate of

  19. Maximum Likelihood Estimates of Parameters in Various Types of Distribution Fitted to Important Data Cases.

    OpenAIRE

    Hirose, Hideo

    1998-01-01

    TYPES OF THE DISTRIBUTION: Normal distribution (2-parameter); Uniform distribution (2-parameter); Exponential distribution (2-parameter); Weibull distribution (2-parameter); Gumbel distribution (2-parameter); Weibull/Frechet distribution (3-parameter); Generalized extreme-value distribution (3-parameter); Gamma distribution (3-parameter); Extended Gamma distribution (3-parameter); Log-normal distribution (3-parameter); Extended Log-normal distribution (3-parameter); Generalized ...

  20. An Integrated Architecture for Distributed Estimation, Navigation and Control of Aerospace Systems

    Science.gov (United States)

    Vu, Thanh

    Distributed autonomous systems have demonstrated many advantageous features over their centralized counterparts. In exchange for their scalability and robustness, there are additional complexities in obtaining a global behavior from local interactions. These complexities stem from a coordination problem while being provided a limited information set. Despite this information constraint, it is still desirable for agents to construct an estimate of the entire system for coordination purposes. This work proposes an architecture for the coordination and control of a multi-agent system while being limited in external communication. This architecture will examine how to internally estimate states, assign goals, and finally execute a suitable control. These techniques are first applied to a simple linear model and then extended to non-linear domains. Specific aerospace applications are explored where there exists a need for accuracy and precision.

  1. The estimation of tree posterior probabilities using conditional clade probability distributions.

    Science.gov (United States)

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.

  2. A variable step-size strategy for distributed estimation over adaptive networks

    Science.gov (United States)

    Bin Saeed, Muhammad O.; Zerguine, Azzedine; Zummo, Salam A.

    2013-12-01

    A lot of work has been done recently to develop algorithms that utilize the distributed structure of an ad hoc wireless sensor network to estimate a certain parameter of interest. One such algorithm is called diffusion least-mean squares (DLMS). This algorithm estimates the parameter of interest using the cooperation between neighboring sensors within the network. The present work proposes an improvement on the DLMS algorithm by using a variable step-size LMS (VSSLMS) algorithm. In this work, first, the well-known variants of VSSLMS algorithms are compared with each other in order to select the most suitable algorithm which provides the best trade-off between performance and complexity. Second, the detailed convergence and steady-state analyses of the selected VSSLMS algorithm are performed. Finally, extensive simulations are carried out to test the robustness of the proposed algorithm under different scenarios. Moreover, the simulation results are found to corroborate the theoretical findings very well.
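
    A compact adapt-then-combine diffusion LMS sketch with one simple energy-based variable step-size rule (the specific VSSLMS variant selected and analyzed in the paper is not reproduced; the network, signals, and constants below are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, T = 10, 4, 2000                    # nodes, filter taps, iterations
    w_true = rng.normal(size=M)              # parameter of interest
    A = np.full((N, N), 1.0 / N)             # uniform combination weights (fully connected)
    W = np.zeros((N, M))                     # per-node estimates
    mu = np.full(N, 0.05)                    # per-node step sizes

    for _ in range(T):
        psi = np.empty_like(W)
        for k in range(N):                   # adapt: local LMS update at each node
            u = rng.normal(size=M)
            d = u @ w_true + 0.1 * rng.normal()
            e = d - u @ W[k]
            psi[k] = W[k] + mu[k] * e * u
            mu[k] = min(0.97 * mu[k] + 0.01 * e**2, 0.1)  # illustrative variable step size
        W = A @ psi                          # combine: average neighbours' estimates

    print("mean-square deviation:", np.mean(np.sum((W - w_true)**2, axis=1)))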

  3. Flood quantiles estimation based on theoretically derived distributions: regional analysis in Southern Italy

    Directory of Open Access Journals (Sweden)

    V. Iacobellis

    2011-03-01

    Full Text Available A regional probabilistic model for the estimation of medium-high return period flood quantiles is presented. The model is based on the use of theoretically derived probability distributions of annual maximum flood peaks (DDF). The general model is called TCIF (Two-Component IF model) and encompasses two different threshold mechanisms associated with ordinary and extraordinary events, respectively. Based on at-site calibration of this model for 33 gauged sites in Southern Italy, a regional analysis is performed obtaining satisfactory results for the estimation of flood quantiles for return periods of technical interest, thus suggesting the use of the proposed methodology for the application to ungauged basins. The model is validated by using a jack-knife cross-validation technique taking all river basins into consideration.

  4. Flood quantiles estimation based on theoretically derived distributions: regional analysis in Southern Italy

    Science.gov (United States)

    Iacobellis, V.; Gioia, A.; Manfreda, S.; Fiorentino, M.

    2011-03-01

    A regional probabilistic model for the estimation of medium-high return period flood quantiles is presented. The model is based on the use of theoretically derived probability distributions of annual maximum flood peaks (DDF). The general model is called TCIF (Two-Component IF model) and encompasses two different threshold mechanisms associated with ordinary and extraordinary events, respectively. Based on at-site calibration of this model for 33 gauged sites in Southern Italy, a regional analysis is performed obtaining satisfactory results for the estimation of flood quantiles for return periods of technical interest, thus suggesting the use of the proposed methodology for the application to ungauged basins. The model is validated by using a jack-knife cross-validation technique taking all river basins into consideration.

  5. Novel receivers for AF relaying with distributed STBC using cascaded and disintegrated channel estimation

    KAUST Repository

    Khan, Fahd Ahmed

    2012-04-01

    New coherent receivers are derived for a pilot-symbol-aided distributed space-time block-coded system with imperfect channel state information; these receivers do not perform channel estimation at the destination but instead use the received pilot signals directly for decoding. The derived receivers are based on new metrics that use the distributions of the channels and the noise to achieve improved symbol-error-rate (SER) performance. The SER performance of the derived receivers is further improved by utilizing the decision history in the receivers. The decision history is also incorporated in the existing Euclidean metric to improve its performance. Simulation results show that, for 16-quadrature-amplitude-modulation in a Rayleigh fading channel, a performance gain of up to 2.5 dB can be achieved for the new receivers compared with the conventional mismatched coherent receiver. © 2012 IEEE.

  6. Estimating the Upper Limit of Lifetime Probability Distribution, Based on Data of Japanese Centenarians.

    Science.gov (United States)

    Hanayama, Nobutane; Sibuya, Masaaki

    2016-08-01

    In modern biology, theories of aging fall mainly into two groups: damage theories and programmed theories. If programmed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and any longevity record will eventually be broken. In this article, to examine the actual state of affairs, a special type of binomial model based on the generalized Pareto distribution has been applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population is estimated to be 123 years. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Distributed Baseflow Estimation for a Regional Basin in the Context of the SWOT Hydrology Mission

    Science.gov (United States)

    Baratelli, F.; Flipo, N.; Labarthe, B.

    2016-12-01

    The quantification of aquifer contribution to river discharge is of primary importance to evaluate the impact of climatic and anthropogenic stresses on the availability of water resources. Several baseflow estimation methods require river discharge observations, which can be difficult to obtain at high spatio-temporal resolution for large scale basins. The SWOT mission will provide such observations for large rivers (50 - 100 m wide) even in remote basins. The objective of this work is to develop a methodology that uses the SWOT discharge time series to perform a distributed baseflow estimation for basins at regional or larger scale (> 10000 km2). To this aim, an algorithm based on hydrograph separation by Chapman's filter was developed to automatically estimate the baseflow in a regional river network at a sub-kilometer resolution and daily time step. This algorithm was applied to the Seine River basin (65000 km2, France), where the baseflow was estimated between 1993 and 2015. As a first step, the algorithm was applied using the discharge time series simulated at daily time step by a coupled hydrological-hydrogeological model. The relevance of the methodology with input discharge time series which are coherent with SWOT space and time scales is then discussed.
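
    A minimal sketch of the hydrograph-separation step, assuming the one-parameter Chapman (1991) filter form commonly quoted in the baseflow literature (the authors' implementation may differ in detail):

    import numpy as np

    def chapman_baseflow(q, a=0.925):
        # Quickflow filter qf[i] = (3a-1)/(3-a)*qf[i-1] + 2/(3-a)*(q[i] - a*q[i-1]);
        # baseflow is the remainder, constrained to 0 <= qb <= q.
        qf = np.zeros_like(q)
        c1, c2 = (3 * a - 1) / (3 - a), 2 / (3 - a)
        for i in range(1, len(q)):
            qf[i] = min(max(c1 * qf[i - 1] + c2 * (q[i] - a * q[i - 1]), 0.0), q[i])
        return q - qf

    t = np.arange(365.0)                     # synthetic daily discharge: recessions + noise
    q = 5 + 3 * np.exp(-(t % 60) / 10.0) + 0.2 * np.random.default_rng(0).random(365)
    qb = chapman_baseflow(q)
    print(f"baseflow index = {qb.sum() / q.sum():.2f}")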

  8. Milk cow feed intake and milk production and distribution estimates for Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Beck, D.M.; Darwin, R.F.; Erickson, A.R.; Eckert, R.L.

    1992-04-01

    This report provides initial information on milk production and distribution in the Hanford Environmental Dose Reconstruction (HEDR) Project Phase I study area. The Phase I study area consists of eight counties in central Washington and two counties in northern Oregon. The primary objective of the HEDR Project is to develop estimates of the radiation doses populations could have received from Hanford operations. The objective of Phase I of the project was to determine the feasibility of reconstructing data and models, and of developing preliminary estimates of the doses received by people living in the ten counties surrounding Hanford from 1944 to 1947. One of the most important contributors to radiation doses from Hanford during the period of interest was radioactive iodine. Consumption of milk from cows that ate vegetation contaminated with iodine is likely the dominant pathway of human exposure. To estimate the doses people could have received from this pathway, it is necessary to estimate the amount of milk that the people living in the Phase I area consumed, the source of the milk, and the type of feed that the milk cows ate. The objective of the milk model subtask is to identify the sources of milk supplied to residents of each community in the study area as well as the sources of feeds that were fed to the milk cows. In this report, we focus on Grade A cow's milk (fresh milk used for human consumption).

  9. A new estimate of carbon for Bangladesh forest ecosystems with their spatial distribution and REDD+ implications

    DEFF Research Database (Denmark)

    Mukul, Sharif A.; Biswas, Shekhar R.; Rashid, A. Z. M. Manzoor

    2014-01-01

    in forest ecosystems. Using available published data, we provide here a new and more reliable estimate of carbon in Bangladesh forest ecosystems, along with their geo-spatial distribution. Our study reveals great variability in carbon density in different forests and higher carbon stock in the mangrove...... ecosystems, followed by in hill forests and in inland Sal (Shorea robusta) forests in the country. Due to its coverage, degraded nature, and diverse stakeholder engagement, the hill forests of Bangladesh can be used to obtain maximum REDD+ benefits. Further research on carbon and biodiversity in under...

  10. Estimating the spatial distribution of artificial groundwater recharge using multiple tracers.

    Science.gov (United States)

    Moeck, Christian; Radny, Dirk; Auckenthaler, Adrian; Berg, Michael; Hollender, Juliane; Schirmer, Mario

    2017-10-01

    Stable isotopes of water, organic micropollutants and hydrochemistry data are powerful tools for identifying different water types in areas where knowledge of the spatial distribution of different groundwater is critical for water resource management. An important question is how the assessments change if only one or a subset of these tracers is used. In this study, we estimate spatial artificial infiltration along an infiltration system with stage-discharge relationships and classify different water types based on the mentioned hydrochemistry data for a drinking water production area in Switzerland. Managed aquifer recharge via surface water that feeds into the aquifer creates a hydraulic barrier between contaminated groundwater and drinking water wells. We systematically compare the information from the aforementioned tracers and illustrate differences in distribution and mixing ratios. Despite uncertainties in the mixing ratios, we found that the overall spatial distribution of artificial infiltration is very similar for all the tracers. The highest infiltration occurred in the eastern part of the infiltration system, whereas infiltration in the western part was the lowest. More balanced infiltration within the infiltration system could cause the elevated groundwater mound to be distributed more evenly, preventing the natural inflow of contaminated groundwater. Dedicated to Professor Peter Fritz on the occasion of his 80th birthday.

  11. Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data.

    Science.gov (United States)

    Das, Jayajit; Mukherjee, Sayak; Hodge, Susan E

    2015-07-01

    A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
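
    A discrete toy version of the n > m setting: X = (x1, x2) on a grid is inferred from an assumed observed distribution of Y = x1 + x2 by numerically maximizing entropy under the linear push-forward constraints (a generic optimizer stands in for the paper's Lagrange-multiplier-free construction):

    import numpy as np
    from scipy.optimize import minimize

    vals = np.arange(6)                                     # X = (x1, x2) on a 6 x 6 grid
    pairs = np.array([(i, j) for i in vals for j in vals])
    y_of = pairs.sum(axis=1)                                # Y = x1 + x2 for each state

    p_y = np.zeros(11)                                      # assumed observed P(y), y = 0..10
    p_y[:6] = np.linspace(1, 2, 6)
    p_y /= p_y.sum()
    A = np.array([(y_of == y).astype(float) for y in range(11)])  # push-forward constraints

    def neg_entropy(q):
        q = np.clip(q, 1e-12, None)
        return np.sum(q * np.log(q))

    res = minimize(neg_entropy, x0=np.full(36, 1 / 36), method="SLSQP",
                   bounds=[(0, 1)] * 36,
                   constraints={"type": "eq", "fun": lambda q: A @ q - p_y})
    Q = res.x.reshape(6, 6)                # MaxEnt spreads each p_y uniformly over its y-slice
    print("equal mass within a slice, e.g. y=2:", Q[0, 2], Q[1, 1], Q[2, 0])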

  12. Numerical estimation of heat distribution from the implantable battery system of an undulation pump LVAD.

    Science.gov (United States)

    Okamoto, Eiji; Makino, Tsutomu; Nakamura, Masatoshi; Tanaka, Shuji; Chinzei, Tsuneo; Abe, Yusuke; Isoyama, Takashi; Saito, Itsuro; Mochizuki, Shu-ichi; Imachi, Kou; Inoue, Yusuke; Mitamura, Yoshinori

    2006-01-01

    We have been developing an implantable battery system using three series-connected lithium ion batteries having an energy capacity of 1,800 mAh to drive an undulation pump left ventricular assist device. However, the lithium ion battery undergoes an exothermic reaction during the discharge phase, and the temperature rise of the lithium ion battery is a critical issue for implantation usage. Heat generation in the lithium ion battery depends on the intensity of the discharge current, and we obtained a relationship between the heat flow from the lithium ion battery q(c)(I) and the intensity of the discharge current I as q(c)(I) = 0.63 x I (W) in in vitro experiments. The temperature distribution of the implantable battery system was estimated by means of three-dimensional finite-element method (FEM) heat transfer analysis using the heat flow function q(c)(I), and we also measured the temperature rise of the implantable battery system in in vitro experiments to verify the estimation. The maximum temperatures of the lithium ion battery and the implantable battery case were measured as 52.2 degrees C and 41.1 degrees C, respectively. The estimated temperature distribution of the implantable battery system agreed well with the results measured using thermography. In conclusion, FEM heat transfer analysis is promising as a tool to estimate the temperature of the implantable lithium ion battery system under any pump current without the need for animal experiments, and it is a convenient tool for optimization of the heat transfer characteristics of the implantable battery system.

  13. ROV advanced magnetic survey for revealing archaeological targets and estimating medium magnetization

    Science.gov (United States)

    Eppelbaum, Lev

    2013-04-01

    magnetic field for the models of thin bed, thick bed and horizontal circular cylinder; some of these procedures demand performing measurements at two levels over the earth's surface), (6) advanced 3D magnetic-gravity modeling for complex media, and (7) development of a 3D physical-archaeological (or magnetic-archaeological) model of the studied area. ROV observations also permit a multimodel approach to magnetic data analysis (Eppelbaum, 2005). Results of the performed 3D modeling confirm the effectiveness of the proposed ROV low-altitude survey. Khesin's methodology (Khesin et al., 2006) for estimation of the magnetization of the upper geological section consists of land magnetic observations along a profile on inclined relief with consequent data processing (this method cannot be applied at flat topography). The improved modification of this approach is based on a combination of straight and inclined ROV observations, which will help to obtain parameters of the medium magnetization in areas of flat terrain relief. ACKNOWLEDGEMENT This investigation is funded by the Tel Aviv University - Cyprus Research Institute combined project "Advanced coupled electric-magnetic archaeological prospecting in Cyprus and Israel". REFERENCES Eppelbaum, L.V., 2005. Multilevel observations of magnetic field at archaeological sites as additional interpreting tool. Proceed. of the 6th Conference of Archaeological Prospection, Roma, Italy, 1-4. Eppelbaum, L.V., 2010. Archaeological geophysics in Israel: Past, Present and Future. Advances of Geosciences, 24, 45-68. Eppelbaum, L.V., 2011. Study of magnetic anomalies over archaeological targets in urban conditions. Physics and Chemistry of the Earth, 36, No. 16, 1318-1330. Eppelbaum, L.V., Alperovich, L., Zheludev, V. and Pechersky, A., 2011. Application of informational and wavelet approaches for integrated processing of geophysical data in complex environments. Proceed. of the 2011 SAGEEP Conference, Charleston, South Carolina

  14. Estimation of Corrosion Fatigue Lives Based on the Variations of the Crack Lengths Distributions During Stress Cycling

    OpenAIRE

    Ishihara, Sotomi; Maekawa, Ichiro; Shiozawa, Kazuaki; Miyao, Kazyu

    1985-01-01

    Many small distributed cracks have been observed on specimens during the corrosion fatigue process, and the damage of corrosion fatigue is related to the behaviour of these distributed cracks. The distribution of crack lengths during corrosion fatigue was approximated well by the three-parameter Weibull distribution under plane-bending fatigue tests of carbon steel in salt water. A method of estimating corrosion fatigue lives was proposed. The crack initiation, crack growth behaviour and th...

  15. A Novel Method of Statistical Line Loss Estimation for Distribution Feeders Based on Feeder Cluster and Modified XGBoost

    Directory of Open Access Journals (Sweden)

    Shouxiang Wang

    2017-12-01

    Full Text Available The estimation of losses of distribution feeders plays a crucial guiding role for the planning, design, and operation of a distribution system. This paper proposes a novel estimation method of statistical line loss of distribution feeders using the feeder cluster technique and a modified eXtreme Gradient Boosting (XGBoost) algorithm that is based on the characteristic data of feeders that are collected in the smart power distribution and utilization system. In order to enhance the applicability and accuracy of the estimation model, a k-medoids algorithm with weighting distance for clustering distribution feeders is proposed. Meanwhile, a variable selection method for clustering distribution feeders is discussed, considering the correlation and validity of variables. This paper next modifies the XGBoost algorithm by adding a penalty function in consideration of the effect of the theoretical value to the loss function for the estimation of statistical line loss of distribution feeders. The validity of the proposed methodology is verified by 762 distribution feeders in the Shanghai distribution system. The results show that the XGBoost method has higher accuracy than decision tree, neural network, and random forests by comparison of Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Absolute Percentage Error (APE) indexes. In particular, the theoretical value can significantly improve the reasonability of estimated results.
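
    The penalty idea can be sketched with XGBoost's custom-objective hook; the quadratic penalty pulling predictions toward the theoretical line-loss value, its weight, and all data below are illustrative assumptions, not the paper's formulation:

    import numpy as np
    import xgboost as xgb

    def penalized_obj(theoretical, lam=0.1):
        # Squared error plus a quadratic pull toward theoretical line-loss values.
        def obj(preds, dtrain):
            y = dtrain.get_label()
            grad = (preds - y) + lam * (preds - theoretical)
            hess = np.full_like(preds, 1.0 + lam)
            return grad, hess
        return obj

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                  # stand-in feeder features
    y = 2 * X[:, 0] + rng.normal(0, 0.3, 500)      # stand-in statistical line loss
    theo = y + rng.normal(0, 0.2, 500)             # stand-in theoretical line loss
    dtrain = xgb.DMatrix(X, label=y)
    model = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                      num_boost_round=100, obj=penalized_obj(theo))
    rmse = float(np.sqrt(np.mean((model.predict(dtrain) - y) ** 2)))
    print("train RMSE:", rmse)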

  16. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and played a vital role promoting rapid growth of data collecting and analysis models, e.g., Internet of things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation on execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have drawn interests with detailed analysis report being made. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs.

  17. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Science.gov (United States)

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have been collected and analyzed in depth, with a detailed analysis report being made. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs. PMID:27589753

  18. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to studies carried out by research institutes, about 25% of new generation will come from Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by a Weighted Least-Squares (WLS) approach. The practical considerations include var compensators, Voltage Regulators (VRs) and Under Load Tap Changer (ULTC) transformers, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. Comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) on a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
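
    The hybrid is easiest to see in miniature: run a standard PSO to get a good incumbent, then hand that point to a Nelder-Mead simplex search for local refinement. The sketch below does exactly that on a toy weighted least-squares objective; the objective, bounds, and PSO hyperparameters are illustrative assumptions, not the paper's DSE formulation.

```python
# Minimal sketch of the PSO + Nelder-Mead idea on a generic weighted
# least-squares objective; the actual DSE problem (load/RES state variables,
# three-phase power flow constraints) is far richer than this toy function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def wls(x):
    # Toy stand-in for the weighted least-squares DSE objective.
    return np.sum((x - np.array([1.0, -2.0, 0.5])) ** 2 * np.array([1.0, 2.0, 4.0]))

# --- plain PSO phase ---
n, dim, iters = 30, 3, 100
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([wls(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([wls(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

# --- Nelder-Mead refinement of the PSO incumbent ---
res = minimize(wls, gbest, method="Nelder-Mead")
print("PSO best:", gbest, "refined:", res.x)
```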

  19. A framework for using niche models to estimate impacts of climate change on species distributions.

    Science.gov (United States)

    Anderson, Robert P

    2013-09-01

    Predicting species geographic distributions in the future is an important yet exceptionally challenging endeavor. Overall, it requires a two-step process: (1) a niche model characterizing suitability, applied to projections of future conditions and linked to (2) a dispersal/demographic simulation estimating the species' future occupied distribution. Despite limitations, for the vast majority of species, correlative approaches are the most feasible avenue for building niche models. In addition to myriad technical issues regarding model building, researchers should follow critical principles for selecting predictor variables and occurrence data, demonstrating effective performance in prediction across space, and extrapolating into nonanalog conditions. Many of these principles relate directly to the niche space, dispersal/demographic noise, biotic noise, and human noise assumptions defined here. Issues requiring progress include modeling interactions between abiotic variables, integrating biotic variables, considering genetic heterogeneity, and quantifying uncertainty. Once built, the niche model identifying currently suitable conditions must be processed to approximate the areas that the species occupies. That estimate serves as a seed for the simulation of persistence, dispersal, and establishment in future suitable areas. The dispersal/demographic simulation also requires data regarding the species' dispersal ability and demography, scenarios for future land use, and the capability of considering multiple interacting species simultaneously. © 2013 New York Academy of Sciences.

  20. Habitat Preferences, Distribution Pattern, and Root Weight Estimation of Pasak Bumi (Eurycoma longifolia Jack)

    Directory of Open Access Journals (Sweden)

    Siti Masitoh Kartikawati

    2014-04-01

    Full Text Available Pasak bumi (Eurycoma longifolia Jack) is a non-timber forest product with “indeterminate” conservation status that is commercially traded in West Kalimantan. The research objective was to determine the potential of pasak bumi root per hectare and its ecological condition in its natural habitat. Root weight of E. longifolia Jack was estimated using simple linear regression and exponential equations with stem diameter and height as independent variables. The results showed that the population numbered 114 individuals, the majority in the seedling stage with 71 individuals (62.28%). The distribution followed a clumped pattern. Conditions of the habitat can be described as follows: daily average temperature of 25.6 °C, daily average relative humidity of 73.6%, light intensity of 0.9 klx, and red-yellow podsolic soil with texture ranging from clay to sandy clay. The selected estimator model for E. longifolia Jack root weight used an exponential equation with stem height as the independent variable, Y = 21.99T^0.010, with a determination coefficient of 0.97. After the height variable was added, the potential minimum root weight of E. longifolia Jack that could be harvested per hectare was 0.33 kg. Keywords: Eurycoma longifolia, habitat preference, distribution pattern, root weight
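
    The selected estimator Y = 21.99T^0.010 is a power function of stem height, which can be fitted by ordinary least squares in log-log space. Below is a minimal sketch of that fit on synthetic height/weight pairs; the units and data are illustrative assumptions.

```python
# Sketch: fitting a power-function estimator Y = a * T**b (root weight vs.
# stem height) by ordinary least squares in log-log space. Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
T = rng.uniform(20, 150, 40)                         # stem height, synthetic
Y = 21.99 * T ** 0.010 * rng.lognormal(0, 0.01, 40)  # root weight, synthetic

b, log_a = np.polyfit(np.log(T), np.log(Y), 1)       # ln Y = ln a + b ln T
a = np.exp(log_a)

Y_hat = a * T ** b
r2 = 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean()) ** 2)
print(f"Y = {a:.2f} * T^{b:.3f},  R^2 = {r2:.2f}")
```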

  1. A Survey on Distributed Estimation and Control Applications Using Linear Consensus Algorithms

    Science.gov (United States)

    Garin, Federica; Schenato, Luca

    In this chapter we present a popular class of distributed algorithms, known as linear consensus algorithms, which have the ability to compute the global average of local quantities. These algorithms are particularly suitable in the context of multi-agent systems and networked control systems, i.e. control systems that are physically distributed and cooperate by exchanging information through a communication network. We present the main results available in the literature about the analysis and design of linear consensus algorithms, for both synchronous and asynchronous implementations. We then show that many control, optimization and estimation problems such as least squares, sensor calibration, vehicle coordination and Kalman filtering can be cast as the computation of some sort of average, and are therefore suitable for consensus algorithms. We finally conclude by presenting very recent studies about the performance of many of these control and estimation problems, which give rise to novel metrics for the consensus algorithms. These indexes of performance are rather different from more traditional metrics like the rate of convergence and have fundamental consequences on the design of consensus algorithms.
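
    For concreteness, here is a minimal synchronous linear consensus iteration x(k+1) = W x(k) on a small undirected graph, with Metropolis weights so the states converge to the average of the initial local values; the topology and values are arbitrary examples.

```python
# Minimal synchronous linear consensus (distributed averaging) sketch on a
# small undirected graph, using Metropolis weights so that states converge
# to the global average of the initial local values.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # example topology
n = 4
deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((n, n))
for i, j in edges:                                  # Metropolis weights
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

x = np.array([3.0, -1.0, 5.0, 1.0])                 # local measurements
for _ in range(50):
    x = W @ x                                       # one gossip round
print("consensus state:", x, "true average:", 2.0)
```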

  2. Using passive fiber-optic distributed temperature sensing to estimate soil water content at a discontinuous permafrost site

    Science.gov (United States)

    Wagner, A. M.; Lindsey, N.; Ajo Franklin, J. B.; Gelvin, A.; Saari, S.; Ekblaw, I.; Ulrich, C.; Dou, S.; James, S. R.; Martin, E. R.; Freifeld, B. M.; Bjella, K.; Daley, T. M.

    2016-12-01

    We present preliminary results from an experimental study targeting the use of passive fiber-optic distributed temperature sensing (DTS) in a variety of geometries to estimate moisture content evolution in a dynamic permafrost system. A 4 km continuous 2D array of multi-component fiber optic cable (6 SM/6 MM) was buried at the Fairbanks Permafrost Experiment Station to investigate the possibility of using fiber optic distributed sensing as an early detection system for permafrost thaw. A heating experiment using 120 heaters of 60 W each was conducted in a 140 m2 area to artificially thaw the topmost section of permafrost. The soils at the site are primarily silt, but some disturbed areas include backfilled gravel to depths of approximately 1.0 m. Where permafrost exists, the depth to permafrost ranges from 1.5 to approximately 5 m. The experiment was also used to spatially estimate the soil water content distribution throughout the fiber optic array. The horizontal fiber optic cable was buried at depths between 10 and 20 cm. Soil temperatures were monitored with a DTS system at 25 cm increments along the length of the fiber. At five locations, soil water content time-domain reflectometer (TDR) probes were also installed at two depths, in line with the fiber optic cable and 15 to 25 cm below the cable. The moisture content along the fiber optic array was estimated using diurnal effects from the dual-depth temperature measurements. In addition to the horizontally installed fiber optic cable, vertical lines of fiber optic cable were also installed inside and outside the heater plot to a depth of 10 m in small diameter (2 cm) boreholes. These arrays were installed in conjunction with thermistor strings and are used to monitor the thawing process and to cross-correlate with soil temperatures at the depth of the TDR probes. Results will be presented from the initiation of the artificial thawing through subsequent freeze-up. A comparison of the DTS measured temperatures and

  3. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.

  4. Estimating the spatial and temporal distribution of species richness within Sequoia and Kings Canyon National Parks.

    Science.gov (United States)

    Wathen, Steve; Thorne, James H; Holguin, Andrew; Schwartz, Mark W

    2014-01-01

    Evidence for significant losses of species richness or biodiversity, even within protected natural areas, is mounting. Managers are increasingly being asked to monitor biodiversity, yet estimating biodiversity is often prohibitively expensive. As a cost-effective option, we estimated the spatial and temporal distribution of species richness for four taxonomic groups (birds, mammals, herpetofauna (reptiles and amphibians), and plants) within Sequoia and Kings Canyon National Parks using only existing biological studies undertaken within the Parks and the Parks' long-term wildlife observation database. We used a rarefaction approach to model species richness for the four taxonomic groups and analyzed those groups by habitat type, elevation zone, and time period. We then mapped the spatial distributions of species richness values for the four taxonomic groups, as well as total species richness, for the Parks. We also estimated changes in species richness for birds, mammals, and herpetofauna since 1980. The modeled patterns of species richness either peaked at mid elevations (mammals, plants, and total species richness) or declined consistently with increasing elevation (herpetofauna and birds). Plants reached maximum species richness values at much higher elevations than did vertebrate taxa, and non-flying mammals reached maximum species richness values at higher elevations than did birds. Alpine plant communities, including sagebrush, had higher species richness values than did subalpine plant communities located below them in elevation. These results are supported by other papers published in the scientific literature. Perhaps reflecting climate change, birds and herpetofauna displayed declines in species richness since 1980 at low and middle elevations, and mammals displayed declines in species richness since 1980 at all elevations.
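
    The rarefaction approach has a compact closed form: for per-species record counts N_i summing to N, the expected richness in a random subsample of n records is E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)]. A minimal sketch with made-up counts:

```python
# Sketch of sample-based rarefaction: the expected number of species in a
# random subsample of n records, computed from per-species record counts.
from math import comb

def rarefied_richness(counts, n):
    """Expected species richness in a subsample of n records (hypergeometric)."""
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Synthetic record counts per species for one habitat/elevation stratum.
counts = [120, 60, 30, 15, 8, 4, 2, 1, 1, 1]
for n in (10, 50, 100, 200):
    print(f"n={n:4d}: E[S] = {rarefied_richness(counts, n):.2f}")
```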

  5. Estimation of the volume of distribution of some pharmacologically important compounds from their structural descriptor

    Directory of Open Access Journals (Sweden)

    MOHAMMAD H. FATEMI

    2011-07-01

    Full Text Available Quantitative structure–activity relationship (QSAR) approaches were used to estimate the volume of distribution (Vd) using an artificial neural network (ANN). The data set consisted of the volume of distribution of 129 pharmacologically important compounds, i.e., benzodiazepines, barbiturates, nonsteroidal anti-inflammatory drugs (NSAIDs), tricyclic anti-depressants and some antibiotics, such as beta-lactams, tetracyclines and quinolones. The descriptors, which were selected by stepwise variable selection methods, were: the Moriguchi octanol–water partition coefficient; the 3D-MoRSE signal 30, weighted by atomic van der Waals volumes; the fragment-based polar surface area; the d COMMA2 value, weighted by atomic masses; the Geary autocorrelation, weighted by the atomic Sanderson electronegativities; the 3D-MoRSE signal 02, weighted by atomic masses; and the Geary autocorrelation, lag 5, weighted by the atomic van der Waals volumes. These descriptors were used as inputs for developing multiple linear regression (MLR) and artificial neural network models as linear and non-linear feature mapping techniques, respectively. The standard errors in the estimation of Vd by the MLR model were 0.104, 0.103 and 0.076, and for the ANN model 0.029, 0.087 and 0.082, for the training, internal and external validation tests, respectively. The robustness of these models was also evaluated by the leave-5-out cross-validation procedure, which gives Q2 = 0.72 for the MLR model and Q2 = 0.82 for the ANN model. Moreover, the results of the Y-randomization test revealed that there were no chance correlations in the data matrix. In conclusion, the results of this study indicate the feasibility of estimating the Vd value of drugs from their structural molecular descriptors. Furthermore, the statistics of the developed models indicate the superiority of the ANN over the MLR model.
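
    The MLR-versus-ANN comparison reduces to fitting a linear model and a small feed-forward network on the same descriptor matrix and comparing held-out errors. A minimal sketch with a synthetic 129 x 7 descriptor matrix (the paper's actual descriptors and network architecture are not reproduced here):

```python
# Sketch comparing multiple linear regression and a small neural network for
# predicting volume of distribution from molecular descriptors. The 129 x 7
# design mirrors the paper only in shape; all data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.normal(size=(129, 7))                   # 7 selected descriptors
y = X @ rng.normal(size=7) + 0.3 * np.sin(2 * X[:, 0]) + rng.normal(0, 0.1, 129)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

for name, model in [("MLR", mlr), ("ANN", ann)]:
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: external-test RMSE = {rmse:.3f}")
```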

  6. Estimating the spatial and temporal distribution of species richness within Sequoia and Kings Canyon National Parks.

    Directory of Open Access Journals (Sweden)

    Steve Wathen

    Full Text Available Evidence for significant losses of species richness or biodiversity, even within protected natural areas, is mounting. Managers are increasingly being asked to monitor biodiversity, yet estimating biodiversity is often prohibitively expensive. As a cost-effective option, we estimated the spatial and temporal distribution of species richness for four taxonomic groups (birds, mammals, herpetofauna (reptiles and amphibians), and plants) within Sequoia and Kings Canyon National Parks using only existing biological studies undertaken within the Parks and the Parks' long-term wildlife observation database. We used a rarefaction approach to model species richness for the four taxonomic groups and analyzed those groups by habitat type, elevation zone, and time period. We then mapped the spatial distributions of species richness values for the four taxonomic groups, as well as total species richness, for the Parks. We also estimated changes in species richness for birds, mammals, and herpetofauna since 1980. The modeled patterns of species richness either peaked at mid elevations (mammals, plants, and total species richness) or declined consistently with increasing elevation (herpetofauna and birds). Plants reached maximum species richness values at much higher elevations than did vertebrate taxa, and non-flying mammals reached maximum species richness values at higher elevations than did birds. Alpine plant communities, including sagebrush, had higher species richness values than did subalpine plant communities located below them in elevation. These results are supported by other papers published in the scientific literature. Perhaps reflecting climate change, birds and herpetofauna displayed declines in species richness since 1980 at low and middle elevations, and mammals displayed declines in species richness since 1980 at all elevations.

  7. Bayesian distributed lag models: estimating effects of particulate matter air pollution on daily mortality.

    Science.gov (United States)

    Welty, L J; Peng, R D; Zeger, S L; Dominici, F

    2009-03-01

    A distributed lag model (DLagM) is a regression model that includes lagged exposure variables as covariates; its corresponding distributed lag (DL) function describes the relationship between the lag and the coefficient of the lagged exposure variable. DLagMs have recently been used in environmental epidemiology for quantifying the cumulative effects of weather and air pollution on mortality and morbidity. Standard methods for formulating DLagMs include unconstrained, polynomial, and penalized spline DLagMs. These methods may fail to take full advantage of prior information about the shape of the DL function for environmental exposures, or for any other exposure with effects that are believed to smoothly approach zero as lag increases, and are therefore at risk of producing suboptimal estimates. In this article, we propose a Bayesian DLagM (BDLagM) that incorporates prior knowledge about the shape of the DL function and also allows the degree of smoothness of the DL function to be estimated from the data. We apply our BDLagM to its motivating data from the National Morbidity, Mortality, and Air Pollution Study to estimate the short-term health effects of particulate matter air pollution on mortality from 1987 to 2000 for Chicago, Illinois. In a simulation study, we compare our Bayesian approach with alternative methods that use unconstrained, polynomial, and penalized spline DLagMs. We also illustrate the connection between BDLagMs and penalized spline DLagMs. Software for fitting BDLagM models and the data used in this article are available online.
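
    To make the DLagM structure concrete, the sketch below builds a lagged exposure design matrix and recovers a smoothly decaying DL function from synthetic mortality data; a ridge penalty stands in for the paper's Bayesian smoothness prior, and all numbers are invented.

```python
# Sketch of a distributed lag model: regress daily mortality on current and
# lagged pollution, then read the estimated DL function off the lag
# coefficients. (The paper's Bayesian smoothing prior is not reproduced; a
# ridge penalty stands in for smoothness.) Synthetic data throughout.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
days, L = 1000, 7                                  # series length, max lag
pm = rng.gamma(2.0, 10.0, days)                    # daily PM10 levels
true_dl = np.array([0.5, 0.35, 0.2, 0.1, 0.05, 0.02, 0.0, 0.0])  # decays to 0

# Lagged design matrix: column j holds the exposure shifted by j days.
Xlag = np.column_stack([pm[L - j: days - j] for j in range(L + 1)])
mort = Xlag @ true_dl + rng.normal(0, 2.0, days - L)

dl_hat = Ridge(alpha=5.0).fit(Xlag, mort).coef_
print("estimated DL function:", np.round(dl_hat, 3))
```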

  8. Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks

    Science.gov (United States)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Rahmani, Amirreza

    2011-01-01

    Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation-flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are proposed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms rely on a local information exchange network, relaxing the assumptions of existing algorithms. Distributed space systems rely on a signal transmission network among multiple spacecraft for their operation. Control and coordination among multiple spacecraft in a formation is facilitated via a network of relative sensing and interspacecraft communications. Guidance, navigation, and control rely on the sensing network. This network becomes more complex the more spacecraft are added, or as mission requirements become more complex. The observability of a formation state was examined via a set of local observations from a particular node in the formation. Formation observability can be parameterized in terms of the matrices appearing in the formation dynamics and observation matrices. An agreement protocol was used as a mechanism for observing formation states from local measurements. An agreement protocol is essentially an unforced dynamic system whose trajectory is governed by the interconnection geometry and initial condition of each node, with a goal of reaching a common value of interest. The observability of the interconnected system depends on the geometry of the network, as well as the position of the observer relative to the topology. For the first time, critical GN&C (guidance, navigation, and control estimation) subsystems are synthesized by bringing the contribution of the spacecraft information-exchange network to the forefront of algorithmic analysis and design. The result is a
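
    A standard way to check the claim that formation states can be observed from one node is the linear observability rank test. Below is a minimal sketch for a discrete-time agreement protocol x(k+1) = (I - epsilon*L)x(k) on a four-agent path graph, measured only at agent 0; the graph and step size are illustrative assumptions.

```python
# Sketch: checking whether formation states are observable from one node's
# local measurements, for agreement dynamics x(k+1) = A x(k) observed through
# C (node 0 only). Observability <=> rank [C; CA; ...; CA^(n-1)] = n.
import numpy as np

# Agreement-protocol dynamics on a path graph of 4 agents (Laplacian-based).
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
eps = 0.25
A = np.eye(4) - eps * L                # discrete-time agreement protocol
C = np.array([[1.0, 0.0, 0.0, 0.0]])   # only agent 0 is measured

n = A.shape[0]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("observability matrix rank:", np.linalg.matrix_rank(O), "of", n)
```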

  9. Distributed Extended Kalman Filter for Position, Velocity, Time, Estimation in Satellite Navigation Receivers

    Directory of Open Access Journals (Sweden)

    O. Jakubov

    2013-09-01

    Full Text Available Common techniques for position-velocity-time estimation in satellite navigation, iterative least squares and the extended Kalman filter, involve matrix operations. The matrix inversion and inclusion of a matrix library pose requirements on a computational power and operating platform of the navigation processor. In this paper, we introduce a novel distributed algorithm suitable for implementation in simple parallel processing units each for a tracked satellite. Such a unit performs only scalar sum, subtraction, multiplication, and division. The algorithm can be efficiently implemented in hardware logic. Given the fast position-velocity-time estimator, frequent estimates can foster dynamic performance of a vector tracking receiver. The algorithm has been designed from a factor graph representing the extended Kalman filter by splitting vector nodes into scalar ones resulting in a cyclic graph with few iterations needed. Monte Carlo simulations have been conducted to investigate convergence and accuracy. Simulation case studies for a vector tracking architecture and experimental measurements with a real-time software receiver developed at CTU in Prague were conducted. The algorithm offers compromises in stability, accuracy, and complexity depending on the number of iterations. In scenarios with a large number of tracked satellites, it can outperform the traditional methods at low complexity.
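
    For reference, the conventional matrix-based baseline that the abstract mentions, and that the scalar distributed algorithm is designed to avoid, is the iterative (Gauss-Newton) least-squares PVT fix from pseudoranges. A compact sketch with synthetic geometry (clock bias expressed in meters):

```python
# Sketch of iterative (Gauss-Newton) least-squares position/clock-bias
# estimation from pseudoranges -- the matrix-based baseline referred to in
# the abstract. Satellite geometry and measurements are synthetic.
import numpy as np

rng = np.random.default_rng(7)
sats = rng.normal(0, 2.0e7, (8, 3)) + np.array([0, 0, 2.0e7])  # positions (m)
x_true = np.array([1.0e6, -2.0e6, 3.0e5])
b_true = 150.0                                                 # clock bias (m)
rho = np.linalg.norm(sats - x_true, axis=1) + b_true + rng.normal(0, 1.0, 8)

x, b = np.zeros(3), 0.0
for _ in range(15):                       # Gauss-Newton iterations
    d = np.linalg.norm(sats - x, axis=1)
    resid = rho - (d + b)
    # Jacobian rows: [-unit vector toward satellite, 1]
    H = np.hstack([-(sats - x) / d[:, None], np.ones((8, 1))])
    dx, *_ = np.linalg.lstsq(H, resid, rcond=None)
    x, b = x + dx[:3], b + dx[3]

print("position error (m):", np.linalg.norm(x - x_true), "bias error:", abs(b - b_true))
```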

  10. Estimating the formation age distribution of continental crust by unmixing zircon ages

    Science.gov (United States)

    Korenaga, Jun

    2018-01-01

    Continental crust provides first-order control on Earth's surface environment, enabling the presence of stable dry landmasses surrounded by deep oceans. The evolution of continental crust is important for atmospheric evolution, because continental crust is an essential component of deep carbon cycle and is likely to have played a critical role in the oxygenation of the atmosphere. Geochemical information stored in the mineral zircon, known for its resilience to diagenesis and metamorphism, has been central to ongoing debates on the genesis and evolution of continental crust. However, correction for crustal reworking, which is the most critical step when estimating original formation ages, has been incorrectly formulated, undermining the significance of previous estimates. Here I suggest a simple yet promising approach for reworking correction using the global compilation of zircon data. The present-day distribution of crustal formation age estimated by the new "unmixing" method serves as the lower bound to the true crustal growth, and large deviations from growth models based on mantle depletion imply the important role of crustal recycling through the Earth history.

  11. Global distribution and origin of target site insecticide resistance mutations in Tetranychus urticae.

    Science.gov (United States)

    Ilias, A; Vontas, J; Tsagkarakou, A

    2014-05-01

    The control of Tetranychus urticae, a worldwide agricultural pest, is largely dependent on pesticides. However, their efficacy is often compromised by the development of resistance. Recent molecular studies identified a number of target site resistance mutations, such as G119S, A201S, T280A, G328A, F331W in the acetylcholinesterase gene, L1024V, A1215D, F1538I in the voltage-gated sodium channel gene, G314D and G326E in glutamate-gated chloride channel genes, G126S, I136T, S141F, D161G, P262T in the cytochrome b and the I1017F in the chitin synthase 1 gene. We examined their distribution, by sequencing the relevant gene fragments in a large number of T. urticae collections from a wide geographic range. Our study revealed that most of the resistance mutations are spread worldwide, with remarkably variable frequencies. Furthermore, we analyzed the variability of the ace locus, which has been subjected to longer periods of selection pressure historically, to investigate the evolutionary origin of ace resistant alleles and determine whether they resulted from single or multiple mutation events. By sequencing a 1540 bp ace fragment, encompassing the resistance mutations and downstream introns in 139 T. urticae individuals from 27 countries, we identified 6 susceptible and 31 resistant alleles which have arisen from at least three independent mutation events. The frequency and distribution of these ace haplotypes varied geographically, suggesting an interplay between different mutational events, gene flow and local selection. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Multidimensional metrics for estimating phage abundance, distribution, gene density, and sequence coverage in metagenomes

    Directory of Open Access Journals (Sweden)

    Ramy Karam Aziz

    2015-05-01

    Full Text Available Phages are the most abundant biological entities on Earth and play major ecological roles, yet the current sequenced phage genomes do not adequately represent their diversity, and little is known about the abundance and distribution of these sequenced genomes in nature. Although the study of phage ecology has benefited tremendously from the emergence of metagenomic sequencing, a systematic survey of phage genes and genomes in various ecosystems is still lacking, and fundamental questions about phage biology, lifestyle, and ecology remain unanswered. To address these questions and improve comparative analysis of phages in different metagenomes, we screened a core set of publicly available metagenomic samples for sequences related to completely sequenced phages using the web tool, Phage Eco-Locator. We then adopted and deployed an array of mathematical and statistical metrics for a multidimensional estimation of the abundance and distribution of phage genes and genomes in various ecosystems. Experiments using those metrics individually showed their usefulness in emphasizing the pervasive, yet uneven, distribution of known phage sequences in environmental metagenomes. Using these metrics in combination allowed us to resolve phage genomes into clusters that correlated with their genotypes and taxonomic classes as well as their ecological properties. We propose adding this set of metrics to current metaviromic analysis pipelines, where they can provide insight regarding phage mosaicism, habitat specificity, and evolution.

  13. 3D beam shape estimation based on distributed coaxial cable interferometric sensor

    Science.gov (United States)

    Cheng, Baokai; Zhu, Wenge; Liu, Jie; Yuan, Lei; Xiao, Hai

    2017-03-01

    We present a coaxial cable interferometer based distributed sensing system for 3D beam shape estimation. By making a series of reflectors on a coaxial cable, multiple Fabry-Perot cavities are created along it. Two cables are mounted on the beam at proper locations, and a vector network analyzer (VNA) is connected to them to obtain the complex reflection signal, which is used to calculate the strain distribution of the beam in the horizontal and vertical planes. With 6 GHz swept bandwidth on the VNA, the spatial resolution for distributed strain measurement is 0.1 m, and the sensitivity is 3.768 MHz/mε at the interferogram dip near 3.3 GHz. Using a displacement-strain transformation, the shape of the beam is reconstructed. With only two modified cables and a VNA, this system is easy to implement and manage. Compared to optical fiber based sensor systems, coaxial cable sensors have the advantages of a large strain range and robustness, making this system suitable for structural health monitoring applications.
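
    The displacement-strain transformation itself is simple beam mechanics: surface strain gives curvature (kappa = strain / c for offset c from the neutral axis), and integrating curvature twice yields deflection. A sketch assuming cantilever boundary conditions and synthetic strain readings (not the paper's measurement data):

```python
# Sketch of a displacement-strain transformation: surface strain on a bending
# beam gives curvature (kappa = strain / c, with c the distance from the
# neutral axis), and double integration of curvature recovers deflection.
# Cantilever boundary conditions (zero slope/deflection at x=0) are assumed.
import numpy as np

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral starting at zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

x = np.linspace(0.0, 2.0, 21)          # sensing points along the beam (m)
c = 0.01                               # distance from neutral axis (m)
# Synthetic surface strain for a tip-loaded cantilever (linear decay to tip).
strain = 800e-6 * (1 - x / x[-1])

kappa = strain / c                     # curvature from strain
slope = cumtrapz0(kappa, x)            # first integration: slope
deflection = cumtrapz0(slope, x)       # second integration: deflection
print(f"estimated tip deflection: {deflection[-1] * 1000:.1f} mm")
```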

  14. OligoRAP - an Oligo Re-Annotation Pipeline to improve annotation and estimate target specificity

    NARCIS (Netherlands)

    Neerincx, P.B.T.; Rauwerda, H.; Nie, H.; Groenen, M.A.M.; Breit, T.M.; Leunissen, J.A.M.

    2009-01-01

    Background: High throughput gene expression studies using oligonucleotide microarrays depend on the specificity of each oligonucleotide (oligo or probe) for its target gene. However, target-specific probes can only be designed when a reference genome of the species at hand has been completely sequenced,

  15. Radiological considerations on multi-MW targets Part II After-heat and temperature distribution in packed tantalum spheres

    CERN Document Server

    Magistris, M

    2005-01-01

    CERN is designing a Superconducting Proton Linac (SPL) to provide a 2.2 GeV, 4 MW proton beam to feed facilities such as a future Neutrino Factory or a Neutrino SuperBeam. One of the most promising target candidates is a stationary target consisting of a Ti container filled with small Ta pellets. The power deposited as heat by the radioactive nuclides (the so-called after-heat) can considerably increase the target temperature after ceasing operation if no active cooling is provided. An estimate of the induced radioactivity and after-heat was performed with the FLUKA Monte Carlo code. To estimate the highest temperature reached inside the target, the effective thermal conductivity of the packed spheres was evaluated using the basic cell method. A method for estimating the contribution to heat transmission from radiation is also discussed.
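
    As a rough illustration of the kind of estimate involved, the peak steady-state temperature rise of a uniformly heated, long cylindrical target with a fixed (cooled) surface temperature follows dT = q_v * R^2 / (4 * k_eff). This textbook formula is a stand-in, not the report's basic-cell method, and every number below is a placeholder:

```python
# Back-of-envelope sketch: centerline temperature rise of a long cylindrical
# target with uniform volumetric after-heat, using an assumed effective
# conductivity for the packed Ta-sphere bed. All numbers are illustrative
# placeholders, not values from the report.
q_v = 5.0e4        # after-heat volumetric power density (W/m^3), assumed
R = 0.10           # target radius (m), assumed
k_eff = 2.0        # effective thermal conductivity of packed bed (W/m/K), assumed

# Uniformly heated infinite cylinder, fixed surface temperature:
#   dT_center = q_v * R^2 / (4 * k_eff)
dT = q_v * R ** 2 / (4.0 * k_eff)
print(f"centerline temperature rise ~ {dT:.0f} K above the cooled surface")
```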

  16. Ecological Niche Modeling to Estimate the Distribution of Japanese Encephalitis Virus in Asia

    Science.gov (United States)

    Miller, Robin H.; Masuoka, Penny; Klein, Terry A.; Kim, Heung-Chul; Somer, Todd; Grieco, John

    2012-01-01

    Background Culex tritaeniorhynchus is the primary vector of Japanese encephalitis virus (JEV), a leading cause of encephalitis in Asia. JEV is transmitted in an enzootic cycle involving large wading birds as the reservoirs and swine as amplifying hosts. The development of a JEV vaccine reduced the number of JE cases in regions with comprehensive childhood vaccination programs, such as in Japan and the Republic of Korea. However, the lack of vaccine programs or insufficient coverage of populations in other endemic countries leaves many people susceptible to JEV. The aim of this study was to predict the distribution of Culex tritaeniorhynchus using ecological niche modeling. Methods/Principal Findings An ecological niche model was constructed using the Maxent program to map the areas with suitable environmental conditions for the Cx. tritaeniorhynchus vector. Program input consisted of environmental data (temperature, elevation, rainfall) and known locations of vector presence resulting from an extensive literature search and records from MosquitoMap. The statistically significant Maxent model of the estimated probability of Cx. tritaeniorhynchus presence showed that the mean temperatures of the wettest quarter had the greatest impact on the model. Further, the majority of human Japanese encephalitis (JE) cases were located in regions with higher estimated probability of Cx. tritaeniorhynchus presence. Conclusions/Significance Our ecological niche model of the estimated probability of Cx. tritaeniorhynchus presence provides a framework for better allocation of vector control resources, particularly in locations where JEV vaccinations are unavailable. Furthermore, this model provides estimates of vector probability that could improve vector surveillance programs and JE control efforts. PMID:22724030

  17. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Science.gov (United States)

    Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
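
    The adjoint trick is worth seeing on the smallest possible hydrological model: a one-parameter linear reservoir. One forward run plus one backward sweep gives the exact gradient dJ/da at roughly the cost of two model runs, which is what makes adjoint sensitivity so cheap compared with parameter-by-parameter perturbation. A sketch (model, forcing, and observations all synthetic):

```python
# Sketch of the adjoint-state idea on a one-parameter linear-reservoir
# rainfall-runoff model: one forward run plus one backward sweep yields the
# exact gradient dJ/da, verified against a finite difference. This toy model
# stands in for the paper's distributed flash-flood model.
import numpy as np

rng = np.random.default_rng(8)
r = rng.gamma(2.0, 1.0, 200)              # synthetic rainfall forcing

def forward(a):
    """Linear reservoir: q_k = a*S_k, S_{k+1} = (1-a)*S_k + r_k."""
    S, Ss, qs = 0.0, [], []
    for rk in r:
        Ss.append(S)
        qs.append(a * S)
        S = (1.0 - a) * S + rk
    return np.array(Ss), np.array(qs)

_, y = forward(0.30)                      # "observed" discharge (true a = 0.30)

def cost_and_adjoint_grad(a):
    S, q = forward(a)
    J = np.sum((q - y) ** 2)
    lam, grad = 0.0, 0.0
    for k in range(len(r) - 1, -1, -1):   # backward (adjoint) sweep
        e = 2.0 * (a * S[k] - y[k])       # residual sensitivity at step k
        grad += e * S[k] - lam * S[k]     # direct + propagated dependence on a
        lam = e * a + lam * (1.0 - a)     # lam_k = dJ/dS_k
    return J, grad

a0 = 0.10
J0, g_adj = cost_and_adjoint_grad(a0)
g_fd = (cost_and_adjoint_grad(a0 + 1e-6)[0] - J0) / 1e-6
print(f"adjoint dJ/da = {g_adj:.4f}, finite difference = {g_fd:.4f}")
```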

  18. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  19. Estimated historical distribution of grassland communities of the Southern Great Plains

    Science.gov (United States)

    Reese, Gordon C.; Manier, Daniel J.; Carr, Natasha B.; Callan, Ramana; Leinwand, Ian I.F.; Assal, Timothy J.; Burris, Lucy; Ignizio, Drew A.

    2016-12-07

    The purpose of this project was to map the estimated distribution of grassland communities of the Southern Great Plains prior to Euro-American settlement. The Southern Great Plains Rapid Ecoregional Assessment (REA), under the direction of the Bureau of Land Management and the Great Plains Landscape Conservation Cooperative, includes four ecoregions: the High Plains, Central Great Plains, Southwestern Tablelands, and the Nebraska Sand Hills. The REA advisors and stakeholders determined that the mapping accuracy of available national land-cover maps was insufficient in many areas to adequately address management questions for the REA. Based on the recommendation of the REA stakeholders, we estimated the potential historical distribution of 10 grassland communities within the Southern Great Plains project area using data on soils, climate, and vegetation from the Natural Resources Conservation Service (NRCS) including the Soil Survey Geographic Database (SSURGO) and Ecological Site Information System (ESIS). The dominant grassland communities of the Southern Great Plains addressed as conservation elements for the REA area are shortgrass, mixed-grass, and sand prairies. We also mapped tall-grass, mid-grass, northwest mixed-grass, and cool season bunchgrass prairies, saline and foothill grasslands, and semi-desert grassland and steppe. Grassland communities were primarily defined using the annual productivity of dominant species in the ESIS data. The historical grassland community classification was linked to the SSURGO data using vegetation types associated with the predominant component of mapped soil units as defined in the ESIS data. We augmented NRCS data with Landscape Fire and Resource Management Planning Tools (LANDFIRE) Biophysical Settings classifications 1) where NRCS data were unavailable and 2) where fifth-level watersheds intersected the boundary of the High Plains ecoregion in Wyoming. Spatial data representing the estimated historical distribution of

  20. An experimental and theoretical model of children’s search behavior in relation to target conspicuity and spatial distribution

    Science.gov (United States)

    Rosetti, Marcos Francisco; Pacheco-Cobos, Luis; Larralde, Hernán; Hudson, Robyn

    2010-11-01

    This work explores the search trajectories of children attempting to find targets distributed on a playing field. This task, ludic in nature, was developed to test the effect of the conspicuity and spatial distribution of targets on the searcher’s performance. The searcher’s path was recorded by a Global Positioning System (GPS) device attached to the child’s waist. Participants were not rewarded, nor was their performance rated. Variation in the conspicuity of the targets influenced search performance as expected; cryptic targets resulted in slower searches and longer, more tortuous paths. Extracting the main features of the paths showed that the children: (1) paid little attention to the spatial distribution and, at least in the conspicuous condition, approximately followed a nearest-neighbor pattern of target collection; (2) were strongly influenced by the conspicuity of the targets. We implemented a simple statistical model for the search rules mimicking the children’s behavior at the level of individual (coarsened) steps. The model reproduced the main features of the children’s paths without the participation of memory or planning.
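
    A memoryless model of this kind is a few lines of code: always step to the nearest remaining target when it is spotted, and take a random wander step whenever a cryptic target goes undetected. The parameters below (field size, detection probability, step scale) are illustrative, not fitted to the study's GPS data:

```python
# Sketch of a memoryless nearest-neighbor search model: with a conspicuous
# target the searcher walks straight to the nearest remaining target; a
# cryptic target is detected only with probability p_detect per scan,
# otherwise a random wander step is taken. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(9)
targets = rng.uniform(0, 50, (15, 2))      # target coordinates on the field (m)

def search(p_detect):
    remaining = list(range(len(targets)))
    pos, path_len = np.array([0.0, 0.0]), 0.0
    while remaining:
        if rng.random() < p_detect:        # nearest remaining target spotted
            d = np.linalg.norm(targets[remaining] - pos, axis=1)
            step_to = targets[remaining.pop(int(np.argmin(d)))]
        else:                              # undetected: random wander step
            step_to = pos + rng.normal(0, 3.0, 2)
        path_len += np.linalg.norm(step_to - pos)
        pos = step_to
    return path_len

print(f"conspicuous: ~{search(1.0):.0f} m of path, cryptic: ~{search(0.4):.0f} m")
```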

  1. A batch algorithm for estimating trajectories of point targets using expectation maximization

    DEFF Research Database (Denmark)

    Rahmathullah, Abu; Raghavendra, Selvan; Svensson, Lennart

    2016-01-01

    In this paper, we propose a strategy based on expectation maximization for tracking multiple point targets. The algorithm is similar to probabilistic multi-hypothesis tracking (PMHT), but does not relax the point target model assumptions. According to the point target models, a target can … Extensive simulations comparing the mean optimal sub-pattern assignment (MOSPA) performance of the algorithm for different scenarios, averaged over several Monte Carlo iterations, show that the proposed algorithm performs better than JPDA and PMHT. We also compare it to a benchmarking algorithm: N-scan pruning based track-oriented multiple hypothesis tracking (TOMHT). The proposed algorithm shows a good trade-off between computational complexity and MOSPA performance.

  2. Moving Target Depth Estimation for Passive Sonar, Using Sequential Resampling Techniques

    National Research Council Canada - National Science Library

    Kraut, Shawn

    2001-01-01

    ... wave numbers of the channel modes used for the matched-field target response. Using this approach, the complex amplitudes of the modes are treated as nuisance parameters, which comprise a hidden, first-order Markov state process...

  3. Combining ultrasound-based elasticity estimation and FE models to predict 3D target displacement

    NARCIS (Netherlands)

    Assaad, W.; Misra, Sarthak

    During minimally invasive surgical procedures (e.g., needle insertion during interventional radiological procedures), needle–tissue interactions and physiological processes cause tissue deformation. Target displacement is caused by soft-tissue deformation, which results in misplacement of the

  4. Product-limit estimators of the gap time distribution of a renewal process under different sampling patterns

    OpenAIRE

    Gill, Richard D.; Keiding, Niels

    2010-01-01

    Nonparametric estimation of the gap time distribution in a simple renewal process may be considered a problem in survival analysis under particular sampling frames corresponding to how the renewal process is observed. This note describes several such situations where simple product limit estimators, though inefficient, may still be useful.

  5. Product-limit estimators of the gap time distribution of a renewal process under different sampling patterns.

    Science.gov (United States)

    Gill, Richard D; Keiding, Niels

    2010-10-01

    Nonparametric estimation of the gap time distribution in a simple renewal process may be considered a problem in survival analysis under particular sampling frames corresponding to how the renewal process is observed. This note describes several such situations where simple product limit estimators, though inefficient, may still be useful.
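
    The product-limit estimator referred to in both records is the Kaplan-Meier form S(t) = prod over t_i <= t of (1 - d_i / n_i), applied here to gap times with right censoring. A minimal sketch on made-up gap-time data:

```python
# Sketch of the product-limit (Kaplan-Meier) estimator for a gap-time
# distribution from possibly right-censored gap times:
#   S(t) = prod_{t_i <= t} (1 - d_i / n_i). Synthetic data.
import numpy as np

gaps = np.array([2.0, 3.5, 3.5, 5.0, 6.1, 7.0, 9.2, 9.2, 11.0, 12.5])
event = np.array([1,   1,   0,   1,   1,   0,   1,   1,   0,    1])  # 0 = censored

order = np.argsort(gaps)
gaps, event = gaps[order], event[order]

S, at_risk = 1.0, len(gaps)
for t in np.unique(gaps):
    d = int(np.sum((gaps == t) & (event == 1)))   # events at time t
    if d:
        S *= 1.0 - d / at_risk
        print(f"t = {t:5.1f}:  S(t) = {S:.3f}")
    at_risk -= int(np.sum(gaps == t))             # events + censorings leave risk set
```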

  6. Spatially distributed evapotranspiration estimation using remote sensing and ground-based radiometers over cotton at Maricopa, Arizona

    Science.gov (United States)

    Spatially distributed estimates of evapotranspiration (ET) over agricultural lands could be valuable for water management in arid environments and for monitoring irrigated croplands. In recent years, various ET estimation approaches have been developed that utilize remotely sensed data to provide the nee...

  7. CATCH ESTIMATION AND SIZE DISTRIBUTION OF BILLFISHES LANDED IN PORT OF BENOA, BALI

    Directory of Open Access Journals (Sweden)

    Bram Setyadji

    2012-06-01

    Full Text Available Billfishes, which have high economic value in the market, are generally considered a by-product of tuna longline fisheries. So far, information on Indian Ocean billfish biology and fisheries, especially in Indonesia, is very limited. This research aimed to estimate the production and size distribution of billfishes landed in the port of Benoa during 2010 (February-December) through daily observation at the processing plants. The results showed that the landings were dominated by swordfish (Xiphias gladius) at 54.9%, blue marlin (Makaira mazara) at 17.8% and black marlin (Makaira indica) at 13.0%, followed by small amounts of striped marlin (Tetrapturus audax), sailfish (Istiophorus platypterus), and shortbill spearfish (Tetrapturus angustirostris). Generally, individual sizes ranged between 68 and 206 cm (PFL), showing a negative allometric growth pattern except for swordfish, which was isometric. Most of the billfish landed had not reached their size at first sexual maturity.

  8. Taxation of Social Security benefits under the new income tax provisions: distributional estimates for 1994.

    Science.gov (United States)

    Pattison, D

    1994-01-01

    The 1993 Omnibus Budget Reconciliation Act raised the proportion of benefits includable in income for the Federal personal income tax. This article presents estimates of the income-distributional effects of the new provision in 1994, the first year for which it is effective. Under the pre-1993 law, up to 50 percent of benefits were included in taxable income for certain high-income beneficiaries. Under the new law, some of these beneficiaries are required to include an even higher proportion of benefits--up to 85 percent. Only 11 percent of beneficiary families, concentrated in the top three deciles by family income, include more of their benefits in taxable income under the new law than they would have under the old law. Another 8 percent include the same amount of benefits under either law. The remaining beneficiary families, more than 80 percent, include no benefits in taxable income under either the old law or the new.

  9. Distributed soil loss estimation system including ephemeral gully development and tillage erosion

    Directory of Open Access Journals (Sweden)

    D. A. N. Vieira

    2015-03-01

    Full Text Available A new modelling system is being developed to provide spatially-distributed runoff and soil erosion predictions for conservation planning that integrates the 2D grid-based variant of the Revised Universal Soil Loss Equation, version 2 model (RUSLER), the Ephemeral Gully Erosion Estimator (EphGEE), and the Tillage Erosion and Landscape Evolution Model (TELEM). Digital representations of the area of interest (field, farm, or entire watershed) are created using high-resolution topography and data retrieved from established databases of soil properties, climate, and agricultural operations. The system utilizes a library of processing tools (LibRaster) to deduce surface drainage from topography, determine the location of potential ephemeral gullies, and subdivide the study area into catchments for calculations of runoff and sheet-and-rill erosion using RUSLER. EphGEE computes gully evolution based on local soil erodibility and flow and sediment transport conditions. Annual tillage-induced morphological changes are computed separately by TELEM.
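
    The sheet-and-rill component follows the familiar multiplicative (R)USLE form A = R * K * LS * C * P, evaluated cell by cell over raster layers. A toy grid version, with placeholder factor values rather than RUSLER's actual inputs:

```python
# Sketch of a grid-based sheet-and-rill soil-loss calculation in the spirit
# of RUSLE (A = R * K * LS * C * P), applied cell-by-cell over raster layers.
# Factor values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(10)
shape = (4, 5)                                # small raster for illustration
R  = np.full(shape, 120.0)                    # rainfall erosivity
K  = rng.uniform(0.2, 0.4, shape)             # soil erodibility
LS = rng.uniform(0.3, 2.5, shape)             # slope length/steepness factor
C  = np.full(shape, 0.25)                     # cover-management factor
P  = np.full(shape, 1.0)                      # support-practice factor

A = R * K * LS * C * P                        # annual soil loss per cell
print("mean soil loss:", A.mean().round(2), "max:", A.max().round(2))
```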

  10. Structure Learning of Bayesian Networks by Estimation of Distribution Algorithms with Transpose Mutation

    Directory of Open Access Journals (Sweden)

    D.W. Kim

    2013-08-01

    Full Text Available Estimation of distribution algorithms (EDAs) constitute a new branch of evolutionary optimization algorithms that were developed as a natural alternative to genetic algorithms (GAs). Several studies have demonstrated that the heuristic scheme of EDAs is effective and efficient for many optimization problems. Recently, it has been reported that the incorporation of mutation into EDAs increases the diversity of genetic information in the population, thereby avoiding premature convergence to a suboptimal solution. In this study, we propose a new mutation operator, a transpose mutation, designed for Bayesian structure learning. It enhances the diversity of the offspring and increases the possibility of inferring the correct arc direction by treating the arc directions in candidate solutions as bi-directional, using the matrix transpose operator. Compared to conventional EDAs, EDAs adopting the transpose mutation are superior and effective algorithms for learning Bayesian networks.
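
    A minimal EDA over arc-indicator matrices makes the operator concrete: sample candidate adjacency matrices from a probability model, occasionally flip an arc to its reverse direction (the transpose mutation), select the best structures, and re-estimate the arc probabilities. The scoring function below is a toy placeholder (agreement with a hidden reference structure), not a real Bayesian-network score such as BDeu or BIC, and acyclicity checks are omitted:

```python
# Sketch of an EDA over arc indicators of a Bayesian-network structure with a
# "transpose mutation": offspring adjacency matrices occasionally have one arc
# flipped to its reverse direction. The scoring function is a toy placeholder,
# not a real Bayesian-network score; acyclicity checks are omitted.
import numpy as np

rng = np.random.default_rng(11)
n = 5                                          # number of network variables
truth = np.triu(rng.random((n, n)) < 0.4, 1).astype(int)  # hidden reference DAG

def score(adj):
    return -np.abs(adj - truth).sum()          # toy score: arc agreement

p = np.full((n, n), 0.5); np.fill_diagonal(p, 0)   # arc probability model
for gen in range(40):
    pop = (rng.random((50, n, n)) < p).astype(int)
    for adj in pop:
        np.fill_diagonal(adj, 0)
        if rng.random() < 0.1:                 # transpose mutation: flip one arc
            i, j = rng.integers(0, n, size=2)
            if i != j and adj[i, j]:
                adj[i, j], adj[j, i] = 0, 1
    fitness = np.array([score(a) for a in pop])
    elite = pop[np.argsort(fitness)[-10:]]     # select the best 10 structures
    p = 0.5 * p + 0.5 * elite.mean(axis=0)     # re-estimate arc distribution
    np.fill_diagonal(p, 0)

print("best score in final population:", fitness.max())
```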

  11. A Secure Scheme for Distributed Consensus Estimation against Data Falsification in Heterogeneous Wireless Sensor Networks.

    Science.gov (United States)

    Mi, Shichao; Han, Hui; Chen, Cailian; Yan, Jian; Guan, Xinping

    2016-02-19

    Heterogeneous wireless sensor networks (HWSNs) can achieve more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or malicious nodes. This paper is concerned with the issues of a consensus secure scheme in HWSNs consisting of two types of sensor nodes. Sensor nodes (SNs) have more computation power, while relay nodes (RNs) with low power can only transmit information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we apply the heterogeneity of responsibilities between the two types of sensors and then propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of the malicious node. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.

  12. Estimation of Shallow Groundwater Recharge Using a Gis-Based Distributed Water Balance Model

    Directory of Open Access Journals (Sweden)

    Graf Renata

    2014-09-01

    Full Text Available In this paper we present the results of shallow groundwater recharge estimation using the WetSpass GIS-based distributed water balance model. Using WetSpass, which stands for Water and Energy Transfer between Soil, Plants and Atmosphere under quasi-Steady State, for average conditions during the period 1961-2000, we assessed the spatial conditions of the groundwater infiltration recharge process of shallow circulation systems in the Poznan Plateau area (the Great Poland Lowland) in western Poland, which is classified as a region with observed water deficits. For three temporal variants, i.e. the year and the winter and summer half-years, the estimated recharge differed from values determined using the geological infiltration method by about 5-10% on average, and at most by about 20%.

  13. Academic Training: Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms - Lecture series

    CERN Multimedia

    Françoise Benz

    2004-01-01

    ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require getting an optimum or near-to-optimum solution. Optimization can be used to solve a lot of different problems such as network design, sets and partitions, storage and retrieval or scheduling. On the other hand, in nature, there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years several attempts have been made to develop optimization algorithms, which simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on natural annealing processes or Evolutionary Computation, based on biological evolution processes. Geneti...

  14. Academic Training: Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms - Lecture series

    CERN Multimedia

    Françoise Benz

    2004-01-01

    ENSEIGNEMENT ACADEMIQUE ACADEMIC TRAINING Françoise Benz 73127 academic.training@cern.ch ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require getting an optimum or near-to-optimum solution. Optimization can be used to solve a lot of different problems such as network design, sets and partitions, storage and retrieval or scheduling. On the other hand, in nature, there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years several attempts have been made to develop optimization algorithms, which simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on nat...

  15. A Secure Scheme for Distributed Consensus Estimation against Data Falsification in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shichao Mi

    2016-02-01

    Full Text Available Heterogeneous wireless sensor networks (HWSNs) can achieve more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or malicious nodes. This paper is concerned with the issues of a consensus secure scheme in HWSNs consisting of two types of sensor nodes. Sensor nodes (SNs) have more computation power, while relay nodes (RNs) with low power can only transmit information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we apply the heterogeneity of responsibilities between the two types of sensors and then propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of the malicious node. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.

  16. Opportunistic citizen science data of animal species produce reliable estimates of distribution trends if analysed with occupancy models

    NARCIS (Netherlands)

    van Strien, A.J.; van Swaay, C.A.M.; Termaat, T.

    2013-01-01

    Many publications documenting large-scale trends in the distribution of species make use of opportunistic citizen data, that is, observations of species collected without standardized field protocol and without explicit sampling design. It is a challenge to achieve reliable estimates of distribution

  17. Simplified model for estimation of lightning induced transient transfer through distribution transformer

    Energy Technology Data Exchange (ETDEWEB)

    Manyahi, M.J. [University of Dar es Salaam (Tanzania). Faculty of Electrical and Computer Systems Engineering; Uppsala University (Sweden). The Angstrom Laboratory, Division for Electricity and Lightning Research; Thottappillil, R. [Uppsala University (Sweden). The Angstrom Laboratory, Division for Electricity and Lightning Research

    2005-05-01

    In this work a simplified procedure for formulating a distribution transformer model for studying its response to lightning-caused transients is presented. Simplification is achieved through the way in which the model formulation is realised, that is, by consolidating the various steps of a model formulation based on terminal measurements of driving-point and transfer short-circuit admittance parameters. The sequence of steps in the model formulation procedure begins with the determination of the nodal admittance matrix of the transformer by network analyser measurements at the transformer terminals. Thereafter, the elements of the nodal admittance matrix are simultaneously approximated by rational functions consisting of real as well as complex conjugate poles and zeros, for realisation of the admittance functions in the form of RLCG networks. Finally, the equivalent terminal model of the transformer is created as a π-network whose branches consist of the above RLCG networks. The model can be used in electromagnetic transient or circuit simulation programs, in either the time or frequency domain, for estimating the transfer of common-mode transients, such as those caused by lightning, across distribution-class transformers. The validity of the model is verified by comparing the model predictions with experimentally measured outputs for different types of common-mode surge waveforms as inputs, including a chopped waveform that simulates the operation of surge arresters. In addition, it has been verified that the admittance functions measured directly with the network analyser closely match the admittance functions derived from time-domain impulse measurements up to 3 MHz, higher than achieved in previous models, which improves the resulting model's capability to simulate fast transients. The model can be used in power quality studies to estimate the transient voltages appearing at the low-voltage customer installation due to the induced lightning surges on

  18. Global and regional estimates of cancer mortality and incidence by site: I. Application of regional cancer survival model to estimate cancer mortality distribution by site

    Directory of Open Access Journals (Sweden)

    Lopez Alan D

    2002-12-01

    Full Text Available Abstract Background The Global Burden of Disease 2000 (GBD 2000) study starts from an analysis of the overall mortality envelope in order to ensure that the cause-specific estimates add up to the total all-cause mortality by age and sex. For regions where information on the distribution of cancer deaths is not available, a site-specific survival model was developed to estimate the distribution of cancer deaths by site. Methods An age-period-cohort model of cancer survival was developed based on data from the Surveillance, Epidemiology, and End Results (SEER) program. The model was further adjusted for the level of economic development in each region. Combined with the available incidence data, cancer death distributions were estimated, and the model estimates were validated against vital registration data from regions other than the United States. Results Comparison with the cancer mortality distribution from vital registration confirmed the validity of this approach. The model also yielded a cancer mortality distribution which is consistent with the estimates based on regional cancer registries. There was a significant variation in relative interval survival across regions, in particular for cancers of the bladder, breast, melanoma of the skin, prostate and haematological malignancies. Moderate variations were observed among cancers of the colon, rectum, and uterus. Cancers with very poor prognosis such as liver, lung, and pancreas cancers showed very small variations across the regions. Conclusions The survival model presented here offers a new approach to the calculation of the distribution of deaths for areas where mortality data are either scarce or unavailable.

  19. HIGH RESOLUTION DEFORMATION TIME SERIES ESTIMATION FOR DISTRIBUTED SCATTERERS USING TERRASAR-X DATA

    Directory of Open Access Journals (Sweden)

    K. Goel

    2012-07-01

    Full Text Available In recent years, several SAR satellites such as TerraSAR-X, COSMO-SkyMed and Radarsat-2 have been launched. These satellites provide high resolution data suitable for sophisticated interferometric applications. With the shorter repeat cycles, smaller orbital tubes and higher bandwidth of these satellites, deformation time series analysis of distributed scatterers (DSs) is now supported by a practical data basis. Techniques for exploiting DSs in non-urban (rural) areas include the Small Baseline Subset Algorithm (SBAS). However, it involves spatial phase unwrapping, and phase unwrapping errors are typically encountered in rural areas and are difficult to detect. In addition, the SBAS technique involves rectangular multilooking of the differential interferograms to reduce phase noise, resulting in a loss of resolution and the superposition of different objects on the ground. In this paper, we introduce a new approach for deformation monitoring with a focus on DSs in which there is no need to unwrap the differential interferograms and the deformation is mapped at object resolution. It is based on a robust, object-adaptive parameter estimation using single-look differential interferograms, in which the local tilts of the deformation velocity and the local slopes of the residual DEM in the range and azimuth directions are estimated. We present here the technical details and a processing example of this newly developed algorithm.

  20. Habitat Preferences, Distribution Pattern, and Root Weight Estimation of Pasak Bumi (Eurycoma longifolia Jack.

    Directory of Open Access Journals (Sweden)

    Siti Masitoh Kartikawati

    2014-04-01

    Full Text Available Pasak bumi (Eurycoma longifolia Jack) is one of the non-timber forest products with "indeterminate" conservation status that is commercially traded in West Kalimantan. The research objective was to determine the potential of pasak bumi root per hectare and its ecological condition in its natural habitat. Root weight of E. longifolia Jack was estimated using simple linear regression and an exponential equation with stem diameter and height as independent variables. The results showed that the population comprised 114 individuals, the majority in the seedling stage (71 individuals, 62.28%). The distribution followed a clumped pattern. Conditions of the habitat could be described as follows: daily average temperature of 25.6°C, daily average relative humidity of 73.6%, light intensity of 0.9 klx, and red-yellow podsolic soil with texture ranging from clay to sandy clay. The selected estimator model for E. longifolia Jack root weight was an exponential equation with stem height as the independent variable, Y = 21.99T^0.010, with a coefficient of determination of 0.97. With the height variable included, the potential minimum root weight of E. longifolia Jack that could be harvested per hectare was 0.33 kg.

  1. Tests of Catastrophic Outlier Prediction in Empirical Photometric Redshift Estimation with Redshift Probability Distributions

    Science.gov (United States)

    Jones, Evan; Singal, Jack

    2018-01-01

    We present results of using individual galaxies' redshift probability information derived from a photometric redshift (photo-z) algorithm, SPIDERz, to identify potential catastrophic outliers in photometric redshift determinations. By using test data comprised of COSMOS multi-band photometry and known spectroscopic redshifts from the 3D-HST survey spanning a wide redshift range, we demonstrate a method to flag potential catastrophic outliers in analyses which rely on accurate photometric redshifts. SPIDERz is a custom support vector machine classification algorithm for photo-z analysis that naturally outputs a distribution of redshift probability information for each galaxy in addition to a discrete most probable photo-z value. By applying an analytic technique with flagging criteria to identify the presence of probability distribution features characteristic of catastrophic outlier photo-z estimates, such as multiple redshift probability peaks separated by substantial redshift distances, we can flag potential catastrophic outliers in photo-z determinations. We find that our proposed method can correctly flag large fractions of the outlier and catastrophic outlier galaxies, while only flagging a small fraction of the total non-outlier galaxies. We examine the performance of this strategy in photo-z determinations using a range of flagging parameter values. These results could potentially be useful for the utilization of photometric redshifts in future large scale surveys, where catastrophic outliers are particularly detrimental to the science goals.
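
    The flagging criterion described above (multiple redshift probability peaks separated by a substantial redshift distance) is easy to prototype. The sketch below is not SPIDERz; the prominence and separation thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def flag_catastrophic(z_grid, pdf, min_prominence=0.05, min_dz=1.0):
    """Flag a redshift PDF whose mass splits into well-separated peaks."""
    pdf = pdf / np.trapz(pdf, z_grid)                  # normalize the PDF
    peaks, _ = find_peaks(pdf, prominence=min_prominence)
    if len(peaks) < 2:
        return False
    return (z_grid[peaks].max() - z_grid[peaks].min()) >= min_dz

z = np.linspace(0.0, 6.0, 601)
# bimodal example: probability peaks near z = 0.4 and z = 3.1
pdf = np.exp(-0.5 * ((z - 0.4) / 0.05)**2) + 0.8 * np.exp(-0.5 * ((z - 3.1) / 0.08)**2)
print(flag_catastrophic(z, pdf))   # True -> potential catastrophic outlier
```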

  2. Spatial Distribution of Estimated Wind-Power Royalties in West Texas

    Directory of Open Access Journals (Sweden)

    Christian Brannstrom

    2015-12-01

    Full Text Available Wind-power development in the U.S. occurs primarily on private land, producing royalties for landowners through private contracts with wind-farm operators. Texas, the U.S. leader in wind-power production with well-documented support for wind power, has virtually all of its ~12 GW of wind capacity sited on private lands. Determining the spatial distribution of royalty payments from wind energy is a crucial first step to understanding how renewable power may alter the land-based livelihoods of some landowners and, as a result, possibly encourage land-use changes. We located ~1700 wind turbines (~2.7 GW) on 241 landholdings in Nolan and Taylor counties, Texas, a major wind-development region. We estimated total royalties to be ~$11.5 million per year, with a mean annual royalty per landowner of $47,879, but with significant differences among quintiles and between two sub-regions. The unequal distribution of royalties results from land-tenure patterns established before wind-power development because of a “property advantage,” defined as the pre-existing land-tenure patterns that benefit the fraction of rural landowners who receive wind turbines. A “royalty paradox” describes the observation that royalties flow to a small fraction of landowners even though support for wind power exceeds 70 percent.

  3. Extended Distributed State Estimation: A Detection Method against Tolerable False Data Injection Attacks in Smart Grids

    Directory of Open Access Journals (Sweden)

    Dai Wang

    2014-03-01

    Full Text Available False data injection (FDI) is considered to be one of the most dangerous cyber-attacks in smart grids, as it may lead to energy theft from end users, false dispatch in the distribution process, and device breakdown during power generation. In this paper, a novel kind of FDI attack, named tolerable false data injection (TFDI), is constructed. Such attacks exploit the traditional detector's tolerance of observation errors to bypass traditional bad data detection. Then, a method based on extended distributed state estimation (EDSE) is proposed to detect TFDI in smart grids. The smart grid is decomposed into several subsystems using graph partitioning algorithms. Each subsystem is extended outward to include the adjacent buses and tie lines, generating the extended subsystem. The Chi-squares test is applied to detect the false data in each extended subsystem. Through decomposition, the false data stand out distinctively from normal observation errors and the detection sensitivity is increased. Extensive TFDI attack cases are simulated in the Institute of Electrical and Electronics Engineers (IEEE) 14-, 39-, 118- and 300-bus systems. Simulation results show that the detection precision of the EDSE-based method is much higher than that of the traditional method, while the proposed method significantly reduces the associated computational costs.
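
    The per-subsystem Chi-squares test is the classical bad-data check applied to the weighted residual of a (linearized) state estimator. A minimal sketch on a toy measurement model, not an IEEE test system:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_bad_data(H, z, sigma, alpha=0.05):
    """Detect bad data in the linear(ized) model z = Hx + e, e ~ N(0, diag(sigma^2))."""
    W = np.diag(1.0 / sigma**2)
    x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # weighted least-squares estimate
    r = z - H @ x                                   # measurement residuals
    J = r @ W @ r                                   # chi-square statistic
    dof = H.shape[0] - H.shape[1]                   # m measurements minus n states
    return J, J > chi2.ppf(1.0 - alpha, dof)

# toy system: 4 measurements of 2 states, one measurement corrupted
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
sigma = 0.01 * np.ones(4)
z = H @ np.array([0.10, -0.05]) + np.array([0.0, 0.0, 0.3, 0.0])  # injected error
print(chi_square_bad_data(H, z, sigma))   # very large J -> flagged
```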

  4. An Overview of Distributed Microgrid State Estimation and Control for Smart Grids

    Directory of Open Access Journals (Sweden)

    Md Masud Rana

    2015-02-01

    Full Text Available Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.
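
    To make the two ingredients concrete, here is a steady-state Kalman filter combined with a discrete-time LQR on a toy two-state model. The matrices are illustrative placeholders, not the paper's microgrid model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# hypothetical discrete-time model x' = Ax + Bu + w, y = Cx + v
A = np.array([[0.95, 0.05], [0.02, 0.90]])
B = np.array([[0.10], [0.05]])
C = np.eye(2)
Qw, Rv = 1e-4 * np.eye(2), 1e-2 * np.eye(2)   # process / measurement noise
Qx, Ru = np.eye(2), np.array([[1.0]])          # LQR state / input weights

P = solve_discrete_are(A.T, C.T, Qw, Rv)                   # filter Riccati
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)              # Kalman gain
S = solve_discrete_are(A, B, Qx, Ru)                       # control Riccati
F = np.linalg.inv(Ru + B.T @ S @ B) @ (B.T @ S @ A)        # LQR gain

x_hat = np.zeros(2)
for y in [np.array([0.30, -0.10]), np.array([0.25, -0.08])]:   # measurements
    u = -F @ x_hat                            # regulate estimated deviations
    x_pred = A @ x_hat + B @ u                # time update
    x_hat = x_pred + K @ (y - C @ x_pred)     # measurement update
    print(x_hat.round(4), u.round(4))
```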

  6. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    Science.gov (United States)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  7. ESTIMATION OF THE SCALE PARAMETER FROM THE RAYLEIGH DISTRIBUTION FROM TYPE II SINGLY AND DOUBLY CENSORED DATA

    Directory of Open Access Journals (Sweden)

    Ahmad Saeed Akhter

    2009-01-01

    Full Text Available As common as the normal distribution is the Rayleigh distribution, which occurs in work on radar, properties of sine waves plus noise, etc. Rayleigh (1880) derived it from the amplitude of sound resulting from many independent sources. The Rayleigh distribution is widely used in communication engineering, reliability analysis and applied statistics. Since the Rayleigh distribution has a linearly increasing failure rate, it is appropriate for components which might not have manufacturing defects but age rapidly with time. Several types of electro-vacuum devices have this feature. It is connected with one- and two-dimensional random walks and is sometimes referred to as a random walk frequency distribution. It is a special case of the Weibull distribution (1951) of wide applicability. It can easily be derived from the bivariate normal distribution with equal variances and ρ = 0. For further applications of the Rayleigh distribution, we refer to Johnson and Kotz (1994). Adatia (1995) has obtained the best linear unbiased estimator of the Rayleigh scale parameter based on fairly large censored samples. Dyer and Whisenand (1973) obtained the BLUE of the scale parameter based on type II censored samples for small N (2–15). With the advance of computer technology it is now possible to obtain the BLUE for large samples. Hirai (1978) obtained the estimate of the scale parameter from the Rayleigh distribution singly type II censored from the left side and right side, and variances of the scale parameter. In this paper, we estimate the scale parameter of type II singly and doubly censored data from the Rayleigh distribution using Blom's (1958) nearly best unbiased estimates and compare the efficiency of this estimate with the BLUE and MLE.
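
    For orientation, the maximum likelihood estimator of the Rayleigh scale has a closed form for both complete and type II right-censored samples. These are standard results, sketched below; they are not Blom's nearly best unbiased estimates.

```python
import numpy as np

def rayleigh_mle(x):
    """Complete-sample MLE: sigma^2 = sum(x_i^2) / (2n)."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.sum(x**2) / (2.0 * x.size))

def rayleigh_mle_right_censored(x_obs, n_total):
    """Type II right-censored MLE: the n_total - r censored items are known
    only to exceed the largest observed order statistic x_(r)."""
    x = np.sort(np.asarray(x_obs, dtype=float))
    r = x.size
    s2 = (np.sum(x**2) + (n_total - r) * x[-1]**2) / (2.0 * r)
    return np.sqrt(s2)

rng = np.random.default_rng(1)
sample = rng.rayleigh(scale=2.0, size=500)
print(rayleigh_mle(sample))                                   # close to 2.0
print(rayleigh_mle_right_censored(np.sort(sample)[:400], 500))
```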

  8. RZLINE code modelling of distributed tin targets for laser-produced plasma sources of extreme ultraviolet radiation

    NARCIS (Netherlands)

    Koshelev, K.; Novikov, V.G.; Medvedev, Viacheslav; Grushin, A.S.; Krivtsun, V.M.

    2012-01-01

    Abstract. An integrated model is developed to describe the hydrodynamic, atomic, and radiation processes that take place in extreme ultraviolet (EUV) radiation sources based on a laser-produced plasma with a distributed tin target. The modeling was performed using the RZLINE code—a numerical code

  9. A parametric model to estimate the proportion from true null using a distribution for p-values.

    Science.gov (United States)

    Yu, Chang; Zelterman, Daniel

    2017-10-01

    Microarray studies generate a large number of p-values from many gene expression comparisons. The estimate of the proportion of the p-values sampled from the null hypothesis draws broad interest. The two-component mixture model is often used to estimate this proportion. If the data are generated under the null hypothesis, the p-values follow the uniform distribution. What is the distribution of p-values when data are sampled from the alternative hypothesis? The distribution is derived for the chi-squared test. Then this distribution is used to estimate the proportion of p-values sampled from the null hypothesis in a parametric framework. Simulation studies are conducted to evaluate its performance in comparison with five recent methods. Even in scenarios with clusters of correlated p-values and a multicomponent mixture or a continuous mixture in the alternative, the new method performs robustly. The methods are demonstrated through an analysis of a real microarray dataset.
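
    The estimation idea, a two-component mixture with a uniform null, fits naturally into an EM loop. The sketch below uses a Beta(a, 1) alternative as a convenient stand-in for the paper's chi-squared-derived alternative density:

```python
import numpy as np

def em_pvalue_mixture(p, n_iter=200):
    """Estimate pi0 in f(p) = pi0 * 1 + (1 - pi0) * a * p**(a - 1), 0 < a < 1."""
    p = np.asarray(p, dtype=float)
    pi0, a = 0.5, 0.5                                  # crude starting values
    for _ in range(n_iter):
        f1 = a * p**(a - 1.0)                          # alternative density
        w1 = (1.0 - pi0) * f1 / (pi0 + (1.0 - pi0) * f1)   # E-step: P(alt | p)
        pi0 = 1.0 - w1.mean()                          # M-step: mixing weight
        a = -w1.sum() / np.sum(w1 * np.log(p))         # M-step: weighted MLE of a
    return pi0, a

rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=8000),            # true nulls
                    rng.beta(0.2, 1.0, size=2000)])    # alternatives (pi0 = 0.8)
print(em_pvalue_mixture(p))
```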

  10. The distribution of blow fly (Diptera: Calliphoridae) larval lengths and its implications for estimating post mortem intervals.

    Science.gov (United States)

    Moffatt, Colin; Heaton, Viv; De Haan, Dorine

    2016-01-01

    The length or stage of development of blow fly (Diptera: Calliphoridae) larvae may be used to estimate a minimum postmortem interval, often by targeting the largest individuals of a species in the belief that they will be the oldest. However, natural variation in rate of development, and therefore length, implies that the size of the largest larva, as well as the number of larvae longer than any stated length, will be greater for larger cohorts. Length data from the blow flies Protophormia terraenovae and Lucilia sericata were collected from one field-based and two laboratory-based experiments. The field cohorts contained considerably more individuals than have been used for reference data collection in the literature. Cohorts were shown to have an approximately normal distribution. Summary statistics were derived from the collected data allowing the quantification of errors in development time which arise when different sized cohorts are compared through their largest larvae. These errors may be considerable and can lead to overestimation of postmortem intervals when making comparisons with reference data collected from smaller cohorts. This source of error has hitherto been overlooked in forensic entomology.
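
    The cohort-size effect is straightforward to demonstrate by simulation: with an approximately normal length distribution, the expected length of the largest larva grows with cohort size, so equating "largest" with "oldest" across differently sized cohorts is biased. The distribution parameters below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sd = 14.0, 1.2   # hypothetical larval length distribution (mm) for one age

# mean of the maximum over 2000 simulated cohorts of each size
for n in [10, 100, 1000, 10000]:
    maxima = rng.normal(mu, sd, size=(2000, n)).max(axis=1)
    print(f"cohort size {n:>5}: mean largest larva = {maxima.mean():.2f} mm")
```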

  11. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    Science.gov (United States)

    Bandyopadhyay, Saptarshi

    guidance algorithms using results from numerical simulations and closed-loop hardware experiments on multiple quadrotors. In the second part of this dissertation, we present two novel discrete-time algorithms for distributed estimation, which track a single target using a network of heterogeneous sensing agents. In the Distributed Bayesian Filtering (DBF) algorithm, the sensing agents combine their normalized likelihood functions using the logarithmic opinion pool and the discrete-time dynamic average consensus algorithm. Each agent's estimated likelihood function converges to an error ball centered on the joint likelihood function of the centralized multi-sensor Bayesian filtering algorithm. Using a new proof technique, the convergence, stability, and robustness properties of the DBF algorithm are rigorously characterized. The explicit bounds on the time step of the robust DBF algorithm are shown to depend on the time-scale of the target dynamics. Furthermore, the DBF algorithm for linear-Gaussian models can be cast into a modified form of the Kalman information filter. In the Bayesian Consensus Filtering (BCF) algorithm, the agents combine their estimated posterior pdfs multiple times within each time step using the logarithmic opinion pool scheme. Thus, each agent's consensual pdf minimizes the sum of Kullback-Leibler divergences with the local posterior pdfs. The performance and robustness properties of these algorithms are validated using numerical simulations. In the third part of this dissertation, we present an attitude control strategy and a new nonlinear tracking controller for a spacecraft carrying a large object, such as an asteroid or a boulder. If the captured object is larger than or comparable in size to the spacecraft and has significant modeling uncertainties, conventional nonlinear control laws that use exact feed-forward cancellation are not suitable because they exhibit a large resultant disturbance torque. The proposed nonlinear tracking control law guarantees

  12. Sampling variance of flood quantiles from the generalised logistic distribution estimated using the method of L-moments

    Science.gov (United States)

    Kjeldsen, Thomas R.; Jones, David A.

    The method of L-moments is the recommended method for fitting the three parameters (location, scale and shape) of a Generalised Logistic (GLO) distribution when conducting flood frequency analyses in the UK. This paper examines the sampling uncertainty of quantile estimates obtained using the GLO distribution for single site analysis using the median to estimate the location parameter. Analytical expressions for the mean and variance of the quantile estimates were derived, based on asymptotic theory. This has involved deriving expressions for the covariance between the sampling median (location parameter) and the quantiles of the estimated unit-median GLO distribution (growth curve). The accuracy of the asymptotic approximations for many of these intermediate results and for the quantile estimates was investigated by comparing the approximations to the outcome of a series of Monte Carlo experiments. The approximations were found to be adequate for GLO shape parameter values between -0.35 and 0.25, which is an interval that includes the shape parameter estimates for most British catchments. An investigation into the contribution of different components to the total uncertainty showed that for large return periods, the variance of the growth curve is larger than the contribution of the median. Therefore, statistical methods using regional information to estimate the growth curve should be considered when estimating design events at large return periods.
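
    For context, the standard method-of-L-moments fit of the GLO uses Hosking's relations k = −τ3, λ2 = αkπ/sin(kπ), λ1 = ξ + α(1/k − π/sin(kπ)). The sketch below uses the ordinary L-moment estimators throughout, i.e. the mean rather than the paper's median-based variant for the location parameter.

```python
import numpy as np

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) * x) / (n * (n - 1))
    b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2*b1 - b0, 6*b2 - 6*b1 + b0
    return l1, l2, l3 / l2                      # lambda1, lambda2, tau3

def glo_fit(x):
    """Method-of-L-moments parameters (xi, alpha, k) of the GLO."""
    l1, l2, t3 = sample_lmoments(x)
    k = -t3
    alpha = l2 * np.sin(k * np.pi) / (k * np.pi)
    xi = l1 - alpha * (1.0/k - np.pi/np.sin(k * np.pi))
    return xi, alpha, k

def glo_quantile(F, xi, alpha, k):
    """GLO growth-curve/quantile function x(F)."""
    return xi + alpha/k * (1.0 - ((1.0 - F) / F)**k)

rng = np.random.default_rng(3)
data = glo_quantile(rng.uniform(size=5000), 100.0, 20.0, -0.1)  # synthetic floods
params = glo_fit(data)
print(params)                        # roughly (100, 20, -0.1)
print(glo_quantile(0.99, *params))   # 100-year flood estimate
```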

  13. Bayesian Estimation of Inequality and Poverty Indices in Case of Pareto Distribution Using Different Priors under LINEX Loss Function

    Directory of Open Access Journals (Sweden)

    Kamaljit Kaur

    2015-01-01

    Full Text Available Bayesian estimators of the Gini index and a poverty measure are obtained for the Pareto distribution under censored and complete setups. The said estimators are obtained using two noninformative priors, namely, the uniform prior and Jeffreys' prior, and one conjugate prior, under the assumption of the Linear Exponential (LINEX) loss function. Using simulation techniques, the relative efficiency of the proposed estimators using the different priors and loss functions is obtained. The performances of the proposed estimators have been compared on the basis of their simulated risks obtained under the LINEX loss function.
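
    The LINEX machinery the paper relies on reduces to one standard identity (often attributed to Zellner), reproduced here for reference:

```latex
% Under the LINEX loss L(\Delta) = b\,(e^{a\Delta} - a\Delta - 1),
% with \Delta = \hat\theta - \theta, the Bayes estimator is
\[
  \hat\theta_{\mathrm{LINEX}}
    = -\frac{1}{a}\,\ln \mathrm{E}\!\left[e^{-a\theta} \mid \mathrm{data}\right],
\]
% provided the posterior expectation is finite; as a -> 0 it recovers the
% posterior mean, i.e. the squared-error-loss estimate.
```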

  14. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  15. Estimating Soil Water Retention Curve Using The Particle Size Distribution Based on Fractal Approach

    Directory of Open Access Journals (Sweden)

    M.M. Chari

    2016-02-01

    The results showed that the fractal dimensions of the particle size distributions obtained with both methods were not significantly different from each other. The fractal dimension of the soil water retention curve (DSWRC) was also obtained, using the suction-moisture data. The results indicate that all three fractal dimensions are related to soil texture and increase with the clay content of the soil. Linear regression relationships between Dm1 and Dm2 and DSWRC were established using 48 soil samples, with coefficients of determination of 0.902 and 0.871. DSWRC was then expressed on the basis of the relationships obtained from four methods: (1) Dm1 = DSWRC, (2) the regression equation obtained from Dm1, (3) Dm2 = DSWRC, and (4) the regression equation obtained from Dm2. The resulting models for determining soil moisture at a given suction were evaluated with the statistical indicators normalized root mean square error, mean error, relative error, and geometric mean modeling efficiency. The results of all four fractal approaches are close to each other, and in most soils they are consistent with the measured data. The fractal models performed well in sandy loam soils, although the predicted moisture values tended to be lower than the measured ones. Conclusions: In this study, the approach of Skaggs et al. (24) as amended by Fooladmand and Sepaskhah (8) was used to develop the grading curve from the percentages of sand, silt and clay, from which the fractal dimension of the particle size distribution was obtained; the fractal dimensions of the particle size radii of sand, silt and clay were used, respectively. In general, the use of fractals to simulate the retention curve proved successful, and it was found that from data such as the sand, silt and clay contents, the retention curve can be estimated with reasonable accuracy.

  16. Distributed Formation State Estimation Algorithms Under Resource and Multi-Tasking Constraints Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Recent work has developed a number of architectures and algorithms for accurately estimating spacecraft and formation states. The estimation accuracy achievable...

  17. [Estimates of Target Population for Pneumococcal Vaccination in People over 50 years in Catalonia and Spain].

    Science.gov (United States)

    Vila-Córcoles, Angel; Ochoa-Gondar, Olga; Satué, Eva; de Diego, Cinta; Vila-Rovira, Marc; Jariod, Manel

    2017-03-15

    Published data about the prevalence of distinct risk conditions for pneumococcal disease are scarce. This study investigated the prevalence of distinct risk conditions for pneumococcal disease in Catalonian adults and estimated the potential size of the target population for pneumococcal vaccination in Catalonia and Spain. Cross-sectional population-based study that included 2,033,465 individuals older than 49 years assigned to the Catalonian Health Institute (Catalonia, Spain) at 01/01/2015. The Catalonian Health Institute Information System for the Development of Research in Primary Care (SIDIAP) was used to identify comorbidities and/or underlying conditions in each subject and to establish the potential target population for pneumococcal vaccination on the basis of their risk of suffering pneumococcal infections: 1) immunocompromised subjects; 2) immunocompetent subjects with any risk condition; 3) immunocompetent subjects without risk conditions. Of the 2,033,465 study subjects, 1,053,155 (51.8%) had no risk conditions, 649,014 (31.9%) had one risk condition and 331,296 (16.3%) had multiple risk conditions (11.4% in those aged 50-64 years vs 21.2% in people older than 65 years, p < 0.001; 21.8% in men vs 11.6% in women, p < 0.001). Overall, 176,600 (8.7%) and 803,710 (39.5%) were classified into risk strata 1 and 2, respectively. According to the distinct risk strata considered, the target population for pneumococcal vaccination varied between 0.2-1.9 million in Catalonia and 1.5-2.3 million in Spain. In our setting, almost fifty percent of people ≥50 years old have at least one risk condition for pneumococcal disease. The adult population susceptible to pneumococcal vaccination varies largely depending on the risk stratum considered as the target for pneumococcal vaccination.

  18. Measurement of bubble size distributions in vesiculated rocks with implications for quantitative estimation of eruption processes

    Science.gov (United States)

    Toramaru, Atsushi

    1990-10-01

    This paper outlines methods for determining a bubble size distribution (BSD) and the moments of the BSD function in vesiculated clasts produced by volcanic eruptions. It reports the results of applications of the methods to 11 natural samples and discusses the implications for quantitative estimates of eruption processes. The analysis is based on a quantitative morphological (stereological) method for 2-dimensional imaging of cross-sections of samples. One method determines, with some assumptions, the complete shape of the BSD function from the chord lengths cut by bubbles. The other determines the 1st, 2nd and 3rd moments of the distribution functions by measurement of the number of bubbles per unit area, the surface area per unit volume, and the volume fraction of bubbles. Comparison of the procedures and results of these two distinct methods shows that the latter yields rather more reliable results than the former, though the results coincide in absolute and relative magnitudes. Results of the analysis for vesiculated rocks from eleven sub-Plinian to Plinian eruptions show some interesting systematic correlations, both between moments of the BSD and between a moment and the eruption column height or the SiO2 content of magma. These correlations are successfully interpreted in terms of the nucleation and growth processes of bubbles in ascending magmas. This suggests that bubble coalescence does not predominate in sub-Plinian to Plinian explosive eruptions. The moment-moment correlations put constraints on the style of the nucleation and growth process of bubbles. The scaling argument suggests a single nucleation event and subsequent growth, with some kind of bubble interaction under continuous depressurization, leading to an intermediate growth law between diffusional growth (R_m ∝ t^{2/3}) at a constant depressurization rate and Ostwald ripening (R_m ∝ t^{1/3}) under constant pressure, where R_m and t are the mean radius of bubbles and the

  19. A Spatially Distributed Conceptual Model for Estimating Suspended Sediment Yield in Alpine catchments

    Science.gov (United States)

    Costa, Anna; Molnar, Peter; Anghileri, Daniela

    2017-04-01

    Suspended sediment is associated with nutrient and contaminant transport in water courses. Estimating suspended sediment load is relevant for water-quality assessment, recreational activities, reservoir sedimentation issues, and ecological habitat assessment. Suspended sediment concentration (SSC) along channels is usually reproduced by suspended sediment rating curves, which relate SSC to discharge with a power law equation. Large uncertainty characterizes rating curves based only on discharge, because sediment supply is not explicitly accounted for. The aim of this work is to develop a source-oriented formulation of suspended sediment dynamics and to estimate suspended sediment yield at the outlet of a large Alpine catchment (upper Rhône basin, Switzerland). We propose a novel modelling approach for suspended sediment which accounts for sediment supply by taking into account the variety of sediment sources in an Alpine environment, i.e. the spatial location of sediment sources (e.g. distance from the outlet and lithology) and the different processes of sediment production and transport (e.g. by rainfall, overland flow, snowmelt). Four main sediment sources, typical of Alpine environments, are included in our model: glacial erosion, hillslope erosion, channel erosion and erosion by mass wasting processes. The predictive model is based on gridded datasets of precipitation and air temperature which drive spatially distributed degree-day models to simulate snowmelt and ice-melt, and determine erosive rainfall. A mass balance at the grid scale determines daily runoff. Each cell belongs to a different sediment source (e.g. hillslope, channel, glacier cell). The amount of sediment entrained and transported in suspension is simulated through non-linear functions of runoff, specific for sediment production and transport processes occurring at the grid scale (e.g. rainfall erosion, snowmelt-driven overland flow). Erodibility factors identify different lithological units
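
    Two of the grid-scale ingredients named above, degree-day melt and a nonlinear (power-law) sediment response to runoff, are compact enough to sketch; all coefficients below are illustrative, not the calibrated upper Rhône values.

```python
import numpy as np

def degree_day_melt(temp_c, ddf=4.0, t0=0.0):
    """Melt (mm/day) from daily mean air temperature; ddf is a degree-day factor."""
    return ddf * np.maximum(temp_c - t0, 0.0)

def suspended_sediment(runoff_mm, a=0.05, b=1.8):
    """Nonlinear sediment production as a power function of runoff for one
    source type; a and b would differ per source (hillslope, glacier, ...)."""
    return a * runoff_mm**b

temps = np.array([-2.0, 1.5, 4.0, 7.2])     # daily mean temperature per day
rain = np.array([0.0, 3.0, 0.0, 12.0])      # erosive rainfall (mm)
runoff = rain + degree_day_melt(temps)      # simple grid-cell mass balance
print(suspended_sediment(runoff))           # sediment flux proxy per day
```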

  20. Estimating usual food intake distributions by using the multiple source method in the EPIC-Potsdam Calibration Study.

    Science.gov (United States)

    Haubrock, Jennifer; Nöthlings, Ute; Volatier, Jean-Luc; Dekkers, Arnold; Ocké, Marga; Harttig, Ulrich; Illner, Anne-Kathrin; Knüppel, Sven; Andersen, Lene F; Boeing, Heiner

    2011-05-01

    Estimating usual food intake distributions from short-term quantitative measurements is critical when occasionally or rarely eaten food groups are considered. To overcome this challenge by statistical modeling, the Multiple Source Method (MSM) was developed in 2006. The MSM provides usual food intake distributions from individual short-term estimates by combining the probability and the amount of consumption with incorporation of covariates into the modeling part. Habitual consumption frequency information may be used in 2 ways: first, to distinguish true nonconsumers from occasional nonconsumers in short-term measurements and second, as a covariate in the statistical model. The MSM is therefore able to calculate estimates for occasional nonconsumers. External information on the proportion of nonconsumers of a food can also be handled by the MSM. As a proof-of-concept, we applied the MSM to a data set from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam Calibration Study (2004) comprising 393 participants who completed two 24-h dietary recalls and one FFQ. Usual intake distributions were estimated for 38 food groups with a proportion of nonconsumers > 70% in the 24-h dietary recalls. The intake estimates derived by the MSM corresponded with the observed values such as the group mean. This study shows that the MSM is a useful and applicable statistical technique to estimate usual food intake distributions, if at least 2 repeated measurements per participant are available, even for food groups with a sizeable percentage of nonconsumers.

  1. Geographical distribution of COPD prevalence in Europe, estimated by an inverse distance weighting interpolation technique

    Directory of Open Access Journals (Sweden)

    Blanco I

    2017-12-01

    Full Text Available Abstract: Existing data on COPD prevalence are limited or totally lacking in many regions of Europe. The geographic information system inverse distance weighted (IDW) interpolation technique has proved to be an effective tool in the spatial distribution estimation of epidemiological variables when real data are few and widely separated. Therefore, in order to represent cartographically the prevalence of COPD in Europe, an IDW interpolation mapping was performed. The point prevalence data provided by 62 studies from 19 countries (21 from 5 Northern European countries, 11 from 3 Western European countries, 14 from 5 Central European countries, and 16 from 6 Southern European countries) were identified using validated spirometric criteria. Despite the lack of data in many areas (including all regions of the eastern part of the continent), the IDW mapping predicted the COPD prevalence in the whole territory, even in extensive areas lacking real data. Although the quality of the data obtained from some studies may have some limitations related to different confounding factors, this methodology may be a suitable tool for obtaining epidemiological estimates that can enable
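
    IDW itself is a one-formula method: each query point gets a weighted mean of the observations, with weights decaying as an inverse power of distance. A minimal sketch with hypothetical survey sites:

```python
import numpy as np

def idw(xy_obs, v_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of scattered point data."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)            # eps guards against zero distance
    return (w @ v_obs) / w.sum(axis=1)

sites = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # survey locations
prev = np.array([7.0, 11.0, 9.0])                          # prevalence (%)
grid = np.array([[5.0, 2.0], [9.0, 1.0]])                  # map pixels
print(idw(sites, prev, grid))
```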

  2. Distribution of near-surface permafrost in Alaska: estimates of present and future conditions

    Science.gov (United States)

    Pastick, Neal J.; Jorgenson, M. Torre; Wylie, Bruce K.; Nield, Shawn J.; Johnson, Kristofer D.; Finley, Andrew O.

    2015-01-01

    High-latitude regions are experiencing rapid and extensive changes in ecosystem composition and function as the result of increases in average air temperature. Increasing air temperatures have led to widespread thawing and degradation of permafrost, which in turn has affected ecosystems, socioeconomics, and the carbon cycle of high latitudes. Here we overcome complex interactions among surface and subsurface conditions to map near-surface permafrost through decision and regression tree approaches that statistically and spatially extend field observations using remotely sensed imagery, climatic data, and thematic maps of a wide range of surface and subsurface biophysical characteristics. The data fusion approach generated medium-resolution (30-m pixels) maps of near-surface (within 1 m) permafrost, active-layer thickness, and associated uncertainty estimates throughout mainland Alaska. Our calibrated models (overall test accuracy of ~85%) were used to quantify changes in permafrost distribution under varying future climate scenarios, assuming no other changes in biophysical factors. The models indicate that near-surface permafrost underlies 38% of mainland Alaska and that near-surface permafrost will disappear on 16 to 24% of the landscape by the end of the 21st century. Simulations suggest that near-surface permafrost degradation is more probable in central regions of Alaska than in more northerly regions. Taken together, these results have obvious implications for the potential remobilization of frozen soil carbon pools under warmer temperatures. Additionally, warmer and drier conditions may increase fire activity and severity, which may exacerbate rates of permafrost thaw and carbon remobilization relative to climate alone. The mapping of permafrost distribution across Alaska is important for land-use planning, environmental assessments, and a wide array of geophysical studies.

  3. Estimates of the Size Distribution of Meteoric Smoke Particles From Rocket-Borne Impact Probes

    Science.gov (United States)

    Antonsen, Tarjei; Havnes, Ove; Mann, Ingrid

    2017-11-01

    Ice particles populating noctilucent clouds and being responsible for polar mesospheric summer echoes exist around the mesopause in the altitude range from 80 to 90 km during polar summer. The particles are observed when temperatures around the mesopause reach a minimum, and it is presumed that they consist of water ice with inclusions of smaller meteoric smoke particles (MSPs). This work provides estimates of the mean size distribution of MSPs through analysis of collision fragments of the ice particles populating the mesospheric dust layers. We have analyzed data from two triplets of mechanically identical rocket probes, MUltiple Dust Detector (MUDD), which are Faraday bucket detectors with impact grids that partly fragment incoming ice particles. The MUDD probes were launched from Andøya Space Center (69°17'N, 16°1'E) on two payloads during the MAXIDUSTY campaign on 30 June and 8 July 2016, respectively. Our analysis shows that it is unlikely that ice particles produce significant current to the detector, and that MSPs dominate the recorded current. The size distributions obtained from these currents, which reflect the MSP sizes, are described by inverse power laws with exponents of k ≈ [3.3 ± 0.7, 3.7 ± 0.5] and k ≈ [3.6 ± 0.8, 4.4 ± 0.3] for the respective flights. We derived two k values for each flight depending on whether the charging probability is proportional to the area or the volume of the fragments. We also confirm that MSPs are probably abundant inside mesospheric ice particles larger than a few nanometers, and the volume filling factor can be a few percent for reasonable assumptions of particle properties.

  4. Atmospheric number size distributions of soot particles and estimation of emission factors

    Directory of Open Access Journals (Sweden)

    D. Rose

    2006-01-01

    Full Text Available Number fractions of externally mixed particles of four different sizes (30, 50, 80, and 150 nm in diameter) were measured using a Volatility Tandem DMA. The system was operated in a street canyon (Eisenbahnstrasse, EI) and at an urban background site (Institute for Tropospheric Research, IfT), both in the city of Leipzig, Germany, as well as at a rural site (Melpitz, ME), a village near Leipzig. Intensive campaigns of 3–5 weeks each took place in summer 2003 as well as in winter 2003/04. The data set thus obtained provides mean number fractions of externally mixed soot particles of atmospheric aerosols in differently polluted areas and different seasons (e.g. at 80 nm on working days, 60% (EI), 22% (IfT), and 6% (ME) in summer, and 26% (IfT) and 13% (ME) in winter). Furthermore, a new method is used to calculate the size distribution of these externally mixed soot particles from parallel number size distribution measurements. A decrease of the externally mixed soot fraction with decreasing urbanity and a diurnal variation linked to the daily traffic changes demonstrate that the traffic emissions have a significant impact on the soot fraction in urban areas. This influence becomes smaller in rural areas, due to atmospheric mixing and transformation processes. For estimating the source strength of soot particles emitted by vehicles (veh), soot particle emission factors were calculated using the Operational Street Pollution Model (OSPM). The emission factor for an average vehicle was found to be (1.5 ± 0.4)·10^14 #/(km·veh). Separating the emission factor into passenger cars ((5.8 ± 2)·10^13 #/(km·veh)) and trucks ((2.5 ± 0.9)·10^15 #/(km·veh)) yielded an about 40-times higher emission factor for trucks compared to passenger cars.

  5. Targeted Learning

    CERN Document Server

    van der Laan, Mark J

    2011-01-01

    The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often with hundreds of thousands of measurements for a single subject. The field is ready to move towards clear objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool so that the subjective choices made by humans are now made by the machine, and (2) targeting the fitting of the probability distribution of the data toward the target

  6. The Effect of Beam Intensity on Temperature Distribution in ADS Windowless Lead-Bismuth Eutectic Spallation Target

    Directory of Open Access Journals (Sweden)

    Jie Liu

    2014-01-01

    Full Text Available The spallation target is the component coupling the accelerator and the reactor and is regarded as the “heart” of the accelerator driven system (ADS). The heavy liquid metal lead-bismuth eutectic (LBE) serves as core coolant and spallation material to carry away the heat deposition of the spallation reaction and produce a high flux of neutrons. It is therefore very important to study the heat transfer process in the target. In this paper, the steady-state flow pattern is numerically obtained and taken as the input for the nuclear physics calculation, and then the distribution of the extremely large power density of the heat load is imported back into the computational fluid dynamics as the source term in the energy equation. Through this coupling, the transient and steady-state temperature distributions in the windowless spallation target are obtained and analyzed on the basis of the flow process and heat transfer. Comparison of the temperature distributions under different beam intensities shows that their shape is the same, resembling the broken wing of a butterfly; the maximum temperature as well as the temperature gradient, however, differs. The results play an important role and can be applied to the further design and optimization of the ADS windowless spallation target.

  7. Estimating species and size composition of rockfishes to verify targets in acoustic surveys of untrawlable areas

    OpenAIRE

    Rooper, Christopher N.; Martin, Michael H.; Butler, John L.; Jones, Darin T.; Zimmermann, Mark

    2012-01-01

    Rockfish (Sebastes spp.) biomass is difficult to assess with standard bottom trawl or acoustic surveys because of their propensity to aggregate near the seafloor in high-relief areas that are inaccessible to sampling by trawling. We compared the ability of a remotely operated vehicle (ROV), a modified bottom trawl, and a stereo drop camera system (SDC) to identify rockfish species and estimate their size composition. The ability to discriminate species was highest for the bottom trawl...

  8. A physics-based solver to optimize the illumination of cylindrical targets in spherically distributed high power laser systems

    Science.gov (United States)

    Gourdain, P.-A.

    2017-05-01

    In recent years, our understanding of high energy density plasmas has played an important role in improving inertial fusion confinement and in emerging new fields of physics, such as laboratory astrophysics. Every new idea required developing innovative experimental platforms at high power laser facilities, such as OMEGA or NIF. These facilities, designed to focus all their beams onto spherical targets or hohlraum windows, are now required to shine them on more complex targets. While the pointing on planar geometries is relatively straightforward, it becomes problematic for cylindrical targets or target with more complex geometries. This publication describes how the distribution of laser beams on a cylindrical target can be done simply by using a set of physical laws as a pointing procedure. The advantage of the method is threefold. First, it is straightforward, requiring no mathematical enterprise besides solving ordinary differential equations. Second, it will converge if a local optimum exists. Finally, it is computationally inexpensive. Experimental results show that this approach produces a geometrical beam distribution that yields cylindrically symmetric implosions.
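
    One plausible reading of such a physics-based pointing procedure is to let the beam spots repel one another on the target surface and integrate the resulting equations of motion to a local optimum. The sketch below relaxes mutually repelling points on an unwrapped cylinder; the repulsion law, step size and spot count are illustrative assumptions, not the paper's solver.

```python
import numpy as np

def spread_on_cylinder(n, radius=1.0, height=4.0, steps=3000, lr=1e-3, seed=0):
    """Relax n repelling points on a cylindrical surface (periodic in theta)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2*np.pi, n)
    z = rng.uniform(0.0, height, n)
    for _ in range(steps):
        dth = theta[:, None] - theta[None, :]
        dth = (dth + np.pi) % (2*np.pi) - np.pi      # wrapped angular offsets
        ds, dz = radius * dth, z[:, None] - z[None, :]
        r2 = ds**2 + dz**2 + np.eye(n)               # eye avoids divide-by-zero
        f = 1.0 / r2**1.5                            # inverse-square repulsion
        np.fill_diagonal(f, 0.0)
        theta += lr / radius * np.sum(f * ds, axis=1)
        z = np.clip(z + lr * np.sum(f * dz, axis=1), 0.0, height)
    return theta % (2*np.pi), z

theta, z = spread_on_cylinder(24)
print(np.c_[theta, z][:5])   # quasi-uniform spot pattern on the cylinder
```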

  9. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
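
    The pitfall discussed is easiest to see against the plain IPTW estimator itself. A minimal sketch with simulated data and known propensity scores (everything here is illustrative):

```python
import numpy as np

def iptw_mean_outcome(A, Y, e_hat, trunc=0.01):
    """Horvitz-Thompson IPTW estimate of E[Y(1)]; trunc bounds extreme weights."""
    e = np.clip(e_hat, trunc, 1.0 - trunc)
    return np.mean(A * Y / e)

rng = np.random.default_rng(7)
X = rng.normal(size=5000)                      # a confounder
e = 1.0 / (1.0 + np.exp(-X))                   # true propensity score
A = rng.binomial(1, e)                         # treatment assignment
Y = 1.0 + 0.5 * X + A + rng.normal(size=5000)  # outcome; effect of A is 1
print(iptw_mean_outcome(A, Y, e))              # estimates E[Y(1)] = 2
```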

  10. Using gradient-based ray and candidate shadow maps for environmental illumination distribution estimation

    Science.gov (United States)

    Eem, Changkyoung; Kim, Iksu; Hong, Hyunki

    2015-07-01

    A method to estimate the environmental illumination distribution of a scene with gradient-based ray and candidate shadow maps is presented. In the shadow segmentation stage, we apply a Canny edge detector to the shadowed image by using a three-dimensional (3-D) augmented reality (AR) marker of a known size and shape. Then the hierarchical tree of the connected edge components representing the topological relation is constructed, and the connected components are merged, taking their hierarchical structures into consideration. A gradient-based ray that is perpendicular to the gradient of the edge pixel in the shadow image can be used to extract the shadow regions. In the light source detection stage, shadow regions with both a 3-D AR marker and the light sources are partitioned into candidate shadow maps. A simple logic operation between each candidate shadow map and the segmented shadow is used to efficiently compute the area ratio between them. The proposed method successively extracts the main light sources according to their relative contributions on the segmented shadows. The proposed method can reduce unwanted effects due to the sampling positions in the shadow region and the threshold values in the shadow edge detection.

  11. Estimation of distribution algorithm for resource allocation in green cooperative cognitive radio sensor networks.

    Science.gov (United States)

    Naeem, Muhammad; Pareek, Udit; Lee, Daniel C; Anpalagan, Alagan

    2013-04-12

    Due to the rapid increase in the usage and demand of wireless sensor networks (WSN), the limited frequency spectrum available for WSN applications will be extremely crowded in the near future. More sensor devices also mean more recharging/replacement of batteries, which will cause significant impact on the global carbon footprint. In this paper, we propose a relay-assisted cognitive radio sensor network (CRSN) that allocates communication resources in an environmentally friendly manner. We use shared band amplify and forward relaying for cooperative communication in the proposed CRSN. We present a multi-objective optimization architecture for resource allocation in a green cooperative cognitive radio sensor network (GC-CRSN). The proposed multi-objective framework jointly performs relay assignment and power allocation in GC-CRSN, while optimizing two conflicting objectives. The first objective is to maximize the total throughput, and the second objective is to minimize the total transmission power of CRSN. The proposed relay assignment and power allocation problem is a non-convex mixed-integer non-linear optimization problem (NC-MINLP), which is generally non-deterministic polynomial-time (NP)-hard. We introduce a hybrid heuristic algorithm for this problem. The hybrid heuristic includes an estimation-of-distribution algorithm (EDA) for performing power allocation and iterative greedy schemes for constraint satisfaction and relay assignment. We analyze the throughput and power consumption tradeoff in GC-CRSN. A detailed analysis of the performance of the proposed algorithm is presented with the simulation results.
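
    The EDA ingredient can be illustrated with the simplest member of the family, a univariate marginal distribution algorithm over binary strings: sample a population, keep the elites, and refit the bitwise probabilities. The objective below is a toy stand-in for the throughput/power trade-off.

```python
import numpy as np

def umda(objective, n_bits, pop=200, elite=50, iters=100, seed=0):
    """Univariate marginal EDA maximizing `objective` over binary strings."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                         # initial bit probabilities
    for _ in range(iters):
        X = (rng.uniform(size=(pop, n_bits)) < p).astype(int)
        scores = np.apply_along_axis(objective, 1, X)
        best = X[np.argsort(scores)[-elite:]]        # elite selection
        p = 0.9 * p + 0.1 * best.mean(axis=0)        # smoothed distribution refit
        p = np.clip(p, 0.02, 0.98)                   # keep sampling diversity
    return p

weights = np.linspace(-1.0, 1.0, 16)                 # toy separable objective
print(umda(lambda x: x @ weights, 16).round(2))      # bits with weight > 0 go on
```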

  12. Implementing the distributed consensus-based estimation of environmental variables in unattended wireless sensor networks

    Science.gov (United States)

    Contreras, Rodrigo; Restrepo, Silvia E.; Pezoa, Jorge E.

    2014-10-01

    In this paper, the prototype implementation of a scalable, distributed protocol for calculating the global average of sensed environmental variables in unattended wireless sensor networks (WSNs) is presented. The design and implementation of the protocol introduces a communication scheme for discovering the WSN topology. Such scheme uses a synchronous flooding algorithm, which was implemented over an unreliable radiogram-based wireless channel. The topology discovery protocol has been synchronized with sampling time of the WSN and must be executed before the consensus-based estimation of the global averages. An average consensus algorithm, suited for clustered WSNs with static topologies, was selected from the literature. The algorithm was properly modified so that its implementation guarantees that the convergence time is bounded and less than the sampling time of the WSN. Moreover, to implement the consensus algorithm, a reliable packet-passing protocol was designed to exchange the weighting factors among the sensor nodes. Since the amount of data exchanged in each packet is bounded by the degree of the WSN, the scalability of the protocol is guaranteed to be linear. The proposed protocol was implemented in the Sun SPOT hardware/software platform using the Java programming language. All the radio communications were implemented over the IEEE 802.15.4 standard and the sensed environmental variables corresponded to the temperature and luminosity.

  13. Estimation of peak heat flux onto the targets for CFETR with extended divertor leg

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chuanjia; Chen, Bin [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Xing, Zhe [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); Wu, Haosheng [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Mao, Shifeng, E-mail: sfmao@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Luo, Zhengping; Peng, Xuebing [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); Ye, Minyou [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui 230031 (China)

    2016-11-01

    Highlights: • A hypothetical geometry is assumed to extend the outer divertor leg in CFETR. • A density-scan SOLPS simulation is performed to study the peak heat flux onto the targets. • The attached–detached regime transition in the outer divertor occurs at a lower puffing rate. • An unexpected delay of the attached–detached regime transition occurs in the inner divertor. - Abstract: The China Fusion Engineering Test Reactor (CFETR) is now in its conceptual design phase. CFETR is proposed as a good complement to ITER for demonstrating fusion energy. The divertor is a crucial component which faces the plasma and handles huge heat power for CFETR and future fusion reactors. To explore an effective way for heat exhaust, various methods to reduce the heat flux to the divertor target should be considered for CFETR. In this work, the effect of an extended outer divertor leg on the peak heat flux is studied. The magnetic configuration of the long-leg divertor is obtained by EFIT and the Tokamak Simulation Code (TSC), while a hypothetical geometry is assumed to extend the outer divertor leg as far as possible inside the vacuum vessel. A SOLPS simulation is performed to study the peak heat flux of the long-leg divertor for CFETR. D2 gas puffing is used, and an increasing puffing rate means an increasing plasma density. Peak heat fluxes below 10 MW/m² onto both the inner and outer targets are achieved. A comparison of the peak heat flux between the long-leg and conventional divertors shows that the attached–detached regime transition of the outer divertor occurs at a lower gas puffing rate for the long-leg divertor. For the inner divertor, even though the configuration is almost the same, the situation is the opposite.

  13. Image segmentation and activity estimation for microPET 11C-raclopride images using an expectation-maximization algorithm with a mixture of Poisson distributions.

    Science.gov (United States)

    Su, Kuan-Hao; Chen, Jay S; Lee, Jih-Shian; Hu, Chi-Min; Chang, Chi-Wei; Chou, Yuan-Hwa; Liu, Ren-Shyan; Chen, Jyh-Cheng

    2011-07-01

    The objective of this study was to use a mixture-of-Poissons (MOP) model expectation-maximization (EM) algorithm for segmenting microPET images. Simulated rat phantoms with partial volume effect and different noise levels were generated to evaluate the performance of the method. Partial volume correction was performed using an EM deblurring method before the segmentation. The EM-MOP outperformed the methods it was compared against in terms of estimated spatial accuracy, quantitative accuracy, robustness and computing efficiency. To conclude, the proposed EM-MOP method is a reliable and accurate approach for estimating uptake levels and spatial distributions across target tissues in microPET (11)C-raclopride imaging studies. Copyright © 2011 Elsevier Ltd. All rights reserved.
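
    Since PET pixel values are count-like, a Poisson mixture is a natural model. The following is a minimal sketch of EM for a K-component Poisson mixture on a flattened image, assuming deblurring and partial-volume correction have already been applied; it is not the authors' implementation, and all parameter choices and the toy data are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(counts, k, n_iter=200, seed=0):
    """EM for a K-component mixture of Poisson distributions.

    counts: 1-D array of non-negative integer pixel values.
    Returns mixing weights pi and Poisson means lam."""
    rng = np.random.default_rng(seed)
    lam = rng.uniform(1.0, counts.max() + 1.0, size=k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] proportional to pi_j * P(x_n | lam_j)
        logr = np.log(pi) + poisson.logpmf(counts[:, None], lam[None, :])
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and Poisson means
        nk = np.maximum(r.sum(axis=0), 1e-12)
        pi = nk / counts.size
        lam = (r * counts[:, None]).sum(axis=0) / nk
    return pi, lam

# toy "image": background counts mixed with a smaller hot region
rng = np.random.default_rng(1)
img = np.concatenate([rng.poisson(3, 5000), rng.poisson(20, 1000)])
pi, lam = em_poisson_mixture(img, k=2)
# segmentation: assign each pixel to the most responsible component
labels = np.argmax(np.log(pi) + poisson.logpmf(img[:, None], lam[None, :]), axis=1)
```

    The fitted Poisson means then serve as the activity (uptake) estimates for the corresponding tissue classes.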

  15. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability-density-function-based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R), the normal (N-N), and the logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original and the general distribution functions when only the length of the rotational semi-axis is varied.
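
    For reference, the sketch below writes down commonly used forms of two of the monomodal PSD models named above and evaluates a Lambert-Beer-type spectral extinction integral over a candidate PSD. The characteristic parameters and the extinction-efficiency term are illustrative stand-ins, not the paper's ADA computation.

```python
import numpy as np

def rosin_rammler_pdf(d, d_char, k):
    """A common form of the Rosin-Rammler (Weibull-type) size distribution."""
    return (k / d_char) * (d / d_char) ** (k - 1) * np.exp(-((d / d_char) ** k))

def lognormal_pdf(d, d_geo, sigma_g):
    """Log-normal size distribution with geometric mean d_geo."""
    s = np.log(sigma_g)
    return np.exp(-0.5 * (np.log(d / d_geo) / s) ** 2) / (d * s * np.sqrt(2 * np.pi))

# Lambert-Beer-type forward model: spectral extinction is an integral of
# the extinction cross-section weighted by the candidate PSD.
d = np.linspace(0.1, 20.0, 400)                 # size grid (micrometres)
psd = rosin_rammler_pdf(d, d_char=5.0, k=2.0)   # candidate R-R distribution
q_ext = 2.0 + np.sin(d) / d                     # stand-in for the ADA efficiency
tau = np.trapz(q_ext * (np.pi / 4.0) * d ** 2 * psd, d)
```

    An inverse method such as PDF-ACO then searches the distribution-parameter space (here d_char and k) to minimize the mismatch between measured and modelled extinction at several wavelengths.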

  16. Performance Comparison of Time-Frequency Distributions for Estimation of Instantaneous Frequency of Heart Rate Variability Signals

    Directory of Open Access Journals (Sweden)

    Nabeel Ali Khan

    2017-02-01

    Full Text Available The instantaneous frequency (IF) of a non-stationary signal is usually estimated from a time-frequency distribution (TFD). The IF of heart rate variability (HRV) is an important parameter because the power in a frequency band around the IF can be used not only for the interpretation and analysis of the respiratory rate but also for a more accurate analysis of heart rate (HR) signals. In this study, we compare the performance of five state-of-the-art kernel-based TFDs in terms of their ability to accurately estimate the IF of HR signals. The selected TFDs include three widely used fixed-kernel methods: the modified B distribution, the S-method and the spectrogram; and two adaptive-kernel methods: the adaptive optimal kernel TFD and the recently developed adaptive directional TFD. The IF of the respiratory signal, which is usually easier to estimate because the respiratory signal is a mono-component whose amplitude varies only slowly with time, is used as a reference to examine the accuracy of the HRV IF estimates. Experimental results indicate that the most reliable estimates are obtained using the adaptive directional TFD in comparison to other commonly used methods such as the adaptive optimal kernel TFD and the modified B distribution.
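
    A minimal illustration of the estimation chain, using the simplest of the fixed-kernel TFDs mentioned above (the spectrogram) rather than the adaptive directional TFD: the IF estimate at each time instant is taken as the frequency of the TFD peak. The 4 Hz resampling rate and the synthetic respiratory-band signal are assumptions made for the sketch.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4.0                       # Hz; HRV series are often resampled near 4 Hz
t = np.arange(0, 300, 1 / fs)
# synthetic respiratory-band component with a slowly drifting IF
f_inst = 0.25 + 0.05 * np.sin(2 * np.pi * t / 120)   # true IF in Hz
x = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)       # phase = 2*pi * integral of IF

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
if_est = f[np.argmax(Sxx, axis=0)]   # IF estimate: peak frequency per time slice
```

    A fixed-kernel TFD like this trades time-frequency resolution for simplicity; the adaptive directional TFD found most reliable above instead adapts its kernel to the local orientation of the signal components.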

  17. Optimum arrangement of seismic intensity monitoring points for immediate estimation system of wide-area distribution of seismic intensity

    Science.gov (United States)

    Furumoto, Yoshinori; Wada, Ayaka; Machida, Tetsu; Watanabe, Taichi; Bong, Michelle

    2017-10-01

    In this paper, an immediate estimation system for the wide-area distribution of seismic intensity, built on a seismic intensity information network, is discussed. In general, although the seismic intensity at each monitoring point can be obtained from the network within a few minutes after an earthquake occurs, the wide-area distribution of seismic intensity is not available, because the monitoring points on the network are too few and too sparsely placed to estimate the distribution. However, using additional information, such as soil profiles of local areas and the attenuation characteristics of seismic intensity, the distribution of seismic intensity can be estimated by computer simulation that accounts for seismic wave amplification in the ground, immediately after the intensity information from the network is obtained. In particular, the array density and the optimum arrangement of monitoring points for efficiently estimating the distribution of seismic intensity in a local municipality are discussed. The study concludes that it is effective to place seismic monitoring points in densely populated areas.

  18. Estimating the spatial distribution of PM2.5 concentration by integrating geographic data and field measurements

    Science.gov (United States)

    Zhai, L.; Sang, H.; Zhang, J.; An, F.

    2015-06-01

    Air quality directly affects human health and daily life, and it receives wide public concern and great attention from governments at all levels. Estimating the concentration distribution of PM2.5 and analyzing its impact factors are significant for understanding its spatial distribution and for supporting government decision-making. In this study, multiple sources of remote sensing and GIS data are used to estimate the spatial distribution of the PM2.5 concentration in Shijiazhuang, China, by multivariate linear regression modelling that integrates annual average PM2.5 values collected from local environmental monitoring stations. Two major sources of PM2.5 are considered: dust surfaces and industrial pollution sources. The area attribute of dust surfaces and the point attribute of industrial polluting enterprises are extracted from high-resolution remote sensing images and GIS data for 2013. 30 m land-cover products, annual average PM2.5 concentrations from the 8 environmental monitoring stations, annual mean MODIS AOD data, traffic data and DEM data are used in the regression modelling analysis. The multivariate regression model is then applied to estimate the spatial distribution of the PM2.5 concentration. The estimated concentration rises gradually from west to east, with the highest values in the municipal district and its surrounding areas. The spatial distribution pattern fits reality reasonably well.
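
    The record names the covariates but not the exact design matrix, so the sketch below is a generic ordinary-least-squares version of the approach with synthetic station data; the covariates, coefficients and station count are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8  # eight monitoring stations, as in the study
# Illustrative per-station covariates: AOD, dust-surface fraction,
# industrial-source density, road density (all synthetic here)
X = rng.uniform(0.0, 1.0, size=(n, 4))
y = 60.0 + X @ np.array([40.0, 25.0, 15.0, -10.0]) + rng.normal(0.0, 2.0, n)

X1 = np.column_stack([np.ones(n), X])           # design matrix with intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # OLS regression coefficients

# The fitted coefficients can then be applied cell by cell to raster
# covariates to produce the PM2.5 concentration surface, e.g.:
# pm25_grid = beta[0] + covariate_grid @ beta[1:]
```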

  19. Novel coherent receivers for AF distributed STBC using disintegrated channel estimation

    KAUST Repository

    Khan, Fahd Ahmed

    2011-05-01

    For a single-relay network, disintegrated channel estimation (DCE), where the source-relay channel is estimated at the relay and the relay-destination channel is estimated at the destination, gives better performance than cascaded channel estimation. We derive novel receivers for the relay network with disintegrated channel estimation. The derived receivers do not require channel estimation at the destination, as they decode directly from the received pilot signals and the source-relay channel estimate. We also consider the effect of a quantized source-relay channel estimate on the performance of the designed receivers. Simulation results show that a performance gain of up to 2.2 dB can be achieved by the new receivers, compared with the conventional mismatched coherent receiver with DCE. © 2011 IEEE.

  20. Modifying the planning target volume to optimize the dose distribution in dynamic conformal arc therapy for large metastatic brain tumors.

    Science.gov (United States)

    Ogura, Kengo; Kosaka, Yasuhiro; Imagumbai, Toshiyuki; Ueki, Kazuhito; Narukami, Ryo; Hattori, Takayuki; Kokubo, Masaki

    2017-06-01

    When treating large metastatic brain tumors with stereotactic radiotherapy (SRT), high dose conformity to the target is difficult to achieve. Employing a modified planning target volume (mPTV) instead of the original PTV may be one way to improve the dose distribution in linear-accelerator-based SRT using a dynamic conformal technique. In this study, we quantitatively analyzed the impact of an mPTV on the dose distribution. Twenty-four tumors with a maximum diameter of >2 cm were collected. For each tumor, two plans were created: one used an mPTV and the other did not. The mPTV was produced by shrinking or enlarging the original PTV according to the dose distribution in the original plan. The dose conformity was evaluated and compared between the plans using a two-sided paired t test. The conformity index defined by the Radiation Therapy Oncology Group was 1.34 ± 0.10 and 1.41 ± 0.13, and Paddick's conformity index was 0.75 ± 0.05 and 0.71 ± 0.06, for the plans with and without an mPTV, respectively. All of these improvements were statistically significant (P < 0.05). The use of an mPTV can improve target conformity when planning SRT for large metastatic brain tumors.
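
    The two indices compared above have standard definitions, sketched below: the RTOG conformity index is the ratio of the prescription isodose volume (PIV) to the target volume (TV), while Paddick's index also penalizes geometric miss via the part of the target covered by the prescription isodose (TV_PIV). The example volumes are illustrative.

```python
def rtog_ci(piv, tv):
    """RTOG conformity index: prescription isodose volume / target volume."""
    return piv / tv

def paddick_ci(tv_piv, tv, piv):
    """Paddick conformity index: (TV covered by the prescription isodose)^2
    divided by (target volume * prescription isodose volume)."""
    return tv_piv ** 2 / (tv * piv)

# Example: 20 cm^3 target, 26 cm^3 prescription isodose volume,
# 19 cm^3 of the target covered by the prescription isodose.
print(rtog_ci(26.0, 20.0))           # 1.30; the ideal value is 1.0
print(paddick_ci(19.0, 20.0, 26.0))  # ~0.69; the ideal value is 1.0
```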

  1. Distributed Cerebral Blood Flow estimation using a spatiotemporal hemodynamic response model and a Kalman-like Filter approach

    KAUST Repository

    Belkhatir, Zehor

    2015-11-23

    This paper discusses the estimation of distributed Cerebral Blood Flow (CBF) using a spatiotemporal traveling-wave model. We consider a damped wave partial differential equation that describes a physiological relationship between the blood mass density and the CBF. The spatiotemporal model is reduced to a finite-dimensional system using a cubic B-spline continuous Galerkin method. A Kalman Filter with Unknown Inputs without Direct Feedthrough (KF-UI-WDF) is applied to the reduced differential model to estimate the source term, which is the CBF scaled by a factor. Numerical results showing the performance of the adopted estimator are provided.
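
    To make the model concrete, the sketch below time-steps a damped 1-D wave equation u_tt + a*u_t = c^2*u_xx + s(x,t) with a simple finite-difference scheme; the paper instead uses a cubic B-spline Galerkin reduction, and the coefficients and the Gaussian source (the stand-in for the scaled CBF) are illustrative. A reduced state of this kind is what the KF-UI-WDF operates on to recover the unknown source term.

```python
import numpy as np

# Damped 1-D wave equation u_tt + a*u_t = c^2*u_xx + s(x, t)
nx, nt, L, T = 100, 2000, 1.0, 2.0
dx, dt = L / (nx - 1), T / nt
c, a = 1.0, 0.5
x = np.linspace(0.0, L, nx)
u_prev = np.zeros(nx)
u = np.zeros(nx)
for n in range(nt):
    s = np.exp(-100.0 * (x - 0.5) ** 2) * np.sin(4 * np.pi * n * dt)  # source
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    # explicit update derived from centred differences in time
    u_next = (2 * u - u_prev + dt ** 2 * (c ** 2 * lap + s)
              + 0.5 * a * dt * u_prev) / (1.0 + 0.5 * a * dt)
    u_next[0] = u_next[-1] = 0.0   # Dirichlet boundaries
    u_prev, u = u, u_next
```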

  2. Estimation of Distribution Algorithm for Resource Allocation in Green Cooperative Cognitive Radio Sensor Networks

    Directory of Open Access Journals (Sweden)

    Alagan Anpalagan

    2013-04-01

    Full Text Available Due to the rapid increase in the usage of and demand for wireless sensor networks (WSNs), the limited frequency spectrum available for WSN applications will be extremely crowded in the near future. More sensor devices also mean more recharging and replacement of batteries, which will significantly increase the global carbon footprint. In this paper, we propose a relay-assisted cognitive radio sensor network (CRSN) that allocates communication resources in an environmentally friendly manner. We use shared-band amplify-and-forward relaying for cooperative communication in the proposed CRSN. We present a multi-objective optimization architecture for resource allocation in a green cooperative cognitive radio sensor network (GC-CRSN). The proposed multi-objective framework jointly performs relay assignment and power allocation in GC-CRSN while optimizing two conflicting objectives: maximizing the total throughput and minimizing the total transmission power of the CRSN. The proposed relay assignment and power allocation problem is a non-convex mixed-integer non-linear optimization problem (NC-MINLP), which is generally non-deterministic polynomial-time (NP-)hard. We introduce a hybrid heuristic algorithm for this problem, combining an estimation-of-distribution algorithm (EDA) for power allocation with iterative greedy schemes for constraint satisfaction and relay assignment. We analyze the throughput-power consumption tradeoff in GC-CRSN. A detailed analysis of the performance of the proposed algorithm is presented with the simulation results.
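
    The abstract does not detail the EDA's probabilistic model. A common choice for continuous power-allocation variables is an independent Gaussian model, sketched below: sample a population, keep the elite candidates, refit the distribution to them, and resample. The objective, population sizes and toy usage are illustrative stand-ins for the paper's (negative) throughput objective and constraints.

```python
import numpy as np

def eda_minimize(objective, dim, pop=60, elite=15, iters=100, seed=0):
    """Generic estimation-of-distribution algorithm with an independent
    Gaussian model: sample, select elites, refit, resample."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))
        f = np.apply_along_axis(objective, 1, X)
        idx = np.argsort(f)[:elite]                     # select elite candidates
        mu, sigma = X[idx].mean(0), X[idx].std(0) + 1e-9
        if f[idx[0]] < best_f:
            best_x, best_f = X[idx[0]], f[idx[0]]
    return best_x, best_f

# toy usage: a sphere function standing in for the real allocation objective
x_best, f_best = eda_minimize(lambda p: np.sum((p - 0.5) ** 2), dim=5)
```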

  3. Estimating aquifer properties and distributed groundwater recharge in a hard-rock catchment of Udaipur, India

    Science.gov (United States)

    Machiwal, Deepesh; Singh, P. K.; Yadav, K. K.

    2017-10-01

    The present study determined aquifer parameters in the hard-rock aquifer system of the Ahar River catchment, Udaipur, India by conducting 19 pumping tests in large-diameter wells. Spreadsheet programs were developed for analyzing the pumping test data, and their accuracy was evaluated by the root mean square error (RMSE) and the correlation coefficient (R). Histograms and the Shapiro-Wilk test indicated non-normality (p < 0.05) of the groundwater levels at 50 sites for the years 2006-2008, and hence, logarithmic transformations were applied. Furthermore, recharge was estimated using a GIS-based water table fluctuation method. The groundwater levels were found to be influenced by the topography, the presence of structural hills, the density of pumping wells, and seasonal recharge. The results of the pumping tests revealed that the transmissivity (T) ranges from 68 to 2239 m²/day, and the specific yield (Sy) varies from 0.211 to 0.51 × 10⁻⁵. The T and Sy values were found reasonable for the hard-rock formations in the area, and the spreadsheet programs were found reliable (RMSE = 0.017-0.339 m; R > 0.95). The distribution of the aquifer parameters and recharge indicated that the northern portion, with high ground elevations (575-700 m MSL) and high Sy (0.08-0.25) and T (>600 m²/day) values, may act as a recharge zone. The T and Sy values revealed significant spatial variability, which suggests strong heterogeneity of the hard-rock aquifer system. Overall, the findings of this study are useful for formulating appropriate strategies for managing water resources in the area. Also, the developed spreadsheet programs may be used to analyze pumping test data from large-diameter wells in other hard-rock regions of the world.
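
    In its basic form, the water-table fluctuation method referenced above reduces to recharge = Sy × ΔH, where ΔH is the water-table rise over the recharge period; the GIS-based variant evaluates this cell by cell. The values below are illustrative, not the study's.

```python
def wtf_recharge(specific_yield, wt_rise_m):
    """Water-table fluctuation method: recharge = Sy * water-table rise."""
    return specific_yield * wt_rise_m

# illustrative values only: Sy = 0.12, seasonal water-table rise = 2.5 m
print(wtf_recharge(0.12, 2.5))   # 0.30 m of recharge over the season
```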

  4. Fault Slip Distribution of the 2016 Fukushima Earthquake Estimated from Tsunami Waveforms

    Science.gov (United States)

    Gusman, Aditya Riadi; Satake, Kenji; Shinohara, Masanao; Sakai, Shin'ichi; Tanioka, Yuichiro

    2017-08-01

    The 2016 Fukushima normal-faulting earthquake (Mjma 7.4) occurred 40 km off the coast of Fukushima within the upper crust. The earthquake generated a moderate tsunami, which was recorded by coastal tide gauges and offshore pressure gauges. First, the sensitivity of the tsunami waveforms to fault dimensions and depths was examined and the best size and depth were determined. Tsunami waveforms computed for the four available focal mechanisms showed that a simple fault striking northeast-southwest and dipping southeast (strike = 45°, dip = 41°, rake = -95°) yielded the best fit to the observed waveforms. This fault geometry was then used in a tsunami waveform inversion to estimate the fault slip distribution. A large slip of 3.5 m was located near the surface, and the major slip region covered an area of 20 km × 20 km. The seismic moment, calculated assuming a rigidity of 2.7 × 10¹⁰ N/m², was 3.70 × 10¹⁹ Nm, equivalent to Mw = 7.0. This is slightly larger than the moments from the moment tensor solutions (Mw 6.9). Large secondary tsunami peaks arrived approximately an hour after clear initial peaks were recorded by the offshore pressure gauges and the Sendai and Ofunato tide gauges. Our tsunami propagation model suggests that the large secondary tsunami signals were tsunami waves reflected off the Fukushima coast. A rather large tsunami amplitude of 75 cm at Kuji, about 300 km north of the source, was comparable to those recorded at stations located much closer to the epicenter, such as Soma and Onahama. Tsunami simulations and ray tracing for both real and artificial bathymetry indicate that a significant portion of the tsunami wave was refracted toward the coast around Kuji and Miyako by bathymetry effects.
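
    The moment-to-magnitude conversion quoted above follows the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1), with M0 in Nm; a one-line check against the reported values:

```python
import math

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude from seismic moment in Nm."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

print(moment_magnitude(3.70e19))   # ~6.98, i.e. the Mw = 7.0 quoted above
```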

  5. Role of Target Indicators in Determination of Prognostic Estimates for the Construction Industry

    Directory of Open Access Journals (Sweden)

    Zalunina Olha M.

    2014-03-01

    Full Text Available The article considers the interrelation of planning and forecasting in the construction industry. It justifies the need to determine key indicators under the specific conditions of a forming market economy: unstable production volumes in industry, a lack of investment for technical re-equipment of the branch, insufficient domestic primary energy carriers, sharply rising prices for imported energy carriers, the absence of a modern tariff system for electric energy, and inefficient energy-saving measures. The article proposes forming key indicators on the basis of a factor analysis, which involves a stage-by-stage transformation of the matrix of original data that "compresses" the information. This allows identification of the most significant properties influencing the economic state of the region while using a minimum of original information. The article forms key target indicators of the energy sector for the Poltava oblast and, using the proposed method, calculates prognostic values of key indicators of territorial functioning for the oblast.

  6. Investigation on target normal sheath acceleration through measurements of ions energy distribution

    Energy Technology Data Exchange (ETDEWEB)

    Tudisco, S., E-mail: tudisco@lns.infn.it; Cirrone, G. A. P.; Mascali, D.; Schillaci, F. [Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali del Sud, Via S. Sofia 62, 95123 Catania (Italy); Altana, C. [Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali del Sud, Via S. Sofia 62, 95123 Catania (Italy); Dipartimento di Fisica e Astronomia, Università degli Studi di Catania, Via S. Sofia 64, 95123 Catania (Italy); Lanzalone, G. [Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali del Sud, Via S. Sofia 62, 95123 Catania (Italy); Università degli Studi di Enna “Kore,” Via delle Olimpiadi, 94100 Enna (Italy); Muoio, A. [Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali del Sud, Via S. Sofia 62, 95123 Catania (Italy); Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Messina, Viale F.S. D’Alcontres 31, 98166 Messina (Italy); Brandi, F. [Consiglio Nazionale delle Ricerche, Istituto Nazionale di Ottica, Intense Laser Irradiation Laboratory, Via G. Moruzzi 1, 56124 Pisa (Italy); Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genova (Italy); Cristoforetti, G.; Ferrara, P.; Fulgentini, L.; Koester, P. [Consiglio Nazionale delle Ricerche, Istituto Nazionale di Ottica, Intense Laser Irradiation Laboratory, Via G. Moruzzi 1, 56124 Pisa (Italy); Labate, L.; Gizzi, L. A. [Consiglio Nazionale delle Ricerche, Istituto Nazionale di Ottica, Intense Laser Irradiation Laboratory, Via G. Moruzzi 1, 56124 Pisa (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); and others

    2016-02-15

    An experimental campaign aimed at investigating ion acceleration mechanisms in laser-matter interaction in the femtosecond domain was carried out at the Intense Laser Irradiation Laboratory facility with laser intensities of up to 2 × 10{sup 19} W/cm{sup 2}. A Thomson parabola spectrometer was used to obtain the spectra of the different accelerated ion species. Here, we show the energy spectra of the light ions and discuss their dependence on the structural characteristics of the target and the roles of the surface and the target bulk in the acceleration process.

  7. A method to estimate the distribution of various fractions of PM10 in ambient air in the Netherlands

    NARCIS (Netherlands)

    Janssen, L.H.J.M.; Buringh, E.; Meulen, A. van der; Hout, K.D. van den

    1999-01-01

    Eight rules based on measurements, model calculations, additional deductions and expert judgement have been devised for estimating the distribution and sources of various fractions of PM10 in the Netherlands. As some of the underlying assumptions behind these rules are debatable, they can be best…

  8. Estimation and prediction of the HIV-AIDS-epidemic under conditions of HAART using mixtures of incubation time distributions

    NARCIS (Netherlands)

    Heisterkamp, S. H.; de Vries, R.; Sprenger, H. G.; Hubben, G. A. A.; Postma, M. J.

    2008-01-01

    The estimation of the HIV-AIDS epidemic by means of back-calculation (BC) has been difficult since the introduction of highly active anti-retroviral therapy (HAART), because the incubation time distributions needed for BC were poorly known. Moreover, it has been assumed that if the general public is…

  9. Influence of Crown Biomass Estimators and Distribution on Canopy Fuel Characteristics in Ponderosa Pine Stands of the Black Hills

    Science.gov (United States)

    Tara Keyser; Frederick Smith

    2009-01-01

    Two determinants of crown fire hazard are canopy bulk density (CBD) and canopy base height (CBH). The Fire and Fuels Extension to the Forest Vegetation Simulator (FFE-FVS) is a model that predicts CBD and CBH. Currently, FFE-FVS accounts for neither geographic variation in tree allometries nor the nonuniform distribution of crown mass when estimating CBH and CBD…

  10. Determination of metal ion content of beverages and estimation of target hazard quotients: a comparative study

    Directory of Open Access Journals (Sweden)

    Barker James

    2008-06-01

    Full Text Available Abstract Background Considerable research has been directed towards the roles of metal ions in nutrition, with metal ion toxicity attracting particular attention. The aim of this study is to measure the levels of metal ions found in selected beverages (red wine, stout and apple juice) and to determine their potential detrimental effects via calculation of the Target Hazard Quotients (THQ) for 250 mL daily consumption. Results The levels (mean ± SEM) and diversity of metals determined by ICP-MS were highest for red wine samples (30 metals totalling 5620.54 ± 123.86 ppb), followed by apple juice (15 metals totalling 1339.87 ± 10.84 ppb) and stout (14 metals totalling 464.85 ± 46.74 ppb). The combined THQ values were determined based upon the levels of V, Cr, Mn, Ni, Cu, Zn and Pb, which gave red wine samples the highest value (5100.96 ± 118.93 ppb), followed by apple juice (666.44 ± 7.67 ppb) and stout (328.41 ± 42.36 ppb). The THQ values were as follows: apple juice (male 3.11, female 3.87), stout (male 1.84, female 2.19), red wine (male 126.52, female 157.22) and ultra-filtered red wine (male 110.48, female 137.29). Conclusion This study reports relatively high levels of metal ions in red wine, which give a very high THQ value, suggesting potentially hazardous exposure over a lifetime for those who consume at least 250 mL daily. In addition to the known hazardous metals (e.g. Pb), many metals (e.g. Rb) have not had their biological effects systematically investigated, and hence the impact of sustained ingestion is not known.
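
    The record does not reproduce the authors' THQ formula; the form commonly used in such studies is the US EPA one, THQ = (EF × ED × IR × C) / (RfD × BW × AT). A sketch with illustrative numbers (the concentration and reference dose below are assumptions, not values from the paper):

```python
def target_hazard_quotient(c_mg_per_l, intake_l_per_day, rfd_mg_per_kg_day,
                           body_weight_kg=70.0, exposure_freq_days=365,
                           exposure_years=70, averaging_days=365 * 70):
    """US EPA-style THQ for a non-carcinogenic metal ingested in a beverage.

    THQ = (EF * ED * IR * C) / (RfD * BW * AT); THQ > 1 suggests a
    potential health risk from lifetime exposure."""
    ef, ed = exposure_freq_days, exposure_years
    return (ef * ed * intake_l_per_day * c_mg_per_l) / (
        rfd_mg_per_kg_day * body_weight_kg * averaging_days)

# illustrative only: 0.05 mg/L Pb, 250 mL/day, an oral RfD of 0.0035 mg/kg/day
print(target_hazard_quotient(0.05, 0.25, 0.0035))   # ~0.05 for this metal alone
```

    The combined THQ reported above sums such per-metal quotients over the seven metals considered.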

  11. Image Segmentation Using a Trimmed Likelihood Estimator in the Asymmetric Mixture Model Based on Generalized Gamma and Gaussian Distributions

    Directory of Open Access Journals (Sweden)

    Yi Zhou

    2018-01-01

    Full Text Available The finite mixture model (FMM) is increasingly used for unsupervised image segmentation. In this paper, a new finite mixture model based on a combination of generalized Gamma and Gaussian distributions using a trimmed likelihood estimator (GGMM-TLE) is proposed. GGMM-TLE combines the effectiveness of the Gaussian distribution with the asymmetric capability of the generalized Gamma distribution to provide superior flexibility for describing different shapes of observation data. Another advantage is that we consider the spatial information among neighbouring pixels by introducing a Markov random field (MRF); thus, the proposed mixture model remains sufficiently robust with respect to different types and levels of noise. Moreover, this paper presents a new component-based confidence-level-ordering trimmed likelihood estimator, with a simple form, allowing GGMM-TLE to estimate the parameters after discarding the outliers. Thus, the proposed algorithm can effectively eliminate the disturbance of outliers. Furthermore, the paper proves the identifiability of the proposed mixture model in theory to guarantee that the parameter estimation procedures are well defined. Finally, an expectation-maximization (EM) algorithm is included to estimate the parameters of GGMM-TLE by maximizing the log-likelihood function. Experiments on multiple public datasets demonstrate that GGMM-TLE achieves superior performance compared with several existing methods in image segmentation tasks.

  12. Resampling methods for evaluating the uncertainty of the nonparametric magnitude distribution estimation in the Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Orlecka-Sikora, Beata

    2008-08-01

    The cumulative distribution function (CDF) of the magnitude of seismic events is one of the most important probabilistic characteristics in Probabilistic Seismic Hazard Analysis (PSHA). The magnitude distribution of mining-induced seismicity is complex and is therefore estimated using kernel nonparametric estimators. Because of its model-free character, however, the nonparametric approach cannot provide confidence interval estimates for the CDF using the classical methods of mathematical statistics. To assess the errors in the estimation of the magnitude of seismic events, and thereby in the evaluation of seismic hazard parameters in the nonparametric approach, we propose the use of resampling methods. Resampling techniques applied to a single dataset provide many replicas of this sample, which preserve its probabilistic properties. In order to estimate the confidence intervals for the CDF of magnitude, we have developed an algorithm based on the bias-corrected and accelerated (BCa) method. This procedure uses the smoothed bootstrap and second-order bootstrap samples. We refer to this algorithm as the iterated BCa method. The algorithm's performance is illustrated through the analysis of Monte Carlo simulated seismic event catalogues and actual data from an underground copper mine in the Legnica-Głogów Copper District in Poland. The studies show that the iterated BCa technique provides satisfactory results regardless of the sample size and the actual shape of the magnitude distribution.
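
    To show the resampling idea in its simplest form, the sketch below computes a plain percentile-bootstrap confidence interval for the CDF evaluated at a single magnitude; it is a stand-in for, not an implementation of, the iterated BCa procedure described above, and the toy catalogue is synthetic.

```python
import numpy as np

def bootstrap_cdf_ci(magnitudes, m0, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for F(m0) = P(M <= m0)."""
    rng = np.random.default_rng(seed)
    n = magnitudes.size
    stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(magnitudes, size=n, replace=True)
        stats[b] = np.mean(resample <= m0)   # empirical CDF at m0
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# toy catalogue of event magnitudes
mags = np.random.default_rng(1).exponential(0.8, size=500) + 0.5
print(bootstrap_cdf_ci(mags, m0=1.5))
```

    BCa refines this by correcting the percentile endpoints for bias and skewness of the bootstrap distribution; the iterated variant applies the correction to smoothed, second-order bootstrap samples.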

  13. Targeting of RGD-modified proteins to tumor vasculature : A pharmacokinetic and cellular distribution study

    NARCIS (Netherlands)

    Schraa, Astrid J.; Kok, Robbert J.; Moorlag, Henk E.; Bos, EJ; Proost, Johannes H.; Meijer, Dirk K.F.; de Leij, Lou F.M.H.; Molema, Grietje

    2002-01-01

    Angiogenesis-associated integrin alpha(v)beta(3) represents an attractive target for therapeutic intervention because it becomes highly upregulated on angiogenic endothelium and plays an important role in the survival of endothelial cells. Cyclic RGD peptides were previously shown to have a high affinity…

  14. Origin of discrepancies between crater size-frequency distributions of coeval lunar geologic units via target property contrasts

    Science.gov (United States)

    Van der Bogert, Carolyn H.; Hiesinger, Harald; Dundas, Colin M.; Kruger, T.; McEwen, Alfred S.; Zanetti, Michael; Robinson, Mark S.

    2017-01-01

    Recent work on dating Copernican-aged craters, using Lunar Reconnaissance Orbiter (LRO) Camera data, re-encountered a curious discrepancy in crater size-frequency distribution (CSFD) measurements that was observed, but not understood, during the Apollo era. For example, at Tycho, Copernicus, and Aristarchus craters, CSFDs of impact melt deposits give significantly younger relative and absolute model ages (AMAs) than impact ejecta blankets, although these two units formed during one impact event and would ideally yield coeval ages at the resolution of the CSFD technique. We investigated the effects of contrasting target properties on CSFDs and their resultant relative and absolute model ages for coeval lunar impact melt and ejecta units. We counted craters with diameters through the transition from strength- to gravity-scaling on two large impact melt deposits at Tycho and King craters, and we used pi-group scaling calculations to model the effects of differing target properties on final crater diameters for five different theoretical lunar targets. The new CSFD for the large King Crater melt pond bridges the gap between the discrepant CSFDs within a single geologic unit. Thus, the observed trends in the impact melt CSFDs support the occurrence of target property effects, rather than self-secondary and/or field secondary contamination. The CSFDs generated from the pi-group scaling calculations show that targets with higher density and effective strength yield smaller crater diameters than weaker targets, such that the former give lower relative ages than the latter. Consequently, coeval impact melt and ejecta units will have discrepant apparent ages. Target property differences also affect the resulting slope of the CSFD, with stronger targets exhibiting shallower slopes, so that the final crater diameters may differ more greatly at smaller diameters. Besides their application to age dating, the CSFDs may provide additional information about the…

  15. Merging plot and Landsat data to estimate the frequency distribution of Central Amazon mortality event size for landscape-scale ecosystem simulations

    Science.gov (United States)

    Di Vittorio, A. V.; Chambers, J. Q.

    2012-12-01

    Mitigation strategies and estimates of land-use change emissions assume initial states of landscapes that respond to prescribed scenarios. The Amazon basin is a target for both mitigation (e.g. maintenance of old-growth forest) and land-use change (e.g. agriculture), but the current states of its old-growth and secondary forest landscapes are uncertain with respect to carbon cycling. Contributing to this uncertainty in old-growth forest ecosystems is a mosaic of patches in different successional stages, with the areal fraction of any particular stage relatively constant over large temporal and spatial scales. Old-growth mosaics are generally created through the ongoing effects of tree mortality, with the Central Amazon mosaic generated primarily by wind mortality. Unfortunately, estimation of generalizable frequency distributions of mortality event size has been hindered by the limited spatial and temporal scales of observations. To overcome these limitations, we merge field and remotely sensed tree mortality data and fit the top two candidate distributions (power law and exponential) to these data to determine the most appropriate statistical mortality model for use in landscape-scale ecosystem simulations. Our results show that the power law model represents the distribution of mortality event size better than the exponential model. We also use an individual-tree-based forest stand model to simulate a 100 ha landscape using the best fit of each candidate distribution, to demonstrate the effects of different mortality regimes on aboveground biomass in the Central Amazon forest mosaic. We conclude that the correct mortality distribution model is critical for robust simulation of patch succession dynamics and aboveground biomass.
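
    The two candidate models can be fitted by maximum likelihood as sketched below; both have one free parameter, so comparing maximized log-likelihoods directly is a reasonable first cut (the abstract does not state the paper's exact model-selection criterion, and the toy event sizes are synthetic).

```python
import numpy as np

def fit_power_law(x, xmin):
    """MLE exponent for a continuous power law p(x) ~ x^-alpha, x >= xmin."""
    x = x[x >= xmin]
    alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
    ll = x.size * np.log((alpha - 1) / xmin) - alpha * np.sum(np.log(x / xmin))
    return alpha, ll

def fit_exponential(x, xmin):
    """MLE rate for a shifted exponential p(x) = lam * exp(-lam * (x - xmin))."""
    x = x[x >= xmin]
    lam = 1.0 / np.mean(x - xmin)
    ll = x.size * np.log(lam) - lam * np.sum(x - xmin)
    return lam, ll

# toy mortality-event sizes (trees killed per event), Pareto-distributed
sizes = np.random.default_rng(0).pareto(1.5, 1000) + 1.0
alpha, ll_pl = fit_power_law(sizes, 1.0)
lam, ll_exp = fit_exponential(sizes, 1.0)
print(alpha, lam, ll_pl > ll_exp)   # the higher log-likelihood model wins
```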

  16. Preclinical PET imaging of EGFR levels: pairing a targeting with a non-targeting Sel-tagged Affibody-based tracer to estimate the specific uptake.

    Science.gov (United States)

    Cheng, Qing; Wållberg, Helena; Grafström, Jonas; Lu, Li; Thorell, Jan-Olov; Hägg Olofsson, Maria; Linder, Stig; Johansson, Katarina; Tegnebratt, Tetyana; Arnér, Elias S J; Stone-Elander, Sharon; Ahlzén, Hanna-Stina Martinsson; Ståhl, Stefan

    2016-12-01

    Though overexpression of the epidermal growth factor receptor (EGFR) in several forms of cancer is considered an important prognostic biomarker related to poor prognosis, clear correlations between biomarker assays and patient management have been difficult to establish. Here, we use a positron emission tomography (PET) method in which a targeting tracer is directly followed by a non-targeting tracer to examine some aspects of determining specific EGFR binding in tumors. The EGFR-binding Affibody molecule ZEGFR:2377 and its size-matched non-binding control ZTaq:3638 were recombinantly fused with a C-terminal selenocysteine-containing Sel-tag (ZEGFR:2377-ST and ZTaq:3638-ST). The proteins were site-specifically labeled with DyLight488 for flow cytometry and ex vivo tissue analyses, or with (11)C for in vivo PET studies. Kinetic scans with the (11)C-labeled proteins were performed in healthy mice and in mice bearing xenografts from human FaDu (squamous cell carcinoma) and A431 (epidermoid carcinoma) cell lines. Changes in tracer uptake in A431 xenografts over time were also monitored, followed by ex vivo proximity ligation assays (PLA) of EGFR expression. Flow cytometry and ex vivo tissue analyses confirmed EGFR targeting by ZEGFR:2377-ST-DyLight488. [Methyl-(11)C]-labeled ZEGFR:2377-ST-CH3 and ZTaq:3638-ST-CH3 showed similar distributions in vivo, except for notably higher concentrations of the former in, particularly, the liver and the blood. [Methyl-(11)C]-ZEGFR:2377-ST-CH3 successfully visualized FaDu and A431 xenografts, which have moderate and high EGFR expression levels, respectively. However, in FaDu tumors the non-specific uptake was large, sometimes as large as the specific uptake, illustrating the importance of proper controls. In the A431 group observed longitudinally, non-specific uptake remained at the same level over the observation period. Specific uptake increased with tumor size, but changes varied widely over time in individual tumors. Total (membranous and cytoplasmic) EGFR…

  17. Estimation of supraglacial debris thickness using a novel target decomposition on L-band polarimetric SAR images in the Tianshan Mountains

    Science.gov (United States)

    Huang, L.; Li, Zh.; Tian, B. S.; Han, H. D.; Liu, Y. Q.; Zhou, J. M.; Chen, Q.

    2017-04-01

    Debris is widely distributed in the ablation zones of mountain glaciers in the Tianshan Mountains. Supraglacial debris can accelerate or hamper glacier ablation, depending on its thickness, and thus plays an important role in the mass balance of debris-covered glaciers. This paper proposes a novel method to estimate supraglacial debris thickness using L-band polarimetric synthetic aperture radar. A new model-based target decomposition is used to extract the surface scattering, double-bounce, and volume scattering components. The surface scattering model uses the extended Bragg model, which accounts for the depolarization effect of rough surfaces. The volume scattering model uses elliptical scatterers, which approximate the shape of the solids in the debris. The volume scattering power is related to the dielectric properties of the debris, the radar wavelength, the incidence angle, and the elliptical scatterer shape. Once the target decomposition is performed, the debris thickness can be inverted from the volume scattering power and the other known parameters. Comparison with a large number of field measurements shows the inversion to be reasonable, with a validated accuracy of ±0.12 m. Based on the inversion map of the study area, the debris thicknesses of the Koxkar glacier and its neighboring glaciers are presented and analyzed.

  18. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    Science.gov (United States)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient, because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution…
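
    A common concrete version of the conventional approach described above is the PERT approximation, in which the standard deviation is taken as one sixth of the range; the shape parameters then follow in closed form. The sketch below uses this classic form (not the NASA Glenn in-house method), and the temperature numbers are illustrative only.

```python
def pert_beta_parameters(a, m, b, lam=4.0):
    """Beta(alpha, beta) on [a, b] from min a, mode m, max b (PERT form).

    Classic PERT assumptions: mean = (a + lam*m + b) / (lam + 2) and,
    for lam = 4, standard deviation ~ (b - a) / 6."""
    mu = (a + lam * m + b) / (lam + 2.0)
    alpha = 1.0 + lam * (m - a) / (b - a)
    beta = 1.0 + lam * (b - m) / (b - a)
    return mu, alpha, beta

# e.g. a turbine inlet temperature believed to lie in [1400, 1600] K with a
# most likely value of 1480 K (illustrative numbers only)
print(pert_beta_parameters(1400.0, 1480.0, 1600.0))   # (1486.67, 2.6, 3.4)
```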

  19. Estimation of volumes of distribution and intratumoral ethanol concentrations by computed tomography scanning after percutaneous ethanol injection.

    Science.gov (United States)

    Alexander, D G; Unger, E C; Seeger, S J; Karmann, S; Krupinski, E A

    1996-01-01

    We developed a technique for estimating the volumes of distribution and intratumoral concentrations of ethanol using computed tomography (CT) scanning in patients undergoing percutaneous ethanol injection (PEI) treatment of malignant hepatic tumors. A phantom containing anhydrous ethanol diluted with deionized distilled water to concentrations of 0-100% ethanol was scanned by CT. Thirty-seven treatment sessions were performed on eight patients with malignant hepatic tumors undergoing PEI under CT guidance. The patients were scanned pre- and post-PEI, and a region of interest containing the treated hepatic tissue was selected for pixels between -250 and 15 Hounsfield units (H). The mean density of the pixels in this range was computed and the concentration of ethanol estimated. The volumes of distribution of ethanol and the intratumoral concentrations were then correlated with the volume of ethanol injected during PEI. The ratios of the volume of distribution of ethanol to the volume injected (adjusted in-range [IR]/volume injected) were compared for responders (n = 4) and nonresponders (n = 4). CT numbers in the phantom scaled linearly with ethanol concentration; 100% ethanol measured -234 H. On CT scans after PEI, the volume of distribution of ethanol correlated positively with the volume injected. Calculated intratumoral ethanol concentrations ranged from 4% to 31%. The adjusted IR/volume injected was significantly higher for responders than for nonresponders (p < 0.05). CT can thus be used to estimate ethanol distribution in tissue; a larger relative intratumoral distribution of alcohol appears to correlate with a favorable response to PEI. However, CT measurement of intratumoral ethanol concentrations may require more complex computational techniques.
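
    Given the linear HU-versus-concentration relation measured in the phantom, the concentration estimate amounts to linear interpolation between water (0 H at 0% ethanol) and the measured value for pure ethanol; a minimal sketch, with the input CT number chosen for illustration:

```python
def ethanol_fraction_from_hu(hu, hu_water=0.0, hu_ethanol=-234.0):
    """Estimate ethanol concentration (%) from a mean CT number, assuming
    the linear HU-vs-concentration relation measured in the phantom."""
    return 100.0 * (hu - hu_water) / (hu_ethanol - hu_water)

# a mean CT number of -50 H in the treated region maps to ~21% ethanol
print(round(ethanol_fraction_from_hu(-50.0), 1))
```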