WorldWideScience

Sample records for estimation target distributions

  1. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario with a distributed multi-input multi-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with the knowledge of target position. Then, in the scenario without the knowledge of target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also quantified. Simulation results show that the proposed estimation methods can approach the CRLBs, and that the velocity estimation performance can be further improved by increasing either the number of radar antennas or the accuracy of the target position information. Furthermore, compared with the existing methods, a better estimation performance can be achieved.
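
    The known-position case reduces to a linear problem: each transmitter-receiver pair contributes a bistatic Doppler that is linear in the target velocity vector, so under i.i.d. Gaussian Doppler noise the ML estimate is ordinary least squares. A minimal sketch of that step; the geometry, wavelength and noise levels below are invented for illustration, and the Doppler sign convention may differ from the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    wavelength = 0.03   # assumed radar wavelength, metres

    # Assumed geometry: 3 transmitters, 4 receivers, known target position.
    tx = rng.uniform(-500, 500, size=(3, 3))
    rx = rng.uniform(-500, 500, size=(4, 3))
    pos = np.array([100.0, 200.0, 50.0])    # known target position
    vel = np.array([30.0, -10.0, 5.0])      # true target velocity

    def unit(v):
        return v / np.linalg.norm(v)

    # Bistatic Doppler of pair (t, r) is linear in the velocity:
    # f = (1/lambda) * (u_t + u_r) . v, with u_t, u_r unit vectors from
    # transmitter/receiver towards the target (sign convention may vary).
    A = np.array([(unit(pos - t) + unit(pos - r)) / wavelength
                  for t in tx for r in rx])
    f_meas = A @ vel + rng.normal(0, 2.0, size=len(A))   # noisy Doppler

    # Under i.i.d. Gaussian noise the ML estimate is least squares.
    v_hat, *_ = np.linalg.lstsq(A, f_meas, rcond=None)
    print("true velocity:", vel, " ML (LS) estimate:", np.round(v_hat, 2))
    ```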

  2. Sensitivity of postplanning target and OAR coverage estimates to dosimetric margin distribution sampling parameters.

    Science.gov (United States)

    Xu, Huijun; Gordon, J James; Siebers, Jeffrey V

    2011-02-01

    A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals ω (e.g., ω = 1°, 2°, 5°, 10°, 20°). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by the effective angular increment ω_eff. In each direction, the DM was calculated by moving the structure in radial steps of size δ (= 0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy ΔQ was quantified as a function of the sampling parameters ω or ω_eff and δ.
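
    The radial-stepping procedure is simple to reproduce. A minimal sketch with an invented, shift-invariant toy dose model (a spherical 79.2 Gy plateau) standing in for a real treatment plan; the step size and isotropic direction sampling follow the parameters described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def dose(p):
        # Toy shift-invariant dose model (assumption): a spherical 79.2 Gy
        # plateau of radius 40 mm with linear falloff outside.
        r = np.linalg.norm(p)
        return 79.2 if r <= 40.0 else max(0.0, 79.2 - 2.0 * (r - 40.0))

    def dosimetric_margin(start, direction, d_rx=79.2, delta=0.2, r_max=60.0):
        """Step radially by delta until the prescription isodose is
        crossed; the distance travelled is the DM in that direction."""
        r = 0.0
        while r < r_max and dose(start + r * direction) >= d_rx:
            r += delta
        return r

    # Isotropic sampling: directions uniform on the unit sphere give a
    # variable angular increment (the effective increment omega_eff).
    n_dirs = 500
    u = rng.normal(size=(n_dirs, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)

    start = np.zeros(3)   # a structure point at the plateau centre
    dmd = np.array([dosimetric_margin(start, d) for d in u])
    print(f"DMD over {n_dirs} directions: mean {dmd.mean():.1f} mm, "
          f"min {dmd.min():.1f} mm")
    ```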

  3. Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting

    Directory of Open Access Journals (Sweden)

    Stanislav Anatolyev

    2015-08-01

    Full Text Available Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while the distributional asymmetry has little or moderate impact, these phenomena tending to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
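
    For reference, a minimal sketch of the two estimation variants on a simulated GARCH(1,1) with heavy-tailed (Student-t) innovations; the simulation settings are invented and this is plain QML, not the paper's full experimental design:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Simulate a GARCH(1,1): h_t = w + a*r_{t-1}^2 + b*h_{t-1} with
    # unit-variance Student-t innovations (heavier tails than normal).
    w0, a0, b0, T = 0.1, 0.08, 0.90, 4000
    r = np.empty(T)
    h = w0 / (1 - a0 - b0)
    for t in range(T):
        r[t] = np.sqrt(h) * rng.standard_t(7) / np.sqrt(7 / 5)
        h = w0 + a0 * r[t] ** 2 + b0 * h

    def neg_qll(params, r, targeting=False):
        # Gaussian quasi-log-likelihood; with variance targeting the
        # intercept w is implied by the sample variance of the returns.
        if targeting:
            a, b = params
            w = np.var(r) * (1 - a - b)
        else:
            w, a, b = params
        if w <= 0 or a < 0 or b < 0 or a + b >= 1:
            return np.inf
        h, ll = np.var(r), 0.0
        for x in r:
            ll += np.log(h) + x ** 2 / h
            h = w + a * x ** 2 + b * h
        return 0.5 * ll

    full = minimize(neg_qll, x0=[0.1, 0.05, 0.9], args=(r, False),
                    method="Nelder-Mead")
    vt = minimize(neg_qll, x0=[0.05, 0.9], args=(r, True),
                  method="Nelder-Mead")
    print("full QML (w, a, b):", full.x)
    print("variance targeting (a, b):", vt.x,
          "implied w:", np.var(r) * (1 - vt.x.sum()))
    ```

    Under variance targeting the intercept is tied to the sample variance, so only (a, b) are optimized; repeating the comparison over many replications reproduces the precision effects the paper reports.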

  4. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    Science.gov (United States)

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
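
    A minimal sketch of a vanilla TMLE for the treatment-specific mean E[Y(1)] with a binary outcome may help fix ideas; plain logistic regressions stand in for the super-learning ensembles, and the collaborative targeting of the propensity score described above is omitted:

    ```python
    import numpy as np
    from scipy.special import logit, expit
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 5000
    W = rng.normal(size=(n, 2))                                 # covariates
    A = rng.binomial(1, expit(0.4 * W[:, 0] - 0.5 * W[:, 1]))   # treatment
    Y = rng.binomial(1, expit(0.8 * A + W[:, 0]))               # outcome

    # Step 1: initial nuisance estimates (plain logistic fits stand in
    # for the super-learning ensembles used in the paper).
    g_hat = np.clip(LogisticRegression().fit(W, A).predict_proba(W)[:, 1],
                    0.01, 0.99)
    Qfit = LogisticRegression().fit(np.column_stack([A, W]), Y)
    clip = lambda p: np.clip(p, 1e-6, 1 - 1e-6)
    Q_A = clip(Qfit.predict_proba(np.column_stack([A, W]))[:, 1])
    Q_1 = clip(Qfit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1])

    # Step 2: targeting -- fluctuate Q along the clever covariate
    # H(A, W) = A / g_hat(W) via a one-parameter logistic model with
    # offset (epsilon found here by grid search for simplicity).
    H = A / g_hat
    def loglik(eps):
        p = clip(expit(logit(Q_A) + eps * H))
        return np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))
    grid = np.linspace(-0.5, 0.5, 2001)
    eps = grid[np.argmax([loglik(e) for e in grid])]

    # Step 3: plug in the updated outcome regression, with an
    # influence-curve-based standard error for inference.
    Q_1_star = expit(logit(Q_1) + eps / g_hat)
    psi = Q_1_star.mean()
    ic = H * (Y - expit(logit(Q_A) + eps * H)) + Q_1_star - psi
    print(f"TMLE of E[Y(1)]: {psi:.3f} +/- {1.96 * ic.std() / np.sqrt(n):.3f}")
    ```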

  5. Targeting estimation of CCC-GARCH models with infinite fourth moments

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results for the estimator, stating that its limiting distribution is multivariate stable. The rate of consistency of the estimator is slower than √T (as obtained by the quasi-maximum likelihood estimator) and depends on the tails of the data generating process.

  6. Development of distributed target

    CERN Document Server

    Yu Hai Jun; Li Qin; Zhou Fu Xin; Shi Jin Shui; Ma Bing; Chen Nan; Jing Xiao Bing

    2002-01-01

    Linear induction accelerators are expected to generate small-diameter, high-intensity X-ray spots. The interaction of the electron beam with plasmas generated at the X-ray converter makes the spot on the target grow with time and degrades the X-ray dose and the imaging resolving power. A distributed target was developed which has about 24 thin tantalum films, each 0.05 mm thick, distributed over 1 cm. With this structure, spreading the target material over a large volume decreases the energy deposition per unit volume and hence lowers the temperature of the target surface, which in turn delays the initial plasma formation and reduces its expansion velocity. A comparison and analysis of the two kinds of target structures, based on numerical calculation and experiments, shows that the X-ray dose and normalized angular distribution of the two are basically the same, while the surface of the distributed target is not destroyed like that of the previous block target.

  7. Target distribution in cooperative combat based on Bayesian optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    Shi Zhifu; Zhang An; Wang Anli

    2006-01-01

    Target distribution in cooperative combat is a difficult and important problem. We build an optimization model according to the rules of fire distribution and study it with the Bayesian optimization algorithm (BOA). The BOA estimates the joint probability distribution of the variables with a Bayesian network, and new candidate solutions are generated from this joint distribution. A simulation example verified that the method can solve this complex problem quickly and yields the best solution.

  8. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    Science.gov (United States)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  9. Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2016-12-01

    Full Text Available The main problem faced by naval radars is the elimination of the clutter input, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter with a low computational cost, outperforming the classic method based on maximum likelihood estimates (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses the knowledge of clutter statistics for improving the stability of the detection, among other applications.
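
    A minimal sketch of the idea, assuming a classical Pareto with known unit scale: a small network is trained on simulated clutter patches to map summary statistics to the shape parameter, then compared with the closed-form MLE. The features, network size and parameter ranges here are invented, not those of the paper:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    n = 64  # samples per clutter patch

    def features(x):
        lx = np.log(x)
        return [lx.mean(), lx.std(), lx.max(), np.median(lx)]

    # Training set: classical Pareto(shape=a, scale=1) patches, label = a.
    # (numpy's rng.pareto draws Lomax samples; adding 1 gives scale 1.)
    a_train = rng.uniform(1.0, 10.0, 20000)
    X = np.array([features(rng.pareto(a, n) + 1.0) for a in a_train])
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=300,
                       random_state=0).fit(X, a_train)

    # Test patch: compare with the classical MLE a_hat = n / sum(log x),
    # valid when the scale parameter is known to be 1.
    a_true = 4.0
    x = rng.pareto(a_true, n) + 1.0
    a_mle = n / np.sum(np.log(x))
    a_net = net.predict(np.array([features(x)]))[0]
    print(f"true {a_true}, MLE {a_mle:.2f}, network {a_net:.2f}")
    ```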

  10. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar

    Science.gov (United States)

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-01-01

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed to the extended Kalman filter (EKF) to finish the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058

  11. Bayesian Nonparametric Estimation of Targeted Agent Effects on Biomarker Change to Predict Clinical Outcome

    Science.gov (United States)

    Graziani, Rebecca; Guindani, Michele; Thall, Peter F.

    2015-01-01

    Summary The effect of a targeted agent on a cancer patient's clinical outcome putatively is mediated through the agent's effect on one or more early biological events. This is motivated by pre-clinical experiments with cells or animals that identify such events, represented by binary or quantitative biomarkers. When evaluating targeted agents in humans, central questions are whether the distribution of a targeted biomarker changes following treatment, the nature and magnitude of this change, and whether it is associated with clinical outcome. Major difficulties in estimating these effects are that a biomarker's distribution may be complex, vary substantially between patients, and have complicated relationships with clinical outcomes. We present a probabilistically coherent framework for modeling and estimation in this setting, including a hierarchical Bayesian nonparametric mixture model for biomarkers that we use to define a functional profile of pre-versus-post treatment biomarker distribution change. The functional is similar to the receiver operating characteristic used in diagnostic testing. The hierarchical model yields clusters of individual patient biomarker profile functionals, and we use the profile as a covariate in a regression model for clinical outcome. The methodology is illustrated by analysis of a dataset from a clinical trial in prostate cancer using imatinib to target platelet-derived growth factor, with the clinical aim to improve progression-free survival time. PMID:25319212

  12. Distributed estimation and control for mobile sensor networks with coupling delays.

    Science.gov (United States)

    Su, Housheng; Chen, Xuan; Chen, Michael Z Q; Wang, Lei

    2016-09-01

    This paper deals with the issue of distributed estimation and control for mobile sensor networks with coupling delays. Based on the Kalman-Consensus filter and the flocking algorithm, all mobile sensors move to a target to increase the quality of gathered data, and achieve consensus on the estimation values of the target in the presence of time-delay and noises. By applying an effective cascading Lyapunov method and matrix theory, stability analysis is carried out. Furthermore, a necessary condition for the convergence is presented via the boundary conditions of feedback coefficients. Some numerical examples are provided to validate the effectiveness of theoretical results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
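
    A minimal sketch of a Kalman-Consensus-style update for a scalar target, in the spirit of the filter described above but omitting the flocking-based mobility and the coupling delays; the network, gains and noise levels are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Scalar target random walk observed by 4 sensors in a ring network.
    F, Q, R = 1.0, 0.04, 1.0
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    n_sensors, steps, gain_c = 4, 60, 0.2

    x = 0.0
    xhat = np.zeros(n_sensors)   # local estimates
    P = np.ones(n_sensors)       # local error variances

    for _ in range(steps):
        x = F * x + rng.normal(0, np.sqrt(Q))           # target motion
        z = x + rng.normal(0, np.sqrt(R), n_sensors)    # local measurements
        xnew = np.empty(n_sensors)
        for i in range(n_sensors):
            # Local Kalman predict/update...
            Pp = F * P[i] * F + Q
            K = Pp / (Pp + R)
            upd = F * xhat[i] + K * (z[i] - F * xhat[i])
            # ...plus a consensus term pulling towards neighbours.
            cons = gain_c * sum(xhat[j] - xhat[i] for j in neighbors[i])
            xnew[i] = upd + cons
            P[i] = (1 - K) * Pp
        xhat = xnew

    print("true state:", round(x, 3), " estimates:", np.round(xhat, 3))
    ```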

  13. P3T+: A Performance Estimator for Distributed and Parallel Programs

    Directory of Open Access Journals (Sweden)

    T. Fahringer

    2000-01-01

    Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of a communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and usefulness of P3T+.

  14. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods, and determined the best method for estimation using different values of the parameters and different sample sizes.
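
    For the best-known of these methods, a small Monte Carlo comparison is easy to set up. A sketch comparing ML, moment, and one modified (bias-corrected) estimator by MSE; the specific modified variants used in the paper may differ:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    loc, scale, n, reps = 5.0, 2.0, 30, 20000

    est = {"MLE": [], "ME": [], "MMLE": []}
    for _ in range(reps):
        x = loc + rng.exponential(scale, n)
        x1, xbar = x.min(), x.mean()
        est["MLE"].append((x1, xbar - x1))         # location = sample min
        s = x.std(ddof=1)
        est["ME"].append((xbar - s, s))            # moments: mean, sd
        # Bias-corrected location (since E[x_min] = loc + scale/n):
        est["MMLE"].append((x1 - (xbar - x1) / (n - 1), xbar - x1))

    for name, vals in est.items():
        v = np.array(vals)
        mse = ((v - [loc, scale]) ** 2).mean(axis=0)
        print(f"{name}: MSE(location)={mse[0]:.4f}  MSE(scale)={mse[1]:.4f}")
    ```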

  15. Kalman filter data assimilation: targeting observations and parameter estimation.

    Science.gov (United States)

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
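
    A toy illustration of the targeting idea with a scalar-observation ensemble Kalman filter: at each step the observation is placed either at the state component with the largest ensemble variance or at a random component, and long-run RMSE is compared. The dynamics, sizes and noise levels are invented; this is a plain EnKF, not the LETKF of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    d, members, steps, r_obs = 10, 40, 200, 0.5

    def model(X):
        # Assumed linear dynamics: damped cyclic shift of the state.
        return 0.98 * np.roll(X, 1, axis=0)

    def run(targeted):
        x = rng.normal(size=d)              # true state
        E = rng.normal(size=(d, members))   # ensemble
        err = []
        for _ in range(steps):
            x = model(x) + rng.normal(0, 0.1, d)
            E = model(E) + rng.normal(0, 0.1, (d, members))
            # Observation placement: largest ensemble variance vs random.
            k = int(np.argmax(E.var(axis=1))) if targeted \
                else int(rng.integers(d))
            y = x[k] + rng.normal(0, np.sqrt(r_obs))
            # Scalar EnKF update with perturbed observations.
            Pk = np.cov(E)[:, k]
            K = Pk / (Pk[k] + r_obs)
            E += np.outer(K, y + rng.normal(0, np.sqrt(r_obs), members)
                          - E[k])
            err.append(np.sqrt(((E.mean(axis=1) - x) ** 2).mean()))
        return np.mean(err[50:])

    print("RMSE with targeted observations:", round(run(True), 3))
    print("RMSE with random observations:  ", round(run(False), 3))
    ```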

  16. Kalman filter data assimilation: Targeting observations and parameter estimation

    International Nuclear Information System (INIS)

    Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex

    2014-01-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation

  17. Distributed state estimation for multi-agent based active distribution networks

    NARCIS (Netherlands)

    Nguyen, H.P.; Kling, W.L.

    2010-01-01

    Along with the large-scale implementation of distributed generators, the current distribution networks have changed gradually from passive to active operation. State estimation plays a vital role to facilitate this transition. In this paper, a suitable state estimation method for the active network

  18. Estimation of the target stem-cell population size in chronic myeloid leukemogenesis

    International Nuclear Information System (INIS)

    Radivoyevitch, T.; Ramsey, M.J.; Tucker, J.D.

    1999-01-01

    Estimation of the number of hematopoietic stem cells capable of causing chronic myeloid leukemia (CML) is relevant to the development of biologically based risk models of radiation-induced CML. Through a comparison of the age structure of CML incidence data from the Surveillance, Epidemiology, and End Results (SEER) Program and the age structure of chromosomal translocations found in healthy subjects, the number of CML target stem cells is estimated for individuals above 20 years of age. The estimation involves three steps. First, CML incidence among adults is fit to an exponentially increasing function of age. Next, assuming a relatively short waiting time distribution between BCR-ABL induction and the appearance of CML, an exponential age function with rate constants fixed to the values found for CML is fitted to the translocation data. Finally, assuming that translocations are equally likely to occur between any two points in the genome, the parameter estimates found in the first two steps are used to estimate the number of target stem cells for CML. The population-averaged estimates of this number are found to be 1.86 x 10^8 for men and 1.21 x 10^8 for women; the 95% confidence intervals of these estimates are (1.34 x 10^8, 2.50 x 10^8) and (0.84 x 10^8, 1.83 x 10^8), respectively. (orig.)

  19. Resilient Distributed Estimation Through Adversary Detection

    Science.gov (United States)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2018-05-01

    This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator (FRDE) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The FRDE algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under FRDE, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of FRDE.

  20. Distributed collaborative processing in wireless sensor networks with application to target localization and beamforming

    OpenAIRE

    Béjar Haro, Benjamín

    2013-01-01

    The proliferation of wireless sensor networks and the variety of envisioned applications associated with them has motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of the researchers is that of target localization, where the nodes of the network try to estimate the position of an unknown target that lies within its coverage area. Particularly challenging is the problem of es...

  1. Measurement for cobalt target activity and its axial distribution

    International Nuclear Information System (INIS)

    Li Xingyuan; Chen Zigen.

    1985-01-01

    Cobalt target activity and its axial distribution are measured in the process of producing the radioactive isotope 60Co by irradiation in HFETR. The cobalt target activity is obtained from data measured at 3.60 m and 4.60 m, the relative axial distribution of cobalt target activity is obtained from data measured at 30 cm, and the axial distribution of cobalt target activity (or specific activity) is obtained from both sets of data. The difference between this specific activity and the measured result for the finished 60Co teletherapy sources is less than ±5%.

  2. Pilots' Attention Distributions Between Chasing a Moving Target and a Stationary Target.

    Science.gov (United States)

    Li, Wen-Chin; Yu, Chung-San; Braithwaite, Graham; Greaves, Matthew

    2016-12-01

    Attention plays a central role in cognitive processing; ineffective attention may induce accidents in flight operations. The objective of the current research was to examine military pilots' attention distributions between chasing a moving target and a stationary target. In the current research, 37 mission-ready F-16 pilots participated. Subjects' eye movements were collected by a portable head-mounted eye-tracker during tactical training in a flight simulator. The scenarios of chasing a moving target (air-to-air) and a stationary target (air-to-surface) consisted of three operational phases: searching, aiming, and lock-on to the targets. The findings demonstrated significant differences in pilots' percentage of fixation during the searching phase between air-to-air (M = 37.57, SD = 5.72) and air-to-surface (M = 33.54, SD = 4.68). Fixation duration can indicate pilots' sustained attention to the trajectory of a dynamic target during air combat maneuvers. Aiming at the stationary target resulted in larger pupil size (M = 27,105, SD = 6565), reflecting higher cognitive loading than aiming at the dynamic target (M = 23,864, SD = 8762). Pilots' visual behavior is not only closely related to attention distribution, but also significantly associated with task characteristics. Military pilots demonstrated various visual scan patterns for searching and aiming at different types of targets based on the research settings of a flight simulator. The findings will facilitate system designers' understanding of military pilots' cognitive processes during tactical operations. They will assist human-centered interface design to improve pilots' situational awareness. The application of an eye-tracking device integrated with a flight simulator is a feasible and cost-effective intervention to improve the efficiency and safety of tactical training. Li W-C, Yu C-S, Braithwaite G, Greaves M. Pilots' attention distributions between chasing a moving target and a stationary target. Aerosp Med

  3. Distribution load estimation (DLE)

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A; Lehtonen, M [VTT Energy, Espoo (Finland)]

    1998-08-01

    The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances differ from utility to utility and from time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE are new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented.
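
    The core of such a scheme can be illustrated as a one-step weighted least squares update: the class load models provide prior estimates with variances, and the SCADA feeder measurement of their sum corrects them in proportion to their uncertainty. A minimal sketch with invented numbers:

    ```python
    import numpy as np

    # Class load models give hourly estimates (kW) with variances; the
    # SCADA feeder measurement gives their sum with small error.
    prior = np.array([120.0, 300.0, 80.0])        # class loads, kW
    var_prior = np.array([30.0, 90.0, 20.0]) ** 2
    z, var_z = 560.0, 5.0 ** 2                    # measured feeder total

    # One-step WLS / linear MMSE update of the loads given sum(loads) ~ z:
    # posterior = prior + C h (h' C h + var_z)^(-1) (z - h' prior).
    h = np.ones(3)
    C = np.diag(var_prior)
    gain = C @ h / (h @ C @ h + var_z)
    posterior = prior + gain * (z - h @ prior)
    print("prior class loads:    ", prior, " sum:", prior.sum())
    print("corrected class loads:", np.round(posterior, 1),
          " sum:", round(posterior.sum(), 1))
    ```

    The corrected loads stay close to the prior models, as described above, while their sum moves almost all the way to the measured feeder total because the measurement variance is much smaller.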

  4. Low Complexity Parameter Estimation For Off-the-Grid Targets

    KAUST Repository

    Jardak, Seifallah

    2015-10-05

    In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced-complexity super-resolution algorithm is proposed. For off-the-grid targets, it uses a low-order two-dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation error of the proposed estimators achieves the Cramér-Rao lower bound. © 2015 IEEE.
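
    The two-stage structure (coarse FFT, then local refinement) is easy to illustrate on a single off-grid 2D complex sinusoid, which stands in for the location/Doppler parameters; the sizes and frequencies are invented, and the paper's MIMO cost function is replaced by a simple matched-filter power:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)
    N = M = 32
    f_true = np.array([0.2137, 0.3719])   # off-grid normalized frequencies
    n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    y = np.exp(2j * np.pi * (f_true[0] * n + f_true[1] * m))
    y += 0.1 * (rng.normal(size=y.shape) + 1j * rng.normal(size=y.shape))

    # Coarse stage: 2D FFT peak gives an on-grid initial estimate.
    Y = np.fft.fft2(y)
    k1, k2 = np.unravel_index(np.argmax(np.abs(Y)), Y.shape)
    coarse = np.array([k1 / N, k2 / M])

    # Refinement stage: maximize the matched-filter power near the peak.
    def neg_power(f):
        s = np.exp(-2j * np.pi * (f[0] * n + f[1] * m))
        return -np.abs(np.sum(y * s)) ** 2

    res = minimize(neg_power, coarse, method="Nelder-Mead",
                   options={"xatol": 1e-7})
    print("true:", f_true, " coarse:", coarse, " refined:", res.x)
    ```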

  5. Statistical distributions applications and parameter estimates

    CERN Document Server

    Thomopoulos, Nick T

    2017-01-01

    This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability.  Understanding statistical distributions is fundamental for researchers in almost all disciplines.  The informed researcher will select the statistical distribution that best fits the data in the study at hand.  Some of the distributions are well known to the general researcher and are in use in a wide variety of ways.  Other useful distributions are less understood and are not in common use.  The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study.  The distributions are for continuous, discrete, and bivariate random variables.  In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values.  In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...

  6. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...

  7. Range distributions in multiply implanted targets

    International Nuclear Information System (INIS)

    Kostic, S.; Jimenez-Rodriguez, J.J.; Karpuzov, D.S.; Armour, D.G.; Carter, G.; Salford Univ.

    1984-01-01

    Range distributions in inhomogeneous binary targets have been investigated both theoretically and experimentally. Silicon single crystal targets [(111) orientation] were implanted with 40 keV Pb+ ions to fluences in the range from 5x10^14 to 7.5x10^16 cm^-2 prior to bombardment with 80 keV Kr+ ions to a fluence of 5x10^15 cm^-2. The samples were analysed using high resolution Rutherford backscattering before and after the krypton implantation in order to determine the dependence of the krypton distribution on the amount of lead previously implanted. The theoretical analysis was undertaken using the formalism developed in [1] and the computer simulation was based on the MARLOWE code. The agreement between the experimental, theoretical and computational krypton profiles is very good and the results indicate that accurate prediction of range profiles in inhomogeneous binary targets is possible using available theoretical and computational treatments. (orig.)

  8. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

    This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions; in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for a nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.

  9. Estimation of Radar Cross Section of a Target under Track

    Directory of Open Access Journals (Sweden)

    Hong Sun-Mog

    2010-01-01

    Full Text Available In allocating radar beam for tracking a target, it is attempted to maintain the signal-to-noise ratio (SNR) of the signal returning from the illuminated target close to an optimum value for efficient track updates. An estimate of the average radar cross section (RCS) of the target is required in order to adjust the transmitted power based on the estimate such that a desired SNR can be realized. In this paper, a maximum-likelihood (ML) approach is presented for estimating the average RCS, and a numerical solution to the approach is proposed based on a generalized expectation maximization (GEM) algorithm. Estimation accuracy of the approach is compared to that of a previously reported procedure.

  10. A Bayesian nonparametric estimation of distributions and quantiles

    International Nuclear Information System (INIS)

    Poern, K.

    1988-11-01

    The report describes a Bayesian, nonparametric method for the estimation of a distribution function and its quantiles. The method, presupposing random sampling, is nonparametric, so the user has to specify a prior distribution on a space of distributions (and not on a parameter space). In the current application, where the method is used to estimate the uncertainty of a parametric calculational model, the Dirichlet prior distribution is to a large extent determined by the first batch of Monte Carlo realizations. In this case the results of the estimation technique are very similar to the conventional empirical distribution function. The resulting posterior distribution is also Dirichlet, and thus facilitates the determination of probability (confidence) intervals at any given point in the space of interest. Another advantage is that the posterior distribution of a specified quantile can also be derived and utilized to determine a probability interval for that quantile. The method was devised for use in the PROPER code package for uncertainty and sensitivity analysis. (orig.)
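
    The Dirichlet mechanics are compact: for a prior with base distribution G0 and concentration c, the posterior of F(t) at any fixed t is Beta distributed, which directly yields the probability intervals mentioned above. A minimal sketch with an invented base measure and data:

    ```python
    import numpy as np
    from scipy.stats import beta, lognorm

    rng = np.random.default_rng(9)
    x = rng.lognormal(0.0, 0.5, size=40)   # Monte Carlo model output

    # Dirichlet prior with base distribution G0 and concentration c: the
    # posterior of F(t) at any fixed t is Beta distributed.
    c = 5.0
    G0 = lognorm(s=1.0).cdf                # crude prior guess for the CDF

    def F_interval(t, level=0.90):
        a = c * G0(t) + np.sum(x <= t)     # posterior Beta parameters
        b = c * (1 - G0(t)) + np.sum(x > t)
        lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
        return beta.mean(a, b), lo, hi

    for t in [1.0, 2.0, 3.0]:
        mean, lo, hi = F_interval(t)
        print(f"F({t}) ~ {mean:.3f}, 90% interval [{lo:.3f}, {hi:.3f}]")
    ```

    With a small concentration c, the posterior mean stays close to the empirical distribution function, matching the behaviour described in the abstract.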

  11. Multiframe Superresolution of Vehicle License Plates Based on Distribution Estimation Approach

    Directory of Open Access Journals (Sweden)

    Renchao Jin

    2016-01-01

    Full Text Available Low-resolution (LR) license plate images or videos are often captured in practical applications. In this paper, a distribution estimation based superresolution (SR) algorithm is proposed to reconstruct the license plate image. Different from previous work, here the high-resolution (HR) image is estimated via the obtained posterior probability distribution by using the variational Bayesian framework. To regularize the estimated HR image, a feature-specific prior model is proposed by considering the most significant characteristic of license plate images, namely that the target has high contrast with the background. To assure the success of the SR reconstruction, models representing smoothness constraints on images are also used to regularize the estimated HR image together with the proposed feature-specific prior model. We show by way of experiments, under challenging blur with size 7 × 7 and zero-mean Gaussian white noise with variances 0.2 and 0.5, respectively, that the proposed method achieves a peak signal-to-noise ratio (PSNR) of 22.69 dB and a structural similarity (SSIM) of 0.9022 under the noise with variance 0.2, and a PSNR of 19.89 dB and an SSIM of 0.8582 even under the noise with variance 0.5, which are improvements of 1.84 dB and 0.04 in comparison with other methods.

  12. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  13. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland)]

    1996-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  14. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A [VTT Energy, Espoo (Finland)

    1997-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  15. Effect of Smart Meter Measurements Data On Distribution State Estimation

    DEFF Research Database (Denmark)

    Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte

    2018-01-01

    Smart distribution grids with renewable energy based generators and demand response resources (DRR) require accurate state estimators for real-time control. Distribution grid state estimators are normally based on accumulated smart meter measurements. However, an increase of measurements in the physical grid can enforce significant stress not only on the communication infrastructure but also on the control algorithms. This paper aims to propose a methodology to analyze the needed real-time smart meter data from low voltage distribution grids and their applicability in distribution state estimation...

  16. Estimation of expected value for lognormal and gamma distributions

    International Nuclear Information System (INIS)

    White, G.C.

    1978-01-01

    Concentrations of environmental pollutants tend to follow positively skewed frequency distributions. Two such density functions are the gamma and lognormal. Minimum variance unbiased estimators of the expected value for both densities are available. The small sample statistical properties of each of these estimators were compared for its own distribution, as well as the other distribution to check the robustness of the estimator. Results indicated that the arithmetic mean provides an unbiased estimator when the underlying density function of the sample is either lognormal or gamma, and that the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two. Further Monte Carlo simulations were conducted to study the robustness of the above estimators by simulating a lognormal or gamma distribution with the expected value of a particular observation selected from a uniform distribution before the lognormal or gamma observation is generated. Again, the arithmetic mean provides an unbiased estimate of expected value, and the coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two
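
    The unbiasedness of the arithmetic mean is straightforward to check by simulation. A small sketch for lognormal and gamma samples with matched expected value (parameters invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n, reps = 25, 50000

    # Lognormal with E[X] = exp(mu + sigma^2 / 2), and a gamma with the
    # same expected value.
    mu, sigma, shape = 0.0, 0.8, 2.0
    true_mean = np.exp(mu + sigma ** 2 / 2)
    scale = true_mean / shape

    for name, draw in [("lognormal", lambda: rng.lognormal(mu, sigma, n)),
                       ("gamma", lambda: rng.gamma(shape, scale, n))]:
        means = np.array([draw().mean() for _ in range(reps)])
        print(f"{name}: true mean {true_mean:.3f}, "
              f"average of sample means {means.mean():.3f}")
    ```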

  17. Distributed Estimation using Bayesian Consensus Filtering

    Science.gov (United States)

    2014-06-06


  18. Quantum partial search for uneven distribution of multiple target items

    Science.gov (United States)

    Zhang, Kun; Korepin, Vladimir

    2018-06-01

    The quantum partial search algorithm is an approximate search. It aims to find a target block (the block that contains the target items) and runs a little faster than full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. Efficiency of the algorithm is measured by the number of queries to the oracle, and we optimize the algorithm in order to improve efficiency. By a perturbation method, we find that the algorithm runs the fastest when target items are evenly distributed in the database.

  19. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on the Beta distribution are in common daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds, as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced, and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
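
    The contrast between the two fits is easy to reproduce on synthetic bimodal recovery rates: a single Beta fitted by the method of moments against a Gaussian kernel density estimate. A minimal sketch (mixture parameters invented):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    # Synthetic bimodal recovery rates: low- and high-recovery modes.
    r = np.concatenate([rng.beta(2, 8, 600), rng.beta(9, 2, 400)])

    # Single Beta fit by the method of moments (a unimodal family).
    m, v = r.mean(), r.var()
    common = m * (1 - m) / v - 1           # assumes v < m * (1 - m)
    a_mom, b_mom = m * common, (1 - m) * common

    # Gaussian kernel density estimate adapts to the bimodal shape.
    kde = stats.gaussian_kde(r)

    print(" x    beta-pdf  kde-pdf")
    for xx in np.linspace(0.1, 0.9, 5):
        print(f"{xx:.2f}  {stats.beta.pdf(xx, a_mom, b_mom):8.3f}"
              f" {kde(xx)[0]:8.3f}")
    # A chi-square goodness-of-fit test in the spirit of the paper would
    # compare histogram counts with expected counts under each model.
    ```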

  20. A simplified approach to estimating the distribution of occasionally-consumed dietary components, applied to alcohol intake

    Directory of Open Access Journals (Sweden)

    Julia Chernova

    2016-07-01

    Full Text Available Background: Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming, and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods: We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results: The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out as non-significant at the 5% significance level (p-value 0.062), but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions: The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
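
    The numerical-integration idea can be sketched on a toy two-part model with one shared normal random effect: the population mean intake is an integral over the random effect, computed either by Monte Carlo draws or by Gauss-Hermite quadrature. All model parameters below are invented:

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss
    from scipy.special import expit

    rng = np.random.default_rng(13)

    # Toy two-part model with a shared random effect b ~ N(0, sigma_b^2):
    # P(consume on a day | b) = expit(b0 + b), mean amount when consuming
    # m(b) = exp(m0 + b). Population mean usual intake = E_b[p(b) m(b)].
    b0, m0, sigma_b = -0.5, 2.0, 0.8
    p = lambda b: expit(b0 + b)
    m = lambda b: np.exp(m0 + b)

    # Monte Carlo: precision depends on the number of draws.
    draws = rng.normal(0, sigma_b, 100_000)
    mc = np.mean(p(draws) * m(draws))

    # Gauss-Hermite quadrature: deterministic, fast for smooth integrands.
    nodes, weights = hermegauss(40)   # probabilists' Hermite rule
    gh = np.sum(weights * p(sigma_b * nodes) * m(sigma_b * nodes)) \
        / np.sqrt(2 * np.pi)

    print(f"Monte Carlo: {mc:.5f}   Gauss-Hermite: {gh:.5f}")
    ```

    The quadrature result is deterministic, so reruns reproduce it exactly, whereas the MC figure moves with the seed and the number of draws, which is the reproducibility point made above.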

  1. Rotating Parabolic-Reflector Antenna Target in SAR Data: Model, Characteristics, and Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Bin Deng

    2013-01-01

    Full Text Available Parabolic-reflector antennas (PRAs), usually possessing rotation, are a particular type of target of potential interest to the synthetic aperture radar (SAR) community. This paper aims to investigate PRAs' scattering characteristics and then to extract PRAs' parameters from SAR returns, to support image interpretation and target recognition. We first obtain both closed-form and numeric solutions to a PRA's backscattering by geometrical optics (GO), physical optics, and graphical electromagnetic computation, respectively. Based on the GO solution, a migratory scattering center model is first presented for representing the movement of the specular point with aspect angle, and then a hybrid model, named the migratory/micromotion scattering center (MMSC) model, is proposed for characterizing a rotating PRA in the SAR geometry, which incorporates the PRA's rotation into its migratory scattering center model. Additionally, we analyze in detail a PRA's radar characteristics in terms of radar cross-section, high-resolution range profiles, time-frequency distribution, and 2D images, which also confirm the proposed models. A maximum likelihood estimator is developed for jointly solving the MMSC model for a PRA's multiple parameters by optimization. By exploiting the aforementioned characteristics, the coarse parameter estimation guarantees convergence to the global minimum. The recovered signatures can be favorably utilized for SAR image interpretation and target recognition.

  2. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
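
    A minimal sketch of such a simulation: a population stratified into three groups with hypothetical four-category density distributions, proportional allocation of n = 4,000, and 1,000 repetitions to check the estimation error. The stratum sizes and category probabilities are invented, not the NCSP figures:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    # Hypothetical strata: (population size, four-category distribution).
    strata = {
        "metropolitan": (700_000, [0.08, 0.35, 0.42, 0.15]),
        "urban":        (450_000, [0.10, 0.38, 0.40, 0.12]),
        "rural":        (190_362, [0.12, 0.40, 0.38, 0.10]),
    }
    N = sum(size for size, _ in strata.values())
    n_total, reps = 4000, 1000

    true = sum(size * np.array(p) for size, p in strata.values()) / N
    errs = []
    for _ in range(reps):
        est = np.zeros(4)
        for size, p in strata.values():
            n_h = round(n_total * size / N)    # proportional allocation
            counts = rng.multinomial(n_h, p)
            est += size / N * counts / n_h     # stratum-weighted shares
        errs.append(np.abs(est - true).max())

    print("true distribution:", np.round(true, 4))
    print("max abs error, 95th percentile over 1,000 repetitions:",
          round(np.percentile(errs, 95), 4))
    ```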

  3. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Full Text Available Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on the Beta distribution are in common daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds, as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced, and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.

  4. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on the Beta distribution are in common daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody’s. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds, as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced, and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  5. Hydroacoustic estimates of fish biomass and spatial distributions in shallow lakes

    Science.gov (United States)

    Lian, Yuxi; Huang, Geng; Godlewska, Małgorzata; Cai, Xingwei; Li, Chang; Ye, Shaowen; Liu, Jiashou; Li, Zhongjie

    2018-03-01

    We conducted acoustical surveys with a horizontal beam transducer to detect fish and with a vertical beam transducer to detect depth and macrophytes in two typical shallow lakes along the middle and lower reaches of the Changjiang (Yangtze) River in November 2013. Both lakes are subject to active fish management with annual stocking and removal of large fish. The purpose of the study was to compare hydroacoustic horizontal beam estimates with fish landings. The preliminary results show that the fish distribution patterns differed in the two lakes and were affected by water depth and macrophyte coverage. The hydroacoustically estimated fish biomass matched the commercial catch very well in Niushan Lake, but it was two times higher in Kuilei Lake. However, acoustic estimates included all fish, whereas the catch included only fish >45 cm (smaller ones were released). We were unable to determine the proper regression between acoustic target strength and fish length for the dominant fish species in the two lakes.

  6. Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes

    Directory of Open Access Journals (Sweden)

    Chun-mao Yeh

    2016-01-01

This study focuses on rotating-target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is estimated by utilizing the relationship between the positions of the scattering centers in the two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.

  7. Multivariate phase type distributions - Applications and parameter estimation

    DEFF Research Database (Denmark)

    Meisch, David

    The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution alternative distributions are at hand and well understood, many of these belonging...... and statistical inference, is the multivariate normal distribution. Unfortunately only little is known about the general class of multivariate phase type distribution. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate...... projects and depend on reliable cost estimates. The Successive Principle is a group analysis method primarily used for analyzing medium to large projects in relation to cost or duration. We believe that the mathematical modeling used in the Successive Principle can be improved. We suggested a novel...

  8. Weighted Optimization-Based Distributed Kalman Filter for Nonlinear Target Tracking in Collaborative Sensor Networks.

    Science.gov (United States)

    Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang

    2017-11-01

The identification of nonlinearity and coupling is crucial in the nonlinear target tracking problem in collaborative sensor networks. In the adaptive Kalman filtering (KF) framework, the nonlinearity and coupling can be treated as model noise covariance and estimated by minimizing the innovation or residual errors of the states. However, this requires a large time window of data to achieve a reliable covariance estimate, making it impractical for rapidly changing nonlinear systems. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. Instead of extending the time window, the algorithm enlarges the data size of each sensor with the measurements and state estimates received from its connected sensors. A cost function is defined as the weighted sum of the bias and oscillation of the state, and minimized to obtain the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting over a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to decrease the computational load of the filter and increase the scalability of the sensor network. Existence, suboptimality and stability analyses of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF over other filtering algorithms for a large class of systems.

  9. Measurements of activation reaction rate distributions on a mercury target bombarded with high-energy protons at AGS

    International Nuclear Information System (INIS)

    Takada, Hiroshi; Kasugai, Yoshimi; Nakashima, Hiroshi; Ikeda, Yujiro; Jerde, Eric; Glasgow, David

    2000-02-01

A neutronics experiment was carried out using a thick mercury target at the Alternating Gradient Synchrotron (AGS) facility of Brookhaven National Laboratory in the framework of the ASTE (AGS Spallation Target Experiment) collaboration. Reaction rate distributions around the target were measured by the activation technique at incident proton energies of 1.6, 12 and 24 GeV. Various activation detectors, such as the 115In(n,n')115mIn, 93Nb(n,2n)92mNb, and 209Bi(n,xn) reactions with threshold energies ranging from 0.3 to 70.5 MeV, were employed to obtain the reaction rate data for estimating the spallation source neutron characteristics of the mercury target. It was found from the measured 115In(n,n')115mIn reaction rate distribution that the number of leakage neutrons becomes maximum at about 11 cm from the top of the hemisphere of the mercury target for 1.6-GeV proton incidence, and that the peak position moves in the forward direction as the incident proton energy increases. A similar result was observed in the reaction rate distributions of the other activation detectors. The experimental procedures and a full set of experimental data in numerical form are summarized in this report. (author)

  10. Estimating the parameters of a generalized lambda distribution

    International Nuclear Information System (INIS)

    Fournier, B.; Rupin, N.; Najjar, D.; Iost, A.; Rupin, N.; Bigerelle, M.; Wilcox, R.; Fournier, B.

    2007-01-01

The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution time with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs) and reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice of u is studied, and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of previously proposed estimators. Naturally, all estimators are biased, and here it is found that the bias becomes negligible for sample sizes n ≥ 2 × 10³. The .025 and .975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease proportionally to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used, and the bounded GLD turns out to be the most accurate. Indeed, it is shown that, up to n = 10⁶, bounded GLD modeling cannot be rejected by the usual goodness-of-fit tests. (authors)
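
As a hedged illustration of percentile-style GLD fitting (the Ramberg-Schmeiser parameterization and the least-squares quantile matching below are assumptions of this sketch, not the authors' exact estimator):

```python
import numpy as np
from scipy import optimize

def gld_quantile(u, lam):
    """Ramberg-Schmeiser GLD quantile function Q(u)."""
    l1, l2, l3, l4 = lam
    return l1 + (u**l3 - (1.0 - u)**l4) / l2

def fit_gld(data, n_grid=99):
    """Fit (l1..l4) by least-squares matching of empirical quantiles."""
    u = np.arange(1, n_grid + 1) / (n_grid + 1.0)
    emp_q = np.quantile(data, u)
    loss = lambda lam: np.sum((gld_quantile(u, lam) - emp_q) ** 2)
    x0 = np.array([np.median(data), 1.0, 0.1, 0.1])  # crude starting point
    res = optimize.minimize(loss, x0, method="Nelder-Mead",
                            options={"maxiter": 20000, "fatol": 1e-10})
    return res.x

rng = np.random.default_rng(1)
lam_hat = fit_gld(rng.normal(size=5000))
print("fitted lambda:", lam_hat.round(4))
```

For a normal sample the fitted λ3 and λ4 should come out small and roughly equal, consistent with the near-symmetric bounded GLD fits discussed in the abstract.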

  11. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  12. A Study of Adaptive Detection of Range-Distributed Targets

    National Research Council Canada - National Science Library

    Gerlach, Karl R

    2000-01-01

    ... to be characterized as complex zero-mean correlated Gaussian random variables. The target's or targets' complex amplitudes are assumed to be distributed across the entire input data block (sensor x range...

  13. Simple and Strongly Consistent Estimator of Stable Distributions

    Directory of Open Access Journals (Sweden)

    Cira E. Guevara Otiniano

    2016-06-01

Stable distributions are extensively used to analyze the returns of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational time is minimal, so it can be used to initialize computationally intensive procedures such as maximum likelihood. We tested the efficacy of the estimator on random samples of size n by the Monte Carlo method. We also include applications to three data sets.
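
One simple, strongly consistent scale estimator (an assumed illustration, not necessarily the paper's estimator) reads the scale off the empirical characteristic function: for a symmetric α-stable law, |φ(t)| = exp(-(σ|t|)^α), so σ follows from |φ̂(t₀)| at a single point t₀.

```python
import numpy as np
from scipy import stats

def stable_scale_ecf(x, alpha, t0=1.0):
    """Scale estimate from the empirical characteristic function at t0."""
    phi = np.mean(np.exp(1j * t0 * x))                 # empirical CF
    return (-np.log(np.abs(phi))) ** (1.0 / alpha) / t0

rng = np.random.default_rng(2)
alpha, sigma = 1.5, 2.0
x = stats.levy_stable.rvs(alpha, 0.0, loc=0.0, scale=sigma,
                          size=20000, random_state=rng)
print("estimated scale:", round(stable_scale_ecf(x, alpha), 3))  # ~2.0
```

The computation is a single vectorized pass over the data, which matches the abstract's point that such an estimate is cheap enough to initialize maximum likelihood.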

  14. Distributed Dynamic State Estimation with Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Du, Pengwei; Huang, Zhenyu; Sun, Yannan; Diao, Ruisheng; Kalsi, Karanjit; Anderson, Kevin K.; Li, Yulan; Lee, Barry

    2011-08-04

Increasing complexity associated with large-scale renewable resources and novel smart-grid technologies necessitates real-time monitoring and control. Our previous work applied the extended Kalman filter (EKF) with phasor measurement unit (PMU) data for dynamic state estimation. However, high computational complexity creates significant challenges for real-time applications. In this paper, the problem of distributed dynamic state estimation is investigated. A domain decomposition method is proposed to utilize decentralized computing resources. The performance of distributed dynamic state estimation is tested on a 16-machine, 68-bus test system.

  15. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...

  16. Collaborative 3D Target Tracking in Distributed Smart Camera Networks for Wide-Area Surveillance

    Directory of Open Access Journals (Sweden)

    Xenofon Koutsoukos

    2013-05-01

With the evolution and fusion of wireless sensor network and embedded camera technologies, distributed smart camera networks have emerged as a new class of systems for wide-area surveillance applications. Wireless networks, however, introduce a number of constraints to the system that need to be considered, notably the communication bandwidth constraints. Existing approaches for target tracking using a camera network typically utilize target handover mechanisms between cameras, or combine results from 2D trackers in each camera into a 3D target estimate. Such approaches suffer from scale selection, target rotation, and occlusion, drawbacks typically associated with 2D tracking. In this paper, we present an approach for tracking multiple targets directly in 3D space using a network of smart cameras. The approach employs multi-view histograms to characterize targets in 3D space using color and texture as the visual features. The visual features from each camera, along with the target models, are used in a probabilistic tracker to estimate the target state. We introduce four variations of our base tracker that incur different computational and communication costs on each node and result in different tracking accuracy. We demonstrate the effectiveness of our proposed trackers by comparing their performance to a 3D tracker that fuses the results of independent 2D trackers. We also present a performance analysis of the base tracker along Quality-of-Service (QoS) and Quality-of-Information (QoI) metrics, and study QoS vs. QoI trade-offs between the proposed tracker variations. Finally, we demonstrate our tracker in a real-life scenario using a camera network deployed in a building.

  17. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets, and to improve the search efficiency of UAVs, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution of the target transition probability density function is introduced to calculate the prediction probability of moving-target existence, so that the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem is transformed into a central optimization problem. To improve computational efficiency, a distributed model predictive control method is presented, from which the control command of each UAV is obtained. Simulation results verify that the proposed method reduces blind searching and effectively improves the overall efficiency of the team.

  18. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    Science.gov (United States)

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
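
For context, the equal-probability discrete gamma that the article evaluates can be computed in a few lines. This sketch uses the standard mean-rate-per-category construction (an assumption here) and reproduces the reported tendency of small-category models to truncate the largest rates.

```python
import numpy as np
from scipy import stats

def discrete_gamma_rates(alpha, k):
    """Equal-probability discrete gamma rate categories, mean rate = 1."""
    # Category boundaries: quantiles of Gamma(shape=alpha, rate=alpha).
    edges = stats.gamma.ppf(np.linspace(0.0, 1.0, k + 1),
                            a=alpha, scale=1.0 / alpha)
    # E[X * 1{X <= b}] = mean * F(b; shape=alpha+1, same rate), mean = 1.
    mass = stats.gamma.cdf(edges, a=alpha + 1.0, scale=1.0 / alpha)
    return k * np.diff(mass)          # conditional mean rate per category

for k in (4, 8, 16):
    r = discrete_gamma_rates(0.5, k)
    print(k, "categories, largest rate:", r[-1].round(3))
# With few categories the largest rate is strongly truncated, consistent
# with the article's observation that small-k discrete gamma models
# underestimate the largest site rates.
```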

  19. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    Science.gov (United States)

    2017-12-01

Fig. 2: Simulation method; the process for one iteration of the simulation, repeated 250 times per combination of HR and FAR. Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance.

  20. Measurements of activation reaction rate distributions on a mercury target bombarded with high-energy protons at AGS

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi; Kasugai, Yoshimi; Nakashima, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ino, Takashi; Kawai, Masayoshi [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Jerde, Eric; Glasgow, David [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2000-02-01

A neutronics experiment was carried out using a thick mercury target at the Alternating Gradient Synchrotron (AGS) facility of Brookhaven National Laboratory in the framework of the ASTE (AGS Spallation Target Experiment) collaboration. Reaction rate distributions around the target were measured by the activation technique at incident proton energies of 1.6, 12 and 24 GeV. Various activation detectors, such as the {sup 115}In(n,n'){sup 115m}In, {sup 93}Nb(n,2n){sup 92m}Nb, and {sup 209}Bi(n,xn) reactions with threshold energies ranging from 0.3 to 70.5 MeV, were employed to obtain the reaction rate data for estimating the spallation source neutron characteristics of the mercury target. It was found from the measured {sup 115}In(n,n'){sup 115m}In reaction rate distribution that the number of leakage neutrons becomes maximum at about 11 cm from the top of the hemisphere of the mercury target for 1.6-GeV proton incidence, and that the peak position moves in the forward direction as the incident proton energy increases. A similar result was observed in the reaction rate distributions of the other activation detectors. The experimental procedures and a full set of experimental data in numerical form are summarized in this report. (author)

  1. Estimating the cost of improving quality in electricity distribution: A parametric distance function approach

    International Nuclear Information System (INIS)

    Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana

    2013-01-01

The quality of electricity distribution is being more and more scrutinized by regulatory authorities, and explicit reward and penalty schemes based on quality targets have been introduced in many countries. It is therefore of prime importance to know the cost of improving quality for a distribution system operator. In this paper, we focus on one dimension of quality, the continuity of supply, and estimate the cost of preventing power outages. For that, we use the parametric distance function approach, assuming that outages enter the firm's production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Électricité Réseau Distribution France) in the 2003-2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is equal to 10.7€ for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to be more expensive as quality itself improves. Highlights: We estimate the implicit cost of outages for the main distribution company in France; for this purpose, we make use of a parametric distance function approach; marginal quality improvements tend to be more expensive as quality itself improves; the cost of preventing one interruption varies from 1.8€ to 69.2€ (2005 prices); we estimate that, on average, it lies 33% above the regulated price of quality.

  2. SAR target recognition and posture estimation using spatial pyramid pooling within CNN

    Science.gov (United States)

    Peng, Lijiang; Liu, Xiaohua; Liu, Ming; Dong, Liquan; Hui, Mei; Zhao, Yuejin

    2018-01-01

Many convolutional neural network (CNN) architectures have been proposed to improve performance on synthetic aperture radar automatic target recognition (SAR-ATR), and have obtained state-of-the-art results for target classification on the MSTAR database, but few methods address the estimation of the depression angle and azimuth angle of targets. To better learn hierarchical feature representations for both the 10-class target classification task and target posture estimation tasks, we propose a new CNN architecture with spatial pyramid pooling (SPP), which builds a high-level hierarchy of feature maps by dividing the convolved feature maps from finer to coarser levels to aggregate local features of SAR images. Experimental results on the MSTAR database show that the proposed architecture achieves a recognition accuracy of 99.57% on the 10-class target classification task, matching the most recent state-of-the-art methods, and also performs well on target posture estimation tasks involving depression angle and azimuth angle variation. Moreover, the results encourage the application of deep learning to SAR target posture description.

  3. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems, making parameter estimation much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is the better approach for parameter estimation in distribution systems. Simulation studies are performed on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  4. Unbiased estimators for spatial distribution functions of classical fluids

    Science.gov (United States)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

We use a statistical-mechanical identity, closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.

  5. Impact of microbial count distributions on human health risk estimates

    DEFF Research Database (Denmark)

    Ribeiro Duarte, Ana Sofia; Nauta, Maarten

    2015-01-01

    Quantitative microbiological risk assessment (QMRA) is influenced by the choice of the probability distribution used to describe pathogen concentrations, as this may eventually have a large effect on the distribution of doses at exposure. When fitting a probability distribution to microbial...... enumeration data, several factors may have an impact on the accuracy of that fit. Analysis of the best statistical fits of different distributions alone does not provide a clear indication of the impact in terms of risk estimates. Thus, in this study we focus on the impact of fitting microbial distributions...... on risk estimates, at two different concentration scenarios and at a range of prevalence levels. By using five different parametric distributions, we investigate whether different characteristics of a good fit are crucial for an accurate risk estimate. Among the factors studied are the importance...

  6. The influence of drug distribution and drug-target binding on target occupancy : The rate-limiting step approximation

    NARCIS (Netherlands)

    Witte, de W.E.A.; Vauquelin, G.; Graaf, van der P.H.; Lange, de E.C.M.

    2017-01-01

    The influence of drug-target binding kinetics on target occupancy can be influenced by drug distribution and diffusion around the target, often referred to as "rebinding" or "diffusion-limited binding". This gives rise to a decreased decline of the drug-target complex concentration as a result of a

  7. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    Science.gov (United States)

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are typically modeled as multicomponent quadratic frequency modulation (QFM) signals. Estimation of the chirp rate (CR) and quadratic chirp rate (QCR) of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, conventional CR and QCR estimation algorithms suffer from cross-terms and poor noise robustness. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, simulation results verify that the 2D-PMPCRD achieves higher noise robustness and better cross-term suppression for multi-QFM signals at reasonable computational cost.

  8. Estimation of the shape parameter of a generalized Pareto distribution based on a transformation to Pareto distributed variables

    OpenAIRE

    van Zyl, J. Martin

    2012-01-01

Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of the generalized Pareto distribution from transformed observations, based on the probability-weighted method, is tested. It was found to improve the performance of the probability-weighted estimator and performs well wit...
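
A hedged sketch of the transformation behind this abstract: if X ~ GPD(ξ, σ) with ξ > 0, then Y = 1 + ξX/σ is Pareto with x_m = 1 and shape α = 1/ξ, for which the MLE has the explicit form n / Σ log Yᵢ. The true (ξ, σ) are used here for illustration; in practice they would come from a preliminary estimate such as probability-weighted moments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
xi, sigma = 0.5, 2.0                       # true GPD shape and scale
x = stats.genpareto.rvs(xi, scale=sigma, size=50000, random_state=rng)

y = 1.0 + xi * x / sigma                   # Pareto(x_m=1, alpha=1/xi) variates
alpha_hat = y.size / np.log(y).sum()       # explicit Pareto MLE
print("1/xi =", 1 / xi, " estimated alpha =", round(alpha_hat, 3))
```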

  9. Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs

    Science.gov (United States)

    Kar, Soummya; Moura, José M. F.

    2011-08-01

The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms, with one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we consider the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator remains consistent.
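
A minimal numerical sketch of the consensus + innovations structure described above; the ring topology, gain schedule, and noise levels are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 10, 3                                   # sensors, field dimension
theta = rng.normal(size=p)                     # unknown static field

H = rng.normal(size=(n, 1, p))                 # each sensor sees one linear functional
x = np.zeros((n, p))                           # local estimates
left = [(i - 1) % n for i in range(n)]         # ring communication graph
right = [(i + 1) % n for i in range(n)]
beta = 0.3                                     # consensus gain (assumed)

for k in range(1, 5001):
    alpha = 1.0 / k                            # decaying innovations gain
    y = H @ theta + 0.1 * rng.normal(size=(n, 1))   # fresh local measurements
    consensus = (x[left] - x) + (x[right] - x)
    res = y - (H @ x[..., None])[..., 0]
    innovations = (H.transpose(0, 2, 1) @ res[..., None])[..., 0]
    x = x + beta * consensus + alpha * innovations

print("max deviation of local estimates from theta:",
      np.abs(x - theta).max().round(3))
```

No single sensor's 1-dimensional observation is locally observable, but the collection of random directions is globally observable, which is exactly the regime the distributed observability condition addresses.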

  10. Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); D. Thierens (Dirk); D. Thierens (Dirk)

    2007-01-01

Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we

  11. An observer-theoretic approach to estimating neutron flux distribution

    International Nuclear Information System (INIS)

    Park, Young Ho; Cho, Nam Zin

    1989-01-01

    State feedback control provides many advantages such as stabilization and improved transient response. However, when the state feedback control is considered for spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the flux spatial distribution using only limited flux measurements. It is based on the Luenberger observer in control theory, extended to the distributed parameter systems such as the space-time reactor dynamics equation. The results of the application of the method to simple reactor models showed that the flux distribution is estimated by the observer very efficiently using information from only a few sensors

  12. Wireless sensor networks distributed consensus estimation

    CERN Document Server

    Chen, Cailian; Guan, Xinping

    2014-01-01

    This SpringerBrief evaluates the cooperative effort of sensor nodes to accomplish high-level tasks with sensing, data processing and communication. The metrics of network-wide convergence, unbiasedness, consistency and optimality are discussed through network topology, distributed estimation algorithms and consensus strategy. Systematic analysis reveals that proper deployment of sensor nodes and a small number of low-cost relays (without sensing function) can speed up the information fusion and thus improve the estimation capability of wireless sensor networks (WSNs). This brief also investiga

  13. Estimation of potential distribution of gas hydrate in the northern South China Sea

    Science.gov (United States)

    Wang, Chunjuan; Du, Dewen; Zhu, Zhiwei; Liu, Yonggang; Yan, Shijuan; Yang, Gang

    2010-05-01

    Gas hydrate research has significant importance for securing world energy resources, and has the potential to produce considerable economic benefits. Previous studies have shown that the South China Sea is an area that harbors gas hydrates. However, there is a lack of systematic investigations and understanding on the distribution of gas hydrate throughout the region. In this paper, we applied mineral resource quantitative assessment techniques to forecast and estimate the potential distribution of gas hydrate resources in the northern South China Sea. However, current hydrate samples from the South China Sea are too few to produce models of occurrences. Thus, according to similarity and contrast principles of mineral outputs, we can use a similar hydrate-mining environment with sufficient gas hydrate data as a testing ground for modeling northern South China Sea gas hydrate conditions. We selected the Gulf of Mexico, which has extensively studied gas hydrates, to develop predictive models of gas hydrate distributions, and to test errors in the model. Then, we compared the existing northern South China Sea hydrate-mining data with the Gulf of Mexico characteristics, and collated the relevant data into the model. Subsequently, we applied the model to the northern South China Sea to obtain the potential gas hydrate distribution of the area, and to identify significant exploration targets. Finally, we evaluated the reliability of the predicted results. The south seabed area of Taiwan Bank is recommended as a priority exploration target. The Zhujiang Mouth, Southeast Hainan, and Southwest Taiwan Basins, including the South Bijia Basin, also are recommended as exploration target areas. In addition, the method in this paper can provide a useful predictive approach for gas hydrate resource assessment, which gives a scientific basis for construction and implementation of long-term planning for gas hydrate exploration and general exploitation of the seabed of China.

  14. An ML-Based Radial Velocity Estimation Algorithm for Moving Targets in Spaceborne High-Resolution and Wide-Swath SAR Systems

    Directory of Open Access Journals (Sweden)

    Tingting Jin

    2017-04-01

Multichannel synthetic aperture radar (SAR) is a significant breakthrough with respect to the inherent trade-off between high resolution and wide swath (HRWS) in conventional SAR. Moving target indication (MTI) is an important application of spaceborne HRWS SAR systems. In contrast to previous studies of SAR MTI, HRWS SAR mainly faces the problem of under-sampled data in each channel, which makes single-channel imaging and processing infeasible. In this study, the estimation of velocity is made equivalent to the estimation of the cone angle according to their relationship. A maximum likelihood (ML) based algorithm is proposed to estimate the radial velocity in the presence of Doppler ambiguities. After that, signal reconstruction and compensation for the phase offset caused by the radial velocity are performed for a moving target. Finally, a traditional imaging algorithm is applied to obtain a focused image of the moving target. Experiments are conducted to evaluate the accuracy and effectiveness of the estimator under different signal-to-noise ratios (SNRs). Furthermore, the performance is analyzed for a moving ship that experiences interference due to different distributions of sea clutter. The results verify that the proposed algorithm is accurate and efficient, with low computational complexity. This paper aims at providing a solution to the velocity estimation problem in future HRWS SAR systems with multiple receive channels.

  15. Distribution measurement of 60Co target radioactive specific activity

    International Nuclear Information System (INIS)

    Li Xingyan; Chen Zigen; Ren Min

    1994-01-01

The radioactive specific activity distribution of a cobalt-60 target irradiated in HFETR is a key parameter. Using the collimation principle, an underwater measurement device, and a conversion coefficient obtained by experiment, the radioactive specific activity distribution is measured. The uncertainty of the measurement is less than 10%.

  16. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to an unbounded number of lognormal components. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. The Markov Chain Monte Carlo (MCMC) sampling technique was then employed to estimate the posterior distribution of the parameters. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrate that this model offers significant flexibility in accounting for complex mixture distributions of travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
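
As a runnable stand-in for the paper's estimator, the sketch below uses scikit-learn's variational (stick-breaking) Dirichlet-process mixture rather than MCMC, fits on the log scale so components are lognormal, and caps the components at six as in the paper; the travel-time data are synthetic.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
# Synthetic corridor travel times (minutes): free-flow and congested states.
tt = np.concatenate([rng.lognormal(2.0, 0.10, 800),
                     rng.lognormal(2.6, 0.15, 400)])

dpmm = BayesianGaussianMixture(
    n_components=6,                                   # cap, as in the paper
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0)
dpmm.fit(np.log(tt).reshape(-1, 1))                   # lognormal components

active = dpmm.weights_ > 0.01                         # effectively used states
print("effective number of states:", active.sum())
print("state means (minutes):", np.exp(dpmm.means_[active, 0]).round(1))
```

The stick-breaking prior lets the fit shut off unneeded components on its own, which is the "dynamic number of components" property the abstract highlights.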

  17. Parameter estimation of the zero inflated negative binomial beta exponential distribution

    Science.gov (United States)

    Sirichantra, Chutima; Bodhisuwan, Winai

    2017-11-01

The zero inflated negative binomial-beta exponential (ZINB-BE) distribution is developed as an alternative distribution for excessive zero counts with overdispersion. The ZINB-BE distribution is a mixture of two distributions, the Bernoulli and the negative binomial-beta exponential distributions. In this work, some characteristics of the proposed distribution, such as the mean and variance, are presented. Maximum likelihood estimation is applied to estimate the parameters of the proposed distribution. Finally, results of a Monte Carlo simulation study suggest that the estimator is highly efficient when the sample size is large.

  18. Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials

    Science.gov (United States)

    Niu, Qifei; Zhang, Chi

    2018-03-01

There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and their applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables. Simultaneous processing of multiple data sets could therefore potentially improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution, including the general shape and relative peak positions of the distribution curves. It is also found from the numerical example that the surface relaxivity of the sample can be extracted by the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion improves the resolution of the estimated pore size distribution because of the additional data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.

  19. Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different

    Directory of Open Access Journals (Sweden)

    Keisuke Yano

    2014-05-01

We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback-Leibler risk when the distributions of the data and the target variables are different and have a common unknown parameter. It is known that the Kullback-Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and to prediction based on the binary regression model.
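
For reference, the trace criterion stated in the abstract can be written compactly. The 1/(2n) factor below is the standard asymptotic normalization and is an assumption of this sketch; the abstract itself fixes only the trace structure.

```latex
% I_x: Fisher information of the data model; I_y: Fisher information of the
% target-variable model. The 1/(2n) scaling is assumed here, not quoted.
\mathrm{Risk}_n(\theta) \;\sim\; \frac{1}{2n}\,
\operatorname{tr}\!\bigl( I_x(\theta)^{-1}\, I_y(\theta) \bigr),
\qquad n \to \infty .
```

The paper then constructs sample-size-dependent priors under which the overall risk becomes asymptotically constant in θ.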

  20. Improved target detection and bearing estimation utilizing fast orthogonal search for real-time spectral analysis

    International Nuclear Information System (INIS)

    Osman, Abdalla; El-Sheimy, Naser; Nourledin, Aboelamgd; Theriault, Jim; Campbell, Scott

    2009-01-01

The problem of target detection and tracking in the ocean environment has attracted considerable attention due to its importance in military and civilian applications. Sonobuoys are among the passive sonar systems capable of underwater target detection. Target detection and bearing estimation are mainly obtained through spectral analysis of the received signals. The frequency resolution of current techniques is limited, which affects the accuracy of target detection and bearing estimation at relatively low signal-to-noise ratios (SNRs). This research investigates the development of a bearing estimation method using fast orthogonal search (FOS) for enhanced spectral estimation. FOS is employed to improve both target detection and bearing estimation for low-SNR inputs. The proposed method was tested using simulated data developed for two different scenarios under different underwater environmental conditions. The results show that the proposed method is capable of enhancing the accuracy of target detection as well as bearing estimation, especially in cases of very low SNR.

  1. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  2. A game theory approach to target tracking in sensor networks.

    Science.gov (United States)

    Gu, Dongbing

    2011-02-01

In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability, but with limited range, and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape detection by maximizing the estimation error. This adversarial behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst-case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented by modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides satisfactory results.

  3. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    Science.gov (United States)

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated for different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and the drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.

  4. Supersaturation Control using Analytical Crystal Size Distribution Estimator for Temperature Dependent in Nucleation and Crystal Growth Phenomena

    Science.gov (United States)

    Zahari, Zakirah Mohd; Zubaidah Adnan, Siti; Kanthasamy, Ramesh; Saleh, Suriyati; Samad, Noor Asma Fazli Abdul

    2018-03-01

The specification of a crystal product is usually given in terms of the crystal size distribution (CSD), and an optimal cooling strategy is necessary to achieve the target CSD. Direct design control involving an analytical CSD estimator is one approach that can be used to generate the set-point. However, that estimator neglects the effects of temperature on the crystal growth rate. The objective of this work is to extend the analytical CSD estimator by employing an Arrhenius expression to cover the effect of temperature on the growth rate, so that an accurate set-point can be provided. The application of this work is demonstrated through a potassium sulphate crystallisation process. Based on a specified target CSD, the extended estimator is capable of generating the required set-point, and a proposed controller successfully maintained the operation at the set-point to achieve the target CSD. Compared with a linear cooling strategy, the proposed approach reduces the total number of undesirable crystals generated by secondary nucleation by up to 18.2%.
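
A minimal sketch of the temperature-dependent growth kinetics implied above; the Arrhenius form and all parameter values below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

R = 8.314          # gas constant, J/(mol*K)
kg0 = 7.5e2        # pre-exponential factor, m/s   (assumed)
Ea = 4.0e4         # activation energy, J/mol      (assumed)
g = 1.5            # growth order                  (assumed)

def growth_rate(T_kelvin, supersaturation):
    """Crystal growth rate G(T, S) with Arrhenius temperature dependence."""
    return kg0 * np.exp(-Ea / (R * T_kelvin)) * supersaturation**g

# Cooling from 60 C to 30 C at fixed relative supersaturation: the growth
# rate the set-point generator must account for drops substantially.
for T in (333.15, 318.15, 303.15):
    print(f"T = {T - 273.15:.0f} C, G = {growth_rate(T, 0.02):.3e} m/s")
```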

  5. High-Speed Target Identification System Based on the Plume’s Spectral Distribution

    Directory of Open Access Journals (Sweden)

    Wenjie Lang

    2015-01-01

    Full Text Available In order to recognize the target of high speed quickly and accurately, an identification system was designed based on analysis of the distribution characteristics of the plume spectrum. In the system, the target was aligned with visible light tracking module, and the spectral analysis of the target’s plume radiation was achieved by interference module. The distinguishing factor recognition algorithm was designed on basis of ratio of multifeature band peaks and valley mean values. Effective recognition of the high speed moving target could be achieved after partition of the active region and the influence of target motion on spectral acquisition was analyzed. In the experiment the small rocket combustion was used as the target. The spectral detection experiment was conducted at different speeds 2.0 km away from the detection system. Experimental results showed that spectral distribution had significant spectral offset in the same sampling period for the target with different speeds, but the spectral distribution was basically consistent. Through calculation of the inclusion relationship between distinguishing factor and distinction interval of the peak value and the valley value at the corresponding wave-bands, effective identification of target could be achieved.

  6. Distributions of component failure rates, estimated from LER data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1985-01-01

    Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter γ distributions. In this study, a more complicated distributional form is considered, a mixture of γs. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single γ distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution
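
A hedged sketch of the model comparison described above, on simulated plant-level failure rates: fit a single γ and a two-component γ mixture by direct likelihood maximization and compare with a likelihood-ratio statistic (the study's actual fitting procedure for LER data may differ).

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(6)
# Simulated failure rates clustering into two groups of plants.
rates = np.concatenate([rng.gamma(4.0, 2e-4, 40), rng.gamma(6.0, 1e-3, 20)])

def nll_single(p):
    shape, scale = np.exp(p)                         # log-parameterized
    return -stats.gamma.logpdf(rates, shape, scale=scale).sum()

def nll_mix(p):
    w = 1.0 / (1.0 + np.exp(-p[0]))                  # mixing weight in (0, 1)
    s1, sc1, s2, sc2 = np.exp(p[1:])
    pdf = (w * stats.gamma.pdf(rates, s1, scale=sc1)
           + (1 - w) * stats.gamma.pdf(rates, s2, scale=sc2))
    return -np.log(pdf + 1e-300).sum()

r1 = optimize.minimize(nll_single, np.log([2.0, 1e-3]), method="Nelder-Mead")
# Mixture initialized near the visible clusters to keep the sketch robust.
r2 = optimize.minimize(
    nll_mix, np.concatenate([[0.5], np.log([4.0, 2e-4, 6.0, 1e-3])]),
    method="Nelder-Mead", options={"maxiter": 20000})

lr = 2 * (r1.fun - r2.fun)                           # likelihood-ratio statistic
print("2*log LR (single gamma vs mixture):", round(lr, 2))
```

A small likelihood-ratio statistic corresponds to the study's finding that the mixture's improvement over a single γ was minimal and not statistically significant.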

  7. Target Tracking in 3-D Using Estimation Based Nonlinear Control Laws for UAVs

    Directory of Open Access Journals (Sweden)

    Mousumi Ahmed

    2016-02-01

This paper presents an estimation-based backstepping-like control law design for an Unmanned Aerial Vehicle (UAV) to track a moving target in 3-D space. A ground-based sensor or an onboard seeker antenna provides range, azimuth angle, and elevation angle measurements to a chaser UAV that implements an extended Kalman filter (EKF) to estimate the full state of the target. A nonlinear controller then utilizes this estimated target state and the chaser's state to provide speed, flight path, and course/heading angle commands to the chaser UAV. Tracking performance with respect to measurement uncertainty is evaluated for three cases: (1) stationary white noise; (2) stationary colored noise; and (3) non-stationary (range-correlated) white noise. Furthermore, in an effort to improve tracking performance, the measurement model is made more realistic by taking into consideration range-dependent uncertainties in the measurements, i.e., as the chaser closes in on the target, measurement uncertainties are reduced in the EKF, thus providing the UAV with more accurate control commands. Simulation results for these cases are shown to illustrate target state estimation and trajectory tracking performance.
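
A minimal sketch of the EKF measurement update implied by the range/azimuth/elevation sensor model above, with a position-only state for brevity (the paper's filter also estimates velocities; all noise values are assumed).

```python
import numpy as np

def h(p):
    """Spherical measurement model: range, azimuth, elevation."""
    x, y, z = p
    rho = np.hypot(x, y)
    return np.array([np.sqrt(x*x + y*y + z*z),
                     np.arctan2(y, x),
                     np.arctan2(z, rho)])

def H_jac(p):
    """Jacobian of h at p."""
    x, y, z = p
    r2 = x*x + y*y + z*z
    r = np.sqrt(r2)
    rho2 = x*x + y*y
    rho = np.sqrt(rho2)
    return np.array([[x/r,           y/r,           z/r],
                     [-y/rho2,       x/rho2,        0.0],
                     [-x*z/(r2*rho), -y*z/(r2*rho), rho/r2]])

def ekf_update(x_est, P, z_meas, R):
    """One EKF measurement update with the spherical measurement model."""
    Hk = H_jac(x_est)
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)
    x_new = x_est + K @ (z_meas - h(x_est))
    P_new = (np.eye(3) - K @ Hk) @ P
    return x_new, P_new

truth = np.array([800.0, 600.0, 200.0])                   # target position, m
R = np.diag([5.0**2, np.deg2rad(0.5)**2, np.deg2rad(0.5)**2])
x_est, P = np.array([700.0, 700.0, 150.0]), np.eye(3) * 1e4
rng = np.random.default_rng(7)
for _ in range(20):
    z = h(truth) + rng.multivariate_normal(np.zeros(3), R)
    x_est, P = ekf_update(x_est, P, z, R)
print("position error:", np.linalg.norm(x_est - truth).round(1), "m")
```

The range-dependent uncertainty idea in the abstract amounts to shrinking R as the estimated range decreases, which slots directly into this update.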

  8. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    Science.gov (United States)

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans, but quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distributions. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all countries and years. This enables a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in the average lifespan of passenger cars from 2000 to 2009 for 20 countries. The average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
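
A minimal sketch of the simplification described above: with the distribution's shape parameter held at a constant (a Weibull form, the shape value, and the synthetic data are all assumptions here), only the scale must be estimated, and the average lifespan follows in closed form.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as gamma_fn

FIXED_SHAPE = 2.7                       # assumed constant shape parameter

rng = np.random.default_rng(8)
lifespans = stats.weibull_min.rvs(2.7, scale=14.0, size=300, random_state=rng)

# MLE with the shape (c) and location fixed; only the scale is estimated.
c, loc, scale = stats.weibull_min.fit(lifespans, fc=FIXED_SHAPE, floc=0.0)

# Mean of a Weibull: scale * Gamma(1 + 1/shape).
avg_lifespan = scale * gamma_fn(1.0 + 1.0 / FIXED_SHAPE)
print(f"estimated average lifespan: {avg_lifespan:.1f} years")
```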

  9. Estimation of subcriticality and fuel concentration by using 'pattern matching' of neutron flux distribution under non uniformed system

    International Nuclear Information System (INIS)

    Ishitani, Kazuki; Yamane, Yoshihiro

    1999-01-01

In nuclear fuel reprocessing plants, monitoring the spatial profile of the neutron flux with detectors such as PSPCs, in order to infer subcriticality and the distribution of fuel concentration, is very beneficial from the standpoint of criticality safety. In this paper, a method of subcriticality and fuel concentration estimation intended for use in non-uniform systems is proposed. Its basic concept is pattern matching between a measured neutron flux distribution and precalculated ones. Any kind of subcriticality estimation can be regarded as a black box that takes measured neutron counts as input and outputs subcriticality. We propose using an artificial neural network or 'pattern matching' as a black box that has no clear theoretical basis; these methods rely wholly on calculated values, exploiting the recent advances in the accuracy of computer codes for criticality safety. The main difference between indirect bias estimation methods and our method is that our new approach targets unknown non-uniform systems. (J.P.N.)

  10. The Burr X Pareto Distribution: Properties, Applications and VaR Estimation

    Directory of Open Access Journals (Sweden)

    Mustafa Ç. Korkmaz

    2017-12-01

In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model, and several methods for estimating the model parameters are examined. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR) by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative VaR model for financial institutions, against the generalized Pareto model.
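
For reference, the peaks-over-threshold VaR computation has the generic form sketched below. The sketch uses SciPy's generalized Pareto (the benchmark model mentioned in the abstract), since the Burr X Pareto itself is not available in SciPy, and the loss data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
losses = stats.t.rvs(df=3, size=10000, random_state=rng)   # heavy-tailed losses

u = np.quantile(losses, 0.95)                # threshold at the 95th percentile
exc = losses[losses > u] - u                 # exceedances over u
xi, _, sigma = stats.genpareto.fit(exc, floc=0.0)

def var_pot(p):
    """POT VaR: u + (sigma/xi) * [((n/N_u) * (1-p))**(-xi) - 1]."""
    n, n_u = losses.size, exc.size
    return u + sigma / xi * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)

for p in (0.99, 0.995, 0.999):
    print(f"VaR_{p}: {var_pot(p):.2f}"
          f"  (empirical: {np.quantile(losses, p):.2f})")
```

Swapping the fitted tail model for the Burr X Pareto, as the paper proposes, changes only the fit and quantile steps; the POT mechanics stay the same.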

  11. Influence of the statistical distribution of bioassay measurement errors on the intake estimation

    International Nuclear Information System (INIS)

    Lee, T. Y; Kim, J. K

    2006-01-01

The purpose of this study is to provide the guidance necessary for selecting error distributions, by analyzing the influence of the assumed statistical distribution of bioassay measurement errors on intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for cases in which the error distributions are normal and lognormal, and the estimated intakes under the two distributions were compared. According to the results of this study, when measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has negligible influence on the results. For measurement results for the daily excretion rate, however, the intakes obtained under the assumption of a lognormal distribution were 10% higher than those obtained under the assumption of a normal distribution. In view of these facts, when the uncertainty is governed by counting statistics, the distribution type is considered to have no influence on intake estimation; when other uncertainty components are predominant, it is clearly desirable to estimate the intake assuming a lognormal distribution.
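
A minimal sketch contrasting the two error models discussed above; the retention model, parameter values, and the specific maximum-likelihood forms (weighted least squares for additive normal errors, geometric mean for multiplicative lognormal errors) are textbook assumptions rather than the paper's exact procedure.

```python
import numpy as np

t = np.array([1.0, 3.0, 7.0, 14.0, 30.0])          # days after intake
m = 0.5 * np.exp(-0.1 * t)                         # assumed unit-intake model m(t)
true_intake = 100.0

rng = np.random.default_rng(10)
meas = true_intake * m * rng.lognormal(0.0, 0.3, t.size)   # bioassay data

# Normal (additive, constant-variance) errors: least-squares solution.
intake_normal = (meas * m).sum() / (m * m).sum()

# Lognormal (multiplicative) errors: geometric-mean solution.
intake_lognormal = np.exp(np.mean(np.log(meas / m)))

print(f"normal-model intake:    {intake_normal:.1f}")
print(f"lognormal-model intake: {intake_lognormal:.1f}")
```

With multiplicative scatter of this size, the two estimates differ by several percent, illustrating why the choice of error distribution matters once counting statistics no longer dominate.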

  12. Information-theoretic methods for estimating complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, and has led to the recent development of information-theoretic methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing and neural networks.

  13. Low Complexity Parameter Estimation For Off-the-Grid Targets

    KAUST Repository

    Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

    2015-01-01

    In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms

  14. Estimating particle number size distributions from multi-instrument observations with Kalman Filtering

    Energy Technology Data Exchange (ETDEWEB)

    Viskari, T.

    2012-07-01

    Atmospheric aerosol particles have several important effects on the environment and human society. The exact impact of aerosol particles is largely determined by their particle size distributions. However, no single instrument is able to measure the whole range of the particle size distribution. Estimating a particle size distribution from multiple simultaneous measurements remains a challenge in aerosol physical research. Current methods to combine different measurements require assumptions concerning the overlapping measurement ranges and have difficulties in accounting for measurement uncertainties. In this thesis, Extended Kalman Filter (EKF) is presented as a promising method to estimate particle number size distributions from multiple simultaneous measurements. The particle number size distribution estimated by EKF includes information from prior particle number size distributions as propagated by a dynamical model and is based on the reliabilities of the applied information sources. Known physical processes and dynamically evolving error covariances constrain the estimate both over time and particle size. The method was tested with measurements from Differential Mobility Particle Sizer (DMPS), Aerodynamic Particle Sizer (APS) and nephelometer. The particle number concentration was chosen as the state of interest. The initial EKF implementation presented here includes simplifications, yet the results are positive and the estimate successfully incorporated information from the chosen instruments. For particle sizes smaller than 4 micrometers, the estimate fits the available measurements and smooths the particle number size distribution over both time and particle diameter. The estimate has difficulties with particles larger than 4 micrometers due to issues with both measurements and the dynamical model in that particle size range. The EKF implementation appears to reduce the impact of measurement noise on the estimate, but has a delayed reaction to sudden
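
    A minimal sketch of the filtering step described above (in this linear toy setting the extended Kalman filter reduces to the plain Kalman update): the state is a discretized particle number size distribution, propagated by a placeholder dynamical model and updated with two simulated instruments whose measurement ranges overlap. All matrices and noise levels are illustrative assumptions.

```python
import numpy as np

n = 20                                          # size bins
F = np.eye(n) + 0.01 * np.diag(np.ones(n - 1), k=1)   # toy growth dynamics
Q = 0.01 * np.eye(n)                            # model-error covariance

H1 = np.eye(n)[:10]                             # instrument 1 sees small bins
H2 = np.eye(n)[8:]                              # instrument 2 overlaps at bins 8-9
R1, R2 = 0.05 * np.eye(10), 0.10 * np.eye(n - 8)

x, P = np.ones(n), np.eye(n)                    # state estimate and covariance

def update(x, P, H, R, y):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    return x + K @ (y - H @ x), (np.eye(n) - K @ H) @ P

rng = np.random.default_rng(3)
truth = np.exp(-0.5 * ((np.arange(n) - 8) / 3.0) ** 2)
for _ in range(50):
    x, P = F @ x, F @ P @ F.T + Q               # prediction step
    x, P = update(x, P, H1, R1, H1 @ truth + 0.05 * rng.standard_normal(10))
    x, P = update(x, P, H2, R2, H2 @ truth + 0.10 * rng.standard_normal(n - 8))
print(np.round(x - truth, 2))                   # residual after fusion
```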

  15. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step forward for the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, and to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  16. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, offspring are generated at the niche level by alternating between the two, which likewise offers a potential balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, as supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
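
    A toy sketch of the offspring-generation idea: within each niche, children are sampled alternately from a Gaussian and a Cauchy centred on the niche seed, trading local exploitation against occasional long jumps. The objective, niches and scales are assumptions, not the paper's benchmark setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(x):                                  # multimodal toy objective
    return np.sin(5 * x) * np.exp(-0.1 * x**2)

def offspring(niche_mean, niche_std, gen):
    if gen % 2 == 0:                             # Gaussian generation
        return rng.normal(niche_mean, niche_std, size=10)
    # Cauchy generation: heavier tails encourage escaping local optima
    return niche_mean + niche_std * rng.standard_cauchy(10)

niches = [-2.0, 1.5]                             # seeds of two niches
for gen in range(20):
    for i, seed in enumerate(niches):
        kids = offspring(seed, 0.3, gen)
        best = kids[np.argmax(fitness(kids))]
        if fitness(best) > fitness(seed):        # keep each niche's best seed
            niches[i] = best
print([f"{s:+.3f} (f={fitness(s):+.3f})" for s in niches])
```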

  17. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when the proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution
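
    A small Monte Carlo sketch of consequence (1): estimating the 97.7th percentile (mean + 2 standard deviations under normality) from lognormal data, with and without the correct log transformation. Sample sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
data = np.exp(rng.standard_normal((2000, 200)))  # 2000 lognormal samples, n=200

# Wrongly assume normality: percentile = mean + 2*sd of the raw data.
normal_est = data.mean(axis=1) + 2 * data.std(axis=1)

# Correct lognormal handling: apply the same rule on the log scale.
logs = np.log(data)
lognormal_est = np.exp(logs.mean(axis=1) + 2 * logs.std(axis=1))

true_q = np.exp(2.0)       # exact 97.7th percentile of exp(N(0, 1))
print(f"true quantile        : {true_q:.2f}")
print(f"normal assumption    : {normal_est.mean():.2f} (biased)")
print(f"lognormal assumption : {lognormal_est.mean():.2f}")
```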

  18. Transmuted Rayleigh Distribution with Estimation and Application on Noise Signal

    Science.gov (United States)

    Ahmed, Suhad; Qasim, Zainab

    2018-05-01

    This paper deals with transforming the one-parameter Rayleigh distribution into a transmuted probability distribution by introducing a new parameter (λ) with |λ| ≤ 1, since the resulting distribution is useful for representing signal-data distributions and failure-data models. The transmuted parameter λ, as well as the original parameter (θ), is estimated by the methods of moments and maximum likelihood using different sample sizes (n = 25, 50, 75, 100), and the estimation results are compared by a statistical measure (the mean square error, MSE).
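
    For reference, the quadratic rank transmutation map that underlies such transmuted families, written here for a Rayleigh baseline F (a standard construction; the exact parameterization of F in the paper is an assumption), is shown below. Setting λ = 0 recovers the original Rayleigh distribution.

```latex
% Quadratic rank transmutation map with a Rayleigh baseline F(x):
G(x) = (1 + \lambda)\,F(x) - \lambda\,F(x)^{2}, \qquad |\lambda| \le 1,
\qquad F(x) = 1 - \exp\!\left(-\frac{x^{2}}{2\theta^{2}}\right), \quad x \ge 0.
```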

  19. Distributions of component failure rates estimated from LER data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1985-01-01

    Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter gamma distributions. In this study, a more complicated distributional form is considered, a mixture of gammas. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single gamma distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution. 9 refs
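
    A sketch of the kind of comparison described above: fit a single gamma and a two-component gamma mixture by direct likelihood maximization and compare the log-likelihoods. The "failure rate" sample is simulated, not LER data, and the simple Nelder-Mead optimization stands in for a proper mixture-fitting procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(6)
rates = np.concatenate([gamma.rvs(2.0, scale=0.5, size=40, random_state=rng),
                        gamma.rvs(6.0, scale=0.8, size=20, random_state=rng)])

def nll_single(p):
    k, s = np.exp(p)                             # log-params stay positive
    return -np.sum(gamma.logpdf(rates, k, scale=s))

def nll_mixture(p):
    w = 1.0 / (1.0 + np.exp(-p[0]))              # mixing weight in (0, 1)
    k1, s1, k2, s2 = np.exp(p[1:])
    pdf = (w * gamma.pdf(rates, k1, scale=s1)
           + (1 - w) * gamma.pdf(rates, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

single = minimize(nll_single, [0.0, 0.0], method="Nelder-Mead")
mix = minimize(nll_mixture, [0.0, 0.5, -0.5, 1.5, 0.0], method="Nelder-Mead")
print(f"single gamma  -logL = {single.fun:.1f}")
print(f"gamma mixture -logL = {mix.fun:.1f} (improvement {single.fun - mix.fun:.1f})")
```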

  20. Distribution state estimation based voltage control for distribution networks; Koordinierte Spannungsregelung anhand einer Zustandsschaetzung im Verteilnetz

    Energy Technology Data Exchange (ETDEWEB)

    Diwold, Konrad; Yan, Wei [Fraunhofer IWES, Kassel (Germany); Braun, Martin [Fraunhofer IWES, Kassel (Germany); Stuttgart Univ. (Germany). Inst. fuer Energieuebertragung und Hochspannungstechnik (IEH)

    2012-07-01

    The increasing integration of distributed energy units creates challenges for the operators of distribution systems, because systems that were originally designed for distributed consumption and central generation now face decentralized feed-in. One imminent problem associated with decentralized feed-in is local voltage violations in the distribution system, which are hard to handle with conventional voltage control strategies. This article proposes a new voltage control framework for distribution system operation. The framework utilizes the reactive power of distributed energy units as well as on-load tap changers to mitigate voltage problems in the network. Using an optimization band, the control strategy can be applied in situations where the network information is derived from distribution state estimators and thus contains some error. The control capabilities in combination with a distribution state estimator are tested using data from a real rural distribution network. The results are very promising: voltage control is fast and accurate, preventing the majority of voltage violations during system operation under realistic conditions. (orig.)

  1. Low Complexity Moving Target Parameter Estimation for MIMO Radar using 2D-FFT

    KAUST Repository

    Jardak, Seifallah

    2017-06-16

    In multiple-input multiple-output radar, to localize a target and estimate its reflection coefficient, a given cost function is usually optimized over a grid of points. The performance of such algorithms is directly affected by the grid resolution. Increasing the number of grid points enhances the resolution of the estimator but also increases its computational complexity exponentially. In this work, two reduced-complexity algorithms are derived based on Capon and amplitude and phase estimation (APES) to estimate the reflection coefficient, angular location, and Doppler shift of multiple moving targets. By exploiting the structure of the terms, the cost function is brought into a form that allows us to apply the two-dimensional fast Fourier transform (2D-FFT) and reduce the computational complexity of estimation. Using a low-resolution 2D-FFT, the proposed algorithm identifies sub-optimal estimates and feeds them as initial points to the derived Newton gradient algorithm. In contrast to grid-based search algorithms, the proposed algorithm can optimally estimate on- and off-the-grid targets with very low computational complexity. A new APES cost function with better estimation performance is also discussed. Generalized expressions of the Cramér-Rao lower bound are derived to assess the performance of the proposed algorithm.
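
    A toy sketch of the coarse-to-fine strategy: a 2D FFT evaluates a matched-filter cost surface on a coarse grid, and the grid peak seeds an off-grid local refinement (a generic Nelder-Mead optimizer stands in here for the paper's Newton gradient step). The single-sinusoid signal model and all parameters are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
M, N = 16, 32                                    # array elements x pulses
f_true = (0.23, 0.61)                            # normalized frequencies (off-grid)
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
y = np.exp(2j * np.pi * (f_true[0] * m + f_true[1] * n))
y += 0.1 * (rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape))

# Coarse stage: the 2D FFT evaluates the matched filter on an M x N grid.
spec = np.abs(np.fft.fft2(y)) ** 2
i0, j0 = np.unravel_index(np.argmax(spec), spec.shape)
coarse = np.array([i0 / M, j0 / N])

# Fine stage: maximize the matched-filter output off the grid.
def neg_power(f):
    steer = np.exp(-2j * np.pi * (f[0] * m + f[1] * n))
    return -np.abs(np.sum(steer * y)) ** 2

fine = minimize(neg_power, coarse, method="Nelder-Mead").x
print(f"coarse grid estimate: {coarse.round(3)}, refined: {fine.round(4)}")
```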

  3. Moving-Target Position Estimation Using GPU-Based Particle Filter for IoT Sensing Applications

    Directory of Open Access Journals (Sweden)

    Seongseop Kim

    2017-11-01

    Full Text Available A particle filter (PF) has been introduced for effective position estimation of moving targets in non-Gaussian and nonlinear systems. The time difference of arrival (TDOA) method with an acoustic sensor array is commonly used to estimate the location of a concealed moving target, especially underwater. In this paper, we propose a GPU-based acceleration of target position estimation using a PF, together with an efficient system and software architecture. The proposed graphics processing unit (GPU)-based algorithm is well suited to applying PF signal processing in a target system consisting of large numbers of Internet of Things (IoT)-driven sensors, because its parallelization is scalable. For the TDOA measurement from the acoustic sensor array, we use the generalized cross-correlation phase transform (GCC-PHAT) method to obtain the correlation coefficient of the signal using the fast Fourier transform (FFT), and we accelerate the GCC-PHAT-based TDOA measurements with GPU compute unified device architecture (CUDA). The proposed approach parallelizes the target position estimation algorithm with GPU-based PF processing. In addition, it can efficiently estimate sudden changes in target movement using GPU-based parallel computing, which can also be used for multiple-target tracking, and it provides scalability for extending the detection algorithm as the number of sensors increases. Therefore, the proposed architecture can be applied in IoT sensing applications with a large number of sensors. The target estimation algorithm was verified using MATLAB and implemented using GPU CUDA. We implemented the proposed signal-processing acceleration system on a target GPU and analyzed its execution time: the execution time of the algorithm is reduced by 55% compared to standalone CPU operation on the target embedded board, an NVIDIA Jetson TX1. Also, to apply large
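
    A minimal GCC-PHAT sketch, the TDOA step named above: whiten the cross-spectrum of two channels and locate the peak of its inverse FFT. The signals, sampling rate and delay are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
fs = 16000
sig = rng.standard_normal(fs)                    # 1 s of broadband source
true_delay = 37                                  # samples
ch1 = sig + 0.05 * rng.standard_normal(fs)
ch2 = np.roll(sig, true_delay) + 0.05 * rng.standard_normal(fs)

def gcc_phat(a, b):
    n = 2 * a.size                               # zero-pad to avoid wrap-around
    A, B = np.fft.rfft(a, n), np.fft.rfft(b, n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12               # PHAT weighting (whitening)
    cc = np.fft.irfft(cross, n)
    lag = int(np.argmax(np.abs(cc)))
    return lag - n if lag > n // 2 else lag      # map to signed lag

print(f"estimated delay: {-gcc_phat(ch1, ch2)} samples (true {true_delay})")
```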

  4. Feasibility of estimating generalized extreme-value distribution of floods

    International Nuclear Information System (INIS)

    Ferreira de Queiroz, Manoel Moises

    2004-01-01

    Flood frequency analysis with the generalized extreme-value (GEV) probability distribution has found increased application in recent years, given its flexibility in dealing with the three asymptotic forms of extreme-value distribution derived from different initial probability distributions. Estimation of the higher flood quantiles is usually accomplished by extrapolating one of the three inverse forms of the GEV distribution, fitted to the experimental data, to return periods much longer than those actually observed. This paper studies the feasibility of fitting the GEV distribution by moments of linear combinations of higher-order statistics (LH moments) using synthetic annual flood series with varying characteristics and lengths. As hydrologic events in nature, such as daily discharge, take finite values, their annual maxima are expected to follow the asymptotic form of the limited GEV distribution. Synthetic annual flood series were thus obtained from stochastic sequences of 365 daily discharges generated by Monte Carlo simulation on the basis of the limited probability distribution underlying the limited GEV distribution. The results show that parameter estimation by LH moments for this distribution, fitted to annual flood samples of less than 100 years derived from an initially limited distribution, may indicate any form of extreme-value distribution, not just the limited form as expected, and with large uncertainty in the fitted parameters. A frequency analysis, on the basis of the GEV distribution and LH moments, of annual flood series of lengths varying between 13 and 73 years observed at 88 gauge stations on the Parana River in Brazil indicated all three forms of the GEV distribution. (Author)
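
    A short sketch of GEV-based flood frequency analysis (using SciPy's ordinary maximum-likelihood fit rather than the paper's LH moments): fit annual maxima built from simulated daily flows, then read off a 100-year quantile. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(9)
daily = rng.gamma(shape=2.0, scale=50.0, size=(60, 365))   # 60 years of flow
annual_max = daily.max(axis=1)

c, loc, scale = genextreme.fit(annual_max)       # SciPy convention: c = -xi
T = 100                                          # return period, years
q100 = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(f"shape c = {c:+.3f}, 100-year flood estimate = {q100:.1f}")
```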

  5. Cost-effectiveness of targeted screening for abdominal aortic aneurysm. Monte Carlo-based estimates.

    Science.gov (United States)

    Pentikäinen, T J; Sipilä, T; Rissanen, P; Soisalon-Soininen, S; Salo, J

    2000-01-01

    This article reports a cost-effectiveness analysis of targeted screening for abdominal aortic aneurysm (AAA). A major emphasis was on the estimation of distributions of costs and effectiveness. We performed a Monte Carlo simulation using C programming language in a PC environment. Data on survival and costs, and a majority of screening probabilities, were from our own empirical studies. Natural history data were based on the literature. Each screened male gained 0.07 life-years at an incremental cost of FIM 3,300. The expected values differed from zero very significantly. For females, expected gains were 0.02 life-years at an incremental cost of FIM 1,100, which was not statistically significant. Cost-effectiveness ratios and their 95% confidence intervals were FIM 48,000 (27,000-121,000) and 54,000 (22,000-infinity) for males and females, respectively. Sensitivity analysis revealed that the results for males were stable. Individual variation in life-year gains was high. Males seemed to benefit from targeted AAA screening, and the results were stable. As far as the cost-effectiveness ratio is considered acceptable, screening for males seemed to be justified. However, our assumptions about growth and rupture behavior of AAAs might be improved with further clinical and epidemiological studies. As a point estimate, females benefited in a similar manner, but the results were not statistically significant. The evidence of this study did not justify screening of females.
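
    A hedged Monte Carlo sketch of this type of cost-effectiveness calculation: simulate per-person gains and costs, then bootstrap the incremental cost-effectiveness ratio to obtain a percentile confidence interval. The distributions and parameters are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 2000                                          # simulated screened males
gain = rng.normal(0.07, 0.5, n)                   # life-years gained per person
cost = rng.normal(3300.0, 800.0, n)               # incremental cost (FIM)

boot = rng.integers(0, n, size=(1000, n))         # bootstrap over persons
icer = cost[boot].mean(axis=1) / gain[boot].mean(axis=1)
lo, hi = np.percentile(icer, [2.5, 97.5])
print(f"ICER ~ {np.median(icer):.0f} FIM per life-year (95% CI {lo:.0f}-{hi:.0f})")
```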

  6. Energy distribution of the fast electron from Cu and CH targets irradiated with fs-laser pulses

    International Nuclear Information System (INIS)

    Cai Dafeng; Gu Yuqiu; Zheng Zhijian; Zhou Weimin; Jiao Chunye

    2014-01-01

    In order to investigate the effect of the target material on the fast-electron energy distribution, the energy distributions of fast electrons from the front and the rear of Cu and CH targets were measured during the interaction of femtosecond laser pulses with foil targets. The results show that the fast-electron spectra from the front of the Cu and CH targets are similar, indicating that the energy distribution of fast electrons depends very little on the target material. The fast-electron spectra from the rear of the Cu and CH targets are clearly dissimilar, indicating a strong effect of the target material on fast-electron transport. The fast-electron spectrum from the Cu target is 'softened', owing to electron recirculation and the self-generated magnetic field produced by electrons transported in the target. The fast-electron spectrum from the CH target is Maxwellian, owing to collisional effects as the electrons are transported in the target. (authors)

  7. Multimedia approach to estimating target cleanup levels for soils at hazardous waste sites

    International Nuclear Information System (INIS)

    Hwang, S.T.

    1990-04-01

    Contaminated soils at hazardous and nuclear waste sites pose a potential threat to human health via transport through environmental media and subsequent human intake. To minimize health risks, it is necessary to identify those risks and ensure that appropriate actions are taken to protect public health. The regulatory process typically includes identification of target cleanup levels and evaluation of the effectiveness of remedial alternatives and the corresponding reduction in risks at a site. The US Environmental Protection Agency (EPA) recommends that exposure assessments be combined with toxicity information to quantify the health risk posed by a specific site. This recommendation forms the basis for establishing target cleanup levels. An exposure assessment must first identify the chemical concentration in a specific medium (soil, water, air, or food), estimate the exposure potential based on human intake from that medium, and then combine this with health criteria to estimate the upper-bound health risks for noncarcinogens and carcinogens. Estimation of target cleanup levels uses these same principles but in reverse order: the procedure starts from establishing a permissible health-effect level and ends with an estimated target cleanup level obtained through the exposure assessment process. 17 refs
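
    A minimal sketch of the reverse calculation described above: back-calculate a soil cleanup level from a permissible risk via a generic intake equation. All parameter values (intake rate, body weight, slope factor, etc.) are textbook-style assumptions, not values from the report.

```python
# Back-calculate a soil cleanup level from a permissible cancer risk.
target_risk = 1e-6          # permissible incremental lifetime cancer risk
slope_factor = 1.5          # (mg/kg-day)^-1, hypothetical carcinogen
ingestion = 100e-6          # soil ingestion, kg/day (100 mg/day)
body_weight = 70.0          # kg
exp_freq = 350.0 / 365.0    # exposure days per year, as a fraction
exp_years = 30.0            # exposure duration, years
averaging = 70.0            # averaging time, years (lifetime)

# Forward relation: risk = C * (intake per unit concentration) * SF.
# Invert it for the concentration C that meets the target risk.
daily_intake_per_conc = (ingestion * exp_freq * exp_years
                         / (body_weight * averaging))
cleanup_level = target_risk / (slope_factor * daily_intake_per_conc)
print(f"target cleanup level ~ {cleanup_level:.2f} mg/kg soil")
```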

  8. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
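
    A small sketch of Dempster's rule of combination, the fusion step named above, over a toy frame of travel-time states; the frame {fast, slow} and the mass assignments are assumptions.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                    # mass assigned to empty set
    k = 1.0 - conflict                           # normalization constant
    return {s: v / k for s, v in combined.items()}

F, S = frozenset({"fast"}), frozenset({"slow"})
FS = F | S                                       # ignorance: either state
m_point = {F: 0.6, S: 0.1, FS: 0.3}              # from point detectors
m_interval = {F: 0.5, S: 0.2, FS: 0.3}           # from interval detectors
print(dempster(m_point, m_interval))
```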

  9. Comparison of estimation methods for fitting weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.

  11. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Science.gov (United States)

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs intended to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of the different targets in a biological process and the observation that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach that integrates affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological steps in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid the identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by
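
    A sketch of the network-efficiency idea using networkx: score each node of a toy cascade graph by the drop in global efficiency when that node is removed (inhibited). The graph is a placeholder, not the actual human clotting cascade.

```python
import networkx as nx

# Toy cascade graph; node names are loosely inspired by clotting factors.
G = nx.Graph()
G.add_edges_from([("XII", "XI"), ("XI", "IX"), ("IX", "X"), ("VIII", "X"),
                  ("VII", "X"), ("X", "II"), ("II", "fibrin")])

base = nx.global_efficiency(G)
fragility = {}
for node in list(G.nodes):
    H = G.copy()
    H.remove_node(node)                          # simulate full inhibition
    fragility[node] = base - nx.global_efficiency(H)

for node, drop in sorted(fragility.items(), key=lambda kv: -kv[1]):
    print(f"{node:7s} efficiency drop: {drop:.3f}")
```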

  12. Distributed estimation based on observations prediction in wireless sensor networks

    KAUST Repository

    Bouchoucha, Taha

    2015-03-19

    We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.

  13. Real-time measurements and their effects on state estimation of distribution power system

    DEFF Research Database (Denmark)

    Han, Xue; You, Shi; Thordarson, Fannar

    2013-01-01

    This paper aims at analyzing the potential value of using different real-time metering and measuring instruments applied in the low voltage distribution networks for state-estimation. An algorithm is presented to evaluate different combinations of metering data using a tailored state estimator. It is followed by a case study based on the proposed algorithm. A real distribution grid feeder with different types of meters installed either in the cabinets or at the customer side is selected for simulation and analysis. Standard load templates are used to initiate the state estimation. The deviations between the estimated values (voltage and injected power) and the measurements are applied to evaluate the accuracy of the estimated grid states. Eventually, some suggestions are provided for the distribution grid operators on placing the real-time meters in the distribution grid.

  14. Depth-Dose and LET Distributions of Antiproton Beams in Various Target Materials

    DEFF Research Database (Denmark)

    Herrmann, Rochus; Olsen, Sune; Petersen, Jørgen B.B.

    …the annihilation process. Materials: We have investigated the impact of substituting the target material on the depth-dose distribution of pristine and spread-out antiproton beams using the FLUKA Monte Carlo transport program. Classical ICRP targets are compared to water phantoms. In addition, the track-averaged unrestricted LET is calculated for all configurations. Finally, we investigate which concentrations of gadolinium and boron are needed in a water target in order to observe a significant change in the antiproton depth-dose distribution. Results: The results indicate that there is no significant change in the depth-dose distribution and average LET when substituting the materials. Adding boron and gadolinium to a water phantom, up to concentrations of 1 per 1000 atoms, did not change the depth-dose profile nor the average LET. Conclusions: According to our FLUKA calculations, antiproton neutron capture

  17. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-01

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed

  18. Acoustic Estimates of Distribution and Biomass of Different Acoustic Scattering Types Between the New England Shelf Break and Slope Waters

    KAUST Repository

    McLaren, Alexander

    2011-11-01

    Mesopelagic fishes are attracting a wider audience on account of their great ecological significance and the large biomass they represent. Data from the National Marine Fisheries Service (NMFS) provided the opportunity to explore an unknown region of the North-West Atlantic, adjacent to one of the most productive fisheries in the world. The acoustic data collected during the cruise required the identification of acoustically distinct scattering types in order to make inferences about the migrations, distributions and biomass of mesopelagic scattering layers. Six scattering types were identified in our data by the proposed method, and their migrations and distributions were traced in the top 200 m of the water column. The method was able to detect and trace the movements of three scattering types to 1000 m depth, two of which can be further subdivided. This identification process enabled the development of three physically derived target-strength models, adapted to traceable acoustic scattering types, for the analysis of biomass and length distribution to 1000 m depth. The abundance and distribution of acoustic targets varied closely in relation to the varying physical environments associated with a warm core ring in the New England continental shelf break region. The continental shelf break produces biomass density estimates that are twice as high as those of the warm core ring, and the surrounding continental slope waters are an order of magnitude lower than either estimate. The biomass associated with distinct layers is assessed, and any benefits brought about by upwelling at the edge of the warm core ring are shown not to result in a higher abundance of deepwater species. Finally, the asymmetric diurnal migrations in shelf break waters contrast markedly with the symmetry of migrating layers within the warm ring, both in structure and in density estimates, supporting a theory of predatory and nutritional constraints on migrating pelagic species.

  19. Estimating probable flaw distributions in PWR steam generator tubes

    International Nuclear Information System (INIS)

    Gorman, J.A.; Turner, A.P.L.

    1997-01-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses

  20. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of the distributed source term, diffusion coefficient, and reaction coefficient of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions for simultaneous estimation of the input and the parameters are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on a plant signal-richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat-transport model using simulated data.

  1. Estimation of the tritium retention in ITER tungsten divertor target using macroscopic rate equations simulations

    Science.gov (United States)

    Hodille, E. A.; Bernard, E.; Markelj, S.; Mougenot, J.; Becquart, C. S.; Bisson, R.; Grisolia, C.

    2017-12-01

    Based on macroscopic rate equation simulations of tritium migration in an actively cooled tungsten (W) plasma-facing component (PFC) using the code MHIMS (migration of hydrogen isotopes in metals), an estimate has been made of the tritium retention in the ITER W divertor target under a non-uniform exponential distribution of particle fluxes. Two grades of material are considered to be exposed to tritium ions: undamaged W and W damaged by exposure to fast fusion neutrons. Because of the strong temperature gradient in the PFC, the impact of the Soret effect on tritium retention is also evaluated for both cases. From the simulations, the evolution of the tritium retention and of the tritium migration depth is obtained as a function of the implanted flux and the number of cycles. From these evolutions, extrapolation laws are built to estimate the number of cycles needed for tritium to permeate from the implantation zone to the cooled surface and to quantify the corresponding retention of tritium throughout the W PFC.

  2. The space distribution of neutrons generated in massive lead target by relativistic nuclear beam

    International Nuclear Information System (INIS)

    Chultem, D.; Damdinsuren, Ts.; Enkh-Gin, L.; Lomova, L.; Perelygin, V.; Tolstov, K.

    1993-01-01

    The present paper is devoted to the use of solid-state nuclear track detectors in studying neutron generation in an extended lead spallation target. The measured spatial distribution of neutrons inside the lead target and the neutron distribution in a thick water moderator are assessed. (Author)

  3. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering the possible consequences. A better understanding of the impact of different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, the results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level). When spatial interpolation was used to fill in missing values in non-sampled areas, accuracy improved remarkably; this holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).
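
    A minimal inverse-distance-weighting sketch of the spatial interpolation step described above: estimate cattle numbers at unsampled locations from sampled parishes. Coordinates and counts are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)
xy_sampled = rng.uniform(0, 100, size=(170, 2))          # sampled parishes
counts = 500 + 5 * xy_sampled[:, 0] + 20 * rng.standard_normal(170)

def idw(xy_query, power=2.0, eps=1e-9):
    """Inverse-distance-weighted estimate at an unsampled location."""
    d = np.linalg.norm(xy_sampled - xy_query, axis=1)
    w = 1.0 / (d + eps) ** power                         # closer = heavier
    return np.sum(w * counts) / np.sum(w)

print(f"interpolated count at (50, 50): {idw(np.array([50.0, 50.0])):.0f}")
```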

  4. Estimating the temporal distribution of exposure-related cancers

    International Nuclear Information System (INIS)

    Carter, R.L.; Sposto, R.; Preston, D.L.

    1993-09-01

    The temporal distribution of exposure-related cancers is relevant to the study of carcinogenic mechanisms. Statistical methods for extracting pertinent information from time-to-tumor data, however, are not well developed. Separation of incidence from 'latency' and the contamination of background cases are two problems. In this paper, we present methods for estimating both the conditional distribution given exposure-related cancers observed during the study period and the unconditional distribution. The methods adjust for confounding influences of background cases and the relationship between time to tumor and incidence. Two alternative methods are proposed. The first is based on a structured, theoretically derived model and produces direct inferences concerning the distribution of interest but often requires more-specialized software. The second relies on conventional modeling of incidence and is implemented through readily available, easily used computer software. Inferences concerning the effects of radiation dose and other covariates, however, are not always obtainable directly. We present three examples to illustrate the use of these two methods and suggest criteria for choosing between them. The first approach was used, with a log-logistic specification of the distribution of interest, to analyze times to bone sarcoma among a group of German patients injected with 224 Ra. Similarly, a log-logistic specification was used in the analysis of time to chronic myelogenous leukemias among male atomic-bomb survivors. We used the alternative approach, involving conventional modeling, to estimate the conditional distribution of exposure-related acute myelogenous leukemias among male atomic-bomb survivors, given occurrence between 1 October 1950 and 31 December 1985. All analyses were performed using Poisson regression methods for analyzing grouped survival data. (J.P.N.)

  5. Distributed and decentralized state estimation in gas networks as distributed parameter systems.

    Science.gov (United States)

    Ahmadian Behrooz, Hesam; Boozarjomehry, R Bozorgmehry

    2015-09-01

    In this paper, a framework for distributed and decentralized state estimation in high-pressure and long-distance gas transmission networks (GTNs) is proposed. The non-isothermal model of the plant including mass, momentum and energy balance equations are used to simulate the dynamic behavior. Due to several disadvantages of implementing a centralized Kalman filter for large-scale systems, the continuous/discrete form of extended Kalman filter for distributed and decentralized estimation (DDE) has been extended for these systems. Accordingly, the global model is decomposed into several subsystems, called local models. Some heuristic rules are suggested for system decomposition in gas pipeline networks. In the construction of local models, due to the existence of common states and interconnections among the subsystems, the assimilation and prediction steps of the Kalman filter are modified to take the overlapping and external states into account. However, dynamic Riccati equation for each subsystem is constructed based on the local model, which introduces a maximum error of 5% in the estimated standard deviation of the states in the benchmarks studied in this paper. The performance of the proposed methodology has been shown based on the comparison of its accuracy and computational demands against their counterparts in centralized Kalman filter for two viable benchmarks. In a real life network, it is shown that while the accuracy is not significantly decreased, the real-time factor of the state estimation is increased by a factor of 10. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Bayesian estimation of Weibull distribution parameters

    International Nuclear Information System (INIS)

    Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.

    1994-11-01

    In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when the data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method to real data from nuclear power plant operating feedback analysis is presented. (authors). 8 refs., 2 figs., 2 tabs
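
    A sketch of the underlying estimation problem, Weibull fitting with right-censored data, via direct maximum likelihood (the SEM and WLB-SIR machinery of the record is not reproduced). Failure and censoring times are simulated, not plant feedback data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(12)
true_shape, true_scale = 1.8, 1000.0
t = weibull_min.rvs(true_shape, scale=true_scale, size=200, random_state=rng)
c = rng.uniform(200.0, 1500.0, size=200)         # censoring times
obs = np.minimum(t, c)                           # observed times
failed = t <= c                                  # False = right-censored

def neg_loglik(p):
    k, lam = np.exp(p)                           # keep parameters positive
    ll = np.sum(weibull_min.logpdf(obs[failed], k, scale=lam))
    ll += np.sum(weibull_min.logsf(obs[~failed], k, scale=lam))  # survivors
    return -ll

res = minimize(neg_loglik, x0=[0.0, 6.0], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(f"shape ~ {k_hat:.2f} (true {true_shape}), "
      f"scale ~ {lam_hat:.0f} (true {true_scale})")
```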

  7. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to the experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computation. A comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.

  8. Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates.

    Directory of Open Access Journals (Sweden)

    Caroline A Curtis

    Full Text Available Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. We compiled expert-knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the 'plant characteristics' information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate data from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Our results show that niches from distribution data are consistently broader than USDA PLANTS experts' knowledge and likely provide more robust estimates of climatic tolerance, especially for

  9. Methodology for estimation of potential for solar water heating in a target area

    International Nuclear Information System (INIS)

    Pillai, Indu R.; Banerjee, Rangan

    2007-01-01

    Proper estimation of the potential of any renewable energy technology is essential for planning and promoting that technology. The methods reported in the literature for estimating the potential of solar water heating in a target area are aggregate in nature. A methodology for potential estimation (technical, economic and market potential) of solar water heating in a target area is proposed in this paper. This methodology links the micro-level factors and macro-level market effects affecting the diffusion or adoption of solar water heating systems. Different sectors with end uses of low-temperature hot water are considered for potential estimation. Potential is estimated at each end-use point by simulation with TRNSYS, taking micro-level factors into account. The methodology is illustrated for a synthetic area in India with an area of 2 sq. km and a population of 10,000. The end-use sectors considered are residential, hospitals, nursing homes and hotels. The estimated technical potential and market potential are 1700 m² and 350 m² of collector area, respectively. The annual energy savings for the technical potential in the area are estimated at 110 kWh per capita and 0.55 million kWh per sq. km, with an annual average peak saving of 1 MW. The annual savings are 650 kWh per m² of collector area and account for approximately 3% of the total electricity consumption of the target area. Some of the salient features of the model are the factors considered for potential estimation and the estimation of the typical-day electricity usage pattern, the amount of electricity savings, and the savings during peak load. The framework is general and enables accurate estimation of the potential of solar water heating for a city or block. Energy planners and policy makers can use this framework for tracking and promoting the diffusion of solar water heating systems. (author)

  10. Flow distribution in the accelerator-production-of-tritium target

    International Nuclear Information System (INIS)

    Siebe, D.A.; Spatz, T.L.; Pasamehmetoglu, K.O.; Sherman, M.P.

    1999-01-01

    Achieving nearly uniform flow distributions in the accelerator production of tritium (APT) target structures is an important design objective. Manifold effects tend to cause a nonuniform distribution in flow systems of this type, although nearly even distribution can be achieved. A program of hydraulic experiments is underway to provide a database for validation of calculational methodologies that may be used for analyzing this problem and to evaluate the approach with the most promise for achieving a nearly even flow distribution. Data from the initial three tests are compared to predictions made using four calculational methods. The data show that optimizing the ratio of the supply-to-return-manifold areas can produce an almost even flow distribution in the APT ladder assemblies. The calculations compare well with the data for ratios of the supply-to-return-manifold areas spanning the optimum value. Thus, the results to date show that a nearly uniform flow distribution can be achieved by carefully sizing the supply and return manifolds and that the calculational methods available are adequate for predicting the distributions through a range of conditions

  11. A Geology-Based Estimate of Connate Water Salinity Distribution

    Science.gov (United States)

    2014-09-01

    …poses serious environmental concerns if connate water is mobilized into shallow aquifers or surface water systems. Estimating the distribution of… groundwater flow and salinity transport near the Herbert Hoover Dike (HHD) surrounding Lake Okeechobee in Florida. The simulations were conducted using the… on the geologic configuration at equilibrium, and the horizontal salinity distribution is strongly linked to aquifer connectivity because

  12. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976
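
    A toy sketch of the distributed iteration: sub-systems hold local measurements y_i = A_i x + noise and repeatedly add locally computed weighted-gradient terms, a scaled Richardson iteration standing in for the paper's preconditioned scheme; the sum of local terms would be formed by neighborhood communication in a real network. Data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(13)
x_true = rng.standard_normal(4)
subsystems = []
for _ in range(5):                               # five interconnected nodes
    A = rng.standard_normal((3, 4))
    W = np.eye(3)                                # measurement weights
    y = A @ x_true + 0.01 * rng.standard_normal(3)
    subsystems.append((A, W, y))

H = sum(A.T @ W @ A for A, W, _ in subsystems)   # global normal matrix
alpha = 1.0 / np.linalg.eigvalsh(H).max()        # step size ensuring convergence

x = np.zeros(4)
for _ in range(500):
    # Each node computes its own gradient term from local data only.
    grad = sum(A.T @ W @ (y - A @ x) for A, W, y in subsystems)
    x = x + alpha * grad
print("estimation error:", np.linalg.norm(x - x_true))
```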

  14. A new approach to the estimation of radiopharmaceutical radiation dose distributions

    International Nuclear Information System (INIS)

    Hetherington, E.L.R.; Wood, N.R.

    1975-03-01

    For a photon energy of 150 keV, the Monte Carlo technique of photon-history simulation was used to obtain estimates of the dose distribution in a human phantom for three activity distributions relevant to diagnostic nuclear medicine. In this preliminary work, the number of photon histories considered was insufficient to produce complete dose contours, and the dose distributions are presented in the form of colour-coded diagrams. The distributions obtained illustrate an important deficiency in the MIRD Schema for dose estimation. Although the Schema uses the same mathematical technique for calculating photon doses, the results are obtained as average values for the whole body and for complete organs. It is shown that the actual dose distributions, particularly those for the whole body, may differ significantly from the average value calculated using the MIRD Schema and published absorbed fractions. (author)

  15. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.

  16. Estimation of spectral distribution of sky radiance using a commercial digital camera.

    Science.gov (United States)

    Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao

    2016-01-10

    Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
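    The core idea, a fixed linear transformation from RGB counts to polynomial coefficients of the spectrum, can be sketched on synthetic data as follows. Everything here (the toy camera responses, the degree-2 polynomial, the training procedure) is an illustrative assumption, not the authors' calibration:

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(430e-9, 680e-9, 50)                # wavelength grid [m]
wn = (wl - wl.mean()) / (wl.max() - wl.min())       # normalized wavelength

deg = 2
basis = np.vander(wn, deg + 1)                      # polynomial basis in wavelength
resp = np.stack([np.exp(-0.5 * ((wl - c) / 40e-9) ** 2)
                 for c in (460e-9, 540e-9, 610e-9)])  # toy B, G, R spectral responses

# Synthetic training set: polynomial spectra and the RGB counts they produce.
coef_train = rng.normal(size=(200, deg + 1))
rgb_train = (coef_train @ basis.T) @ resp.T

# The linear transformation from RGB counts to polynomial coefficients.
M, *_ = np.linalg.lstsq(rgb_train, coef_train, rcond=None)

# Recover a held-out spectrum from its RGB counts alone.
c_test = rng.normal(size=deg + 1)
rgb_test = (c_test @ basis.T) @ resp.T
spec_hat = (rgb_test @ M) @ basis.T
spec_true = c_test @ basis.T
err = np.linalg.norm(spec_hat - spec_true) / np.linalg.norm(spec_true)
print(f"relative reconstruction error: {err:.2e}")
```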

  17. Effect of Smart Meter Measurements Data On Distribution State Estimation

    DEFF Research Database (Denmark)

    Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte

    2018-01-01

    in the physical grid can impose significant stress not only on the communication infrastructure but also on the control algorithms. This paper proposes a methodology to analyze the real-time smart meter data needed from low-voltage distribution grids and their applicability in distribution state estimation...

  18. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution.

    Science.gov (United States)

    Han, Fang; Liu, Han

    2017-02-01

    The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.

  19. Estimation of modal parameters using bilinear joint time frequency distributions

    Science.gov (United States)

    Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.

    2007-07-01

    In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The Smoothed Pseudo Wigner-Ville distribution, a member of Cohen's class, is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces the cross-terms that are troublesome in the Wigner-Ville distribution while retaining the resolution. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.

  20. The current duration design for estimating the time to pregnancy distribution

    DEFF Research Database (Denmark)

    Gasbarra, Dario; Arjas, Elja; Vehtari, Aki

    2015-01-01

    This paper was inspired by the studies of Niels Keiding and co-authors on estimating the waiting time-to-pregnancy (TTP) distribution, and in particular on using the current duration design in that context. In this design, a cross-sectional sample of women is collected from those who are currently...... attempting to become pregnant, and then by recording from each the time she has been attempting. Our aim here is to study the identifiability and the estimation of the waiting time distribution on the basis of current duration data. The main difficulty in this stems from the fact that very short waiting...... times are only rarely selected into the sample of current durations, and this renders their estimation unstable. We introduce here a Bayesian method for this estimation problem, prove its asymptotic consistency, and compare the method to some variants of the non-parametric maximum likelihood estimators...

  1. Estimation of photon energy distribution in gamma calibration field

    International Nuclear Information System (INIS)

    Takahashi, Fumiaki; Shimizu, Shigeru; Yamaguchi, Yasuhiro

    1997-03-01

    Photon survey instruments used for radiation protection are usually calibrated in gamma radiation fields that are traceable to the national standard with regard to exposure. Although scattered radiation exists in the calibration field alongside the primary gamma-rays, routine calibration work gives no consideration to the effect of the scattered radiation on the energy distribution. The scattered radiation can change the photon energy spectra in the field, and this can result in misinterpretation of energy-dependent instrument responses. Construction materials in the field affect the energy distribution and magnitude of the scattered radiation. The geometric relationship between a gamma source and an instrument can determine the energy distribution at the calibration point. Therefore, it is essential for the assurance of quality calibration to estimate the energy spectra at the gamma calibration fields. Accordingly, photon energy distributions at some fields in the Facility of Radiation Standard of the Japan Atomic Energy Research Institute (JAERI) were estimated by measurements using a NaI(Tl) detector and Monte Carlo calculations. It was found that the use of a collimator noticeably changes the photon energy distribution. The origin of the scattered radiation and the ratio of the scattered radiation to the primary gamma-rays were obtained. The results can help to improve the calibration of photon survey instruments at the JAERI. (author)

  2. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Burr type III is an important distribution used to model failure-time data. This paper addresses the problem of estimating the parameters of the Burr type III distribution by maximum likelihood estimation (MLE) when the samples are left censored. As closed-form expressions for the MLEs of the parameters cannot be derived, approximate solutions are obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parameter values. A real-life example is also presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.
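    A compact illustration of this estimation problem is easy to set up with scipy. The sketch below (an assumption-laden toy, not the paper's simulation study) writes the left-censored log-likelihood of the Burr type III distribution, with cdf F(x) = (1 + x^(-c))^(-k), and maximizes it numerically, since no closed form exists:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import burr  # scipy's 'burr' is the Burr type III distribution

rng = np.random.default_rng(2)
c_true, k_true = 2.0, 1.5
x = burr.rvs(c_true, k_true, size=500, random_state=rng)

# Left censoring: values below the detection limit t are only known to be <= t.
t = 0.5
censored = x <= t
x_obs = x[~censored]
n_cens = censored.sum()

def negloglik(theta):
    c, k = np.exp(theta)                       # work on the log scale to keep c, k > 0
    logf = (np.log(c) + np.log(k) - (c + 1) * np.log(x_obs)
            - (k + 1) * np.log1p(x_obs ** (-c)))
    logF_t = -k * np.log1p(t ** (-c))          # log F(t) for each censored point
    return -(logf.sum() + n_cens * logF_t)

res = minimize(negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE of (c, k):", np.exp(res.x))         # iterative solution, no closed form
```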

  3. Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

    Science.gov (United States)

    Sass, D. A.; Schmitt, T. A.; Walker, C. M.

    2008-01-01

    Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…

  4. FrFT-CSWSF: Estimating cross-range velocities of ground moving targets using multistatic synthetic aperture radar

    Directory of Open Access Journals (Sweden)

    Li Chenlei

    2014-10-01

    Estimating cross-range velocity is a challenging task for space-borne synthetic aperture radar (SAR), and one that is important for ground moving target indication (GMTI). Because the velocity of a target is very small compared with that of the satellite, it is difficult to estimate correctly using a conventional monostatic platform algorithm. To overcome this problem, a novel method employing multistatic SAR is presented in this letter. The proposed hybrid method, which is based on an extended space-time model (ESTIM) of the azimuth signal, has two steps: first, a set of finite impulse response (FIR) filter banks based on a fractional Fourier transform (FrFT) is used to separate multiple targets within a range gate; second, a cross-correlation spectrum weighted subspace fitting (CSWSF) algorithm is applied to each of the separated signals in order to estimate their respective parameters. As verified through computer simulation with the Cartwheel, Pendulum and Helix constellations, this proposed time-frequency-subspace method effectively improves the estimation precision of the cross-range velocities of multiple targets.

  5. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained as the expected values of their marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions whose values are difficult to determine. Therefore, random samples are generated according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
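    The Gibbs sampler described above can be sketched in a few lines. Under the Jeffreys prior the two full conditionals are standard (inverse Wishart for Σ and matrix normal for the coefficients); the sketch below is a minimal illustration on simulated data, with the degrees-of-freedom convention and all dimensions chosen for the toy example rather than taken from the paper:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
n, q, m = 200, 3, 2                      # samples, predictors, responses
X = rng.normal(size=(n, q))
B_true = rng.normal(size=(q, m))
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])
Y = X @ B_true + rng.multivariate_normal(np.zeros(m), Sigma_true, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                # conditional posterior mean of B

# Gibbs sampler under the Jeffreys prior p(B, Sigma) ~ |Sigma|^-(m+1)/2:
#   Sigma | B, Y   ~ Inverse-Wishart(df = n, scale = (Y - XB)'(Y - XB))
#   vec(B) | Sigma ~ Normal(vec(B_hat), Sigma kron (X'X)^-1)  (column-stacked vec)
B = B_hat.copy()
draws_B, draws_S = [], []
for it in range(2000):
    R = Y - X @ B
    Sigma = invwishart.rvs(df=n, scale=R.T @ R, random_state=rng)
    cov = np.kron(Sigma, XtX_inv)
    vecB = rng.multivariate_normal(B_hat.flatten(order="F"), cov)
    B = vecB.reshape(q, m, order="F")
    if it >= 500:                        # discard burn-in
        draws_B.append(B); draws_S.append(Sigma)

print("posterior mean of B:\n", np.mean(draws_B, axis=0))
print("posterior mean of Sigma:\n", np.mean(draws_S, axis=0))
```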

  6. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating unit parameters and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and "play back" of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of playing back at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority's Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the "grid visibility" question. The generator parameter identification method fills an important and practical need of the industry. The "energy function" based

  7. Assessing the Adequacy of Probability Distributions for Estimating the Extreme Events of Air Temperature in Dabaa Region

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2015-01-01

    Assessing the adequacy of probability distributions for estimating the extreme events of air temperature in the Dabaa region is one of the prerequisites for any design purpose at the Dabaa site, and it can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These distributions include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied for estimating the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests involving Kolmogorov-Smirnov and Anderson-Darling are used for checking the adequacy of fitting the method/distribution for the estimation of maximum/minimum temperature. Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated, as performance indicators, to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicated that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable, accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators showed the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution with the Lieblein technique is the best for October.
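    For the Gumbel case, the Method of Moments step and the resulting design values are simple enough to show directly. The numbers below are synthetic and purely illustrative:

```python
import numpy as np

def gumbel_mom_fit(annual_maxima):
    """Method-of-moments estimates of Gumbel location/scale for block maxima."""
    xbar, s = np.mean(annual_maxima), np.std(annual_maxima, ddof=1)
    beta = s * np.sqrt(6.0) / np.pi          # scale
    mu = xbar - 0.5772156649 * beta          # location (Euler-Mascheroni constant)
    return mu, beta

def gumbel_return_level(mu, beta, T):
    """Design value exceeded on average once every T years."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Hypothetical annual maximum temperatures (deg C), illustrative only.
rng = np.random.default_rng(4)
data = rng.gumbel(loc=38.0, scale=1.8, size=40)
mu, beta = gumbel_mom_fit(data)
for T in (10, 50, 100):
    print(f"T = {T:4d} yr  ->  design value = {gumbel_return_level(mu, beta, T):.2f} C")
```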

  8. Estimation of particle size distribution of nanoparticles from electrical ...

    Indian Academy of Sciences (India)

    2018-02-02

    Feb 2, 2018 ... An indirect method of estimation of size distribution of nanoparticles in a nanocomposite is ... The present approach exploits DC electrical current–voltage ... the sizes of nanoparticles (NPs) by electrical characterization.

  9. On the distribution of estimators of diffusion constants for Brownian motion

    International Nuclear Information System (INIS)

    Boyer, Denis; Dean, David S

    2011-01-01

    We discuss the distribution of various estimators for extracting the diffusion constant of single Brownian trajectories obtained by fitting the squared displacement of the trajectory. The analysis of the problem can be framed in terms of quadratic functionals of Brownian motion that correspond to the Euclidean path integral for simple harmonic oscillators with time-dependent frequencies. Explicit analytical results are given for the distribution of the diffusion constant estimator in a number of cases, and our results are confirmed by numerical simulations.
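    The basic experiment behind such distributions is easy to reproduce: estimate the diffusion constant separately from many finite trajectories and look at the scatter of the estimates. A minimal sketch (squared-increment estimator only, not the paper's full family of fitted estimators):

```python
import numpy as np

rng = np.random.default_rng(5)
D_true, dt, n_steps, n_traj = 1.0, 0.01, 100, 5000

# Simulate many 1-D Brownian trajectories and estimate D from each one
# using the mean squared increment: E[(dx)^2] = 2 D dt.
dx = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_traj, n_steps))
D_hat = (dx ** 2).mean(axis=1) / (2 * dt)   # one estimate per trajectory

# The estimator scatters around the true value; this spread is what the
# paper characterizes analytically.
print("mean of estimates :", D_hat.mean())
print("std  of estimates :", D_hat.std())
print("chi-square theory :", D_true * np.sqrt(2.0 / n_steps))
```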

  10. Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Fengjun Hu

    2016-01-01

    To address the problem of easily losing track of the target when obstacles appear during intelligent robot target tracking, this paper proposes a target tracking algorithm integrating a reduced-dimension optimal Kalman filtering algorithm, based on the phase-path volume integral, with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, Kalman filtering is used for fusion optimization aimed at those defects. Then, to address the increased amount of calculation in the integrated algorithm, the dimension is reduced by using the phase-path volume integral instead of the Gaussian integral in the Kalman algorithm, reducing the number of sampling points in the filtering process without influencing the operational precision of the original algorithm. Finally, the target centroid position from the Camshift algorithm iteration is set as the observation value of the improved Kalman filtering algorithm to correct the predicted value, so as to make an optimal estimate of the target centroid position and keep tracking the target, allowing the robot to understand the environmental scene and react correctly and in time to changes. The experiments show that the improved algorithm proposed in this paper performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through the dimension reduction.
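    The fusion step, feeding the Camshift centroid into a Kalman filter as the observation, can be illustrated with a plain constant-velocity filter. This is a generic stand-in, not the reduced-dimension phase-path volume variant the paper develops:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe the centroid only
Q = 0.01 * np.eye(4)                                # process noise
R = 4.0 * np.eye(2)                                 # measurement noise (pixels^2)

x = np.zeros(4)            # state: [px, py, vx, vy]
P = 10.0 * np.eye(4)

def kf_step(x, P, z):
    # Predict, then correct the prediction with the measured centroid z.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(6)
for t in range(20):
    true_pos = np.array([2.0 * t, 1.5 * t])
    z = true_pos + rng.normal(scale=2.0, size=2)    # noisy Camshift centroid
    x, P = kf_step(x, P, z)
print("filtered position:", x[:2], " velocity:", x[2:])
```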

  11. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
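    As a flavor of such estimators, here is the simple minus-sampling (border) estimator applied to D for a point pattern in the unit square; note that the paper actually recommends the Hanisch estimator for D and minus-sampling for Hq, so this sketch shows the simpler of the two ideas:

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.uniform(size=(300, 2))                 # point pattern in the unit square

# Nearest-neighbour distance for every point.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)

# Distance from each point to the boundary of the observation window.
border = np.minimum(pts, 1.0 - pts).min(axis=1)

def D_minus(r):
    # Minus-sampling (border) estimator: only points at least r from the
    # boundary contribute, which removes edge effects at the cost of data.
    ok = border >= r
    return np.mean(nn[ok] <= r) if ok.any() else np.nan

lam = len(pts)                                   # intensity in the unit square
for r in (0.02, 0.04, 0.06):
    # For a Poisson process, D(r) = 1 - exp(-lambda * pi * r^2).
    theory = 1 - np.exp(-lam * np.pi * r ** 2)
    print(f"r={r:.2f}  D_hat={D_minus(r):.3f}  Poisson theory={theory:.3f}")
```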

  12. Margin estimation and disturbances of irradiation field in layer-stacking carbon-ion beams for respiratory moving targets.

    Science.gov (United States)

    Tajiri, Shinya; Tashiro, Mutsumi; Mizukami, Tomohiro; Tsukishima, Chihiro; Torikoshi, Masami; Kanai, Tatsuaki

    2017-11-01

    Carbon-ion therapy by layer-stacking irradiation for static targets has been practised in clinical treatments. In order to apply this technique to a moving target, disturbances of carbon-ion dose distributions due to respiratory motion have been studied based on measurements using a respiratory motion phantom, and the margin estimation given by √(internal margin² + setup margin²) has been assessed. We assessed the volume in which the variation in the ratio of the dose for a target moving due to respiration relative to the dose for a static target was within 5%. The margins were insufficient for use with layer-stacking irradiation of a moving target, and an additional margin was required. The lateral movement of a target converts to range variation, as the thickness of the range compensator changes with the movement of the target. Although the additional margin changes according to the shape of the ridge filter, dose uniformity of 5% can be achieved for a spherical target 93 mm in diameter when the upward range variation is limited to 5 mm and an additional margin of 2.5 mm is applied in the case of our ridge filter. Dose uniformity in a clinical target largely depends on the shape of the mini-peak as well as on the bolus shape. We have shown the relationship between range variation and dose uniformity. In actual therapy, the upper limit of target movement should be considered by assessing the bolus shape. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  13. Evaluating system reliability and targeted hardening strategies of power distribution systems subjected to hurricanes

    International Nuclear Information System (INIS)

    Salman, Abdullahi M.; Li, Yue; Stewart, Mark G.

    2015-01-01

    Over the years, power distribution systems have been vulnerable to extensive damage from hurricanes, which can cause power outages resulting in millions of dollars of economic losses and restoration costs. Most outages result from failure of distribution support structures. Various methods of strengthening distribution systems have been proposed and studied. Some of these methods, such as undergrounding the system, have been shown to be unjustified from an economic point of view. A potentially cost-effective strategy is targeted hardening of the system. This, however, requires a method of determining the critical parts of a system that, when strengthened, will have the greatest impact on reliability. This paper presents a framework for studying the effectiveness of targeted hardening strategies on power distribution systems subjected to hurricanes. The framework includes a methodology for evaluating system reliability that relates failure of poles and power delivery, determination of critical parts of a system, hurricane hazard analysis, and consideration of decay of distribution poles. The framework also incorporates cost analysis that considers economic losses due to power outage. A notional power distribution system is used to demonstrate the framework by evaluating and comparing the effectiveness of three hardening measures. - Highlights: • Risk assessment of power distribution systems subjected to hurricanes is carried out. • A framework for studying the effectiveness of targeted hardening strategies is presented. • A system reliability method is proposed. • Targeted hardening is cost effective for existing systems. • Economic losses due to power outage should be considered in cost analysis.

  14. Hybrid fuzzy charged system search algorithm based state estimation in distribution networks

    Directory of Open Access Journals (Sweden)

    Sachidananda Prasad

    2017-06-01

    This paper proposes a new hybrid charged system search (CSS) algorithm-based state estimation for radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints, for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and an Indian 85-bus practical radial distribution system. The obtained results have been compared with those of the conventional CSS algorithm, the weighted least squares (WLS) algorithm and particle swarm optimization (PSO) to establish the feasibility of the algorithm.
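    Independently of the metaheuristic used, the underlying objective is a weighted least-squares fit of the measurements. The sketch below sets up a toy three-bus version of that objective and minimizes it with an off-the-shelf solver instead of the paper's fuzzy adaptive CSS; the network data are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 3-bus radial feeder; state = voltage magnitudes and angles at buses 2, 3
# (bus 1 is the slack: V = 1.0, theta = 0). Line admittances are illustrative.
y12, y23 = 5 - 15j, 4 - 12j

def measurements(state):
    v2, v3, th2, th3 = state
    V = np.array([1.0, v2 * np.exp(1j * th2), v3 * np.exp(1j * th3)])
    Y = np.array([[y12, -y12, 0], [-y12, y12 + y23, -y23], [0, -y23, y23]])
    S = V * np.conj(Y @ V)                 # complex power injections at each bus
    return np.concatenate([S.real, S.imag, np.abs(V)])

# Synthetic measurement set (injections + voltage magnitudes) with noise,
# and per-measurement weights = 1/sigma.
truth = np.array([0.98, 0.96, -0.02, -0.04])
sigma = 0.01
rng = np.random.default_rng(8)
z = measurements(truth) + sigma * rng.normal(size=9)
w = np.full(9, 1.0 / sigma)

# Minimize the weighted squared residuals. The paper does this with a fuzzy
# adaptive charged system search; an off-the-shelf solver shows the objective.
res = least_squares(lambda x: w * (z - measurements(x)), x0=[1.0, 1.0, 0.0, 0.0])
print("estimated state:", np.round(res.x, 4))
```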

  15. Probabilistic Reverse dOsimetry Estimating Exposure Distribution (PROcEED)

    Science.gov (United States)

    PROcEED is a web-based application used to conduct probabilistic reverse dosimetry calculations. The tool is used for estimating a distribution of exposure concentrations likely to have produced biomarker concentrations measured in a population.

  16. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
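    For readers who prefer Python to the article's R code, a bare-bones version of the TMLE recipe for a binary outcome and treatment looks as follows. This is a simplified sketch (plain logistic regressions instead of the Super Learner, a single fluctuation parameter, no inference), so it illustrates the steps rather than reproducing the tutorial's implementation:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 5000
W = rng.normal(size=(n, 2))                          # confounders
pA = 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1])))
A = rng.binomial(1, pA)                              # binary treatment
pY = 1 / (1 + np.exp(-(-0.5 + A + 0.5 * W[:, 0] + 0.4 * W[:, 1])))
Y = rng.binomial(1, pY)                              # binary outcome

# Step 1: initial outcome model Q(A, W); TMLE proper would use flexible
# machine learning here (e.g. a Super Learner).
X = np.column_stack([A, W])
Qfit = LogisticRegression(C=1e6, max_iter=1000).fit(X, Y)
Q_A = Qfit.predict_proba(X)[:, 1]
Q_1 = Qfit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q_0 = Qfit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]

# Step 2: propensity score g(W) and the "clever covariate".
g = LogisticRegression(C=1e6, max_iter=1000).fit(W, A).predict_proba(W)[:, 1]
H = A / g - (1 - A) / (1 - g)

# Step 3: targeting step, i.e. fluctuate Q along the clever covariate with a
# logistic regression of Y on H using offset logit(Q).
logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1 / (1 + np.exp(-x))
eps = sm.GLM(Y, H[:, None], family=sm.families.Binomial(),
             offset=logit(Q_A)).fit().params[0]

# Step 4: updated counterfactual predictions and the ATE.
Q_1s = expit(logit(Q_1) + eps / g)
Q_0s = expit(logit(Q_0) - eps / (1 - g))
print("TMLE estimate of the ATE:", np.mean(Q_1s - Q_0s))
```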

  17. Distributive estimation of frequency selective channels for massive MIMO systems

    KAUST Repository

    Zaib, Alam

    2015-12-28

    We consider frequency-selective channel estimation in the uplink of massive MIMO-OFDM systems, where our major concern is complexity. A low-complexity distributed LMMSE algorithm is proposed that attains near-optimal channel impulse response (CIR) estimates from noisy observations at the receive antenna array. In the proposed method, every antenna estimates the CIRs of its neighborhood, followed by recursive sharing of estimates with immediate neighbors. At each step, every antenna calculates the weighted average of the shared estimates, which converges to the near-optimal LMMSE solution. The simulation results validate the near-optimal performance of the proposed algorithm in terms of mean square error (MSE). © 2015 EURASIP.
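    The recursive neighbor-averaging idea can be caricatured in a few lines. The sketch below uses a ring topology, a common channel for all antennas, and fixed averaging weights, all simplifying assumptions relative to the paper's LMMSE formulation:

```python
import numpy as np

rng = np.random.default_rng(10)
n_ant, L, snr = 8, 4, 10.0                   # antennas, CIR length, linear SNR

h = rng.normal(size=L)                       # common channel impulse response (toy)
noisy = h + rng.normal(scale=1 / np.sqrt(snr), size=(n_ant, L))

# Each antenna starts from its own local estimate, then repeatedly averages
# with its immediate neighbors on a ring; the iterates converge toward the
# network-wide average, i.e. toward the fused estimate.
est = noisy.copy()
for step in range(50):
    left, right = np.roll(est, 1, axis=0), np.roll(est, -1, axis=0)
    est = 0.5 * est + 0.25 * (left + right)  # weighted average with neighbors

mse_local = np.mean((noisy - h) ** 2)
mse_fused = np.mean((est - h) ** 2)
print(f"local MSE = {mse_local:.4f}")
print(f"fused MSE = {mse_fused:.4f}  (approaches that of averaging all {n_ant} antennas)")
```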

  18. About an adaptively weighted Kaplan-Meier estimate.

    Science.gov (United States)

    Plante, Jean-François

    2009-09-01

    The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to make inferences about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performance of the weighted Kaplan-Meier estimate on finite samples exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.
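    The building block being reweighted here is the ordinary Kaplan-Meier estimator, which is short enough to state in full; the paper's adaptive weighting scheme is layered on top of estimates like this one:

```python
import numpy as np

def kaplan_meier(time, event):
    """Plain Kaplan-Meier estimate of S(t) from right-censored data.

    time  : observed times (event or censoring)
    event : 1 if the event was observed, 0 if right-censored
    """
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    n = len(time)
    at_risk = n - np.arange(n)               # number still at risk at each time
    # Product over event times of (1 - d_i / n_i):
    factors = np.where(event == 1, 1.0 - 1.0 / at_risk, 1.0)
    return time, np.cumprod(factors)

t = [2, 3, 3, 5, 8, 9, 12, 14]
d = [1, 1, 0, 1, 0, 1, 1, 0]                  # 0 = censored
times, surv = kaplan_meier(t, d)
for ti, si in zip(times, surv):
    print(f"t = {ti:4.1f}   S(t) = {si:.3f}")
```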

  19. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
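    The two-stage idea described above (estimate the joint density with kernels, then read off the conditional distribution of the future position) reduces, for the conditional-mean estimator, to a Nadaraya-Watson style weighted average. A toy sketch on a synthetic quasi-periodic trace, with the window length, look-ahead and bandwidth all chosen arbitrarily:

```python
import numpy as np

def kde_predict(history, train_cov, train_resp, bandwidth):
    """Conditional mean of the response given the covariate, from a Gaussian
    kernel estimate of the joint pdf (reduces to a Nadaraya-Watson average)."""
    d2 = np.sum((train_cov - history) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return np.sum(w * train_resp) / np.sum(w)

# Synthetic quasi-periodic "respiratory" trace; covariate = last p samples,
# response = the sample `lookahead` steps ahead (the latency to compensate).
rng = np.random.default_rng(11)
trace = np.sin(2 * np.pi * 0.25 * np.arange(2000) * 0.05)
trace += 0.05 * rng.normal(size=trace.size)
p, lookahead, N = 5, 8, len(trace)
X = np.array([trace[i:i + p] for i in range(N - p - lookahead)])
y = np.array([trace[i + p - 1 + lookahead] for i in range(N - p - lookahead)])

split = 1500                                   # train on the past, test on the rest
pred = np.array([kde_predict(x, X[:split], y[:split], bandwidth=0.2)
                 for x in X[split:]])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"prediction RMSE over the test segment: {rmse:.3f}")
```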

  20. Estimating the Distribution of Dietary Consumption Patterns

    KAUST Repository

    Carroll, Raymond J.

    2014-02-01

    In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. We were interested in estimating the population distribution of the Healthy Eating Index-2005 (HEI-2005), a multi-component dietary quality index involving ratios of interrelated dietary components to energy, among children aged 2-8 in the United States, using a national survey and incorporating survey weights. We developed a highly nonlinear, multivariate zero-inflated data model with measurement error to address this question. Standard nonlinear mixed model software such as SAS NLMIXED cannot handle this problem. We found that taking a Bayesian approach, and using MCMC, resolved the computational issues and doing so enabled us to provide a realistic distribution estimate for the HEI-2005 total score. While our computation and thinking in solving this problem was Bayesian, we relied on the well-known close relationship between Bayesian posterior means and maximum likelihood, the latter not computationally feasible, and thus were able to develop standard errors using balanced repeated replication, a survey-sampling approach.

  1. Anatomical distribution of estrogen target neurons in turtle brain

    International Nuclear Information System (INIS)

    Kim, Y.S.; Stumpf, W.E.; Sar, M.

    1981-01-01

    Autoradiographic studies with [³H]estradiol-17β in red-eared turtle (Pseudemys scripta elegans) show concentration and retention of radioactivity in nuclei of neurons in certain regions. Accumulations of estrogen target neurons exist in the periventricular brain with relationships to ventral extensions of the forebrain ventricles, including parolfactory, amygdaloid, septal, preoptic, hypothalamic and thalamic areas, as well as the dorsal ventricular ridge, the piriform cortex, and midbrain-pontine periaqueductal structures. The general anatomical pattern of distribution of estrogen target neurons corresponds to those observed not only in another reptile (Anolis carolinensis), but also in birds and mammals, as well as in teleosts and cyclostomes. In Pseudemys, which appears to display an intermediate degree of phylogenetic differentiation, the amygdaloid-septal-preoptic groups of estrogen target neurons constitute a continuum. In phylogenetic ascendency, e.g. in mammals, these cell populations are increasingly separated and distinct, while in phylogenetic descendency, e.g. in teleosts and cyclostomes, an amygdaloid group appears to be absent or contained within the septal-preoptic target cell population. (Auth.)

  2. Anatomical distribution of estrogen target neurons in turtle brain

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y.S.; Stumpf, W.E.; Sar, M. (North Carolina Univ., Chapel Hill (USA))

    1981-12-28

    Autoradiographic studies with [³H]estradiol-17β in red-eared turtle (Pseudemys scripta elegans) show concentration and retention of radioactivity in nuclei of neurons in certain regions. Accumulations of estrogen target neurons exist in the periventricular brain with relationships to ventral extensions of the forebrain ventricles, including parolfactory, amygdaloid, septal, preoptic, hypothalamic and thalamic areas, as well as the dorsal ventricular ridge, the piriform cortex, and midbrain-pontine periaqueductal structures. The general anatomical pattern of distribution of estrogen target neurons corresponds to those observed not only in another reptile (Anolis carolinensis), but also in birds and mammals, as well as in teleosts and cyclostomes. In Pseudemys, which appears to display an intermediate degree of phylogenetic differentiation, the amygdaloid-septal-preoptic groups of estrogen target neurons constitute a continuum. In phylogenetic ascendency, e.g. in mammals, these cell populations are increasingly separated and distinct, while in phylogenetic descendency, e.g. in teleosts and cyclostomes, an amygdaloid group appears to be absent or contained within the septal-preoptic target cell population.

  3. Branch current state estimation of three phase distribution networks suitable for paralellization

    NARCIS (Netherlands)

    Blaauwbroek, N.; Nguyen, H.P.; Gibescu, M.; Slootweg, J.G.

    2017-01-01

    The evolution of distribution networks from passive to active distribution systems puts new requirements on the monitoring and control capabilities of these systems. The development of state estimation algorithms to gain insight in the actual system state of a distribution network has resulted in a

  4. Targeted Learning

    CERN Document Server

    van der Laan, Mark J

    2011-01-01

    The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often measuring hundreds of thousands of measurements for a single subject. The field is ready to move towards clear objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool so that the subjective choices made by humans are now made by the machine, and (2) targeting the fitting of the probability distribution of the data toward the targe

  5. Nearest Neighbor Estimates of Entropy for Multivariate Circular Distributions

    Directory of Open Access Journals (Sweden)

    Neeraj Misra

    2010-05-01

    In molecular sciences, the estimation of entropies of molecules is important for the understanding of many chemical and biological processes. Motivated by these applications, we consider the problem of estimating the entropies of circular random vectors and introduce non-parametric estimators based on circular distances between n sample points and their k-th nearest neighbors (NN), where k (≤ n − 1) is a fixed positive integer. The proposed NN estimators are based on two different circular distances, and are proven to be asymptotically unbiased and consistent. The performance of one of the circular-distance estimators is investigated and compared with that of the already established Euclidean-distance NN estimator using Monte Carlo samples from an analytic distribution of six circular variables of exactly known entropy and a large sample of seven internal-rotation angles in the molecule of tartaric acid, obtained by a realistic molecular-dynamics simulation.
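    A k-NN entropy estimator of this general type (here the Kozachenko-Leonenko form with a max-norm circular distance, an illustrative choice rather than either of the paper's two estimators) fits in a short function:

```python
import numpy as np
from scipy.special import digamma

def circ_dist(a, b):
    """Angular distance per coordinate, wrapped to [0, pi]."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def knn_entropy_circular(x, k=4):
    """Kozachenko-Leonenko k-NN entropy estimate for angles in [0, 2*pi)^d,
    using the max-norm, under which a ball of radius eps has volume (2*eps)^d."""
    n, d = x.shape
    eps = np.empty(n)
    for i in range(n):
        dist = circ_dist(x[i], x).max(axis=1)    # max-norm circular distance
        dist[i] = np.inf
        eps[i] = np.sort(dist)[k - 1]            # distance to k-th nearest neighbor
    return digamma(n) - digamma(k) + d * np.log(2.0) + d * np.mean(np.log(eps))

# Sanity check: independent uniform angles have entropy d * log(2*pi).
rng = np.random.default_rng(12)
x = rng.uniform(0, 2 * np.pi, size=(2000, 3))
print("estimate:", knn_entropy_circular(x), " truth:", 3 * np.log(2 * np.pi))
```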

  6. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    Directory of Open Access Journals (Sweden)

    AMAD HAMZA

    2016-10-01

    In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (reverberation time). For the estimation, the well-known maximum likelihood estimation (MLE) method is used. The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method, the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper stems from the observation that, when the RT of reverberant speech falls in a specific range, the signal's decay rate mimics a Rayleigh distribution. On the basis of the results of experiments carried out for numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of pre-existing methods.
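    The Rayleigh MLE at the heart of such a method is a one-liner; the hard parts of the paper (spotting free-decay regions and mapping the fitted distribution to an RT value) are not shown. A toy sketch:

```python
import numpy as np

def rayleigh_mle(samples):
    """Maximum-likelihood estimate of the Rayleigh scale parameter:
    sigma_hat = sqrt(sum(x^2) / (2n))."""
    x = np.asarray(samples, float)
    return np.sqrt(np.sum(x ** 2) / (2 * len(x)))

# Toy stand-in for decay rates measured in detected free-decay regions.
rng = np.random.default_rng(13)
decay_rates = rng.rayleigh(scale=3.2, size=400)
print("MLE of Rayleigh scale:", rayleigh_mle(decay_rates))
```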

  7. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    International Nuclear Information System (INIS)

    Hamza, A.; Jan, T.; Ali, A.

    2016-01-01

    In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (reverberation time). For the estimation, the well-known maximum likelihood estimation (MLE) method is used. The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method, the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper stems from the observation that, when the RT of reverberant speech falls in a specific range, the signal's decay rate mimics a Rayleigh distribution. On the basis of the results of experiments carried out for numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of pre-existing methods. (author)

  8. Nonparametric estimation of the stationary M/G/1 workload distribution function

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    2005-01-01

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1 queue can be obtained by systematic sampling of the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ...

  9. Distributed ISAR Subimage Fusion of Nonuniform Rotating Target Based on Matching Fourier Transform.

    Science.gov (United States)

    Li, Yuanyuan; Fu, Yaowen; Zhang, Wenpeng

    2018-06-04

    In real applications, the image quality of conventional monostatic Inverse Synthetic Aperture Radar (ISAR) for a maneuvering target is subject to strong fluctuation of the Radar Cross Section (RCS), as the target aspect varies enormously. Meanwhile, the maneuvering target introduces nonuniform rotation after translational motion compensation, which degrades the imaging performance of the conventional Fourier Transform (FT)-based method in the cross-range dimension. In this paper, a method which combines the distributed ISAR technique and the Matching Fourier Transform (MFT) is proposed to overcome these problems. Firstly, according to the characteristics of distributed ISAR, multiple-channel echoes of the nonuniformly rotating target can be acquired from different observation angles. Then, by applying the MFT to the echo of each channel, the defocusing of the nonuniformly rotating target, which is inevitable with FT-based imaging methods, can be avoided. Finally, after preprocessing, scaling and rotation of all subimages, a noncoherent fusion image containing the RCS information from all channels can be obtained. The accumulation coefficients of all subimages are calculated adaptively according to their image quality. Simulation and experimental data are used to validate the effectiveness of the proposed approach, and a fusion image with improved recognizability can be obtained. Therefore, by using the distributed ISAR technique and the MFT, subimages of a high-maneuvering target from different observation angles can be obtained. Meanwhile, by employing the adaptive subimage fusion method, the RCS fluctuation can be alleviated and a more recognizable final image can be obtained.
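    The Matching Fourier Transform itself is compact: the signal is projected onto exponentials of the known (or estimated) nonuniform rotation phase instead of a linear phase. A toy sketch with two scatterers and an invented phase law:

```python
import numpy as np

N = 512
t = np.linspace(0, 1, N, endpoint=False)
theta = 2 * np.pi * (t + 0.15 * t ** 2)      # nonuniform rotation phase (assumed known)

# Two scatterers at cross-range "frequencies" 20 and 27 under nonuniform rotation.
s = np.exp(1j * 20 * theta) + 0.8 * np.exp(1j * 27 * theta)

def matching_ft(sig, phase, k_axis):
    """Project the signal onto e^{j k phase(t)} instead of e^{j 2 pi k t}:
    components that are chirps under the ordinary FT become sharp peaks."""
    return np.array([np.sum(sig * np.exp(-1j * k * phase)) for k in k_axis]) / len(sig)

k_axis = np.arange(0, 50)
spec = np.abs(matching_ft(s, theta, k_axis))
print("peaks at k =", sorted(k_axis[np.argsort(spec)[-2:]]))   # expect [20, 27]
```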

  10. Spatial distribution of carbon species in laser ablation of graphite target

    International Nuclear Information System (INIS)

    Ikegami, T.; Ishibashi, S.; Yamagata, Y.; Ebihara, K.; Thareja, R.K.; Narayan, J.

    2001-01-01

    We report on the temporal evolution and spatial distribution of C₂ and C₃ molecules produced by KrF laser ablation of a graphite target, using laser-induced fluorescence imaging and optical emission spectroscopy. Spatial density profiles of C₂ were measured using two-dimensional fluorescence at various pressures of different ambient gases (vacuum, nitrogen, oxygen, hydrogen, helium, and argon) at various ablation laser fluences and ablation areas. A large yield of C₂ is observed in the central part of the plume and near the target surface, and its density and distribution were affected by the laser fluence and ambient gas. Fluorescent C₃ was studied in Ar gas, and the yield of C₃ is enhanced at higher gas pressure and longer delay times after ablation.

  11. An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation

    NARCIS (Netherlands)

    Sijs, J.; Hanebeck, U.; Noack, B.

    2013-01-01

    State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.
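    One standard building block for fusing two estimates whose cross-correlation is unknown, covariance intersection, is easy to state; it is shown below as general context, not as the empirical method this paper proposes for the partially overlapping case:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    """Fuse two estimates of the same state with unknown cross-correlation.

    Covariance intersection guarantees a consistent fused covariance for any
    mixing weight w in (0, 1); w can also be optimized, e.g. to minimize trace(P).
    """
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1_inv + (1 - w) * P2_inv)
    x = P @ (w * P1_inv @ x1 + (1 - w) * P2_inv @ x2)
    return x, P

x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 1.0])
x2, P2 = np.array([1.2, 1.8]), np.diag([1.0, 0.4])
x, P = covariance_intersection(x1, P1, x2, P2)
print("fused state:", x, "\nfused covariance:\n", P)
```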

  12. Low complexity algorithms to independently and jointly estimate the location and range of targets using FMCW

    KAUST Repository

    Ahmed, Sajid

    2017-05-12

    The estimation of the angular location and range of a target is a joint optimization problem. In this work, to estimate these parameters, low-complexity sequential and joint estimation algorithms are proposed that meticulously evaluate the phase of the received samples. We use a single-input multiple-output (SIMO) system and transmit a frequency-modulated continuous-wave signal. In the proposed algorithms, it is shown that by ignoring very small terms in the phase of the received samples, the fast Fourier transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. The sequential estimation algorithm uses the FFT and requires only one received snapshot to estimate the angular location. The joint estimation algorithm uses the two-dimensional FFT to estimate the angular location and range of the target. Simulation results show that the joint estimation algorithm yields a better mean squared error (MSE) for the estimation of the angular location and a much lower run time compared to the conventional MUltiple SIgnal Classification (MUSIC) algorithm.
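    The joint estimator's core, a 2-D FFT over fast-time samples and antenna elements whose peak encodes the beat (range) and spatial (angle) frequencies, can be sketched as follows; converting the recovered normalized frequencies into meters and degrees would additionally require the chirp slope and array spacing, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(15)
N, M = 256, 16                 # fast-time samples per chirp, receive antennas
fb = 0.12                      # normalized beat frequency (encodes target range)
fs = 0.25                      # normalized spatial frequency (encodes target angle)

n = np.arange(N)[:, None]
m = np.arange(M)[None, :]
sig = np.exp(1j * 2 * np.pi * (fb * n + fs * m))          # single-target model
sig += 0.1 * (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)))

# Joint estimation: the 2-D FFT turns the two linear phase slopes into one
# peak whose coordinates give the beat and spatial frequencies.
spec = np.abs(np.fft.fft2(sig, s=(1024, 256)))
i, j = np.unravel_index(np.argmax(spec), spec.shape)
print(f"beat frequency   : {i / 1024:.4f}  (true {fb})")
print(f"spatial frequency: {j / 256:.4f}  (true {fs})")
```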

  14. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    Science.gov (United States)

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
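    A concrete instance of the linear approach is the Lassen plot, in which the regional decrease in total volume of distribution is regressed on the baseline value; the slope is the occupancy and the intercept fixes the non-displaceable binding. The maximum-likelihood methods of the paper refine exactly this step by modeling the covariance of the regional measures. A toy sketch with invented regional values:

```python
import numpy as np

# Lassen-plot estimate of target occupancy from regional total volumes of
# distribution (VT) at baseline and post-drug: for each region,
#   VT_base - VT_post = occ * (VT_base - VND),
# so regressing the left side on VT_base gives slope = occupancy and
# intercept = -occ * VND.
rng = np.random.default_rng(17)
occ_true, VND = 0.6, 2.0
VT_base = np.array([4.0, 6.5, 9.0, 12.0, 15.0]) + 0.2 * rng.normal(size=5)
VT_post = VT_base - occ_true * (VT_base - VND) + 0.2 * rng.normal(size=5)

slope, intercept = np.polyfit(VT_base, VT_base - VT_post, 1)
occ = slope
vnd = -intercept / occ
print(f"occupancy = {occ:.2f}   non-displaceable VT = {vnd:.2f}")
```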

  15. Moving target tracking through distributed clustering in directional sensor networks.

    Science.gov (United States)

    Enayet, Asma; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif

    2014-12-18

    The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works.

  16. Efficiency of the estimators of multivariate distribution parameters from the one-dimensional observed frequencies

    International Nuclear Information System (INIS)

    Chernov, N.I.; Kurbatov, V.S.; Ososkov, G.A.

    1988-01-01

    Parameter estimation for multivariate probability distributions is studied in experiments where data are presented as one-dimensional histograms. For this model, a statistic defined as a quadratic form of the observed frequencies, which has a limiting χ²-distribution, is proposed. The efficiency of the estimator minimizing the value of this statistic is proved within the class of all unbiased estimators obtained via minimization of quadratic forms of the observed frequencies. The elaborated method was applied to the physical problem of analyzing the secondary pion energy distribution in the isobar model of pion-nucleon interactions with the production of an additional pion. The numerical experiments showed that the accuracy of estimation is twice that of conventional methods.
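    The statistic in question is the familiar minimum chi-square criterion over histogram bins. A one-parameter toy version (a normal mean fitted to binned counts, not the pion-energy application) shows the mechanics:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Minimum chi-square fit of a one-parameter model to binned frequencies:
# minimize sum_j (n_j - N p_j(theta))^2 / (N p_j(theta)).
rng = np.random.default_rng(18)
mu_true = 0.3
edges = np.linspace(-4, 4, 21)
n_obs, _ = np.histogram(rng.normal(mu_true, 1.0, size=2000), bins=edges)
N = n_obs.sum()

def chi2_stat(mu):
    p = np.diff(norm.cdf(edges, loc=mu, scale=1.0))
    p /= p.sum()                      # renormalize over the binned range
    return np.sum((n_obs - N * p) ** 2 / (N * p))

res = minimize_scalar(chi2_stat, bounds=(-2, 2), method="bounded")
print(f"minimum chi-square estimate of mu: {res.x:.4f} (true {mu_true})")
```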

  17. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-09-30

    Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and gaining improved insight into operations to detect failures or mis-operations before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in smaller areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected a majority of the time, and implementing and operating a microgrid is much different when islanded. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements to simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detection of abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using

  18. Time difference of arrival estimation of microseismic signals based on alpha-stable distribution

    Science.gov (United States)

    Jia, Rui-Sheng; Gong, Yue; Peng, Yan-Jun; Sun, Hong-Mei; Zhang, Xing-Li; Lu, Xin-Ming

    2018-05-01

    Microseismic signals are generally considered to follow the Gaussian distribution. However, a comparison of the dynamic characteristics of the sample variance and the symmetry of microseismic signals with those of signals that follow an α-stable distribution reveals that microseismic signals have obvious pulse characteristics and that the probability density curve of the microseismic signal is approximately symmetric. Thus, the hypothesis that microseismic signals follow the symmetric α-stable distribution is proposed. On the premise of this hypothesis, the characteristic exponent α of the microseismic signals is obtained by utilizing fractional lower-order statistics, and then a new method of time difference of arrival (TDOA) estimation of microseismic signals based on the fractional lower-order covariance (FLOC) is proposed. Upon applying this method to the TDOA estimation of Ricker wavelet simulation signals and real microseismic signals, experimental results show that the FLOC method, which is based on the assumption of the symmetric α-stable distribution, leads to enhanced spatial resolution of the TDOA estimation relative to the generalized cross correlation (GCC) method, which is based on the assumption of the Gaussian distribution.
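    The FLOC statistic replaces ordinary products in the cross-correlation by fractional lower-order powers, which tames the heavy tails. A sketch (with α-stable noise and arbitrarily chosen orders a = b = 0.5, which should satisfy a + b < α):

```python
import numpy as np
from scipy.stats import levy_stable

def flop(z, p):
    """Fractional lower-order power: z^<p> = |z|^p * sign(z)."""
    return np.abs(z) ** p * np.sign(z)

def floc_tdoa(x, y, max_lag, a=0.5, b=0.5):
    """TDOA by maximizing the fractional lower-order covariance
    FLOC(tau) = sum_t x(t)^<a> * y(t + tau)^<b>, robust to impulsive noise."""
    lags = np.arange(-max_lag, max_lag + 1)
    xa, yb = flop(x, a), flop(y, b)
    vals = [np.sum(xa[max(0, -l):len(x) - max(0, l)] *
                   yb[max(0, l):len(y) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(vals))]

rng = np.random.default_rng(19)
N, delay = 4000, 37
s = rng.normal(size=N + delay)                    # common source signal
x = s[delay:] + 0.5 * levy_stable.rvs(1.6, 0, size=N, random_state=rng)
y = s[:N] + 0.5 * levy_stable.rvs(1.6, 0, size=N, random_state=rng)
print("estimated TDOA:", floc_tdoa(x, y, max_lag=100))   # expect 37
```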

  19. Estimation of particle size distribution of nanoparticles from electrical ...

    Indian Academy of Sciences (India)

    ... blockade (CB) phenomena of electrical conduction through a tiny nanoparticle. Considering the ZnO nanocomposites to be spherical, the Coulomb-blockade model of a quantum dot is applied here. The size distribution of particles is estimated from that model and compared with the results obtained from AFM and XRD analyses.

  20. Multi-target consensus circle pursuit for multi-agent systems via a distributed multi-flocking method

    Science.gov (United States)

    Pei, Huiqin; Chen, Shiming; Lai, Qiang

    2016-12-01

    This paper studies the multi-target consensus pursuit problem of multi-agent systems. To solve this problem, a distributed multi-flocking method based on partial information exchange is designed; it is employed to realise the pursuit of multiple targets and a uniform distribution of the number of pursuing agents per dynamic target. Combined with the proposed circle formation control strategy, agents can adaptively choose their target and form distinct circle formation groups, thereby accomplishing a multi-target pursuit. The speed states of the pursuing agents in each group converge to the same value. A Lyapunov approach is utilised to analyse the stability of the multi-agent system. In addition, a sufficient condition for achieving the dynamic target consensus pursuit is given and analysed. Finally, simulation results verify the effectiveness of the proposed approaches.

  1. Pedestrian count estimation using texture feature with spatial distribution

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2016-12-01

    Full Text Available We present a novel pedestrian count estimation approach based on global image descriptors formed from multi-scale texture features that preserve spatial distribution. For regions of interest, local texture features are represented by histograms of multi-scale block local binary patterns, which jointly constitute the feature vector of the whole image. To achieve an effective estimation of pedestrian count, principal component analysis is then used to reduce the dimension of the global representation features, and a fitting model between image global features and pedestrian count is constructed via support vector regression. The experimental results show that the proposed method achieves high accuracy on pedestrian count estimation and can be applied well in real-world settings.
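
    A compact sketch of the described pipeline using scikit-image and scikit-learn; the block sizes, LBP parameters, PCA dimension and the toy training data are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

def global_descriptor(img, scales=(8, 16), n_points=8, radius=1):
    """Concatenate block-wise LBP histograms at several block scales,
    so the descriptor keeps the spatial distribution of texture."""
    lbp = local_binary_pattern((img * 255).astype(np.uint8),
                               n_points, radius, method="uniform")
    n_bins = n_points + 2          # 'uniform' LBP codes run 0..P+1
    feats = []
    for s in scales:
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                h, _ = np.histogram(lbp[i:i+s, j:j+s],
                                    bins=n_bins, range=(0, n_bins))
                feats.append(h / max(h.sum(), 1))
    return np.concatenate(feats)

# Toy training data: random images with synthetic count labels.
rng = np.random.default_rng(1)
X = np.stack([rng.random((64, 64)) for _ in range(40)])
y = rng.integers(0, 30, size=40).astype(float)

F = np.stack([global_descriptor(im) for im in X])
model = make_pipeline(PCA(n_components=20), SVR(kernel="rbf", C=10.0))
model.fit(F, y)
print(model.predict(F[:3]))   # fitted count estimates
```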

  2. Distortion-Rate Bounds for Distributed Estimation Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Nihar Jindal

    2008-03-01

    Full Text Available We deal with centralized and distributed rate-constrained estimation of random signal vectors performed using a network of wireless sensors (encoders) communicating with a fusion center (FC, the decoder). For this context, we determine lower and upper bounds on the corresponding distortion-rate (D-R) function. The nonachievable lower bound is obtained by considering centralized estimation with a single sensor that has all observation data available, and by determining the associated D-R function in closed form. Interestingly, this D-R function can be achieved using an estimate-first, compress-afterwards (EC) approach, where the sensor (i) forms the minimum mean-square error (MMSE) estimate for the signal of interest, and (ii) optimally (in the MSE sense) compresses and transmits it to the FC, which reconstructs it. We further derive a novel alternating scheme to numerically determine an achievable upper bound of the D-R function for general distributed estimation using multiple sensors. The proposed algorithm tackles an analytically intractable minimization problem while accounting for sensor data correlations. The obtained upper bound is tighter than the one determined by having each sensor perform MSE-optimal encoding independently of the others. Numerical examples indicate that the algorithm performs well and yields D-R upper bounds that are relatively tight with respect to analytical alternatives obtained without taking into account the cross-correlations among sensor data.

  3. Investigating the impact of uneven magnetic flux density distribution on core loss estimation

    DEFF Research Database (Denmark)

    Niroumand, Farideh Javidi; Nymand, Morten; Wang, Yiren

    2017-01-01

    There are several approaches for loss estimation in magnetic cores, and all of these approaches rely highly on accurate information about the flux density distribution in the cores. It is often assumed that the magnetic flux density distributes evenly throughout the core, and the overall core loss is calculated according to an effective flux density value and the macroscopic dimensions of the cores. However, the flux distribution in the core can be altered by core shape and/or operating conditions due to nonlinear material properties. This paper studies the element-wise estimation of the loss in magnetic...

  5. Estimating distribution of hidden objects with drones: from tennis balls to manatees.

    Science.gov (United States)

    Martin, Julien; Edwards, Holly H; Burgess, Matthew A; Percival, H Franklin; Fagan, Daniel E; Gardner, Beth E; Ortega-Ortiz, Joel G; Ifju, Peter G; Evers, Brandon S; Rambo, Thomas J

    2012-01-01

    Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.
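
    The statistical core of the experiment, correcting raw counts for imperfect detection, can be illustrated with a much simpler model than the spatially explicit occupancy models used in the paper. If the same closed population of N objects is surveyed T times and each object is detected independently with probability p, the per-survey counts are Binomial(N, p), and a method-of-moments calculation recovers both quantities; the counts below are hypothetical.

```python
import numpy as np

counts = np.array([300, 296, 304])    # hypothetical per-survey counts
m, v = counts.mean(), counts.var(ddof=1)

# Method of moments for Binomial(N, p): mean = N*p, variance = N*p*(1-p),
# so p = 1 - variance/mean and N = mean/p.
p_hat = 1.0 - v / m
n_hat = m / p_hat
print(f"detection probability ~ {p_hat:.2f}, estimated total ~ {n_hat:.0f}")
```

    The occupancy models in the paper go further by letting presence and detection probabilities vary over a spatial grid and along an environmental gradient, which is what yields the distribution maps.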

  6. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-21

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of (1) generic and (2) sparse channels are considered. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner, sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods compared with existing approaches.

  7. Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dexin; Yang, Liuqing; Florita, Anthony; Alam, S.M. Shafiul; Elgindy, Tarek; Hodge, Bri-Mathias

    2016-08-01

    The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral-clustering-based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of the AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE, and we conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
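
    A minimal sketch of one plausible spectral-clustering regionalization, treating buses as graph nodes and line admittance magnitudes as edge weights; the 8-bus toy system and the precomputed-affinity choice are assumptions, since the preprint's exact affinity construction is not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy bus-adjacency matrix: |Y_ij| magnitudes for a small 8-bus system.
W = np.zeros((8, 8))
edges = [(0, 1, 5.0), (1, 2, 4.0), (2, 3, 6.0), (3, 0, 2.0),
         (3, 4, 0.5),                      # weak tie between the two areas
         (4, 5, 5.5), (5, 6, 4.5), (6, 7, 6.0), (7, 4, 3.0)]
for i, j, w in edges:
    W[i, j] = W[j, i] = w

sc = SpectralClustering(n_clusters=2, affinity="precomputed",
                        assign_labels="kmeans", random_state=0)
labels = sc.fit_predict(W)
print(labels)   # buses 0-3 and 4-7 should fall into different regions
```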

  8. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    Science.gov (United States)

    Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.

  9. Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation

    KAUST Repository

    Jardak, Seifallah

    2014-04-01

    Thanks to its improved capabilities, the multiple-input multiple-output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased-array radar by providing better parametric identifiability, achieving higher spatial resolution, and allowing more complex beampattern designs. To avoid jamming and enhance the signal-to-noise ratio, it is often of interest to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem, and is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite-alphabet constant- and non-constant-envelope symbols. To generate finite-alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of symbols in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite-alphabet symbols is derived. The second part of this thesis covers target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance; however, it requires solving a two-dimensional search problem, so its computational complexity is prohibitively high. We therefore propose a reduced-complexity, optimum-performance algorithm that uses the two-dimensional fast Fourier transform to jointly estimate the spatial location
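
    The Gaussian-to-finite-alphabet mapping described above can be sketched in a few lines of numpy: draw Gaussian vectors with the designed covariance, split the Gaussian probability density into M equiprobable regions via the CDF, and map each region onto a PSK constellation point. The covariance matrix and alphabet size below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def gaussian_to_psk(R, n_samples, M=4, seed=0):
    """Generate finite-alphabet (M-PSK) waveforms whose underlying
    Gaussian process has covariance matrix R (one row per antenna)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(R)                  # imposes the covariance
    g = L @ rng.standard_normal((R.shape[0], n_samples))
    # Partition the Gaussian pdf into M equiprobable regions...
    regions = np.digitize(norm.cdf(g), np.linspace(0, 1, M + 1)[1:-1])
    # ...and map each region index onto a PSK constellation point.
    return np.exp(1j * (2 * np.pi * regions / M + np.pi / M))

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])                     # desired transmit covariance
s = gaussian_to_psk(R, 10000)
print(np.corrcoef(s.real))                     # correlation survives mapping
```

    The printed correlation of the PSK streams is attenuated relative to the underlying Gaussian correlation; quantifying that attenuation is exactly the Gaussian-to-finite-alphabet cross-correlation relationship the thesis derives.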

  10. Private and Secure Distribution of Targeted Advertisements to Mobile Phones

    Directory of Open Access Journals (Sweden)

    Stylianos S. Mamais

    2017-05-01

    Full Text Available Online Behavioural Advertising (OBA) enables promotion companies to effectively target users with ads that best satisfy their purchasing needs. This is highly beneficial for both vendors and publishers, the owners of the advertising platforms such as websites and app developers, but at the same time it creates a serious privacy threat for users, who expose their consumer interests. In this paper, we categorize the available ad-distribution methods and identify their limitations in terms of security, privacy, targeting effectiveness and practicality. We contribute our own system, which utilizes opportunistic networking to distribute targeted adverts within a social network. We improve upon previous work by eliminating the need for trust among the users (network nodes) while at the same time achieving low memory and bandwidth overhead, which are inherent problems of many opportunistic networks. Our protocol accomplishes this by identifying similarities between the consumer interests of users and then allowing them to share access to the same adverts, which need to be downloaded only once. Although the same ads may be viewed by multiple users, privacy is preserved as the users do not learn each other's advertising interests. An additional contribution is that malicious users cannot alter the ads in order to spread malicious content, nor can they launch impersonation attacks.

  11. A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2018-02-01

    Full Text Available Composite models have received much attention in the recent actuarial literature as descriptions of heavy-tailed insurance loss data. One of the models that performs well for this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, the distribution is revisited to carry out parameter estimation via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.

  12. Voltage Estimation in Active Distribution Grids Using Neural Networks

    DEFF Research Database (Denmark)

    Pertl, Michael; Heussen, Kai; Gehrke, Oliver

    2016-01-01

    ...the observability of distribution systems has to be improved. To increase the situational awareness of the power system operator, data-driven methods can be employed. These methods benefit from newly available data sources such as smart meters. This paper presents a voltage estimation method based on neural networks...

  13. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) direction-of-arrivals (DOAs) of coherently distributed (CD) sources, one that can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which depict these relations using the propagator technique. The central DOA estimates are then obtained from the principal diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has a significantly reduced computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is a robust estimator which does not depend on the angular distribution shape of the CD sources.

  14. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants.

    Science.gov (United States)

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application to monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared the distributions of the submerged species Hydrilla verticillata estimated by eDNA analysis, by visual observation, and from past distribution records. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa, and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results show that eDNA analysis can be used for distribution surveys and has the potential to estimate the biomass of aquatic plants.

  15. Moving Target Tracking through Distributed Clustering in Directional Sensor Networks

    Directory of Open Access Journals (Sweden)

    Asma Enayet

    2014-12-01

    Full Text Available The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of the sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target, and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, with most of the sensors activated and communicating with the sink; they therefore incur high overhead, energy loss and reduced target tracking accuracy. In this paper, we propose a clustering algorithm in which distributed cluster heads coordinate their member nodes to optimize the active sensing and communication directions of the nodes, precisely determine the target location by aggregating the sensing data reported by multiple nodes, and transfer the resulting location information to the sink. The proposed target tracking mechanism thus minimizes sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated a dynamic approach of activating sleeping nodes on demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared with state-of-the-art works.

  16. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution

    OpenAIRE

    Han, Fang; Liu, Han

    2016-01-01

    Correlation matrices play a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions. As a robust alternative, Han and Liu [J. Am. Stat. Assoc. 109 (2015) 275-2...
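
    A standard rank-based construction of such a latent generalized correlation matrix (shown here as a sketch, not necessarily the exact estimator of the paper) applies the sine transform to pairwise Kendall's tau, which remains consistent for elliptical copulas even under heavy tails:

```python
import numpy as np
from scipy.stats import kendalltau

def latent_correlation(X):
    """Rank-based estimator: R_jk = sin(pi/2 * KendallTau(X_j, X_k)).
    Robust to heavy tails because it uses only the ranks."""
    p = X.shape[1]
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau, _ = kendalltau(X[:, j], X[:, k])
            R[j, k] = R[k, j] = np.sin(np.pi * tau / 2)
    return R

# Heavy-tailed data: elliptical t-distribution with 2 degrees of freedom.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, .7], [.7, 1]], size=2000)
X = z / np.sqrt(rng.chisquare(2, size=(2000, 1)) / 2)   # t_2 samples
print(latent_correlation(X)[0, 1])   # close to 0.7 despite heavy tails
```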

  17. Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation.

    Science.gov (United States)

    Zhao, Wei; Wang, Han

    2016-06-28

    Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It treats the age of each facial image instance as a label distribution over a series of age labels, rather than as the single chronological age label that is commonly used. However, this methodology has a deficient decision-making criterion: the final predicted age is simply the label with the maximum description degree. In many cases, different age labels may have very similar description degrees; blindly deciding the estimated age by the highest description degree alone would miss or neglect other valuable age labels that may contribute substantially to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results on the most popular aging face database, FG-NET, show the superiority and validity of the proposed strategic decision-making learning algorithms over existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties of SDM-LDL are further explored, along with additional advantages.
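
    The decision-making deficiency is easy to see numerically: when the predicted label distribution has two near-equal peaks, the maximum-degree rule ignores all but one of them, while even a simple pooling rule such as the distribution's expectation uses every description degree. The distribution below is invented for illustration; the actual SDM-LDL strategies are not reproduced here.

```python
import numpy as np

ages = np.arange(20, 31)
# Hypothetical description degrees: two near-equal peaks at 23 and 27.
d = np.array([.01, .03, .06, .18, .12, .08, .10, .17, .13, .08, .04])
d = d / d.sum()

argmax_age = ages[np.argmax(d)]            # maximum-degree rule
expected_age = float(ages @ d)             # pools all description degrees
print(argmax_age, round(expected_age, 1))  # 23 vs ~25.5
```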

  18. Experimental design and estimation of growth rate distributions in size-structured shrimp populations

    International Nuclear Information System (INIS)

    Banks, H T; Davis, Jimena L; Ernstberger, Stacey L; Hu, Shuhua; Artimovich, Elena; Dhar, Arun K

    2009-01-01

    We discuss inverse problem results for the estimation of probability distributions from aggregate data on growth in populations. We begin with a mathematical model describing variability in the early growth process of size-structured shrimp populations and discuss a computational methodology for the design of experiments to validate the model and estimate the growth-rate distributions in shrimp populations. Parameter-estimation findings using data from experiments so designed, for shrimp populations cultivated at Advanced BioNutrition Corporation, are presented, illustrating the usefulness of mathematical and statistical modeling in understanding the uncertainty in the growth dynamics of such populations

  19. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Full Text Available Design floods are key to sizing new waterworks and to reviewing the hydrological safety of existing ones. The most reliable method for estimating flood magnitudes associated with given return periods is to fit a probabilistic model to the available record of maximum annual flows. Since the appropriate model is not known in advance, several models need to be tested in order to select the most suitable one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records, and their application has therefore been established as a norm or precept. The Johnson system has three families of distributions, one of which is the log-normal model with three fit parameters, which also marks the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so their predictions are obtained with that model. Having contrasted the three probability distributions established by precept on 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared with the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models for low return periods, and differ mainly for large return periods (1000 years and beyond). Because of its theoretical support, the SJU model is recommended for flood estimation.
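
    scipy exposes the unbounded Johnson family directly as johnsonsu, so the fit-and-predict step can be sketched as follows; the synthetic annual-maximum series stands in for a gauged record, and the design flood for return period Tr is the quantile at non-exceedance probability 1 - 1/Tr.

```python
import numpy as np
from scipy.stats import johnsonsu

# Synthetic annual maximum flows (m^3/s) standing in for a gauged record.
rng = np.random.default_rng(7)
q_max = np.exp(rng.normal(6.0, 0.5, size=50))

params = johnsonsu.fit(q_max)            # the four adjustment parameters
for tr in (10, 100, 1000):
    q_tr = johnsonsu.ppf(1 - 1 / tr, *params)
    print(f"Tr = {tr:>4} yr: design flood ~ {q_tr:,.0f} m^3/s")

# Standard error of fit against the sorted record (Weibull plotting positions).
n = q_max.size
pp = np.arange(1, n + 1) / (n + 1)
se = np.sqrt(np.mean((np.sort(q_max) - johnsonsu.ppf(pp, *params)) ** 2))
print(f"standard error of fit ~ {se:,.0f} m^3/s")
```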

  20. Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.

    Science.gov (United States)

    Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing

    2017-10-11

    In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for the large-scale power system state estimation. The proposed method partitions the power systems into several local areas with non-overlapping states. Unlike the centralized approach where all measurements are sent to a processing center, the proposed method distributes the state estimation task to the local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process such that the influence of the outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.

  1. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechano-electronic system; in the reliability analysis of such systems, the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model so that the reliability estimates are more accurate; thus the precision of the mixed-distribution reliability model is greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
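
    A minimal two-component mixed-Weibull fit by direct maximization of the likelihood; the synthetic failure times, the parameterization of the mixing weight and the initial values are illustrative, and the dynamic-weight and correlation-coefficient refinements described above are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def mixture_nll(theta, t):
    """Negative log-likelihood of a two-component Weibull mixture.
    theta = [logit(w), log k1, log s1, log k2, log s2]."""
    w = 1.0 / (1.0 + np.exp(-theta[0]))
    k1, s1, k2, s2 = np.exp(theta[1:])
    pdf = (w * weibull_min.pdf(t, k1, scale=s1)
           + (1.0 - w) * weibull_min.pdf(t, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

# Synthetic failure times mixing two modes (infant mortality + wear-out).
rng = np.random.default_rng(3)
t = np.concatenate([
    weibull_min.rvs(0.8, scale=200.0, size=120, random_state=rng),
    weibull_min.rvs(3.5, scale=900.0, size=180, random_state=rng)])

x0 = [0.0, np.log(1.0), np.log(300.0), np.log(3.0), np.log(800.0)]
res = minimize(mixture_nll, x0, args=(t,), method="Nelder-Mead",
               options={"maxiter": 5000})
w = 1.0 / (1.0 + np.exp(-res.x[0]))
k1, s1, k2, s2 = np.exp(res.x[1:])
print(f"w = {w:.2f}; mode 1: k = {k1:.2f}, s = {s1:.0f}; "
      f"mode 2: k = {k2:.2f}, s = {s2:.0f}")

# Reliability at t = 500 h under the fitted mixture:
R500 = (w * weibull_min.sf(500.0, k1, scale=s1)
        + (1 - w) * weibull_min.sf(500.0, k2, scale=s2))
print(f"R(500 h) ~ {R500:.3f}")
```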

  2. Dasymetric high resolution population distribution estimates for improved decision making, with a case study of sea-level rise vulnerability in Boca Raton, Florida

    Science.gov (United States)

    Ziegler, Hannes Moritz

    Planners and managers often rely on coarse population distribution data from the census for addressing various social, economic, and environmental problems. In the analysis of physical vulnerabilities to sea-level rise, census units such as blocks or block groups are coarse relative to the required decision-making application. This study explores the benefits offered from integrating image classification and dasymetric mapping at the household level to provide detailed small area population estimates at the scale of residential buildings. In a case study of Boca Raton, FL, a sea-level rise inundation grid based on mapping methods by NOAA is overlaid on the highly detailed population distribution data to identify vulnerable residences and estimate population displacement. The enhanced spatial detail offered through this method has the potential to better guide targeted strategies for future development, mitigation, and adaptation efforts.

  3. Impact of dose-distribution uncertainties on rectal ntcp modeling I: Uncertainty estimates

    International Nuclear Information System (INIS)

    Fenwick, John D.; Nahum, Alan E.

    2001-01-01

    A trial of nonescalated conformal versus conventional radiotherapy treatment of prostate cancer has been carried out at the Royal Marsden NHS Trust (RMH) and Institute of Cancer Research (ICR), demonstrating a significant reduction in the rate of rectal bleeding reported for patients treated using the conformal technique. The relationship between planned rectal dose-distributions and incidences of bleeding has been analyzed, showing that the rate of bleeding falls significantly as the extent of the rectal wall receiving a planned dose-level of more than 57 Gy is reduced. Dose-distributions delivered to the rectal wall over the course of radiotherapy treatment inevitably differ from planned distributions, due to sources of uncertainty such as patient setup error, rectal wall movement and variation in the absolute rectal wall surface area. In this paper estimates of the differences between planned and treated rectal dose-distribution parameters are obtained for the RMH/ICR nonescalated conformal technique, working from a distribution of setup errors observed during the RMH/ICR trial, movement data supplied by Lebesque and colleagues derived from repeat CT scans, and estimates of rectal circumference variations extracted from the literature. Setup errors and wall movement are found to cause only limited systematic differences between mean treated and planned rectal dose-distribution parameter values, but introduce considerable uncertainties into the treated values of some dose-distribution parameters: setup errors lead to 22% and 9% relative uncertainties in the highly dosed fraction of the rectal wall and the wall average dose, respectively, with wall movement leading to 21% and 9% relative uncertainties. Estimates obtained from the literature of the uncertainty in the absolute surface area of the distensible rectal wall are of the order of 13%-18%. In a subsequent paper the impact of these uncertainties on analyses of the relationship between incidences of bleeding

  4. Determination of the axial 235U distribution in target fuel rods

    International Nuclear Information System (INIS)

    Huettig, G.; Bernhard, G.; Niese, U.

    1989-01-01

    The homogeneity of the axial 235 U distribution in target fuel rods is an important quality criterion for the production of 99 Mo. The 235 U distribution has been analyzed automatically and nondestructively by measuring the 235 U gamma-ray peak at 285.7 keV. For the quantitative assessment, a calibration curve was prepared with the help of X-ray fluorescence analysis, colorimetry, and photometric titration. The accuracy of the method is ≤ 1.5% uranium per centimeter of the fuel rod

  5. Estimating Traveler Populations at Airport and Cruise Terminals for Population Distribution and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Jochem, Warren C [ORNL; Sims, Kelly M [ORNL; Bright, Eddie A [ORNL; Urban, Marie L [ORNL; Rose, Amy N [ORNL; Coleman, Phil R [ORNL; Bhaduri, Budhendra L [ORNL

    2013-01-01

    In recent years, the use of high-resolution population distribution databases has increased steadily for environmental, socioeconomic, public health, and disaster-related research and operations. With the development of daytime population distributions, the temporal resolution of such databases has improved. However, the lack of incorporation of transitional populations, namely business and leisure travelers, leaves a significant population unaccounted for within critical infrastructure networks, such as transportation hubs. This paper presents two general methodologies for estimating passenger populations in airport and cruise port terminals at high temporal resolution which can be incorporated into existing population distribution models. The methodologies are geographically scalable and demonstrate how two different transportation hubs with disparate temporal population dynamics can be modeled utilizing publicly available databases, including novel data sources of flight activity from the Internet which are updated in near-real time. The airport population estimation model shows great potential for rapid implementation for a large collection of airports on a national scale, and the results suggest reasonable accuracy in the estimated passenger traffic. By incorporating population dynamics at high temporal resolutions into population distribution models, we hope to improve the estimates of populations exposed to or at risk from disasters, thereby improving emergency planning and response, and leading to more informed policy decisions.

  6. Fission-fragment mass distribution and estimation of the cluster emission probability in the γ + 232Th and 181Ta reactions

    International Nuclear Information System (INIS)

    Karamyan, S.A.; Adam, J.; Belov, A.G.; Chaloun, P.; Norseev, Yu.V.; Stegajlov, V.I.

    1997-01-01

    The fission-fragment mass distribution has been measured via the cumulative yields of radionuclides detected in the 232 Th(γ,f) reaction at bremsstrahlung endpoint energies of 12 and 24 MeV. Upper limits on the yields have been estimated for the light nuclei 24 Na, 28 Mg, 38 S, etc., for Th and Ta targets exposed to 24 MeV bremsstrahlung. The results are discussed in terms of multimodal fission phenomena and cluster emission from a deformed fissioning system or from a compound nucleus

  7. Anthropogenic CO2 in the oceans estimated using transit time distributions

    International Nuclear Information System (INIS)

    Waugh, D.W.; McNeil, B.I.

    2006-01-01

    The distribution of anthropogenic carbon (C_ant) in the oceans is estimated using the transit time distribution (TTD) method applied to global measurements of chlorofluorocarbon-12 (CFC12). Unlike most other inference methods, the TTD method does not assume a single ventilation time and avoids the large uncertainty incurred by attempts to correct for the large natural carbon background in dissolved inorganic carbon measurements. The highest concentrations and deepest penetration of anthropogenic carbon are found in the North Atlantic and Southern Oceans. The estimated total inventory in 1994 is 134 Pg-C. To evaluate uncertainties, the TTD method is applied to output from an ocean general circulation model (OGCM) and the results are compared to the directly simulated C_ant. Outside of the Southern Ocean the predicted C_ant closely matches the directly simulated distribution, but in the Southern Ocean the TTD concentrations are biased high due to the assumption of 'constant disequilibrium'. The net result is a TTD overestimate of the global inventory by about 20%. Accounting for this bias and other centred uncertainties, an inventory range of 94-121 Pg-C is obtained. This agrees with the inventory of Sabine et al., who applied the ΔC* method to the same data. There are, however, significant differences in the spatial distributions: the TTD estimates are smaller than ΔC* in the upper ocean and larger at depth, consistent with biases expected in ΔC* given its assumption of a single parcel ventilation time

  8. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating for the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and is thus an attractive choice.
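
    The comparison can be checked in miniature by Monte Carlo, using the standard facts that for a Maxwell distribution with scale a the MLE follows from the sum of x_i^2, the moment estimator from E[X] = 2a*sqrt(2/pi), and the minimum variance bound is a^2/(6n); the sample size and scale below are arbitrary.

```python
import numpy as np
from scipy.stats import maxwell

a, n, reps = 2.0, 50, 20000
rng = np.random.default_rng(0)
x = maxwell.rvs(scale=a, size=(reps, n), random_state=rng)

a_mle = np.sqrt((x**2).sum(axis=1) / (3 * n))     # MLE: a^2 = sum(x^2)/(3n)
a_mom = x.mean(axis=1) * np.sqrt(np.pi / 2) / 2   # from E[X] = 2a*sqrt(2/pi)

print(f"minimum variance bound a^2/(6n): {a**2 / (6 * n):.5f}")
print(f"empirical Var(MLE)             : {a_mle.var():.5f}")
print(f"empirical Var(moment)          : {a_mom.var():.5f}")
```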

  9. Integrated detection, estimation, and guidance in pursuit of a maneuvering target

    Science.gov (United States)

    Dionne, Dany

    The thesis focuses on efficient solutions of non-cooperative pursuit-evasion games with imperfect information on the state of the system. This problem is important in the context of the interception of future maneuverable ballistic missiles, but the theoretical developments are expected to find application in a broad class of hybrid control and estimation problems in industry. The validity of the results is confirmed using a benchmark problem in the area of terminal guidance. A specific interception scenario between an incoming target with no information and a single interceptor missile with noisy measurements is analyzed in the form of a linear hybrid system subject to additive abrupt changes. The general research aims to achieve improved homing accuracy by integrating ideas from detection theory, state estimation theory and guidance. The results achieved can be summarized as follows. (i) Two novel maneuver detectors are developed to diagnose abrupt changes in a class of hybrid systems (detection and isolation of evasive maneuvers): a new implementation of the GLR detector and the novel adaptive-H0 GLR detector. (ii) Two novel state estimators for target tracking are derived using the novel maneuver detectors. The state estimators employ parameterized families of functions to describe possible evasive maneuvers. (iii) A novel adaptive Bayesian multiple-model predictor of the ballistic miss is developed, which employs semi-Markov models and ideas from detection theory. (iv) A novel integrated estimation and guidance scheme that significantly improves the homing accuracy is also presented. The integrated scheme employs banks of estimators and guidance laws, a maneuver detector, and an on-line governor; the scheme is adaptive with respect to the uncertainty affecting the probability density function of the filtered state. (v) A novel discretization technique for the family of continuous-time, game-theoretic, bang-bang guidance laws is introduced. The

  10. Estimation of the inverse Weibull distribution based on progressively censored data: Comparative study

    International Nuclear Information System (INIS)

    Musleh, Rola M.; Helu, Amal

    2014-01-01

    In this article we consider statistical inference about the unknown parameters of the inverse Weibull distribution based on progressively type-II censored data, using classical and Bayesian procedures. For the classical procedures we propose using the maximum likelihood, the least squares, and the approximate maximum likelihood estimators. The Bayes estimators are obtained under both symmetric and asymmetric (Linex, general entropy and precautionary) loss functions. There are no explicit forms for the Bayes estimators; therefore, we propose Lindley's approximation method to compute them. A comparison between these estimators is provided using extensive simulation and three criteria, namely bias, mean squared error and Pitman nearness (PN) probability. It is concluded that the approximate Bayes estimators outperform the classical estimators most of the time. A real-life data example is provided to illustrate the proposed estimators. - Highlights: • We consider progressively type-II censored data from the inverse Weibull distribution (IW). • We derive the MLEs, approximate MLEs, LS and Bayes estimates of the scale and shape parameters of the IW. • The Bayes estimator of the shape parameter cannot be expressed in closed form. • We suggest using Lindley's approximation. • We conclude that the Bayes estimates outperform the classical methods

  11. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and search precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA, called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses this information to help the artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster and with greater precision than the ABC algorithm, ACEDA and the global-best-guided ABC (GABC) algorithm in most of the experiments.

  12. Joint angle and Doppler frequency estimation of coherent targets in monostatic MIMO radar

    Science.gov (United States)

    Cao, Renzheng; Zhang, Xiaofei

    2015-05-01

    This paper discusses the problem of joint direction of arrival (DOA) and Doppler frequency estimation of coherent targets in a monostatic multiple-input multiple-output radar. In the proposed algorithm, we first perform a reduced-dimension (RD) transformation on the received signal and then use the forward spatial smoothing (FSS) technique to decorrelate the coherence; joint estimates of DOA and Doppler frequency are then obtained via the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. The jointly estimated parameters of the proposed RD-FSS-ESPRIT are automatically paired. Compared with the conventional FSS-ESPRIT algorithm, our RD-FSS-ESPRIT algorithm has much lower complexity and better estimation performance for both DOA and frequency. The variance of the estimation error and the Cramer-Rao bound of the DOA and frequency estimation are derived. Simulation results show the effectiveness and improvement of our algorithm.
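
    The decorrelation step named above, forward spatial smoothing, is compact enough to show on its own: averaging the covariances of overlapping subarrays restores the rank of the signal subspace that coherence collapses. The array geometry and source angles below are illustrative, and the RD transform and ESPRIT pairing are not shown.

```python
import numpy as np

def forward_spatial_smoothing(R, m):
    """Average the covariance over all overlapping m-element subarrays
    of an N-element uniform linear array covariance matrix R."""
    N = R.shape[0]
    return sum(R[k:k + m, k:k + m] for k in range(N - m + 1)) / (N - m + 1)

# Two coherent (identical-waveform) sources impinging on an 8-element ULA.
rng = np.random.default_rng(0)
N, snaps = 8, 500
a = lambda th: np.exp(2j * np.pi * 0.5 * np.arange(N)
                      * np.sin(np.deg2rad(th)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a(10.0), s) + np.outer(a(-20.0), s)    # same waveform: coherent
X += 0.01 * (rng.standard_normal((N, snaps))
             + 1j * rng.standard_normal((N, snaps)))
R = X @ X.conj().T / snaps

print(np.linalg.matrix_rank(R, tol=1e-2))           # 1: coherence collapses rank
print(np.linalg.matrix_rank(forward_spatial_smoothing(R, 5), tol=1e-2))  # expect 2
```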

  13. Parametric X-rays from a polycrystalline target

    International Nuclear Information System (INIS)

    Lobach, Ihar; Benediktovitch, Andrei; Feranchuk, Ilya; Lobko, Alexander

    2015-01-01

    Highlights: • X-ray radiation from relativistic electrons in a polycrystal is described. • Analytical results are found for two models of the polycrystal texture. • Characteristic number of emitted photons for real accelerator is 10^6 s^-1. • Intensity distribution at fixed frequency resembles a set of rings. • Radiation intensities in monocrystals and polycrystals are compared. - Abstract: A theoretical description of parametric X-ray radiation (PXR) from a nanocrystal powder target is presented in terms of the orientation distribution function (ODF). Two models of ODF resulting in the analytical solution for the PXR intensity distribution are used and the characteristic features of this distribution are considered. A promising estimate of the number of the emitted photons is obtained for the case of a nanodiamond powder target using the parameters of ASTA Facility at Fermilab. The PXR spectra from polycrystal and single crystal targets are compared. The application scenarios of PXR from nanocrystals are discussed.

  14. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Bayesian credible and highest posterior density (HPD) credible intervals are also obtained for the parameters. Expected time on test and reliability characteristics are analyzed as well. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.

  15. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Hao

    2015-06-01

    Full Text Available The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for direction of arrival (DOA) estimation of targets in a low-altitude multipath environment. Therefore, a novel MUSIC approach is proposed based on the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. Virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and establish a full-order correlation matrix. ASGSO optimizes the objective function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely, without loss of effective radar aperture.

  16. Bayesian Estimation of the Scale Parameter of Inverse Weibull Distribution under the Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    Farhad Yahgmaei

    2013-01-01

    Full Text Available This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter of the IWD is introduced. We then derive the Bayes estimators for the scale parameter of the IWD by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the different proposed estimators are compared through extensive simulation studies in terms of their mean squared errors and risk functions.

  17. Structure Learning and Statistical Estimation in Distribution Networks - Part I

    Energy Technology Data Exchange (ETDEWEB)

    Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-13

    Traditionally, power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load monitoring. In this two-part paper, inspired by the proliferation of metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient, running in polynomial time, which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  18. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    Science.gov (United States)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data, and of the quantiles of the individual claims distribution in a non-life insurance application.
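
    A condensed sketch of the procedure: fit a parsimonious two-component lognormal mixture, push the claims through the mixture CDF onto the unit interval, smooth there with a beta kernel (whose shape adapts near the boundaries), and back-transform by multiplying by the mixture density. The simulated claims and the fixed bandwidth are placeholders; the paper's bandwidth rule is not reproduced.

```python
import numpy as np
from scipy.stats import beta, lognorm, norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
claims = np.concatenate([
    lognorm.rvs(0.6, scale=800, size=700, random_state=rng),
    lognorm.rvs(1.1, scale=6000, size=300, random_state=rng)])

# Step 1: parsimonious two-component lognormal mixture (Gaussian on log scale).
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(claims)[:, None])
mu, sd, w = gm.means_.ravel(), np.sqrt(gm.covariances_.ravel()), gm.weights_

def mix_cdf(x):
    return (w * norm.cdf((np.log(x)[:, None] - mu) / sd)).sum(axis=1)

def mix_pdf(x):
    z = (np.log(x)[:, None] - mu) / sd
    return (w * norm.pdf(z) / (x[:, None] * sd)).sum(axis=1)

# Step 2: beta-kernel smoothing of the CDF-transformed sample on [0, 1].
u, b = mix_cdf(claims), 0.05               # b: illustrative bandwidth
def f_u(v):
    return np.array([beta.pdf(u, w0 / b + 1, (1 - w0) / b + 1).mean()
                     for w0 in v])

# Step 3: back-transform to a density estimate on the claims scale.
x = np.linspace(200.0, 30000.0, 5)
print(f_u(mix_cdf(x)) * mix_pdf(x))        # fine-tuned claims density at x
```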

  19. Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party

    Science.gov (United States)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

    The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.

  20. Angular distributions of target black fragments in nucleus–nucleus collisions at high energy

    International Nuclear Information System (INIS)

    Liu, Fuhu; Abd Allah, N.N.; Zhang, Donghai; Duan, Maiying

    2003-01-01

    The experimental results of space, azimuthal, and projected angular distributions of target black fragments produced in silicon-emulsion collisions at 4.5A GeV/c (the Dubna energy) are reported. A multi-source ideal gas model is suggested to describe the experimental angular distributions. The Monte Carlo calculated results are in agreement with the experimental data. (author)

  1. Inhomogeneous target-dose distributions: a dimension more for optimization?

    International Nuclear Information System (INIS)

    Gersem, Werner R.T. de; Derycke, Sylvie; Colle, Christophe O.; Wagter, Carlos de; Neve, Wilfried J. de

    1999-01-01

    Purpose: To evaluate whether the use of inhomogeneous target-dose distributions, obtained by 3D conformal radiotherapy plans with or without beam intensity modulation, offers the possibility to decrease indices of toxicity to normal tissues and/or increase indices of tumor control in stage III non-small cell lung cancer (NSCLC). Methods and Materials: Ten patients with stage III NSCLC were planned using a conventional 3D technique and a technique involving noncoplanar beam intensity modulation (BIM). Two planning target volumes (PTVs) were defined: PTV1 included the macroscopic tumor volume and PTV2 included the macroscopic and microscopic tumor volume. Virtual simulation defined the beam shapes and incidences as well as the wedge orientations (3D) and segment outlines (BIM). Weights of wedged beams, unwedged beams, and segments were determined by optimization using an objective function with a biological and a physical component. The biological component included the tumor control probability (TCP) for PTV1 (TCP1) and PTV2 (TCP2), and the normal tissue complication probability (NTCP) for lung, spinal cord, and heart. The physical component included the maximum and minimum dose as well as the standard deviation of the dose at PTV1. The most inhomogeneous target-dose distributions were obtained by using only the biological component of the objective function (biological optimization). By enabling the physical component in addition to the biological component, PTV1 inhomogeneity was reduced (biophysical optimization). As indices of toxicity to normal tissues, NTCP values as well as maximum doses or dose levels to relevant fractions of the organ's volume were used. As indices of tumor control, TCP values as well as minimum doses to the PTVs were used. Results: When optimization was performed with the biophysical as compared to the biological objective function, the PTV1 inhomogeneity decreased from 13 (8-23)% to 4 (2-9)% for the 3D (p = 0.00009) and from 44 (33-56)% to 20 (9-34)% for the BIM

  2. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Problem statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability to fit lifetime event data better than other distributions. The uniformly minimum variance unbiased (UMVU) and maximum likelihood (ML) estimation methods are the ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the UMVU and ML estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length, and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and ‘P’ for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and ‘P’.
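
    For reference, the model has CDF F(x) = (1 − e^(−λx²))^α, so R(t) = 1 − (1 − e^(−λt²))^α. A minimal maximum-likelihood sketch is given below (Python/SciPy); the UMVUE construction described in the abstract is not reproduced, and the optimizer and starting values are arbitrary choices.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(log_params, x):
        alpha, lam = np.exp(log_params)          # optimise on the log scale: alpha, lam > 0
        z = np.exp(-lam * x**2)
        return -np.sum(np.log(2.0 * alpha * lam * x) - lam * x**2 + (alpha - 1.0) * np.log1p(-z))

    def fit_exp_rayleigh(x):
        res = minimize(neg_log_lik, x0=np.zeros(2), args=(x,), method="Nelder-Mead")
        return np.exp(res.x)                     # (alpha_hat, lambda_hat)

    def reliability(t, alpha, lam):
        return 1.0 - (1.0 - np.exp(-lam * t**2)) ** alpha    # R(t) = P(X > t)

    rng = np.random.default_rng(1)
    u = rng.uniform(size=500)                    # inverse-CDF draws, true alpha=1.5, lambda=0.8
    x = np.sqrt(-np.log(1.0 - u ** (1.0 / 1.5)) / 0.8)
    a_hat, l_hat = fit_exp_rayleigh(x)
    print(a_hat, l_hat, reliability(1.0, a_hat, l_hat))
    ```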

  3. Brand market positions estimation and defining the strategic targets of its development

    OpenAIRE

    S.M. Makhnusha

    2010-01-01

    In this article the author generalizes the concept of brand characteristics which influence its profitability and market positions. An approach to brand market positions estimation and defining the strategic targets of its development is proposed. Keywords: brand, brand expansion, brand extension, brand value, brand power, brand relevance, brand awareness.

  4. Experiment of ambient temperature distribution in ICF driver's target building

    International Nuclear Information System (INIS)

    Zhou Yi; He Jie; Yang Shujuan; Zhang Junwei; Zhou Hai; Feng Bin; Xie Na; Lin Donghui

    2009-01-01

    An experiment is designed to explore the ambient temperature distribution in an ICF driver's target building. Multi-channel PC-2WS temperature monitoring recorders and PTWD-2A precision temperature sensors are used to measure temperatures on three vertical cross-sections in the building, and the collected data are processed with MATLAB. The experiment and analysis show that the design of the heating, ventilation and air conditioning (HVAC) system can maintain temperature stability throughout the building. However, because of the heat from the target chamber, larger local temperature gradients appear near the marshalling yard, the staff region on the middle floor, and the equipment on the lower floor, and these need to be controlled. (authors)

  5. Exoplanet Population Distribution from Kepler Data

    Science.gov (United States)

    Traub, Wesley A.

    2015-08-01

    The underlying population of exoplanets around stars in the Kepler sample can be inferred by binning the Kepler planets in radius and period, invoking an empirical noise model, assuming a model exoplanet distribution function, randomly assigning planets to each of the Kepler target stars, asking whether each planet’s transit signal could be detected by Kepler, binning the resulting simulated detections, comparing the simulations with the observed data sample, and iterating on the model parameters until a satisfactory fit is obtained. The process is designed to simulate Kepler’s observing procedure. The key assumption is that the distribution function is continuous and the product of separable functions of period and radius. Any additional suspected biases in the sample can be handled by adjusting the noise model. The first advantage of this overall procedure is that the actual detection process is simulated as closely as possible, on a target by target basis, so the resulting estimated population should be closer to the actual population than by any other method of analysis. The second advantage is that the resulting distribution function can be extended to values of period and radius that go beyond the sample space, including, for example, application to estimating eta-sub-Earth, and also estimating the expected science yields of future direct-imaging exoplanet missions such as WFIRST-AFTA.
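
    A toy version of this forward-modelling loop is sketched below (Python). The separable power-law distribution function matches the abstract's key assumption, but the per-star noise model, the transit-depth and SNR formulas, the detection threshold, and the observed histogram `obs` are invented placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_stars = 20000
    noise_ppm = rng.lognormal(np.log(100.0), 0.5, n_stars)   # toy per-star photometric noise

    def simulated_counts(alpha, beta, p_edges, r_edges, occurrence=0.5):
        # Draw planets from a separable power law f(P, R) ~ P**beta * R**alpha
        # (alpha, beta != -1 assumed), keep those whose toy transit signal beats
        # the host star's noise, and bin the "detections" in period and radius.
        n_pl = int(rng.binomial(1, occurrence, n_stars).sum())
        u, v = rng.uniform(size=(2, n_pl))
        P = (1.0 + u * (400.0**(beta + 1) - 1.0)) ** (1.0 / (beta + 1))      # days, 1-400
        R = (0.5**(alpha + 1) + v * (16.0**(alpha + 1) - 0.5**(alpha + 1))) ** (1.0 / (alpha + 1))
        depth_ppm = 84.0 * R**2                               # ~(R_earth/R_sun)^2 in ppm
        snr = depth_ppm / noise_ppm[:n_pl] * np.sqrt(np.maximum(90.0 / P, 1.0))
        detected = snr > 7.1                                  # Kepler-like threshold (assumed)
        counts, _, _ = np.histogram2d(P[detected], R[detected], bins=[p_edges, r_edges])
        return counts

    # Grid-search the shape parameters against an observed histogram `obs` (placeholder):
    #   chi2 = ((simulated_counts(a, b, p_edges, r_edges) - obs)**2 / np.maximum(obs, 1)).sum()
    # and iterate until a satisfactory fit is obtained, as the abstract describes.
    ```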

  6. Atomic displacement distributions for light energetic atoms incident on heavy atom targets

    International Nuclear Information System (INIS)

    Brice, D.K.

    1975-01-01

    The depth distributions of atomic displacements produced by 4 to 100 keV H, D, and He ions incident on Cr, Mo, and W targets have been calculated using a sharp displacement threshold, E_d = 35 eV, and a previously described calculational procedure. These displacement depth distributions have been compared with the depth distributions of energy deposited into atomic processes to determine if a proportionality (modified Kinchin-Pease relationship) can be established. Such a relationship does exist for He ions and D ions incident on these metals at energies above 4 keV and 20 keV, respectively. For H ions the two distributions have significantly different shapes at all incident energies considered

  7. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    Science.gov (United States)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
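
    The regularization step at the core of the inversion is compact enough to sketch (Python). Here G stands for the linear forward operator of the multiple-time-window parametrization and the noise is treated as white with a known level; the iterative data-covariance update described in the abstract is omitted.

    ```python
    import numpy as np

    def tikhonov_discrepancy(G, d, noise_sd, lambdas):
        # Solve min ||G m - d||^2 + lam ||m||^2 over a grid of lam (largest first)
        # and return the most-regularised model whose residual matches the noise
        # level (Morozov's discrepancy principle).
        GtG, Gtd = G.T @ G, G.T @ d
        target = noise_sd * np.sqrt(len(d))        # expected residual norm for white noise
        fallback = None
        for lam in sorted(lambdas, reverse=True):
            m = np.linalg.solve(GtG + lam * np.eye(G.shape[1]), Gtd)
            if np.linalg.norm(G @ m - d) <= target:
                return m, lam
            fallback = (m, lam)
        return fallback                            # no lam met the criterion: least-regularised model
    ```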

  8. The potential distributions, and estimated spatial requirements and population sizes, of the medium to large-sized mammals in the planning domain of the Greater Addo Elephant National Park project

    Directory of Open Access Journals (Sweden)

    A.F. Boshoff

    2002-12-01

    The Greater Addo Elephant National Park (GAENP) project involves the establishment of a mega biodiversity reserve in the Eastern Cape, South Africa. Conservation planning in the GAENP planning domain requires systematic information on the potential distributions, estimated spatial requirements, and population sizes of the medium to large-sized mammals. The potential distribution of each species is based on a combination of a literature survey, a review of their ecological requirements, and consultation with conservation scientists and managers. Spatial requirements were estimated within 21 Mammal Habitat Classes derived from 43 Land Classes delineated by expert-based vegetation and river mapping procedures. These estimates were derived from spreadsheet models based on forage availability estimates and the metabolic requirements of the respective mammal species, incorporating modifications of the agriculture-based Large Stock Unit approach. The potential population size of each species was calculated by multiplying its density estimate with the area of suitable habitat. Population sizes were calculated for pristine, or near pristine, habitats alone, and then for these habitats together with potentially restorable habitats, for two park planning domain scenarios. These data will enable (a) the measurement of the effectiveness of the GAENP in achieving predetermined demographic, genetic, and evolutionary targets for mammals that can potentially occur in selected park sizes and configurations, (b) decisions regarding acquisition of additional land to achieve these targets to be informed, (c) the identification of species for which targets can only be met through metapopulation management, (d) park managers to be guided regarding the re-introduction of appropriate species, and (e) the application of realistic stocking rates. Where possible, the model predictions were tested by comparison with empirical data, which in general corroborated the

  9. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  10. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low-complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion...

  11. Effects of target size on the comparison of photon and charged particle dose distributions

    International Nuclear Information System (INIS)

    Phillips, M.H.; Frankel, K.A.; Tjoa, T.; Lyman, J.T.; Fabrikant, J.I.; Levy, R.P.

    1989-12-01

    The work presented here is part of an ongoing project to quantify and evaluate the differences in the use of different radiation types and irradiation geometries in radiosurgery. We are examining dose distributions for photons using the "Gamma Knife" and the linear accelerator arc methods, as well as different species of charged particles from protons to neon ions. A number of different factors need to be studied to accurately compare the different modalities, such as target size, shape and location, the irradiation geometry, and biological response. This presentation focuses on target size, which has a large effect on the dose distributions in normal tissue surrounding the lesion. This work concentrates on dose distributions found in radiosurgery, as opposed to those usually found in radiotherapy. 5 refs., 2 figs

  12. An Estimation of the Gamma-Ray Burst Afterglow Apparent Optical Brightness Distribution Function

    Science.gov (United States)

    Akerlof, Carl W.; Swan, Heather F.

    2007-12-01

    By using recent publicly available observational data obtained in conjunction with the NASA Swift gamma-ray burst (GRB) mission and a novel data analysis technique, we have been able to make some rough estimates of the GRB afterglow apparent optical brightness distribution function. The results suggest that 71% of all burst afterglows have optical magnitudes with mR …, and there is a strong indication that the apparent optical magnitude distribution function peaks at mR ~ 19.5. Such estimates may prove useful in guiding future plans to improve GRB counterpart observation programs. The employed numerical techniques might find application in a variety of other data analysis problems in which the intrinsic distributions must be inferred from a heterogeneous sample.

  13. Energy flow models for the estimation of technical losses in distribution network

    International Nuclear Information System (INIS)

    Au, Mau Teng; Tan, Chin Hooi

    2013-01-01

    This paper presents energy flow models developed to estimate technical losses in distribution networks. The energy flow models applied in this paper are based on the input energy and peak demand of the distribution network, feeder length and peak demand, transformer loading capacity, and load factor. Two case studies, an urban distribution network and a rural distribution network, are used to illustrate the application of the energy flow models. The technical losses obtained for the two distribution networks are consistent and comparable to networks of similar type and characteristics. Hence, the energy flow models are suitable for practical application.
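
    The record gives no formulas, but loss models built on peak demand and load factor typically rest on the classical loss-load-factor relation; a minimal sketch is shown below (Python), where the empirical coefficient k and the example numbers are assumed.

    ```python
    def technical_losses_mwh(peak_loss_mw, load_factor, hours=8760.0, k=0.3):
        # Annual technical energy losses from the power lost at peak demand and
        # the loss-load factor LLF = k*LF + (1 - k)*LF**2 (empirical k ~ 0.2-0.3).
        llf = k * load_factor + (1.0 - k) * load_factor**2
        return peak_loss_mw * llf * hours

    # e.g. a feeder losing 0.8 MW at peak with a load factor of 0.6:
    print(technical_losses_mwh(0.8, 0.6))   # ~3027 MWh per year
    ```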

  14. An Energy-Efficient Target Tracking Framework in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhijun Yu

    2009-01-01

    This study devises and evaluates an energy-efficient distributed collaborative signal and information processing framework for acoustic target tracking in wireless sensor networks. The distributed processing algorithm is based on the mobile agent computing paradigm and sequential Bayesian estimation. At each time step, the short detection reports of cluster members are collected by the cluster head, and the sensor node with the highest signal-to-noise ratio (SNR) is chosen as the reference node for the time difference of arrival (TDOA) calculation. During the mobile agent migration, the target state belief is transmitted among nodes and updated using the TDOA measurements of the fusion nodes one by one. The computing and processing burden is thus evenly distributed in the sensor network. To decrease wireless communications, we propose to represent the belief by parameterized methods such as a Gaussian approximation or a Gaussian mixture model approximation. Furthermore, we present an attraction force function to handle the mobile agent migration planning problem, which is a combination of the node residual energy, useful information, and communication cost. Simulation examples demonstrate the estimation effectiveness and energy efficiency of the proposed distributed collaborative target tracking framework.
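
    The attraction force function is described only qualitatively above; one plausible reading is sketched below (Python), scoring candidate fusion nodes by weighted residual energy, information utility, and communication cost. The node fields, the SNR and distance proxies, and the weights are all assumptions.

    ```python
    import numpy as np

    def attraction(node, agent_pos, weights=(0.4, 0.4, 0.2)):
        # Combine residual energy, useful information, and communication cost
        # into a single migration score (all proxies here are illustrative;
        # node is a dict with "residual_energy", "initial_energy", "snr", "pos").
        w_e, w_i, w_c = weights
        energy = node["residual_energy"] / node["initial_energy"]
        info = node["snr"] / (node["snr"] + 1.0)                 # high-SNR nodes give better TDOA
        cost = np.linalg.norm(node["pos"] - agent_pos) ** 2      # radio energy grows ~ distance^2
        return w_e * energy + w_i * info - w_c * cost

    def next_hop(candidate_nodes, agent_pos):
        # Pick the cluster member the mobile agent should migrate to next.
        return max(candidate_nodes, key=lambda n: attraction(n, agent_pos))
    ```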

  15. Small-Area Estimation with Zero-Inflated Data – a Simulation Study

    Directory of Open Access Journals (Sweden)

    Krieg Sabine

    2016-12-01

    Many target variables in official statistics follow a semicontinuous distribution with a mixture of zeros and continuously distributed positive values. Such variables are called zero-inflated. When reliable estimates for subpopulations with small sample sizes are required, model-based small-area estimators can be used, which improve the accuracy of the estimates by borrowing information from other subpopulations. In this article, three small-area estimators are investigated. The first estimator is the EBLUP, which can be considered the most common small-area estimator and is based on a linear mixed model that assumes normal distributions. Therefore, the EBLUP is model-misspecified in the case of zero-inflated variables. The other two small-area estimators are based on a model that takes zero inflation explicitly into account; both the Bayesian and the frequentist approach are considered. These small-area estimators are compared with each other and with design-based estimation in a simulation study with zero-inflated target variables. Both a simulation with artificial data and a simulation with real data from the Dutch Household Budget Survey are carried out. It is found that the small-area estimators improve the accuracy compared to the design-based estimator. The amount of improvement strongly depends on the properties of the population and the subpopulations of interest.

  16. Gridded rainfall estimation for distributed modeling in western mountainous areas

    Science.gov (United States)

    Moreda, F.; Cong, S.; Schaake, J.; Smith, M.

    2006-05-01

    Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges, and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km) as test areas. The basins are subdivided into elevation zones. The North Fork American basin is divided into two zones, above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall
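
    The interpolation and PRISM adjustment steps can be sketched compactly (Python). The inverse-distance power, the flat per-cell scaling, and the array layout are simplifying assumptions; the hourly disaggregation of daily totals is omitted.

    ```python
    import numpy as np

    def idw(grid_xy, gauge_xy, gauge_vals, power=2.0):
        # Inverse-distance-weighted interpolation of gauge totals to grid cells.
        d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        return (w * gauge_vals).sum(axis=1) / w.sum(axis=1)

    def adjust_to_prism(hourly_fields, prism_monthly):
        # Scale each cell so the accumulated monthly total matches the PRISM mean.
        total = hourly_fields.sum(axis=0)
        ratio = np.where(total > 0, prism_monthly / np.maximum(total, 1e-9), 1.0)
        return hourly_fields * ratio   # shape (n_hours, n_cells) * (n_cells,)
    ```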

  17. Kaplan-Meier estimators of distance distributions for spatial point processes

    NARCIS (Netherlands)

    Baddeley, A.J.; Gill, R.D.

    1997-01-01

    When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit

  18. Structure Learning and Statistical Estimation in Distribution Networks - Part II

    Energy Technology Data Exchange (ETDEWEB)

    Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-13

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  19. Adaptive estimation for control of uncertain nonlinear systems with applications to target tracking

    Science.gov (United States)

    Madyastha, Venkatesh Kattigari

    2005-08-01

    Design of nonlinear observers has received considerable attention since the early development of methods for linear state estimation. The most popular approach is the extended Kalman filter (EKF), which degrades significantly in the presence of nonlinearities, particularly if unmodeled dynamics are coupled to the process and the measurement. For uncertain nonlinear systems, adaptive observers have been introduced to estimate the unknown state variables when no a priori information about the unknown parameters is available. While establishing global results, these approaches are applicable only to systems transformable to output feedback form. Over recent years, neural network (NN) based identification and estimation schemes have been proposed that relax the assumptions on the system at the price of sacrificing the global nature of the results. However, most of the NN-based adaptive observer approaches in the literature require knowledge of the full dimension of the system, and therefore may not be suitable for systems with unmodeled dynamics. We first propose a novel approach to nonlinear state estimation from the perspective of augmenting a linear time-invariant observer with an adaptive element. The class of nonlinear systems treated here is finite-dimensional but of otherwise unknown dimension. The objective is to improve the performance of the linear observer when applied to a nonlinear system. The approach relies on the ability of the NNs to approximate the unknown dynamics from finite time histories of available measurements. Next we investigate nonlinear state estimation from the perspective of adaptively augmenting an existing time-varying observer, such as an EKF. EKFs find their application mostly in target tracking problems. The proposed approaches are robust to unmodeled dynamics, including unmodeled disturbances. Lastly, we consider the problem of adaptive estimation in the presence of feedback control for a class of uncertain nonlinear systems

  20. Comparing performance level estimation of safety functions in three distributed structures

    International Nuclear Information System (INIS)

    Hietikko, Marita; Malm, Timo; Saha, Heikki

    2015-01-01

    The capability of a machine control system to perform a safety function is expressed using performance levels (PL). This paper presents the results of a study where PL estimation was carried out for a safety function implemented using three different distributed control system structures. Challenges relating to the process of estimating PLs for safety-related distributed machine control functions are highlighted. One of these examines the use of different cabling schemes in the implementation of a safety function and its effect on the PL evaluation. The safety function used as a generic example in the PL calculations relates to a mobile work machine. It is a safety stop function where different technologies (electrical, hydraulic and pneumatic) can be utilized. It was found that by replacing analogue cables with digital communication, the system structure becomes simpler, with fewer components that can fail, which can improve the PL of the safety function. - Highlights: • Integration in distributed systems enables systems with fewer components. • It offers high reliability and diagnostic properties. • Analogue signals create uncertainty in signal reliability and complicate diagnostics

  1. An MCMC Algorithm for Target Estimation in Real-Time DNA Microarrays

    Directory of Open Access Journals (Sweden)

    Vikalo Haris

    2010-01-01

    DNA microarrays detect the presence and quantify the amounts of nucleic acid molecules of interest. They rely on a chemical attraction between the target molecules and their Watson-Crick complements, which serve as biological sensing elements (probes). The attraction between these biomolecules leads to binding, in which probes capture target analytes. Recently developed real-time DNA microarrays are capable of observing the kinetics of the binding process. They collect noisy measurements of the amount of captured molecules at discrete points in time. Molecular binding is a random process which, in this paper, is modeled by a stochastic differential equation. The target analyte quantification is posed as a parameter estimation problem and solved using a Markov chain Monte Carlo technique. In simulation studies where we test the robustness with respect to the measurement noise, the proposed technique significantly outperforms previously proposed methods. Moreover, the proposed approach is tested and verified on experimental data.

  2. Adaptive Metropolis Sampling with Product Distributions

    Science.gov (United States)

    Wolpert, David H.; Lee, Chiu Fan

    2005-01-01

    The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(z). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
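
    A minimal sketch of this adaptive scheme is given below (Python), with the product proposal restricted to independent Gaussians refit from the walk's history; the paper's estimator is the general information-theoretically optimal mean-field product, so the Gaussian form and the fixed refit schedule are simplifications (and in practice adaptation should be diminished over time to preserve ergodicity).

    ```python
    import numpy as np
    from scipy.stats import norm

    def adaptive_product_mh(log_pi, dim, n_steps, adapt_every=500, seed=0):
        # Independence MH whose proposal is a product of per-coordinate Gaussians,
        # periodically refit to the history of the walk (mean-field approximation).
        rng = np.random.default_rng(seed)
        mu, sd = np.zeros(dim), np.ones(dim)
        x = np.zeros(dim)
        lp_x = log_pi(x)
        chain = []
        for t in range(1, n_steps + 1):
            y = rng.normal(mu, sd)
            lp_y = log_pi(y)
            lq_x = norm.logpdf(x, mu, sd).sum()    # proposal density at current state
            lq_y = norm.logpdf(y, mu, sd).sum()    # proposal density at candidate
            if np.log(rng.uniform()) < lp_y - lp_x + lq_x - lq_y:
                x, lp_x = y, lp_y
            chain.append(x.copy())
            if t % adapt_every == 0:               # refit the product proposal
                hist = np.asarray(chain)
                mu, sd = hist.mean(axis=0), hist.std(axis=0) + 1e-3
        return np.asarray(chain)
    ```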

  3. Estimating the Spatial Distribution of Groundwater Age Using Synoptic Surveys of Environmental Tracers in Streams

    Science.gov (United States)

    Gardner, W. P.

    2017-12-01

    A model which simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay, and groundwater inflow, is coupled with a lumped parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence time distribution. The lumped parameters, which describe the residence time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging, or other methods, and the age of groundwater is then estimated using the previously calculated groundwater discharge and observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional-scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to limited samples from groundwater wells in the region. Our results show that the stream can be used as an easily accessible location to constrain the regional-scale spatial distribution of groundwater age.

  4. Distributed sensor networks

    CERN Document Server

    Rubin, Donald B; Carlin, John B; Iyengar, S Sitharama; Brooks, Richard R; Clemson University

    2014-01-01

    An Overview, S.S. Iyengar, Ankit Tandon, and R.R. Brooks; Microsensor Applications, David Shepherd; A Taxonomy of Distributed Sensor Networks, Shivakumar Sastry and S.S. Iyengar; Contrast with Traditional Systems, R.R. Brooks; Digital Signal Processing Background, Yu Hen Hu; Image-Processing Background, Lynne Grewe and Ben Shahshahani; Object Detection and Classification, Akbar M. Sayeed; Parameter Estimation, David Friedlander; Target Tracking with Self-Organizing Distributed Sensors, R.R. Brooks, C. Griffin, D.S. Friedlander, and J.D. Koch; Collaborative Signal and Information Processing: An Information-Directed Approach, Feng Zhao, Jie Liu, Juan Liu, Leonidas Guibas, and James Reich; Environmental Effects, David C. Swanson; Detecting and Counteracting Atmospheric Effects, Lynne L. Grewe; Signal Processing and Propagation for Aeroacoustic Sensor Networks, Richard J. Kozick, Brian M. Sadler, and D. Keith Wilson; Distributed Multi-Target Detection in Sensor Networks, Xiaoling Wang, Hairong Qi, and Steve Beck; Foundations of Data Fusion f...

  5. Inferring uncertainty from interval estimates: Effects of alpha level and numeracy

    Directory of Open Access Journals (Sweden)

    Luke F. Rinne

    2013-05-01

    Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels, e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.

  6. Thermophysical Property Estimation by Transient Experiments: The Effect of a Biased Initial Temperature Distribution

    Directory of Open Access Journals (Sweden)

    Federico Scarpa

    2015-01-01

    The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP). The inverse technique demands knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensors at all) are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, propagates in the estimation process, and the accuracy of the reconstructed thermophysical property values can deteriorate. In this work the effect of errors in the initial temperature distribution on the estimated thermophysical properties is investigated, along with a practical method to quantify this effect. Furthermore, a technique for compensating for this kind of bias is proposed. The method consists in including the initial temperature distribution among the unknown functions to be estimated. In this way the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is greatly improved.

  7. Release the BEESTS: Bayesian Estimation of Ex-Gaussian STop-Signal Reaction Time Distributions

    Directory of Open Access Journals (Sweden)

    Dora eMatzke

    2013-12-01

    The stop-signal paradigm is frequently used to study response inhibition. In this paradigm, participants perform a two-choice response time task where the primary task is occasionally interrupted by a stop-signal that prompts participants to withhold their response. The primary goal is to estimate the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Recently, Matzke, Dolan, Logan, Brown, and Wagenmakers (in press) have developed a Bayesian parametric approach that allows for the estimation of the entire distribution of SSRTs. The Bayesian parametric approach assumes that SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to estimate the parameters of the SSRT distribution. Here we present an efficient and user-friendly software implementation of the Bayesian parametric approach — BEESTS — that can be applied to individual as well as hierarchical stop-signal data. BEESTS comes with an easy-to-use graphical user interface and provides users with summary statistics of the posterior distribution of the parameters as well as various diagnostic tools to assess the quality of the parameter estimates. The software is open source and runs on Windows and OS X operating systems. In sum, BEESTS allows experimental and clinical psychologists to estimate entire distributions of SSRTs and hence facilitates the more rigorous analysis of stop-signal data.
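
    BEESTS itself is a Bayesian, hierarchical implementation that handles the censoring induced by the race between go and stop processes; as a point of reference only, the ex-Gaussian assumption can be illustrated with a maximum-likelihood fit to synthetic SSRTs (Python/SciPy), which would not be possible with real data where SSRTs are never observed directly.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import exponnorm

    def fit_exgauss(rt):
        # MLE of ex-Gaussian (mu, sigma, tau); scipy's shape parameter K = tau/sigma.
        def nll(p):
            mu, sigma, tau = p
            if sigma <= 0 or tau <= 0:
                return np.inf
            return -exponnorm.logpdf(rt, tau / sigma, loc=mu, scale=sigma).sum()
        m, s = rt.mean(), rt.std()
        res = minimize(nll, x0=[m - 0.8 * s, 0.6 * s, 0.8 * s], method="Nelder-Mead")
        return res.x

    rng = np.random.default_rng(3)
    ssrt = rng.normal(200, 30, 1000) + rng.exponential(60, 1000)  # synthetic SSRTs in ms
    print(fit_exgauss(ssrt))   # should recover roughly (200, 30, 60)
    ```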

  8. Exact run length distribution of the double sampling x-bar chart with estimated process parameters

    Directory of Open Access Journals (Sweden)

    Teoh, W. L.

    2016-05-01

    Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss crucial information about a control chart's performance. Thus it is important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X-bar chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes the early false alarm, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X-bar charts with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X-bar chart with estimated process parameters, based on their specific purpose.
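
    For intuition about why percentiles matter, the known-parameter case is useful: there the run length is geometric and its percentiles have a closed form (sketched below in Python). The paper's contribution is the much harder estimated-parameter case, where the run length is a mixture over the estimation error; the baseline below is only the limiting case.

    ```python
    import numpy as np

    def run_length_percentiles(p_signal, qs=(0.05, 0.5, 0.95)):
        # Percentiles of a geometric run-length distribution with per-sample
        # signal probability p_signal (known-parameter baseline only).
        return {q: int(np.ceil(np.log1p(-q) / np.log1p(-p_signal))) for q in qs}

    print(run_length_percentiles(1.0 / 370.4))
    # {0.05: 19, 0.5: 257, 0.95: 1109} -- the MRL of ~257 sits far below the
    # in-control ARL of 370.4 because the run-length distribution is right-skewed.
    ```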

  9. Design wave estimation considering directional distribution of waves

    Digital Repository Service at National Institute of Oceanography (India)

    SanilKumar, V.; Deo, M.C

    Technical Note: Design wave estimation considering directional distribution of waves. V. Sanil Kumar (Ocean Engineering Division, National Institute of Oceanography, Dona Paula, Goa 403 004, India) and M.C. Deo (Civil Engineering Department, Indian Institute of Technology Bombay, India).

  10. An experimental and theoretical model of children’s search behavior in relation to target conspicuity and spatial distribution

    Science.gov (United States)

    Rosetti, Marcos Francisco; Pacheco-Cobos, Luis; Larralde, Hernán; Hudson, Robyn

    2010-11-01

    This work explores the search trajectories of children attempting to find targets distributed on a playing field. This task, of a ludic nature, was developed to test the effect of the conspicuity and spatial distribution of targets on the searcher's performance. The searcher's path was recorded by a Global Positioning System (GPS) device attached to the child's waist. Participants were neither rewarded nor rated on their performance. Variation in the conspicuity of the targets influenced search performance as expected; cryptic targets resulted in slower searches and longer, more tortuous paths. Extracting the main features of the paths showed that the children: (1) paid little attention to the spatial distribution and, at least in the conspicuous condition, approximately followed a nearest-neighbor pattern of target collection, and (2) were strongly influenced by the conspicuity of the targets. We implemented a simple statistical model for the search rules, mimicking the children's behavior at the level of individual (coarsened) steps. The model reproduced the main features of the children's paths without invoking memory or planning.

  11. Impact of smart metering data aggregation on distribution system state estimation

    OpenAIRE

    Chen, Qipeng; Kaleshi, Dritan; Fan, Zhong; Armour, Simon

    2016-01-01

    Pseudo medium/low voltage (MV/LV) transformer loads are usually used as partial inputs to distribution system state estimation (DSSE) in MV systems. Such pseudo loads can be represented by the aggregation of smart metering (SM) data. This follows the government restriction that distribution network operators (DNOs) may only use aggregated SM data. Therefore, we assess the subsequent performance of the DSSE, which shows the impact of this restriction - it affects the voltage angle estimation...

  12. Estimation of monthly solar radiation distribution for solar energy system analysis

    International Nuclear Information System (INIS)

    Coskun, C.; Oktay, Z.; Dincer, I.

    2011-01-01

    The concept of probability density frequency, which has been used successfully for analyses of wind speed and outdoor temperature distributions, is now modified and proposed for estimating solar radiation distributions for the design and analysis of solar energy systems. In this study, the global solar radiation distribution is comprehensively analyzed for photovoltaic (PV) panel and thermal collector systems. In this regard, a case study is conducted with actual global solar irradiation data of the last 15 years recorded by the Turkish State Meteorological Service. It is found that the intensity of global solar irradiance greatly affects energy and exergy efficiencies and hence the performance of collectors. -- Research highlights: → The first study to apply the global solar radiation distribution in solar system analyses. → The first study showing the global solar radiation distribution as a function of solar irradiance intensity. → Time probability frequency and probability power distributions do not have similar patterns in each month. → There is no relation between the distribution of annual time lapse and solar energy with respect to the intensity of solar irradiance.

  13. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing the expected grain size distribution on the basis of intersection size histogram data. To examine these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.

  14. Influence of boundary effects on electron beam dose distribution formation in multilayer targets

    International Nuclear Information System (INIS)

    Kaluska, I.; Zimek, Z.; Lazurik, V.T.; Lazurik, V.M.; Popov, G.F.; Rogov, Y.V.

    2010-01-01

    Computational dosimetry plays a significant role in industrial radiation processing, alongside dose measurements in products irradiated with electron beams (EB), X-rays, and gamma rays from radionuclide sources. Accurate and validated programs for absorbed dose calculations are required for computational dosimetry. The program ModeStEB (modelling of EB processing in three-dimensional (3D) multilayer flat targets) was designed specially for the simulation and optimization of industrial radiation processing and the calculation of the 3D absorbed dose distribution within multilayer packages. The package is irradiated with a scanned EB on an industrial radiation facility based on a pulsed or continuous electron accelerator in the electron energy range from 0.1 to 25 MeV. Simulation of EB dose distributions in the multilayer targets was accomplished using the Monte Carlo (MC) method. Experimental verification of the MC simulation predictions for EB dose distribution formation in a stack of plates interleaved with polyvinylchloride (PVC) dosimetric films (DF), placed within a packing box and irradiated with a scanned 10 MeV EB on a moving conveyor, is discussed. (authors)

  15. Validation of a Robust Neural Real-Time Voltage Estimator for Active Distribution Grids on Field Data

    DEFF Research Database (Denmark)

    Pertl, Michael; Douglass, Philip James; Heussen, Kai

    2018-01-01

    The installation of measurements in distribution grids enables the development of data-driven methods for the power system. However, these methods have to be validated in order to understand their limitations and capabilities. This paper presents a systematic validation of a neural network approach for voltage estimation in active distribution grids by means of measured data from two feeders of a real low voltage distribution grid. The approach enables real-time voltage estimation at locations in the distribution grid where otherwise only non-real-time measurements are available.

  16. Processing statistics: an examination of focused and distributed attention using event related potentials.

    Science.gov (United States)

    Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan

    2013-06-07

    Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Distributional effects of the Australian Renewable Energy Target (RET) through wholesale and retail electricity price impacts

    International Nuclear Information System (INIS)

    Cludius, Johanna; Forrest, Sam; MacGill, Iain

    2014-01-01

    The Australian Renewable Energy Target (RET) has spurred significant investment in renewable electricity generation, notably wind power, over the past decade. This paper considers distributional implications of the RET for different energy users. Using time-series regression, we show that the increasing amount of wind energy has placed considerable downward pressure on wholesale electricity prices through the so-called merit order effect. On the other hand, RET costs are passed on to consumers in the form of retail electricity price premiums. Our findings highlight likely significant redistributive transfers between different energy user classes under current RET arrangements. In particular, some energy-intensive industries are benefiting from lower wholesale electricity prices whilst being largely exempted from contributing to the costs of the scheme. By contrast, many households are paying significant RET pass through costs whilst not necessarily benefiting from lower wholesale prices. A more equitable distribution of RET costs and benefits could be achieved by reviewing the scope and extent of industry exemptions and ensuring that methodologies to estimate wholesale price components in regulated electricity tariffs reflect more closely actual market conditions. More generally, these findings support the growing international appreciation that policy makers need to integrate distributional assessments into policy design and implementation. - Highlights: • The Australian RET has complex yet important distributional impacts on different energy users. • Likely wealth transfers from residential and small business consumers to large energy-intensive industry. • Merit order effects of wind likely overcompensate exempt industry for contribution to RET costs. • RET costs for households could be reduced if merit order effects were adequately passed through. • Need for distributional impact assessments when designing and implementing clean energy policy

  18. TRMM Satellite Algorithm Estimates to Represent the Spatial Distribution of Rainstorms

    Directory of Open Access Journals (Sweden)

    Patrick Marina

    2017-01-01

    On-site measurements from rain gauges provide important information for the design, construction, and operation of water resources engineering projects, groundwater potentials, and water supply and irrigation systems. A dense gauging network is needed to accurately characterize the variation of rainfall over a region, which is impractical where networks are limited, such as in Sarawak, Malaysia. Hence, satellite-based algorithm estimates are introduced as an innovative solution to these challenges. With accessible dataset retrievals from public domain websites, they have become a useful source of rainfall measurements for a wider coverage area at finer temporal resolution. This paper aims to investigate the rainfall estimates prepared by the Tropical Rainfall Measuring Mission (TRMM) to determine whether they are suitable to represent the distribution of extreme rainfall in the Sungai Sarawak Basin. Based on the findings, more uniform correlations for the investigated storms can be observed for low to medium altitudes (>40 MASL). For the investigated events of Jan 05-11, 2009, the normalized root mean square error (NRMSE) was 36.7%, with good correlation (CC = 0.9). These findings suggest that satellite algorithm estimates from TRMM are suitable to represent the spatial distribution of extreme rainfall.
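
    The two verification scores quoted above are straightforward to reproduce for any pair of co-located series (Python); note that NRMSE conventions vary, and normalisation by the gauge mean is assumed here.

    ```python
    import numpy as np

    def nrmse_and_cc(satellite, gauge):
        # RMSE normalised by the gauge mean (one common convention) and the
        # Pearson correlation coefficient between the two series.
        rmse = np.sqrt(np.mean((satellite - gauge) ** 2))
        return 100.0 * rmse / gauge.mean(), np.corrcoef(satellite, gauge)[0, 1]
    ```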

  19. Target micro-displacement measurement by a "comb" structure of intensity distribution in laser plasma propulsion

    Science.gov (United States)

    Zheng, Z. Y.; Zhang, S. Q.; Gao, L.; Gao, H.

    2015-05-01

    A "comb" structure of beam intensity distribution is designed and achieved to measure a target displacement of micrometer level in laser plasma propulsion. Base on the "comb" structure, the target displacement generated by nanosecond laser ablation solid target is measured and discussed. It is found that the "comb" structure is more suitable for a thin film target with a velocity lower than tens of millimeters per second. Combing with a light-electric monitor, the `comb' structure can be used to measure a large range velocity.

  20. Design of the solid target structure and the study on the coolant flow distribution in the solid target using the 2-dimensional flow analysis

    International Nuclear Information System (INIS)

    Haga, Katsuhiro; Terada, Atsuhiko; Ishikura, Shuichi; Teshigawara, Makoto; Kinoshita, Hidetaka; Kobayashi, Kaoru; Kaminaga, Masaki; Hino, Ryutaro; Susuki, Akira

    1999-11-01

    A solid target cooled by heavy water is presently under development under the Neutron Science Research Project of the Japan Atomic Energy Research Institute (JAERI). Target plates of several millimeters thickness made of heavy metal are used as the spallation target material; they are placed face to face in a row with one-to-two-millimeter gaps in between, through which heavy water flows as the coolant. Based on the design criteria regarding target plate cooling, the volume percentage of the coolant, and the thermal stress produced in the target plates, we conducted a thermal and hydraulic analysis with a one-dimensional target plate model. We chose tungsten as the target material and decided on the various target plate thicknesses. We then calculated the temperature and the thermal stress in the target plates using a two-dimensional model, and confirmed the validity of the target plate thicknesses. Based on these analytical results, we proposed a target structure in which forty target plates are divided into six groups, with each group cooled by a single pass of coolant. In order to investigate the relationship between the distribution of the coolant flow, the pressure drop, and the coolant velocity, we conducted a hydraulic analysis using a general-purpose hydraulic analysis code. As a result, we found that a uniform coolant flow distribution can be achieved over a wide range of flow velocity conditions in the target plate cooling channels, from 1 m/s to 10 m/s. The pressure drop along the coolant path was 0.09 MPa and 0.17 MPa at coolant flow velocities of 5 m/s and 7 m/s, respectively, which are required to cool the 1.5 MW and 2.5 MW solid targets. (author)

  1. Likelihood Estimation of Gamma Ray Bursts Duration Distribution

    OpenAIRE

    Horvath, Istvan

    2005-01-01

    Two classes of Gamma Ray Bursts have been identified so far, characterized by T90 durations shorter and longer than approximately 2 seconds. It was shown that the BATSE 3B data allow a good fit with three Gaussian distributions in log T90. In the same volume of ApJ, another paper suggested that a third class of GRBs may exist. Using the full BATSE catalog, here we present the maximum likelihood estimation, which gives a 0.5% probability of there being only two subclasses. The MC simulation co...
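
    The two- versus three-component question can be explored with off-the-shelf mixture fitting (Python/scikit-learn). The sketch below maximizes the likelihood for k = 1-3 Gaussian components in log10(T90); the paper's 0.5% figure comes from a dedicated likelihood-ratio analysis, so the BIC shown here is only a common stand-in.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def compare_components(log_t90, ks=(1, 2, 3)):
        # Fit k-component Gaussian mixtures to log10(T90) and report the total
        # log-likelihood and BIC for each k.
        x = np.asarray(log_t90).reshape(-1, 1)
        for k in ks:
            gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
            print(k, gm.score(x) * len(x), gm.bic(x))
    ```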

  2. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Sharawi, Mohammad S.; Alouini, Mohamed-Slim

    2017-01-01

    Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithm framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations with an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and achieving better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  4. A "total parameter estimation" method in the varification of distributed hydrological models

    Science.gov (United States)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting, and model parameters are commonly estimated from discharge measurements at the catchment outlets. With advances in hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, we can obtain a more realistic representation of the actual water cycle within the simulation model. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic, and its accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is concentrated during the rainy season from June to August each year. During other months, many of the perennial rivers within the river basin dry up. Thus, runoff-only calibration does not fully exploit a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across the various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe river basin in

  5. Bayesian Estimation of Two-Parameter Weibull Distribution Using Extension of Jeffreys' Prior Information with Three Loss Functions

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

    Full Text Available The Weibull distribution has been observed to be one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Numerous studies have sought to determine the best method for estimating its parameters. Recently, much attention has been given to the Bayesian estimation approach, which competes with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely the linear exponential (LINEX) loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. These methods are compared using mean square error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean square error and absolute bias for both the scale parameter α and the shape parameter β for the given values of the extension of Jeffreys' prior.
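
    As a rough illustration of the comparison described above, the sketch below computes the Weibull MLE with scipy and then a grid-approximated posterior under a generic extension-of-Jeffreys prior π(β, α) ∝ (βα)^(-c), reporting the shape estimate under squared-error and LINEX losses. The prior form, the constant c, and the LINEX parameter a are assumptions for illustration, not the paper's exact setup.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    data = weibull_min.rvs(c=1.5, scale=2.0, size=50, random_state=1)  # beta=1.5, alpha=2

    def neg_loglik(params):
        beta, alpha = params                      # shape, scale
        if beta <= 0 or alpha <= 0:
            return np.inf
        return -np.sum(weibull_min.logpdf(data, c=beta, scale=alpha))

    mle = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead").x
    print("MLE (shape, scale):", mle)

    # grid posterior under an assumed extension-of-Jeffreys prior (beta*alpha)^(-c)
    c = 1.0
    betas, alphas = np.linspace(0.5, 3.0, 150), np.linspace(0.5, 4.0, 150)
    loglik = np.array([[-neg_loglik((b, a)) for a in alphas] for b in betas])
    logpost = loglik - c * (np.log(betas)[:, None] + np.log(alphas)[None, :])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()

    post_beta = post.sum(axis=1)                  # marginal posterior of the shape
    a_linex = 0.5                                 # assumed LINEX asymmetry parameter
    bayes_se = np.sum(betas * post_beta)          # posterior mean = squared-error loss
    bayes_linex = -np.log(np.sum(np.exp(-a_linex * betas) * post_beta)) / a_linex
    print("Bayes shape: squared-error", bayes_se, "LINEX", bayes_linex)
    ```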

  6. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations.

  7. A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.

    Science.gov (United States)

    Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin

    2018-04-12

    This paper studies and verifies the target-number and category-resolution method in multi-target cases and the target depth-resolution method for aerial targets. First, target depth resolution is performed using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and number resolution in multi-target cases are realized in combination with the bearing-time recording information, and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using Monte Carlo simulation, the feasibility of the proposed target-number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of aerial and surface targets, the simulation results verify that there is only an amplitude difference between the aerial target field and the surface target field under the same environmental parameters, so an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity, so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial-target three-dimensional depth-resolution algorithm is verified.

  8. Box-Particle Cardinality Balanced Multi-Target Multi-Bernoulli Filter

    OpenAIRE

    L. Song; X. Zhao

    2014-01-01

    As a generalization of particle filtering, the box-particle filter (Box-PF) has the potential to process measurements affected by bounded errors of unknown distribution and by biases. Inspired by the Box-PF, a novel implementation for multi-target tracking, called the box-particle cardinality balanced multi-target multi-Bernoulli (Box-CBMeMBer) filter, is presented in this paper. More importantly, to eliminate the negative effect of clutter in the estimation of the number of targets, an improved generali...

  9. PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2010-12-01

    Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties that arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life-testing experiments are the Type-I and Type-II censoring schemes; the hybrid censoring scheme is a mixture of the two. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
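
    A minimal numerical sketch of the maximum likelihood part, assuming Type-I hybrid censoring (the test stops at the r-th failure or at time T, whichever comes first): observed failures contribute the Lomax density f(x) = (α/λ)(1 + x/λ)^-(α+1), and each censored unit contributes the survival term (1 + τ/λ)^-α at the stopping time τ. Parameter values and sample sizes are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, failures, n_cens, tau):
        """Hybrid-censored Lomax negative log-likelihood: failures enter via
        the pdf, censored units via the survival function at time tau."""
        a, l = params
        if a <= 0 or l <= 0:
            return np.inf
        logpdf = np.log(a / l) - (a + 1) * np.log1p(failures / l)
        return -(logpdf.sum() - n_cens * a * np.log1p(tau / l))

    rng = np.random.default_rng(2)
    n, r, T = 40, 30, 5.0                        # stop at r-th failure or time T
    alpha0, lam0 = 2.0, 2.0
    x = np.sort(lam0 * ((1 - rng.random(n)) ** (-1 / alpha0) - 1))  # Lomax sample
    tau = min(x[r - 1], T)                       # hybrid stopping time
    failures = x[x <= tau]
    res = minimize(neg_loglik, x0=[1.0, 1.0],
                   args=(failures, n - failures.size, tau), method="Nelder-Mead")
    print("MLE (alpha, lambda):", res.x)
    ```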

  10. Inverse estimation of the particle size distribution using the Fruit Fly Optimization Algorithm

    International Nuclear Information System (INIS)

    He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming

    2015-01-01

    The Fruit Fly Optimization Algorithm (FOA) is applied to retrieve the particle size distribution (PSD) for the first time. The direct problems are solved by the modified Anomalous Diffraction Approximation (ADA) and the Lambert–Beer law. First, three commonly used monomodal PSDs, i.e. the Rosin–Rammler (R–R) distribution, the normal (N–N) distribution and the logarithmic normal (L–N) distribution, and the bimodal Rosin–Rammler distribution function are estimated in the dependent model. All the results show that the FOA can be used as an effective technique to estimate PSDs under the dependent model. Then, an optimal wavelength selection technique is proposed to improve the retrieval of bimodal PSDs. Finally, combined with two general functions, i.e. the Johnson's S_B (J-S_B) function and the modified beta (M-β) function, the FOA is employed to recover actual measured aerosol PSDs over Beijing and Hangzhou obtained from the Aerosol Robotic Network (AERONET). All the numerical simulations and experimental results demonstrate that the FOA can be used to retrieve actual measured PSDs, and more reliable and accurate results can be obtained if the J-S_B function is employed.
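
    The FOA loop itself is only a few lines: flies sample candidate solutions around the swarm location, the "smell" of each candidate is the inverse retrieval residual, and the swarm flies to the best spot. The sketch below is a bare-bones multi-dimensional variant applied to a toy inverse problem, recovering Rosin–Rammler parameters from noisy synthetic data; a real application would replace the toy objective with the ADA/Lambert–Beer forward model, and all constants are assumptions.

    ```python
    import numpy as np

    def foa_minimize(objective, x0, n_flies=30, n_iter=300, step=0.5, seed=0):
        """Bare-bones Fruit Fly Optimization: flies sample points around the
        swarm location; the swarm flies to the best-smelling (lowest-cost) point."""
        rng = np.random.default_rng(seed)
        center = np.asarray(x0, dtype=float)
        best_x, best_f = center.copy(), objective(center)
        for _ in range(n_iter):
            flies = center + step * rng.uniform(-1, 1, size=(n_flies, center.size))
            costs = np.array([objective(f) for f in flies])
            i = costs.argmin()
            if costs[i] < best_f:                 # vision phase: relocate the swarm
                best_f, best_x = costs[i], flies[i].copy()
                center = best_x
        return best_x, best_f

    # toy inverse problem: recover Rosin-Rammler parameters from noisy data
    def rr_cdf(d, d63, n):
        return 1.0 - np.exp(-(d / d63) ** n)

    d = np.linspace(0.5, 10.0, 40)
    meas = rr_cdf(d, 3.0, 1.8) + 0.01 * np.random.default_rng(1).standard_normal(d.size)
    residual = lambda p: np.sum((rr_cdf(d, abs(p[0]) + 1e-9, abs(p[1]) + 1e-9) - meas) ** 2)
    params, err = foa_minimize(residual, x0=[1.0, 1.0])
    print("recovered (d63, n):", params, "residual:", err)
    ```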

  11. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    OpenAIRE

    Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...

  12. Preliminary estimation of minimum target dose in intracavitary radiotherapy for cervical cancer

    Energy Technology Data Exchange (ETDEWEB)

    Ohara, Kiyoshi; Oishi-Tanaka, Yumiko; Sugahara, Shinji; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine

    2001-08-01

    In intracavitary radiotherapy (ICRT) for cervical cancer, the minimum target dose (D_min) pertains to local disease control more directly than the reference point A dose (D_A). However, ICRT has traditionally been performed without specifying D_min, since the target volume was not identified. We have estimated D_min retrospectively by identifying tumors on magnetic resonance (MR) images. Pre- and posttreatment MR images of 31 patients treated with high-dose-rate ICRT were used. ICRT was performed once weekly at 6.0 Gy D_A, and involved 2-5 insertions for each patient, 119 insertions in total. D_min was calculated simply at the point A level using the tumor width (W_A), for comparison with D_A. W_A at each insertion was estimated by regression analysis of the pre- and posttreatment W_A. D_min for each insertion varied from 3.0 to 46.0 Gy, a 16-fold difference. The ratio of total D_min to total D_A for each patient varied from 0.5 to 6.5. The intrapatient D_min difference between the initial and final insertions varied from 1.1 to 3.4. This preliminary estimation revealed that D_min varies widely under generic dose prescription. Thorough D_min specification will become possible when ICRT-applicator insertion is performed under MR imaging. (author)

  13. Experimental study on energy distribution of the hot electrons generated by femtosecond laser interacting with solid targets

    International Nuclear Information System (INIS)

    Gu Yuqiu; Zheng Zhijian; Zhou Weimin; Wen Tianshu; Chunyu Shutai; Cai Dafeng; Sichuan Univ., Chengdu; Neijiang Teachers College, Neijiang; Jiao Chunye; Chen Hao; Sichuan Univ., Chengdu; Yang Xiangdong

    2005-01-01

    This paper reports experimental results on the hot-electron energy distribution produced during femtosecond laser-solid target interaction. The hot electrons formed an anisotropic energy distribution. In the direction of the target normal, the energy spectrum of the hot electrons was a Maxwellian-like distribution with an effective temperature of 206 keV, which was due to resonance absorption. In the direction of the specular reflection of the laser, the energy spectrum initially exhibited a local plateau and then decreased gradually, which may be produced by several acceleration mechanisms. The effective temperature and the yield of hot electrons in the direction of the target normal are larger than those in the direction of the specular reflection of the laser, which indicates that the resonance absorption mechanism is more effective than the others. (authors)

  14. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit the model to real data. Second, we present its application in risk analysis, where we use the fitted model to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
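
    A compact sketch of the workflow on synthetic returns, under the assumption of a plain EM fit and lower-tail risk measures: fit a two-component normal mixture, invert the mixture CDF for the VaR quantile, and use the closed-form normal tail expectation for CVaR. All numbers are illustrative, not the FBMKLCI data.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def em_mixture(x, n_iter=200):
        """EM fit of a two-component univariate normal mixture."""
        w = 0.5
        mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
        sd = np.array([x.std(), x.std()])
        for _ in range(n_iter):
            r1 = w * norm.pdf(x, mu[0], sd[0])
            r2 = (1 - w) * norm.pdf(x, mu[1], sd[1])
            g = r1 / (r1 + r2)                    # responsibilities of component 1
            w = g.mean()
            mu = np.array([np.sum(g * x) / g.sum(), np.sum((1 - g) * x) / (1 - g).sum()])
            sd = np.sqrt(np.array([np.sum(g * (x - mu[0]) ** 2) / g.sum(),
                                   np.sum((1 - g) * (x - mu[1]) ** 2) / (1 - g).sum()]))
        return w, mu, sd

    def var_cvar(w, mu, sd, level=0.05):
        """Lower-tail VaR quantile and CVaR (mean return below the VaR)."""
        cdf = lambda q: w * norm.cdf(q, mu[0], sd[0]) + (1 - w) * norm.cdf(q, mu[1], sd[1])
        q = brentq(lambda v: cdf(v) - level, mu.min() - 10 * sd.max(), mu.max() + 10 * sd.max())
        tail = sum(wi * (m * norm.cdf((q - m) / s) - s * norm.pdf((q - m) / s))
                   for wi, m, s in zip([w, 1 - w], mu, sd))   # closed-form tail integral
        return q, tail / level

    rng = np.random.default_rng(3)
    returns = np.concatenate([rng.normal(0.01, 0.03, 800), rng.normal(-0.02, 0.08, 200)])
    w, mu, sd = em_mixture(returns)
    var5, cvar5 = var_cvar(w, mu, sd, level=0.05)
    print(f"5% VaR {var5:.4f}, 5% CVaR {cvar5:.4f}")
    ```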

  15. Estimation of distributed Fermat-point location for wireless sensor networking.

    Science.gov (United States)

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies a triangular location-estimation area formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total distance to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding-box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Moreover, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding-box strategies.
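
    The Fermat point itself is cheap to compute. A minimal sketch, using the Weiszfeld iteration for the geometric median, which coincides with the Fermat point of a triangle when no interior angle exceeds 120°; the beacon coordinates are invented.

    ```python
    import numpy as np

    def fermat_point(vertices, n_iter=100, eps=1e-9):
        """Weiszfeld iteration for the geometric median of the three beacon
        positions; this is the Fermat point when no triangle angle exceeds 120 deg."""
        p = vertices.mean(axis=0)                 # start at the centroid
        for _ in range(n_iter):
            d = np.linalg.norm(vertices - p, axis=1)
            if np.any(d < eps):                   # landed on a vertex
                break
            w = 1.0 / d
            p = (w[:, None] * vertices).sum(axis=0) / w.sum()
        return p

    beacons = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])  # hypothetical beacons
    print(fermat_point(beacons))
    ```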

  16. Methods for obtaining distributions of uranium occurrence from estimates of geologic features

    International Nuclear Information System (INIS)

    Ford, C.E.; McLaren, R.A.

    1980-04-01

    The problem addressed in this paper is the determination of a quantitative estimate of a resource from estimates of fundamental variables which describe the resource. Due to uncertainty about the estimates, these basic variables are stochastic. The evaluation of random equations involving these variables is the core of the analysis process. The basic variables are originally described in terms of a low and a high percentile (the 5th and 95th, for example) and a central value (the mode, mean or median). The variable thus described is then generally assumed to be represented by a three-parameter lognormal distribution. Expressions involving these variables are evaluated by computing the first four central moments of the random functions (which are usually products and sums of variables). Stochastic independence is discussed. From the final set of moments a Pearson distribution is obtained; the high values of skewness and kurtosis resulting from uranium data require obtaining Pearson curves beyond those described in published tables. A cubic spline solution to the Pearson differential equation accomplishes this task. A sample problem is used to illustrate the application of the process; sensitivity to the estimated values of the basic variables is discussed. Appendices contain details of the methods and descriptions of computer programs

  18. Estimation of neutron energy distributions from prompt gamma emissions

    Science.gov (United States)

    Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.

    2017-11-01

    A technique for estimating the incident neutron energy distribution from the prompt gamma intensities emitted by a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photo peaks in a gamma detector, are related to the incident neutron energy distribution through a convolution of the response of the system generating the prompt gammas to mono-energetic neutrons. Presently, the system studied is a cylinder of high-density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover and exposed to neutrons. The five emitted prompt gamma peaks, from hydrogen, boron, carbon and lead, can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo deconvolution code GAMCD. The feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system for several incident neutron spectra spanning different energy ranges.
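
    GAMCD is not publicly available, but the structure of the unfolding problem is easy to reproduce: a few measured peak intensities y, a response matrix R from Monte Carlo runs, and many more energy bins than peaks. The sketch below substitutes regularized non-negative least squares for the genetic-algorithm search; the response matrix and spectrum are synthetic stand-ins.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # R[i, j]: intensity of prompt-gamma peak i per unit fluence in energy bin j
    # (synthetic here; in the paper it comes from Monte Carlo response runs)
    rng = np.random.default_rng(4)
    n_peaks, n_bins = 5, 12
    R = rng.random((n_peaks, n_bins))

    true_spec = np.exp(-0.5 * ((np.arange(n_bins) - 4) / 2.0) ** 2)  # toy spectrum
    y = R @ true_spec                              # "measured" peak intensities

    # 5 equations, 12 unknowns: stack a smoothness penalty (second differences)
    # under R to regularize the under-determined unfolding
    lam = 0.1
    D = np.diff(np.eye(n_bins), 2, axis=0)
    A = np.vstack([R, lam * D])
    b = np.concatenate([y, np.zeros(D.shape[0])])
    phi, _ = nnls(A, b)                            # non-negative unfolded spectrum
    print(np.round(phi, 3))
    ```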

  19. Estimation of dose distribution in occupationally exposed individuals to FDG-18F

    International Nuclear Information System (INIS)

    Lacerda, Isabelle V. Batista de; Cabral, Manuela O. Monteiro; Vieira, Jose Wilson

    2014-01-01

    The use of unsealed radiation sources in nuclear medicine can lead to significant intakes of radionuclides, especially for occupationally exposed individuals (OEIs) during the production and handling of radiopharmaceuticals. In this study, computer simulation is proposed as an alternative methodology for evaluating the absorbed dose distribution and the effective dose in OEIs. For this purpose, the Exposure Computational Model (ECM) named FSUP (Female Adult Mesh - supine) was used. This ECM is composed of the voxel phantom FASH (Female Adult MeSH) in the supine position, the MC code EGSnrc, and a general internal-source simulation algorithm. This algorithm was modified to meet the specific needs of positron emission from FDG-18F. The results obtained are presented as absorbed dose per accumulated activity. To obtain the absorbed dose distribution it was necessary to use accumulated activity data from in vivo bioassay. The absorbed dose distribution and the estimated effective dose in this study did not exceed the limits for occupational exposure. Therefore, the creation of a database with the distribution of accumulated activity is suggested in order to estimate the absorbed dose in radiosensitive organs and the effective dose for OEIs in similar environments. (author)

  20. Angular distributions of particles sputtered from multicomponent targets with gas cluster ions

    Energy Technology Data Exchange (ETDEWEB)

    Ieshkin, A.E. [Faculty of Physics, Lomonosov Moscow State University, Leninskie Gory, Moscow 119991 (Russian Federation); Ermakov, Yu.A., E-mail: yuriermak@yandex.ru [Skobeltsyn Nuclear Physics Research Institute, Lomonosov Moscow State University, Leninskie Gory, Moscow 119991 (Russian Federation); Chernysh, V.S. [Faculty of Physics, Lomonosov Moscow State University, Leninskie Gory, Moscow 119991 (Russian Federation)

    2015-07-01

    The experimental angular distributions of atoms sputtered from polycrystalline W, Cd and Ni-based alloys by 10 keV Ar cluster ions are presented. RBS was used to analyze the material deposited on a collector. It has been found that the sputtering mechanism connected with the elastic properties of materials has a significant influence on the angular distributions of the sputtered components. An effect of non-stoichiometric sputtering at different emission angles has been found for the alloys under cluster ion bombardment. Substantial smoothing of the surface relief was observed for all targets irradiated with cluster ions.

  1. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    Science.gov (United States)

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on the mean rewards under both the current estimate of the optimal TR and the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing the efficiency of inference to be discussed. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  2. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    Science.gov (United States)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observation probability differs depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses fracture simulations and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5 per square degree, then maximum accuracy occurs at a grid size of 1° × 1°.

  3. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used to identify the best-fitting distributions to represent the annual maximum streamflow series at seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments(t_1, 0), t_1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of the TL-moments(t_1, 0), t_1 = 1, 2, 3, 4, methods was compared with that of L-moments through Monte Carlo simulation and streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments(4, 0)) of the LN3 distribution was the most appropriate for most of the stations of the annual maximum streamflow series in Johor, Malaysia.
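
    Sample TL-moments are direct to compute from order statistics. A small sketch, assuming the Elamir-Seheult (2003) unbiased estimator; t1 = t2 = 0 recovers ordinary L-moments, and (t1, t2) = (4, 0) matches the trimming favoured above. The data are synthetic.

    ```python
    import numpy as np
    from math import comb

    def tl_moment(x, r, t1=0, t2=0):
        """Sample TL-moment of order r with trimming (t1, t2), after Elamir &
        Seheult (2003); t1 = t2 = 0 gives the ordinary sample L-moment."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        total = 0.0
        for i in range(t1 + 1, n - t2 + 1):        # 1-based order-statistic index
            s = sum((-1) ** k * comb(r - 1, k)
                    * comb(i - 1, r + t1 - 1 - k) * comb(n - i, t2 + k)
                    for k in range(r))
            total += s * x[i - 1]
        return total / (r * comb(n, r + t1 + t2))

    rng = np.random.default_rng(5)
    flows = rng.lognormal(mean=3.0, sigma=0.8, size=60)   # toy annual maxima
    for r in (1, 2, 3, 4):
        print(f"TL-moment({r}) with trimming (4, 0):", round(tl_moment(flows, r, 4, 0), 3))
    ```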

  4. The Spatial Distribution of Poverty in Vietnam and the Potential for Targeting

    OpenAIRE

    Minot, Nicholas; Baulch, Bob

    2002-01-01

    The authors combine household survey and census data to construct a provincial poverty map of Vietnam and evaluate the accuracy of geographically targeted antipoverty programs. First, they estimate per capita expenditure as a function of selected household and geographic characteristics using the 1998 Vietnam Living Standards Survey. Next, they combine the results with data on the same hou...

  5. BAYESIAN ESTIMATION OF THE SHAPE PARAMETER OF THE GENERALISED EXPONENTIAL DISTRIBUTION UNDER DIFFERENT LOSS FUNCTIONS

    Directory of Open Access Journals (Sweden)

    SANKU DEY

    2010-11-01

    Full Text Available The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risks based on a class of non-informative priors under three loss functions, namely the quadratic loss function (QLF), the squared log-error loss function (SLELF) and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among the three. The performances of the estimators are, therefore, compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.

  6. Multiobjective memetic estimation of distribution algorithm based on an incremental tournament local searcher.

    Science.gov (United States)

    Yang, Kaifeng; Mu, Li; Yang, Dongdong; Zou, Feng; Wang, Lei; Jiang, Qiaoyong

    2014-01-01

    A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher and ε-dominance. In addition, two multiobjective problems with variable linkages strictly based on manifold distribution are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based approaches that exploit this manifold feature build the probability distribution model only from global statistical information of the population; the information carried by promising individuals is thus not well exploited, which hampers the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Furthermore, since ε-dominance is a strategy that can help a multiobjective algorithm obtain well-distributed solutions and has low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The novel memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems with and without variable linkages of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.

  8. Characterization of spherical core–shell particles by static light scattering. Estimation of the core- and particle-size distributions

    International Nuclear Information System (INIS)

    Clementi, Luis A.; Vega, Jorge R.; Gugliotta, Luis M.; Quirantes, Arturo

    2012-01-01

    A numerical method is proposed for the characterization of core–shell spherical particles from static light scattering (SLS) measurements. The method is able to estimate the core size distribution (CSD) and the particle size distribution (PSD) through the following two-step procedure: (i) the estimation of the bivariate core–particle size distribution (C–PSD), by solving a linear ill-conditioned inverse problem through a generalized Tikhonov regularization strategy, and (ii) the calculation of the CSD and the PSD from the estimated C–PSD. First, the method was evaluated on the basis of several simulated examples, with polystyrene–poly(methyl methacrylate) core–shell particles of different CSDs and PSDs. Then, two samples of hematite–yttrium basic carbonate core–shell particles were successfully characterized. In all analyzed examples, acceptable estimates of the PSD and the average diameter of the CSD were obtained. Based on the single-scattering Mie theory, the proposed method is an effective tool for characterizing core–shell colloidal particles larger than their Rayleigh limits without requiring any a-priori assumption on the shapes of the size distributions. Under such conditions, the PSDs can always be adequately estimated, while acceptable CSD estimates are obtained when the core/shell particles exhibit either a high optical contrast, or a moderate optical contrast but a high 'average core diameter'/'average particle diameter' ratio. -- Highlights: ► Particles with core–shell morphology are characterized by static light scattering. ► Core size distribution and particle size distribution are successfully estimated. ► Simulated and experimental examples are used to validate the numerical method. ► The positive effect of a large core/shell optical contrast is investigated. ► No a-priori assumption on the shapes of the size distributions is required.

  9. Estimation of current density distribution of PAFC by analysis of cell exhaust gas

    Energy Technology Data Exchange (ETDEWEB)

    Kato, S.; Seya, A. [Fuji Electric Co., Ltd., Ichihara-shi (Japan); Asano, A. [Fuji Electric Corporate, Ltd., Yokosuka-shi (Japan)

    1996-12-31

    Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for producing fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in the cell plane calculated by the simulation was verified.

  10. Estimating the transmission potential of supercritical processes based on the final size distribution of minor outbreaks.

    Science.gov (United States)

    Nishiura, Hiroshi; Yan, Ping; Sleeman, Candace K; Mode, Charles J

    2012-02-07

    The use of the final size distribution of minor outbreaks for estimating the reproduction numbers of supercritical epidemic processes has yet to be considered. We used a branching process model to derive the final size distribution of minor outbreaks, assuming a reproduction number above unity, and applied the method to final size data for pneumonic plague. Pneumonic plague is a rare disease with only one documented major epidemic in a spatially limited setting. Because the final size distribution of a minor outbreak needs to be normalized by the probability of extinction, we assume that the dispersion parameter (k) of the negative-binomial offspring distribution is known, and examine the sensitivity of the reproduction number to variation in dispersion. Assuming a geometric offspring distribution with k=1, the reproduction number was estimated at 1.16 (95% confidence interval: 0.97-1.38). When less dispersed, with k=2, the maximum likelihood estimate of the reproduction number was 1.14. These estimates agree with those published from transmission network analysis, indicating that the human-to-human transmission potential of pneumonic plague is not very high. Given only minor outbreaks, transmission potential is not sufficiently assessed by directly counting the number of offspring. Since the absence of a major epidemic does not guarantee a subcritical process, the proposed method allows us to conservatively regard epidemic data from minor outbreaks as supercritical, and to yield estimates of threshold values above unity.
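
    A sketch of the likelihood machinery, assuming the standard closed form for the final size of a Galton-Watson chain with negative-binomial offspring (mean R, dispersion k) and conditioning on extinction when R > 1; the outbreak-size data below are invented, not the plague data.

    ```python
    import numpy as np
    from scipy.special import gammaln
    from scipy.optimize import brentq, minimize_scalar

    def log_final_size(j, R, k):
        """log P(total chain size = j) for negative-binomial offspring:
        Gamma(kj+j-1)/(Gamma(kj)Gamma(j+1)) (R/k)^(j-1) / (1+R/k)^(kj+j-1)."""
        j = np.asarray(j, dtype=float)
        return (gammaln(k * j + j - 1) - gammaln(k * j) - gammaln(j + 1)
                + (j - 1) * np.log(R / k) - (k * j + j - 1) * np.log1p(R / k))

    def extinction_prob(R, k):
        """Smallest root of q = (1 + (R/k)(1-q))^(-k); equals 1 when R <= 1."""
        if R <= 1 + 1e-9:
            return 1.0
        return brentq(lambda q: q - (1 + (R / k) * (1 - q)) ** (-k), 1e-12, 1 - 1e-9)

    def neg_loglik(R, sizes, k):
        # minor outbreaks only: condition the final-size law on extinction
        q = extinction_prob(R, k)
        return -(np.sum(log_final_size(sizes, R, k)) - sizes.size * np.log(q))

    sizes = np.array([1] * 10 + [2] * 8 + [3] * 4 + [4, 5, 6, 8, 12, 20])  # toy data
    k = 1.0                                          # geometric offspring
    res = minimize_scalar(lambda R: neg_loglik(R, sizes, k),
                          bounds=(0.05, 5.0), method="bounded")
    print("ML estimate of R:", round(res.x, 3))
    ```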

  12. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is assessed by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.

  13. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  15. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimating the direction of arrival of moving targets using acoustic signatures. Three subspace-based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and the Average Square Difference Function (ASDF). The performance evaluation was conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving along defined geometrical trajectories. The mean absolute error and standard deviation of the DOA estimates with respect to ground truth are used as performance evaluation metrics. Lower mean errors confirm the superiority of subspace-based approaches over TDE-based techniques. Among the compared methods, LS-ESPRIT showed the best performance.
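
    As a hedged illustration of the subspace idea, the sketch below uses narrowband MUSIC, the simplest member of this family, rather than the wideband IWM or ESPRIT variants of the study: eigendecompose the sample covariance, keep the noise subspace, and scan steering vectors for orthogonality. The array geometry, angles, and SNR are made up.

    ```python
    import numpy as np

    def music_spectrum(X, n_sources, scan_deg, spacing=0.5):
        """Narrowband MUSIC pseudo-spectrum for a uniform linear array.
        X: (n_elements, n_snapshots) complex snapshot matrix."""
        n = X.shape[0]
        Rxx = X @ X.conj().T / X.shape[1]             # sample covariance
        _, vecs = np.linalg.eigh(Rxx)                 # eigenvalues ascending
        En = vecs[:, : n - n_sources]                 # noise subspace
        m = np.arange(n)[:, None]
        A = np.exp(2j * np.pi * spacing * m * np.sin(np.deg2rad(scan_deg))[None, :])
        return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

    rng = np.random.default_rng(6)
    true_doa = np.array([-20.0, 15.0])                # degrees
    n_el, n_snap = 8, 200
    m = np.arange(n_el)[:, None]
    A = np.exp(2j * np.pi * 0.5 * m * np.sin(np.deg2rad(true_doa))[None, :])
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    X = A @ S + 0.1 * (rng.standard_normal((n_el, n_snap))
                       + 1j * rng.standard_normal((n_el, n_snap)))

    scan = np.arange(-90.0, 90.5, 0.5)
    P = music_spectrum(X, 2, scan)
    peaks = [i for i in range(1, scan.size - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
    best = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
    print([scan[i] for i in best])                    # approx. [-20.0, 15.0]
    ```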

  16. Parameter Estimations and Optimal Design of Simple Step-Stress Model for Gamma Dual Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Hamdy Mohamed Salem

    2018-03-01

    Full Text Available This paper considers life-testing experiments and how they are affected by stress factors, namely temperature, electrical load, cycling rate and pressure. A major type of accelerated life test is the step-stress model, which allows the experimenter to increase the stress levels beyond normal use during the experiment in order to observe item failures. The test items are assumed to follow the Gamma Dual Weibull distribution. Different methods for estimating the parameters are discussed; these include maximum likelihood estimation and confidence interval estimation, which, based on asymptotic normality, generates narrow intervals for the unknown distribution parameters with high probability. The MathCAD (2001) program is used to illustrate the optimal time procedure through numerical examples.

  17. Collaborative In-Network Processing for Target Tracking

    Directory of Open Access Journals (Sweden)

    Juan Liu

    2003-03-01

    Full Text Available This paper presents a class of signal processing techniques for collaborative signal processing in ad hoc sensor networks, focusing on a vehicle tracking application. In particular, we study two types of commonly used sensors—acoustic-amplitude sensors for target distance estimation and direction-of-arrival sensors for bearing estimation—and investigate how networks of such sensors can collaborate to extract useful information with minimal resource usage. The information-driven sensor collaboration has several advantages: tracking is distributed, and the network is energy-efficient, activated only on a when-needed basis. We demonstrate the effectiveness of the approach to target tracking using both simulation and field data.

  18. Empirical Estimates in Stochastic Optimization via Distribution Tails

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2010-01-01

    Roč. 46, č. 3 (2010), s. 459-471 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR GA402/07/1113; GA ČR(CZ) GA402/08/0107; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic programming problems * Stability * Wasserstein metric * L_1 norm * Lipschitz property * Empirical estimates * Convergence rate * Exponential tails * Heavy tails * Pareto distribution * Risk functional * Empirical quantiles Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010

  19. Performance of Estimation of distribution algorithm for initial core loading optimization of AHWR-LEU

    International Nuclear Information System (INIS)

    Thakur, Amit; Singh, Baltej; Gupta, Anurag; Duggal, Vibhuti; Bhatt, Kislay; Krishnani, P.D.

    2016-01-01

    Highlights: • EDA has been applied to optimize the initial core of AHWR-LEU. • Suitable values of the weighing factor 'α' and the population size in EDA were estimated. • The effect of varying the initial distribution function on the optimized solution was studied. • For comparison, a genetic algorithm was also applied. - Abstract: Population-based evolutionary algorithms now form an integral part of fuel management in nuclear reactors and are frequently used for fuel loading pattern optimization (LPO) problems. In this paper we apply an estimation of distribution algorithm (EDA) to optimize the initial core loading pattern (LP) of AHWR-LEU. In an EDA, new solutions are generated by sampling the probability distribution model estimated from the selected best candidate solutions. The weighing factor 'α' decides the weight given to the current best solutions when updating the probability distribution function after each generation. Wider use of EDAs warrants a comprehensive study of parameters like the population size, the weighing factor 'α' and the initial probability distribution function. In the present study, we have carried out an extensive analysis of these parameters in the EDA. It is observed that choosing a very small value of 'α' may confine the search to the near vicinity of the initial probability distribution function, so that better loading patterns far from the initial distribution function are not considered with due weight. It is also observed that increasing the population size improves the optimized loading pattern; however, the algorithm still fails if the initial distribution function is not close to the expected optimized solution. We have tried to find suitable values of 'α' and the population size for the AHWR-LEU initial core loading pattern optimization problem. For the sake of comparison and completeness, we have also addressed the
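
    The core EDA loop the abstract describes (sample from a probability model, select the best candidates, blend their statistics back into the model with weighing factor α) fits in a few lines. The sketch below is a binary PBIL-style univariate EDA on a toy objective standing in for the loading-pattern fitness; all sizes and the objective are invented, not the AHWR-LEU setup.

    ```python
    import numpy as np

    def pbil_eda(fitness, n_bits, pop=100, n_best=20, alpha=0.3, n_gen=60, seed=0):
        """Univariate EDA (PBIL style): sample candidates from a Bernoulli model,
        select the best, and blend their bit frequencies into the model with
        weighing factor alpha."""
        rng = np.random.default_rng(seed)
        p = np.full(n_bits, 0.5)                  # initial probability distribution
        for _ in range(n_gen):
            X = (rng.random((pop, n_bits)) < p).astype(int)
            scores = np.array([fitness(x) for x in X])
            elite = X[np.argsort(scores)[-n_best:]]
            p = (1 - alpha) * p + alpha * elite.mean(axis=0)
            p = np.clip(p, 0.02, 0.98)            # keep a little exploration
        return p

    # toy stand-in for a loading-pattern objective: match a target pattern
    target = np.tile([1, 0, 1], 10)
    fitness = lambda x: -np.sum(np.abs(x - target))
    p = pbil_eda(fitness, n_bits=30)
    print((p > 0.5).astype(int))
    ```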

  20. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv's Distribution for Quadratic Frequency Modulation Signals.

    Science.gov (United States)

    Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu

    2017-06-21

    For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection that needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), is proposed for QFM signals. The 2D-PMLVD is simple and can be easily implemented using the fast Fourier transform (FFT) and complex multiplication. These measures are analyzed in the paper, including the principle, the cross terms, anti-noise performance, and computational complexity. Compared to three other representative methods, the 2D-PMLVD achieves better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
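
    The 2D-PMLVD itself involves the PSSF and mSFT machinery, but the QFM signal model and the two parameters of interest are easy to make concrete. The sketch below generates a single-component QFM signal and recovers CR and QCR by a naive cubic fit to the unwrapped phase, which works only at high SNR for one component; it is a baseline stand-in, not the paper's algorithm, and all signal parameters are invented.

    ```python
    import numpy as np

    fs, T = 1000.0, 1.0
    t = np.arange(0.0, T, 1.0 / fs)
    f0, cr, qcr = 50.0, 30.0, 40.0           # start freq (Hz), CR (Hz/s), QCR (Hz/s^2)
    phase = 2 * np.pi * (f0 * t + 0.5 * cr * t**2 + qcr * t**3 / 6)
    rng = np.random.default_rng(8)
    s = np.exp(1j * phase) + 0.05 * (rng.standard_normal(t.size)
                                     + 1j * rng.standard_normal(t.size))

    # cubic fit to the unwrapped phase; valid for one component at high SNR
    c3, c2, _, _ = np.polyfit(t, np.unwrap(np.angle(s)), 3)
    print("CR  estimate (Hz/s):  ", c2 / np.pi)       # phase term pi*CR*t^2
    print("QCR estimate (Hz/s^2):", 3 * c3 / np.pi)   # phase term (pi/3)*QCR*t^3
    ```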

  1. Improving control and estimation for distributed parameter systems utilizing mobile actuator-sensor network.

    Science.gov (United States)

    Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian

    2014-07-01

    This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized to improve performance in distributed parameter systems. Using the Lyapunov stability theorem, the velocity of each moving actuator/sensor agent is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms, which penalize disagreement between the estimates, are considered. Both filters ensure the well-posedness of the collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance; the consensus filters converge faster to the plant state when consensus terms are included.

  2. Distributed Road Grade Estimation for Heavy Duty Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Sahlholm, Per

    2011-07-01

    An increasing need for goods and passenger transportation drives continued worldwide growth in traffic. As traffic increases, environmental concerns, traffic safety, and cost efficiency become ever more important. Advancements in microelectronics open the possibility of addressing these issues through new advanced driver assistance systems. Applications such as predictive cruise control, automated gearbox control, predictive front lighting control, and hybrid vehicle state-of-charge control decrease the energy consumption of vehicles and increase safety. These control systems can benefit significantly from preview road grade information. This information is currently obtained using specialized survey vehicles and is not widely available. This thesis proposes new methods to obtain road grade information using on-board sensors. The task of creating road grade maps is addressed by proposing a framework in which vehicles using a road network collect the necessary data for estimating the road grade. The estimation can then be carried out locally in the vehicle or, in the presence of a communication link to the infrastructure, centrally. In either case the accuracy of the map increases over time, and costly road surveys can be avoided. This thesis presents a new distributed method for creating accurate road grade maps for vehicle control applications. Standard heavy duty vehicles in normal operation are used to collect measurements. Estimates from multiple passes along a road segment are merged to form a road grade map, which improves each time a vehicle retraces a route. The design and implementation of the road grade estimator are described, and the performance is experimentally evaluated using real vehicles. Three different grade estimation methods, based on different assumptions on the road grade signal, are proposed and compared. They all use data from sensors that are standard equipment in heavy duty vehicles. Measurements of the vehicle speed and the engine
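
    The thesis text above is cut off, so the sketch below only illustrates the multi-pass merging idea in its simplest plausible form, inverse-variance weighting of per-segment grade estimates from independent passes; the actual estimator in the thesis is more elaborate, and all numbers here are invented.

    ```python
    import numpy as np

    def merge_passes(estimates, variances):
        """Fuse per-segment road-grade estimates from several passes by
        inverse-variance weighting; returns fused grades and their variance."""
        w = 1.0 / np.asarray(variances)
        fused = np.sum(w * estimates, axis=0) / np.sum(w, axis=0)
        return fused, 1.0 / np.sum(w, axis=0)

    true_grade = np.array([0.0, 1.2, 2.5, 1.0, -0.5])    # percent grade per segment
    rng = np.random.default_rng(7)
    sig = np.array([0.4, 0.8, 0.3])                      # per-pass noise level
    est = true_grade + sig[:, None] * rng.standard_normal((3, true_grade.size))
    var = np.tile((sig ** 2)[:, None], (1, true_grade.size))
    fused, fvar = merge_passes(est, var)
    print(np.round(fused, 2), np.round(np.sqrt(fvar), 2))
    ```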

  3. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    Science.gov (United States)

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  4. Estimation of four-dimensional dose distribution using electronic portal imaging device in radiation therapy

    International Nuclear Information System (INIS)

    Mizoguchi, Asumi; Arimura, Hidetaka; Shioyama, Yoshiyuki

    2013-01-01

    We are developing a method to evaluate the four-dimensional radiation dose distribution in a patient's body based on the animated image from an EPID (electronic portal imaging device), which is a beam-direction image acquired during irradiation. First, we obtain the image of the dose emitted from the patient's body at therapy planning, using the therapy planning CT image and a dose evaluation algorithm. Second, we estimate the emission dose image at irradiation using the EPID animated image obtained during irradiation. Third, we obtain an affine transformation matrix that captures respiratory movement in the body by performing linear registration of the emission dose image at therapy planning onto the one at irradiation. Fourth, we apply the affine transformation matrix to the therapy planning CT image to estimate the CT image 'at irradiation'. Finally, we evaluate the four-dimensional dose distribution by calculating the dose distribution in the estimated CT image 'at irradiation' for each frame of the EPID animated image. This scheme may be useful for evaluating therapy results and for risk management. (author)

  5. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    Science.gov (United States)

    Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver

    2016-01-01

    Satellite constellations present unique capabilities and opportunities for Earth-orbiting and near-Earth scientific and communications missions, but they also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one at a time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems-level considerations that have not yet been examined. This paper constitutes a survey of the current state of the art in cost estimating techniques, with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities

  6. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    Science.gov (United States)

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-06-01

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display that lets the surgeon compare the positions of landmarks visible in both a projected model and the live video stream; from this, the surgeon can estimate the accuracy achieved when the system is used to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that the projected errors of surface features provide a reliable predictor of the subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.

  7. Estimation of the location parameter of distributions with known coefficient of variation by record values

    Directory of Open Access Journals (Sweden)

    N. K. Sajeevkumar

    2014-09-01

    Full Text Available In this article, we derive the Best Linear Unbiased Estimator (BLUE) of the location parameter of certain distributions with known coefficient of variation, based on record values. Efficiency comparisons are also made between the proposed estimator and some of the usual estimators. Finally, we analyse a real-life data set to illustrate the utility of the results developed in this article.

  8. Distribution functions to estimate radionuclide solid-liquid distribution coefficients in soils: the case of Cs

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)

    2014-07-01

    In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (K{sub d}) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The K{sub d} values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil; this defined the 'texture/OM' criterion. The second criterion was to group soils by the specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion); the cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of K{sub d} ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The K{sub d} best estimates were defined as the calculated GM values, assuming that K{sub d} values are always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the K{sub d} parameter, a suitable continuous function that contains the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best describe the distribution of the K{sub d} values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not yet been proposed for radionuclide K{sub d} in soils. Therefore, the aim of this work is to create CDFs for radionuclide K{sub d} in soils.
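
    A minimal sketch of the log-normal treatment described above: derive the geometric mean (the K{sub d} best estimate), the geometric standard deviation, and a fitted CDF with a confidence range. The K{sub d} values below are hypothetical illustration data, not entries from the compiled database.

      # Fit a log-normal CDF to Kd data and report GM, GSD and a 95% range.
      import numpy as np
      from scipy import stats

      kd = np.array([120., 310., 95., 560., 210., 80., 430., 150.])  # L/kg, illustrative

      log_kd = np.log(kd)
      gm  = np.exp(log_kd.mean())         # geometric mean -> Kd best estimate
      gsd = np.exp(log_kd.std(ddof=1))    # geometric standard deviation

      cdf = stats.lognorm(s=np.log(gsd), scale=gm)
      lo, hi = cdf.ppf([0.025, 0.975])
      print(f"GM = {gm:.0f}, GSD = {gsd:.2f}, 95% range = [{lo:.0f}, {hi:.0f}] L/kg")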

  9. Estimation of thermochemical behavior of spallation products in mercury target

    International Nuclear Information System (INIS)

    Kobayashi, Kaoru; Kaminaga, Masanori; Haga, Katsuhiro; Kinoshita, Hidetaka; Aso, Tomokazu; Teshigawara, Makoto; Hino, Ryutaro

    2002-02-01

    In order to examine the radiation safety of a spallation mercury target system, especially for source term evaluation, it is necessary to clarify the chemical forms of the spallation products generated by the spallation reaction with the proton beam. The chemical forms of spallation products in mercury, which contains them in large amounts, were estimated using binary phase diagrams and thermochemical equilibrium calculations based on the amounts of the spallation products. The calculation results showed that mercury would dissolve Al, As, B, Be, Bi, C, Co, Cr, Fe, Ga, Ge, Ir, Mo, Nb, Os, Re, Ru, Sb, Si, Ta, Tc, V and W in elemental form, and Ag, Au, Ba, Br, Ca, Cd, Ce, Cl, Cs, Cu, Dy, Er, Eu, F, Gd, Hf, Ho, I, In, K, La, Li, Lu, Mg, Mn, Na, Nd, Ni, O, Pb, Pd, Pr, Pt, Rb, Rh, S, Sc, Se, Sm, Sn, Sr, Tb, Te, Ti, Tl, Tm, Y, Yb, Zn and Zr in the form of inorganic mercury compounds. For As, Be, Co, Cr, Fe, Ge, Ir, Mo, Nb, Os, Pt, Re, Ru, Se, Ta, V, W and Zr, precipitation could occur as the amounts of spallation products increase with the operation time of the spallation target system. On the other hand, beryllium-7 (Be-7), which is produced by the spallation reaction of oxygen in the cooling water of the safety hull, is the main source of external exposure during maintenance of the cooling loop. Based on thermochemical equilibrium calculations for the Be-H 2 O binary system, the chemical forms of Be in the cooling water were estimated: Be could exist in the form of cations such as BeOH + , BeO + and Be 2+ when the Be mole fraction in the cooling water is below 10 -8 . (author)

  10. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Full Text Available The estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are derived from the statistical information of the best individuals through a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the learning of the probability model during evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results demonstrate the capability of FEGEDA, especially on higher-dimensional problems, where it exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is applied to PID controller optimization for a PMSM and compared with classical PID tuning and GA.
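
    The core loop of a Gaussian EDA with elitism and a blended ("fast") learning rule is short; the sketch below is a minimal illustration in that spirit, with population sizes and learning rates that are assumptions rather than the paper's calibrated settings.

      # Minimal Gaussian EDA with elitism (illustrative parameters).
      import numpy as np

      def gaussian_eda(f, dim, pop=50, elites=10, iters=200, lr=0.7, seed=0):
          rng = np.random.default_rng(seed)
          mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
          best_x, best_f = None, np.inf
          for _ in range(iters):
              X = rng.normal(mu, sigma, size=(pop, dim))
              if best_x is not None:
                  X[0] = best_x                      # elitism: keep best-so-far
              fx = np.apply_along_axis(f, 1, X)
              top = X[np.argsort(fx)[:elites]]       # best individuals
              # fast learning rule: blend old model with elite statistics
              mu    = (1 - lr) * mu    + lr * top.mean(axis=0)
              sigma = (1 - lr) * sigma + lr * top.std(axis=0)
              if fx.min() < best_f:
                  best_f, best_x = fx.min(), X[fx.argmin()].copy()
          return best_x, best_f

      x, fval = gaussian_eda(lambda x: np.sum(x**2), dim=10)   # sphere benchmark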

  11. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    The estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are derived from the statistical information of the best individuals through a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the learning of the probability model during evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results demonstrate the capability of FEGEDA, especially on higher-dimensional problems, where it exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is applied to PID controller optimization for a PMSM and compared with classical PID tuning and GA.

  12. Spatial distribution of neutrons in paraffin moderator surrounding a lead target irradiated with protons at intermediate energies

    International Nuclear Information System (INIS)

    Adam, J.; Barabanov, M.Yu.; Bradnova, V.

    2002-01-01

    The distribution of neutrons emitted from a lead target (diameter 8 cm, length 20 cm) irradiated with 0.65, 1.0 and 1.5 GeV protons, and moderated by a surrounding 6 cm thick paraffin moderator, was studied with radiochemical sensors along the beam axis on top of the moderator. Small 139 La sensors of approximately 1 g were used to measure essentially the thermal neutron fluence at different depths near the surface: i.e., on top of the moderator, in 10 mm deep holes and in 20 mm deep holes. The reaction 139 La(n, γ) 140 La (τ 1/2 = 40.27 h) was studied using standard procedures of gamma spectroscopy and data analysis. The neutron-induced activity of 140 La increases strongly with the depth of the hole inside the moderator; its distribution along the beam direction on top of the moderator peaks about 10 cm downstream of the proton entrance into the lead; and the induced activity increases approximately linearly with the proton energy. Some comparisons of the experimental results with model estimations based on the LAHET code are also presented. The experiments were carried out using the Nuclotron accelerator of the Laboratory of High Energies (JINR)

  13. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporating inter-patient heterogeneity into the radiosensitivity parameter α through σ_α can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and σ_α. The consequences of varying clonogenic density are also explored. Finally, the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP is shown, as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs
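
    A minimal sketch of a Poisson TCP calculation with inter-patient heterogeneity in the radiosensitivity α, in the spirit of the model described above. The α distribution, clonogen density and DVH values are illustrative assumptions, not the paper's fitted parameters.

      # Population-averaged Poisson TCP for a DVH (illustrative parameters).
      import numpy as np

      def tcp(dose_bins, vol_cc, alpha_mean=0.3, sigma_alpha=0.08,
              clonogen_density=1e7, n_samples=5000, seed=0):
          """TCP for a DVH given as (dose [Gy], volume [cm^3]) bins."""
          rng = np.random.default_rng(seed)
          alphas = rng.normal(alpha_mean, sigma_alpha, n_samples)
          alphas = np.clip(alphas, 1e-6, None)          # alpha must stay positive
          # surviving clonogens per alpha: sum_i rho * v_i * exp(-alpha * D_i)
          surv = (clonogen_density * vol_cc[None, :] *
                  np.exp(-alphas[:, None] * dose_bins[None, :])).sum(axis=1)
          return np.exp(-surv).mean()                   # Poisson TCP, averaged

      # Uniform 70 Gy to a 100 cm^3 target vs. a 60 Gy cold spot in 10 cm^3
      print(tcp(np.array([70.0]), np.array([100.0])))
      print(tcp(np.array([70.0, 60.0]), np.array([90.0, 10.0])))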

  14. Converting dose distributions into tumour control probability

    Energy Technology Data Exchange (ETDEWEB)

    Nahum, A E [The Royal Marsden Hospital, London (United Kingdom). Joint Dept. of Physics

    1996-08-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporating inter-patient heterogeneity into the radiosensitivity parameter α through σ{sub α} can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and σ{sub α}. The consequences of varying clonogenic density are also explored. Finally, the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP is shown, as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs.

  15. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    Science.gov (United States)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    Dislocation densities in a coarse-grained (140 µm) aluminum AA1100 alloy severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) are studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated; the total dislocation densities were then calculated and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2×10^12 m^-2 in the annealed state increases to 4×10^13 m^-2 at the middle of the groove (135° from the entrance) and reaches 6.4×10^13 m^-2 at the end of the groove, just before the ECAP region. The calculated average dislocation density for a one-pass severely deformed Al sample reached 6.2×10^14 m^-2. At the micrometer scale, the behavior of metals, especially their mechanical properties, depends largely on the dislocation density and distribution. Accordingly, yield stresses at different conditions were estimated from the calculated dislocation densities and compared with experimental results, and good agreement was found. Although the grain size of the material did not change appreciably, the yield stress increased markedly owing to the development of a cell structure; the considerable increase in dislocation density during the process accounts for the formation of subgrains and cell structures, which in turn explains the increase in yield stress.
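
    The usual route from dislocation density to yield stress is the Taylor hardening relation, sigma_y = sigma_0 + M·a·G·b·sqrt(rho). The sketch below applies it to the densities quoted above; the material constants are typical literature values for aluminum, assumed here for illustration rather than taken from the paper.

      # Taylor hardening estimate of yield stress from dislocation density.
      import numpy as np

      M, a = 3.06, 0.3          # Taylor factor, Taylor constant
      G, b = 26e9, 0.286e-9     # shear modulus [Pa], Burgers vector [m]
      sigma0 = 10e6             # friction stress [Pa]

      for label, rho in [("annealed", 2e12), ("end of groove", 6.4e13),
                         ("one ECAP pass", 6.2e14)]:
          sigma_y = sigma0 + M * a * G * b * np.sqrt(rho)
          print(f"{label:14s} rho = {rho:.1e} 1/m^2 -> sigma_y ~ {sigma_y/1e6:.0f} MPa")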

  16. Research on Key Technologies of Network Centric System Distributed Target Track Fusion

    Directory of Open Access Journals (Sweden)

    Yi Mao

    2017-01-01

    Full Text Available To realize a common tactical picture in a network-centric system, this paper proposes a layered architecture for distributed information processing and a method for distributed track fusion, based on an analysis of the characteristics of network-centric systems. Exploiting the non-correlation of the three-dimensional measurements of surveillance and reconnaissance sensors in polar coordinates, it also puts forward an algorithm for evaluating track quality (TQ) using statistical decision theory. According to the simulation results, the TQ value is associated with the measurement accuracy of the sensors and the motion state of the targets, and matches well with the convergence process of the tracking filters. The proposed algorithm also shows good reliability and timeliness in track quality evaluation.

  17. Estimation of dose distribution in occupationally exposed individuals to FDG-{sup 18}F

    Energy Technology Data Exchange (ETDEWEB)

    Lacerda, Isabelle V. Batista de; Cabral, Manuela O. Monteiro; Vieira, Jose Wilson, E-mail: ilacerda.bolsista@cnen.gov.br, E-mail: manuela.omc@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Oliveira, Mercia Liane de; Andrade Lima, Fernando R. de, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2014-07-01

    The use of unsealed radiation sources in nuclear medicine can lead to significant incorporation of radionuclides, especially by occupationally exposed individuals (OEIs) during the production and handling of radiopharmaceuticals. In this study, computer simulation was proposed as an alternative methodology for evaluating the absorbed dose distribution and the effective dose in OEIs. For this purpose, the Exposure Computational Model (ECM) named FSUP (Female Adult Mesh - supine) was used. This ECM is composed of the voxel phantom FASH (Female Adult MeSH) in the supine position, the MC code EGSnrc, and a general internal-source simulation algorithm. The algorithm was modified to meet the specific needs of positron emission from FDG-{sup 18}F. The results are presented as absorbed dose per accumulated activity. Obtaining the absorbed dose distribution required accumulated-activity data from in vivo bioassay. The absorbed dose distribution and the effective dose estimated in this study did not exceed the limits for occupational exposure. Therefore, the creation of a database with the distribution of accumulated activity is suggested in order to estimate the absorbed dose in radiosensitive organs and the effective dose for OEIs in similar environments. (author)

  18. Distributed Fusion Estimation for Multisensor Multirate Systems with Stochastic Observation Multiplicative Noises

    Directory of Open Access Journals (Sweden)

    Peng Fangfang

    2014-01-01

    Full Text Available This paper studies the fusion estimation problem for a class of multisensor multirate systems with observation multiplicative noises. The dynamic system is sampled uniformly. The sampling period of each sensor is uniform and an integer multiple of the state-update period; moreover, different sensors have different sampling rates, and the observations of the sensors are subject to the stochastic uncertainties of multiplicative noises. First, local filters at the observation sampling points are obtained from the observations of each sensor. Local estimators at the state-update points are then obtained by prediction from the local filters at the observation sampling points; they have reduced computational cost and good real-time properties. Next, the cross-covariance matrices between any two local estimators are derived at the state-update points. Finally, using the matrix-weighted optimal fusion estimation algorithm in the linear minimum variance sense, the distributed optimal fusion estimator is obtained from the local estimators and the cross-covariance matrices. An example shows the effectiveness of the proposed algorithms.
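
    A minimal sketch of the final fusion step: given n local estimates x_i with covariance and cross-covariance blocks P_ij, the matrix weights that minimize the fused covariance trace subject to summing to the identity are W = (eᵀ Σ⁻¹ e)⁻¹ eᵀ Σ⁻¹, where Σ stacks the P_ij blocks and e stacks identity matrices. This is the standard linear-minimum-variance formula, shown here with hypothetical numbers rather than the paper's multirate setup.

      # Matrix-weighted optimal fusion of local estimates (illustrative).
      import numpy as np

      def fuse(estimates, P):
          """estimates: list of n (d,) arrays; P: (n, n, d, d) covariance blocks."""
          n, d = len(estimates), estimates[0].size
          Sigma = np.block([[P[i, j] for j in range(n)] for i in range(n)])  # (nd, nd)
          e = np.vstack([np.eye(d)] * n)                                     # (nd, d)
          Si_e = np.linalg.solve(Sigma, e)
          W = np.linalg.solve(e.T @ Si_e, Si_e.T)      # (d, nd) stacked weights
          return W @ np.concatenate(estimates)

      # Two uncorrelated scalar estimators with variances 1 and 4 -> weights 0.8/0.2
      P = np.zeros((2, 2, 1, 1)); P[0, 0] = 1.0; P[1, 1] = 4.0
      print(fuse([np.array([1.0]), np.array([2.0])], P))   # ~1.2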

  19. Research on Radar Micro-Doppler Feature Parameter Estimation of Propeller Aircraft

    Science.gov (United States)

    He, Zhihua; Tao, Feixiang; Duan, Jia; Luo, Jingsheng

    2018-01-01

    The micro-motion modulation imposed on the radar echo by rotating propellers can serve as a stable feature for aircraft target recognition; accurate estimation of the micro-Doppler feature parameters is therefore key to accurate recognition. In this paper, the radar echo of rotating propellers is modelled and simulated, and on that basis the distribution characteristics of the micro-motion modulation energy in the time, frequency and time-frequency domains are analyzed. The micro-motion modulation energy produced by the scattering points of the rotating propellers is accumulated using the inverse Radon (I-Radon) transform, from which the micro-motion modulation parameters can be estimated. Finally, the proposed parameter estimation method is shown to be effective on measured data. The micro-motion parameters of an aircraft can be used as features for radar target recognition.

  20. Estimation of aerosol particle number distribution with Kalman Filtering – Part 2: Simultaneous use of DMPS, APS and nephelometer measurements

    Directory of Open Access Journals (Sweden)

    T. Viskari

    2012-12-01

    Full Text Available The Extended Kalman Filter (EKF) is used to estimate particle size distributions from observations. The focus here is on the practical application of the EKF to simultaneously merge information from different types of experimental instruments. Every 10 min, the prior state estimate is updated with size-segregating measurements from a Differential Mobility Particle Sizer (DMPS) and an Aerodynamic Particle Sizer (APS), as well as integrating measurements from a nephelometer. Error covariances are approximated in our EKF implementation, and the observation operator assumes a constant particle density and refractive index. The state estimates are compared to particle size distributions that are a composite of DMPS and APS measurements, and the impact of each instrument on the size distribution estimate is studied. Kalman filtering of DMPS and APS data yielded a temporally consistent state estimate that is continuous over the overlapping size range of the DMPS and APS. Inclusion of the integrating measurements further reduces the effect of measurement noise. Even with the present approximations, the EKF is shown to be a very promising method for estimating particle size distributions from observations made by different types of instruments.
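
    A minimal sketch of the merging idea: one generic EKF measurement update applied twice, first with size-segregating channels (linear in the state) and then with an integrating channel (a weighted sum over all size bins, standing in for a scattering coefficient). All dimensions, weights and noise levels are illustrative assumptions.

      # Generic EKF update merging size-segregating and integrating observations.
      import numpy as np

      def ekf_update(x, P, y, h, H_jac, R):
          """EKF measurement update for an observation y = h(x) + noise."""
          H = H_jac(x)                                   # linearized observation
          S = H @ P @ H.T + R                            # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
          x_new = x + K @ (y - h(x))
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new

      n = 8                                              # size bins in the state
      x, P = np.ones(n), np.eye(n) * 0.5
      # size-segregating channels: pick out individual bins
      H1 = np.eye(n)[:4]
      x, P = ekf_update(x, P, y=np.full(4, 1.2),
                        h=lambda x: H1 @ x, H_jac=lambda x: H1, R=np.eye(4) * 0.1)
      # integrating channel: weighted sum over all bins
      w = np.linspace(0.1, 1.0, n)
      x, P = ekf_update(x, P, y=np.array([5.0]),
                        h=lambda x: np.array([w @ x]), H_jac=lambda x: w[None, :],
                        R=np.array([[0.2]]))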

  1. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    International Nuclear Information System (INIS)

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-01-01

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data, by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and from using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of the estimated spatiotemporal distributions.

  2. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    Science.gov (United States)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high-resolution imaging deep into tissues. Unfortunately, deep-tissue multi-photon microscopy images are in general noisy, since they are acquired at low photon counts. To aid in the analysis and segmentation of such images, it is sometimes necessary to first enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel in the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution, it is assumed that the noise is Poisson distributed, that the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and that, since the observed data also assume finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function under these assumptions.
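
    A minimal sketch of the constrained least-squares idea described above: recover a probability mass function p over candidate true pixel values from a histogram of Poisson-noisy observations, subject to p >= 0 and sum(p) <= 1. The value grids and the synthetic histogram are illustrative assumptions, and SLSQP is one possible solver, not necessarily the author's.

      # Constrained least-squares estimate of a pmf under Poisson noise.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson

      true_vals = np.arange(0, 20)            # candidate true (noise-free) values
      obs_vals  = np.arange(0, 25)            # observed count values in the image
      # A[i, j] = P(observe obs_vals[i] | true value true_vals[j])
      A = poisson.pmf(obs_vals[:, None], np.maximum(true_vals[None, :], 1e-9))

      h = np.random.default_rng(0).dirichlet(np.ones(obs_vals.size)) * 0.9
      # h would normally be the normalized image histogram (here: synthetic data)

      res = minimize(lambda p: np.sum((A @ p - h) ** 2),
                     x0=np.full(true_vals.size, 1.0 / true_vals.size),
                     bounds=[(0, 1)] * true_vals.size,
                     constraints=[{"type": "ineq", "fun": lambda p: 1.0 - p.sum()}],
                     method="SLSQP")
      p_hat = res.x                            # estimated pmf of the true values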

  3. Estimation of the effective distribution coefficient from the solubility constant

    International Nuclear Information System (INIS)

    Wang, Yug-Yea; Yu, C.

    1994-01-01

    An updated version of RESRAD has been developed by Argonne National Laboratory for the US Department of Energy to derive site-specific soil guidelines for residual radioactive material. Many new features have been added to the RESRAD code in this updated version. One option allows the user to input a solubility constant to limit the leaching of contaminants. The leaching model used in the code requires as input an empirical distribution coefficient, K_d, which represents the ratio of the solute concentration in soil to that in solution under equilibrium conditions. This paper describes the methodology developed to estimate an effective distribution coefficient, K_d, from the user-input solubility constant, and the use of the effective K_d for predicting the leaching of contaminants

  4. On the estimation of the spherical contact distribution Hs(y) for spatial point processes

    International Nuclear Information System (INIS)

    Doguwa, S.I.

    1990-08-01

    RIPLEY (1977, Journal of the Royal Statistical Society, B39, 172-212) proposed an estimator for the spherical contact distribution H_s(y) of a spatial point process observed in a bounded planar region. However, this estimator is not defined for some distances of interest in this bounded region. A new estimator for H_s(y) is proposed for use with a regular grid of sampling locations. This new estimator is defined for all distances of interest. It also appears to have a smaller bias and a smaller mean squared error than the previously suggested alternative. (author). 11 refs, 4 figs, 1 tab

  5. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
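
    One simple way to realize the AIC-guided threshold choice is to scan candidate thresholds, fit a GPD to the excesses at each, and keep the fit with the lowest AIC. The sketch below illustrates this idea; the Weibull stand-in data, the quantile grid, and fixing the GPD location at zero are assumptions for illustration, not the paper's procedure in detail.

      # AIC-guided threshold selection for a GPD tail model (illustrative).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      data = rng.weibull(1.5, 5000)                    # stand-in for response samples

      best = (np.inf, None)
      for q in np.arange(0.80, 0.99, 0.01):            # candidate threshold quantiles
          u = np.quantile(data, q)
          exc = data[data > u] - u                     # excesses over the threshold
          c, loc, scale = stats.genpareto.fit(exc, floc=0)
          logL = stats.genpareto.logpdf(exc, c, loc=0, scale=scale).sum()
          aic = 2 * 2 - 2 * logL                       # k = 2 fitted parameters
          if aic < best[0]:
              best = (aic, (u, c, scale))

      aic, (u, c, scale) = best
      print(f"threshold u = {u:.3f}, shape = {c:.3f}, scale = {scale:.3f}")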

  6. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    International Nuclear Information System (INIS)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup

    2015-01-01

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF

  7. Wireless Power Transfer for Distributed Estimation in Sensor Networks

    Science.gov (United States)

    Mai, Vien V.; Shin, Won-Yong; Ishibashi, Koji

    2017-04-01

    This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where the wireless sensors are equipped with radio-frequency energy-harvesting technology. The sensors' observations are locally processed using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, where they are coherently combined and the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of the mean-squared error of the BLUE. Two iterative algorithms are developed to solve the non-convex problems; they converge at least to a local optimum. In particular, the algorithms are designed to jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for colocated sensors are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.

  8. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    Directory of Open Access Journals (Sweden)

    Álvaro Gutiérrez

    2011-11-01

    Full Text Available Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA's control parameters by means of a genetic algorithm. Experimental results show that with the optimized set of parameters, the deployment cost, measured as the average distance traveled by the robots, is reduced. The cost-efficient deployment is in some cases achieved at the expense of an increased robot-distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.

  9. A three-dimensional dose-distribution estimation system using computerized image reconstruction

    International Nuclear Information System (INIS)

    Nishijima, Akihiko; Kidoya, Eiji; Komuro, Hiroyuki; Tanaka, Masato; Asada, Naoki.

    1990-01-01

    In radiotherapy planning, three-dimensional (3-D) estimation of dose distributions has been very troublesome and time-consuming. To solve this problem, a simple and fast 3-D dose distribution imaging method using a computer and a charge-coupled device (CCD) camera was developed. A series of X-ray films inserted in a phantom was exposed using a linear accelerator unit. The film density was digitized with a CCD camera and a minicomputer (VAX 11-750), and the results were compared with the depth doses obtained by a JARP-type dosimeter; the dose error was less than 2%. The 3-D dose distribution image could accurately depict the density changes created by aluminum and air inserted into the phantom. The contrast resolution of the CCD camera appears to be superior to that of a conventional densitometer in the low-to-intermediate contrast range. In conclusion, our method is very fast and simple for obtaining 3-D dose distribution images and is very effective compared with the conventional method. (author)

  10. Angular distributions of target fragments from the reactions of 292 MeV - 25.2 GeV 12C with 197Au and 238U

    International Nuclear Information System (INIS)

    Morita, Y.

    1983-01-01

    The angular distributions of the 197 Au target fragments were all forward-peaked. Strongly forward-peaked angular distributions were observed at the non-relativistic projectile energies (292 MeV, 1.0 GeV). No obvious differences were observed among the angular distributions at the different relativistic projectile energies of 3.0 GeV, 12.0 GeV and 25.2 GeV, and the characteristic angular distribution pattern of the relativistic-energy experiments was also observed in the non-relativistic-energy experiments. The maximum degree of forward peaking in the angular distributions at each projectile energy was observed at product mass numbers (A) around 190 for the 292 MeV projectile energy, at A = 180 for 1.0 GeV, and at A = 175 for 3.0 GeV and 12.0 GeV. In general, two different types of angular distributions were observed in the relativistic-energy experiments with the 238 U target: isotropic angular distributions were observed for the fission-product nuclides, while steeply forward-peaked angular distributions were also observed. At the intermediate (292 MeV) energy, the fission products showed slightly forward-peaked angular distributions; because of the long projectile-target interaction time in the primary nuclear reaction, a larger momentum was transferred from the projectile to the target nucleus.

  11. A revival of the autoregressive distributed lag model in estimating energy demand relationships

    Energy Technology Data Exchange (ETDEWEB)

    Bentzen, J.; Engsted, T.

    1999-07-01

    The findings in the recent energy economics literature that energy-economic variables are non-stationary have led to an implicit or explicit dismissal of the standard autoregressive distributed lag (ARDL) model in estimating energy demand relationships. However, Pesaran and Shin (1997) show that the ARDL model remains valid when the underlying variables are non-stationary, provided the variables are co-integrated. In this paper we use the ARDL approach to estimate a demand relationship for Danish residential energy consumption, and the ARDL estimates are compared to the estimates obtained using co-integration techniques and error-correction models (ECMs). It turns out that, both quantitatively and qualitatively, the ARDL approach and the co-integration/ECM approach give very similar results. (au)
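
    A minimal sketch of the ARDL idea: fit an ARDL(1,1) by ordinary least squares and recover the long-run effect as (b0 + b1)/(1 - a). The synthetic co-integrated data and the lag orders are illustrative assumptions, not the Danish dataset or the paper's specification.

      # ARDL(1,1) by OLS: y_t = c + a*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t
      import numpy as np

      rng = np.random.default_rng(0)
      T = 200
      x = np.cumsum(rng.normal(size=T))                # non-stationary driver
      y = np.zeros(T)
      for t in range(1, T):                            # co-integrated response
          y[t] = 0.5 * y[t - 1] + 0.3 * x[t] + 0.1 * x[t - 1] + rng.normal(0, 0.2)

      Z = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
      beta, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
      c, a, b0, b1 = beta
      print(f"long-run effect: {(b0 + b1) / (1 - a):.3f}")  # true value: 0.8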

  12. A revival of the autoregressive distributed lag model in estimating energy demand relationships

    Energy Technology Data Exchange (ETDEWEB)

    Bentzen, J; Engsted, T

    1999-07-01

    The findings in the recent energy economics literature that energy-economic variables are non-stationary have led to an implicit or explicit dismissal of the standard autoregressive distributed lag (ARDL) model in estimating energy demand relationships. However, Pesaran and Shin (1997) show that the ARDL model remains valid when the underlying variables are non-stationary, provided the variables are co-integrated. In this paper we use the ARDL approach to estimate a demand relationship for Danish residential energy consumption, and the ARDL estimates are compared to the estimates obtained using co-integration techniques and error-correction models (ECMs). It turns out that, both quantitatively and qualitatively, the ARDL approach and the co-integration/ECM approach give very similar results. (au)

  13. An apparatus to study the energy and angular distributions of electron-bremsstrahlung photons from gaseous targets

    International Nuclear Information System (INIS)

    Yadav, Namita; Bhatt, Pragya; Singh, Raj; Singh, B.K.; Quarles, C.A.; Shanker, R.

    2014-01-01

    An apparatus has been developed to measure the energy and angular distributions of bremsstrahlung generated in collisions of energetic electrons with isolated atoms and molecules. A considerable reduction of the thick-target bremsstrahlung (TTB) background produced by electrons scattered from the chamber wall is achieved. Details of the experimental setup, including the design of its components, the experimental technique, and the data acquisition and analysis, are given and discussed. The reliability and performance of the setup are demonstrated with test results on the angular and energy distributions of bremsstrahlung produced in collisions of 4.0 keV electrons with free argon atoms. These results are compared with theoretical predictions for ordinary and polarization bremsstrahlung emission: the measured photon energy distributions are in reasonable agreement with theory, whereas the angular distributions show noticeable differences in shape. - Highlights: • An experimental setup is developed to study the DDCS of electron bremsstrahlung from gaseous targets. • TTB from the scattering chamber wall is reduced appreciably by using a teflon cylinder. • The shape of the bremsstrahlung DDCS shows a satisfactory match with theory. • The angular distributions of bremsstrahlung show anisotropy but are still affected by the TTB photon background

  14. Joint Direction-of-Departure and Direction-of-Arrival Estimation in a UWB MIMO Radar Detecting Targets with Fluctuating Radar Cross Sections

    Directory of Open Access Journals (Sweden)

    Idnin Pasya

    2014-01-01

    Full Text Available This paper presents joint direction-of-departure (DOD) and direction-of-arrival (DOA) estimation in a multiple-input multiple-output (MIMO) radar utilizing ultra-wideband (UWB) signals to detect targets with fluctuating radar cross sections (RCS). The UWB MIMO radar combines two-way MUSIC with a majority decision based on angle histograms of the DODs and DOAs estimated at each frequency of the UWB signal. The proposed angle estimation scheme is demonstrated to be effective in detecting targets with fluctuating RCS, compared to the conventional spectrum-averaging method used in subband angle estimation. It was found that a wider bandwidth resulted in improved estimation performance. Numerical simulations along with experimental evaluations in a radio anechoic chamber are presented.
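
    For readers unfamiliar with the subband building block, the sketch below shows narrowband MUSIC DOA estimation at a single frequency of a UWB waveform. A uniform linear array, a single source, and all scenario parameters are illustrative assumptions; the paper's two-way (DOD/DOA) extension and majority decision are not reproduced here.

      # Narrowband MUSIC DOA estimation at one subband (illustrative).
      import numpy as np

      def steering(theta_deg, n_ant, d=0.5):          # d in wavelengths
          k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
          return np.exp(1j * k * np.arange(n_ant))

      rng = np.random.default_rng(0)
      n_ant, n_snap, true_doa = 8, 200, 17.0
      a = steering(true_doa, n_ant)
      X = np.outer(a, rng.normal(size=n_snap)) + \
          0.1 * (rng.normal(size=(n_ant, n_snap)) + 1j * rng.normal(size=(n_ant, n_snap)))

      R = X @ X.conj().T / n_snap                      # sample covariance
      eigval, eigvec = np.linalg.eigh(R)               # ascending eigenvalues
      En = eigvec[:, :-1]                              # noise subspace (1 source)

      grid = np.linspace(-90, 90, 1801)
      spec = [1 / np.linalg.norm(En.conj().T @ steering(t, n_ant)) ** 2 for t in grid]
      print("estimated DOA:", grid[np.argmax(spec)])   # ~17 degrees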

  15. Incident energy and target dependence of interaction cross sections and density distribution of neutron drip-line nuclei

    International Nuclear Information System (INIS)

    Shimoura, S.

    1992-01-01

    The relation between the nuclear density distribution and the interaction cross section is discussed in terms of the Glauber model. Based on the model, the density distributions of the neutron drip-line nuclei 11 Be and 11 Li are determined experimentally from the incident-energy dependence of the interaction cross sections of 11 Be and 11 Li on light targets. The obtained distributions have long tails corresponding to the neutron halos of loosely bound neutrons. (Author)

  16. The Spatial Distribution of Forest Biomass in the Brazilian Amazon: A Comparison of Estimates

    Science.gov (United States)

    Houghton, R. A.; Lawrence, J. L.; Hackler, J. L.; Brown, S.

    2001-01-01

    The amount of carbon released to the atmosphere as a result of deforestation is determined, in part, by the amount of carbon held in the biomass of the forests converted to other uses. Uncertainty in forest biomass is responsible for much of the uncertainty in current estimates of the flux of carbon from land-use change. We compared several estimates of forest biomass for the Brazilian Amazon, based on spatial interpolations of direct measurements, relationships to climatic variables, and remote sensing data. We asked three questions. First, do the methods yield similar estimates? Second, do they yield similar spatial patterns of distribution of biomass? And, third, what factors need most attention if we are to predict more accurately the distribution of forest biomass over large areas? Estimates of the total biomass of Amazonian forests (including dead and below-ground biomass) vary by more than a factor of two, from a low of 39 PgC to a high of 93 PgC. Furthermore, the estimates disagree as to the regions of high and low biomass. The lack of agreement among estimates confirms the need for reliable determination of aboveground biomass over large areas. Potential methods include direct measurement of biomass through forest inventories with improved allometric regression equations, dynamic modeling of forest recovery following observed stand-replacing disturbances (the approach used in this research), and estimation of aboveground biomass from airborne or satellite-based instruments sensitive to the vertical structure of plant canopies.

  17. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value (LOV) rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design-of-experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
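
    The LOV decoding step is simple enough to show directly: the largest component of the sampled continuous vector takes the first position in the job sequence, the second largest the next, and so on. A minimal sketch, with a hypothetical four-job vector:

      # Largest order value (LOV) rule: continuous vector -> job permutation.
      import numpy as np

      def lov_decode(v):
          """Return job indices (0-based) in descending order of component values."""
          return np.argsort(-v)

      v = np.array([0.7, 2.1, -0.3, 1.2])
      print(lov_decode(v))                 # [1 3 0 2]: job 1 is scheduled first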

  18. Distributed Channel Estimation and Pilot Contamination Analysis for Massive MIMO-OFDM Systems

    KAUST Repository

    Zaib, Alam

    2016-07-22

    By virtue of large antenna arrays, massive MIMO systems have the potential to yield higher spectral and energy efficiency than conventional MIMO systems. This paper addresses uplink channel estimation in massive MIMO-OFDM systems with frequency-selective channels. We propose an efficient distributed minimum mean square error (MMSE) algorithm that can achieve near-optimal channel estimates at low complexity by exploiting the strong spatial correlation among antenna array elements. The proposed method involves solving a reduced-dimension MMSE problem at each antenna, followed by repeated sharing of information through collaboration among neighboring array elements. To further enhance the channel estimates and/or reduce the number of reserved pilot tones, we propose a data-aided estimation technique that relies on finding a set of most reliable data carriers. Furthermore, we use stochastic geometry to quantify the pilot contamination, and in turn use this information to analyze the effect of pilot contamination on the channel MSE. The simulation results validate our analysis and show near-optimal performance of the proposed estimation algorithms.

  19. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    Science.gov (United States)

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable-parameter (AP) based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MAS. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.

  20. The distribution of alternative agents for targeted radiotherapy within human neuroblastoma spheroids

    International Nuclear Information System (INIS)

    Mairs, R.J.; Gaze, M.N.; Murray, T.; Reid, R.; McSharry, C.; Babich, J.W.

    1991-01-01

    This study aims to select the radiopharmaceutical vehicle for targeted radiotherapy of neuroblastoma that is most likely to readily penetrate the centre of micrometastases in vivo. The human neuroblastoma cell line NB1-G, grown as multicellular spheroids, provided an in vitro model of micrometastases. The radiopharmaceuticals studied were the catecholamine analogue metaiodobenzylguanidine (mIBG), a specific neuroectodermal monoclonal antibody (UJ13A) and β nerve growth factor (βNGF). Following incubation of each drug with neuroblastoma spheroids, autoradiographs of frozen sections were prepared to demonstrate their relative distributions. mIBG and βNGF were found to readily penetrate the centre of the spheroids, although the concentration of mIBG greatly exceeded that of βNGF. In contrast, UJ13A was only bound peripherally. We conclude that mIBG is the best available vehicle for targeted radiotherapy of neuroblastoma cells with active uptake mechanisms for catecholamines. It is suggested that radionuclides with a shorter emission range than 131 I may be conjugated to benzylguanidine to constitute more effective targeting agents with potentially less toxicity to adjacent normal tissues. (author)

  1. Signal and data processing of small targets 1989; Proceedings of the Meeting, Orlando, FL, Mar. 27-29, 1989

    Science.gov (United States)

    Drummond, Oliver E. (Editor)

    1989-01-01

    The present conference on digital signal processing, association and filtering techniques, and multiple-sensor/multiple-target tracking techniques discusses single-frame velocity estimation, efficient target extraction for laser radar imagery, precision target tracking for small extended objects, IR clutter partitioning for matched filter design, the maximum-likelihood approach to gamma circumvention, position estimation for optical point targets using staring detector arrays, and a multiple-scan signal processing technique for area moving-target indication. Also discussed are a proportional-integral estimator, the prediction of track purity in tracking performance evaluations, synchronization and fault tolerance in a distributed tracker, the benefits of soft sensors and probabilistic fusion, and the testing of track initiation algorithms fusing two-dimensional tracks.

  2. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    Science.gov (United States)

    Bandyopadhyay, Saptarshi

    guidance algorithms using results from numerical simulations and closed-loop hardware experiments on multiple quadrotors. In the second part of this dissertation, we present two novel discrete-time algorithms for distributed estimation, which track a single target using a network of heterogeneous sensing agents. In the Distributed Bayesian Filtering (DBF) algorithm, the sensing agents combine their normalized likelihood functions using the logarithmic opinion pool and the discrete-time dynamic average consensus algorithm. Each agent's estimated likelihood function converges to an error ball centered on the joint likelihood function of the centralized multi-sensor Bayesian filtering algorithm. Using a new proof technique, the convergence, stability, and robustness properties of the DBF algorithm are rigorously characterized. The explicit bounds on the time step of the robust DBF algorithm are shown to depend on the time-scale of the target dynamics. Furthermore, the DBF algorithm for linear-Gaussian models can be cast into a modified form of the Kalman information filter. In the Bayesian Consensus Filtering (BCF) algorithm, the agents combine their estimated posterior pdfs multiple times within each time step using the logarithmic opinion pool scheme; thus, each agent's consensual pdf minimizes the sum of Kullback-Leibler divergences with the local posterior pdfs. The performance and robustness properties of these algorithms are validated using numerical simulations. In the third part of this dissertation, we present an attitude control strategy and a new nonlinear tracking controller for a spacecraft carrying a large object, such as an asteroid or a boulder. If the captured object is larger than or comparable in size to the spacecraft and has significant modeling uncertainties, conventional nonlinear control laws that use exact feed-forward cancellation are not suitable because they exhibit a large resultant disturbance torque. The proposed nonlinear tracking control law guarantees
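
    A minimal sketch of the logarithmic opinion pool used by the DBF/BCF algorithms: fuse agent pdfs p_i on a grid as p proportional to the weighted geometric mean of the p_i. The grid, the consensus weights and the two Gaussian local posteriors are illustrative assumptions.

      # Logarithmic opinion pool fusion of pdfs on a grid (illustrative).
      import numpy as np

      x = np.linspace(-10, 10, 2001)
      dx = x[1] - x[0]

      def gauss(mu, sig):
          return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

      pdfs    = [gauss(1.0, 2.0), gauss(2.5, 1.0)]     # local posterior pdfs
      weights = [0.5, 0.5]                             # consensus weights, sum to 1

      log_pool = sum(w * np.log(p + 1e-300) for w, p in zip(weights, pdfs))
      fused = np.exp(log_pool - log_pool.max())
      fused /= fused.sum() * dx                        # normalize on the grid
      print("fused mean:", (x * fused).sum() * dx)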

  3. Distributed Input and State Estimation Using Local Information in Heterogeneous Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dzung Tran

    2017-07-01

    Full Text Available A new distributed input and state estimation architecture is introduced and analyzed for heterogeneous sensor networks. Specifically, the nodes of a given sensor network are allowed to have heterogeneous information roles, in the sense that a subset of nodes can be active (that is, subject to observations of a process of interest) while the rest can be passive (that is, subject to no observations). Both fixed and varying active and passive roles of sensor nodes in the network are investigated. In addition, the nodes are allowed to have non-identical sensor modalities, under the common underlying assumption that they have complementary properties distributed over the sensor network so as to achieve collective observability. The key feature of our framework is that it utilizes local information not only during the execution of the proposed distributed input and state estimation architecture but also in its design, in that global uniform ultimate boundedness of the error dynamics is guaranteed once each node satisfies given local stability conditions independent of the graph topology and the neighboring information of these nodes. As a special case (e.g., when all nodes are active and a positive-real condition is satisfied), asymptotic stability can be achieved with our algorithm. Several illustrative numerical examples are further provided to demonstrate the efficacy of the proposed architecture.

  4. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Singh

    2011-06-01

    Full Text Available In this paper we propose Bayes estimators of the parameters of the exponentiated exponential distribution and of the reliability function, under the general entropy loss function, for Type-II censored samples. The proposed estimators are compared with the corresponding Bayes estimators obtained under the squared error loss function and with the maximum likelihood estimators, in terms of their simulated risks (average loss over the sample space).
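
    A minimal sketch of the estimator contrast: under the general entropy (GE) loss with shape p, the Bayes estimator of a positive parameter is (E[theta^-p])^(-1/p), whereas squared error loss gives the posterior mean. The gamma posterior draws below are an illustrative stand-in, not the paper's posterior for the exponentiated exponential model.

      # GE-loss vs squared-error Bayes estimates from posterior samples.
      import numpy as np

      rng = np.random.default_rng(0)
      theta = rng.gamma(shape=5.0, scale=0.4, size=100_000)   # posterior samples

      p = 1.0                                                 # GE loss shape
      theta_ge  = np.mean(theta ** (-p)) ** (-1.0 / p)        # GE-loss estimator
      theta_sel = theta.mean()                                # squared-error estimator
      print(theta_ge, theta_sel)                              # GE shrinks below the mean for p > 0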

  5. On the estimation of channel power distribution for PHWRs (Paper No. HMT-66-87)

    International Nuclear Information System (INIS)

    Parikh, M.V.; Kumar, A.N.; Krishnamohan, B.; Bhaskara Rao, P.

    1987-01-01

    For PHWRs, the estimation of the channel power distribution is an important safety criterion. In this paper two methods, one based on theoretical estimation and one based on a measured parameter, are described. The comparison made shows good agreement between the channel powers predicted by the two methods. A parametric study of one of the measured parameters is also made, which gives better agreement between the results obtained. (author). 3 tabs

  6. Temperature distribution in target tumor tissue and photothermal tissue destruction during laser immunotherapy

    Science.gov (United States)

    Doughty, Austin; Hasanjee, Aamr; Pettitt, Alex; Silk, Kegan; Liu, Hong; Chen, Wei R.; Zhou, Feifan

    2016-03-01

    Laser immunotherapy is a novel cancer treatment modality that has seen much success in treating many different types of cancer, both in animal studies and in clinical trials. The treatment consists of the synergistic interaction between photothermal laser irradiation and the local injection of an immunoadjuvant; as a result of the therapy, the host immune system launches a systemic antitumor response. The photothermal effect induced by the laser irradiation has multiple effects at different temperature elevations, all of which are required for an optimal response. Determining the temperature distribution in the target tumor during laser irradiation is therefore crucial to laser immunotherapy of cancer. To investigate the temperature distribution in the target tumor, female Wistar Furth rats were injected with metastatic mammary tumor cells and, upon sufficient tumor growth, underwent laser irradiation while being monitored using thermocouples connected to locally inserted needle probes and infrared thermography. From the study, we determined that the maximum central tumor temperature was higher for tumors of smaller volume. Additionally, the temperature near the edge of the tumor, as measured with a thermocouple, correlated strongly with the maximum temperature in the infrared camera measurement.

  7. Multiple Maneuvering Target Tracking by Improved Particle Filter Based on Multiscan JPDA

    Directory of Open Access Journals (Sweden)

    Jing Liu

    2012-01-01

    Full Text Available A multiple maneuvering target tracking algorithm based on a particle filter is addressed. The equivalent-noise approach is adopted, which uses a simple dynamic model consisting of the target state and an equivalent noise that accounts for the combined effects of the process noise and maneuvers. The equivalent-noise approach converts the problem of maneuvering target tracking into one of state estimation in the presence of nonstationary process noise with unknown statistics. A novel method for identifying the nonstationary process noise is proposed within the particle filter framework. Furthermore, a particle-filter-based multiscan Joint Probabilistic Data Association (JPDA) filter is proposed to deal with the data association problem in multiple maneuvering target tracking. In the proposed multiscan JPDA algorithm, the distributions of interest are the marginal filtering distributions for each of the targets, and these distributions are approximated with particles. The multiscan JPDA algorithm examines the joint association events in a multiscan sliding window and calculates the marginal posterior probability based on the multiscan joint association events. The proposed algorithm is illustrated via an example involving the tracking of two highly maneuvering, and at times closely spaced and crossing, targets, based on resolved measurements.

  8. Optimal allocation of sensors for state estimation of distributed parameter systems

    International Nuclear Information System (INIS)

    Sunahara, Yoshifumi; Ohsumi, Akira; Mogami, Yoshio.

    1978-01-01

    The purpose of this paper is to present a method for finding the optimal allocation of sensors for state estimation of linear distributed parameter systems. The method is based on the criterion that the error covariance associated with the state estimate becomes minimal with respect to the allocation of the sensors. A theorem is established giving the sufficient condition for optimizing the allocation of sensors so as to minimize the error covariance approximated by a modal expansion. The remainder of the paper is devoted to illustrating important phases of the general theory of the optimal measurement allocation problem. To this end, several examples are presented, including extensive discussions on the mutual relation between the optimal allocation and the dynamics of the sensors. (author)

  9. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.

    Science.gov (United States)

    Xie, Xianchao; Kou, S C; Brown, Lawrence

    2016-03-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.
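
    The paper's semi-parametric SURE-type estimators are not reproduced here, but the classical normal-means special case below conveys the shrinkage idea: observations are pulled toward their grand mean by a data-driven factor, typically reducing total squared error. The data values and names are illustrative assumptions.

        import numpy as np

        def james_stein(x, sigma2):
            # Positive-part James-Stein estimator shrinking p >= 4 independent
            # N(theta_i, sigma2) observations toward their grand mean.
            p, xbar = x.size, x.mean()
            s = np.sum((x - xbar) ** 2)
            b = max(0.0, 1.0 - (p - 3) * sigma2 / s)   # data-driven shrinkage factor
            return xbar + b * (x - xbar)

        rng = np.random.default_rng(1)
        theta = rng.normal(5.0, 1.0, 50)          # unknown true means
        x = theta + rng.normal(0.0, 2.0, 50)      # one noisy observation each
        shrunk = james_stein(x, sigma2=4.0)
        print("raw MSE:", np.mean((x - theta) ** 2))
        print("JS MSE: ", np.mean((shrunk - theta) ** 2))   # usually smaller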

  10. Estimation of Leakage Ratio Using Principal Component Analysis and Artificial Neural Network in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Dongwoo Jang

    2018-03-01

    Full Text Available Leaks in a water distribution network (WDS) constitute losses of water supply caused by pipeline failure, operational loss, and physical factors. This has raised the need for studies on the factors affecting the leakage ratio and on the estimation of leakage volume in a water supply system. In this study, principal component analysis (PCA) and an artificial neural network (ANN) were used to estimate the volume of water leakage in a WDS. Six main effective parameters were selected, and the data were standardized using the Z-score method. The PCA-ANN model was devised and the leakage ratio was estimated. An accuracy assessment was performed to compare the measured leakage ratio to that of the simulated model. The results showed that the PCA-ANN method was more accurate in estimating the leakage ratio than a single ANN simulation. In addition, the estimation results differed according to the number of neurons in the ANN model's hidden layers; an ANN with two hidden layers of 12 neurons each was found to be the best configuration for estimating the leakage ratio. This study suggests approaches to improve the accuracy of leakage ratio estimation, as well as a scientific approach toward the sustainable management of water distribution systems.
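
    A hedged sketch of the PCA-ANN structure described above, using scikit-learn: Z-score standardization, PCA, then a neural network with two 12-neuron hidden layers. The synthetic data, the number of retained components, and the hyperparameters are assumptions, not the study's actual configuration.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Hypothetical data set: rows are supply blocks, columns are the six
        # explanatory parameters; y is the observed leakage ratio (%)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))
        y = 10.0 + X @ rng.normal(size=6) + rng.normal(scale=0.5, size=200)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Z-score standardization -> PCA -> ANN with two 12-neuron hidden layers
        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=4),                 # keep the leading components
            MLPRegressor(hidden_layer_sizes=(12, 12), max_iter=5000,
                         random_state=0),
        )
        model.fit(X_tr, y_tr)
        print("held-out R^2:", model.score(X_te, y_te))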

  11. Estimation of temperature distribution in a reactor shield

    International Nuclear Information System (INIS)

    Agarwal, R.A.; Goverdhan, P.; Gupta, S.K.

    1989-01-01

    Shielding is provided in a nuclear reactor to absorb the radiations emanating from the core. The energy of these radiations appears in the form of heat. Concrete, which is commonly used as a shielding material in nuclear power plants, must be able to withstand the temperatures and temperature gradients arising in the shield due to this heat. High temperatures lead to dehydration of the concrete and in turn reduce the shielding effectiveness of the material. Adequate cooling needs to be provided in these shields in order to limit the maximum temperature. This paper describes a method to estimate the steady-state and transient temperature distributions in reactor shields. The effects of a loss of coolant in the coolant tubes have been studied and are presented in the paper. (author). 5 figs
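
    As a rough illustration of the kind of calculation involved, the sketch below solves a steady-state 1D conduction problem for a shield slab with an exponentially attenuated internal heat source and fixed face temperatures. The governing equation, material values, and boundary conditions are illustrative assumptions, not the paper's model.

        import numpy as np

        def shield_temperature(thickness=1.0, n=101, k=1.5, q0=200.0, mu=8.0,
                               t_hot=60.0, t_cold=35.0):
            # Steady 1D profile in a concrete slab: -k T'' = q0 * exp(-mu * x),
            # with both faces held at fixed coolant temperatures (Dirichlet).
            x = np.linspace(0.0, thickness, n)
            h = x[1] - x[0]
            A = np.zeros((n, n))
            b = -q0 * np.exp(-mu * x) * h**2 / k
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = t_hot, t_cold
            for i in range(1, n - 1):
                A[i, i - 1] = A[i, i + 1] = 1.0   # central difference for T''
                A[i, i] = -2.0
            return x, np.linalg.solve(A, b)

        x, T = shield_temperature()
        print("peak temperature %.1f C at x = %.2f m" % (T.max(), x[T.argmax()]))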

  12. Estimation of thermochemical behavior of spallation products in mercury target

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Kaoru; Kaminaga, Masanori; Haga, Katsuhiro; Kinoshita, Hidetaka; Aso, Tomokazu; Teshigawara, Makoto; Hino, Ryutaro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-02-01

    In order to examine the radiation safety of a spallation mercury target system, and especially the source term evaluation, it is necessary to clarify the chemical forms of the spallation products generated by the spallation reaction with the proton beam. The chemical forms of the spallation products in mercury, which involves large amounts of spallation products, were estimated using binary phase diagrams and thermochemical equilibrium calculations based on the amounts of each spallation product. Calculation results showed that the mercury would dissolve Al, As, B, Be, Bi, C, Co, Cr, Fe, Ga, Ge, Ir, Mo, Nb, Os, Re, Ru, Sb, Si, Ta, Tc, V and W in the elemental state, and Ag, Au, Ba, Br, Ca, Cd, Ce, Cl, Cs, Cu, Dy, Er, Eu, F, Gd, Hf, Ho, I, In, K, La, Li, Lu, Mg, Mn, Na, Nd, Ni, O, Pb, Pd, Pr, Pt, Rb, Rh, S, Sc, Se, Sm, Sn, Sr, Tb, Te, Ti, Tl, Tm, Y, Yb, Zn and Zr in the form of inorganic mercury compounds. As for As, Be, Co, Cr, Fe, Ge, Ir, Mo, Nb, Os, Pt, Re, Ru, Se, Ta, V, W and Zr, precipitation could occur as the amounts of spallation products increase with the operation time of the spallation target system. On the other hand, beryllium-7 (Be-7), which is produced by the spallation reaction of oxygen in the cooling water of the safety hull, becomes the main source of external exposure during maintenance of the cooling loop. Based on a thermochemical equilibrium calculation for the Be-H2O binary system, the chemical forms of Be in the cooling water were estimated: Be could exist in the form of cations such as BeOH+, BeO+ and Be2+ at Be mole fractions below 10^-8 in the cooling water. (author)

  13. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These energy sources are commonly divided into five principal categories: the sun, the wind, flowing water, biomass, and heat from within the earth. According to some studies carried out by research institutes, about 25% of new generation will come from Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by a Weighted Least-Squares (WLS) approach. The practical considerations include var compensators, Voltage Regulators (VRs), and Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. Comparison results with other evolutionary optimization algorithms, such as the original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and the Genetic Algorithm (GA), on a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
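
    A minimal sketch of the WLS-plus-simplex structure (not the paper's PSO-NM implementation): a crude random multistart stands in for PSO, and SciPy's Nelder-Mead refines the best candidate. The measurement model h(x), the measurements, and the weights are toy assumptions; a real DSE would evaluate an unbalanced three-phase power flow inside h.

        import numpy as np
        from scipy.optimize import minimize

        # Toy measurement model h(x): maps the unknown load/RES injections x to
        # the measured quantities. A real DSE would run an unbalanced
        # three-phase power flow here; this stand-in only shows the structure.
        def h(x):
            return np.array([x[0] + x[1], x[0] - 0.5 * x[1], 0.1 * x[0] * x[1]])

        z = np.array([3.0, 0.5, 0.21])       # measurements
        w = np.array([100.0, 100.0, 25.0])   # weights = 1 / measurement variance

        def wls(x):
            r = z - h(x)                     # weighted least-squares objective
            return float(np.sum(w * r * r))

        # Stage 1: crude global exploration (random multistart in place of PSO)
        rng = np.random.default_rng(0)
        x0 = min(rng.uniform(0.0, 5.0, size=(200, 2)), key=wls)

        # Stage 2: Nelder-Mead simplex refinement of the best candidate
        res = minimize(wls, x0, method="Nelder-Mead")
        print("estimated injections:", res.x)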

  14. Angular distribution of thick-target bremsstrahlung produced by electrons with initial energies ranging from 10 to 20 keV incident on Ag

    Energy Technology Data Exchange (ETDEWEB)

    Gonzales, D.; Cavness, B.; Williams, S. [Department of Physics, Angelo State University, San Angelo, Texas 76909 (United States)

    2011-11-15

    Experimental results are presented comparing the intensities of the bremsstrahlung produced by electrons with initial energies ranging from 10 to 20 keV incident on a thick Ag target, measured at forward angles in the range of 0° to 55°. When the data are corrected for attenuation due to photon absorption within the target, the results indicate that the detected radiation is distributed anisotropically only at photon energies k that are approximately equal to the initial energy of the incident electrons E₀. The results of our experiments suggest that, as k/E₀ → 0, the detected radiation essentially becomes isotropic due primarily to the scattering of electrons within the target. A comparison to the theory of Kissel et al. [At. Data Nucl. Data Tables 28, 381 (1983)] suggests that the angular distribution of bremsstrahlung emitted by electrons incident on thick targets is similar to the angular distribution of bremsstrahlung emitted by electrons incident on free-atom targets only when k/E₀ ≈ 1. The experimental data also are in approximate agreement with the angular distribution predictions of the Monte Carlo program PENELOPE.

  15. Preventive strike vs. false targets and protection in defense strategy

    International Nuclear Information System (INIS)

    Levitin, Gregory; Hausken, Kjell

    2011-01-01

    A defender allocates its resource between defending an object passively and striking preventively against an attacker seeking to destroy the object. With no preventive strike the defender distributes its entire resource between deploying false targets, which the attacker cannot distinguish from the genuine object, and protecting the object. If the defender strikes preventively, the attacker's vulnerability depends on its protection and on the defender's resource allocated to the strike. If the attacker survives, the object's vulnerability depends on the attacker's revenge attack resource allocated to the attacked object. The optimal defense resource distribution between striking preventively, deploying the false targets and protecting the object is analyzed. Two cases of the attacker strategy are considered: when the attacker attacks all of the targets and when it chooses a number of targets to attack. An optimization model is presented for making a decision about the efficiency of the preventive strike based on the estimated attack probability, dependent on a variety of model parameters.

  16. Estimating Loan-to-value Distributions

    DEFF Research Database (Denmark)

    Korteweg, Arthur; Sørensen, Morten

    2016-01-01

    We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filtering...

  17. Comparison of regional index flood estimation procedures based on the extreme value type I distribution

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas Rodding; Rosbjerg, Dan

    2002-01-01

    A comparison of different methods for estimating T-year events is presented, all based on the Extreme Value Type I distribution. Series of annual maximum floods from ten gauging stations on the New Zealand South Island have been used. Different methods of predicting the 100-year event and the connected uncertainty have been applied: at-site estimation and regional index-flood estimation, with and without accounting for intersite correlation, using either the method of moments or the method of probability weighted moments for parameter estimation. Furthermore, estimation at ungauged sites was considered. The presence of intersite correlation was found to increase the prediction uncertainty, and a simulation study revealed that in regional index-flood estimation the method of probability weighted moments is preferable to method-of-moments estimation with regard to bias and RMSE.
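
    For concreteness, the sketch below fits the Gumbel (EV Type I) distribution to an annual-maximum series by the method of probability weighted moments and returns the T-year event. The synthetic 40-year record is an assumption; the PWM formulas for the Gumbel case are standard.

        import numpy as np

        def gumbel_pwm_quantile(annual_maxima, T):
            # Fit the EV Type I (Gumbel) distribution by probability weighted
            # moments and return the T-year event.
            x = np.sort(np.asarray(annual_maxima))          # ascending order
            n = x.size
            b0 = x.mean()
            b1 = np.sum(np.arange(n) / (n - 1) * x) / n     # unbiased PWM b1
            alpha = (2.0 * b1 - b0) / np.log(2.0)           # scale
            xi = b0 - 0.5772156649 * alpha                  # location (Euler's constant)
            return xi - alpha * np.log(-np.log(1.0 - 1.0 / T))

        rng = np.random.default_rng(2)
        series = rng.gumbel(loc=100.0, scale=30.0, size=40)  # synthetic record
        print("estimated 100-year event:", gumbel_pwm_quantile(series, T=100))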

  18. Target-fragment angular distributions for the interaction of 86 MeV/A 12C with 197Au

    International Nuclear Information System (INIS)

    Kraus, R.H. Jr.; Loveland, W.; McGaughey, P.L.; Seaborg, G.T.; Morita, Y.; Hageboe, E.; Haldorsen, I.R.; Sugihara, T.T.

    1985-01-01

    Target-fragment angular distributions were measured using radiochemical techniques for 69 different fragments (masses from A = 44 upward) produced in the interaction of 86 MeV/A 12 C with 197 Au. The angular distributions in the laboratory system are forward-peaked, with some distributions also showing a backward peaking. The shapes of the laboratory-system distributions were compared with the predictions of the nuclear firestreak model. The measured angular distributions differed markedly from the predictions of the firestreak model in most cases. This discrepancy could be due, in part, to overestimation of the transferred longitudinal momentum by the firestreak model, the assumption of isotropic angular distributions for fission and particle emission in the moving frame, and incorrect assumptions about the production of the lightest fragments. The heavy (A > 145) fragment distributions were symmetric about 90°. (orig.)

  19. Estimating interevent time distributions from finite observation periods in communication networks

    Science.gov (United States)

    Kivelä, Mikko; Porter, Mason A.

    2015-11-01

    A diverse variety of processes—including recurrent disease episodes, neuron firing, and communication patterns among humans—can be described using interevent time (IET) distributions. Many such processes are ongoing, although event sequences are only available during a finite observation window. Because the observation time window is more likely to begin or end during long IETs than during short ones, the analysis of such data is susceptible to a bias induced by the finite observation period. In this paper, we illustrate how this length bias arises and how it can be corrected without assuming any particular shape for the IET distribution. To do this, we model event sequences using stationary renewal processes, and we formulate simple heuristics for determining the severity of the bias. To illustrate our results, we focus on the example of empirical communication networks, which are temporal networks that are constructed from communication events. The IET distributions of such systems guide efforts to build models of human behavior, and the variance of IETs is very important for estimating the spreading rate of information in networks of temporal interactions. We analyze several well-known data sets from the literature, and we find that the resulting bias can lead to systematic underestimates of the variance in the IET distributions and that correcting for the bias can lead to qualitatively different results for the tails of the IET distributions.
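
    A small simulation, under assumed lognormal IETs, showing the direction of the bias discussed above: IETs longer than the observation window can never be observed in full, and long IETs are more likely to straddle a window edge, so the observed sample underestimates the variance. The window length and distribution parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Heavy-tailed inter-event times (IETs) of one long stationary sequence
        iets = rng.lognormal(mean=1.0, sigma=1.2, size=500_000)
        events = np.cumsum(iets)

        # Pool IETs that are fully observable within short windows of length L
        L, observed = 50.0, []
        for t0 in rng.uniform(0.0, events[-1] - L, size=2_000):
            i, j = np.searchsorted(events, [t0, t0 + L])
            observed.extend(np.diff(events[i:j]))
        observed = np.asarray(observed)

        # Long IETs tend to straddle the window edges, so they are
        # under-represented and the variance is biased low
        print("true variance:    ", iets.var())
        print("observed variance:", observed.var())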

  20. Passive Sonar Target Detection Using Statistical Classifier and Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Hamed Komari Alaie

    2018-01-01

    Full Text Available This paper presents the results of an experimental investigation of target detection with passive sonar in the Persian Gulf. Detecting propagated sounds in the water is one of the basic challenges for researchers in the sonar field. This challenge becomes more complex in shallow water (like the Persian Gulf) and with low-noise vessels. Generally, in passive sonar, targets are detected by the sonar equation with a constant threshold, which increases the detection error in shallow water. The purpose of this study is to propose a new method for detecting targets in passive sonar using an adaptive threshold. In this method, the target signal (sound) is processed in the time and frequency domains. For classification, a Bayesian classifier is used, and the posterior distribution is estimated by the Maximum Likelihood Estimation algorithm. Finally, the target is detected by combining the detection points of both domains using a Least Mean Square (LMS) adaptive filter. The results show that the proposed method improves the true detection rate by about 24% compared with the best existing detection method.
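
    A hedged sketch of the final combination step: a least-mean-square adaptive filter that learns weights mapping per-sample detection scores to a decision statistic. The score matrix, labels, and step size are hypothetical stand-ins; the paper's actual time- and frequency-domain detectors are not reproduced.

        import numpy as np

        def lms_combine(d, x, mu=0.01):
            # Least-mean-square adaptive filter: learns weights that map the
            # per-sample detection scores x[k] to the desired response d[k].
            w = np.zeros(x.shape[1])
            y = np.empty(len(d))
            for k in range(len(d)):
                y[k] = w @ x[k]
                e = d[k] - y[k]           # instantaneous error
                w += 2.0 * mu * e * x[k]  # LMS weight update
            return y

        rng = np.random.default_rng(4)
        scores = rng.normal(size=(2_000, 4))   # stand-in detector outputs
        labels = (scores @ np.array([0.5, 0.3, 0.1, 0.1]) > 0.4).astype(float)
        y = lms_combine(labels, scores)
        detections = y > 0.5                   # final detection decision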

  1. The probability of a tornado missile hitting a target

    International Nuclear Information System (INIS)

    Goodman, J.; Koch, J.E.

    1983-01-01

    It is shown that tornado missile transport is a diffusion Markovian process. Therefore, the Green's function method is applied to estimate the probability of hitting a unit target area. This probability is expressed through a joint density of tornado intensity and path area, a probability of tornado missile injection, and a tornado missile height distribution. (orig.)

  2. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv’s Distribution for Quadratic Frequency Modulation Signals

    Directory of Open Access Journals (Sweden)

    Fulong Jing

    2017-06-01

    Full Text Available For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm is proposed—referred to as the Two-Dimensional product modified Lv's distribution (2D-PMLVD)—for QFM signals. The 2D-PMLVD is simple and can be easily implemented using the fast Fourier transform (FFT) and complex multiplication. The 2D-PMLVD is analyzed in the paper in terms of its principle, cross terms, anti-noise performance, and computational complexity. Compared to the other three representative methods, the 2D-PMLVD achieves better anti-noise performance. The 2D-PMLVD, which requires no searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.

  3. Nuclear sizes and intranuclear matter distribution -- from hadron-nucleus collisions

    International Nuclear Information System (INIS)

    Strugalska-Gola, E.; Strugalski, Z.

    1999-01-01

    A method for studying intranuclear matter with hadronic projectiles is developed and worked out. It is tested on pion-xenon nucleus collision events. The target-nucleus size and the nucleon density distribution within it were estimated and described by experimentally motivated formulas

  4. Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina

    2014-01-01

    Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and compensation.

  5. Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution

    NARCIS (Netherlands)

    Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.

    2004-01-01

    The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions underlying the calculation of the pore sizes is that the pores are parallel and thus not interconnected. When the pores are in fact interconnected, the estimated pore size distribution can therefore be erroneous.

  6. MIPAS ESA v7 carbon tetrachloride data: distribution, trend and atmospheric lifetime estimation

    Science.gov (United States)

    Valeri, M.; Barbara, F.; Boone, C. D.; Ceccherini, S.; Gai, M.; Maucher, G.; Raspollini, P.; Ridolfi, M.; Sgheri, L.; Wetzel, G.; Zoppetti, N.

    2017-12-01

    Carbon tetrachloride (CCl4) is a strong ozone-depleting atmospheric gas regulated by the Montreal Protocol. Recently it has received increasing interest due to the so-called "mystery of CCl4": its atmospheric concentration at the surface declines at a rate significantly smaller than its lifetime-limited rate. Indeed, there is a discrepancy between atmospheric observations and the distribution estimated from the reported production and consumption. Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) measurements are used to estimate CCl4 distributions, its trend, and its atmospheric lifetime in the upper troposphere / lower stratosphere (UTLS) region. In particular, here we use the MIPAS product generated with Version 7 of the Level 2 algorithm operated by the European Space Agency. The CCl4 distribution shows features typical of long-lived species of anthropogenic origin: higher concentrations in the troposphere, decreasing with altitude due to photolysis. We compare MIPAS CCl4 data with independent observations from the Atmospheric Chemistry Experiment - Fourier Transform Spectrometer (ACE-FTS) and the stratospheric balloon version of MIPAS (MIPAS-B). The comparison shows generally good agreement between the different datasets. CCl4 trends are evaluated as a function of both latitude and altitude: negative trends (-10 to -15 pptv/decade, -10 to -30%/decade) are found at all latitudes in the UTLS, apart from a region in the Southern mid-latitudes between 50 and 10 hPa where the trend is slightly positive (5 to 10 pptv/decade, 15 to 20%/decade). At the lowest altitudes sounded by the MIPAS scan we find trend values consistent with those determined on the basis of the Advanced Global Atmospheric Gases Experiment (AGAGE) and the National Oceanic and Atmospheric Administration / Earth System Research Laboratory / Halocarbons and other Atmospheric Trace Species (NOAA / ESRL / HATS) networks. A CCl4 global average lifetime of 47 (39-61) years has been estimated.

  7. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
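
    The MOG-plus-AIC baseline mentioned above can be sketched in a few lines with scikit-learn: fit mixtures with an increasing number of components and keep the AIC minimum. The three-mode synthetic data set and the search range are assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        # Synthetic one-dimensional data with three modes
        data = np.concatenate([rng.normal(-4.0, 0.7, 300),
                               rng.normal(0.0, 1.0, 300),
                               rng.normal(5.0, 0.8, 300)]).reshape(-1, 1)

        # Fit k = 1..8 component mixtures and keep the AIC minimum
        aic = [GaussianMixture(n_components=k, random_state=0).fit(data).aic(data)
               for k in range(1, 9)]
        print("estimated number of modes:", int(np.argmin(aic)) + 1)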

  8. Projectile fragmentation of neutron-rich nuclei on light target (momentum distribution and nucleon-removal cross section)

    International Nuclear Information System (INIS)

    Kobayashi, T.; Tanihata, I.; Suzuki, T.

    1992-01-01

    Transverse momentum distributions of the projectile fragments from β-unstable nuclei have been measured with various projectile and target combinations. The momentum correlation of the two neutrons in the neutron halo is extracted from the transverse momentum distribution of 9 Li and that of the neutrons. It is found that the two neutrons move in the same direction on average, which strongly suggests the formation of a di-neutron in 11 Li. (Author)

  9. Reduced complexity FFT-based DOA and DOD estimation for moving target in bistatic MIMO radar

    KAUST Repository

    Ali, Hussain

    2016-06-24

    In this paper, we consider a bistatic multiple-input multiple-output (MIMO) radar. We propose a reduced-complexity algorithm to estimate the direction-of-arrival (DOA) and direction-of-departure (DOD) for a moving target. We show that the parameter estimation can be expressed in terms of one-dimensional fast Fourier transforms, which drastically reduces the complexity of the optimization algorithm. The performance of the proposed algorithm is compared with the two-dimensional multiple signal classification (2D-MUSIC) and reduced-dimension MUSIC (RD-MUSIC) algorithms. Simulations show that our proposed algorithm has better estimation performance and lower computational complexity than the 2D-MUSIC and RD-MUSIC algorithms. Moreover, simulation results also show that the proposed algorithm achieves the Cramér-Rao lower bound. © 2016 IEEE.
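
    The bistatic DOA/DOD estimator itself is not reproduced here, but the sketch below shows the underlying principle of reading a direction off a one-dimensional FFT: for a half-wavelength uniform linear array, the FFT peak over the elements maps to the arrival angle. The array size, angle, and noise level are assumptions.

        import numpy as np

        def fft_doa(snapshot, n_fft=1024):
            # Coarse single-source DOA for a half-wavelength ULA from one
            # zero-padded FFT over the array elements.
            spec = np.abs(np.fft.fft(snapshot, n_fft))
            f = np.argmax(spec) / n_fft
            if f >= 0.5:                   # map spatial frequency to [-0.5, 0.5)
                f -= 1.0
            return np.degrees(np.arcsin(2.0 * f))

        rng = np.random.default_rng(5)
        n, theta = 16, np.radians(20.0)    # 16 elements, source at 20 degrees
        a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))   # steering vector
        x = a + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
        print("DOA estimate (deg):", fft_doa(x))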

  10. Multi-objective optimization with estimation of distribution algorithm in a noisy environment.

    Science.gov (United States)

    Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah

    2013-01-01

    Many real-world optimization problems are subjected to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deals with noisy information. This paper studies the potential of EDAs; particularly an EDA based on restricted Boltzmann machines that handles multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state of the art algorithms.

  11. Reconsidering the smart metering data collection frequency for distribution state estimation

    OpenAIRE

    Chen, Qipeng; Kaleshi, Dritan; Armour, Simon; Fan, Zhong

    2015-01-01

    The current UK Smart Metering Technical Specification requires smart meter readings to be collected once a day, primarily to support accurate billing without violating users' privacy. In this paper we consider the use of Smart Metering data for Distribution State Estimation (DSE), and compare the effectiveness of daily data collection strategy with a more frequent, half-hourly SM data collection strategy. We first assess the suitability of using the data for load forecasting at Low Voltage (L...

  12. Setting population targets for mammals using body mass as a predictor of population persistence.

    Science.gov (United States)

    Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J

    2017-04-01

    Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  13. Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation

    Science.gov (United States)

    Zhan, Hanyu; Voelz, David G.

    2016-12-01

    The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.

  14. On the effect of correlated measurements on the performance of distributed estimation

    KAUST Repository

    Ahmed, Mohammed

    2013-06-01

    We address the distributed estimation of an unknown scalar parameter in Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations over a multiple access channel to a Fusion Center (FC) that reconstructs the source parameter. The received signal is corrupted by noise and channel fading, so the FC objective is to minimize the Mean-Square Error (MSE) of the estimate. In this paper, we assume the sensor node observations to be correlated with the source signal and with each other; the correlation coefficient between two observations decays exponentially with the distance between them. The effect of this distance-based correlation on the estimation quality is demonstrated and compared with the case of unity correlated observations. Moreover, a closed-form expression for the outage probability is derived and its dependence on the correlation coefficients is investigated. Numerical simulations are provided to verify our analytical results. © 2013 IEEE.

  15. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability density function based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm, and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.

  16. Estimating alarm thresholds and the number of components in mixture distributions

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom, E-mail: tburr@lanl.gov [Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States); Hamada, Michael S. [Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)

    2012-09-01

    Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and in solution monitoring (SM) for nuclear safeguards. SM data is increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background for mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals defined as tank-to-tank transfer differences, and wait-mode residuals defined as changes during non-transfer modes. A previous paper investigated impacts on transfer-mode and wait-mode residuals of event marking errors which arise when the estimated start and/or stop times of tank events such as transfers are somewhat different from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
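
    A small experiment in the spirit of the sample-size question raised above, under assumed Gaussian residual mixtures: fit mixtures of 1-5 components by BIC and check how often the true two-component structure is recovered as the sample grows. The component means and trial counts are illustrative assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)

        def estimated_components(n, max_k=5):
            # Draw n residuals from a two-component mixture, then pick the
            # component count minimizing BIC.
            data = np.concatenate([rng.normal(0.0, 1.0, n // 2),
                                   rng.normal(3.0, 1.0, n - n // 2)]).reshape(-1, 1)
            bic = [GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
                   for k in range(1, max_k + 1)]
            return int(np.argmin(bic)) + 1

        for n in (30, 100, 300, 1000):
            hits = sum(estimated_components(n) == 2 for _ in range(20))
            print(f"n = {n:4d}: two components recovered in {hits}/20 trials")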

  17. Estimating the spatial distribution of artificial groundwater recharge using multiple tracers.

    Science.gov (United States)

    Moeck, Christian; Radny, Dirk; Auckenthaler, Adrian; Berg, Michael; Hollender, Juliane; Schirmer, Mario

    2017-10-01

    Stable isotopes of water, organic micropollutants and hydrochemistry data are powerful tools for identifying different water types in areas where knowledge of the spatial distribution of different groundwater is critical for water resource management. An important question is how the assessments change if only one or a subset of these tracers is used. In this study, we estimate spatial artificial infiltration along an infiltration system with stage-discharge relationships and classify different water types based on the mentioned hydrochemistry data for a drinking water production area in Switzerland. Managed aquifer recharge via surface water that feeds into the aquifer creates a hydraulic barrier between contaminated groundwater and drinking water wells. We systematically compare the information from the aforementioned tracers and illustrate differences in distribution and mixing ratios. Despite uncertainties in the mixing ratios, we found that the overall spatial distribution of artificial infiltration is very similar for all the tracers. The highest infiltration occurred in the eastern part of the infiltration system, whereas infiltration in the western part was the lowest. More balanced infiltration within the infiltration system could cause the elevated groundwater mound to be distributed more evenly, preventing the natural inflow of contaminated groundwater. Dedicated to Professor Peter Fritz on the occasion of his 80th birthday.

  18. Estimation of Bimodal Urban Link Travel Time Distribution and Its Applications in Traffic Analysis

    Directory of Open Access Journals (Sweden)

    Yuxiong Ji

    2015-01-01

    Full Text Available Vehicles travelling on urban streets are heavily influenced by traffic signal controls, pedestrian crossings, and conflicting traffic from cross streets, which results in bimodal travel time distributions, with one mode corresponding to trips without delays and the other to trips with delays. A hierarchical Bayesian bimodal travel time model is proposed to capture the interrupted nature of urban traffic flows. The travel time distributions obtained from the proposed model are then used to analyze traffic operations and to estimate the travel time distribution in real time. The advantage of the proposed bimodal model is demonstrated using empirical data, and the results are encouraging.

  19. Spatial distribution of moderated neutrons along a Pb target irradiated by high-energy protons

    International Nuclear Information System (INIS)

    Fragopoulou, M.; Manolopoulou, M.; Stoulos, S.; Brandt, R.; Westmeier, W.; Kulakov, B.A.; Krivopustov, M.I.; Sosnin, A.N.; Debeauvais, M.; Adloff, J.C.; Zamani Valasiadou, M.

    2006-01-01

    An extended Pb target covered with a paraffin moderator was irradiated with high-energy protons in the range 0.5-7.4 GeV. The moderator was used in order to shift the hard Pb spallation neutron spectrum to lower energies and to increase the transmutation efficiency via (n,γ) reactions. Neutron distributions along and inside the paraffin moderator were measured. An analysis of the experimental results was performed based on particle production by high-energy interactions with heavy targets and on the neutron spectrum shifting by the paraffin. Conclusions about the spallation neutron production in the target and the moderation through the paraffin are presented. The study of the total neutron fluence on the moderator surface as a function of the proton beam energy shows that the neutron cost improves up to 1 GeV; for higher proton beam energies it remains constant with a tendency to decline.

  20. Estimation of effective population size in continuously distributed populations: There goes the neighborhood

    Science.gov (United States)

    M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples

    2013-01-01

    Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...

  1. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    Science.gov (United States)

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates the individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection with multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  2. Estimation of Inflationary Expectations and the Effectiveness of Inflation Targeting Strategy

    Directory of Open Access Journals (Sweden)

    Amalia CRISTESCU

    2011-02-01

    Full Text Available The credibility and accountability of a central bank acting in an inflation targeting regime are essential because they allow a sustainable anchoring of the inflationary anticipations of economic agents. Their decisions and behavior will increasingly be grounded in information provided by the central bank, especially if it shows transparency in communicating with the public. Thus, inflationary anticipations are one of the most important channels through which monetary policy affects economic activity. They are crucial in the formation of consumer prices among producers and traders, especially since it is relatively expensive for economic agents to adjust their prices at short intervals. That is why many central banks use response functions containing inflationary anticipations in their inflation targeting models. The most frequent problem with these anticipations is that they are based on the assumption of optimal forecasts of future inflation, which are, implicitly, rational anticipations. In fact, the inflationary anticipations of economic agents are most often adaptive or even irrational. Thus, rational anticipations cannot be used to estimate equations for the Romanian economy, because the agents who form their expectations do not have sufficient information or an inflationary environment stable enough to fully anticipate the evolution of inflation. The evolution of inflation in the Romanian economy lends itself to adaptive forecasts in which the weight of the "forward looking" component has to be rather important. Economic agents form their inflation expectations for periods of time that usually coincide with a production cycle (one year) and consider the official and unofficial inflation forecasts present on the market in order to make strategic decisions. Thus, in recent research on inflation modeling, the actual inflationary anticipations of economic agents are revealed based on national

  3. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition

  4. Filling the gaps: Using count survey data to predict bird density distribution patterns and estimate population sizes

    NARCIS (Netherlands)

    Sierdsema, H.; van Loon, E.E.

    2008-01-01

    Birds play an increasingly prominent role in politics, nature conservation and nature management. As a consequence, up-to-date and reliable spatial estimates of bird distributions over large areas are in high demand. The requested bird distribution maps are however not easily obtained. Intensive

  5. Fast neutron forward distributions from C, Be and U thick targets bombarded by deuterons

    International Nuclear Information System (INIS)

    Menard, S.; Clapier, F.; Pauwels, N.; Proust, J.; Donzaud, C.; Guillemaud-Mueller, D.; Lhenry, I.; Mueller, A.C.; Scarpaci, J.A.; Sorlin, O.; Mirea, M.

    1999-01-01

    In principle, to produce neutron-rich radioactive beams with sufficient intensities, a source of isotopes far from the valley of β-stability can be obtained through the fission of 238 U induced by fast neutrons. A very promising way to assess the feasibility of these very intense neutron beams is to break an intense 2 H beam in a dedicated converter. The main objective of the SPIRAL and PARRNe R&D projects is the investigation of the optimum parameters for a neutron-rich isotope source in accordance with the scheme presented above. In such conditions, the charged-particle energy loss can prevent the destruction of the fission target. In the framework of these projects, special attention is dedicated to the energy and angular distributions of the neutrons emerging from a set of converters at a series of 2 H incident energies. Deuteron beams at energies below 30 MeV are particularly interesting because it is expected that, after the decay in the 238 U target, the neutron-rich radioactive fission products are cold enough to avoid the evaporation of too large a number of neutrons. For such purposes, one needs experimental angular distributions at given energies for different types of converters, as well as a theoretical tool to estimate accurately the characteristics of the secondary neutron beam. In this paper, experimental results were obtained with 17, 20 and 28 MeV deuteron energies on Be, C and U converters using the time-of-flight method. These data are compared to results given by a model valid at higher energy in order to obtain pertinent simulations over a large range of incident energies. Many theoretical tools have been developed to characterize the properties of neutron beams emerging from thick targets. In this contribution, Serber's model, with improvements that account for the Coulomb deflection and the mean straggling of the beam in the material, is compared to experimental data in order to verify its validity.

  6. The estimation of 3D SAR distributions in the human head from mobile phone compliance testing data for epidemiological studies

    International Nuclear Information System (INIS)

    Wake, Kanako; Watanabe, Soichi; Taki, Masao; Varsier, Nadege; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth

    2009-01-01

    A worldwide epidemiological study called 'INTERPHONE' has been conducted to estimate the hypothetical relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate the 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use, in order to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to the electromagnetic field of a mobile phone are estimated from mobile phone compliance testing data for actual devices. The compliance testing data are measured only on the surface in the region near the device, and in a small 3D region around the maximum on the surface, in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation step and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, in which the tumor locations are identified in the epidemiological studies, are obtained from the measured SAR data with the head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provides a good estimate of the 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from data obtained by compliance testing measurements, providing a measure of the exposure gradient in specific locations of the brain for exposure assessment in epidemiological studies. The proposed method has been used in several INTERPHONE studies.

  7. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    Directory of Open Access Journals (Sweden)

    Messali Zoubeida

    2007-01-01

    Full Text Available This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating on positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest-of (GO) and smallest-of (SO) CFAR processors are also determined. The performance characteristics of the distributed systems are presented and compared, both in homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that the distributed systems, when the clutter is modelled as a positive alpha-stable distribution, offer robustness against multiple-target situations, especially when using the "OR" fusion rule.

  8. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    Directory of Open Access Journals (Sweden)

    Faouzi Soltani

    2007-01-01

    Full Text Available This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating on positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest-of (GO) and smallest-of (SO) CFAR processors are also determined. The performance characteristics of the distributed systems are presented and compared, both in homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that the distributed systems, when the clutter is modelled as a positive alpha-stable distribution, offer robustness against multiple-target situations, especially when using the "OR" fusion rule.

  9. Fast Parabola Detection Using Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Jose de Jesus Guerrero-Turrubiates

    2017-01-01

    Full Text Available This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image, using the Hadamard product as the fitness function. The proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and the RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparative methods in execution time by about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, the experimental results show that the proposed method can be highly suitable for different medical applications.

  10. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.

  11. Inverse heat transfer analysis of a functionally graded fin to estimate time-dependent base heat flux and temperature distributions

    International Nuclear Information System (INIS)

    Lee, Haw-Long; Chang, Win-Jin; Chen, Wen-Lih; Yang, Yu-Ching

    2012-01-01

    Highlights: ► The time-dependent base heat flux of a functionally graded fin is inversely estimated. ► An inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied. ► The distributions of temperature in the fin are determined as well. ► The influence of measurement error and measurement location upon the precision of the estimated results is also investigated. - Abstract: In this study, an inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied to estimate the unknown time-dependent base heat flux of a functionally graded fin from the knowledge of temperature measurements taken within the fin. Subsequently, the distributions of temperature in the fin can be determined as well. It is assumed that no prior information is available on the functional form of the unknown base heat flux; hence the procedure is classified as function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The influence of measurement errors and measurement location upon the precision of the estimated results is also investigated. Results show that an excellent estimate of the time-dependent base heat flux and the temperature distributions can be obtained for the test case considered in this study.

  12. A modified estimation distribution algorithm based on extreme elitism.

    Science.gov (United States)

    Gao, Shujun; de Silva, Clarence W

    2016-12-01

    An existing estimation distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlights the effect of a few of the best solutions in the evolution, leading the EDA to form a primary evolution direction and achieve a fast convergence rate. At the same time, the selection preserves population diversity, helping the EDA avoid premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from the extreme elitism selection. In addition, the no-free-lunch theorem was considered in analysing the effect of this new selection on EDAs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
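
    A minimal sketch of an EDA with a univariate marginal Gaussian model is given below; the extreme-elitism idea is only approximated, by over-weighting a few top solutions when the Gaussian is re-estimated, and the sphere benchmark stands in for the paper's test problems.

        import numpy as np

        def sphere(x):
            # Benchmark objective; minimum 0 at the origin.
            return np.sum(x ** 2, axis=1)

        def eda_extreme_elitism(dim=10, pop=100, top=5, sel=30, iters=200, seed=0):
            # Univariate marginal Gaussian EDA. "Extreme elitism" is mimicked
            # here by repeating the few best solutions inside the selected set,
            # so they dominate the estimated mean; the paper's exact weighting
            # scheme may differ.
            rng = np.random.default_rng(seed)
            mean = rng.uniform(-5.0, 5.0, dim)
            std = np.full(dim, 2.0)
            for _ in range(iters):
                x = rng.normal(mean, std, size=(pop, dim))
                order = np.argsort(sphere(x))
                chosen = x[order[:sel]]                      # truncation selection
                elite = np.repeat(x[order[:top]], sel // top, axis=0)
                weighted = np.vstack([chosen, elite])        # elites counted extra
                mean = weighted.mean(axis=0)
                std = weighted.std(axis=0) + 1e-12           # avoid collapse
            return mean, float(sphere(mean[None, :])[0])

        best_x, best_f = eda_extreme_elitism()
        print(f"best objective after search: {best_f:.3e}")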

  13. An algorithm for 3D target scatterer feature estimation from sparse SAR apertures

    Science.gov (United States)

    Jackson, Julie Ann; Moses, Randolph L.

    2009-05-01

    We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.

  14. Distributed Space-Time Block Coded Transmission with Imperfect Channel Estimation: Achievable Rate and Power Allocation

    Directory of Open Access Journals (Sweden)

    Sonia Aïssa

    2008-05-01

    Full Text Available This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs) when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.

  15. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
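
    The pitfall described above, adjusting the propensity score for a pure cause of the exposure, can be seen in a toy simulation; the data-generating process and sample sizes below are invented for illustration and are not those of the paper. Across replications, the IPTW estimator that also adjusts for the instrument z stays roughly unbiased but becomes noticeably more variable.

        import numpy as np

        def simulate(seed, adjust_for_instrument):
            # One replication: z is a pure cause of treatment (an instrument),
            # c a true confounder; the causal effect of a on y is 1.0.
            rng = np.random.default_rng(seed)
            n = 2000
            z = rng.normal(size=n)
            c = rng.normal(size=n)
            a = rng.binomial(1, 1 / (1 + np.exp(-(2.0 * z + c))))
            y = a + c + rng.normal(size=n)

            cols = [np.ones(n), c] + ([z] if adjust_for_instrument else [])
            X = np.column_stack(cols)
            beta = np.zeros(X.shape[1])
            for _ in range(25):                  # Newton steps for logistic MLE
                ps = 1 / (1 + np.exp(-X @ beta))
                grad = X.T @ (a - ps)
                hess = (X * (ps * (1 - ps))[:, None]).T @ X
                beta += np.linalg.solve(hess, grad)
            ps = np.clip(1 / (1 + np.exp(-X @ beta)), 1e-3, 1 - 1e-3)
            w = a / ps - (1 - a) / (1 - ps)      # signed IPTW weights
            return np.mean(w * y)                # estimate of E[Y(1)] - E[Y(0)]

        for flag in (False, True):
            est = [simulate(s, flag) for s in range(200)]
            print(f"adjust for instrument = {flag}: "
                  f"mean = {np.mean(est):+.3f}, SD = {np.std(est):.3f}")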

  16. Estimation of cost-effectiveness of the Finnish electricity distribution utilities

    International Nuclear Information System (INIS)

    Kopsakangas-Savolainen, Maria; Svento, Rauli

    2008-01-01

    This paper examines the cost-effectiveness of Finnish electricity distribution utilities. We estimate several panel data stochastic frontier specifications using both Cobb-Douglas and Translog model specifications. The conventional models are extended in order to model observed heterogeneity explicitly in the cost frontier models. The true fixed effects model has been used as a representative of the models which account for unobserved heterogeneity and extended conventional random effect models have been used in analysing the impact of observed heterogeneity. A comparison between the conventional random effects model and models where the heterogeneity component is entered either into the mean or into the variance of the inefficiency term shows that relative efficiency scores diminish when heterogeneity is added to the analysis. The true fixed effects model on the other hand gives clearly smaller inefficiency scores than random effects models. In the paper we also show that the relative inefficiency scores and rankings are not sensitive to the cost function specification. Our analysis points out the importance of the efficient use of the existing distribution network. The economies of scale results suggest that firms could reduce their operating costs by using networks more efficiently. According to our results average size firms which have high load factors are the most efficient ones. All firms have unused capacities so that they can improve cost-effectiveness rather by increasing the average distributed volumes than by mergers.

  17. Estimation of cost-effectiveness of the Finnish electricity distribution utilities

    Energy Technology Data Exchange (ETDEWEB)

    Kopsakangas-Savolainen, Maria; Svento, Rauli [Department of Economics, University of Oulu (Finland)

    2008-03-15

    This paper examines the cost-effectiveness of Finnish electricity distribution utilities. We estimate several panel data stochastic frontier specifications using both Cobb-Douglas and Translog model specifications. The conventional models are extended in order to model observed heterogeneity explicitly in the cost frontier models. The true fixed effects model has been used as a representative of the models which account for unobserved heterogeneity and extended conventional random effect models have been used in analysing the impact of observed heterogeneity. A comparison between the conventional random effects model and models where the heterogeneity component is entered either into the mean or into the variance of the inefficiency term shows that relative efficiency scores diminish when heterogeneity is added to the analysis. The true fixed effects model on the other hand gives clearly smaller inefficiency scores than random effects models. In the paper we also show that the relative inefficiency scores and rankings are not sensitive to the cost function specification. Our analysis points out the importance of the efficient use of the existing distribution network. The economies of scale results suggest that firms could reduce their operating costs by using networks more efficiently. According to our results average size firms which have high load factors are the most efficient ones. All firms have unused capacities so that they can improve cost-effectiveness rather by increasing the average distributed volumes than by mergers. (author)

  18. Estimating the distribution of lifetime cumulative radon exposures for California residents: a brief summary

    International Nuclear Information System (INIS)

    Liu, K.-S.; Chang, Y.-L.; Hayward, S.B.; Gadgil, A.J.; Nero, A.V.

    1992-01-01

    Data on residential radon concentrations in California, together with information on California residents' house moves and time-activity patterns, have been used to estimate the distribution of lifetime cumulative exposures to ²²²Rn. This distribution was constructed using Monte Carlo techniques to simulate the lifetime occupancy histories and associated radon exposures of 10,000 California residents. For standard male and female lifespans, the simulation sampled from transition probability matrices representing changes of residence within and between six regions of California, as well as into and out of the rest of the United States, and then sampled from the appropriate regional (or national) distribution of indoor concentrations. The resulting distribution of lifetime cumulative exposures has a significantly narrower relative width than the distribution of California indoor concentrations, with only a small fraction (less than 0.2%) of the population having lifetime exposures equivalent to living their lifetimes in a single home with a radon concentration of 148 Bq·m⁻³ or more. (author)
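
    A stripped-down version of such a Monte Carlo simulation is sketched below. The lognormal concentration parameters, the residence-duration model and the occupancy factor are invented placeholders rather than the study's California inputs, but the structure, chaining residences and accumulating exposure, is the same.

        import numpy as np

        rng = np.random.default_rng(42)

        GM, GSD = 30.0, 2.7    # lognormal geometric mean/SD (Bq/m^3), hypothetical
        MEAN_STAY = 8.0        # average years spent in one home, hypothetical
        LIFETIME = 75.0        # simulated lifespan in years
        OCCUPANCY = 0.7        # fraction of time spent at home, hypothetical

        def lifetime_exposure():
            # One person's cumulative exposure (Bq/m^3 * yr): chain residences,
            # each with its own lognormally distributed indoor concentration.
            t, total = 0.0, 0.0
            while t < LIFETIME:
                stay = min(rng.exponential(MEAN_STAY), LIFETIME - t)
                conc = rng.lognormal(np.log(GM), np.log(GSD))
                total += conc * stay * OCCUPANCY
                t += stay
            return total

        sims = np.array([lifetime_exposure() for _ in range(10_000)])
        # Express each lifetime exposure as the single-home concentration that
        # would produce it, and count people at or above 148 Bq/m^3.
        equivalent = sims / (LIFETIME * OCCUPANCY)
        print(f"fraction >= 148 Bq/m^3 equivalent: {np.mean(equivalent >= 148.0):.4%}")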

  19. Importance of exposure model in estimating impacts when a water distribution system is contaminated

    International Nuclear Information System (INIS)

    Davis, M. J.; Janke, R.; Environmental Science Division; USEPA

    2008-01-01

    The quantity of a contaminant ingested by individuals using tap water drawn from a water distribution system during a contamination event depends on the concentration of the contaminant in the water and the volume of water ingested. If the concentration varies with time, the actual time of exposure affects the quantity ingested. The influence of the timing of exposure and of individual variability in the volume of water ingested on estimated impacts for a contamination event has received limited attention. We examine the significance of ingestion timing and variability in the volume of water ingested by using a number of models for ingestion timing and volume. Contaminant concentrations were obtained from simulations of an actual distribution system for cases involving contaminant injections lasting from 1 to 24 h. We find that assumptions about exposure can significantly influence estimated impacts, especially when injection durations are short and impact thresholds are high. The influence of ingestion timing and volume should be considered when assessing impacts for contamination events

  20. Distribution of base rock depth estimated from Rayleigh wave measurement by forced vibration tests

    International Nuclear Information System (INIS)

    Hiroshi Hibino; Toshiro Maeda; Chiaki Yoshimura; Yasuo Uchiyama

    2005-01-01

    This paper describes an application of Rayleigh wave methods at a real site, performed to determine the spatial distribution of base rock depth below the ground surface. At a site in the Sagami Plain in Japan, boring investigations suggest that the base rock depth varies up to 10 m, and an accurate picture of its distribution was needed for pile design and construction. In order to measure Rayleigh wave phase velocity, forced vibration tests were conducted with a 500 N vertical shaker and linear arrays of three vertical sensors situated at several points in two zones around the edges of the site. An inversion analysis for the soil profile was then carried out by genetic algorithm, matching the measured Rayleigh wave phase velocity with its computed counterpart. The distribution of base rock depth obtained from the inversion was consistent with the inclination of the base rock roughly estimated from the boring tests; that is, the base rock is shallow around the edge of the site and gradually deepens towards its center. The inversion determined the base rock depth as 5-6 m at the edge of the site and 10 m at its center. The distribution of base rock depth determined by this method agreed well at most of the points where boring investigations were performed. As a result, it was confirmed that forced vibration tests using Rayleigh wave methods can be a useful practical technique for estimating surface soil profiles to depths of up to 10 m. (authors)

  1. Targets for Global Climate Policy: An Overview

    OpenAIRE

    Richard S.J. Tol

    2012-01-01

    A survey of the economic impact of climate change and the marginal damage costs shows that carbon dioxide emissions are a negative externality. The estimated Pigou tax and its growth rate are too low to justify the climate policy targets set by political leaders. A lower discount rate or greater concern for the global distribution of income would justify more stringent climate policy, but would imply an overhaul of other public policy. Catastrophic risk justifies more stringent climate policy...

  2. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.

  3. Robust Improvement in Estimation of a Covariance Matrix in an Elliptically Contoured Distribution Respect to Quadratic Loss Function

    Directory of Open Access Journals (Sweden)

    Z. Khodadadi

    2008-03-01

    Full Text Available Let S be the matrix of residual sums of squares in the linear model Y = Aβ + e, where the matrix e is distributed as elliptically contoured with unknown scale matrix Σ. In the present work, we consider the problem of estimating Σ with respect to the squared loss function L(Σ̂, Σ) = tr(Σ̂Σ⁻¹ − I)². It is shown that the improvement of the estimators obtained by James and Stein [7] and Dey and Srinivasan [1] under the normality assumption remains robust under an elliptically contoured distribution with respect to the squared loss function.

  4. Proof of concept and dose estimation with binary responses under model uncertainty.

    Science.gov (United States)

    Klingenberg, B

    2009-01-30

    This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
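
    The permutation construction can be sketched as follows. The candidate models are reduced here to normalized contrast vectors, and the minimum-P rule is applied in its equivalent form as the permutation distribution of the maximum standardized contrast statistic; the shapes and data are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)

        doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
        n_per_dose = 30
        rates = np.array([0.10, 0.12, 0.18, 0.25, 0.30])   # hypothetical dose effect
        d = np.repeat(np.arange(len(doses)), n_per_dose)   # group labels
        y = rng.binomial(1, rates[d]).astype(float)        # binary responses

        # Candidate dose-response shapes, reduced to normalized contrasts;
        # invented stand-ins for the paper's candidate model set.
        shapes = {"linear": doses, "emax": doses / (doses + 1.0),
                  "step": (doses >= 4).astype(float)}
        contrasts = {}
        for name, v in shapes.items():
            cc = v - v.mean()
            contrasts[name] = cc / np.linalg.norm(cc)

        def stat(labels):
            means = np.array([y[labels == g].mean() for g in range(len(doses))])
            return {name: float(c @ means) for name, c in contrasts.items()}

        observed = stat(d)
        B = 2000
        perm_max = np.array([max(stat(rng.permutation(d)).values())
                             for _ in range(B)])

        # Multiplicity-adjusted p-values: each observed contrast statistic is
        # referred to the permutation distribution of the maximum over the set.
        for name, t in observed.items():
            print(f"{name:6s} adjusted p = {np.mean(perm_max >= t):.4f}")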

  5. A heat conduction simulator to estimate lung temperature distribution during percutaneous transthoracic cryoablation for lung cancer

    International Nuclear Information System (INIS)

    Futami, Hikaru; Arai, Tsunenori; Yashiro, Hideki; Nakatsuka, Seishi; Kuribayashi, Sachio; Izumi, Youtaro; Tsukada, Norimasa; Kawamura, Masafumi

    2006-01-01

    To develop an evaluation method for the curative field when using X-ray CT imaging during percutaneous transthoracic cryoablation for lung cancer, we constructed a finite-element heat conduction simulator to estimate temperature distribution in the lung during cryo-treatment. We calculated temperature distribution using a simple two-dimensional finite element model, although the actual temperature distribution spreads in three dimensions. Temperature time-histories were measured within 10 minutes using experimental ex vivo and in vivo lung cryoablation conditions. We adjusted specific heat and thermal conductivity in the heat conduction calculation and compared them with measured temperature time-histories ex vivo. Adjusted lung specific heat was 3.7 J/(g·°C) for unfrozen lung and 1.8 J/(g·°C) for frozen lung. Adjusted lung thermal conductivity in our finite element model fitted proportionally to the exponential function of lung density. We considered the heat input by blood flow circulation and metabolic heat when we calculated the temperature time-histories during in vivo cryoablation of the lung. We assumed that the blood flow varies in inverse proportion to the change in blood viscosity up to the maximum blood flow predicted from cardiac output. Metabolic heat was set as heat generation in the calculation. The measured temperature time-histories of in vivo cryoablation were then estimated with an accuracy of ±3 °C when calculated based on this assumption. Therefore, we successfully constructed a two-dimensional heat conduction simulator that is capable of estimating temperature distribution in the lung at the time of first freezing during cryoablation. (author)

  6. Preliminary estimates of spatially distributed net infiltration and recharge for the Death Valley region, Nevada-California

    International Nuclear Information System (INIS)

    Hevesi, J.A.; Flint, A.L.; Flint, L.E.

    2002-01-01

    A three-dimensional ground-water flow model has been developed to evaluate the Death Valley regional flow system, which includes ground water beneath the Nevada Test Site. Estimates of spatially distributed net infiltration and recharge are needed to define upper boundary conditions. This study presents a preliminary application of a conceptual and numerical model of net infiltration. The model was developed in studies at Yucca Mountain, Nevada, which is located in the approximate center of the Death Valley ground-water flow system. The conceptual model describes the effects of precipitation, runoff, evapotranspiration, and redistribution of water in the shallow unsaturated zone on predicted rates of net infiltration; precipitation and soil depth are the two most significant variables. The conceptual model was tested using a preliminary numerical model based on energy- and water-balance calculations. Daily precipitation for 1980 through 1995, averaging 202 millimeters per year over the 39,556 square kilometers area of the ground-water flow model, was input to the numerical model to simulate net infiltration ranging from zero for a soil thickness greater than 6 meters to over 350 millimeters per year for thin soils at high elevations in the Spring Mountains overlying permeable bedrock. Estimated average net infiltration over the entire ground-water flow model domain is 7.8 millimeters per year. To evaluate the application of the net-infiltration model developed on a local scale at Yucca Mountain, to net-infiltration estimates representing the magnitude and distribution of recharge on a regional scale, the net-infiltration results were compared with recharge estimates obtained using empirical methods. Comparison of model results with previous estimates of basinwide recharge suggests that the net-infiltration estimates obtained using this model may overestimate recharge because of uncertainty in modeled precipitation, bedrock permeability, and soil properties for

  7. Green sturgeon distribution in the Pacific Ocean estimated from modeled oceanographic features and migration behavior.

    Science.gov (United States)

    Huff, David D; Lindley, Steven T; Wells, Brian K; Chai, Fei

    2012-01-01

    The green sturgeon (Acipenser medirostris), which is found in the eastern Pacific Ocean from Baja California to the Bering Sea, tends to be highly migratory, moving long distances among estuaries, spawning rivers, and distant coastal regions. Factors that determine the oceanic distribution of green sturgeon are unclear, but broad-scale physical conditions interacting with migration behavior may play an important role. We estimated the distribution of green sturgeon by modeling species-environment relationships using oceanographic and migration behavior covariates with maximum entropy modeling (MaxEnt) of species geographic distributions. The primary concentration of green sturgeon was estimated from approximately 41-51.5° N latitude in the coastal waters of Washington, Oregon, and Vancouver Island and in the vicinity of San Francisco and Monterey Bays from 36-37° N latitude. Unsuitably cold water temperatures in the far north and energetic efficiencies associated with prevailing water currents may provide the best explanation for the range-wide marine distribution of green sturgeon. Independent trawl records, fisheries observer records, and tagging studies corroborated our findings. However, our model also delineated patchily distributed habitat south of Monterey Bay, though there are few records of green sturgeon from this region. Green sturgeon are likely influenced by countervailing pressures governing their dispersal. They are behaviorally directed to revisit natal freshwater spawning rivers and persistent overwintering grounds in coastal marine habitats, yet they are likely physiologically bounded by abiotic and biotic environmental features. Impacts of human activities on green sturgeon or their habitat in coastal waters, such as bottom-disturbing trawl fisheries, may be minimized through marine spatial planning that makes use of high-quality species distribution information.

  8. Design of a distributed radiator target for inertial fusion driven from two sides with heavy ion beams

    International Nuclear Information System (INIS)

    Tabak, M.; Callahan-Miller, D.

    1997-01-01

    We describe the status of a distributed radiator heavy ion target design. In integrated calculations this target ignited and produced 390-430 MJ of yield when driven with 5.8-6.5 MJ of 3-4 GeV Pb ions. The target has cylindrical symmetry with disk endplates. The ions uniformly illuminate these endplates in a 5 mm radius spot. We discuss the considerations which led to this design together with some previously unused design features: low density hohlraum walls in approximate pressure balance with internal low-Z fill materials, radiation symmetry determined by the position of the radiator materials and particle ranges, and early time pressure symmetry possibly influenced by radiation shims. We discuss how this target scales to lower input energy or to lower beam power. Variant designs with more realistic beam focusing strategies are also discussed. We show the tradeoffs required for targets which accept higher particle energies

  9. System effectiveness of a targeted free mass distribution of long lasting insecticidal nets in Zanzibar, Tanzania

    Directory of Open Access Journals (Sweden)

    Abass Ali K

    2010-06-01

    Full Text Available Abstract Background Insecticide-treated nets (ITN) and long-lasting insecticidal treated nets (LLIN) are important means of malaria prevention. Although there is consensus regarding their importance, there is uncertainty as to which delivery strategies are optimal for dispensing these life saving interventions. A targeted mass distribution of free LLINs to children under five and pregnant women was implemented in Zanzibar between August 2005 and January 2006. The outcomes of this distribution among children under five were evaluated, four to nine months after implementation. Methods Two cross-sectional surveys were conducted in May 2006 in two districts of Zanzibar: Micheweni (MI) on Pemba Island and North A (NA) on Unguja Island. Household interviews were conducted with 509 caretakers of under-five children, who were surveyed for socio-economic status, the net distribution process, perceptions and use of bed nets. Each step in the distribution process was assessed in all children one to five years of age for unconditional and conditional proportion of success. System effectiveness (the accumulated proportion of success) and equity effectiveness were calculated, and predictors for LLIN use were identified. Results The overall proportion of children under five sleeping under any type of treated net was 83.7% (318/380) in MI and 91.8% (357/389) in NA. The LLIN usage was 56.8% (216/380) in MI and 86.9% (338/389) in NA. Overall system effectiveness was 49% in MI and 87% in NA, and equity was found in the distribution scale-up in NA. In both districts, the predicting factor of a child sleeping under an LLIN was caretakers thinking that LLINs are better than conventional nets (OR = 2.8, p = 0.005 in MI and 2.5, p = 0.041 in NA), in addition to receiving an LLIN (OR = 4.9, p …). Conclusions Targeted free mass distribution of LLINs can result in high and equitable bed net coverage among children under five. However, in order to sustain high effective coverage, there…

  10. A distributed computational search strategy for the identification of diagnostics targets: Application to finding aptamer targets for methicillin-resistant staphylococci

    Directory of Open Access Journals (Sweden)

    Flanagan Keith

    2014-06-01

    Full Text Available The rapid and cost-effective identification of bacterial species is crucial, especially for clinical diagnosis and treatment. Peptide aptamers have been shown to be valuable for use as a component of novel, direct detection methods. These small peptides have a number of advantages over antibodies, including greater specificity and longer shelf life. These properties facilitate their use as the detector components of biosensor devices. However, the identification of suitable aptamer targets for particular groups of organisms is challenging. We present a semi-automated processing pipeline for the identification of candidate aptamer targets from whole bacterial genome sequences. The pipeline can be configured to search for protein sequence fragments that uniquely identify a set of strains of interest. The system is also capable of identifying additional organisms that may be of interest due to their possession of protein fragments in common with the initial set. Through the use of Cloud computing technology and distributed databases, our system is capable of scaling with the rapidly growing genome repositories, and consequently of keeping the resulting data sets up-to-date. The system described is also more generically applicable to the discovery of specific targets for other diagnostic approaches such as DNA probes, PCR primers and antibodies.

  11. A distributed computational search strategy for the identification of diagnostics targets: application to finding aptamer targets for methicillin-resistant staphylococci.

    Science.gov (United States)

    Flanagan, Keith; Cockell, Simon; Harwood, Colin; Hallinan, Jennifer; Nakjang, Sirintra; Lawry, Beth; Wipat, Anil

    2014-06-30

    The rapid and cost-effective identification of bacterial species is crucial, especially for clinical diagnosis and treatment. Peptide aptamers have been shown to be valuable for use as a component of novel, direct detection methods. These small peptides have a number of advantages over antibodies, including greater specificity and longer shelf life. These properties facilitate their use as the detector components of biosensor devices. However, the identification of suitable aptamer targets for particular groups of organisms is challenging. We present a semi-automated processing pipeline for the identification of candidate aptamer targets from whole bacterial genome sequences. The pipeline can be configured to search for protein sequence fragments that uniquely identify a set of strains of interest. The system is also capable of identifying additional organisms that may be of interest due to their possession of protein fragments in common with the initial set. Through the use of Cloud computing technology and distributed databases, our system is capable of scaling with the rapidly growing genome repositories, and consequently of keeping the resulting data sets up-to-date. The system described is also more generically applicable to the discovery of specific targets for other diagnostic approaches such as DNA probes, PCR primers and antibodies.

  12. Estimasi kebutuhan spektrum untuk memenuhi target rencana pita lebar Indonesia di wilayah perkotaan [The estimation of spectrum requirements to meet the target of Indonesia broadband plan in urban area]

    Directory of Open Access Journals (Sweden)

    Kasmad Ariansyah

    2015-12-01

    Full Text Available The Indonesian government ratified the Indonesia Broadband Plan (IBP) towards the end of 2014. The document provides guidance and direction for national broadband development and contains sustained achievement targets for the period 2014-2019. With regard to the wireless broadband targets, the availability and adequacy of frequency spectrum are critically important. This study was conducted to estimate the frequency spectrum required to meet the targets of the Indonesia Broadband Plan, in particular for wireless broadband services in urban areas. DKI Jakarta was selected as the urban sample area. The analysis was carried out by computing base transceiver station (BTS) coverage areas, estimating the number of potential users, estimating the spectrum requirement, and comparing it with the spectrum already allocated to obtain the spectrum shortfall. 3G and 4G were assumed to be the technologies used to meet the mobile broadband targets. The results show that over the period 2016-2019 urban areas will face a spectrum shortfall of 2x234.5 MHz to 2x240.5 MHz (for FDD mode) or of 313 MHz to 321 MHz (for TDD mode). Since frequency spectrum is a reusable resource, and assuming that spectrum requirements in rural areas are lower than in urban areas, this estimate can also be used to describe the spectrum requirement for Indonesia as a whole.

  13. Multivariate analysis for the estimation of target localization errors in fiducial marker-based radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Takamiya, Masanori [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501, Japan and Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Nakamura, Mitsuhiro, E-mail: m-nkmr@kuhp.kyoto-u.ac.jp; Akimoto, Mami; Ueki, Nami; Yamada, Masahiro; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Tanabe, Hiroaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047 (Japan); Kokubo, Masaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047, Japan and Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe 650-0047 (Japan); Itoh, Akio [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501 (Japan)

    2016-04-15

    Purpose: To assess the target localization error (TLE) in terms of the distance between the target and the localization point estimated from the surrogates (|TMD|), the average of respiratory motion for the surrogates and the target (|aRM|), and the number of fiducial markers used for estimating the target (n). Methods: This study enrolled 17 lung cancer patients who subsequently underwent four fractions of real-time tumor tracking irradiation. Four or five fiducial markers were implanted around the lung tumor. The three-dimensional (3D) distance between the tumor and markers was at maximum 58.7 mm. One of the markers was used as the target (P_t), and those markers with a 3D |TMD_n| ≤ 58.7 mm at end-exhalation were then selected. The estimated target position (P_e) was calculated from a localization point consisting of one to three markers except P_t. Respiratory motion for P_t and P_e was defined as the root mean square of each displacement, and |aRM| was calculated from the mean value. TLE was defined as the root mean square of each difference between P_t and P_e during the monitoring of each fraction. These procedures were performed repeatedly using the remaining markers. To provide the best guidance on the answer with n and |TMD|, fiducial markers with a 3D |aRM| ≥ 10 mm were selected. Finally, a total of 205, 282, and 76 TLEs that fulfilled the 3D |TMD| and 3D |aRM| criteria were obtained for n = 1, 2, and 3, respectively. Multiple regression analysis (MRA) was used to evaluate TLE as a function of |TMD| and |aRM| in each n. Results: |TMD| for n = 1 was larger than that for n = 3. Moreover, |aRM| was almost constant for all n, indicating a similar scale for the marker's motion near the lung tumor. MRA showed that |aRM| in the left–right direction was the major cause of TLE; however, the contribution made little difference to the 3D TLE because of the small amount of motion in the left–right direction. The TLE…

  14. Spatially-explicit estimation of geographical representation in large-scale species distribution datasets.

    Science.gov (United States)

    Kalwij, Jesse M; Robertson, Mark P; Ronk, Argo; Zobel, Martin; Pärtel, Meelis

    2014-01-01

    Much ecological research relies on existing multispecies distribution datasets. Such datasets, however, can vary considerably in quality, extent, resolution or taxonomic coverage. We provide a framework for a spatially-explicit evaluation of geographical representation within large-scale species distribution datasets, using the comparison of an occurrence atlas with a range atlas dataset as a working example. Specifically, we compared occurrence maps for 3773 taxa from the widely-used Atlas Florae Europaeae (AFE) with digitised range maps for 2049 taxa of the lesser-known Atlas of North European Vascular Plants. We calculated the level of agreement at a 50-km spatial resolution using average latitudinal and longitudinal species range, and area of occupancy. Agreement in species distribution was calculated and mapped using the Jaccard similarity index and a reduced major axis (RMA) regression analysis of species richness between the entire atlases (5221 taxa in total) and between co-occurring species (601 taxa). We found no difference in distribution ranges or in the area-of-occupancy frequency distribution, indicating that the atlases overlapped sufficiently for a valid comparison. The similarity index map showed high levels of agreement for central, western, and northern Europe. The RMA regression confirmed that geographical representation of the AFE was low in areas with a sparse data-recording history (e.g., Russia, Belarus and the Ukraine). For co-occurring species in south-eastern Europe, however, the Atlas of North European Vascular Plants showed remarkably higher richness estimates. Geographical representation of atlas data can be much more heterogeneous than often assumed. Level of agreement between datasets can be used to evaluate geographical representation within datasets. Merging atlases into a single dataset is worthwhile in spite of methodological differences, and helps to fill gaps in our knowledge of species distribution ranges. Species distribution…
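
    Both agreement measures used above are straightforward to compute. In the sketch below the 50-km presence grids and richness counts are invented, and the RMA slope comes from the standard ratio-of-standard-deviations formula.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical presence/absence of one taxon on 400 grid cells in two
        # atlases, with 10% of cells flipped to create disagreement.
        atlas_a = rng.random(400) < 0.30
        atlas_b = atlas_a.copy()
        flip = rng.random(400) < 0.10
        atlas_b[flip] = ~atlas_b[flip]

        # Jaccard similarity of the two occurrence maps.
        jaccard = (atlas_a & atlas_b).sum() / (atlas_a | atlas_b).sum()

        # Reduced major axis (RMA) regression of richness: the slope is the
        # ratio of standard deviations, signed by the correlation.
        rich_a = rng.poisson(40, 100).astype(float)       # richness per cell
        rich_b = 0.8 * rich_a + rng.normal(0.0, 5.0, 100)
        r = np.corrcoef(rich_a, rich_b)[0, 1]
        rma_slope = np.sign(r) * rich_b.std() / rich_a.std()

        print(f"Jaccard = {jaccard:.2f}, RMA slope = {rma_slope:.2f}")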

  15. Distribution of Estimated 10-Year Risk of Recurrent Vascular Events and Residual Risk in a Secondary Prevention Population

    NARCIS (Netherlands)

    Kaasenbrood, Lotte; Boekholdt, S. Matthijs; van der Graaf, Yolanda; Ray, Kausik K.; Peters, Ron J. G.; Kastelein, John J. P.; Amarenco, Pierre; LaRosa, John C.; Cramer, Maarten J. M.; Westerink, Jan; Kappelle, L. Jaap; de Borst, Gert J.; Visseren, Frank L. J.

    2016-01-01

    Among patients with clinically manifest vascular disease, the risk of recurrent vascular events is likely to vary. We assessed the distribution of estimated 10-year risk of recurrent vascular events in a secondary prevention population. We also estimated the potential risk reduction and residual

  16. Distribution of Estimated 10-Year Risk of Recurrent Vascular Events and Residual Risk in a Secondary Prevention Population

    NARCIS (Netherlands)

    Kaasenbrood, Lotte; Boekholdt, S. Matthijs; Van Der Graaf, Yolanda; Ray, Kausik K.; Peters, Ron J G; Kastelein, John J P; Amarenco, Pierre; Larosa, John C.; Cramer, Maarten J M; Westerink, Jan; Kappelle, L. Jaap; De Borst, Gert J.; Visseren, Frank L J

    2016-01-01

    Background: Among patients with clinically manifest vascular disease, the risk of recurrent vascular events is likely to vary. We assessed the distribution of estimated 10-year risk of recurrent vascular events in a secondary prevention population. We also estimated the potential risk reduction and

  17. Estimating the formation age distribution of continental crust by unmixing zircon ages

    Science.gov (United States)

    Korenaga, Jun

    2018-01-01

    Continental crust provides first-order control on Earth's surface environment, enabling the presence of stable dry landmasses surrounded by deep oceans. The evolution of continental crust is important for atmospheric evolution, because continental crust is an essential component of deep carbon cycle and is likely to have played a critical role in the oxygenation of the atmosphere. Geochemical information stored in the mineral zircon, known for its resilience to diagenesis and metamorphism, has been central to ongoing debates on the genesis and evolution of continental crust. However, correction for crustal reworking, which is the most critical step when estimating original formation ages, has been incorrectly formulated, undermining the significance of previous estimates. Here I suggest a simple yet promising approach for reworking correction using the global compilation of zircon data. The present-day distribution of crustal formation age estimated by the new "unmixing" method serves as the lower bound to the true crustal growth, and large deviations from growth models based on mantle depletion imply the important role of crustal recycling through the Earth history.

  18. Estimating changes in urban land and urban population using refined areal interpolation techniques

    Science.gov (United States)

    Zoraghein, Hamidreza; Leyk, Stefan

    2018-05-01

    The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones), within tract boundaries of the 2010 census (target zones), respectively, to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL) as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead, it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
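
    The basic Areal Weighting step can be written in a few lines; the intersection areas and populations below are invented, and the closing comment indicates where TDW or the ancillary refinements (census urban areas, NLCD, GHSL) would enter.

        import numpy as np

        # Invented intersection table: area shared between each source zone
        # (rows, e.g. 1990 tracts) and each target zone (cols, 2010 tracts).
        intersect_area = np.array([[4.0, 1.0, 0.0],
                                   [0.0, 3.0, 3.0],
                                   [2.0, 0.0, 2.0]])
        source_pop = np.array([5000.0, 6000.0, 4000.0])   # urban pop per source zone

        # Areal Weighting (AW): population is assumed uniform within each source
        # zone, so each target receives population in proportion to shared area.
        share = intersect_area / intersect_area.sum(axis=1, keepdims=True)
        target_pop_aw = source_pop @ share
        print("AW estimates per target zone:", target_pop_aw)

        # Spatially refined variants (with NLCD/GHSL urban masks) would zero out
        # non-urban portions of each intersection before computing the shares,
        # and Target Density Weighting would borrow the target-year density
        # pattern; both amount to replacing `intersect_area` with an
        # ancillary-weighted "effective area" matrix.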

  19. Improved estimates of coordinate error for molecular replacement

    International Nuclear Information System (INIS)

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-01-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates

  20. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    Science.gov (United States)

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  1. Irradiation uniformity of spherical targets by multiple uv beams from OMEGA

    International Nuclear Information System (INIS)

    Beich, W.; Dunn, M.; Hutchison, R.

    1984-01-01

    Direct-drive laser fusion demands extremely high levels of irradiation uniformity to ensure uniform compression of spherical targets. The assessment of illumination uniformity of targets irradiated by multiple beams from the OMEGA facility is made with the aid of multiple-beam spherical superposition codes, which take into account ray tracing, absorption, and a detailed knowledge of the intensity distribution of each beam in the target plane. In this report, recent estimates of the irradiation uniformity achieved with 6 and 12 uv beams of OMEGA will be compared with previous measurements in the IR, and predictions will be made for the uv illumination uniformity achievable with 24 beams of OMEGA

  2. Distributions of hit-numbers in single targets

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, J F [Postgraduate Medical School, Hammersmith Hospital, London (United Kingdom)

    1966-07-01

    Very general models can be proposed for relating the surviving proportion of an irradiated population of cells or bacteria to the absorbed dose, but if the number of free parameters is large the model can never be tested experimentally (Zimmer; Zirkle; Tobias). A relatively simple model is therefore proposed here, based on the physical facts of energy deposition in small volumes which are currently under active investigation (Rossi), and on cell-survival experiments over a wide range of LET (e.g. Barendsen et al.; Barendsen). It is not suggested that the model is correct or final, but only that its shortcomings should be demonstrated by comparison with experimental results before more complicated models are worth pursuing. It is basically a multihit model applied first to a single target volume, but also applicable to the situation where only one out of many potential target volumes has to be inactivated to kill the organism. It can be extended to two or more target volumes if necessary. Emphasis is placed upon the amount of energy locally deposited in certain sensitive volumes called 'target volumes'.
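
    For a single target volume requiring n hits, Poisson hit statistics give the survival curve directly; the sketch below evaluates it for a hypothetical hit rate and shows the shoulder that appears once more than one hit is required.

        import numpy as np
        from scipy import stats

        def multihit_survival(dose, hits_required, hit_rate):
            # Single-target, n-hit model: the target survives while it has
            # accumulated fewer than `hits_required` Poisson-distributed hits,
            # with the expected number of hits proportional to dose.
            m = hit_rate * np.asarray(dose, dtype=float)
            return stats.poisson.cdf(hits_required - 1, m)

        doses = np.linspace(0.0, 10.0, 6)        # arbitrary dose units
        for n in (1, 2, 3):                      # hits required to inactivate
            print(f"n = {n}:", np.round(multihit_survival(doses, n, 1.0), 4))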

  3. Statistical Inference for Data Adaptive Target Parameters.

    Science.gov (United States)

    Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J

    2016-05-01

    Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample into V equal-size sub-samples, and use this partitioning to define V splits into an estimation sample (one of the V sub-samples) and a corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) for this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference in problems that are being increasingly addressed by clever, yet ad hoc pattern-finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
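
    The sample-splitting construction can be illustrated schematically. In the sketch below, the 'algorithm' that maps a parameter-generating sample to a target parameter is a deliberately trivial placeholder (pick the most variable column and target its mean), standing in for any data-adaptive procedure.

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(600, 5))
        X[:, 2] += rng.normal(scale=2.0, size=600)   # make one column stand out

        V = 5
        folds = np.array_split(rng.permutation(len(X)), V)

        estimates = []
        for v in range(V):
            est_idx = folds[v]                       # estimation sample
            gen_idx = np.concatenate([folds[u] for u in range(V) if u != v])

            # Data-adaptive step on the parameter-generating sample: choose
            # which column's mean to target (placeholder for any algorithm).
            j = int(np.argmax(X[gen_idx].var(axis=0)))

            # Estimate the chosen target on the held-out estimation sample.
            estimates.append(X[est_idx, j].mean())

        # The sample-split data adaptive target parameter estimate is the
        # average of the V fold-specific estimates.
        print("fold estimates:", np.round(estimates, 3))
        print("data adaptive estimate:", round(float(np.mean(estimates)), 3))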

  4. Research reactor loading pattern optimization using estimation of distribution algorithms

    International Nuclear Information System (INIS)

    Jiang, S.; Ziver, K.; Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H.; Franklin, S. J.; Phillips, H. J.

    2006-01-01

    A new evolutionary search based approach for solving the nuclear reactor loading pattern optimization problems is presented based on the Estimation of Distribution Algorithms. The optimization technique developed is then applied to the maximization of the effective multiplication factor (K_eff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence together with some problem-dependent information based on the 'stand-alone K_eff' with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)

  5. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    Science.gov (United States)

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.

  6. Calibration of the Diameter Distribution Derived from the Area-based Approach with Individual Tree-based Diameter Estimates Using the Airborne Laser Scanning

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Maltamo, M.; Tokola, T.

    2015-12-01

    Diameter distributions of trees are important indicators of current forest stand structure and future dynamics. A new method was proposed in the study to combine the diameter distribution derived from the area-based approach (ABA) and the diameter distribution derived from individual tree detection (ITD) in order to obtain more accurate forest stand attributes. Since dominant trees can be reliably detected and measured by the Lidar data via the ITD, the focus of the study is to retrieve the suppressed trees (trees that were missed by the ITD) from the ABA. Replacement and histogram matching were respectively employed at the plot level to retrieve the suppressed trees. A cut point was detected from the ITD-derived diameter distribution for each sample plot to distinguish dominant trees from suppressed trees. The results showed that calibrated diameter distributions were more accurate in terms of error index and the entire growing stock estimates. Compared with the best performer between the ABA and the ITD, calibrated diameter distributions decreased the relative RMSE of the estimated entire growing stock, saw log and pulpwood fractions by 2.81, 3.05 and 7.73 percentage points, respectively. Calibration improved the estimation of the pulpwood fraction significantly, resulting in a negligible bias of the estimated entire growing stock.
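
    A schematic of the replacement-style calibration is given below; the diameter samples and the cut-point rule are invented placeholders, since the study's detection model and plot data are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(11)

        # Invented stand-ins for one plot: diameters (cm) predicted by the
        # area-based approach (full distribution) and detected by individual
        # tree detection (reliable for dominant trees only).
        aba_diams = rng.weibull(2.0, 200) * 20.0
        itd_diams = rng.weibull(2.0, 60) * 20.0 + 8.0

        # Cut point separating dominant (ITD-detectable) from suppressed trees;
        # a low quantile of the ITD diameters serves as a placeholder for the
        # detection rule used in the study.
        cut = np.quantile(itd_diams, 0.05)

        # Replacement calibration: keep ITD trees above the cut point and
        # retrieve the suppressed part of the distribution from the ABA.
        suppressed = aba_diams[aba_diams < cut]
        calibrated = np.concatenate([suppressed, itd_diams[itd_diams >= cut]])

        print(f"cut point: {cut:.1f} cm; suppressed: {suppressed.size}, "
              f"dominant: {(itd_diams >= cut).sum()}, total: {calibrated.size}")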

  7. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural Networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and a consistency property is considered under a mild set of assumptions. A number of applications…
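
    For the quantile side of the problem, one common device (a generic one, not the kernel-based construction of this paper) is training with the pinball loss; the sketch below fits a linear tau-quantile by subgradient descent on synthetic heteroscedastic data.

        import numpy as np

        def pinball_grad(q_pred, y, tau):
            # Subgradient of the pinball (quantile) loss with respect to the
            # prediction: tau penalizes under-prediction, 1 - tau over-prediction.
            return np.where(y > q_pred, -tau, 1.0 - tau)

        rng = np.random.default_rng(5)
        x = rng.uniform(0.0, 1.0, 1000)
        y = 2.0 * x + rng.normal(scale=0.3 + 0.7 * x)   # heteroscedastic noise

        tau, w, b = 0.9, 0.0, 0.0
        for _ in range(5000):                           # plain subgradient descent
            g = pinball_grad(w * x + b, y, tau)
            w -= 0.01 * np.mean(g * x)
            b -= 0.01 * np.mean(g)

        print(f"fitted {tau}-quantile line: q(x) = {w:.2f}*x + {b:.2f}")
        print(f"empirical coverage: {np.mean(y <= w * x + b):.3f}")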

  8. Estimation of CO2 flux from targeted satellite observations: a Bayesian approach

    International Nuclear Information System (INIS)

    Cox, Graham

    2014-01-01

    We consider the estimation of carbon dioxide flux at the ocean–atmosphere interface, given weighted averages of the mixing ratio in a vertical atmospheric column. In particular we examine the dependence of the posterior covariance on the weighting function used in taking observations, motivated by the fact that this function is instrument-dependent, hence one needs the ability to compare different weights. The estimation problem is considered using a variational data assimilation method, which is shown to admit an equivalent infinite-dimensional Bayesian formulation. The main tool in our investigation is an explicit formula for the posterior covariance in terms of the prior covariance and observation operator. Using this formula, we compare weighting functions concentrated near the surface of the earth with those concentrated near the top of the atmosphere, in terms of the resulting covariance operators. We also consider the problem of observational targeting, and ask if it is possible to reduce the covariance in a prescribed direction through an appropriate choice of weighting function. We find that this is not the case—there exist directions in which one can never gain information, regardless of the choice of weight. (paper)
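
    In the linear-Gaussian setting the explicit formula referred to above takes, in finite dimensions, the familiar form below, with B the prior covariance, R the observation-noise covariance and H the observation operator built from the column-weighting function; the paper's statement is the infinite-dimensional operator analogue.

        % Posterior covariance of a linear Gaussian inverse problem, in both
        % its information (precision) form and its Kalman-type update form.
        \[
          C_{\mathrm{post}}
            = \bigl(B^{-1} + H^{\top} R^{-1} H\bigr)^{-1}
            = B - B H^{\top}\bigl(H B H^{\top} + R\bigr)^{-1} H B .
        \]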

  9. Estimating investor preferences towards portfolio return distribution in investment funds

    Directory of Open Access Journals (Sweden)

    Margareta Gardijan

    2015-03-01

    Full Text Available Recent research in the field of investor preferences has emphasised the need to go beyond simply analyzing the first two moments of a portfolio return distribution, as used in the mean-variance (MV) paradigm. The suggestion is to treat an investor's utility function as an nth-order Taylor approximation. In such terms, the assumption is that investors prefer greater values of odd moments and smaller values of even moments. In order to investigate the preferences of Croatian investment funds, an analysis of the moments of their return distributions is conducted. The sample contains data on the monthly returns of 30 investment funds in Croatia for the period from January 1999 to May 2014. Using theoretical utility functions (DARA, CARA, CRRA), we compare changes in their preferences when higher moments are included. Moreover, we investigate an extension of the CAPM model in order to find out whether including higher moments can better explain the relationship between rewards and the risk premium, and whether we can apply these findings to estimate the preferences of Croatian institutional investors. The results indicate that Croatian institutional investors do not seek compensation for bearing greater market risk.
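
    The moment expansion referred to above takes the following generic form when the utility is expanded around the mean return μ and truncated at fourth order (a standard textbook truncation, not necessarily the exact order used in the study); the first-order term vanishes because E[R − μ] = 0:

      \[
      \mathbb{E}\!\left[U(R)\right] \;\approx\; U(\mu) \;+\; \frac{U''(\mu)}{2!}\,m_2 \;+\; \frac{U'''(\mu)}{3!}\,m_3 \;+\; \frac{U^{(4)}(\mu)}{4!}\,m_4,
      \qquad m_k = \mathbb{E}\!\left[(R-\mu)^k\right].
      \]

    Preference for greater odd and smaller even moments then corresponds to derivatives of alternating sign (U' > 0, U'' < 0, U''' > 0, U⁽⁴⁾ < 0), as holds for the standard DARA/CARA/CRRA families mentioned above.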

  10. Optimal Meter Placement for Distribution Network State Estimation: A Circuit Representation Based MILP Approach

    DEFF Research Database (Denmark)

    Chen, Xiaoshuang; Lin, Jin; Wan, Can

    2016-01-01

    State estimation (SE) in distribution networks is not as accurate as that in transmission networks. Traditionally, distribution networks (DNs) lack direct measurements due to the limitations of investments and the difficulties of maintenance. Therefore, it is critical to improve the accuracy of SE in distribution networks by placing additional physical meters. For state-of-the-art SE models, it is difficult to clearly quantify measurements' influences on SE errors, so the problems of optimal meter placement for reducing SE errors are mostly solved by heuristic or suboptimal algorithms. Under this background, this paper proposes a circuit representation model to represent SE errors. Based on the matrix formulation of the circuit representation model, the problem of optimal meter placement can be transformed to a mixed integer linear programming (MILP) problem via the disjunctive model...

  11. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Directory of Open Access Journals (Sweden)

    E. E. Louvaris

    2017-10-01

    Full Text Available A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60–75 % of the cooking OA (COA) at concentrations around 500 µg m−3 consisted of low-volatility organic compounds (LVOCs), 20–30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol−1 and the effective accommodation coefficient was 0.06–0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  12. Estimation of the volatility distribution of organic aerosol combining thermodenuder and isothermal dilution measurements

    Science.gov (United States)

    Louvaris, Evangelos E.; Karnezi, Eleni; Kostenidou, Evangelia; Kaltsonoudis, Christos; Pandis, Spyros N.

    2017-10-01

    A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60-75 % of the cooking OA (COA) at concentrations around 500 µg m-3 consisted of low-volatility organic compounds (LVOCs), 20-30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol-1 and the effective accommodation coefficient was 0.06-0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.

  13. Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks

    Science.gov (United States)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Rahmani, Amirreza

    2011-01-01

    Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation-flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are proposed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms rely on a local information-exchange network, relaxing the assumptions of existing algorithms. Distributed space systems rely on a signal transmission network among multiple spacecraft for their operation. Control and coordination among multiple spacecraft in a formation are facilitated via a network of relative sensing and inter-spacecraft communications. Guidance, navigation, and control rely on the sensing network, which becomes more complex as more spacecraft are added or as mission requirements become more complex. The observability of the formation state is considered through a set of local observations from a particular node in the formation. Formation observability can be parameterized in terms of the matrices appearing in the formation dynamics and observation matrices. An agreement protocol was used as a mechanism for observing formation states from local measurements. An agreement protocol is essentially an unforced dynamic system whose trajectory is governed by the interconnection geometry and the initial condition of each node, with the goal of reaching a common value of interest. The observability of the interconnected system depends on the geometry of the network, as well as the position of the observer relative to the topology. For the first time, critical GN&C (guidance, navigation, and control estimation) subsystems are synthesized by bringing the contribution of the spacecraft information-exchange network to the forefront of algorithmic analysis and design. The result is a

  14. Estimation of residual stress distribution for pressurizer nozzle of Kori nuclear power plant considering safe end

    Energy Technology Data Exchange (ETDEWEB)

    Song, Tae Kwang; Bae, Hong Yeol; Chun, Yun Bae; Oh, Chang Young; Kim, Yun Jae [Korea University, Seoul (Korea, Republic of); Lee, Kyoung Soo; Park, Chi Yong [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2008-08-15

    In nuclear power plants, a ferritic low alloy steel nozzle is connected with an austenitic stainless steel piping system through an alloy 82/182 butt weld. Accurate estimation of the residual stress in the weldment is important because alloy 82/182 is susceptible to stress corrosion cracking. There are many results which predict the residual stress distribution for an alloy 82/182 weld between nozzle and pipe. However, the nozzle and the piping system are usually connected through a safe end, which has a short length. In this paper, the residual stress distribution for the pressurizer nozzle of the Kori nuclear power plant was predicted using FE analysis that considered the safe end. As a result, the existing residual stress profile was redistributed, and the residual stress at the inner surface in particular was decreased. This means that the safe end should be considered to reduce conservatism when estimating residual stresses in the piping system.

  15. Estimation of soil-soil solution distribution coefficient of radiostrontium using soil properties.

    Science.gov (United States)

    Ishikawa, Nao K; Uchida, Shigeo; Tagami, Keiko

    2009-02-01

    We propose a new approach for estimation of soil-soil solution distribution coefficient (K(d)) of radiostrontium using some selected soil properties. We used 142 Japanese agricultural soil samples (35 Andosol, 25 Cambisol, 77 Fluvisol, and 5 others) for which Sr-K(d) values had been determined by a batch sorption test and listed in our database. Spearman's rank correlation test was carried out to investigate correlations between Sr-K(d) values and soil properties. Electrical conductivity and water soluble Ca had good correlations with Sr-K(d) values for all soil groups. Then, we found a high correlation between the ratio of exchangeable Ca to Ca concentration in water soluble fraction and Sr-K(d) values with correlation coefficient R=0.72. This pointed us toward a relatively easy way to estimate Sr-K(d) values.
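
    The screening step described above, rank-correlating candidate soil properties against measured Kd values, can be reproduced in a few lines; the arrays below are illustrative placeholders, not the paper's 142-sample database.

      import numpy as np
      from scipy.stats import spearmanr

      kd = np.array([120.0, 85.0, 210.0, 60.0, 150.0, 95.0])   # Sr-Kd values (L/kg)
      ca_ratio = np.array([3.1, 2.2, 5.0, 1.6, 3.9, 2.5])      # exch. Ca / water-soluble Ca

      # Spearman's rank correlation, as used for the property screening.
      rho, pval = spearmanr(ca_ratio, kd)
      print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")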

  16. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data of each task have been analyzed and a detailed analysis report produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
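
    The paper's TPR method is not specified in detail here, but one minimal reading — fit two line segments to a task's progress samples, choose the breakpoint that minimizes total squared error, then extrapolate the second phase to completion — can be sketched as follows (the data and the breakpoint search are illustrative assumptions, not the paper's algorithm):

      import numpy as np

      # Hypothetical progress samples of a running task (fraction done vs. seconds).
      t = np.arange(1, 21, dtype=float)
      progress = np.concatenate([0.02 * t[:10], 0.2 + 0.05 * (t[10:] - 10)])

      def two_phase_fit(t, y):
          """Fit two line segments, splitting at the breakpoint with least total SSE."""
          best = None
          for k in range(3, len(t) - 2):        # candidate breakpoints
              c1 = np.polyfit(t[:k], y[:k], 1)
              c2 = np.polyfit(t[k:], y[k:], 1)
              sse = (np.sum((np.polyval(c1, t[:k]) - y[:k]) ** 2)
                     + np.sum((np.polyval(c2, t[k:]) - y[k:]) ** 2))
              if best is None or sse < best[0]:
                  best = (sse, k, c1, c2)
          return best

      sse, k, c1, c2 = two_phase_fit(t, progress)
      # Extrapolate the second phase to progress = 1.0 to predict the finish time.
      finish = (1.0 - c2[1]) / c2[0]
      print(f"breakpoint at t={t[k]:.0f}s, predicted finish ~ {finish:.1f}s")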

  17. A hierarchical model for estimating the spatial distribution and abundance of animals detected by continuous-time recorders.

    Directory of Open Access Journals (Sweden)

    Robert M Dorazio

    Full Text Available Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead, most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling, and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species distribution model, even in

  18. Estimating the Grain Size Distribution of Mars based on Fragmentation Theory and Observations

    Science.gov (United States)

    Charalambous, C.; Pike, W. T.; Golombek, M.

    2017-12-01

    We present here a fundamental extension to fragmentation theory [1] which yields estimates of the particle size distribution of a planetary surface. The model is valid within the size regimes of surfaces whose genesis is best reflected by the evolution of fragmentation phenomena governed either by the process of meteoritic impacts or by a mixture with aeolian transportation at the smaller sizes. The key parameter of the model, the regolith maturity index, can be estimated as an average of that observed at a local site using cratering size-frequency measurements, orbital and surface image-detected rock counts, and observations of sub-mm particles at landing sites. Validated against ground truth from previous landed missions, this approach has been used at the InSight landing ellipse on Mars to extrapolate rock size distributions in HiRISE images down to 5 cm rock size, both to determine the landing safety risk and the subsequent probability of obstruction of the deployed heat flow mole down to 3-5 m depth [2]. Here we focus on a continuous extrapolation down to 600 µm coarse sand particles, the upper size limit that may be present through aeolian processes [3]. The parameters of the model are first derived for the fragmentation process that has produced the observable rocks via meteorite impacts over time, so extrapolation into a size regime that is affected by aeolian processes has limited justification without further refinement. Incorporating thermal inertia estimates, size distributions observed by the Spirit and Opportunity Microscopic Imager [4], and Atomic Force and Optical Microscopy from the Phoenix Lander [5], the model's parameters are quantitatively refined further, in combination with synthesis methods, to allow transition into the aeolian transportation size regime. In addition, because the model expresses fractional mass abundance, the percentage of material by volume or mass that resides

  19. Assessing different parameters estimation methods of Weibull distribution to compute wind power density

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi

    2016-01-01

    Highlights: • The effectiveness of six numerical methods is evaluated for determining wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), the empirical method of Justus (EMJ), the empirical method of Lysen (EML), the energy pattern factor method (EPF), the maximum likelihood method (ML) and the modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML show very favorable performance, while the GP method shows weak ability for all stations. However, the more effective method is not the same across stations owing to differences in the wind characteristics.
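
    As a concrete instance of one of the six methods, the empirical method of Justus (EMJ) estimates k from the coefficient of variation and c from the mean via the gamma function, after which the Weibull-based wind power density follows. The speed sample below is made up for illustration and the air density is an assumed standard value.

      import numpy as np
      from scipy.special import gamma

      # Hypothetical daily-mean wind speeds (m/s); stands in for station data.
      v = np.array([4.2, 6.8, 5.1, 7.9, 3.6, 5.5, 6.1, 4.8, 7.2, 5.9])

      # Empirical method of Justus (EMJ): k from the coefficient of variation,
      # c from the mean and the gamma function.
      k = (v.std(ddof=1) / v.mean()) ** -1.086
      c = v.mean() / gamma(1.0 + 1.0 / k)

      # Wind power density (W/m^2) from the fitted Weibull parameters.
      rho = 1.225                      # air density, kg/m^3 (assumed)
      wpd = 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)
      print(f"k = {k:.2f}, c = {c:.2f} m/s, WPD = {wpd:.1f} W/m^2")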

  20. A Study on Grid-Square Statistics Based Estimation of Regional Electricity Demand and Regional Potential Capacity of Distributed Generators

    Science.gov (United States)

    Kato, Takeyoshi; Sugimoto, Hiroyuki; Suzuoki, Yasuo

    We established a procedure for estimating regional electricity demand and the regional potential capacity of distributed generators (DGs) by using a grid-square statistics data set. A photovoltaic power system (PV system) for residential use and a co-generation system (CGS) for both residential and commercial use were taken into account. As an example, results for Aichi prefecture are presented in this paper. Statistical data on the number of households by family type and the number of employees by business category for about 4000 grid squares of 1 km × 1 km area were used to estimate the floor space and the electricity demand distribution. The rooftop area available for installing PV systems was also estimated with the grid-square statistics data set. Considering the relation between the capacity of an existing CGS and a scale index of the building where the CGS is installed, the potential capacity of CGS was estimated for three business categories, i.e., hotels, hospitals, and stores. In some regions, the potential capacity of PV systems was estimated to be about 10,000 kW/km2, which corresponds to the density of existing areas with intensive installation of PV systems. Finally, we discussed the ratio of the regional potential capacity of DGs to the regional maximum electricity demand for deducing the appropriate capacity of DGs in a model of a future electricity distribution system.

  1. Distribution of separated energy and injected charge at normal falling of fast electron beam on target

    CERN Document Server

    Smolyar, V A; Eremin, V V

    2002-01-01

    In terms of a kinetic-equation diffusion model for a beam of electrons falling on a target along the normal, analytical formulae are derived for the distributions of separated energy and injected charge. No empirical adjustable parameters are introduced into the theory. The calculated distributions of separated energy for a plane directed electron source within an infinite medium for C, Al, Sn and Pb are in good agreement with the Spencer data, which are derived from the accurate solution of the Bethe equation, the same equation from which the diffusion model is assumed.

  2. Distribution of separated energy and injected charge at normal falling of fast electron beam on target

    International Nuclear Information System (INIS)

    Smolyar, V.A.; Eremin, A.V.; Eremin, V.V.

    2002-01-01

    In terms of a kinetic-equation diffusion model for a beam of electrons falling on a target along the normal, analytical formulae are derived for the distributions of separated energy and injected charge. No empirical adjustable parameters are introduced into the theory. The calculated distributions of separated energy for a plane directed electron source within an infinite medium for C, Al, Sn and Pb are in good agreement with the Spencer data, which are derived from the accurate solution of the Bethe equation, the same equation from which the diffusion model is assumed.

  3. Accuracy in estimation of timber assortments and stem distribution - A comparison of airborne and terrestrial laser scanning techniques

    Science.gov (United States)

    Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2014-11-01

    Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales, and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distributions and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. Differences in the bucking method used also caused some large errors. In addition, tree quality factors strongly affected the bucking accuracy, especially for pulpwood volume.

  4. Target selection and mass estimation for manned NEO exploration using a baseline mission design

    Science.gov (United States)

    Boden, Ralf C.; Hein, Andreas M.; Kawaguchi, Junichiro

    2015-06-01

    In recent years Near-Earth Objects (NEOs) have received an increased amount of interest as targets for human exploration. NEOs offer scientifically interesting targets and at the same time function as a stepping stone toward future Mars missions. The aim of this research is to identify promising targets from the large number of known NEOs that qualify for a manned sample-return mission with a maximum duration of one year. By developing a baseline mission design and a mass estimation model, mission opportunities are evaluated based on on-orbit mass requirements, safety considerations, and the properties of the potential targets. A selection of promising NEOs is presented and the effects of mission requirements and restrictions are discussed. Regarding safety aspects, the use of free-return trajectories provides the lowest on-orbit mass when compared to an alternative design that uses system redundancies to ensure return of the spacecraft to Earth. It is found that, although a number of targets are accessible within the analysed time frame, no NEO offers both easy access and a high incentive for its exploration. Under the discussed aspects, a first human exploration mission going beyond the vicinity of Earth will require a trade-off between targets that provide easy access and those that are of scientific interest. This lack of optimal mission opportunities can be seen in the small number of only 4 NEOs that meet all requirements for a sample-return mission and remain below an on-orbit mass of 500 metric tons (mT). All of them require a mass between 315 and 492 mT. Smaller asteroids that are better accessible are even less ideal, requiring an on-orbit mass that exceeds the launch capability of future heavy-lift vehicles (HLVs) such as SLS by at least 30 mT. These mass requirements show that additional efforts are necessary to increase the number of available targets and reduce on-orbit mass requirements through advanced mission architectures. The need for on

  5. Weighted Moments Estimators of the Parameters for the Extreme Value Distribution Based on the Multiply Type II Censored Sample

    Directory of Open Access Journals (Sweden)

    Jong-Wuu Wu

    2013-01-01

    Full Text Available We propose weighted moments estimators (WMEs) of the location and scale parameters for the extreme value distribution based on the multiply type II censored sample. Simulated mean squared errors (MSEs) of the best linear unbiased estimator (BLUE) and exact MSEs of the WMEs are compared to study the behavior of the different estimation methods. The results show the best estimator among the WMEs and the BLUE under different combinations of censoring schemes.

  6. The capability of professional- and lay-rescuers to estimate the chest compression-depth target: a short, randomized experiment.

    Science.gov (United States)

    van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip

    2015-04-01

    In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and the European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are rarely achieved. This study aims to investigate the effects of differing target-depth instructions on the compression depth performance of professional and lay-rescuers. 110 professional-rescuers and 110 lay-rescuers were randomized (1:1, 4 groups) to estimate the AHA-R or ERC-R depth on a paper sheet (given horizontal axis) using a pencil and to perform chest compressions according to AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional-rescuers estimated the distance correctly according to AHA-R in 19/55 (34.5%) and to ERC-R in 20/55 (36.4%) cases (p=0.84). Professional-rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 36/55 (65.4%) cases (p=0.97). Lay-rescuers estimated the distance correctly according to AHA-R in 18/55 (32.7%) and to ERC-R in 20/55 (36.4%) cases (p=0.59). Lay-rescuers achieved correct compression depth according to AHA-R in 39/55 (70.9%) and to ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay-rescuers have severe difficulties in correctly estimating distance on a sheet of paper. Professional-rescuers are able to meet the AHA-R and ERC-R targets equally well. In lay-rescuers the AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to perform chest compressions appropriately. For teaching lay-rescuers, the AHA-R with no upper limit of compression depth might be preferable. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Research reactor loading pattern optimization using estimation of distribution algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, S. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Ziver, K. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); AMCG Group, RM Consultants, Abingdon (United Kingdom); Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Franklin, S. J.; Phillips, H. J. [Imperial College, Reactor Centre, Silwood Park, Buckhurst Road, Ascot, Berkshire, SL5 7TE (United Kingdom)

    2006-07-01

    A new evolutionary-search-based approach for solving nuclear reactor loading pattern optimization problems is presented, based on Estimation of Distribution Algorithms. The optimization technique developed is then applied to the maximization of the effective multiplication factor (K{sub eff}) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence, together with some problem-dependent information based on 'stand-alone' K{sub eff} with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)

  8. A lightweight target-tracking scheme using wireless sensor network

    International Nuclear Information System (INIS)

    Kuang, Xing-hong; Shao, Hui-he; Feng, Rui

    2008-01-01

    This paper describes a lightweight target-tracking scheme using a wireless sensor network, where randomly distributed sensor nodes take responsibility for tracking the moving target based on acoustic sensing signals. At every localization interval, a backoff timer algorithm is performed to elect the leader node and determine the transmission order of the localization nodes. An adaptive active-region-size algorithm based on the node density is proposed to select the optimal nodes taking part in localization. An improved particle filter algorithm performed by the leader node estimates the target state based on the selected nodes' acoustic energy measurements. Some refinements, such as an optimal linear combination algorithm, a residual resampling algorithm, and a Markov chain Monte Carlo method, are introduced in the scheme to improve the tracking performance. Simulation results validate the efficiency of the proposed tracking scheme
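
    Among the refinements listed, residual resampling is easy to show in isolation. The sketch below is a generic implementation of that resampling step (not the paper's code): the integer parts of N·w are kept deterministically and the remainder is drawn multinomially from the residual weights.

      import numpy as np

      def residual_resample(weights, rng):
          """Residual resampling: keep floor(N*w) copies deterministically,
          then draw the remainder multinomially from the residual weights."""
          n = len(weights)
          counts = np.floor(n * weights).astype(int)        # deterministic part
          residual = n * weights - counts
          n_rest = n - counts.sum()
          if n_rest > 0:
              residual /= residual.sum()
              extra = rng.choice(n, size=n_rest, p=residual)
              counts += np.bincount(extra, minlength=n)
          return np.repeat(np.arange(n), counts)            # resampled indices

      rng = np.random.default_rng(0)
      w = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
      print(residual_resample(w, rng))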

  9. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loids Pollution Based on Kriging Interpolation and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Zhenyi Jia

    2017-12-01

    Full Text Available Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
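
    The geo-accumulation index used as the pollution index above is commonly defined as Igeo = log2(Cn / (1.5·Bn)), with Cn the measured concentration and Bn the geochemical background; the values below are illustrative, not the Kunshan data.

      import numpy as np

      # Geo-accumulation index, Igeo = log2(Cn / (1.5 * Bn)); the factor 1.5
      # compensates for natural background variability (standard definition).
      c_measured = np.array([12.4, 8.1, 25.3])    # e.g. As in mg/kg (made up)
      background = 10.0                            # assumed background level

      igeo = np.log2(c_measured / (1.5 * background))
      print(np.round(igeo, 2))                     # > 0 indicates enrichment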

  10. Estimating the changes in the distribution of energy efficiency in the U.S. automobile assembly industry

    International Nuclear Information System (INIS)

    Boyd, Gale A.

    2014-01-01

    This paper describes the EPA's voluntary ENERGY STAR program and the results of the automobile manufacturing industry's efforts to advance energy management as measured by the updated ENERGY STAR Energy Performance Indicator (EPI). A stochastic single-factor input frontier estimation using the gamma error distribution is applied to separately estimate the distribution of the electricity and fossil fuel efficiency of assembly plants using data from 2003 to 2005 and then compared to model results from a prior analysis conducted for the 1997–2000 time period. This comparison provides an assessment of how the industry has changed over time. The frontier analysis shows a modest improvement (reduction) in “best practice” for electricity use and a larger one for fossil fuels. This is accompanied by a large reduction in the variance of fossil fuel efficiency distribution. The results provide evidence of a shift in the frontier, in addition to some “catching up” of poor performing plants over time. - Highlights: • A non-public dataset of U.S. auto manufacturing plants is compiled. • A stochastic frontier with a gamma distribution is applied to plant level data. • Electricity and fuel use are modeled separately. • Comparison to prior analysis reveals a shift in the frontier and “catching up”. • Results are used by ENERGY STAR to award energy efficiency plant certifications

  11. Spatial Distribution of Estimated Wind-Power Royalties in West Texas

    Directory of Open Access Journals (Sweden)

    Christian Brannstrom

    2015-12-01

    Full Text Available Wind-power development in the U.S. occurs primarily on private land, producing royalties for landowners through private contracts with wind-farm operators. Texas, the U.S. leader in wind-power production with well-documented support for wind power, has virtually all of its ~12 GW of wind capacity sited on private lands. Determining the spatial distribution of royalty payments from wind energy is a crucial first step to understanding how renewable power may alter land-based livelihoods of some landowners and, as a result, possibly encourage land-use changes. We located ~1700 wind turbines (~2.7 GW) on 241 landholdings in Nolan and Taylor counties, Texas, a major wind-development region. We estimated total royalties to be ~$11.5 million per year, with a mean royalty per landowner of $47,879 per year, but with significant differences among quintiles and between two sub-regions. The unequal distribution of royalties results from land-tenure patterns established before wind-power development because of a “property advantage,” defined as the pre-existing land-tenure patterns that benefit the fraction of rural landowners who receive wind turbines. A “royalty paradox” describes the observation that royalties flow to a small fraction of landowners even though support for wind power exceeds 70 percent.

  12. Distortions in Distributions of Impact Estimates in Multi-Site Trials: The Central Limit Theorem Is Not Your Friend

    Science.gov (United States)

    May, Henry

    2014-01-01

    Interest in variation in program impacts--How big is it? What might explain it?--has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random or fixed effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…

  13. Single snapshot DOA estimation

    Science.gov (United States)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
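
    For the single-source case, the deterministic maximum likelihood estimator from a single snapshot reduces to maximizing the beamformer output power over an angle grid. The sketch below is a generic illustration of that reduction, assuming a half-wavelength uniform linear array and a made-up noise level, not a full multi-target DML implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      M = 8                                        # ULA elements, half-wavelength spacing
      theta_true = 12.0                            # degrees
      a = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(th)))

      # A single snapshot: one target plus complex noise.
      x = a(theta_true) + 0.1 * (rng.normal(size=M) + 1j * rng.normal(size=M))

      # Single-source DML from one snapshot: maximize |a(theta)^H x|^2 on a grid.
      grid = np.linspace(-90.0, 90.0, 3601)
      power = [np.abs(a(th).conj() @ x) ** 2 for th in grid]
      print("estimated DOA:", grid[int(np.argmax(power))], "deg")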

  14. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Full Text Available Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed by using Fisher's linear discriminant analysis, the support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide a total classification accuracy of 86.67% and an area (Az) of 0.9096 under the receiver operating characteristic curve, which were superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). Such results demonstrated the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.
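
    A minimal sketch of the kernel-density-plus-maximal-posterior idea: generic Gaussian KDEs stand in for the paper's bivariate feature densities, the two-dimensional features are made up, and equal class priors are assumed.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)

      # Illustrative 2-D features for "normal" and "abnormal" signal groups.
      normal = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
      abnormal = rng.normal([1.2, 1.0], 0.7, size=(100, 2))

      # Kernel density estimates of each class-conditional distribution.
      kde_n = gaussian_kde(normal.T)
      kde_a = gaussian_kde(abnormal.T)
      prior_n = prior_a = 0.5                      # assumed equal class priors

      def classify(x):
          """Maximal posterior probability rule: pick the class whose
          prior-weighted density is larger at the point x."""
          post_n = prior_n * kde_n(x)[0]
          post_a = prior_a * kde_a(x)[0]
          return "normal" if post_n >= post_a else "abnormal"

      print(classify(np.array([[0.2], [0.1]])))    # column vector: one 2-D point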

  15. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    Science.gov (United States)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

    With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a certain number of experimental data and expert judgments, we have divided reliability estimation based on a distribution hypothesis into a cognition process and a reliability calculation. Consequently, as an illustration of this modification, we have taken information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and completed the reliability estimation for the open function of a cabin door affected by imprecise judgment corresponding to the distribution hypothesis.

  16. Frequency Diversity Array for DOA Estimation

    Directory of Open Access Journals (Sweden)

    NAUMAN ANWAR BAIG

    2017-10-01

    Full Text Available The localization of targets is presented in this article. DOA (Direction of Arrival) is an important parameter to be determined by radar. The MLE (Maximum Likelihood Estimator) has been widely used to accurately and efficiently estimate the DOAs of multiple targets. Targets at different ranges result in a variation in the amplitude of the received signals, so an MLE estimator has to operate at all ranges. For accurate DOA results, the complex amplitudes of multiple targets should not be much different, and prior information on the Doppler and the number of targets is required. In this paper, an approach is proposed which uses the classical 2D algorithm to estimate range, Doppler and the number of targets, and then an FDA (Frequency Diversity Array) is used to focus power in a particular range. As a result, the MLE can get data from a particular range cell where all targets have almost the same amplitude, and thus the MLE can accurately estimate the DOAs of multiple targets. Simulations and results have confirmed the effectiveness of the proposed approach.

  17. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    OpenAIRE

    Messali Zoubeida; Soltani Faouzi

    2007-01-01

    This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating in positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating ...

  18. Bayes allocation of the sample for estimation of the mean when each stratum has a Poisson distribution

    International Nuclear Information System (INIS)

    Wright, T.

    1983-01-01

    Consider a stratified population with L strata, so that a Poisson random variable is associated with each stratum. The parameter associated with the hth stratum is θ_h, h = 1, 2, ..., L. Let ω_h be the known proportion of the population in the hth stratum, h = 1, 2, ..., L. The authors want to estimate the parameter θ = Σ_{h=1}^{L} ω_h θ_h. We assume that prior information is available on θ_h and that it can be expressed in terms of a gamma distribution with parameters α_h and β_h, h = 1, 2, ..., L. We also assume that the prior distributions are independent. Using a squared error loss function, a Bayes allocation of total sample size with a cost constraint is given. The Bayes estimate using the Bayes allocation is shown to have an adjusted mean square error which is strictly less than the adjusted mean square error of the classical estimate using the classical allocation
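
    Behind the Bayes estimate above sits the standard gamma–Poisson conjugate update; assuming the shape–rate parameterization of the prior (the abstract does not state which parameterization is used), a stratum contributing n_h Poisson observations with total count S_h gives

      \[
      \theta_h \mid \text{data} \;\sim\; \mathrm{Gamma}\!\left(\alpha_h + S_h,\; \beta_h + n_h\right),
      \qquad
      \hat{\theta} \;=\; \sum_{h=1}^{L} \omega_h\,\frac{\alpha_h + S_h}{\beta_h + n_h},
      \]

    the posterior mean being the Bayes estimate of each θ_h under squared error loss.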

  19. Multi-UAV Doppler Information Fusion for Target Tracking Based on Distributed High Degrees Information Filters

    Directory of Open Access Journals (Sweden)

    Hamza Benzerrouk

    2018-03-01

    Full Text Available Multi-Unmanned Aerial Vehicle (UAV) Doppler-based target tracking has not been widely investigated, specifically when using modern nonlinear information filters. A high-degree Gauss–Hermite information filter, as well as a seventh-degree cubature information filter (CIF), is developed to improve the fifth-degree and third-degree CIFs proposed in the most recent related literature. These algorithms are applied to maneuvering target tracking based on radar Doppler range/range-rate signals. To achieve this purpose, different measurement models such as range-only, range-rate, and bearing-only tracking are used in the simulations. In this paper, the mobile sensor target tracking problem is addressed and solved by a higher-degree class of quadrature information filters (HQIFs). A centralized fusion architecture based on distributed information filtering is proposed, and yielded excellent results. Three high-dynamic UAVs are simulated with synchronized Doppler measurements broadcast in parallel channels to the control center for global information fusion. Interesting results are obtained, with the superiority of certain classes of higher-degree quadrature information filters.

  20. Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications

    KAUST Repository

    Masood, Mudassir

    2015-05-01

    Compressed sensing has been a very active area of research and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make assumptions that are not suitable for all real-world problems. Recently, focus has shifted to Bayesian-based approaches that are able to perform sparse signal recovery at much lower complexity while invoking constraints and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution, which is typically considered to be Gaussian, as it makes many signal processing problems mathematically tractable. Seemingly attractive, this assumption necessitates the estimation of the associated parameters, which could be hard if not impossible. In this thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals but at the same time is agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to signal statistics and utilize a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. In the thesis, we propose several algorithms based on this framework which utilize the structure present in signals for improved recovery. In addition to the algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure of the sparsity recovery problem are proposed. These include several algorithms for a) block-sparse signal estimation, b) joint reconstruction of several common support sparse signals, and c

  1. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
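
    A generative sketch of the model as described: window variances drawn from an inverse gamma distribution, zero-mean Gaussian samples given each variance, then rectification and smoothing as in the approximate estimation procedure. All parameter values are assumptions for illustration, not fitted values from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      alpha, beta = 3.0, 2.0          # inverse-gamma shape and scale (assumed)
      n_windows, win_len = 200, 64

      # Inverse-gamma draws via the reciprocal of gamma draws:
      # if X ~ Gamma(alpha, rate=beta), then 1/X ~ InvGamma(alpha, beta).
      variances = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n_windows)

      # Zero-mean Gaussian EMG samples, one variance per window.
      emg = rng.normal(0.0, np.sqrt(np.repeat(variances, win_len)))

      # Rectify and smooth, mirroring the approximate estimation step.
      rectified = np.abs(emg)
      smoothed = np.convolve(rectified, np.ones(win_len) / win_len, mode='same')
      print(emg.shape, round(float(smoothed.mean()), 3))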

  2. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    Science.gov (United States)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censored sample. In the system, there are N subsystems consisting of M statistically independent distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed by using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
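
    The iterative procedure referenced above is of the EM fixed-point type; a generic two-component sketch (not the paper's exact update or data) looks like this:

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 1.5, 200)])

      # Initial guesses for weights, means, and standard deviations.
      w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
      for _ in range(100):
          # E-step: responsibilities of each component for each point.
          dens = (w / (sig * np.sqrt(2 * np.pi))
                  * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2))
          r = dens / dens.sum(axis=1, keepdims=True)
          # M-step: re-estimate weights, means, and standard deviations.
          nk = r.sum(axis=0)
          w = nk / len(x)
          mu = (r * x[:, None]).sum(axis=0) / nk
          sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
      print(w.round(2), mu.round(2), sig.round(2))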

  4. Distributed Cerebral Blood Flow estimation using a spatiotemporal hemodynamic response model and a Kalman-like Filter approach

    KAUST Repository

    Belkhatir, Zehor

    2015-11-23

    This paper discusses the estimation of distributed Cerebral Blood Flow (CBF) using a spatiotemporal traveling-wave model. We consider a damped wave partial differential equation that describes a physiological relationship between the blood mass density and the CBF. The spatiotemporal model is reduced to a finite-dimensional system using a cubic B-spline continuous Galerkin method. A Kalman Filter with Unknown Inputs without Direct Feedthrough (KF-UI-WDF) is applied to the obtained reduced differential model to estimate the source term, which is the CBF scaled by a factor. Numerical results showing the performance of the adopted estimator are provided.

  5. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x,v)-dependent noises and fading channels.

    Science.gov (United States)

    Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E

    2016-12-01

    This paper deals with the H ∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H ∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Angular distributions of absorbed dose of Bremsstrahlung and secondary electrons induced by 18-, 28- and 38-MeV electron beams in thick targets.

    Science.gov (United States)

    Takada, Masashi; Kosako, Kazuaki; Oishi, Koji; Nakamura, Takashi; Sato, Kouichi; Kamiyama, Takashi; Kiyanagi, Yoshiaki

    2013-03-01

    Angular distributions of the absorbed dose of Bremsstrahlung photons and secondary electrons, over a wide range of emission angles from 0 to 135°, were experimentally obtained using an ion chamber with a 0.6 cm³ air volume covered with or without a build-up cap. The Bremsstrahlung photons and electrons were produced by 18-, 28- and 38-MeV electron beams bombarding tungsten, copper, aluminium and carbon targets. The absorbed doses were also calculated from simulated photon and electron energy spectra by multiplying by the simulated response functions of the ion chambers, obtained with the MCNPX code. The calculated-to-experimental (C/E) dose ratios range from 0.70 to 1.57 for the high-Z targets of W and Cu from 15 to 135°, and from 0.6 to 1.4 at 0°; however, the C/E values for the low-Z targets of Al and C range from 0.5 to 1.8 from 0 to 135°. The angular distributions at forward angles decrease with increasing angle; on the other hand, the angular distributions at backward angles depend on the target species. The dependences of the absorbed doses on electron energy and target thickness were compared between the measured and simulated results. The attenuation profiles of the absorbed doses of Bremsstrahlung beams at 0, 30 and 135° were also measured.

  7. Measurement of neutron spectra generated from bombardment of 4 to 24 MeV protons on a thick 9Be target and estimation of neutron yields

    International Nuclear Information System (INIS)

    Paul, Sabyasachi; Sahoo, G. S.; Tripathy, S. P.; Sunil, C.; Bandyopadhyay, T.; Sharma, S. C.; Ramjilal,; Ninawe, N. G.; Gupta, A. K.

    2014-01-01

    A systematic study on the measurement of neutron spectra emitted from the interaction of protons of various energies with a thick beryllium target has been carried out. The measurements were made in the forward direction (at 0° with respect to the direction of the protons) using CR-39 detectors. The doses were estimated using the in-house image analysis program autoTRAK-n, which works on the principle of luminosity variation in and around the track boundaries. A total of six different proton energies, from 4 MeV to 24 MeV in steps of 4 MeV, were chosen for the study of the neutron yields and the estimation of doses. Nearly 92% of the recoil tracks developed after chemical etching were circular in nature, but the size distributions of the recoil tracks were not found to be linearly dependent on the projectile energy. The neutron yield and dose values were found to increase linearly with increasing projectile energy. The response of the CR-39 detector was also investigated at different beam currents at two different proton energies. A linear increase of neutron yield with beam current was observed

  8. Multiple Target Laser Designator (MTLD)

    Science.gov (United States)

    2007-03-01

    Fragmentary record excerpt: optimized liquid crystal scanning element; optimization of the nonimaging predictive algorithm for target ranging, tracking, and position estimation; optimization of the nonimaging holographic antenna for target tracking and position estimation (Task 6).

  9. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network.

    Science.gov (United States)

    Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao

    2017-12-26

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significantly skewed distributions, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
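
    As a rough illustration of this kind of comparison, the sketch below (not from the paper; the field and all numbers are synthetic) pits a kriging-like Gaussian-process interpolator against a small back-propagation network on a made-up geo-accumulation surface, scoring both by cross-validated MSE as the study does.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 126 synthetic topsoil samples: (x, y) coordinates -> geo-accumulation index
xy = rng.uniform(0, 10, (126, 2))
igeo = np.sin(xy[:, 0]) + 0.5 * np.cos(1.5 * xy[:, 1]) + rng.normal(0, 0.2, 126)

models = {
    "kriging-like GP": GaussianProcessRegressor(
        kernel=RBF(length_scale=2.0) + WhiteKernel(noise_level=0.05),
        normalize_y=True),
    "BP neural network": MLPRegressor(hidden_layer_sizes=(20,),
                                      max_iter=5000, random_state=0),
}
for name, model in models.items():
    mse = -cross_val_score(model, xy, igeo, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: cross-validated MSE = {mse:.4f}")
```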

  10. Optimal random search for a single hidden target.

    Science.gov (United States)

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
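
    A minimal numerical check of the square-root rule, assuming a one-dimensional N(0, 1) target and a small capture radius (all values hypothetical): the expected number of search draws is smallest when the search density is Gaussian with standard deviation sqrt(2), i.e. proportional to the square root of the target density.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01                                   # capture radius: how close counts as "found"
targets = rng.normal(0.0, 1.0, 200_000)      # hidden target ~ N(0, 1)

def expected_trials(search_std):
    # A search point drawn from q = N(0, search_std^2) lands within eps of a
    # target at t with probability ~ 2*eps*q(t), so the expected number of
    # draws given t is 1/(2*eps*q(t)); average that over sampled targets.
    q = np.exp(-targets**2 / (2 * search_std**2)) / (search_std * np.sqrt(2 * np.pi))
    return np.mean(1.0 / (2 * eps * q))

for s in (1.2, np.sqrt(2.0), 2.0):
    print(f"search std {s:.3f}: expected trials ~ {expected_trials(s):,.0f}")
# The minimum sits at std = sqrt(2): the square root of the N(0, 1) density is
# itself Gaussian with std sqrt(2). (For search std at or below 1, the
# expectation is not even finite in this idealized small-eps limit.)
```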

  11. Estimating the implicit component of visuomotor rotation learning by constraining movement preparation time.

    Science.gov (United States)

    Leow, Li-Ann; Gunn, Reece; Marinovic, Welber; Carroll, Timothy J

    2017-08-01

    When sensory feedback is perturbed, accurate movement is restored by a combination of implicit processes and deliberate reaiming to strategically compensate for errors. Here, we directly compare two methods used previously to dissociate implicit from explicit learning on a trial-by-trial basis: (1) asking participants to report the direction in which they aim their movements, and contrasting this with the directions of the target and the movement that they actually produce, and (2) manipulating movement preparation time. By instructing participants to reaim without a sensory perturbation, we show that reaiming is possible even with the shortest possible preparation times, particularly when targets are narrowly distributed. Nonetheless, reaiming is effortful and comes at the cost of increased variability, so we tested whether constraining preparation time is sufficient to suppress strategic reaiming during adaptation to visuomotor rotation with a broad target distribution. The rate and extent of error reduction under preparation time constraints were similar to estimates of implicit learning obtained from self-report without time pressure, suggesting that participants chose not to apply a reaiming strategy to correct visual errors under time pressure. Surprisingly, participants who reported aiming directions showed less implicit learning according to an alternative measure, obtained during trials performed without visual feedback. This suggests that the process of reporting can affect the extent or persistence of implicit learning. The data extend existing evidence that restricting preparation time can suppress explicit reaiming and provide an estimate of implicit visuomotor rotation learning that does not require participants to report their aiming directions. NEW & NOTEWORTHY During sensorimotor adaptation, implicit error-driven learning can be isolated from explicit strategy-driven reaiming by subtracting self-reported aiming directions from movement directions, or

  12. A new technique for testing the distribution of knowledge and estimating sampling sufficiency in ethnobiology studies.

    Science.gov (United States)

    Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino

    2012-03-15

    We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and to estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as demonstrating the importance of these individuals' communal knowledge of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.

  13. Charge state distributions of swift heavy ions behind various solid targets (36 ≤ Zp ≤ 92, 18 MeV/u ≤ E ≤ 44 MeV/u)

    International Nuclear Information System (INIS)

    Leon, A.; Melki, S.; Lisfi, D.; Grandin, J.P.; Jardin, P.; Suraud, M.G.; Cassimi, A.

    1998-01-01

    Noting the lack of and the increasing need for information concerning heavy ion stripping in the intermediate velocity regime, the authors have studied a large number of ion-target systems experimentally. They present experimental charge state distributions obtained at the GANIL accelerator for several projectiles (36 ≤ Zp ≤ 92) with energies ranging from 18 MeV/u to 44 MeV/u, emerging from various target foils (4 ≤ Zt ≤ 79) of natural isotopic composition. The target thicknesses (from 1 μg/cm² up to several mg/cm²) are chosen to cover the pre- and post-charge-state equilibrium regimes. Charge state fractions, mean charge state, charge distribution width, and emerging ion energy are tabulated for each of the 107 projectile-target element-target thickness combinations. They also present an improvement of the semi-empirical formulae proposed by Baron et al. to predict the mean charge states and the distribution widths at equilibrium. These formulae are compared with the available experimental data.

  14. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.

  15. Age difference in deposition of plutonium in organs of rats and the estimation of distribution in humans

    Energy Technology Data Exchange (ETDEWEB)

    Fukuda, Satoshi; Iida, Haruzo [National Inst. of Radiological Sciences, Chiba (Japan)

    2000-05-01

    Differences in plutonium distribution in various organs, particularly the bones, of rats injected at different ages were examined in order to aid in estimating plutonium distribution in humans. Comparisons were made between rats and humans based on bone histomorphometric and mineral density data. Male and female rats of three ages (3, 12, and 24 months old) received an injection of plutonium nitrate under one of two dose modalities: a fixed amount of plutonium regardless of age, sex, or body weight; or an amount per g of body weight. The rats were killed 2 weeks after the injection of plutonium. The amounts of plutonium deposited in the organs varied without regard to the body or organ weight; those in the skeleton increased from 3 to 12 months, reaching a peak at 12 months, but then decreased, along with the age-related changes in bone surface, volume, and mineral density. Those in the liver, spleen and kidney decreased despite the body weight gain with age in both sexes. Age-related differences in the deposition of plutonium in humans were estimated based on the bone data characteristics obtained from histomorphometry and bone mineral density for corresponding ages in rats and humans. The results indicate that age is the most important factor in estimating the distribution of plutonium deposition in the early period after plutonium exposure, and that body or organ weight is not always a useful indicator, particularly in the aged. (author)

  16. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    Science.gov (United States)

    Dekkers, A L M; Slob, W

    2012-10-01

    In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of performing the back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications, on folate intake and fruit consumption, confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed; even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
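
    A hedged sketch of the quadrature step, assuming a Box-Cox transformation to normality (the transformation parameter, within-person spread, and between-person means below are all made up): Gauss-Hermite nodes replace the integral that back-transforms each person's transformed-scale mean to the original intake scale.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Gauss-Hermite nodes for E[f(Z)], Z ~ N(0, 1):
# E[f(Z)] ≈ (1/sqrt(pi)) * sum_k w_k * f(sqrt(2) * x_k)
x, w = hermgauss(9)                    # 9 nodes are usually ample for smooth f

def usual_intake(mu_i, sigma_within, lam):
    """Back-transform one person's mean on the Box-Cox scale to the original
    scale, integrating out the day-to-day (within-person) variance."""
    z = mu_i + sigma_within * np.sqrt(2.0) * x            # transformed-scale values
    intakes = np.maximum(lam * z + 1.0, 0.0) ** (1.0 / lam)  # inverse Box-Cox
    return np.sum(w * intakes) / np.sqrt(np.pi)

# Usual-intake distribution: apply to each person's transformed-scale mean
mu = np.random.default_rng(1).normal(2.0, 0.4, 5000)      # between-person variation
usual = np.array([usual_intake(m, sigma_within=0.6, lam=0.3) for m in mu])
print(np.percentile(usual, [5, 50, 95]))                  # usual-intake percentiles
```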

  17. Estimating Single and Multiple Target Locations Using K-Means Clustering with Radio Tomographic Imaging in Wireless Sensor Networks

    Science.gov (United States)

    2015-03-26

    K-means clustering is an algorithm that has been used in data mining applications such as machine learning, pattern recognition, and hyper-spectral imagery. The thesis applies K-means clustering with radio tomographic imaging (RTI) in wireless sensor networks (WSN) to estimate single and multiple target locations.

  18. Estimating Elevation Angles From SAR Crosstalk

    Science.gov (United States)

    Freeman, Anthony

    1994-01-01

    Scheme for processing polarimetric synthetic-aperture-radar (SAR) image data yields estimates of elevation angles along radar beam to target resolution cells. By use of estimated elevation angles, measured distances along radar beam to targets (slant ranges), and measured altitude of aircraft carrying SAR equipment, one can estimate height of target terrain in each resolution cell. Monopulselike scheme yields low-resolution topographical data.
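
    A back-of-envelope sketch of the final step (all numbers hypothetical, and the sign and angle conventions assumed rather than taken from the brief): once the elevation (depression) angle to each resolution cell is estimated, terrain height follows from the aircraft altitude and slant range by simple trigonometry.

```python
import numpy as np

H = 8000.0                                   # aircraft altitude (m), hypothetical
r = np.array([9500.0, 11000.0, 13000.0])     # measured slant ranges (m)
theta = np.deg2rad([38.0, 33.5, 28.0])       # estimated depression angles

# Terrain height per resolution cell: altitude minus the vertical drop
# along the radar beam, h = H - r * sin(theta)
terrain_height = H - r * np.sin(theta)
print(terrain_height)                        # per-cell height estimates (m)
```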

  19. Signal Processing of Ground Penetrating Radar Using Spectral Estimation Techniques to Estimate the Position of Buried Targets

    Directory of Open Access Journals (Sweden)

    Shanker Man Shrestha

    2003-11-01

    Super-resolution is very important in the signal processing of GPR (ground penetrating radar) for resolving closely buried targets. However, it is not easy to obtain high resolution, as GPR signals are very weak and enveloped by noise. The MUSIC (multiple signal classification) algorithm, which is well known for its super-resolution capability, has been implemented for the signal and image processing of GPR. In addition, a conventional spectral estimation technique, the FFT (fast Fourier transform), has also been implemented for high-precision received signal levels. In this paper, we propose the CPM (combined processing method), which combines the time-domain response of the MUSIC algorithm with the conventional IFFT (inverse fast Fourier transform) to obtain super-resolution and a high-precision signal level. In order to support the proposal, a detailed simulation was performed analyzing the SNR (signal-to-noise ratio). Moreover, a field experiment at a research field and a laboratory experiment at the University of Electro-Communications, Tokyo, were also performed for thorough investigation and supported the proposed method. All the simulation and experimental results are presented.
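
    The sketch below shows the covariance/noise-subspace core of MUSIC on a toy stepped-frequency model (the frequencies, delays, noise level, and snapshot count are hypothetical, not the paper's radar parameters): two echoes 1 ns apart are recovered even though the nominal FFT resolution of the 640 MHz band is about 1.6 ns.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
f = 1.0e9 + 10e6 * np.arange(64)            # stepped-frequency grid (hypothetical)
taus = np.array([20e-9, 21e-9])             # two closely spaced round-trip delays

A = np.exp(-2j * np.pi * np.outer(f, taus)) # steering matrix, one column per echo
S = rng.normal(size=(2, 200)) + 1j * rng.normal(size=(2, 200))
noise = 0.1 * (rng.normal(size=(64, 200)) + 1j * rng.normal(size=(64, 200)))
X = A @ S + noise                           # 200 noisy frequency-domain snapshots

R = X @ X.conj().T / 200                    # sample covariance matrix
w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :-2]                              # noise subspace (2 echoes assumed known)

tau_grid = np.linspace(15e-9, 27e-9, 1200)
steer = np.exp(-2j * np.pi * np.outer(f, tau_grid))
# MUSIC pseudo-spectrum: peaks where the steering vector is orthogonal
# to the noise subspace
P = 1.0 / np.sum(np.abs(En.conj().T @ steer) ** 2, axis=0)

peaks, _ = find_peaks(P)
best = peaks[np.argsort(P[peaks])[-2:]]
print(np.sort(tau_grid[best]) * 1e9, "ns")  # ~20 and ~21 ns recovered
```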

  20. Targeted alpha therapy of mCRPC. Dosimetry estimate of {sup 213}Bi-PSMA-617

    Energy Technology Data Exchange (ETDEWEB)

    Kratochwil, Clemens; Afshar-Oromieh, Ali; Rathke, Hendrik; Giesel, Frederik L. [University Hospital Heidelberg, Department of Nuclear Medicine, Heidelberg (Germany); Schmidt, Karl [ABX-CRO, Dresden (Germany); Bruchertseifer, Frank; Morgenstern, Alfred [European Commission - Joint Research Centre, Directorate for Nuclear Safety and Security, Karlsruhe (Germany); Haberkorn, Uwe [University Hospital Heidelberg, Department of Nuclear Medicine, Heidelberg (Germany); German Cancer Research Center (DKFZ), Cooperation Unit Nuclear Medicine, Heidelberg (Germany)

    2018-01-15

    PSMA-617 is a small molecule targeting the prostate-specific membrane antigen (PSMA). In this work, we estimate the radiation dosimetry for this ligand labeled with the alpha-emitter {sup 213}Bi. Three patients with metastatic prostate cancer underwent PET scans 0.1 h, 1 h, 2 h, 3 h, 4 h and 5 h after injection of {sup 68}Ga-PSMA-617. Source organs were kidneys, liver, spleen, salivary glands, bladder, red marrow and representative tumor lesions. The imaging nuclide {sup 68}Ga was extrapolated to the half-life of {sup 213}Bi. The residence times of {sup 213}Bi were forwarded to the unstable daughter nuclides. OLINDA was used for dosimetry calculation. Results are discussed in comparison to literature data for {sup 225}Ac-PSMA-617. Assuming a relative biological effectiveness of 5 for alpha radiation, the dosimetry estimate revealed equivalent doses of mean 8.1 Sv{sub RBE5}/GBq for salivary glands, 8.1 Sv{sub RBE5}/GBq for kidneys and 0.52 Sv{sub RBE5}/GBq for red marrow. Liver (1.2 Sv{sub RBE5}/GBq), spleen (1.4 Sv{sub RBE5}/GBq), bladder (0.28 Sv{sub RBE5}/GBq) and other organs (0.26 Sv{sub RBE5}/GBq) were not dose-limiting. The effective dose is 0.56 Sv{sub RBE5}/GBq. Tumor lesions were in the range 3.2-9.0 Sv{sub RBE5}/GBq (median 7.6 Sv{sub RBE5}/GBq). Kidneys would limit the cumulative treatment activity to 3.7 GBq; red marrow might limit the maximum single fraction to 2 GBq. Despite promising results, the therapeutic index was inferior compared to {sup 225}Ac-PSMA-617. Dosimetry of {sup 213}Bi-PSMA-617 is in a range traditionally considered reasonable for clinical application. Nevertheless, compared to {sup 225}Ac-PSMA-617, it suffers from higher perfusion-dependent off-target radiation and a longer biological half-life of PSMA-617 in dose-limiting organs than the physical half-life of {sup 213}Bi, rendering this nuclide a second-choice radiolabel for targeted alpha therapy of prostate cancer. (orig.)

  1. Habitat Preferences, Distribution Pattern, and Root Weight Estimation of Pasak Bumi (Eurycoma longifolia Jack.)

    Directory of Open Access Journals (Sweden)

    Siti Masitoh Kartikawati

    2014-04-01

    Pasak bumi (Eurycoma longifolia Jack) is one of the non-timber forest products with "indeterminate" conservation status that is commercially traded in West Kalimantan. The research objective was to determine the potential of pasak bumi root per hectare and its ecological condition in its natural habitat. Root weight of E. longifolia Jack was estimated using simple linear regression and an exponential equation with stem diameter and height as independent variables. The results showed that the individual number of the population was 114, with the majority in the seedling stage with 71 individuals (62.28%). The distribution was found to follow a clumped pattern. Conditions of the habitat could be described as follows: daily average temperature of 25.6°C, daily average relative humidity of 73.6%, light intensity of 0.9 klx, and red-yellow podsolic soil with texture ranging from clay to sandy clay. The selected estimator model for E. longifolia Jack root weight was an exponential equation with stem height as the independent variable, Y = 21.99T^0.010, with a coefficient of determination of 0.97. After the height variable was added, the potential minimum root weight of E. longifolia Jack that could be harvested per hectare was 0.33 kg. Keywords: Eurycoma longifolia, habitat preference, distribution pattern, root weight
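
    A small sketch of how such a power-law estimator can be fitted, using made-up height/weight pairs (the study's raw data are not given here): the model Y = aT^b becomes linear after taking logs, so ordinary least squares recovers both parameters and the coefficient of determination.

```python
import numpy as np

# Hypothetical field data: stem height T (m) and measured root weight Y (kg)
T = np.array([0.4, 0.7, 1.1, 1.6, 2.3, 3.1])
Y = np.array([21.8, 22.0, 22.1, 22.2, 22.3, 22.5])

# Power model Y = a * T^b is linear on the log-log scale:
# ln Y = ln a + b * ln T, so a degree-1 polyfit recovers (b, ln a)
b, ln_a = np.polyfit(np.log(T), np.log(Y), 1)
a = np.exp(ln_a)
pred = a * T**b
r2 = 1 - np.sum((Y - pred)**2) / np.sum((Y - np.mean(Y))**2)
print(f"Y = {a:.2f} * T^{b:.3f},  R^2 = {r2:.2f}")
```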

  2. An analytical comparison of different estimators for the density distribution of the catalyst in an experimental riser by a gammametric technique

    International Nuclear Information System (INIS)

    Lima, Emerson Alexandre de Oliveira; Dantas Carlos C.; Melo, Silvio de Barros; Santos, Valdemir Alexandre dos

    2005-01-01

    In this paper, we address the following questions: what is an appropriate form for the estimate of the ρ = ρ(x, y, z) function? Which method describes the density distribution function most precisely? Which is the best estimator? Once the format of ρ = ρ(x, y, z) and the approximation technique are defined, the experimental arrangement and pass-length configuration are the next problems to be solved: finding the best parameter estimation for the ρ = ρ(x, y, z) function according to the C pass lengths and their spatial configuration. The latter is required to define the resolution of ρ = ρ(x, y, z) and the mechanical scanning movements of the arrangement. These definitions will be implemented in an automated arrangement, by a computational program, for further development of the reconstruction of the catalyst density distribution in experimental risers. The precision evaluation was finally compared to the arrangement geometry that yields the best pass-length spatial configuration. The results are shown in graphics for the two known density distributions. As a conclusion, the parameters for an automated arrangement design are given under the precision required for the catalyst distribution reconstruction. (author)

  3. Estimating cost ratio distribution between fatal and non-fatal road accidents in Malaysia

    Science.gov (United States)

    Hamdan, Nurhidayah; Daud, Noorizam

    2014-07-01

    Road traffic crashes are a major global problem and should be treated as a shared responsibility. In Malaysia, road accident tragedies killed 6,917 people and injured or disabled 17,522 people in 2012, and the government spent about RM9.3 billion in 2009, a loss reported annually of approximately 1 to 2 percent of gross domestic product (GDP). The current cost ratio for fatal and non-fatal accidents used by the Ministry of Works Malaysia is simply based on an arbitrary value of 6:4, or equivalently 1.5:1, reflecting the fact that six factors are involved in the calculation of the accident cost for a fatal accident and four factors for a non-fatal accident. This simple indication used by the authority to calculate the cost ratio is doubtful, since there is a lack of mathematical and conceptual evidence to explain how the ratio is determined. The main aim of this study is to determine a new accident cost ratio for fatal and non-fatal accidents in Malaysia based on a quantitative statistical approach. The cost ratio distributions are estimated based on the Weibull distribution. Due to the unavailability of official accident cost data, insurance claim data for both fatal and non-fatal accidents have been used as proxy information for the actual accident cost. Two types of parameter estimates are used in this study: maximum likelihood (MLE) and robust estimation. The findings of this study reveal that the accident cost ratio for fatal and non-fatal claims when using MLE is 1.33, while for robust estimates the cost ratio is slightly higher at 1.51. This study will help the authority to determine a more accurate cost ratio between fatal and non-fatal accidents as compared to the official ratio set by the government, since the cost ratio is an important element to be used as a weighting in modeling road accident related data. Therefore, this study provides some guidance to revise the insurance claim set by the Malaysian road authority, hence the appropriate method
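
    A hedged sketch of the MLE route, with synthetic claim amounts standing in for the (unavailable) insurance data: fit a Weibull distribution to each claim class by maximum likelihood, then take the ratio of the fitted means. The printed ratio depends entirely on the made-up inputs and is not a reproduction of the study's 1.33.

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical insurance-claim samples (RM) standing in for the study's data
fatal = stats.weibull_min.rvs(1.4, scale=60_000, size=800, random_state=rng)
nonfatal = stats.weibull_min.rvs(1.6, scale=42_000, size=3000, random_state=rng)

def weibull_mean(sample):
    # MLE fit of shape k and scale lam with location pinned at zero,
    # then the Weibull mean lam * Gamma(1 + 1/k)
    k, _, lam = stats.weibull_min.fit(sample, floc=0)
    return lam * math.gamma(1 + 1 / k)

ratio = weibull_mean(fatal) / weibull_mean(nonfatal)
print(f"estimated fatal : non-fatal cost ratio = {ratio:.2f} : 1")
```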

  4. Empirical Bayesian estimation in graphical analysis: a voxel-based approach for the determination of the volume of distribution in PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Zanderigo, Francesca [Department of Molecular Imaging and Neuropathology, New York State Psychiatric Institute, New York, NY (United States)], E-mail: francesca.zanderigo@gmail.com; Ogden, R. Todd [Department of Molecular Imaging and Neuropathology, New York State Psychiatric Institute, New York, NY (United States); Department of Psychiatry, College of Physicians and Surgeons, Columbia University, New York, NY (United States); Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, NY (United States); Bertoldo, Alessandra; Cobelli, Claudio [Department of Information Engineering, University of Padova, Padova (Italy); Mann, J. John; Parsey, Ramin V. [Department of Molecular Imaging and Neuropathology, New York State Psychiatric Institute, New York, NY (United States); Department of Psychiatry, College of Physicians and Surgeons, Columbia University, New York, NY (United States)

    2010-05-15

    Introduction: Total volume of distribution (V{sub T}) determined by graphical analysis (GA) of PET data suffers from a noise-dependent bias. Likelihood estimation in GA (LEGA) eliminates this bias at the region of interest (ROI) level, but at voxel noise levels, the variance of estimators is high, yielding noisy images. We hypothesized that incorporating LEGA V{sub T} estimation in a Bayesian framework would shrink estimators towards prior means, reducing variability and producing meaningful and useful voxel images. Methods: Empirical Bayesian estimation in GA (EBEGA) determines prior distributions using a two-step k-means clustering of voxel activity. Results obtained on eight [{sup 11}C]-DASB studies are compared with estimators computed by ROI-based LEGA. Results: EBEGA reproduces the results obtained by ROI LEGA while providing low-variability V{sub T} images. Correlation coefficients between average EBEGA V{sub T} and corresponding ROI LEGA V{sub T} range from 0.963 to 0.994. Conclusions: EBEGA is a fully automatic and general approach that can be applied to voxel-level V{sub T} image creation and to any modeling strategy to reduce voxel-level estimation variability without prefiltering of the PET data.
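
    A crude one-dimensional sketch of the empirical-Bayes idea (synthetic V_T values, a single k-means pass instead of the paper's two-step clustering, and a rough moment-based prior variance): noisy voxel estimates are shrunk toward their cluster means, reducing overall error.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(8)
# Synthetic voxel-level V_T: two tissue classes plus biological spread and noise
true_vt = np.concatenate([rng.normal(1.5, 0.3, 500), rng.normal(5.0, 0.3, 500)])
sigma2 = 0.8                                   # assumed-known estimation-noise variance
vt_noisy = true_vt + rng.normal(0, np.sqrt(sigma2), 1000)

# Cluster voxel values to build empirical priors (one k-means pass here)
centers, labels = kmeans2(vt_noisy, 2, minit="++", seed=8)
tau2 = np.array([vt_noisy[labels == c].var() for c in range(2)])
prior_var = np.maximum(tau2 - sigma2, 0.05 * tau2)   # crude prior-variance guess

# Precision-weighted shrinkage of each voxel toward its cluster prior mean
w = prior_var[labels] / (prior_var[labels] + sigma2)
vt_eb = w * vt_noisy + (1 - w) * centers[labels]
rmse = lambda e: np.sqrt(np.mean(e**2))
print(f"RMSE raw {rmse(vt_noisy - true_vt):.3f} -> shrunk {rmse(vt_eb - true_vt):.3f}")
```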

  5. Is it wise to protect false targets?

    International Nuclear Information System (INIS)

    Levitin, Gregory; Hausken, Kjell

    2011-01-01

    The paper considers a system consisting of genuine elements and false targets that cannot be distinguished by the attacker's observation. The false targets can be destroyed with much less effort than the genuine elements. We show that even when an attacker cannot distinguish between the genuine elements and the false targets, in many cases it can enhance the attack efficiency using a double attack strategy, in which it first tries to eliminate with minimal effort as many false targets as possible in the first attack and then distributes its entire remaining resource among all surviving targets in the second attack. The model for evaluating the system vulnerability in the double attack is suggested for a single genuine element, and for multiple genuine elements configured in parallel or in series. This model assumes that in both attacks the attacking resource is distributed evenly among the attacked targets. The defender can optimize its limited resource distribution between deploying more false targets and protecting them better. The attacker can optimize its limited resource distribution between the two attacks. The defense strategy is analyzed based on a two-period minmax game. A numerical procedure is suggested that allows the defender to find the optimal resource distribution between deploying and protecting the false targets. The methodology of optimal attack and defense strategy analysis is demonstrated. It is shown that protecting the false targets may reduce the efficiency of the double attack strategy and make this strategy ineffective in situations with low contest intensity and few false targets. - Highlights: ► The efficiency of double attack tactics against defenses using false targets is analyzed. ► The role of false target protection in system survivability enhancement is shown. ► The resource distribution between deploying more false targets and protecting them better is optimized. ► Both series and parallel systems are considered.

  6. Regional models for distributed flash-flood nowcasting: towards an estimation of potential impacts and damages

    Directory of Open Access Journals (Sweden)

    Le Bihan Guillaume

    2016-01-01

    Flash-flood monitoring systems developed up to now generally enable real-time assessment of potential flash-flood magnitudes based on highly distributed hydrological models and weather radar records. The approach presented here aims to go one step further by offering a direct assessment of the potential impacts of flash floods on inhabited areas. It is based on an a priori analysis of the considered area in order (1) to evaluate, based on a semi-automatic hydraulic approach (the Cartino method), the potentially flooded areas for different discharge levels, and (2) to identify the associated buildings and/or population at risk based on geographic databases. This preliminary analysis enables a simplified impact model (discharge-impact curve) to be built for each river reach, which can be used to directly estimate the importance of potentially affected assets from the outputs of a distributed rainfall-runoff model. This article presents a first case study conducted in the Gard region (southeastern France). The first validation results are presented in terms of (1) the accuracy of the delineation of the flooded areas estimated with the Cartino method using a high-resolution DTM, and (2) the relevance and usefulness of the impact model obtained. The impacts estimated at the event scale will be evaluated in the near future using insurance claim data provided by CCR (Caisse Centrale de Réassurance).
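
    In real time, the discharge-impact idea reduces to a table lookup; the sketch below uses placeholder numbers for a single reach (the actual curves come from the offline Cartino analysis and the geographic databases, not from anything given here).

```python
import numpy as np

# Pre-computed offline for one reach: number of buildings reached at a set of
# discharge levels. These numbers are hypothetical placeholders.
q_levels = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # discharge, m^3/s
buildings = np.array([0, 12, 45, 160, 420])               # affected assets

def impact(q):
    """Simplified discharge-impact curve: piecewise-linear lookup applied to
    the nowcasted discharge from the distributed rainfall-runoff model."""
    return np.interp(q, q_levels, buildings)

for q in (75.0, 300.0, 900.0):
    print(f"Q = {q:6.1f} m3/s -> ~{impact(q):.0f} buildings potentially affected")
```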

  7. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
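
    A compact sketch of the comparison on synthetic dbh data (the cohort parameters are invented, and in-sample log-likelihood is only a crude score that flatters the KDE): fit a two-component Weibull mixture by direct likelihood maximization and a kernel density estimator to the same sample.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(11)
# Synthetic two-cohort dbh sample (cm)
dbh = np.concatenate([
    stats.weibull_min.rvs(2.2, scale=14, size=300, random_state=rng),
    stats.weibull_min.rvs(3.5, scale=38, size=200, random_state=rng)])

def nll(theta):
    # Negative log-likelihood of a two-component Weibull mixture;
    # logit/log parametrization keeps the weight in (0, 1) and shapes positive
    w = 1 / (1 + np.exp(-theta[0]))
    k1, s1, k2, s2 = np.exp(theta[1:])
    pdf = w * stats.weibull_min.pdf(dbh, k1, scale=s1) \
        + (1 - w) * stats.weibull_min.pdf(dbh, k2, scale=s2)
    return -np.sum(np.log(pdf + 1e-300))

res = optimize.minimize(nll, x0=[0.0, np.log(2), np.log(10), np.log(3), np.log(30)],
                        method="Nelder-Mead",
                        options={"maxiter": 10000, "maxfev": 10000})
print("Weibull-mixture NLL:", res.fun)

kde = stats.gaussian_kde(dbh)          # nonparametric alternative
print("KDE (in-sample) NLL:", -np.sum(np.log(kde(dbh))))
```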

  8. Fast neutron distributions from Be and C thick targets bombarded with 80 and 160 MeV deuterons

    International Nuclear Information System (INIS)

    Pauwels, N.; Laurent, H.; Clapier, F.; Brandenburg, S.; Beijers, J. P .M.; Zegers, R. G. T.; Lebreton, H.; Saint-Laurent, M.G.; Mirea, M.

    2001-01-01

    Studies of fast neutron production have come to the fore in the past few years because of the great interest in possible applications of induced fission to produce neutron-rich ion beams. In this context, the main objective of the SPIRAL II (Systeme de Production d'Ions Radioactifs Acceleres en Ligne) and PARRNe (Production d'Atomes Radioactifs Riches en Neutrons) R and D projects is the investigation of the feasibility of, and the optimum parameters for, a neutron-rich isotope source. Special attention is dedicated to the energy and angular distributions of the neutrons obtained through deuteron break-up in different types of converters and at different incident energies. Analysis and modelling of such behaviour, together with the study of the yields of neutron-induced fission, can be used to optimize the productivity of the fissioning target and to design its geometry accordingly. The present report continues our previous studies realised for 17, 20, 28 and 200 MeV deuteron energies and focuses on deuteron incident energies of 80 and 160 MeV. In the experiment, the double differential cross sections for neutron production induced by 80 and 160 MeV deuterons impinging on thick C and Be targets, in which the incident deuterons were completely stopped, were measured. The energy of the neutrons was determined from the time-of-flight (TOF) measurement. To obtain an energy resolution of about 4% for the fastest, forward-emitted neutrons, which have approximately beam velocity, the length of the flight path for the detectors at angles up to 30° was chosen to be 6 m. At backward angles, where the neutron energies are lower, a shorter flight path was chosen. A schematic drawing of the setup is shown. A 100 mm thick Be target and a 70 mm thick C target were used. Results are exemplified with the angular and energy distributions of neutrons obtained for the Be target at 80 MeV. (authors)

  9. Extracting Prior Distributions from a Large Dataset of In-Situ Measurements to Support SWOT-based Estimation of River Discharge

    Science.gov (United States)

    Hagemann, M.; Gleason, C. J.

    2017-12-01

    The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
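
    A minimal sketch of how such empirical priors can be built (synthetic stand-ins for the ADCP records; the exponents and spreads are invented): regress log cross-sectional area on log width, and read off a lognormal prior for any SWOT-observed reach width.

```python
import numpy as np

rng = np.random.default_rng(9)
# Stand-in for gage records: channel width W (m) and cross-sectional area A0 (m^2)
W = rng.lognormal(4.0, 0.8, 10_000)
A0 = np.exp(0.9 * np.log(W) + rng.normal(0.8, 0.45, W.size))  # hypothetical relation

# Empirical prior: regress log A0 on log W; the residual spread gives the
# prior's lognormal sigma
slope, intercept = np.polyfit(np.log(W), np.log(A0), 1)
sigma = (np.log(A0) - (slope * np.log(W) + intercept)).std()

# For a SWOT reach with an observed mean width, the A0 prior is lognormal:
W_obs = 120.0
mu = slope * np.log(W_obs) + intercept
print(f"prior median A0 ~ {np.exp(mu):.0f} m^2, 90% interval "
      f"[{np.exp(mu - 1.645*sigma):.0f}, {np.exp(mu + 1.645*sigma):.0f}] m^2")
```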

  10. Improving the rainfall rate estimation in the midstream of the Heihe River Basin using raindrop size distribution

    Directory of Open Access Journals (Sweden)

    G. Zhao

    2011-03-01

    During the intensive observation period of the Watershed Allied Telemetry Experimental Research (WATER), a total of 1074 raindrop size distributions were measured by the Parsivel disdrometer, the latest state-of-the-art optical laser instrument. Because of the limited observation data on the Qinghai-Tibet Plateau, modelling there has not been well established. We used the raindrop size distributions to improve the rain rate estimator of meteorological radar in order to obtain more accurate rain rate data in this area. We obtained the relationship between the terminal velocity v (m/s) and the diameter D (mm) of a raindrop: v(D) = 4.67D^0.53. Then four types of estimators for X-band polarimetric radar are examined. The simulation results show that the classical estimator R(ZH) is most sensitive to variations in DSD and that the estimator R(KDP, ZH, ZDR) is the best estimator for estimating the rain rate. An X-band polarimetric radar (714XDP) is used for verifying these estimators. The lowest sensitivity of the rain rate estimator R(KDP, ZH, ZDR) to variations in DSD can be explained by the following facts. The difference in the forward-scattering amplitudes at horizontal and vertical polarizations, which contributes to KDP, is proportional to the 3rd power of the drop diameter. On the other hand, the backscatter cross-section, which contributes to ZH, is proportional to the 6th power of the drop diameter. Because the rain rate R is proportional to the 3.57th power of the drop diameter, KDP is less sensitive to DSD variations than ZH.
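
    Using the fitted fall-speed relation, the rain rate follows from the DSD by a standard moment integral; the sketch below assumes a hypothetical exponential (Marshall-Palmer-type) DSD rather than the WATER measurements.

```python
import numpy as np

# Terminal-velocity relation fitted in the study: v(D) = 4.67 * D**0.53
# with v in m/s and D in mm
def v(D):
    return 4.67 * D**0.53

# Hypothetical exponential DSD, N(D) in mm^-1 m^-3 (Marshall-Palmer form)
N0, lam = 8000.0, 2.0
D = np.linspace(0.1, 6.0, 600)                 # drop diameters, mm
N = N0 * np.exp(-lam * D)

# Rain rate in mm/h: R = 6*pi*1e-4 * integral of v(D) * D^3 * N(D) dD
# (unit bookkeeping: mm^3 of water per m^3 of air, swept at v m/s)
R = 6 * np.pi * 1e-4 * np.sum(v(D) * D**3 * N) * (D[1] - D[0])
print(f"R = {R:.1f} mm/h")
```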

  11. Observation-based Quantitative Uncertainty Estimation for Realtime Tsunami Inundation Forecast using ABIC and Ensemble Simulation

    Science.gov (United States)

    Takagawa, T.

    2016-12-01

    An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elemental methods. The first is hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Monte Carlo sampling from a multidimensional normal probability density function. The third is ensemble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation-based validation of the model was conducted. A tsunami scenario of an M9.1 Nankai earthquake was chosen as the validation target. Tsunami inundation around Nagoya Port was estimated using synthetic tsunami waveforms at offshore GPS buoys. The error in the estimated tsunami inundation area was about 10% even when only ten minutes of observation data were used. The estimation accuracy of waveforms on and off land and of the spatial distribution of maximum tsunami inundation depth is also demonstrated.

  12. Pilots' Visual Scan Patterns and Attention Distribution During the Pursuit of a Dynamic Target.

    Science.gov (United States)

    Yu, Chung-San; Wang, Eric Min-Yang; Li, Wen-Chin; Braithwaite, Graham; Greaves, Matthew

    2016-01-01

    The current research investigated pilots' visual scan patterns in order to assess attention distribution during air-to-air maneuvers. A total of 30 qualified mission-ready fighter pilots participated in this research. Eye movement data were collected by a portable head-mounted eye-tracking device, combined with a jet fighter simulator. To complete the task, pilots had to search for, pursue, and lock on to a moving target while performing air-to-air tasks. There were significant differences in pilots' saccade durations (ms) in the three operating phases: searching (M = 241, SD = 332), pursuing (M = 311, SD = 392), and lock-on (M = 191, SD = 226). Also, there were significant differences in pilots' pupil sizes (pixel²), of which the lock-on phase was the largest (M = 27,237, SD = 6457), followed by pursuit (M = 26,232, SD = 6070), then searching (M = 25,858, SD = 6137). Furthermore, there were significant differences between expert and novice pilots in the percentage of fixations on the head-up display (HUD), the time spent looking outside the cockpit, and the performance of situational awareness (SA). Experienced pilots had better SA performance and paid more attention to the HUD, but focused less outside the cockpit when compared with novice pilots. Furthermore, pilots with better SA performance exhibited a smaller pupil size during the operational phase of lock-on while pursuing a dynamic target. Understanding pilots' visual scan patterns and attention distribution is beneficial to the design of interface displays in the cockpit and to developing human factors training syllabi to improve the safety of flight operations.

  13. Conceptual Design of an Online Estimation System for Stigmergic Collaboration and Nodal Intelligence on Distributed DC Systems

    Directory of Open Access Journals (Sweden)

    DOORSAMY, W.

    2017-05-01

    The secondary-level control of stand-alone distributed energy systems requires accurate online state information for effective coordination of its components. State estimation is possible through several techniques depending on the system's architecture and control philosophy. A conceptual design of an online state estimation system to provide nodal autonomy on DC systems is presented. The proposed estimation system uses local measurements, at each node, to obtain an aggregation of the system's state required for nodal self-control without the need for external communication with other nodes or a central controller. The recursive least-squares technique is used in conjunction with stigmergic collaboration to implement the state estimation system. Numerical results are obtained using a Matlab/Simulink model and experimentally validated in a laboratory setting. Results indicate that the proposed system provides accurate estimation and fast updating during both quasi-static and transient states.
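
    The recursive least-squares core that each node could run locally looks roughly like the sketch below (a generic RLS with a forgetting factor; the two-state measurement model and all numbers are invented, and the stigmergic-collaboration layer is omitted).

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS estimator for y = H x + noise, with exponential forgetting
    to track slow (quasi-static) drift in the underlying state."""

    def __init__(self, n, forgetting=0.98):
        self.x = np.zeros(n)
        self.P = np.eye(n) * 1e3        # large initial covariance = weak prior
        self.lam = forgetting

    def update(self, H, y):
        H = np.atleast_2d(H)
        S = self.lam + H @ self.P @ H.T
        K = self.P @ H.T / S            # gain
        self.x = self.x + (K * (y - H @ self.x)).ravel()
        self.P = (self.P - K @ H @ self.P) / self.lam
        return self.x

rng = np.random.default_rng(3)
true_x = np.array([48.0, 2.5])          # e.g. a bus voltage and a feeder current
est = RecursiveLeastSquares(2)
for _ in range(500):
    H = rng.normal(size=(1, 2))         # varying local measurement combination
    y = (H @ true_x).item() + rng.normal(0, 0.1)
    est.update(H, y)
print(est.x)                            # converges near [48.0, 2.5]
```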

  14. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    Energy Technology Data Exchange (ETDEWEB)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S{sub MC}. S{sub MC} is fit to a function, S{sub F}, and if the fit of S{sub F} is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S{sub F}, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a
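
    The fitting step can be pictured as a linear least-squares problem over a frequency-limited sine/cosine basis; the sketch below fits one noisy projection on a coarse grid (the scatter field, grid size, and basis order are hypothetical, and the interpolation across projection angles that CMCF performs is omitted).

```python
import numpy as np

rng = np.random.default_rng(5)
nu, nv = 32, 32                                    # coarse detector sampling grid
u, v = np.meshgrid(np.linspace(0, 1, nu), np.linspace(0, 1, nv), indexing="ij")

# "True" smooth scatter field plus Poisson-like MC noise (stand-in for S_MC)
truth = 120 + 40 * np.cos(2*np.pi*u) * np.sin(np.pi*v) + 15 * np.sin(2*np.pi*v)
S_mc = rng.poisson(truth).astype(float)

def basis(u, v, kmax=2):
    # Frequency-limited 2D basis: tensor products of low-order sines/cosines
    fu = [np.ones_like(u)] + [f(2*np.pi*k*u)
                              for k in range(1, kmax + 1) for f in (np.cos, np.sin)]
    fv = [np.ones_like(v)] + [f(2*np.pi*k*v)
                              for k in range(1, kmax + 1) for f in (np.cos, np.sin)]
    return np.stack([a * b for a in fu for b in fv], axis=-1)

B = basis(u, v).reshape(nu * nv, -1)
coef, *_ = np.linalg.lstsq(B, S_mc.ravel(), rcond=None)   # least-squares fit
S_fit = (B @ coef).reshape(nu, nv)
rel_err = np.abs(S_fit - truth).mean() / truth.mean()
print(f"mean relative error of fitted scatter: {rel_err:.3%}")
```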

  15. An overview of distributed microgrid state estimation and control for smart grids.

    Science.gov (United States)

    Rana, Md Masud; Li, Li

    2015-02-12

    Given the significant concerns regarding carbon emission from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Therefore, integrating these two approaches with application to the smart grid forms a novel contribution in the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.
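
    A toy linear-Gaussian version of the filtering step (the two-state model, noise levels, and measurement matrix are invented, not the paper's microgrid model) shows the predict/update cycle on which the proposed estimator builds.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy linear model standing in for the DER/microgrid state dynamics:
# x_{k+1} = A x_k + w,   y_k = C x_k + v
A = np.array([[0.95, 0.05], [0.02, 0.97]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)                 # process-noise covariance
Rn = 1e-2 * np.eye(1)                # measurement-noise covariance

x_true = np.array([1.0, 0.5])
x_hat, P = np.zeros(2), np.eye(2)
errs = []
for k in range(300):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(1), Rn)
    x_hat, P = A @ x_hat, A @ P @ A.T + Q            # predict
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rn)    # Kalman gain
    x_hat = x_hat + K @ (y - C @ x_hat)              # update
    P = (np.eye(2) - K @ C) @ P
    errs.append(np.linalg.norm(x_hat - x_true))
print(f"mean estimation error over last 100 steps: {np.mean(errs[-100:]):.4f}")
```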

  17. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    Energy Technology Data Exchange (ETDEWEB)

    Fan, J; Fan, J; Hu, W; Wang, J [Fudan University Shanghai Cancer Center, Shanghai, Shanghai (China)

    2016-06-15

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between these two DVHs for each cancer type, and the average of the relative point-wise differences is about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
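
    A reduced sketch of the pipeline with a single predictive feature (the signed distance to the target boundary; the training relation and all numbers are synthetic): estimate the joint density with a KDE, condition on distance, and average the conditional exceedance curves to get a predicted DVH.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Training voxels from prior plans: (signed distance d in mm, delivered dose y in Gy);
# the dose falloff with distance is a made-up stand-in
d_train = rng.uniform(-5, 40, 4000)
y_train = 70 * np.exp(-np.maximum(d_train, 0) / 12) + rng.normal(0, 3, 4000)
kde = gaussian_kde(np.vstack([d_train, y_train]))          # joint p(d, y)

dose_grid = np.linspace(0, 80, 161)

def predicted_dvh(d_new):
    """Predicted cumulative DVH: average the conditional P(dose >= t | d)
    over the new structure's distance distribution."""
    dvh = np.zeros_like(dose_grid)
    for d in d_new:
        pdf = kde(np.vstack([np.full_like(dose_grid, d), dose_grid]))
        cdf = np.cumsum(pdf) / pdf.sum()                   # conditional CDF in dose
        dvh += 1.0 - cdf
    return dvh / len(d_new)

d_new = rng.uniform(0, 30, 100)     # distances for one OAR of a new patient
print(predicted_dvh(d_new)[::40])   # volume fraction above selected dose levels
```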

  19. Efficient and robust identification of cortical targets in concurrent TMS-fMRI experiments

    Science.gov (United States)

    Yau, Jeffrey M.; Hua, Jun; Liao, Diana A.; Desmond, John E.

    2014-01-01

    Transcranial magnetic stimulation (TMS) can be delivered during fMRI scans to evoke BOLD responses in distributed brain networks. While concurrent TMS-fMRI offers a potentially powerful tool for non-invasively investigating functional human neuroanatomy, the technique is currently limited by the lack of methods to rapidly and precisely localize targeted brain regions – a reliable procedure is necessary for validly relating stimulation targets to BOLD activation patterns, especially for cortical targets outside of motor and visual regions. Here we describe a convenient and practical method for visualizing coil position (in the scanner) and identifying the cortical location of TMS targets without requiring any calibration or any particular coil-mounting device. We quantified the precision and reliability of the target position estimates by testing the marker processing procedure on data from 9 scan sessions: Rigorous testing of the localization procedure revealed minimal variability in coil and target position estimates. We validated the marker processing procedure in concurrent TMS-fMRI experiments characterizing motor network connectivity. Together, these results indicate that our efficient method accurately and reliably identifies TMS targets in the MR scanner, which can be useful during scan sessions for optimizing coil placement and also for post-scan outlier identification. Notably, this method can be used generally to identify the position and orientation of MR-compatible hardware placed near the head in the MR scanner. PMID:23507384

  20. Inverse Estimation of Heat Flux and Temperature Distribution in 3D Finite Domain

    International Nuclear Information System (INIS)

    Muhammad, Nauman Malik

    2009-02-01

    Inverse heat conduction problems occur in many theoretical and practical applications where it is difficult or practically impossible to measure the input heat flux and the temperature of the layer conducting the heat flux to the body. It thus becomes imperative to devise some means to handle such a problem and estimate the heat flux inversely. The Adaptive State Estimator is one such technique: it incorporates a semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The problem presented in this study deals with a three-dimensional system, a cube with one face conducting heat flux while all the other sides are insulated, and with temperatures measured on the accessible faces of the cube. The measurements taken on these accessible faces are fed into the estimation algorithm, and the input heat flux and the temperature distribution at each point in the system are calculated. A variety of input heat flux scenarios have been examined to verify the robustness of the estimation algorithm and hence ensure its usability in practical applications. These include a sinusoidal input flux, a combination of rectangular, linearly changing and sinusoidal input fluxes, and finally a step-changing input flux. The estimator's performance limitations have been examined in these input set-ups, and the error associated with each set-up is compared to assess the realistic applicability of the estimation algorithm in such scenarios. Different sensor arrangements, that is, different numbers of sensors and their locations, are also examined to impress upon the importance of the number of measurements and their location, i.e., close to or farther from the input area. Since it is practically both economically and physically tedious to install a larger number of measurement sensors, an optimized number and location are very important to determine for making the study more
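
    A scalar caricature of the estimator (a one-dimensional conduction stand-in with an invented model and candidate flux levels, and the semi-Markov switching logic omitted): a bank of Kalman filters, one per input hypothesis, reweighted by innovation likelihood so the blended estimate settles on the true input flux.

```python
import numpy as np

rng = np.random.default_rng(6)
# Scalar stand-in for the 3D conduction problem: surface temperature state T
# driven by an unknown input flux q:  T_{k+1} = a*T_k + b*q + w,   y = T + v
a, b, Q, Rm = 0.9, 0.1, 1e-4, 1e-2
hypotheses = np.array([0.0, 5.0, 10.0, 20.0])   # candidate flux levels (the bank)
weights = np.ones(4) / 4
xs, Ps = np.zeros(4), np.ones(4)

T_true, q_true = 0.0, 10.0
for k in range(200):
    T_true = a * T_true + b * q_true + rng.normal(0, np.sqrt(Q))
    y = T_true + rng.normal(0, np.sqrt(Rm))
    for i, q in enumerate(hypotheses):
        # per-filter predict/update under its own input hypothesis
        xp = a * xs[i] + b * q
        Pp = a * Ps[i] * a + Q
        S = Pp + Rm
        K = Pp / S
        innov = y - xp
        xs[i] = xp + K * innov
        Ps[i] = (1 - K) * Pp
        # reweight by the Gaussian innovation likelihood
        weights[i] *= np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    weights = np.maximum(weights / weights.sum(), 1e-12)

q_hat = float(weights @ hypotheses)
print(f"estimated input flux ~ {q_hat:.2f} (true {q_true})")
```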