Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar
Directory of Open Access Journals (Sweden)
Zhenxin Cao
2018-02-01
Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao lower bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and that the velocity estimation performance can be further improved by increasing either the number of radar antennas or the accuracy of the target position information. Furthermore, compared with existing methods, a better estimation performance is achieved.
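With the target position known, the ML estimator under white Gaussian Doppler-measurement noise reduces to a least-squares fit of the bistatic Doppler equations. A minimal sketch (the geometry, wavelength, and noise level below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.03                                   # wavelength (m), assumed X-band
tx = np.array([[0.0, 0.0], [500.0, 0.0]])    # transmitter positions (m)
rx = np.array([[0.0, 400.0], [600.0, 300.0], [300.0, -200.0]])  # receivers
p = np.array([1000.0, 800.0])                # known target position
v_true = np.array([30.0, -12.0])             # target velocity (m/s)

rows, dopp = [], []
for t in tx:
    ut = (p - t) / np.linalg.norm(p - t)     # unit vector tx -> target
    for r in rx:
        ur = (p - r) / np.linalg.norm(p - r) # unit vector rx -> target
        rows.append((ut + ur) / lam)         # bistatic Doppler gradient w.r.t. v
        dopp.append(rows[-1] @ v_true)
A = np.array(rows)
f = np.array(dopp) + rng.normal(0, 0.5, size=len(dopp))  # noisy Doppler (Hz)

# under Gaussian noise the ML velocity estimate is the least-squares solution
v_hat, *_ = np.linalg.lstsq(A, f, rcond=None)
print(v_hat)
```

With six transmit-receive pairs and only two velocity unknowns, the overdetermined system averages out the Doppler noise, which is the mechanism behind the accuracy gain from additional antennas noted in the abstract.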
Xu, Huijun; Gordon, J James; Siebers, Jeffrey V
2011-02-01
A dosimetric margin (DM) is the margin in a specified direction between a structure and a specified isodose surface, corresponding to a prescription or tolerance dose. The dosimetric margin distribution (DMD) is the distribution of DMs over all directions. Given a geometric uncertainty model, representing inter- or intrafraction setup uncertainties or internal organ motion, the DMD can be used to calculate coverage Q, which is the probability that a realized target or organ-at-risk (OAR) dose metric D exceeds the corresponding prescription or tolerance dose. Postplanning coverage evaluation quantifies the percentage of uncertainties for which target and OAR structures meet their intended dose constraints. The goal of the present work is to evaluate coverage probabilities for 28 prostate treatment plans to determine DMD sampling parameters that ensure adequate accuracy for postplanning coverage estimates. Normally distributed interfraction setup uncertainties were applied to 28 plans for localized prostate cancer, with a prescribed dose of 79.2 Gy and 10 mm clinical target volume to planning target volume (CTV-to-PTV) margins. Using angular or isotropic sampling techniques, dosimetric margins were determined for the CTV, bladder and rectum, assuming shift invariance of the dose distribution. For angular sampling, DMDs were sampled at fixed angular intervals omega (e.g., omega = 1 degree, 2 degrees, 5 degrees, 10 degrees, 20 degrees). Isotropic samples were uniformly distributed on the unit sphere, resulting in variable angular increments, but were calculated for the same number of sampling directions as angular DMDs and accordingly characterized by the effective angular increment omega eff. In each direction, the DM was calculated by moving the structure in radial steps of size delta (=0.1, 0.2, 0.5, 1 mm) until the specified isodose was crossed. Coverage estimation accuracy deltaQ was quantified as a function of the sampling parameters omega or omega eff and delta. The
Development of distributed target
Yu Hai Jun; Li Qin; Zhou Fu Xin; Shi Jin Shui; Ma Bing; Chen Nan; Jing Xiao Bing
2002-01-01
A linear induction accelerator is expected to generate small-diameter X-ray spots with high intensity. The interaction of the electron beam with plasmas generated at the X-ray converter makes the spot on the target grow with time and degrades the X-ray dose and the imaging resolving power. A distributed target has been developed which has about 24 pieces of thin 0.05 mm tantalum films distributed over 1 cm. Because this structure spreads the target material over a large volume, it decreases the energy deposition per unit volume and hence reduces the temperature of the target surface, which in turn reduces the initial plasma formation and its expansion velocity. A comparison and analysis of the two target structures are presented using numerical calculation and experiments. The results show that the X-ray dose and normalized angular distribution of the two are basically the same, while the surface of the distributed target is not destroyed like that of the previous block target
Distribution load estimation - DLE
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A. [VTT Energy, Espoo (Finland)
1996-12-31
The load research project has produced statistical information in the form of load models to convert figures of annual energy consumption into hourly load values. The reliability of the load models is limited in any particular network because many local circumstances vary from utility to utility and from time to time. Therefore there is a need to improve the load models. Distribution load estimation (DLE) is the method developed here to improve the load estimates obtained from the load models. The method is also quite cheap to apply, as it utilises information that is already available in SCADA systems
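A minimal sketch of the DLE idea: adjust customer-class load-model estimates so that their hourly sum matches a SCADA feeder measurement, distributing each hour's mismatch over the classes in proportion to assumed model variances (all numbers are invented for illustration; the actual DLE method is more elaborate):

```python
import numpy as np

# model-based hourly loads (kW) for three customer classes on one feeder
model = np.array([[120.0, 80.0, 40.0],     # hour 1: residential, commercial, industrial
                  [150.0, 90.0, 40.0],     # hour 2
                  [100.0, 70.0, 30.0]])    # hour 3
sigma = np.array([30.0, 15.0, 5.0])        # assumed model std devs per class
scada = np.array([260.0, 270.0, 215.0])    # measured feeder totals (kW)

# spread each hour's mismatch over classes in proportion to model variance,
# so the least reliable class models absorb most of the correction
w = sigma**2 / np.sum(sigma**2)
corrected = model + (scada - model.sum(axis=1))[:, None] * w
print(corrected.sum(axis=1))   # now matches the SCADA totals
```

The corrected class loads stay close to the original load-research models while being consistent with the measured feeder total, which is the stated goal of DLE.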
Distribution load estimation (DLE)
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A; Lehtonen, M [VTT Energy, Espoo (Finland)
1998-08-01
The load research has produced customer class load models to convert customers' annual energy consumption to hourly load values. The reliability of load models derived from a nation-wide sample is limited in any specific network because many local circumstances differ from utility to utility and from time to time. Therefore there is a need to find improvements to the load models or, in general, to the load estimates. In Distribution Load Estimation (DLE), measurements from the network are utilized to improve the customer class load models. The results of DLE are new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented
Estimation of Bridge Reliability Distributions
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate offspring at the niche level by alternating between these two distributions, which again potentially balances exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, as supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
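A heavily simplified sketch of the niche-level Gaussian/Cauchy alternation on a bimodal 1-D objective. Two fixed niches stand in for the paper's clustering-based crowding/speciation, and plain truncation selection stands in for its adaptive machinery:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: -(x**2 - 1.0)**2              # bimodal: maxima at x = -1 and x = +1

# two fixed niches, a crude stand-in for clustering-based speciation
niches = [rng.uniform(-3, 0, 30), rng.uniform(0, 3, 30)]
for gen in range(40):
    for i, niche in enumerate(niches):
        elite = niche[np.argsort(f(niche))][-10:]   # truncation selection
        mu, sd = elite.mean(), elite.std() + 1e-6   # fit the niche model
        if gen % 2 == 0:                            # alternate Gaussian / Cauchy
            child = rng.normal(mu, sd, size=30)     # exploitation-leaning
        else:
            child = mu + sd * rng.standard_cauchy(size=30)  # heavy-tailed exploration
        merged = np.concatenate([niche, child])
        niches[i] = merged[np.argsort(f(merged))][-30:]     # elitist survival

best = [ni[np.argmax(f(ni))] for ni in niches]
print(best)   # one solution near each optimum
```

The Cauchy generations occasionally throw offspring far from the niche mean, which is what lets an EDA escape a niche that has converged on a poor local optimum; the Gaussian generations then refine.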
Target distribution in cooperative combat based on Bayesian optimization algorithm
Institute of Scientific and Technical Information of China (English)
Shi Zhifu; Zhang An; Wang Anli
2006-01-01
Target distribution in cooperative combat is a difficult and important problem. We build the optimization model according to the rules of fire distribution and investigate it with the Bayesian optimization algorithm (BOA). The BOA estimates the joint probability distribution of the variables with a Bayesian network, and new candidate solutions are generated from this joint distribution. A simulation example verifies that the method can solve this complex problem: the computation is fast and the solution is optimal.
Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting
Directory of Open Access Journals (Sweden)
Stanislav Anatolyev
2015-08-01
Full Text Available Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while distributional asymmetry has little or moderate impact; these phenomena tend to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.
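A rough illustration of variance targeting in a GARCH(1,1): omega is fixed from the sample variance, so the Gaussian QML objective is searched only over (alpha, beta). The parameter values, t(5) innovations, and coarse grid search are illustrative choices, not the paper's simulation design:

```python
import numpy as np
from math import log

rng = np.random.default_rng(3)
T, omega, alpha, beta = 4000, 0.1, 0.1, 0.8   # true GARCH(1,1) parameters

# simulate returns with t(5) innovations rescaled to unit variance (heavy tails)
z = rng.standard_t(5, T) / np.sqrt(5 / 3)
r = np.empty(T)
s2 = omega / (1 - alpha - beta)               # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(s2) * z[t]
    s2 = omega + alpha * r[t] ** 2 + beta * s2
r2 = r ** 2

def negloglik(a, b, w):
    """Gaussian QML objective for GARCH(1,1) with parameters (w, a, b)."""
    s2, nll = r2.mean(), 0.0
    for t in range(T):
        nll += 0.5 * (log(s2) + r2[t] / s2)
        s2 = w + a * r2[t] + b * s2
    return nll

# variance targeting: omega is tied to the sample variance of the returns,
# leaving only (alpha, beta) to be searched
s2_bar = r2.mean()
cands = [(negloglik(a, b, s2_bar * (1 - a - b)), a, b)
         for a in np.linspace(0.02, 0.30, 15)
         for b in np.linspace(0.60, 0.95, 15) if a + b < 0.999]
_, a_hat, b_hat = min(cands)
print(a_hat, b_hat)
```

The targeting step removes omega from the search, but it inherits the sampling noise of the sample variance, which under heavy-tailed innovations is exactly where the abstract's precision loss comes from.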
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
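A compact sketch of the efficient-influence-curve idea for the treatment-specific mean, using the closely related augmented-IPW estimator rather than a full TMLE (a TMLE would replace the additive correction with a targeted fluctuation of the outcome regression, and simple parametric fits stand in here for super-learning):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
W = rng.normal(size=n)                         # baseline covariate
g_true = 1 / (1 + np.exp(-0.5 * W))            # true propensity score
A = rng.binomial(1, g_true)                    # binary treatment
Y = 1.0 + A + W + rng.normal(size=n)           # outcome; true E[Y(1)] = 2

X = np.column_stack([np.ones(n), W])

def logistic_fit(X, y, iters=25):
    """Plain Newton-Raphson logistic regression."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X.shape[1])
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

# nuisance estimates: outcome regression among the treated, and propensity score
q = np.linalg.lstsq(X[A == 1], Y[A == 1], rcond=None)[0]
Q1 = X @ q
g = 1 / (1 + np.exp(-X @ logistic_fit(X, A)))

# plug-in plus the influence-curve correction term A/g * (Y - Q1)
ic = Q1 + A / g * (Y - Q1)
psi = ic.mean()
se = ic.std() / np.sqrt(n)                     # inference from the influence curve
print(psi, se)
```

The correction term is what makes the bias second order in the nuisance-estimation errors; the paper's point is that when the nuisance fits are data-adaptive, an extra targeting step is needed to keep that property and the influence-curve-based standard error valid.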
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
Muhammad Zahid Rashid
2011-04-01
Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods, and determined the best estimation method for different values of the parameters and different sample sizes
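A small Monte Carlo comparison in the spirit of the paper, for two of the listed estimators of the two-parameter exponential distribution (MLE and moment estimators), scored by MSE. The parameter values and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
loc, scale, n, reps = 2.0, 3.0, 200, 2000
mse = {"MLE": np.zeros(2), "ME": np.zeros(2)}   # [location MSE, scale MSE]

for _ in range(reps):
    x = loc + rng.exponential(scale, n)
    s = x.std(ddof=1)
    est = {
        # MLE: location = sample minimum, scale = mean excess over the minimum
        "MLE": (x.min(), x.mean() - x.min()),
        # moment estimators: mean = loc + scale, std = scale
        "ME": (x.mean() - s, s),
    }
    for k, (l_hat, s_hat) in est.items():
        mse[k] += np.array([(l_hat - loc)**2, (s_hat - scale)**2]) / reps

print(mse)
```

The sample minimum converges to the location at rate 1/n rather than 1/sqrt(n), so the MLE dominates the moment estimator for the location parameter by a wide margin, illustrating why such Monte Carlo comparisons are worth running before picking a method.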
Range distributions in multiply implanted targets
International Nuclear Information System (INIS)
Kostic, S.; Jimenez-Rodriguez, J.J.; Karpuzov, D.S.; Armour, D.G.; Carter, G.; Salford Univ.
1984-01-01
Range distributions in inhomogeneous binary targets have been investigated both theoretically and experimentally. Silicon single crystal targets [(111) orientation] were implanted with 40 keV Pb+ ions to fluences in the range from 5x10^14 to 7.5x10^16 cm^-2 prior to bombardment with 80 keV Kr+ ions to a fluence of 5x10^15 cm^-2. The samples were analysed using high resolution Rutherford backscattering before and after the krypton implantation in order to determine the dependence of the krypton distribution on the amount of lead previously implanted. The theoretical analysis was undertaken using the formalism developed in [1] and the computer simulation was based on the MARLOWE code. The agreement between the experimental, theoretical and computational krypton profiles is very good and the results indicate that accurate prediction of range profiles in inhomogeneous binary targets is possible using available theoretical and computational treatments. (orig.)
Resilient Distributed Estimation Through Adversary Detection
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2018-05-01
This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator ($\\mathcal{FRDE}$) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The $\\mathcal{FRDE}$ algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under $\\mathcal{FRDE}$, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of $\\mathcal{FRDE}$.
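The consensus+innovations update can be sketched for a small ring network in which no single agent is locally observable but the network is globally observable. Constant gains are used for simplicity, whereas the paper's analysis uses decaying innovation weights; the network, gains, and noise level are illustrative, and the adversary-detection (flag-raising) logic is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = np.array([1.0, -2.0])               # unknown parameter
# each agent observes a single linear functional of theta (rank-deficient alone)
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]),
     np.array([[1.0, 1.0]]), np.array([[1.0, -1.0]])]
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring topology

x = [np.zeros(2) for _ in range(4)]
a, b = 0.1, 0.05                            # consensus and innovation gains
for k in range(4000):
    y = [H[i] @ theta + 0.1 * rng.normal(size=1) for i in range(4)]
    # consensus (average neighbors) + innovations (local measurement residual)
    x = [x[i] + a * sum(x[j] - x[i] for j in nbrs[i])
              + b * H[i].T @ (y[i] - H[i] @ x[i])
         for i in range(4)]

print(x[0])   # every agent's estimate approaches theta
```

No agent could identify theta from its own scalar observation; the consensus term circulates the missing components around the ring, which is the global-observability condition the paper's consistency result rests on.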
Distributed Estimation using Bayesian Consensus Filtering
2014-06-06
Statistical distributions applications and parameter estimates
Thomopoulos, Nick T
2017-01-01
This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability. Understanding statistical distributions is fundamental for researchers in almost all disciplines. The informed researcher will select the statistical distribution that best fits the data in the study at hand. Some of the distributions are well known to the general researcher and are in use in a wide variety of ways. Other useful distributions are less understood and are not in common use. The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study. The distributions are for continuous, discrete, and bivariate random variables. In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values. In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...
Wireless sensor networks distributed consensus estimation
Chen, Cailian; Guan, Xinping
2014-01-01
This SpringerBrief evaluates the cooperative effort of sensor nodes to accomplish high-level tasks with sensing, data processing and communication. The metrics of network-wide convergence, unbiasedness, consistency and optimality are discussed through network topology, distributed estimation algorithms and consensus strategy. Systematic analysis reveals that proper deployment of sensor nodes and a small number of low-cost relays (without sensing function) can speed up the information fusion and thus improve the estimation capability of wireless sensor networks (WSNs). This brief also investiga
Bayesian estimation of Weibull distribution parameters
International Nuclear Information System (INIS)
Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.
1994-11-01
In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when the data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method with real data provided by nuclear power plant operation feedback analysis has been realized. (authors). 8 refs., 2 figs., 2 tabs
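As a non-Bayesian baseline for the censored-data setting the paper addresses, the censored Weibull log-likelihood (density terms for observed failures, survival terms for right-censored units) can be maximized directly, here by a crude grid search with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)
k_true, lam_true, n, c = 1.5, 2.0, 2000, 2.5     # shape, scale, censoring time
t = lam_true * rng.weibull(k_true, n)
obs = np.minimum(t, c)
d = (t <= c).astype(float)                        # 1 = failure observed, 0 = censored

def loglik(k, lam):
    z = obs / lam
    # log-density for failures, log-survival exp(-z^k) for censored units
    return np.sum(d * (np.log(k / lam) + (k - 1) * np.log(z) - z**k)
                  + (1 - d) * (-z**k))

ks = np.linspace(0.8, 2.5, 80)
lams = np.linspace(1.0, 3.5, 80)
ll = np.array([[loglik(k, l) for l in lams] for k in ks])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(ks[i], lams[j])
```

With very heavy censoring this likelihood surface flattens, which is the regime where the paper's SEM and prior-informed WLB-SIR approaches earn their keep over direct maximization.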
Estimating the Distribution of Dietary Consumption Patterns
Carroll, Raymond J.
2014-02-01
In the United States the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure. Thus, usual dietary intake is assessed with considerable measurement error. We were interested in estimating the population distribution of the Healthy Eating Index-2005 (HEI-2005), a multi-component dietary quality index involving ratios of interrelated dietary components to energy, among children aged 2-8 in the United States, using a national survey and incorporating survey weights. We developed a highly nonlinear, multivariate zero-inflated data model with measurement error to address this question. Standard nonlinear mixed model software such as SAS NLMIXED cannot handle this problem. We found that taking a Bayesian approach, and using MCMC, resolved the computational issues and doing so enabled us to provide a realistic distribution estimate for the HEI-2005 total score. While our computation and thinking in solving this problem was Bayesian, we relied on the well-known close relationship between Bayesian posterior means and maximum likelihood, the latter not computationally feasible, and thus were able to develop standard errors using balanced repeated replication, a survey-sampling approach.
Quantum partial search for uneven distribution of multiple target items
Zhang, Kun; Korepin, Vladimir
2018-06-01
Quantum partial search algorithm is an approximate search. It aims to find a target block (which has the target items). It runs a little faster than full Grover search. In this paper, we consider quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different number of target items). The algorithm we describe can locate one of the target blocks. Efficiency of the algorithm is measured by number of queries to the oracle. We optimize the algorithm in order to improve efficiency. By perturbation method, we find that the algorithm runs the fastest when target items are evenly distributed in database.
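The global Grover iteration that partial search builds on is easy to simulate with a statevector (partial search itself interleaves such global iterations with local iterations restricted to blocks; that refinement is omitted in this sketch):

```python
import numpy as np

N, target = 256, 3                      # database size, index of the target item
psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition over all items

iters = int(np.floor(np.pi / 4 * np.sqrt(N)))   # optimal Grover iteration count
for _ in range(iters):
    psi[target] *= -1                   # oracle query: flip the target's phase
    psi = 2 * psi.mean() - psi          # diffusion: inversion about the mean
print(psi[target]**2)                   # probability of measuring the target
```

Each iteration costs one oracle query, so query counting here matches the efficiency measure used in the abstract; partial search saves a constant number of these queries by only resolving the target's block rather than its exact index.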
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
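The benefit of targeting can be illustrated with an ordinary Kalman filter on a linear model: at each step, observe either the state component with the largest forecast variance or a random one, and compare the resulting total posterior variance (the model, noise levels, and single-component observation operator are all illustrative; the paper uses the LETKF on a chaotic model):

```python
import numpy as np

rng = np.random.default_rng(8)
d, R = 5, 0.1                                  # state dimension, obs noise variance
F = 0.98 * np.eye(d)                           # stable linear dynamics
Q = np.diag(np.linspace(0.01, 0.5, d))         # heterogeneous process noise

def run(targeted, steps=200):
    P = np.eye(d)
    for _ in range(steps):
        P = F @ P @ F.T + Q                    # forecast covariance
        i = int(P.diagonal().argmax()) if targeted else int(rng.integers(d))
        H = np.zeros((1, d)); H[0, i] = 1.0    # observe one state component
        K = P @ H.T / (H @ P @ H.T + R)        # Kalman gain
        P = (np.eye(d) - K @ H) @ P            # analysis covariance
    return P.trace()

print(run(True), run(False))  # targeted observations leave less total variance
```

Because the covariance recursion here does not depend on the realized data, the comparison isolates exactly the effect of observation placement, mirroring the paper's analytical result for the linear case.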
A Study of Adaptive Detection of Range-Distributed Targets
National Research Council Canada - National Science Library
Gerlach, Karl R
2000-01-01
... to be characterized as complex zero-mean correlated Gaussian random variables. The target's or targets' complex amplitudes are assumed to be distributed across the entire input data block (sensor x range...
Low Complexity Parameter Estimation For Off-the-Grid Targets
Jardak, Seifallah
2015-10-05
In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced complexity super resolution algorithm is proposed. For off-the-grid targets, it uses a low order two dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation error of the proposed estimators achieve the Cramér-Rao lower bound. © 2015 IEEE.
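The coarse-FFT-then-refine structure can be sketched for a single 2-D complex sinusoid (spatial frequency and Doppler), with an alternating zoom search standing in for the paper's iterative joint estimator; the sizes, frequencies, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(9)
N = M = 32
f1, f2 = 0.2137, 0.6412                    # off-grid normalized frequencies
n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
y = np.exp(2j * np.pi * (f1 * n + f2 * m)) + 0.05 * rng.normal(size=(N, M))

# coarse stage: the peak of a low-order 2-D FFT gives an on-grid initial guess
S = np.fft.fft2(y)
i, j = np.unravel_index(np.abs(S).argmax(), S.shape)
g1, g2 = i / N, j / M

def power(a, b):
    """Periodogram of y at continuous frequency (a, b)."""
    return np.abs(np.sum(y * np.exp(-2j * np.pi * (a * n + b * m))))

# refinement: alternating 1-D searches with a window that shrinks each pass,
# pushing the estimate off the FFT grid toward the true frequencies
for it in range(5):
    w = 1.0 / (N * 2**it)
    c = np.linspace(g1 - w, g1 + w, 33)
    g1 = c[np.argmax([power(a, g2) for a in c])]
    c = np.linspace(g2 - w, g2 + w, 33)
    g2 = c[np.argmax([power(g1, b) for b in c])]
print(g1, g2)
```

The coarse FFT costs O(NM log NM) once, and each refinement pass touches only a handful of candidate frequencies, which is the complexity saving over densifying the full 2-D grid.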
Targeting estimation of CCC-GARCH models with infinite fourth moments
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard
In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results for the estimator stating that its limiting distribution is multivariate stable. The rate of consistency of the estimator is slower than √T (as obtained by the quasi-maximum likelihood estimator) and depends on the tails of the data generating process.
Measurement for cobalt target activity and its axial distribution
International Nuclear Information System (INIS)
Li Xingyuan; Chen Zigen.
1985-01-01
The cobalt target activity and its axial distribution are measured in the process of producing the radioactive isotope 60Co by irradiation in HFETR. The cobalt target activity is obtained from data measured at 3.60 m and 4.60 m, the relative axial distribution of the activity is obtained from data measured at 30 cm, and the axial distribution of the target activity (or specific activity) is obtained by combining both sets of data. The difference between this specific activity and the measured result for finished 60Co teletherapy sources is less than +-5%
Distribution of induced activity in tungsten targets
International Nuclear Information System (INIS)
Donahue, R.J.; Nelson, W.R.
1988-09-01
Estimates are made of the induced activity created during high-energy electron showers in tungsten, using the EGS4 code. Photon track lengths, neutron yields and spatial profiles of the induced activity are presented. 8 refs., 9 figs., 1 tab
Estimation of Radar Cross Section of a Target under Track
Directory of Open Access Journals (Sweden)
Hong Sun-Mog
2010-01-01
Full Text Available In allocating a radar beam for tracking a target, it is attempted to maintain the signal-to-noise ratio (SNR) of the signal returning from the illuminated target close to an optimum value for efficient track updates. An estimate of the average radar cross section (RCS) of the target is required in order to adjust the transmitted power, based on the estimate, such that a desired SNR can be realized. In this paper, a maximum-likelihood (ML) approach is presented for estimating the average RCS, and a numerical solution is proposed based on a generalized expectation maximization (GEM) algorithm. The estimation accuracy of the approach is compared to that of a previously reported procedure.
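For a Swerling-I fluctuating target, the per-dwell received power is exponentially distributed with mean proportional to the average RCS, so with a known radar-equation constant and no noise floor the ML estimate reduces to a rescaled sample mean (a drastic simplification of the paper's GEM-based solution; the constant below is invented):

```python
import numpy as np

rng = np.random.default_rng(10)
c = 3.2e-4          # assumed radar-equation constant (gains, range, losses)
sigma_avg = 5.0     # true average RCS (m^2)

# Swerling-I fluctuation: per-dwell power is exponential with mean c * sigma_avg
P = rng.exponential(c * sigma_avg, size=400)

sigma_hat = P.mean() / c   # ML estimator of an exponential mean, rescaled
print(sigma_hat)
```

The relative error of this estimator shrinks as 1/sqrt(number of dwells); the GEM machinery in the paper is needed once the returns pass through thresholded, SNR-dependent track measurements instead of being observed directly.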
Estimating Conditional Distributions by Neural Networks
DEFF Research Database (Denmark)
Kulczycki, P.; Schiøler, Henrik
1998-01-01
Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications
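One kernel-based route to conditional quantiles, in the spirit of the kernel-estimation grounding mentioned above, is a Nadaraya-Watson weighted conditional CDF inverted at the desired level (the data model and bandwidth here are illustrative, and this is the kernel baseline rather than the paper's network construction):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 4000
x = rng.uniform(0, 1, n)
y = 2.0 * x + rng.normal(0, 0.25, n)       # conditional median of y|x is 2x

def cond_quantile(x0, q, h=0.05):
    """q-quantile of y given x ~= x0 via a kernel-weighted empirical CDF."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel weights
    order = np.argsort(y)
    cdf = np.cumsum(w[order]) / w.sum()     # weighted conditional CDF
    return y[order][np.searchsorted(cdf, q)]

print(cond_quantile(0.5, 0.5))   # close to the true conditional median 1.0
```

A network-based estimator replaces the local averaging with a parametric fit, trading the kernel's bandwidth choice for architecture and training choices, but targets the same conditional CDF.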
Estimating Loan-to-value Distributions
DEFF Research Database (Denmark)
Korteweg, Arthur; Sørensen, Morten
2016-01-01
We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filtering...
Estimating the relationship between abundance and distribution
DEFF Research Database (Denmark)
Rindorf, Anna; Lewy, Peter
2012-01-01
based on Euclidean distance to the centre of gravity of the spatial distribution. Only the proportion of structurally empty areas, Lloyd's index, and indices of the distance to the centre of gravity of the spatial distribution are unbiased at all levels of abundance. The remaining indices generate
Distributed estimation based on observations prediction in wireless sensor networks
Bouchoucha, Taha; Ahmed, Mohammed F A; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2015-01-01
We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-01
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed
Effect of Smart Meter Measurements Data On Distribution State Estimation
DEFF Research Database (Denmark)
Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte
2018-01-01
Smart distribution grids with renewable energy based generators and demand response resources (DRR) require accurate state estimators for real time control. Distribution grid state estimators are normally based on accumulated smart meter measurements. However, an increase of measurements in the physical grid can place significant stress not only on the communication infrastructure but also on the control algorithms. This paper aims to propose a methodology to analyze the needed real time smart meter data from low voltage distribution grids and their applicability in distribution state estimation...
Distribution measurement of 60Co target radioactive specific activity
International Nuclear Information System (INIS)
Li Xingyan; Chen Zigen; Ren Min
1994-01-01
The radioactive specific activity distribution of a cobalt-60 target irradiated in HFETR is a key parameter. Using the collimation principle, an underwater measurement device, and a conversion coefficient obtained by experiments, the radioactive specific activity distribution is measured. The uncertainty of the measurement is less than 10%
Method of estimating the reactor power distribution
International Nuclear Information System (INIS)
Mitsuta, Toru; Fukuzaki, Takaharu; Doi, Kazuyori; Kiguchi, Takashi.
1984-01-01
Purpose: To improve the calculation accuracy of the power distribution and thereby improve the reliability of power distribution monitoring. Constitution: In detector-containing strings disposed within the reactor core, movable neutron flux monitors are provided in addition to the conventionally disposed fixed-position neutron monitors. Upon periodical monitoring, a power distribution X1 is calculated from a physical reactor core model. Then, a higher power position X2 is detected by position detectors, and the value X2 is sent to a neutron flux monitor driving device to displace the movable monitors to the higher power position in each of the strings. After displacement, the value X1 is corrected by an amending device using measured values from the movable and fixed monitors, and the corrected value is sent to a reactor core monitoring device. Upon failure of the fixed monitors, the failed position is sent to the monitor driving device and the movable monitors are displaced to that position for measurement. (Sekiya, K.)
Unbiased estimators for spatial distribution functions of classical fluids
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity, closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
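The paper's estimators come from a virial-like identity; as the baseline they are compared against, a traditional histogram-based estimate of the pair correlation g(r) can be sketched as follows. All parameter values are illustrative; for an ideal gas (uniform independent particles) the estimate should hover around 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N ideal-gas particles in a periodic cubic box.
N, L = 500, 10.0
pos = rng.uniform(0.0, L, size=(N, 3))

def pair_correlation(pos, L, bins=20):
    """Histogram-based estimate of g(r) with periodic boundaries (r < L/2)."""
    r_max = L / 2
    # minimum-image pairwise displacement vectors
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= L * np.round(diff / L)
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pos), k=1)          # each pair counted once
    counts, edges = np.histogram(d[iu], bins=bins, range=(0.0, r_max))
    # normalize by the ideal-gas expected pair count in each spherical shell
    rho = len(pos) / L ** 3
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = 0.5 * len(pos) * rho * shell
    return 0.5 * (edges[1:] + edges[:-1]), counts / ideal

r, g = pair_correlation(pos, L)
```

The innermost bins contain few pairs and are noisy, which is exactly the shortcoming of histogram estimators that the paper's unbiased estimators address.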
Maximum likelihood estimation of phase-type distributions
DEFF Research Database (Denmark)
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on maximum likelihood estimation of the parameters of phase-type distributions, for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...
Control and Estimation of Distributed Parameter Systems
Kappel, F; Kunisch, K
1998-01-01
Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.
Ballistic model to estimate microsprinkler droplet distribution
Directory of Open Access Journals (Sweden)
Conceição Marco Antônio Fonseca
2003-01-01
Full Text Available Experimental determination of microsprinkler droplets is difficult and time-consuming. This determination, however, could be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while simulated diameters varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.
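The link between drop size and landing distance can be illustrated with a generic ballistic integration (this is not the SIRIAS model; drag coefficient, launch speed, angle, and emitter height are all assumed values): smaller droplets have a larger drag-to-mass ratio, decelerate faster, and land closer to the emitter.

```python
import numpy as np

# Illustrative constants (assumptions, not from the paper)
rho_air, rho_w, g, Cd = 1.2, 1000.0, 9.81, 0.45

def landing_distance(d_mm, v0=15.0, angle_deg=25.0, h0=0.3, dt=1e-4):
    """Euler-integrate a spherical droplet's flight with quadratic drag."""
    d = d_mm / 1000.0
    m = rho_w * np.pi * d ** 3 / 6          # droplet mass
    area = np.pi * d ** 2 / 4               # frontal area
    k = 0.5 * rho_air * Cd * area / m       # drag acceleration per v^2
    vx = v0 * np.cos(np.radians(angle_deg))
    vy = v0 * np.sin(np.radians(angle_deg))
    x, y = 0.0, h0
    while y > 0.0:
        v = np.hypot(vx, vy)
        vx -= k * v * vx * dt
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

near = landing_distance(0.3)   # small droplet
far = landing_distance(1.3)    # large droplet
```

Inverting this monotone size-to-distance relationship is what lets a ballistic model recover the droplet diameter distribution from radial catch data.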
Béjar Haro, Benjamín
2013-01-01
Abstract The proliferation of wireless sensor networks and the variety of envisioned applications associated with them has motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of the researchers is that of target localization where the nodes of the network try to estimate the position of an unknown target that lies within its coverage area. Particularly challenging is the problem of es...
Impact of microbial count distributions on human health risk estimates
DEFF Research Database (Denmark)
Ribeiro Duarte, Ana Sofia; Nauta, Maarten
2015-01-01
Quantitative microbiological risk assessment (QMRA) is influenced by the choice of the probability distribution used to describe pathogen concentrations, as this may eventually have a large effect on the distribution of doses at exposure. When fitting a probability distribution to microbial enumeration data, several factors may have an impact on the accuracy of that fit. Analysis of the best statistical fits of different distributions alone does not provide a clear indication of the impact in terms of risk estimates. Thus, in this study we focus on the impact of fitting microbial distributions on risk estimates, at two different concentration scenarios and at a range of prevalence levels. By using five different parametric distributions, we investigate whether different characteristics of a good fit are crucial for an accurate risk estimate. Among the factors studied are the importance...
An observer-theoretic approach to estimating neutron flux distribution
International Nuclear Information System (INIS)
Park, Young Ho; Cho, Nam Zin
1989-01-01
State feedback control provides many advantages such as stabilization and improved transient response. However, when the state feedback control is considered for spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the flux spatial distribution using only limited flux measurements. It is based on the Luenberger observer in control theory, extended to the distributed parameter systems such as the space-time reactor dynamics equation. The results of the application of the method to simple reactor models showed that the flux distribution is estimated by the observer very efficiently using information from only a few sensors
Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.
Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E
2017-12-11
Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
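The "current linear mathematical approaches" the abstract refers to can be illustrated with the classical occupancy plot: across regions with different target content, the drug-induced reduction in binding potential is proportional to the baseline value, and the regression slope is the occupancy. The regional values below are made-up toy numbers, and the simple through-the-origin least squares here is exactly the kind of method that ignores the covariance structure the paper improves on.

```python
import numpy as np

rng = np.random.default_rng(1)
true_occ = 0.6                                        # assumed true occupancy
bp_base = np.array([0.5, 1.0, 1.8, 2.5, 3.2, 4.0])     # toy regional binding potentials
bp_post = (1 - true_occ) * bp_base + rng.normal(0, 0.05, bp_base.size)

# occupancy plot: regress (baseline - post) on baseline, slope = occupancy
delta = bp_base - bp_post
occ_hat = float(delta @ bp_base / (bp_base @ bp_base))  # OLS through the origin
```

Weighting each region by the inverse of its (co)variance instead of treating all regions equally is the step toward the maximum likelihood formulation described in the abstract.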
Multivariate phase type distributions - Applications and parameter estimation
DEFF Research Database (Denmark)
Meisch, David
The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution, alternative distributions are at hand and well understood, many of these belonging ... and statistical inference, is the multivariate normal distribution. Unfortunately, only little is known about the general class of multivariate phase-type distributions. Considering the results concerning parameter estimation and inference theory of univariate phase-type distributions, the class of multivariate ... projects and depend on reliable cost estimates. The Successive Principle is a group analysis method primarily used for analyzing medium to large projects in relation to cost or duration. We believe that the mathematical modeling used in the Successive Principle can be improved. We suggested a novel...
Estimation of particle size distribution of nanoparticles from electrical ...
Indian Academy of Sciences (India)
2018-02-02
Feb 2, 2018 ... An indirect method of estimation of size distribution of nanoparticles in a nanocomposite is ... The present approach exploits DC electrical current–voltage ... the sizes of nanoparticles (NPs) by electrical characterization.
Pilots' Attention Distributions Between Chasing a Moving Target and a Stationary Target.
Li, Wen-Chin; Yu, Chung-San; Braithwaite, Graham; Greaves, Matthew
2016-12-01
Attention plays a central role in cognitive processing; ineffective attention may induce accidents in flight operations. The objective of the current research was to examine military pilots' attention distributions between chasing a moving target and a stationary target. In the current research, 37 mission-ready F-16 pilots participated. Subjects' eye movements were collected by a portable head-mounted eye-tracker during tactical training in a flight simulator. The scenarios of chasing a moving target (air-to-air) and a stationary target (air-to-surface) consist of three operational phases: searching, aiming, and lock-on to the targets. The findings demonstrated significant differences in pilots' percentage of fixation during the searching phase between air-to-air (M = 37.57, SD = 5.72) and air-to-surface (M = 33.54, SD = 4.68). Fixation duration can indicate pilots' sustained attention to the trajectory of a dynamic target during air combat maneuvers. Aiming at the stationary target resulted in larger pupil size (M = 27,105, SD = 6565), reflecting higher cognitive loading than aiming at the dynamic target (M = 23,864, SD = 8762). Pilots' visual behavior is not only closely related to attention distribution, but also significantly associated with task characteristics. Military pilots demonstrated various visual scan patterns for searching and aiming at different types of targets based on the research settings of a flight simulator. The findings will facilitate system designers' understanding of military pilots' cognitive processes during tactical operations. They will assist human-centered interface design to improve pilots' situational awareness. The application of an eye-tracking device integrated with a flight simulator is a feasible and cost-effective intervention to improve the efficiency and safety of tactical training.Li W-C, Yu C-S, Braithwaite G, Greaves M. Pilots' attention distributions between chasing a moving target and a stationary target. Aerosp Med
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
Information-theoretic methods for estimating complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics, and computing technology proves to be very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur...
Distributive estimation of frequency selective channels for massive MIMO systems
Zaib, Alam
2015-12-28
We consider frequency selective channel estimation in the uplink of massive MIMO-OFDM systems, where our major concern is complexity. A low-complexity distributed LMMSE algorithm is proposed that attains near-optimal channel impulse response (CIR) estimates from noisy observations at the receive antenna array. In the proposed method, every antenna estimates the CIRs of its neighborhood, followed by recursive sharing of estimates with immediate neighbors. At each step, every antenna calculates the weighted average of the shared estimates, which converges to the near-optimal LMMSE solution. The simulation results validate the near-optimal performance of the proposed algorithm in terms of mean square error (MSE). © 2015 EURASIP.
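The recursive neighbor-sharing step has the flavor of average consensus: each node repeatedly replaces its value with a weighted average of itself and its immediate neighbors, and all nodes converge to the network-wide average. A toy ring-topology sketch (not the paper's algorithm or weights):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                   # antennas on a ring (toy topology)
x = 1.0 + rng.normal(0, 0.5, n)         # local noisy estimates of one CIR tap
target = x.mean()                       # centralized averaging benchmark

for _ in range(200):
    left, right = np.roll(x, 1), np.roll(x, -1)
    x = 0.5 * x + 0.25 * (left + right)  # weighted average with the two neighbors
```

Because the update matrix is doubly stochastic, the average is preserved at every step while the disagreement between nodes shrinks geometrically.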
Targeted maximum likelihood estimation for a binary treatment: A tutorial.
Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E
2018-04-23
When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
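TMLE itself adds a targeting step on top of machine-learning fits; as a compact illustration of the double-robust idea it builds on, here is an augmented inverse-probability-weighted (AIPW) estimate of the average treatment effect on synthetic data. The data-generating values, hand-rolled logistic and linear fits, and sample size are all assumptions for the sketch, not the tutorial's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
w = rng.normal(size=n)                        # confounder
p_true = 1 / (1 + np.exp(-0.5 * w))           # true propensity score
a = rng.binomial(1, p_true)                   # binary treatment
y = 1.0 * a + 0.8 * w + rng.normal(0, 1, n)   # outcome; true ATE = 1.0

# exposure model: logistic regression fitted by a few Newton-Raphson steps
X = np.column_stack([np.ones(n), w])
beta = np.zeros(2)
for _ in range(8):
    ph = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (a - ph)
    hess = (X * (ph * (1 - ph))[:, None]).T @ X
    beta += np.linalg.solve(hess, grad)
phat = 1 / (1 + np.exp(-X @ beta))

# outcome model: ordinary least squares on treatment and confounder
Z = np.column_stack([np.ones(n), a, w])
b = np.linalg.lstsq(Z, y, rcond=None)[0]
q1 = b[0] + b[1] + b[2] * w                   # predicted outcome under a = 1
q0 = b[0] + b[2] * w                          # predicted outcome under a = 0

# doubly robust (AIPW) estimate: consistent if either model is correct
ate = float(np.mean(q1 - q0 + a * (y - q1) / phat
                    - (1 - a) * (y - q0) / (1 - phat)))
```

The correction terms weighted by `phat` are what protect the estimate when the outcome model is misspecified, which is the double-robustness property the abstract describes.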
A Geology-Based Estimate of Connate Water Salinity Distribution
2014-09-01
poses serious environmental concerns if connate water is mobilized into shallow aquifers or surface water systems. Estimating the distribution of ... groundwater flow and salinity transport near the Herbert Hoover Dike (HHD) surrounding Lake Okeechobee in Florida. The simulations were conducted using the ... on the geologic configuration at equilibrium, and the horizontal salinity distribution is strongly linked to aquifer connectivity because ...
Comparison of estimation methods for fitting Weibull distribution to ...
African Journals Online (AJOL)
Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
Comparing four methods to estimate usual intake distributions
Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.
2011-01-01
Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data
Effect of Smart Meter Measurements Data On Distribution State Estimation
DEFF Research Database (Denmark)
Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte
2018-01-01
Increasing the number of measurements in the physical grid can place significant stress not only on the communication infrastructure but also on the control algorithms. This paper proposes a methodology to analyze the real-time smart meter data needed from low-voltage distribution grids and their applicability in distribution state estimation...
Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation
Jardak, Seifallah
2014-04-01
Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased-array radar by providing better parametric identifiability, achieving higher spatial resolution, and allowing the design of complex beampatterns. To avoid jamming and enhance the signal-to-noise ratio, it is often of interest to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem, and then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite-alphabet constant- and non-constant-envelope symbols. To generate finite-alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of symbols in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite-alphabet symbols is derived. The second part of this thesis covers target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance. However, it requires a two-dimensional search, so its computational complexity is prohibitively high. We therefore propose a reduced-complexity, optimum-performance algorithm that uses the two-dimensional fast Fourier transform to jointly estimate the spatial location
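The Gaussian-to-finite-alphabet mapping can be sketched for QPSK: partition the standard normal density into M equiprobable regions (the boundaries are the standard-normal quartiles) and assign each region one constant-envelope constellation point. This is a simplified illustration of the idea, not the thesis's exact construction.

```python
import numpy as np

rng = np.random.default_rng(8)
M = 4                                          # QPSK alphabet size
g = rng.normal(size=10_000)                    # easily generated Gaussian samples

bounds = np.array([-0.6745, 0.0, 0.6745])      # standard-normal quartiles
region = np.searchsorted(bounds, g)            # region index 0..3, ~equiprobable
# map each region to one QPSK point on the unit circle (constant envelope)
symbols = np.exp(1j * (2 * np.pi * region / M + np.pi / M))
```

Because the mapping is a fixed function of the Gaussian input, correlation imposed on the Gaussian variables carries over (in a computable way) to the finite-alphabet symbols, which is the relationship the thesis derives.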
Moving target tracking through distributed clustering in directional sensor networks.
Enayet, Asma; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif
2014-12-18
The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works.
Distributed estimation based on observations prediction in wireless sensor networks
Bouchoucha, Taha
2015-03-19
We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
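The benefit of quantizing innovations instead of raw observations can be checked numerically: when observations are predictable, the prediction error occupies a much smaller range, so a fixed-rate quantizer can use a finer step. The sketch below uses an AR(1) signal and, as a simplification of a full DPCM loop, lets the reconstruction use the true previous sample; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def uniform_quantize(x, bits, lo, hi):
    """Mid-rise uniform quantizer with 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

# strongly correlated AR(1) observations: one-step prediction errors are small
n, rho = 50_000, 0.95
e = rng.normal(0, np.sqrt(1 - rho ** 2), n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + e[t]

bits = 4
xq = uniform_quantize(x, bits, -4.0, 4.0)             # quantize raw observations
innov = x[1:] - rho * x[:-1]                          # prediction errors
innov_q = uniform_quantize(innov, bits, -1.5, 1.5)    # smaller range, finer step
x_rec = rho * x[:-1] + innov_q                        # reconstruct at the FC

mse_raw = float(np.mean((xq - x) ** 2))
mse_innov = float(np.mean((x_rec - x[1:]) ** 2))
```

With the same bit budget, the innovation path yields a markedly smaller reconstruction MSE, mirroring the letter's analytic conclusion.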
SIMPLE AND STRONGLY CONSISTENT ESTIMATOR OF STABLE DISTRIBUTIONS
Directory of Open Access Journals (Sweden)
Cira E. Guevara Otiniano
2016-06-01
Full Text Available Stable distributions are extensively used to analyze earnings of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational time is minimal, so it can be used to initialize computationally intensive procedures such as maximum likelihood. We tested the efficacy of the estimator by the Monte Carlo method with random samples of size n. We also included applications to three data sets.
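The paper's estimator is not reproduced here, but the idea of a cheap, consistent scale estimate for a symmetric stable law is easy to demonstrate in the Cauchy case (stable with α = 1), where the interquartile range equals exactly twice the scale parameter:

```python
import numpy as np

rng = np.random.default_rng(5)
gamma = 2.0                                  # true scale parameter
x = gamma * rng.standard_cauchy(100_000)     # symmetric stable sample, alpha = 1

q1, q3 = np.quantile(x, [0.25, 0.75])
gamma_hat = float((q3 - q1) / 2)             # IQR/2 is consistent for the scale
```

Quantile-based estimates like this cost one sort, which is why they make good starting values for an iterative maximum likelihood fit.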
Distributed Dynamic State Estimation with Extended Kalman Filter
Energy Technology Data Exchange (ETDEWEB)
Du, Pengwei; Huang, Zhenyu; Sun, Yannan; Diao, Ruisheng; Kalsi, Karanjit; Anderson, Kevin K.; Li, Yulan; Lee, Barry
2011-08-04
Increasing complexity associated with large-scale renewable resources and novel smart-grid technologies necessitates real-time monitoring and control. Our previous work applied the extended Kalman filter (EKF) with the use of phasor measurement unit (PMU) data for dynamic state estimation. However, its high computation complexity creates significant challenges for real-time applications. In this paper, the problem of distributed dynamic state estimation is investigated. A domain decomposition method is proposed to utilize decentralized computing resources. The performance of distributed dynamic state estimation is tested on a 16-machine, 68-bus test system.
Graziani, Rebecca; Guindani, Michele; Thall, Peter F.
2015-01-01
Summary The effect of a targeted agent on a cancer patient's clinical outcome putatively is mediated through the agent's effect on one or more early biological events. This is motivated by pre-clinical experiments with cells or animals that identify such events, represented by binary or quantitative biomarkers. When evaluating targeted agents in humans, central questions are whether the distribution of a targeted biomarker changes following treatment, the nature and magnitude of this change, and whether it is associated with clinical outcome. Major difficulties in estimating these effects are that a biomarker's distribution may be complex, vary substantially between patients, and have complicated relationships with clinical outcomes. We present a probabilistically coherent framework for modeling and estimation in this setting, including a hierarchical Bayesian nonparametric mixture model for biomarkers that we use to define a functional profile of pre-versus-post treatment biomarker distribution change. The functional is similar to the receiver operating characteristic used in diagnostic testing. The hierarchical model yields clusters of individual patient biomarker profile functionals, and we use the profile as a covariate in a regression model for clinical outcome. The methodology is illustrated by analysis of a dataset from a clinical trial in prostate cancer using imatinib to target platelet-derived growth factor, with the clinical aim to improve progression-free survival time. PMID:25319212
Distributed state estimation for multi-agent based active distribution networks
Nguyen, H.P.; Kling, W.L.
2010-01-01
Along with the large-scale implementation of distributed generators, the current distribution networks have changed gradually from passive to active operation. State estimation plays a vital role to facilitate this transition. In this paper, a suitable state estimation method for the active network
Estimation of expected value for lognormal and gamma distributions
International Nuclear Information System (INIS)
White, G.C.
1978-01-01
Concentrations of environmental pollutants tend to follow positively skewed frequency distributions. Two such density functions are the gamma and lognormal. Minimum variance unbiased estimators of the expected value for both densities are available. The small sample statistical properties of each of these estimators were compared for its own distribution, as well as the other distribution to check the robustness of the estimator. Results indicated that the arithmetic mean provides an unbiased estimator when the underlying density function of the sample is either lognormal or gamma, and that the achieved coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two. Further Monte Carlo simulations were conducted to study the robustness of the above estimators by simulating a lognormal or gamma distribution with the expected value of a particular observation selected from a uniform distribution before the lognormal or gamma observation is generated. Again, the arithmetic mean provides an unbiased estimate of expected value, and the coverage of the confidence interval is greater than 75 percent for coefficients of variation less than two
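The paper's first finding, that the arithmetic mean is unbiased for the expected value under both densities, is easy to reproduce with a small Monte Carlo check (the parameter values and sample sizes below are illustrative, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(6)
reps, n = 2000, 30                          # 2000 simulated samples of size 30

# lognormal: E[X] = exp(mu + sigma^2 / 2)
mu, sigma = 0.0, 1.0
ln_true = np.exp(mu + sigma ** 2 / 2)
ln_bias = float(rng.lognormal(mu, sigma, size=(reps, n)).mean(axis=1).mean()
                - ln_true)

# gamma: E[X] = shape * scale
shape, scale = 2.0, 1.5
g_true = shape * scale
g_bias = float(rng.gamma(shape, scale, size=(reps, n)).mean(axis=1).mean()
               - g_true)
```

The average of the sample means sits on the true expected value for both skewed densities, up to Monte Carlo noise.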
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
Flow distribution in the accelerator-production-of-tritium target
International Nuclear Information System (INIS)
Siebe, D.A.; Spatz, T.L.; Pasamehmetoglu, K.O.; Sherman, M.P.
1999-01-01
Achieving nearly uniform flow distributions in the accelerator production of tritium (APT) target structures is an important design objective. Manifold effects tend to cause a nonuniform distribution in flow systems of this type, although nearly even distribution can be achieved. A program of hydraulic experiments is underway to provide a database for validation of calculational methodologies that may be used for analyzing this problem and to evaluate the approach with the most promise for achieving a nearly even flow distribution. Data from the initial three tests are compared to predictions made using four calculational methods. The data show that optimizing the ratio of the supply-to-return-manifold areas can produce an almost even flow distribution in the APT ladder assemblies. The calculations compare well with the data for ratios of the supply-to-return-manifold areas spanning the optimum value. Thus, the results to date show that a nearly uniform flow distribution can be achieved by carefully sizing the supply and return manifolds and that the calculational methods available are adequate for predicting the distributions through a range of conditions
Estimation of the target stem-cell population size in chronic myeloid leukemogenesis
International Nuclear Information System (INIS)
Radivoyevitch, T.; Ramsey, M.J.; Tucker, J.D.
1999-01-01
Estimation of the number of hematopoietic stem cells capable of causing chronic myeloid leukemia (CML) is relevant to the development of biologically based risk models of radiation-induced CML. Through a comparison of the age structure of CML incidence data from the Surveillance, Epidemiology, and End Results (SEER) Program with the age structure of chromosomal translocations found in healthy subjects, the number of CML target stem cells is estimated for individuals above 20 years of age. The estimation involves three steps. First, CML incidence among adults is fitted to an exponentially increasing function of age. Next, assuming a relatively short waiting-time distribution between BCR-ABL induction and the appearance of CML, an exponential age function with rate constants fixed to the values found for CML is fitted to the translocation data. Finally, assuming that translocations are equally likely to occur between any two points in the genome, the parameter estimates found in the first two steps are used to estimate the number of target stem cells for CML. The population-averaged estimates of this number are found to be 1.86 × 10^8 for men and 1.21 × 10^8 for women; the 95% confidence intervals of these estimates are (1.34 × 10^8, 2.50 × 10^8) and (0.84 × 10^8, 1.83 × 10^8), respectively. (orig.)
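The first step of the procedure can be sketched on synthetic data. The ages, incidence values and rate constant below are illustrative stand-ins, not SEER data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Sketch of step 1: fit adult incidence I(t) = a * exp(b * t) to age-specific
# rates.  The rate constant b would then be fixed when fitting the
# translocation-frequency data (step 2, not shown).
ages = np.arange(25, 80, 5, dtype=float)
b_true, a_true = 0.04, 0.2                        # hypothetical values
incidence = a_true * np.exp(b_true * ages) * rng.lognormal(0.0, 0.05, ages.size)

def exp_model(t, a, b):
    return a * np.exp(b * t)

(a_hat, b_hat), _ = curve_fit(exp_model, ages, incidence, p0=(0.1, 0.03))
print(f"estimated rate constant b = {b_hat:.3f} (true {b_true})")
```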
Estimation of modal parameters using bilinear joint time frequency distributions
Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.
2007-07-01
In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The smoothed pseudo Wigner-Ville distribution, a member of Cohen's class of distributions, is used to decouple the vibration modes completely so that each mode can be studied separately. This distribution reduces the cross-terms that are troublesome in the Wigner-Ville distribution while retaining its resolution. The method was applied to highly damped systems, and the results were superior to those obtained with other conventional methods.
Feasibility of estimating generalized extreme-value distribution of floods
International Nuclear Information System (INIS)
Ferreira de Queiroz, Manoel Moises
2004-01-01
Flood frequency analysis with the generalized extreme-value probability distribution (GEV) has found increased application in recent years, given its flexibility in covering the three asymptotic forms of extreme-value distribution derived from different initial probability distributions. Estimation of the higher quantiles of floods is usually accomplished by extrapolating one of the three inverse forms of the GEV distribution, fitted to the experimental data, to return periods much longer than those actually observed. This paper studies the feasibility of fitting the GEV distribution by moments of linear combinations of higher-order statistics (LH moments) using synthetic annual flood series of varying characteristics and lengths. Since hydrologic events in nature, such as daily discharge, take finite values, their annual maxima are expected to follow the bounded asymptotic form of the GEV distribution. Synthetic annual flood series were therefore obtained from stochastic sequences of 365 daily discharges generated by Monte Carlo simulation from a bounded parent probability distribution underlying the bounded GEV distribution. The results show that parameter estimation by LH moments, applied to annual flood samples of less than 100 years drawn from a bounded parent distribution, may indicate any of the forms of extreme-value distribution, not just the bounded form as expected, and often with large uncertainty in the fitted parameters. A frequency analysis based on the GEV distribution and LH moments of annual flood series of lengths varying between 13 and 73 years, observed at 88 gauge stations on the Parana River in Brazil, indicated all three forms of the GEV distribution. (Author)
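The simulation setup can be sketched with ordinary maximum-likelihood fitting (the paper uses LH moments; that estimator is not reproduced here). A beta-distributed daily flow is an assumed stand-in for the bounded parent distribution:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Annual maxima of 365 simulated daily discharges per year, then a GEV fitted
# to the annual-flood sample.  In scipy's convention a positive shape c
# corresponds to the bounded (Weibull-type) tail.
n_years = 80
daily = rng.beta(2.0, 5.0, size=(n_years, 365)) * 1000.0   # bounded daily flows
annual_max = daily.max(axis=1)

c, loc, scale = genextreme.fit(annual_max)
q100 = genextreme.ppf(1 - 1 / 100, c, loc, scale)          # 100-year flood
print(f"shape={c:.3f}, 100-year quantile={q100:.1f}")
```

Even though the parent is bounded, the fitted shape for a short sample can land in any of the three GEV sub-families, which is the uncertainty the abstract highlights.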
A Bayesian nonparametric estimation of distributions and quantiles
International Nuclear Information System (INIS)
Poern, K.
1988-11-01
The report describes a Bayesian, nonparametric method for the estimation of a distribution function and its quantiles. The method, which presupposes random sampling, is nonparametric, so the user has to specify a prior distribution on a space of distributions (and not on a parameter space). In the current application, where the method is used to estimate the uncertainty of a parametric calculational model, the Dirichlet prior distribution is to a large extent determined by the first batch of Monte Carlo realizations. In this case the results of the estimation technique are very similar to the conventional empirical distribution function. The resulting posterior distribution is also Dirichlet, and thus facilitates the determination of probability (confidence) intervals at any given point in the space of interest. Another advantage is that the posterior distribution of a specified quantile can also be derived and utilized to determine a probability interval for that quantile. The method was devised for use in the PROPER code package for uncertainty and sensitivity analysis. (orig.)
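The quantile-interval idea can be sketched with the simplest special case of a Dirichlet posterior, the Bayesian bootstrap: posterior draws of the CDF are obtained by Dirichlet-weighting the sample, and each draw yields one value of the quantile of interest. The sample below is a hypothetical batch of Monte Carlo realizations:

```python
import numpy as np

rng = np.random.default_rng(11)

sample = np.sort(rng.lognormal(0.0, 0.5, size=200))   # stand-in MC realizations

def weighted_quantile(x, w, q):
    # Quantile of a discrete distribution with atoms x (sorted) and weights w.
    return x[np.searchsorted(np.cumsum(w), q)]

draws = []
for _ in range(2000):
    w = rng.dirichlet(np.ones(sample.size))           # one posterior CDF draw
    draws.append(weighted_quantile(sample, w, 0.95))

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% credible interval for the 0.95 quantile: ({lo:.2f}, {hi:.2f})")
```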
Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution
Directory of Open Access Journals (Sweden)
Emmanuel Kidando
2017-01-01
Full Text Available Multistate models, that is, models with more than one component distribution, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to an unbounded number of components. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. The Markov Chain Monte Carlo (MCMC) sampling technique was then employed to estimate the posterior distribution of the parameters. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in accounting for complex mixture distributions of travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of the onset and offset of congestion periods.
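The stick-breaking construction behind the DPMM can be sketched in a few lines: each weight breaks off a Beta-distributed fraction of the remaining stick, so the number of components with non-negligible weight is determined by the data and prior rather than fixed a priori (the truncation at six mirrors the abstract's cap):

```python
import numpy as np

rng = np.random.default_rng(5)

def stick_breaking(alpha, n_components, rng):
    # beta_k ~ Beta(1, alpha); weight_k = beta_k * prod_{j<k} (1 - beta_j)
    betas = rng.beta(1.0, alpha, size=n_components)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

weights = stick_breaking(alpha=1.0, n_components=6, rng=rng)  # truncated at 6
print(weights, weights.sum())
```

Because of the truncation, the weights sum to slightly less than one; the remainder is the mass assigned to components beyond the cap.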
Estimation of thermochemical behavior of spallation products in mercury target
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, Kaoru; Kaminaga, Masanori; Haga, Katsuhiro; Kinoshita, Hidetaka; Aso, Tomokazu; Teshigawara, Makoto; Hino, Ryutaro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2002-02-01
In order to examine the radiation safety of a spallation mercury target system, and especially its source-term evaluation, it is necessary to clarify the chemical forms of the spallation products generated by the spallation reaction with the proton beam. The chemical forms of the spallation products in mercury, which contains large amounts of them, were estimated using binary phase diagrams and thermochemical equilibrium calculations based on the calculated amounts of spallation products. The calculations showed that mercury would dissolve Al, As, B, Be, Bi, C, Co, Cr, Fe, Ga, Ge, Ir, Mo, Nb, Os, Re, Ru, Sb, Si, Ta, Tc, V and W in the elemental state, and Ag, Au, Ba, Br, Ca, Cd, Ce, Cl, Cs, Cu, Dy, Er, Eu, F, Gd, Hf, Ho, I, In, K, La, Li, Lu, Mg, Mn, Na, Nd, Ni, O, Pb, Pd, Pr, Pt, Rb, Rh, S, Sc, Se, Sm, Sn, Sr, Tb, Te, Ti, Tl, Tm, Y, Yb, Zn and Zr in the form of inorganic mercury compounds. For As, Be, Co, Cr, Fe, Ge, Ir, Mo, Nb, Os, Pt, Re, Ru, Se, Ta, V, W and Zr, precipitation could occur as the amounts of spallation products increase with the operation time of the spallation target system. In addition, beryllium-7 (Be-7), which is produced by the spallation reaction of oxygen in the cooling water of the safety hull, is the main source of external exposure during maintenance of the cooling loop. Based on a thermochemical equilibrium calculation for the Be-H2O binary system, the chemical forms of Be in the cooling water were estimated: Be would exist as cations such as BeOH+, BeO+ and Be2+ at Be mole fractions below 10^-8 in the cooling water. (author)
Estimation of thermochemical behavior of spallation products in mercury target
International Nuclear Information System (INIS)
Kobayashi, Kaoru; Kaminaga, Masanori; Haga, Katsuhiro; Kinoshita, Hidetaka; Aso, Tomokazu; Teshigawara, Makoto; Hino, Ryutaro
2002-02-01
In order to examine the radiation safety of a spallation mercury target system, and especially its source-term evaluation, it is necessary to clarify the chemical forms of the spallation products generated by the spallation reaction with the proton beam. The chemical forms of the spallation products in mercury, which contains large amounts of them, were estimated using binary phase diagrams and thermochemical equilibrium calculations based on the calculated amounts of spallation products. The calculations showed that mercury would dissolve Al, As, B, Be, Bi, C, Co, Cr, Fe, Ga, Ge, Ir, Mo, Nb, Os, Re, Ru, Sb, Si, Ta, Tc, V and W in the elemental state, and Ag, Au, Ba, Br, Ca, Cd, Ce, Cl, Cs, Cu, Dy, Er, Eu, F, Gd, Hf, Ho, I, In, K, La, Li, Lu, Mg, Mn, Na, Nd, Ni, O, Pb, Pd, Pr, Pt, Rb, Rh, S, Sc, Se, Sm, Sn, Sr, Tb, Te, Ti, Tl, Tm, Y, Yb, Zn and Zr in the form of inorganic mercury compounds. For As, Be, Co, Cr, Fe, Ge, Ir, Mo, Nb, Os, Pt, Re, Ru, Se, Ta, V, W and Zr, precipitation could occur as the amounts of spallation products increase with the operation time of the spallation target system. In addition, beryllium-7 (Be-7), which is produced by the spallation reaction of oxygen in the cooling water of the safety hull, is the main source of external exposure during maintenance of the cooling loop. Based on a thermochemical equilibrium calculation for the Be-H2O binary system, the chemical forms of Be in the cooling water were estimated: Be would exist as cations such as BeOH+, BeO+ and Be2+ at Be mole fractions below 10^-8 in the cooling water. (author)
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
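The minimum-distance idea can be illustrated with a simplified sketch (not the paper's algorithm): many small datasets share a common shape but have random locations; after centering each dataset, the pooled residuals are matched to a candidate model by minimizing a Cramér-von Mises-type distance. All sizes and parameter values below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(2)

# 300 small datasets (n = 8 each) with random locations, common scale 2.0.
datasets = [rng.normal(loc=rng.normal(0, 5), scale=2.0, size=8) for _ in range(300)]

# Center each dataset, pool the residuals, and build the empirical CDF.
# Note: centering by the sample mean shrinks the residual scale by
# sqrt((n-1)/n), so the minimizer targets ~2.0 * sqrt(7/8) ≈ 1.87.
pooled = np.sort(np.concatenate([d - d.mean() for d in datasets]))
ecdf = (np.arange(pooled.size) + 0.5) / pooled.size

def cvm_distance(sigma):
    return np.mean((norm.cdf(pooled, scale=sigma) - ecdf) ** 2)

res = minimize_scalar(cvm_distance, bounds=(0.1, 10.0), method="bounded")
print(f"minimum-distance sigma estimate: {res.x:.2f}")
```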
Nearest Neighbor Estimates of Entropy for Multivariate Circular Distributions
Directory of Open Access Journals (Sweden)
Neeraj Misra
2010-05-01
Full Text Available In molecular sciences, the estimation of entropies of molecules is important for the understanding of many chemical and biological processes. Motivated by these applications, we consider the problem of estimating the entropies of circular random vectors and introduce nonparametric estimators based on the circular distances between n sample points and their k-th nearest neighbors (NN), where k (≤ n − 1) is a fixed positive integer. The proposed NN estimators are based on two different circular distances, and are proven to be asymptotically unbiased and consistent. The performance of one of the circular-distance estimators is investigated and compared with that of the established Euclidean-distance NN estimator using Monte Carlo samples from an analytic distribution of six circular variables of exactly known entropy and a large sample of seven internal-rotation angles in the tartaric acid molecule, obtained by a realistic molecular-dynamics simulation.
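A one-dimensional sketch in the spirit of the abstract, using the Kozachenko-Leonenko form with a wrap-around distance (the paper's own estimators and their exact corrections are not reproduced here):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(4)

def circular_knn_entropy(theta, k=4):
    # H_hat = digamma(n) - digamma(k) + mean(log(2 * eps_k)),
    # where eps_k is the circular distance to the k-th nearest neighbour.
    n = theta.size
    diff = np.abs(theta[:, None] - theta[None, :])
    dist = np.minimum(diff, 2 * np.pi - diff)          # wrap-around distance
    eps = np.sort(dist, axis=1)[:, k]                  # column 0 is the point itself
    return digamma(n) - digamma(k) + np.mean(np.log(2 * eps))

# Uniform on the circle: true entropy is log(2*pi) ≈ 1.838.
theta = rng.uniform(0, 2 * np.pi, size=1000)
print(circular_knn_entropy(theta), np.log(2 * np.pi))
```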
Distributions of component failure rates estimated from LER data
International Nuclear Information System (INIS)
Atwood, C.L.
1985-01-01
Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter gamma distributions. In this study, a more complicated distributional form is considered, a mixture of gammas. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single gamma distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution. 9 refs
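The single-gamma baseline of such an analysis can be sketched by moment-matching a two-parameter gamma to simulated plant-to-plant failure rates. The shape and scale values below are illustrative, not LER data:

```python
import numpy as np

rng = np.random.default_rng(6)

# One failure rate per plant, drawn from a gamma population.
shape_true, scale_true = 2.5, 4e-5
rates = rng.gamma(shape_true, scale_true, size=60)

# Method-of-moments fit: alpha = mean^2 / var, beta = var / mean.
m, v = rates.mean(), rates.var(ddof=1)
shape_hat = m**2 / v
scale_hat = v / m
print(f"alpha={shape_hat:.2f}, beta={scale_hat:.2e}")
```

Testing whether a two-component gamma mixture fits significantly better than this single gamma is the comparison the abstract reports as not statistically significant.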
Distributions of component failure rates, estimated from LER data
International Nuclear Information System (INIS)
Atwood, C.L.
1985-01-01
Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter γ distributions. In this study, a more complicated distributional form is considered, a mixture of γs. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single γ distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution
Estimating probable flaw distributions in PWR steam generator tubes
International Nuclear Information System (INIS)
Gorman, J.A.; Turner, A.P.L.
1997-01-01
This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-21
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. Two cases are considered: 1) generic and 2) sparse channels. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods compared to other methods.
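The local step that such a scheme starts from can be sketched as a per-antenna least-squares solve; the coordination among neighboring antennas, which is the paper's actual contribution, is omitted. Pilot length, tap count and noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(8)

n_pilots, n_taps, n_antennas = 64, 8, 4
pilots = rng.normal(size=(n_pilots, n_taps))           # known pilot matrix

h_true = rng.normal(size=(n_taps, n_antennas))         # impulse responses
y = pilots @ h_true + 0.05 * rng.normal(size=(n_pilots, n_antennas))

# One least-squares solve per antenna (solved jointly here as one lstsq call).
h_hat, *_ = np.linalg.lstsq(pilots, y, rcond=None)
print("max estimation error:", np.abs(h_hat - h_true).max())
```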
Private and Secure Distribution of Targeted Advertisements to Mobile Phones
Directory of Open Access Journals (Sweden)
Stylianos S. Mamais
2017-05-01
Full Text Available Online Behavioural Advertising (OBA) enables promotion companies to effectively target users with ads that best satisfy their purchasing needs. This is highly beneficial for both vendors and publishers, who are the owners of the advertising platforms such as websites and app developers, but at the same time it creates a serious privacy threat for users, who expose their consumer interests. In this paper, we categorize the available ad-distribution methods and identify their limitations in terms of security, privacy, targeting effectiveness and practicality. We contribute our own system, which utilizes opportunistic networking in order to distribute targeted adverts within a social network. We improve upon previous work by eliminating the need for trust among the users (network nodes) while at the same time achieving low memory and bandwidth overhead, which are inherent problems of many opportunistic networks. Our protocol accomplishes this by identifying similarities between the consumer interests of users and then allowing them to share access to the same adverts, which need to be downloaded only once. Although the same ads may be viewed by multiple users, privacy is preserved as the users do not learn each other’s advertising interests. An additional contribution is that malicious users can neither alter the ads in order to spread malicious content nor launch impersonation attacks.
Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution
Directory of Open Access Journals (Sweden)
Hare Krishna
2017-01-01
Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters as well. Expected time on test and reliability characteristics are also analyzed. To compare the various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration, a randomly censored real data set is discussed.
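The maximum likelihood estimator in this setting has a closed form, which can be checked numerically. For a geometric(p) lifetime with d uncensored observations x_i and censored values c_j (lifetime known to exceed c_j), the log-likelihood is d·log(p) + S·log(1−p) with S = Σ(x_i − 1) + Σc_j, maximized at p̂ = d/(d + S). The sketch below uses simulated data with an assumed geometric censoring distribution:

```python
import numpy as np

rng = np.random.default_rng(9)

p_true, n = 0.3, 2000
x = rng.geometric(p_true, size=n)            # true lifetimes (support 1, 2, ...)
censor = rng.geometric(0.1, size=n)          # random censoring times
observed = np.minimum(x, censor)
uncensored = x <= censor                     # indicator: failure seen before censoring

# Closed-form MLE: p_hat = d / (d + S).
d = uncensored.sum()
S = (observed[uncensored] - 1).sum() + observed[~uncensored].sum()
p_hat = d / (d + S)
print(f"p_hat = {p_hat:.3f} (true {p_true})")
```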
Estimating the parameters of a generalized lambda distribution
International Nuclear Information System (INIS)
Fournier, B.; Rupin, N.; Najjar, D.; Iost, A.; Rupin, N.; Bigerelle, M.; Wilcox, R.; Fournier, B.
2007-01-01
The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution times with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs) and reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice of u is studied, and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of previously proposed estimators. Naturally, all the estimators are biased, and here it is found that the bias becomes negligible for sample sizes n ≥ 2 × 10^3. The 0.025 and 0.975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease proportionally to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used, and the bounded GLD turns out to be the most accurate. Indeed, it is shown that, up to n = 10^6, bounded GLD modeling cannot be rejected by the usual goodness-of-fit tests. (authors)
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering the possible consequences. A better understanding of the impact of different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level). When spatial interpolation is used to fill in missing values in non-sampled areas, accuracy is improved remarkably, especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).
Anatomical distribution of estrogen target neurons in turtle brain
International Nuclear Information System (INIS)
Kim, Y.S.; Stumpf, W.E.; Sar, M.
1981-01-01
Autoradiographic studies with [3H]estradiol-17β in red-eared turtle (Pseudemys scripta elegans) show concentration and retention of radioactivity in nuclei of neurons in certain regions. Accumulations of estrogen target neurons exist in the periventricular brain with relationships to ventral extensions of the forebrain ventricles, including parolfactory, amygdaloid, septal, preoptic, hypothalamic and thalamic areas, as well as the dorsal ventricular ridge, the piriform cortex, and midbrain-pontine periaqueductal structures. The general anatomical pattern of distribution of estrogen target neurons corresponds to those observed not only in another reptile (Anolis carolinensis), but also in birds and mammals, as well as in teleosts and cyclostomes. In Pseudemys, which appears to display an intermediate degree of phylogenetic differentiation, the amygdaloid-septal-preoptic groups of estrogen target neurons constitute a continuum. In phylogenetic ascendency, e.g. in mammals, these cell populations are increasingly separated and distinct, while in phylogenetic descendency, e.g. in teleosts and cyclostomes, an amygdaloid group appears to be absent or contained within the septal-preoptic target cell population. (Auth.)
Anatomical distribution of estrogen target neurons in turtle brain
Energy Technology Data Exchange (ETDEWEB)
Kim, Y.S.; Stumpf, W.E.; Sar, M. (North Carolina Univ., Chapel Hill (USA))
1981-12-28
Autoradiographic studies with [3H]estradiol-17β in red-eared turtle (Pseudemys scripta elegans) show concentration and retention of radioactivity in nuclei of neurons in certain regions. Accumulations of estrogen target neurons exist in the periventricular brain with relationships to ventral extensions of the forebrain ventricles, including parolfactory, amygdaloid, septal, preoptic, hypothalamic and thalamic areas, as well as the dorsal ventricular ridge, the piriform cortex, and midbrain-pontine periaqueductal structures. The general anatomical pattern of distribution of estrogen target neurons corresponds to those observed not only in another reptile (Anolis carolinensis), but also in birds and mammals, as well as in teleosts and cyclostomes. In Pseudemys, which appears to display an intermediate degree of phylogenetic differentiation, the amygdaloid-septal-preoptic groups of estrogen target neurons constitute a continuum. In phylogenetic ascendency, e.g. in mammals, these cell populations are increasingly separated and distinct, while in phylogenetic descendency, e.g. in teleosts and cyclostomes, an amygdaloid group appears to be absent or contained within the septal-preoptic target cell population.
Voltage Estimation in Active Distribution Grids Using Neural Networks
DEFF Research Database (Denmark)
Pertl, Michael; Heussen, Kai; Gehrke, Oliver
2016-01-01
the observability of distribution systems has to be improved. To increase the situational awareness of the power system operator, data-driven methods can be employed. These methods benefit from newly available data sources such as smart meters. This paper presents a voltage estimation method based on neural networks...
Estimation of particle size distribution of nanoparticles from electrical ...
Indian Academy of Sciences (India)
... blockade (CB) phenomena of electrical conduction through a tiny nanoparticle. Considering the ZnO nanocomposites to be spherical, the Coulomb-blockade model of a quantum dot is applied here. The size distribution of the particles is estimated from that model and compared with the results obtained from AFM and XRD analyses.
Experiment of ambient temperature distribution in ICF driver's target building
International Nuclear Information System (INIS)
Zhou Yi; He Jie; Yang Shujuan; Zhang Junwei; Zhou Hai; Feng Bin; Xie Na; Lin Donghui
2009-01-01
An experiment is designed to explore the ambient temperature distribution in an ICF driver's target building. Multi-channel PC-2WS temperature monitoring recorders and PTWD-2A precision temperature sensors are used to measure temperatures on three vertical cross-sections in the building, and the collected data have been processed with MATLAB. The experiment and analysis show that the design of the heating, ventilation and air conditioning (HVAC) system can maintain temperature stability throughout the building. However, because of the impact of heat from the target chamber, larger local environmental temperature gradients appear near the marshalling yard, the staff region on the middle floor, and the equipment on the lower floor, which need to be controlled. (authors)
A distributed approach for parameters estimation in System Biology models
International Nuclear Information System (INIS)
Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.
2009-01-01
Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows candidate solutions to be swapped among the working nodes during the computations. A comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.
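The multi-start idea behind distributing estimation runs can be sketched sequentially: each "worker" performs an independent local fit from a random initial guess and the lowest residual wins. The database-mediated solution swapping of the paper is omitted, and the logistic-growth model and its parameters are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)

# Synthetic data from a logistic growth model x(t) = K / (1 + exp(-r*(t - t0))).
t = np.linspace(0, 10, 40)
K, r, t0 = 2.0, 1.2, 5.0
data = K / (1 + np.exp(-r * (t - t0))) + 0.02 * rng.normal(size=t.size)

def residuals(theta):
    K_, r_, t0_ = theta
    return K_ / (1 + np.exp(-r_ * (t - t0_))) - data

# Eight independent "workers", each started from a random initial guess.
fits = [least_squares(residuals, x0=rng.uniform(0.1, 8.0, size=3))
        for _ in range(8)]
best = min(fits, key=lambda f: f.cost)
print("best-fit parameters:", best.x)
```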
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.
Percentile estimation using the normal and lognormal probability distribution
International Nuclear Information System (INIS)
Bement, T.R.
1980-01-01
Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when the proper distributional assumptions are met. Monte Carlo results are presented in this paper that show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution.
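The first kind of misspecification is easy to demonstrate with a small Monte Carlo sketch (illustrative parameters, not the paper's study): estimating the "mean + 2 standard deviations" percentile under a normality assumption when the data are really lognormal systematically underestimates the true quantile:

```python
import numpy as np

rng = np.random.default_rng(12)

# True 97.7th percentile (z = 2) of a lognormal(0, 1) variable: exp(2).
true_q = np.exp(2.0)

errors = []
for _ in range(2000):
    x = rng.lognormal(0.0, 1.0, size=100)
    q_normal = x.mean() + 2.0 * x.std(ddof=1)   # wrong: assumes normality
    errors.append(q_normal - true_q)

print(f"mean bias of the normal-assumption estimate: {np.mean(errors):.2f}")
```

The bias is strongly negative: the normal-theory estimate cannot reach the heavy right tail of the lognormal distribution, which is exactly the failure mode the standard deviation maps would exhibit.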
Witte, de W.E.A.; Vauquelin, G.; Graaf, van der P.H.; Lange, de E.C.M.
2017-01-01
The influence of drug-target binding kinetics on target occupancy can be influenced by drug distribution and diffusion around the target, often referred to as "rebinding" or "diffusion-limited binding". This gives rise to a decreased decline of the drug-target complex concentration as a result of a
Estimation of photon energy distribution in gamma calibration field
International Nuclear Information System (INIS)
Takahashi, Fumiaki; Shimizu, Shigeru; Yamaguchi, Yasuhiro
1997-03-01
Photon survey instruments used for radiation protection are usually calibrated in gamma radiation fields that are traceable to the national standard with regard to exposure. Although scattered radiation as well as primary gamma-rays exists in the calibration field, no consideration is given to the effect of the scattered radiation on the energy distribution in routine calibration work. The scattered radiation can change the photon energy spectra in the field, and this can result in misinterpretation of energy-dependent instrument responses. Construction materials in the field affect the energy distribution and magnitude of the scattered radiation. The geometric relationship between a gamma source and an instrument can determine the energy distribution at the calibration point. Therefore, it is essential for quality calibration to estimate the energy spectra in the gamma calibration fields. Photon energy distributions at some fields in the Facility of Radiation Standard of the Japan Atomic Energy Research Institute (JAERI) were therefore estimated by measurements using a NaI(Tl) detector and by Monte Carlo calculations. It was found that the use of a collimator gives a different feature in the photon energy distribution. The origin of the scattered radiation and the ratio of scattered radiation to primary gamma-rays were obtained. The results can help to improve the calibration of photon survey instruments at the JAERI. (author)
Energy Technology Data Exchange (ETDEWEB)
Diwold, Konrad; Yan, Wei [Fraunhofer IWES, Kassel (Germany); Braun, Martin [Fraunhofer IWES, Kassel (Germany); Stuttgart Univ. (Germany). Inst. fuer Energieuebertragung und Hochspannungstechnik (IEH)
2012-07-01
The increased integration of distributed energy units creates challenges for the operators of distribution systems. This is due to the fact that distribution systems initially designed for distributed consumption and central generation now face decentralized feed-in. One imminent problem associated with decentralized feed-in is local voltage violations in the distribution system, which are hard to handle via conventional voltage control strategies. This article proposes a new voltage control framework for distribution system operation. The framework utilizes the reactive power of distributed energy units as well as on-load tap changers to mitigate voltage problems in the network. Using an optimization band, the control strategy can be applied in situations where network information is derived from distribution state estimators and thus carries some error. The control capabilities in combination with a distribution state estimator are tested using data from a real rural distribution network. The results are very promising: voltage control is achieved quickly and accurately, preventing a majority of the voltage violations during system operation under realistic conditions. (orig.)
Structure Learning and Statistical Estimation in Distribution Networks - Part II
Energy Technology Data Exchange (ETDEWEB)
Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-13
Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-15
Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, carried out jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver
2016-01-01
Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one at a time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state of the art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities
Pedestrian count estimation using texture feature with spatial distribution
Directory of Open Access Journals (Sweden)
Hongyu Hu
2016-12-01
Full Text Available We present a novel pedestrian count estimation approach based on global image descriptors formed from multi-scale texture features that consider spatial distribution. For regions of interest, local texture features are represented by histograms of multi-scale block local binary patterns, which jointly constitute the feature vector of the whole image. To achieve an effective estimation of pedestrian count, principal component analysis is used to reduce the dimension of the global representation features, and a fitting model between image global features and pedestrian count is constructed via support vector regression. Experimental results show that the proposed method exhibits high accuracy in pedestrian count estimation and can be applied well in the real world.
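The feature-reduction and regression pipeline described above can be sketched as follows. The sketch assumes precomputed multi-scale LBP histogram features (the shapes below are hypothetical) and substitutes a plain least-squares fit for the paper's support vector regression, purely for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in features: multi-scale LBP histograms (e.g. 3 scales x 59 bins)
X = rng.random((200, 177))
y = X[:, :10].sum(axis=1) * 3.0 + rng.normal(0.0, 0.1, 200)  # synthetic pedestrian counts

# PCA via SVD on centered features, keeping 20 principal components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:20].T

# Least-squares regression stands in for the SVR fitting model
A = np.c_[Z, np.ones(len(Z))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
```

In practice the PCA dimension and the regressor (an RBF-kernel SVR in the paper) would be chosen by cross-validation.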
Iterative methods for distributed parameter estimation in parabolic PDE
Energy Technology Data Exchange (ETDEWEB)
Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, and to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
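The likelihood-based evaluation argued for here can be illustrated with a minimal sketch: given a predictive mean and variance at each measurement location, the average Gaussian negative log-likelihood scores a variance-aware model against a constant-variance baseline. The data below are synthetic and only mimic the intermittent fluctuations the paper describes:

```python
import numpy as np

def avg_nll(y, mu, var):
    """Average Gaussian negative log-likelihood of held-out measurements."""
    return 0.5 * np.mean(np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)

rng = np.random.default_rng(0)
n = 10_000
mu = np.zeros(n)
# Alternate quiet and strongly fluctuating locations (intermittent dispersal)
true_sd = np.where(np.arange(n) % 2 == 0, 0.2, 2.0)
y = rng.normal(mu, true_sd)

nll_var_aware = avg_nll(y, mu, true_sd ** 2)       # model that predicts the variance
nll_mean_only = avg_nll(y, mu, np.full(n, 1.0))    # constant-variance baseline
```

A lower average NLL for the variance-aware model is exactly the kind of data-likelihood comparison the abstract advocates.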
Inhomogeneous target-dose distributions: a dimension more for optimization?
International Nuclear Information System (INIS)
Gersem, Werner R.T. de; Derycke, Sylvie; Colle, Christophe O.; Wagter, Carlos de; Neve, Wilfried J. de
1999-01-01
Purpose: To evaluate if the use of inhomogeneous target-dose distributions, obtained by 3D conformal radiotherapy plans with or without beam intensity modulation, offers the possibility to decrease indices of toxicity to normal tissues and/or increase indices of tumor control in stage III non-small cell lung cancer (NSCLC). Methods and Materials: Ten patients with stage III NSCLC were planned using a conventional 3D technique and a technique involving noncoplanar beam intensity modulation (BIM). Two planning target volumes (PTVs) were defined: PTV1 included macroscopic tumor volume and PTV2 included macroscopic and microscopic tumor volume. Virtual simulation defined the beam shapes and incidences as well as the wedge orientations (3D) and segment outlines (BIM). Weights of wedged beams, unwedged beams, and segments were determined by optimization using an objective function with a biological and a physical component. The biological component included tumor control probability (TCP) for PTV1 (TCP1), PTV2 (TCP2), and normal tissue complication probability (NTCP) for lung, spinal cord, and heart. The physical component included the maximum and minimum dose as well as the standard deviation of the dose at PTV1. The most inhomogeneous target-dose distributions were obtained by using only the biological component of the objective function (biological optimization). By enabling the physical component in addition to the biological component, PTV1 inhomogeneity was reduced (biophysical optimization). As indices for toxicity to normal tissues, NTCP-values as well as maximum doses or dose levels to relevant fractions of the organ's volume were used. As indices for tumor control, TCP-values as well as minimum doses to the PTVs were used. Results: When optimization was performed with the biophysical as compared to the biological objective function, the PTV1 inhomogeneity decreased from 13 (8-23)% to 4 (2-9)% for the 3D technique (p = 0.00009) and from 44 (33-56)% to 20 (9-34)% for the BIM
Estimating the temporal distribution of exposure-related cancers
International Nuclear Information System (INIS)
Carter, R.L.; Sposto, R.; Preston, D.L.
1993-09-01
The temporal distribution of exposure-related cancers is relevant to the study of carcinogenic mechanisms. Statistical methods for extracting pertinent information from time-to-tumor data, however, are not well developed. Separation of incidence from 'latency' and the contamination of background cases are two problems. In this paper, we present methods for estimating both the conditional distribution given exposure-related cancers observed during the study period and the unconditional distribution. The methods adjust for confounding influences of background cases and the relationship between time to tumor and incidence. Two alternative methods are proposed. The first is based on a structured, theoretically derived model and produces direct inferences concerning the distribution of interest but often requires more-specialized software. The second relies on conventional modeling of incidence and is implemented through readily available, easily used computer software. Inferences concerning the effects of radiation dose and other covariates, however, are not always obtainable directly. We present three examples to illustrate the use of these two methods and suggest criteria for choosing between them. The first approach was used, with a log-logistic specification of the distribution of interest, to analyze times to bone sarcoma among a group of German patients injected with 224Ra. Similarly, a log-logistic specification was used in the analysis of time to chronic myelogenous leukemias among male atomic-bomb survivors. We used the alternative approach, involving conventional modeling, to estimate the conditional distribution of exposure-related acute myelogenous leukemias among male atomic-bomb survivors, given occurrence between 1 October 1950 and 31 December 1985. All analyses were performed using Poisson regression methods for analyzing grouped survival data. (J.P.N.)
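As a minimal sketch of the log-logistic specification used above (ignoring the censoring and background-case adjustments the paper develops), SciPy's Fisk distribution, which is the log-logistic, can be fitted by maximum likelihood to synthetic times to tumor; the shape and scale values here are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic times-to-tumor from a log-logistic (Fisk) distribution
c_true, scale_true = 2.5, 10.0
times = stats.fisk.rvs(c_true, scale=scale_true, size=5000, random_state=rng)

# Maximum likelihood fit with the location pinned at zero
c_hat, loc_hat, scale_hat = stats.fisk.fit(times, floc=0)
```

With real data, the likelihood would have to be modified for censored observations and the background-case mixture, which is the substance of the paper's first method.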
Gridded rainfall estimation for distributed modeling in western mountainous areas
Moreda, F.; Cong, S.; Schaake, J.; Smith, M.
2006-05-01
Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period of 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km) as test areas. The basins are sub-divided into elevation zones. The North Fork American basin is divided into two zones above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall
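The inverse-distance interpolation step can be sketched as below. The gauge coordinates and values are hypothetical, and the disaggregation and monthly PRISM adjustment steps are omitted:

```python
import numpy as np

def idw(gauge_xy, gauge_vals, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of gauge values onto grid points."""
    # Pairwise distances: (n_grid, n_gauges)
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power  # floor avoids division by zero at a gauge
    return (w @ gauge_vals) / w.sum(axis=1)

# Three hypothetical gauges (coordinates in km) with hourly totals (mm)
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
vals = np.array([2.0, 6.0, 4.0])

# A coarse 4 km grid over the domain
gx, gy = np.meshgrid(np.arange(0.0, 12.0, 4.0), np.arange(0.0, 12.0, 4.0))
grid = np.c_[gx.ravel(), gy.ravel()]
est = idw(gauges, vals, grid)
```

Because IDW produces a convex combination of gauge values, every gridded estimate lies between the minimum and maximum gauge observation, and the estimate at a gauge location reproduces that gauge's value.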
Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks
Directory of Open Access Journals (Sweden)
José Raúl Machado-Fernández
2016-12-01
Full Text Available The main problem faced by naval radars is the elimination of clutter, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter at a low computational cost, outperforming the classic method based on Maximum Likelihood Estimates (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses knowledge of the clutter statistics to improve the stability of detection, among other applications.
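For context, the classic ML estimator that the neural-network method is compared against has a closed form for the (type I) Pareto shape parameter when the scale is known; a minimal sketch on synthetic data:

```python
import numpy as np

def pareto_shape_mle(x, scale):
    """Closed-form ML estimate of the Pareto (type I) shape, given a known scale."""
    x = np.asarray(x)
    return len(x) / np.log(x / scale).sum()

rng = np.random.default_rng(1)
alpha_true, xm = 3.0, 1.0
# Inverse-CDF sampling: S(x) = (xm/x)^alpha  =>  x = xm * U^(-1/alpha)
u = 1.0 - rng.random(100_000)  # uniform on (0, 1]
samples = xm * u ** (-1.0 / alpha_true)

alpha_hat = pareto_shape_mle(samples, xm)
```

Note that sea-clutter work often uses the generalized (type II) Pareto, for which the shape and scale must be estimated jointly and no closed form exists; that harder joint problem is where the neural-network estimator is aimed.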
Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang
2017-11-01
The identification of the nonlinearity and coupling is crucial in nonlinear target tracking problems in collaborative sensor networks. In the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as model noise covariance and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve a reliable covariance measurement, making it impractical for nonlinear systems that change rapidly. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor with the measurements and state estimates received from its connected sensors instead of the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to obtain the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting over a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function by exhaustive search. A sensor selection method is added to the algorithm to decrease the computational load of the filter and increase the scalability of the sensor network. Analyses of the existence, suboptimality, and stability of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.
Collaborative 3D Target Tracking in Distributed Smart Camera Networks for Wide-Area Surveillance
Directory of Open Access Journals (Sweden)
Xenofon Koutsoukos
2013-05-01
Full Text Available With the evolution and fusion of wireless sensor network and embedded camera technologies, distributed smart camera networks have emerged as a new class of systems for wide-area surveillance applications. Wireless networks, however, introduce a number of constraints to the system that need to be considered, notably the communication bandwidth constraints. Existing approaches for target tracking using a camera network typically utilize target handover mechanisms between cameras, or combine results from 2D trackers in each camera into 3D target estimation. Such approaches suffer from scale selection, target rotation, and occlusion, drawbacks typically associated with 2D tracking. In this paper, we present an approach for tracking multiple targets directly in 3D space using a network of smart cameras. The approach employs multi-view histograms to characterize targets in 3D space using color and texture as the visual features. The visual features from each camera along with the target models are used in a probabilistic tracker to estimate the target state. We introduce four variations of our base tracker that incur different computational and communication costs on each node and result in different tracking accuracy. We demonstrate the effectiveness of our proposed trackers by comparing their performance to a 3D tracker that fuses the results of independent 2D trackers. We also present performance analysis of the base tracker along Quality-of-Service (QoS) and Quality-of-Information (QoI) metrics, and study QoS vs. QoI trade-offs between the proposed tracker variations. Finally, we demonstrate our tracker in a real-life scenario using a camera network deployed in a building.
Directory of Open Access Journals (Sweden)
Bin Deng
2013-01-01
Full Text Available Parabolic-reflector antennas (PRAs, usually possessing rotation, are a particular type of targets of potential interest to the synthetic aperture radar (SAR community. This paper aims to investigate PRA's scattering characteristics and then to extract PRA's parameters from SAR returns, for supporting image interpretation and target recognition. We first obtain both closed-form and numeric solutions to PRA's backscattering by geometrical optics (GO, physical optics, and graphical electromagnetic computation, respectively. Based on the GO solution, a migratory scattering center model is first presented for representing the movement of the specular point with aspect angle, and then a hybrid model, named the migratory/micromotion scattering center (MMSC model, is proposed for characterizing a rotating PRA in the SAR geometry, which incorporates PRA's rotation into its migratory scattering center model. Additionally, we analyze in detail PRA's radar characteristics on radar cross-section, high-resolution range profiles, time-frequency distribution, and 2D images, which also confirm the models proposed. A maximum likelihood estimator is developed for jointly solving the MMSC model for PRA's multiple parameters by optimization. By exploiting the aforementioned characteristics, the coarse parameter estimation guarantees convergence to the global minimum. The signatures recovered can be favorably utilized for SAR image interpretation and target recognition.
Structure Learning and Statistical Estimation in Distribution Networks - Part I
Energy Technology Data Exchange (ETDEWEB)
Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-13
Traditionally power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two-part paper, inspired by the proliferation of metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient (polynomial in time), which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.
Improving Distribution Resiliency with Microgrids and State and Parameter Estimation
Energy Technology Data Exchange (ETDEWEB)
Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-09-30
Modern society relies on low-cost reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various aspects of the resiliency of the power system must be improved. Two such approaches are breaking the system into smaller microgrid sections, and improving insight into operations so that failures or mis-operations are detected before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in smaller areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected a majority of the time, and implementing and operating a microgrid when islanded is much different from grid-connected operation. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using
P3T+: A Performance Estimator for Distributed and Parallel Programs
Directory of Open Access Journals (Sweden)
T. Fahringer
2000-01-01
Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by highly effective symbolic analysis. Communication is estimated by simulating the behavior of the communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model the most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to help programmers develop parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and usefulness of P3T+.
Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models
Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.
2014-12-01
We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
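The agreement metrics used in this comparison (Pearson correlation and slope for quantitative agreement, Spearman rank correlation for management-oriented ranking) can be reproduced on synthetic subbasin yields; all numbers below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical annual water yields (mm/yr) for 20 subbasins from two models
swat = rng.random(20) * 500.0
invest = 0.9 * swat + rng.normal(0.0, 20.0, 20)  # second model tracks the first closely

r, _ = stats.pearsonr(swat, invest)       # quantitative agreement
rho, _ = stats.spearmanr(swat, invest)    # rank (management) agreement
slope = np.polyfit(swat, invest, 1)[0]    # systematic bias in magnitude
```

As the Upper Upatoi result above illustrates, the two kinds of agreement can diverge: a model can be quantitatively biased yet still rank subbasins usefully, or vice versa, which is why the study reports both.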
PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION
Directory of Open Access Journals (Sweden)
Samir Kamel Ashour
2010-12-01
Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties that arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life-testing experiments are the Type-I and Type-II censoring schemes; the hybrid censoring scheme is a mixture of the two. In this paper, we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
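As a sketch of the maximum likelihood step for the complete-sample case (the hybrid-censoring likelihood itself is omitted here), the unit-scale Lomax shape parameter has the closed-form MLE alpha = n / sum(log(1 + x_i)), which SciPy's generic fitter reproduces numerically; all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import lomax

rng = np.random.default_rng(1)
c_true = 2.5                      # illustrative shape parameter
sample = lomax.rvs(c_true, size=5000, random_state=rng)

# Generic numerical MLE with location and scale held fixed
c_hat, _, _ = lomax.fit(sample, floc=0, fscale=1)
# Closed-form MLE for the unit-scale Lomax (Pareto II) shape
c_closed = len(sample) / np.log1p(sample).sum()
```

The inverse of the Fisher information, n/alpha^2 for this one-parameter case, then gives the asymptotic variance used for the confidence intervals mentioned above.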
Estimation of the effective distribution coefficient from the solubility constant
International Nuclear Information System (INIS)
Wang, Yug-Yea; Yu, C.
1994-01-01
An updated version of RESRAD has been developed by Argonne National Laboratory for the US Department of Energy to derive site-specific soil guidelines for residual radioactive material. Many new features have been added to the RESRAD code in this updated version. One of the options is that a user can input a solubility constant to limit the leaching of contaminants. The leaching model used in the code requires the input of an empirical distribution coefficient, K_d, which represents the ratio of the solute concentration in soil to that in solution under equilibrium conditions. This paper describes the methodology developed to estimate an effective distribution coefficient, K_d, from the user-input solubility constant and the use of the effective K_d for predicting the leaching of contaminants.
Likelihood Estimation of Gamma Ray Bursts Duration Distribution
Horvath, Istvan
2005-01-01
Two classes of Gamma Ray Bursts have been identified so far, characterized by T90 durations shorter and longer than approximately 2 seconds. It was shown that the BATSE 3B data allow a good fit with three Gaussian distributions in log T90. Another paper in the same volume of ApJ suggested that a third class of GRBs may exist. Using the full BATSE catalog, here we present the maximum likelihood estimation, which gives a probability of only 0.5% that there are just two subclasses. The MC simulation co...
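The two- versus three-component question can be sketched by fitting Gaussian mixtures to log-durations and comparing in-sample log-likelihoods; the log T90 values below are synthetic, not BATSE data, and the component means are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic log10(T90): short bursts near 0.3 s, long bursts near 30 s
log_t90 = np.concatenate([rng.normal(-0.5, 0.45, 500),
                          rng.normal(1.5, 0.45, 1500)]).reshape(-1, 1)

loglik = {}
for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(log_t90)
    loglik[k] = gm.score(log_t90)   # mean log-likelihood per sample
```

In practice the likelihood-ratio statistic between the k = 2 and k = 3 fits is calibrated by Monte Carlo simulation, as the abstract's truncated final sentence indicates.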
Design wave estimation considering directional distribution of waves
Digital Repository Service at National Institute of Oceanography (India)
SanilKumar, V.; Deo, M.C
Technical Note. V. Sanil Kumar (Ocean Engineering Division, National Institute of Oceanography, Dona Paula, Goa 403 004, India); M.C. Deo (Civil...)
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-01-01
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fused estimate from all radars is fed to an extended Kalman filter (EKF) to perform the first filtering stage. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering stage, achieving an accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To design a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimates the distribution of breast density in Korean women to within a 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
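The stratified design can be sketched as a small simulation: draw a proportionally allocated sample from each stratum, form the weighted estimate, and repeat to see the sampling spread. The strata sizes and prevalences below are hypothetical, not the NCSP figures:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical stratum populations and dense-breast prevalences
sizes = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_000}
prev = {"metropolitan": 0.55, "urban": 0.50, "rural": 0.40}
N = sum(sizes.values())
true_p = sum(sizes[s] * prev[s] for s in sizes) / N

n_total = 4000
estimates = []
for _ in range(1000):                         # repeat the survey simulation
    est = 0.0
    for s in sizes:
        n_s = round(n_total * sizes[s] / N)   # proportional allocation
        draws = rng.random(n_s) < prev[s]     # sampled screening records
        est += (sizes[s] / N) * draws.mean()  # population-weighted estimate
    estimates.append(est)
estimates = np.array(estimates)
bias = estimates.mean() - true_p
```

Repeating the draw many times, as the study did, shows the stratified estimator is unbiased and quantifies the tolerance achievable at a given total sample size.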
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
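The link between lure evidence variability and the z-ROC slope can be sketched with a UVSD-style simulation: with a target evidence standard deviation of 1, the slope of z(hit rate) regressed on z(false-alarm rate) approximates the lure standard deviation. All parameter values below are illustrative:

```python
import numpy as np
from scipy.stats import norm, linregress

rng = np.random.default_rng(4)
d_prime, lure_sd = 1.0, 1.25            # lure sd above 1, as with primed lures
targets = rng.normal(d_prime, 1.0, 20000)
lures = rng.normal(0.0, lure_sd, 20000)

# Hit and false-alarm rates across a range of confidence criteria
criteria = np.linspace(-0.5, 1.5, 9)
z_hit = norm.ppf([(targets > c).mean() for c in criteria])
z_fa = norm.ppf([(lures > c).mean() for c in criteria])
slope = linregress(z_fa, z_hit).slope    # approx. lure sd / target sd
```

Raising lure_sd raises the fitted slope, which is the signature the priming manipulation produced in the confidence-rating data.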
Calculational estimations of neutron yield from ADS target
International Nuclear Information System (INIS)
Degtyarev, I.I.; Liashenko, O.A.; Yazynin, I.A.; Belyakov-Bodin, V.I.; Blokhin, A.I.
2002-01-01
Results of computational studies of high power spallation thick ADS (Accelerator-Driven System) targets with 0.8-1.2 GeV proton beams are given. Comparisons of experiments and calculations of double differential and integral n/p yield are also described. (author)
Distributed Road Grade Estimation for Heavy Duty Vehicles
Energy Technology Data Exchange (ETDEWEB)
Sahlholm, Per
2011-07-01
An increasing need for goods and passenger transportation drives continued worldwide growth in traffic. As traffic increases, environmental concerns, traffic safety, and cost efficiency become ever more important. Advancements in microelectronics open the possibility to address these issues through new advanced driver assistance systems. Applications such as predictive cruise control, automated gearbox control, predictive front lighting control, and hybrid vehicle state-of-charge control decrease the energy consumption of vehicles and increase safety. These control systems can benefit significantly from preview road grade information. This information is currently obtained using specialized survey vehicles and is not widely available. This thesis proposes new methods to obtain road grade information using on-board sensors. The task of creating road grade maps is addressed by a framework in which vehicles using a road network collect the necessary data for estimating the road grade. The estimation can then be carried out locally in the vehicle or, when a communication link to the infrastructure is available, centrally. In either case the accuracy of the map increases over time, and costly road surveys can be avoided. This thesis presents a new distributed method for creating accurate road grade maps for vehicle control applications. Standard heavy duty vehicles in normal operation are used to collect measurements. Estimates from multiple passes along a road segment are merged to form a road grade map, which improves each time a vehicle retraces a route. The design and implementation of the road grade estimator are described, and the performance is experimentally evaluated using real vehicles. Three different grade estimation methods, based on different assumptions about the road grade signal, are proposed and compared. They all use data from sensors that are standard equipment in heavy duty vehicles. Measurements of the vehicle speed and the engine
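The merge step, in which each retrace of a route improves the map, can be sketched as inverse-variance fusion of per-pass grade estimates for one road segment. This is a generic fusion sketch, not the thesis's specific estimator, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
true_grade = 0.03                          # 3 % uphill on one road segment
# Hypothetical per-pass grade estimates with known per-pass error variances
variances = np.array([4e-4, 9e-4, 2.5e-4, 6.4e-4])
passes = true_grade + rng.normal(0, np.sqrt(variances))

w = 1.0 / variances
fused = (w * passes).sum() / w.sum()       # inverse-variance weighted merge
fused_var = 1.0 / w.sum()                  # shrinks with every new pass
```

The fused variance is below that of any single pass, which is why the map quality keeps improving as ordinary vehicles in normal operation retrace the segment.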
Distributed estimation and control for mobile sensor networks with coupling delays.
Su, Housheng; Chen, Xuan; Chen, Michael Z Q; Wang, Lei
2016-09-01
This paper deals with the issue of distributed estimation and control for mobile sensor networks with coupling delays. Based on the Kalman-Consensus filter and the flocking algorithm, all mobile sensors move to a target to increase the quality of gathered data, and achieve consensus on the estimation values of the target in the presence of time-delay and noises. By applying an effective cascading Lyapunov method and matrix theory, stability analysis is carried out. Furthermore, a necessary condition for the convergence is presented via the boundary conditions of feedback coefficients. Some numerical examples are provided to validate the effectiveness of theoretical results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Wireless Power Transfer for Distributed Estimation in Sensor Networks
Mai, Vien V.; Shin, Won-Yong; Ishibashi, Koji
2017-04-01
This paper studies power allocation for distributed estimation of an unknown scalar random source in sensor networks with a multiple-antenna fusion center (FC), where wireless sensors are equipped with radio-frequency based energy harvesting technology. The sensors' observations are locally processed using an uncoded amplify-and-forward scheme. The processed signals are then sent to the FC, where they are coherently combined and the best linear unbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve the following two power allocation problems: 1) minimizing distortion under various power constraints; and 2) minimizing total transmit power under distortion constraints, where the distortion is measured in terms of the mean-squared error of the BLUE. Two iterative algorithms are developed to solve the non-convex problems, converging at least to a local optimum. In particular, the above algorithms are designed to jointly optimize the amplification coefficients, energy beamforming, and receive filtering. For each problem, a suboptimal design, a single-antenna FC scenario, and a common harvester deployment for colocated sensors are also studied. Using the powerful semidefinite relaxation framework, our result is shown to be valid for any number of sensors, each with different noise power, and for an arbitrary number of antennas at the FC.
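The BLUE at the heart of the distortion metric has a simple closed form for a scalar source observed through known gains in independent Gaussian noise; the gains and noise powers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = 1.7                                # unknown scalar source (illustrative)
K = 20                                     # number of sensors
h = rng.uniform(0.5, 1.5, K)               # assumed per-sensor effective gains
noise_pow = rng.uniform(0.01, 0.1, K)      # different noise power per sensor
y = h * theta + rng.normal(0, np.sqrt(noise_pow))

# BLUE: weight each observation by gain over noise power, then normalize
theta_hat = (h * y / noise_pow).sum() / (h**2 / noise_pow).sum()
blue_var = 1.0 / (h**2 / noise_pow).sum()  # its mean-squared error
```

The amplification and beamforming variables optimized in the paper enter this expression through the effective gains and noise powers, which is what makes the joint design problem non-convex.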
Estimation of neutron energy distributions from prompt gamma emissions
Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.
2017-11-01
A technique of estimating the incident neutron energy distribution from emitted prompt gamma intensities from a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photo peaks in a gamma detector, are related to the incident neutron energy distribution through a convolution of the response of the system generating the prompt gammas to mono-energetic neutrons. The system studied here is a cylinder of high density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover, exposed to neutrons. The five emitted prompt gamma peaks from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic algorithm based Monte Carlo deconvolution code GAMCD. Feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system for several incident neutron spectra spanning different energy ranges.
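The unfolding problem has the form g = R phi with fewer measured peaks than spectrum bins. As a minimal stand-in for the genetic-algorithm solver GAMCD, a non-negative least-squares solve illustrates the structure; the response matrix and spectrum here are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_peaks, n_bins = 5, 12                  # 5 prompt-gamma peaks, 12 energy bins
R = rng.uniform(0.0, 1.0, (n_peaks, n_bins))  # hypothetical response matrix
phi_true = np.zeros(n_bins)
phi_true[[2, 7]] = [3.0, 1.0]            # sparse incident neutron spectrum
g = R @ phi_true                         # measured peak intensities

phi_hat, residual = nnls(R, g)           # non-negative spectrum estimate
```

Because the system is under-determined, many non-negative spectra reproduce the peaks exactly; GAMCD's Monte Carlo search explores that solution set rather than returning a single least-squares point.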
van Zyl, J. Martin
2012-01-01
Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of the estimation of the shape parameter of generalized Pareto distributed data using transformed observations, based on the probability weighted method, is tested. It was found to improve the performance of the probability weighted estimator and performs well wit...
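The transformation itself is straightforward: if X follows a generalized Pareto distribution with shape xi > 0 and scale sigma, then Y = 1 + xi*X/sigma is a unit-minimum Pareto variate with index 1/xi, for which the MLE is explicit. The sketch below assumes the transform parameters are known, which sidesteps the estimation step the paper actually studies:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(8)
xi, sigma = 0.4, 2.0                      # illustrative GPD shape and scale
x = genpareto.rvs(xi, scale=sigma, size=5000, random_state=rng)

# Transform GPD draws to unit-minimum Pareto variates
y = 1.0 + xi * x / sigma
alpha_hat = len(y) / np.log(y).sum()      # explicit Pareto MLE for the index
xi_hat = 1.0 / alpha_hat                  # back to the GPD shape
```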
Cost-effectiveness of targeted screening for abdominal aortic aneurysm. Monte Carlo-based estimates.
Pentikäinen, T J; Sipilä, T; Rissanen, P; Soisalon-Soininen, S; Salo, J
2000-01-01
This article reports a cost-effectiveness analysis of targeted screening for abdominal aortic aneurysm (AAA). A major emphasis was on the estimation of distributions of costs and effectiveness. We performed a Monte Carlo simulation using C programming language in a PC environment. Data on survival and costs, and a majority of screening probabilities, were from our own empirical studies. Natural history data were based on the literature. Each screened male gained 0.07 life-years at an incremental cost of FIM 3,300. The expected values differed from zero very significantly. For females, expected gains were 0.02 life-years at an incremental cost of FIM 1,100, which was not statistically significant. Cost-effectiveness ratios and their 95% confidence intervals were FIM 48,000 (27,000-121,000) and 54,000 (22,000-infinity) for males and females, respectively. Sensitivity analysis revealed that the results for males were stable. Individual variation in life-year gains was high. Males seemed to benefit from targeted AAA screening, and the results were stable. As far as the cost-effectiveness ratio is considered acceptable, screening for males seemed to be justified. However, our assumptions about growth and rupture behavior of AAAs might be improved with further clinical and epidemiological studies. As a point estimate, females benefited in a similar manner, but the results were not statistically significant. The evidence of this study did not justify screening of females.
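A Monte Carlo cost-effectiveness calculation of this kind can be sketched by simulating per-person costs and life-year gains, forming the ratio of means, and bootstrapping its confidence interval. The distributions below are hypothetical stand-ins, not the study's empirical data:

```python
import numpy as np

rng = np.random.default_rng(9)
n, B = 5000, 1000
# Hypothetical per-person outcomes (illustrative only)
gain_ly = rng.normal(0.07, 0.5, n)       # incremental life-years gained
cost = rng.normal(3300, 1500, n)         # incremental cost, FIM

icer = cost.mean() / gain_ly.mean()      # cost per life-year gained
# Bootstrap the ratio of means to get a percentile confidence interval
idx = rng.integers(0, n, (B, n))
ratios = cost[idx].mean(axis=1) / gain_ly[idx].mean(axis=1)
lo, hi = np.percentile(ratios, [2.5, 97.5])
```

When the denominator's bootstrap distribution overlaps zero, as for the female subgroup above, the upper percentile limit can be unbounded, which is why the study reports an interval extending to infinity.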
Directory of Open Access Journals (Sweden)
Keisuke Yano
2014-05-01
Full Text Available We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model.
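The trace quantity described above can be written out explicitly. The symbols follow the abstract's description (I_x for the data's Fisher information, I_y for the target variables'), with the 1/(2n) normalization that is standard for asymptotic Kullback–Leibler risk; this is a sketch of the notation, not a quotation from the paper:

```latex
% Asymptotic KL risk of a predictive density \hat{p} built from n observations
\mathrm{E}_\theta\, D_{\mathrm{KL}}\!\left(p_\theta \,\middle\|\, \hat{p}\right)
  = \frac{1}{2n}\,\operatorname{tr}\!\left( I_x(\theta)^{-1} I_y(\theta) \right)
  + o\!\left(n^{-1}\right).
```

The paper's assumption is that this trace has a unique maximum in theta, which is where the sample-size-dependent prior concentrates to equalize the risk.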
A Structural VAR Approach to Estimating Budget Balance Targets
Robert A Buckle; Kunhong Kim; Julie Tam
2001-01-01
The Fiscal Responsibility Act 1994 states that, as a principle of responsible fiscal management, a New Zealand government should ensure total Crown debt is at a prudent level by ensuring total operating expenses do not exceed total operating revenues. In this paper a structural VAR model is estimated to evaluate the impact on the government's cash operating surplus (or budget balance) of four independent disturbances: supply, fiscal, real private demand, and nominal disturbances. Based on the...
Hydroacoustic estimates of fish biomass and spatial distributions in shallow lakes
Lian, Yuxi; Huang, Geng; Godlewska, Małgorzata; Cai, Xingwei; Li, Chang; Ye, Shaowen; Liu, Jiashou; Li, Zhongjie
2018-03-01
We conducted acoustical surveys with a horizontal beam transducer to detect fish and with a vertical beam transducer to detect depth and macrophytes in two typical shallow lakes along the middle and lower reaches of the Changjiang (Yangtze) River in November 2013. Both lakes are subject to active fish management with annual stocking and removal of large fish. The purpose of the study was to compare hydroacoustic horizontal beam estimates with fish landings. The preliminary results show that the fish distribution patterns differed in the two lakes and were affected by water depth and macrophyte coverage. The hydroacoustically estimated fish biomass matched the commercial catch very well in Niushan Lake, but it was two times higher in Kuilei Lake. However, acoustic estimates included all fish, whereas the catch included only fish >45 cm (smaller ones were released). We were unable to determine the proper regression between acoustic target strength and fish length for the dominant fish species in the two lakes.
Fast Parabola Detection Using Estimation of Distribution Algorithms
Directory of Open Access Journals (Sweden)
Jose de Jesus Guerrero-Turrubiates
2017-01-01
Full Text Available This paper presents a new method based on Estimation of Distribution Algorithms (EDAs) to detect parabolic shapes in synthetic and medical images. The method computes a virtual parabola using three random boundary pixels to calculate the constant values of the generic parabola equation. The resulting parabola is evaluated by matching it with the parabolic shape in the input image, using the Hadamard product as the fitness function. The proposed method is evaluated in terms of computational time and compared with two implementations of the generalized Hough transform and the RANSAC method for parabola detection. Experimental results show that the proposed method outperforms the comparison methods in execution time by about 93.61% on synthetic images and 89% on retinal fundus and human plantar arch images. In addition, experimental results have also shown that the proposed method can be highly suitable for different medical applications.
A modified estimation distribution algorithm based on extreme elitism.
Gao, Shujun; de Silva, Clarence W
2016-12-01
An existing estimation of distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlights the effect of a few top best solutions in the evolution, helping the EDA form a primary evolution direction and obtain a fast convergence rate. Simultaneously, this selection maintains population diversity, helping the EDA avoid premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from using this extreme elitism selection. In addition, the no-free-lunch theorem was invoked in analyzing the effect of this new selection on EDAs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
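A univariate marginal Gaussian EDA with an extreme-elitism-style weighting can be sketched as follows; the weighting scheme, population sizes, and benchmark are illustrative choices, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(10)

def sphere(X):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return (X**2).sum(axis=1)

dim, pop, n_sel, n_top = 5, 200, 60, 5
mean, std = np.full(dim, 3.0), np.full(dim, 10.0)  # illustrative start
best = np.inf
for _ in range(150):
    X = rng.normal(mean, std, (pop, dim))    # sample the Gaussian model
    f = sphere(X)
    best = min(best, f.min())
    sel = X[np.argsort(f)[:n_sel]]           # truncation selection
    # Extreme elitism: the few top-best solutions get much heavier weights
    w = np.ones(n_sel)
    w[:n_top] = np.arange(n_top, 0, -1) * 5.0
    w /= w.sum()
    mean = w @ sel                           # refit the univariate model
    std = np.sqrt(w @ (sel - mean) ** 2) + 1e-12
```

Giving the top few solutions disproportionate weight steers the model mean aggressively, while the remaining selected solutions keep enough spread in the refitted variances to preserve diversity.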
Research reactor loading pattern optimization using estimation of distribution algorithms
Energy Technology Data Exchange (ETDEWEB)
Jiang, S. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Ziver, K. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); AMCG Group, RM Consultants, Abingdon (United Kingdom); Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Franklin, S. J.; Phillips, H. J. [Imperial College, Reactor Centre, Silwood Park, Buckhurst Road, Ascot, Berkshire, SL5 7TE (United Kingdom)
2006-07-01
A new evolutionary search based approach for solving the nuclear reactor loading pattern optimization problems is presented based on the Estimation of Distribution Algorithms. The optimization technique developed is then applied to the maximization of the effective multiplication factor (K_eff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence, together with some problem-dependent information based on 'stand-alone' K_eff with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
Empirical Estimates in Stochastic Optimization via Distribution Tails
Czech Academy of Sciences Publication Activity Database
Kaňková, Vlasta
2010-01-01
Roč. 46, č. 3 (2010), s. 459-471 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR GA402/07/1113; GA ČR(CZ) GA402/08/0107; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic programming problems * Stability * Wasserstein metric * L_1 norm * Lipschitz property * Empirical estimates * Convergence rate * Exponential tails * Heavy tails * Pareto distribution * Risk functional * Empirical quantiles Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010
Estimation of temperature distribution in a reactor shield
International Nuclear Information System (INIS)
Agarwal, R.A.; Goverdhan, P.; Gupta, S.K.
1989-01-01
Shielding is provided in a nuclear reactor to absorb the radiations emanating from the core. The energy of these radiations appears in the form of heat. Concrete, which is commonly used as a shielding material in nuclear power plants, must be able to withstand the temperatures and temperature gradients appearing in the shield due to this heat. High temperatures lead to dehydration of the concrete and in turn reduce the shielding effectiveness of the material. Adequate cooling needs to be provided in these shields in order to limit the maximum temperature. This paper describes a method to estimate steady state and transient temperature distributions in reactor shields. Results for loss of coolant in the coolant tubes have also been studied and are presented in the paper. (author). 5 figs
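The steady-state part of such a calculation reduces, in its simplest form, to conduction with internal heat generation. A finite-difference sketch for a 1-D slab with uniform gamma heating and cooled faces is given below; the material and heating values are assumed, and a real shield calculation would use a depth-dependent heating profile:

```python
import numpy as np

# Steady 1-D conduction in a concrete slab with uniform internal heating q:
# -k T'' = q, with both faces held at the coolant temperature Ts.
k, q, L, Ts = 1.5, 200.0, 1.0, 40.0   # W/m-K, W/m^3, m, degC (assumed values)
n = 201
x = np.linspace(0, L, n)
dx = x[1] - x[0]

# Central-difference system: T[i-1] - 2 T[i] + T[i+1] = -q dx^2 / k
A = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.full(n, -q * dx**2 / k)
A[0, :], A[-1, :] = 0.0, 0.0          # Dirichlet boundary rows
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = Ts
T = np.linalg.solve(A, b)

T_max_analytic = Ts + q * L**2 / (8 * k)   # peak at the slab mid-plane
```

Since the exact solution is quadratic in x, the second-order scheme reproduces the analytic mid-plane peak essentially exactly, a useful check before moving to transient or multi-region cases.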
Distributed and decentralized state estimation in gas networks as distributed parameter systems.
Ahmadian Behrooz, Hesam; Boozarjomehry, R Bozorgmehry
2015-09-01
In this paper, a framework for distributed and decentralized state estimation in high-pressure, long-distance gas transmission networks (GTNs) is proposed. The non-isothermal model of the plant, including mass, momentum and energy balance equations, is used to simulate the dynamic behavior. Because a centralized Kalman filter has several disadvantages for large-scale systems, the continuous/discrete form of the extended Kalman filter has been adapted for distributed and decentralized estimation (DDE) in these systems. Accordingly, the global model is decomposed into several subsystems, called local models. Some heuristic rules are suggested for system decomposition in gas pipeline networks. In the construction of local models, due to the existence of common states and interconnections among the subsystems, the assimilation and prediction steps of the Kalman filter are modified to take the overlapping and external states into account. However, the dynamic Riccati equation for each subsystem is constructed from the local model, which introduces a maximum error of 5% in the estimated standard deviation of the states in the benchmarks studied in this paper. The performance of the proposed methodology is demonstrated by comparing its accuracy and computational demands against a centralized Kalman filter for two viable benchmarks. In a real-life network, it is shown that while the accuracy is not significantly decreased, the real-time factor of the state estimation is increased by a factor of 10. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds, as Moody's new data show. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation, concluding that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
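The Beta-versus-kernel comparison can be sketched on a synthetic bimodal recovery-rate sample: a fitted Beta density on [0, 1] is unimodal or U-shaped and cannot track two interior modes, while a Gaussian kernel density estimate can. The mode locations and sample below are illustrative, not Moody's data:

```python
import numpy as np
from scipy.stats import beta, gaussian_kde

rng = np.random.default_rng(11)
# Synthetic bimodal recovery rates, with mass near 0.2 and near 0.8
rates = np.concatenate([rng.normal(0.2, 0.05, 500),
                        rng.normal(0.8, 0.05, 500)]).clip(0.01, 0.99)

a, b, _, _ = beta.fit(rates, floc=0, fscale=1)   # Beta MLE on [0, 1]
kde = gaussian_kde(rates)                        # Gaussian kernel density

# Compare both fits against the empirical histogram density
grid = np.linspace(0.01, 0.99, 99)
hist, edges = np.histogram(rates, bins=grid, density=True)
mids = (edges[:-1] + edges[1:]) / 2
err_beta = np.mean((beta.pdf(mids, a, b) - hist) ** 2)
err_kde = np.mean((kde(mids) - hist) ** 2)
```

The kernel estimate's markedly lower histogram error on bimodal data is the behavior the study's Chi-square test formalizes.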
Estimating investor preferences towards portfolio return distribution in investment funds
Directory of Open Access Journals (Sweden)
Margareta Gardijan
2015-03-01
Full Text Available Recent research in the field of investor preference has emphasised the need to go beyond simply analyzing the first two moments of a portfolio return distribution, as used in the mean-variance (MV) paradigm. The suggestion is to view an investor's utility function as an nth-order Taylor approximation, under the assumption that investors prefer greater values of odd moments and smaller values of even moments. In order to investigate the preferences of Croatian investment funds, an analysis of the moments of their return distributions is conducted. The sample contains data on monthly returns of 30 investment funds in Croatia for the period from January 1999 to May 2014. Using the theoretical utility functions (DARA, CARA, CRRA), we compare changes in their preferences when higher moments are included. Moreover, we investigate an extension of the CAPM model in order to find out whether including higher moments can better explain the relationship between rewards and the risk premium, and whether we can apply these findings to estimate the preferences of Croatian institutional investors. The results indicate that Croatian institutional investors do not seek compensation for bearing greater market risk.
International Nuclear Information System (INIS)
Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana
2013-01-01
The quality of electricity distribution is increasingly scrutinized by regulatory authorities, with explicit reward and penalty schemes based on quality targets having been introduced in many countries. It is therefore of prime importance to know the cost of improving quality for a distribution system operator. In this paper, we focus on one dimension of quality, the continuity of supply, and estimate the cost of preventing power outages. To do so, we use the parametric distance function approach, assuming that outages enter the firm's production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Electricité de France - Réseau Distribution) in the 2003–2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is equal to 10.7 € for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to be more expensive as quality itself improves. - Highlights: ► We estimate the implicit cost of outages for the main distribution company in France. ► For this purpose, we make use of a parametric distance function approach. ► Marginal quality improvements tend to be more expensive as quality itself improves. ► The cost of preventing one interruption varies from 1.8 € to 69.2 € (2005 prices). ► We estimate that, on average, it lies 33% above the regulated price of quality.
2017-12-01
Fig. 2 Simulation method: the process for one iteration of the simulation, repeated 250 times per combination of HR and FAR. Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance.
A Study of Adaptive Detection of Range-Distributed Targets
National Research Council Canada - National Science Library
Gerlach, Karl R
2000-01-01
.... The unknown parameters associated with the hypothesis test are the complex amplitudes in range of the desired target and the unknown covariance matrix of the additive interference, which is assumed...
Energy Technology Data Exchange (ETDEWEB)
Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)
2013-12-31
This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity generating unit parameter estimation and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. The dynamic distributed state estimation results are also stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and "play back" of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of playing back at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Authority system and the New York Power Authority's Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the "grid visibility" question. The generator parameter identification method fills an important and practical need of the industry. The "energy function" based
Multiframe Superresolution of Vehicle License Plates Based on Distribution Estimation Approach
Directory of Open Access Journals (Sweden)
Renchao Jin
2016-01-01
Full Text Available Low-resolution (LR) license plate images or videos are often captured in practical applications. In this paper, a distribution estimation based superresolution (SR) algorithm is proposed to reconstruct the license plate image. Different from previous work, the high-resolution (HR) image is estimated via the posterior probability distribution obtained using the variational Bayesian framework. To regularize the estimated HR image, a feature-specific prior model is proposed by considering the most significant characteristic of license plate images: the target has high contrast with the background. To assure the success of the SR reconstruction, models representing smoothness constraints on images are also used to regularize the estimated HR image together with the proposed feature-specific prior model. We show by way of experiments, under challenging 7 × 7 blur and zero-mean Gaussian white noise, that the proposed method achieves a peak signal-to-noise ratio (PSNR) of 22.69 dB and a structural similarity (SSIM) of 0.9022 under noise with variance 0.2, and a PSNR of 19.89 dB and an SSIM of 0.8582 even under noise with variance 0.5, which are improvements of 1.84 dB and 0.04 in comparison with other methods.
Multiplicity distributions of shower particles and target fragments in ...
Indian Academy of Sciences (India)
Institute of Theoretical Physics, Shanxi University, Taiyuan, Shanxi 030006, China. The model describes the probability distributions of different quantities [17–19]. In the Monte Carlo method, R_ij denotes random numbers in [0,1].
Joint inversion of NMR and SIP data to estimate pore size distribution of geomaterials
Niu, Qifei; Zhang, Chi
2018-03-01
There is growing interest in using geophysical tools to characterize the microstructure of geomaterials because of their non-invasive nature and applicability in the field. In these applications, multiple types of geophysical data sets are usually processed separately, which may be inadequate to constrain the key features of the target variables; simultaneous processing of multiple data sets could therefore improve the resolution. In this study, we propose a method to estimate pore size distribution by joint inversion of nuclear magnetic resonance (NMR) T2 relaxation and spectral induced polarization (SIP) spectra. The petrophysical relation between NMR T2 relaxation time and SIP relaxation time is incorporated in a nonlinear least squares problem formulation, which is solved using the Gauss-Newton method. The joint inversion scheme is applied to a synthetic sample and a Berea sandstone sample. The jointly estimated pore size distributions are very close to the true model and to results from other experimental methods. Even when knowledge of the petrophysical models of the sample is incomplete, the joint inversion can still capture the main features of the pore size distribution, including the general shape and relative peak positions of the distribution curves. The numerical example also shows that the surface relaxivity of the sample can be extracted from the joint inversion of NMR and SIP data if the diffusion coefficient of the ions in the electrical double layer is known. Compared to individual inversions, the joint inversion improves the resolution of the estimated pore size distribution because of the additional data sets. The proposed approach might constitute a first step towards a comprehensive joint inversion that can extract the full pore geometry information of a geomaterial from NMR and SIP data.
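The inversion step described here, a nonlinear least squares problem solved with Gauss-Newton, can be sketched generically. The toy residuals below stand in for the NMR and SIP data sets and are purely illustrative; they only show how joint data sets enter one stacked residual vector:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize ||residual(x)||^2 via Gauss-Newton updates."""
    x = x0.astype(float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Joint-inversion flavour: two "data sets" constraining the same parameters.
def residual(x):
    a, b = x
    r1 = np.array([a + b - 3.0])      # residual from data set 1
    r2 = np.array([2 * a - b])        # residual from data set 2
    return np.concatenate([r1, r2])   # stacked joint residual

def jacobian(x):
    return np.array([[1.0, 1.0], [2.0, -1.0]])

a, b = gauss_newton(residual, jacobian, np.array([0.0, 0.0]))
# Solves a + b = 3 and 2a = b jointly, giving a = 1, b = 2
```

In the actual method, the residuals would compare predicted NMR T2 and SIP spectra (linked through the petrophysical relation) against the measured spectra.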
Hodille, E. A.; Bernard, E.; Markelj, S.; Mougenot, J.; Becquart, C. S.; Bisson, R.; Grisolia, C.
2017-12-01
Based on macroscopic rate equation simulations of tritium migration in an actively cooled tungsten (W) plasma-facing component (PFC) using the code MHIMS (migration of hydrogen isotopes in metals), an estimate has been made of the tritium retention in the ITER W divertor target under a non-uniform exponential distribution of particle fluxes. Two grades of material are considered to be exposed to tritium ions: undamaged W and damaged W exposed to fast fusion neutrons. Due to the strong temperature gradient in the PFC, the impact of the Soret effect on tritium retention is also evaluated for both cases. From the simulations, the evolution of the tritium retention and of the tritium migration depth is obtained as a function of the implanted flux and the number of cycles. From these evolutions, extrapolation laws are built to estimate the number of cycles needed for tritium to permeate from the implantation zone to the cooled surface and to quantify the corresponding retention of tritium throughout the W PFC.
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we consider the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator is also consistent.
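A minimal sketch of the consensus-plus-innovations structure described above, for a two-sensor network in which each sensor observes only one component of a static two-parameter field (all gains and values illustrative, not the paper's exact gain schedules):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([1.0, -2.0])   # unknown static field (two parameters)
H = [np.array([[1.0, 0.0]]),    # sensor 0 sees only component 0
     np.array([[0.0, 1.0]])]    # sensor 1 sees only component 1
neighbors = {0: [1], 1: [0]}    # connected two-sensor gossip network

x = [np.zeros(2), np.zeros(2)]  # local estimates
for k in range(1, 2001):
    alpha, beta = 1.0 / k, 1.0 / k ** 0.6  # innovations and consensus gains
    new = []
    for i in range(2):
        y = H[i] @ theta + 0.1 * rng.standard_normal(1)     # noisy local measurement
        consensus = sum(x[j] - x[i] for j in neighbors[i])  # information flow
        innovation = H[i].T @ (y - H[i] @ x[i])             # information gathering
        new.append(x[i] + beta * consensus + alpha * innovation)
    x = new
# Global observability (the stacked H_i has full rank) plus connectivity
# drives both local estimates toward theta, though neither sensor alone
# observes the whole field.
```

The two time scales show up in the gains: the consensus gain decays more slowly than the innovations gain, which is the mixed time-scale structure the abstract refers to.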
Distributions of hit-numbers in single targets
Energy Technology Data Exchange (ETDEWEB)
Fowler, J F [Postgraduate Medical School, Hammersmith Hospital, London (United Kingdom)
1966-07-01
Very general models can be proposed for relating the surviving proportion of an irradiated population of cells or bacteria to the absorbed dose, but if the number of free parameters is large the model can never be tested experimentally (Zimmer; Zirkle; Tobias). A relatively simple model is therefore proposed here, based on the physical facts of energy deposition in small volumes, which are currently under active investigation (Rossi), and on cell-survival experiments over a wide range of LET (e.g. Barendsen et al.; Barendsen). It is not suggested that the model is correct or final, but only that its shortcomings should be demonstrated by comparison with experimental results before more complicated models are worth pursuing. It is basically a multihit model applied first to a single target volume, but also applicable to the situation where only one out of many potential target volumes has to be inactivated to kill the organism. It can be extended to two or more target volumes if necessary. Emphasis is placed upon the amount of energy locally deposited in certain sensitive volumes called 'target volumes'.
The space distribution of neutrons generated in massive lead target by relativistic nuclear beam
International Nuclear Information System (INIS)
Chultem, D.; Damdinsuren, Ts.; Enkh-Gin, L.; Lomova, L.; Perelygin, V.; Tolstov, K.
1993-01-01
The present paper is devoted to the implementation of solid state nuclear track detectors in the research of neutron generation in an extended lead spallation target. The measured spatial distribution of neutrons inside the lead target and the neutron distribution in the thick water moderator are assessed. (Author)
International Nuclear Information System (INIS)
Osman, Abdalla; El-Sheimy, Naser; Noureldin, Aboelmagd; Theriault, Jim; Campbell, Scott
2009-01-01
The problem of target detection and tracking in the ocean environment has attracted considerable attention due to its importance in military and civilian applications. Sonobuoys are one of the capable passive sonar systems used in underwater target detection. Target detection and bearing estimation are mainly obtained through spectral analysis of received signals. The frequency resolution provided by current techniques is limited, which affects the accuracy of target detection and bearing estimation at relatively low signal-to-noise ratio (SNR). This research investigates the development of a bearing estimation method using fast orthogonal search (FOS) for enhanced spectral estimation. FOS is employed in this research to improve both target detection and bearing estimation in the case of low-SNR inputs. The proposed methods were tested using simulated data developed for two different scenarios under different underwater environmental conditions. The results show that the proposed method is capable of enhancing the accuracy of target detection as well as bearing estimation, especially in cases of very low SNR.
Estimation of potential distribution of gas hydrate in the northern South China Sea
Wang, Chunjuan; Du, Dewen; Zhu, Zhiwei; Liu, Yonggang; Yan, Shijuan; Yang, Gang
2010-05-01
Gas hydrate research is of significant importance for securing world energy resources and has the potential to produce considerable economic benefits. Previous studies have shown that the South China Sea is an area that harbors gas hydrates. However, there is a lack of systematic investigation and understanding of the distribution of gas hydrate throughout the region. In this paper, we applied mineral resource quantitative assessment techniques to forecast and estimate the potential distribution of gas hydrate resources in the northern South China Sea. Current hydrate samples from the South China Sea are, however, too few to produce models of occurrences. Thus, according to the similarity and contrast principles of mineral deposits, we can use a similar hydrate-bearing environment with sufficient gas hydrate data as a testing ground for modeling northern South China Sea gas hydrate conditions. We selected the Gulf of Mexico, where gas hydrates have been studied extensively, to develop predictive models of gas hydrate distributions and to test errors in the model. We then compared the existing northern South China Sea hydrate data with the Gulf of Mexico characteristics, and collated the relevant data into the model. Subsequently, we applied the model to the northern South China Sea to obtain the potential gas hydrate distribution of the area and to identify significant exploration targets. Finally, we evaluated the reliability of the predicted results. The south seabed area of Taiwan Bank is recommended as a priority exploration target. The Zhujiang Mouth, Southeast Hainan, and Southwest Taiwan Basins, including the South Bijia Basin, are also recommended as exploration target areas. In addition, the method in this paper provides a useful predictive approach for gas hydrate resource assessment, giving a scientific basis for the construction and implementation of long-term planning for gas hydrate exploration and general exploitation of the seabed of China.
An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation
Sijs, J.; Hanebeck, U.; Noack, B.
2013-01-01
State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.
Energy Technology Data Exchange (ETDEWEB)
Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)
2014-07-01
In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (K_d) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The K_d values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of K_d ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The K_d best estimates were defined as the calculated GM values, assuming that K_d values were always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the K_d parameter, a suitable continuous function which contains the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best explain the distribution of the K_d values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not yet been proposed for radionuclide K_d in soils. Therefore, the aim of this work is to create CDFs for radionuclide K_d in soils.
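A small sketch of the statistical treatment described above: the best estimate as the geometric mean under a log-normal assumption, and the corresponding CDF from which confidence ranges can be derived. The K_d values below are hypothetical, purely to illustrate the calculation:

```python
import math

kd = [120.0, 35.0, 560.0, 90.0, 210.0, 48.0, 300.0]  # hypothetical Kd values (L/kg)
logs = [math.log(v) for v in kd]
n = len(logs)
mu = sum(logs) / n
sigma = (sum((x - mu) ** 2 for x in logs) / (n - 1)) ** 0.5

gm = math.exp(mu)      # geometric mean: the "best estimate" (GM)
gsd = math.exp(sigma)  # geometric standard deviation

def lognormal_cdf(x):
    """P(Kd <= x) under the fitted log-normal model."""
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

# A ~68% confidence range under this model is [gm / gsd, gm * gsd].
```

The CDF is what lets a risk assessment model draw not just the best estimate but any percentile or confidence range of K_d.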
Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes
Directory of Open Access Journals (Sweden)
Chun-mao Yeh
2016-01-01
Full Text Available This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs. Then, the rotating velocity (RV is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.
International Nuclear Information System (INIS)
Cludius, Johanna; Forrest, Sam; MacGill, Iain
2014-01-01
The Australian Renewable Energy Target (RET) has spurred significant investment in renewable electricity generation, notably wind power, over the past decade. This paper considers distributional implications of the RET for different energy users. Using time-series regression, we show that the increasing amount of wind energy has placed considerable downward pressure on wholesale electricity prices through the so-called merit order effect. On the other hand, RET costs are passed on to consumers in the form of retail electricity price premiums. Our findings highlight likely significant redistributive transfers between different energy user classes under current RET arrangements. In particular, some energy-intensive industries are benefiting from lower wholesale electricity prices whilst being largely exempted from contributing to the costs of the scheme. By contrast, many households are paying significant RET pass through costs whilst not necessarily benefiting from lower wholesale prices. A more equitable distribution of RET costs and benefits could be achieved by reviewing the scope and extent of industry exemptions and ensuring that methodologies to estimate wholesale price components in regulated electricity tariffs reflect more closely actual market conditions. More generally, these findings support the growing international appreciation that policy makers need to integrate distributional assessments into policy design and implementation. - Highlights: • The Australian RET has complex yet important distributional impacts on different energy users. • Likely wealth transfers from residential and small business consumers to large energy-intensive industry. • Merit order effects of wind likely overcompensate exempt industry for contribution to RET costs. • RET costs for households could be reduced if merit order effects were adequately passed through. • Need for distributional impact assessments when designing and implementing clean energy policy
Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms
P.A.N. Bosman (Peter); D. Thierens (Dirk); D. Thierens (Dirk)
2007-01-01
Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we
SAR target recognition and posture estimation using spatial pyramid pooling within CNN
Peng, Lijiang; Liu, Xiaohua; Liu, Ming; Dong, Liquan; Hui, Mei; Zhao, Yuejin
2018-01-01
Many convolutional neural network (CNN) architectures have been proposed to strengthen performance on synthetic aperture radar automatic target recognition (SAR-ATR), and they have obtained state-of-the-art results on target classification on the MSTAR database, but few methods address the estimation of the depression angle and azimuth angle of targets. To better learn hierarchical feature representations for both the 10-class target classification task and target posture estimation tasks, we propose a new CNN architecture with spatial pyramid pooling (SPP), which builds a hierarchy of feature maps by dividing the convolved feature maps from finer to coarser levels to aggregate local features of SAR images. Experimental results on the MSTAR database show that the proposed architecture achieves a recognition accuracy of 99.57% on the 10-class target classification task, matching the most recent state-of-the-art methods, and also performs well on target posture estimation tasks involving depression angle and azimuth angle variation. Moreover, the results support the application of deep learning to SAR target posture description.
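The SPP layer described above can be sketched in NumPy. This illustrative version max-pools a convolved feature map over 1x1, 2x2, and 4x4 grids and concatenates the results into a fixed-length vector regardless of the input's spatial size (the pyramid levels and shapes are assumptions for the sketch, not the paper's exact configuration):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over a pyramid of grids and concatenate.

    Output length is C * sum(l*l for l in levels), independent of H and W.
    """
    C, H, W = fmap.shape
    out = []
    for l in levels:
        hs = np.linspace(0, H, l + 1).astype(int)  # row boundaries of the l x l grid
        ws = np.linspace(0, W, l + 1).astype(int)  # column boundaries
        for i in range(l):
            for j in range(l):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                out.append(cell.max(axis=(1, 2)))  # per-channel max in each cell
    return np.concatenate(out)

fmap = np.random.default_rng(3).standard_normal((8, 17, 23))
v = spatial_pyramid_pool(fmap)
# Fixed-length vector of size 8 * (1 + 4 + 16) = 168, whatever H x W is.
```

The fixed output length is what lets fully connected layers follow convolutions on SAR chips of varying size.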
Distributed fusion estimation for sensor networks with communication constraints
Zhang, Wen-An; Song, Haiyu; Yu, Li
2016-01-01
This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties are presented, such as delays, packet losses, and asynchronous information... All the results are presented as algorithms, which are convenient for practical applications.
Range distribution of heavy ions in multi-elemental targets
International Nuclear Information System (INIS)
Wang Keming; Shandong Univ., Jinan; Liu Xiju; Wang Yihua; Liu Jitian; Shi Borong; Chen Huanchu
1989-01-01
Some results on the range distribution of Hg+-implanted NaSBN and CeSBN crystals are given. A computer program based on the angular diffusion model by Biersack was written to calculate the mean projected range and range straggling. For comparison, other published experimental data are also included. The comparison between experimental and theoretical values indicates that the measured projected ranges are in good agreement with those predicted by the Biersack model within experimental error, and a marked improvement in range straggling is obtained after considering the second-order energy loss. (author)
Deepwater Horizon - Estimating surface oil volume distribution in real time
Lehr, B.; Simecek-Beatty, D.; Leifer, I.
2011-12-01
Spill responders to the Deepwater Horizon (DWH) oil spill required both the relative spatial distribution and the total volume of the surface oil. The former was needed on a daily basis to plan and direct local surface recovery and treatment operations; the latter was needed less frequently to provide information for strategic response planning. Unfortunately, the standard spill observation methods were inadequate for an oil spill this size, and new, experimental methods were not ready to meet the operational demands of near real-time results. Traditional surface oil estimation tools for large spills include satellite-based sensors to define the spatial extent (but not thickness) of the oil, complemented by trained observers in small aircraft, sometimes supplemented by active or passive remote sensing equipment, to determine the surface percent coverage of the 'thick' part of the slick, where the vast majority of the surface oil exists. These tools were also applied to DWH in the early days of the spill, but the sheer size of the spill prevented synoptic observation of the surface slick from small aircraft. Also, satellite images of the spill, while large in number, varied considerably in image quality, requiring skilled interpretation to identify oil and eliminate false positives; qualified staff to perform this task were soon in short supply. However, large spills are often events that overcome organizational inertia toward the use of new technology. Two prime examples in DWH were the application of hyper-spectral scans from a high-altitude aircraft and more traditional fixed-wing aircraft using multi-spectral scans processed by a neural network to determine, respectively, absolute or relative oil thickness. But with new technology come new challenges. The hyper-spectral instrument required special viewing conditions that were not present on a daily basis and analysis infrastructure to process the data that was not available at the command
Bayesian Estimation of the Kumaraswamy Inverse Weibull Distribution
Directory of Open Access Journals (Sweden)
Felipe R.S. de Gusmao
2017-05-01
Full Text Available The Kumaraswamy Inverse Weibull distribution has the ability to model failure rates that have unimodal shapes and are quite common in reliability and biological studies. The three-parameter Kumaraswamy Inverse Weibull distribution with decreasing and unimodal failure rate is introduced. We provide a comprehensive treatment of the mathematical properties of the Kumaraswamy Inverse Weibull distribution and derive expressions for its moment generating function and its r-th generalized moment. Some properties of the model, with graphs of the density and hazard function, are discussed. We also discuss a Bayesian approach for this distribution, and an application to a real data set is presented.
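One common parameterization of the Kumaraswamy-G family applied to an inverse Weibull baseline can be sketched as follows. The parameter values are illustrative (this is an assumed parameterization for the sketch, not necessarily the paper's exact one), and the density is checked numerically to integrate to one:

```python
import math

def iw_cdf(x, alpha, beta):
    """Inverse Weibull baseline CDF: G(x) = exp(-(beta/x)^alpha), x > 0."""
    return math.exp(-((beta / x) ** alpha))

def kiw_pdf(x, a, b, alpha, beta):
    """Kumaraswamy-G density f = a*b*g*G^(a-1)*(1-G^a)^(b-1) with inverse Weibull G."""
    G = iw_cdf(x, alpha, beta)
    g = alpha * beta ** alpha * x ** (-alpha - 1) * G  # inverse Weibull density
    return a * b * g * G ** (a - 1) * (1 - G ** a) ** (b - 1)

# Numerical check that the density integrates to (approximately) one
total = sum(kiw_pdf(0.005 + 0.01 * i, 2.0, 3.0, 1.5, 1.0) * 0.01
            for i in range(10000))
```

Setting a = b = 1 recovers the plain inverse Weibull, which is how the extra two shape parameters generalize the baseline.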
Estimating tree cavity distributions from historical FIA data
Mark D. Nelson; Charlotte. Roy
2012-01-01
Tree cavities provide important habitat features for a variety of wildlife species. We describe an approach for using historical FIA data to estimate the number of trees containing cavities during the 1990s in seven states of the Upper Midwest. We estimated a total of 280 million cavity-containing trees. Iowa and Missouri had the highest percentages of cavity-...
Low Complexity Moving Target Parameter Estimation for MIMO Radar using 2D-FFT
Jardak, Seifallah
2017-06-16
In multiple-input multiple-output radar, to localize a target and estimate its reflection coefficient, a given cost function is usually optimized over a grid of points. The performance of such algorithms is directly affected by the grid resolution: increasing the number of grid points enhances the resolution of the estimator but also increases its computational complexity exponentially. In this work, two reduced-complexity algorithms are derived, based on Capon and on amplitude and phase estimation (APES), to estimate the reflection coefficient, angular location, and Doppler shift of multiple moving targets. By exploiting the structure of the terms, the cost function is brought into a form that allows us to apply the two-dimensional fast Fourier transform (2D-FFT) and reduce the computational complexity of estimation. Using a low-resolution 2D-FFT, the proposed algorithm identifies sub-optimal estimates and feeds them as initial points to the derived Newton gradient algorithm. In contrast to grid-based search algorithms, the proposed algorithm can optimally estimate on- and off-the-grid targets with very low computational complexity. A new APES cost function with better estimation performance is also discussed. Generalized expressions of the Cramér-Rao lower bound are derived to assess the performance of the proposed algorithm.
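The coarse stage of this low-complexity approach, evaluating the angle-Doppler surface on a grid via a 2D-FFT and taking the peak as an initial point for Newton refinement, can be sketched as follows (the single-target signal model and all values are illustrative, not the paper's full Capon/APES cost function):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 16, 32        # antennas x pulses
fs, fd = 0.22, -0.31  # true spatial and Doppler frequencies (cycles/sample)
m = np.arange(M)[:, None]
n = np.arange(N)[None, :]
Y = np.exp(2j * np.pi * (fs * m + fd * n)) + 0.1 * rng.standard_normal((M, N))

# Coarse stage: low-resolution 2D-FFT over the angle-Doppler plane
K = 256
S = np.fft.fft2(Y, s=(K, K))
p, q = np.unravel_index(np.argmax(np.abs(S)), S.shape)
# Map FFT bins back to frequencies in [-0.5, 0.5)
fs_hat = p / K if p < K / 2 else p / K - 1
fd_hat = q / K if q < K / 2 else q / K - 1
# These coarse on-grid estimates would seed a Newton refinement
# to reach off-grid targets without a dense grid search.
```

The grid error here is bounded by half an FFT bin (1/2K), which is why a cheap coarse FFT followed by a few Newton iterations can match a much denser grid search.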
Low Complexity Moving Target Parameter Estimation for MIMO Radar using 2D-FFT
Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim
2017-01-01
In multiple-input multiple-output radar, to localize a target and estimate its reflection coefficient, a given cost function is usually optimized over a grid of points. The performance of such algorithms is directly affected by the grid resolution: increasing the number of grid points enhances the resolution of the estimator but also increases its computational complexity exponentially. In this work, two reduced-complexity algorithms are derived, based on Capon and on amplitude and phase estimation (APES), to estimate the reflection coefficient, angular location, and Doppler shift of multiple moving targets. By exploiting the structure of the terms, the cost function is brought into a form that allows us to apply the two-dimensional fast Fourier transform (2D-FFT) and reduce the computational complexity of estimation. Using a low-resolution 2D-FFT, the proposed algorithm identifies sub-optimal estimates and feeds them as initial points to the derived Newton gradient algorithm. In contrast to grid-based search algorithms, the proposed algorithm can optimally estimate on- and off-the-grid targets with very low computational complexity. A new APES cost function with better estimation performance is also discussed. Generalized expressions of the Cramér-Rao lower bound are derived to assess the performance of the proposed algorithm.
Comparison of estimation methods for fitting the Weibull distribution
African Journals Online (AJOL)
Tersor
Tree diameter characterisation using probability distribution functions is essential for determining the structure of forest stands. This has been an intrinsic part of forest management planning, decision-making and research in recent times. The distribution of species and tree size in a forest area gives the structure of the stand.
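A common way to characterise diameters with a Weibull distribution is a maximum likelihood fit; a small sketch with synthetic diameter data (the shape/scale values are invented, and the location parameter is fixed at zero as is usual for diameter data):

```python
import numpy as np
from scipy import stats

# Hypothetical diameter-at-breast-height data in cm, drawn from a Weibull with
# assumed shape 2.3 and scale 25, then recovered by maximum likelihood.
rng = np.random.default_rng(7)
dbh_cm = stats.weibull_min.rvs(2.3, scale=25.0, size=2000, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(dbh_cm, floc=0)
```

Alternative estimators compared in such studies (moments, percentiles) can be checked against this MLE baseline.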
Ahmed, Sajid
2017-05-12
The estimation of the angular location and range of a target is a joint optimization problem. In this work, to estimate these parameters, by meticulously evaluating the phase of the received samples, low-complexity sequential and joint estimation algorithms are proposed. We use a single-input multiple-output (SIMO) system and transmit a frequency-modulated continuous-wave signal. In the proposed algorithms, it is shown that by ignoring very small terms in the phase of the received samples, the fast Fourier transform (FFT) and two-dimensional FFT can be exploited to estimate these parameters. The sequential estimation algorithm uses the FFT and requires only one received snapshot to estimate the angular location. The joint estimation algorithm uses the two-dimensional FFT to estimate the angular location and range of the target. Simulation results show that the joint estimation algorithm yields a better mean-squared error (MSE) for the estimation of the angular location and a much lower run-time compared to the conventional MUltiple SIgnal Classification (MUSIC) algorithm.
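The single-snapshot spatial-FFT idea can be sketched as follows (assumed half-wavelength ULA geometry and an invented angle, rather than the paper's exact FMCW phase model):

```python
import numpy as np

# One noise-free snapshot of a far-field source on a half-wavelength ULA.
M = 32
theta_true = 20.0                      # degrees, assumed
d = 0.5                                # element spacing in wavelengths
x = np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_true)))

# Zero-padded spatial FFT: the peak bin estimates d*sin(theta).
K = 4096
f_hat = np.fft.fftfreq(K)[np.argmax(np.abs(np.fft.fft(x, K)))]
theta_hat = np.degrees(np.arcsin(f_hat / d))
```

With heavy zero-padding the on-grid error is already a small fraction of a degree, which illustrates why an FFT front end is so much cheaper than a MUSIC spectral search.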
Tajiri, Shinya; Tashiro, Mutsumi; Mizukami, Tomohiro; Tsukishima, Chihiro; Torikoshi, Masami; Kanai, Tatsuaki
2017-11-01
Carbon-ion therapy by layer-stacking irradiation of static targets has been practised in clinical treatments. In order to apply this technique to a moving target, disturbances of carbon-ion dose distributions due to respiratory motion were studied based on measurements using a respiratory motion phantom, and the margin estimation given by √(internal margin² + setup margin²) was assessed. We assessed the volume in which the variation in the ratio of the dose for a target moving due to respiration relative to the dose for a static target was within 5%. The margins were insufficient for use with layer-stacking irradiation of a moving target, and an additional margin was required. Lateral movement of a target converts to range variation, as the thickness of the range compensator changes with the movement of the target. Although the additional margin changes according to the shape of the ridge filter, dose uniformity of 5% can be achieved for a spherical target 93 mm in diameter when the upward range variation is limited to 5 mm and an additional margin of 2.5 mm is applied in the case of our ridge filter. Dose uniformity in a clinical target largely depends on the shape of the mini-peak as well as on the bolus shape. We have shown the relationship between range variation and dose uniformity. In actual therapy, the upper limit of target movement should be considered by assessing the bolus shape. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
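The quadrature margin recipe assessed above is a one-line computation; the margin values below are invented for illustration (only the 2.5 mm additional margin is quoted from the abstract):

```python
import math

# Illustrative numbers only (not the study's clinical values): combine internal
# and setup margins in quadrature, then add the extra margin the study found
# necessary for layer-stacking irradiation of a moving target.
internal_margin_mm = 3.0      # assumed
setup_margin_mm = 2.0         # assumed
combined_mm = math.hypot(internal_margin_mm, setup_margin_mm)
total_mm = combined_mm + 2.5  # additional margin quoted for the ridge filter case
```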
Joint angle and Doppler frequency estimation of coherent targets in monostatic MIMO radar
Cao, Renzheng; Zhang, Xiaofei
2015-05-01
This paper discusses the problem of joint direction of arrival (DOA) and Doppler frequency estimation of coherent targets in a monostatic multiple-input multiple-output radar. In the proposed algorithm, we first perform a reduced-dimension (RD) transformation on the received signal and then use the forward spatial smoothing (FSS) technique to decorrelate the coherent signals and obtain a joint estimate of DOA and Doppler frequency by exploiting the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. The jointly estimated parameters of the proposed RD-FSS-ESPRIT are automatically paired. Compared with the conventional FSS-ESPRIT algorithm, our RD-FSS-ESPRIT algorithm has much lower complexity and better estimation performance for both DOA and frequency. The variance of the estimation error and the Cramér-Rao bound of the DOA and frequency estimation are derived. Simulation results show the effectiveness and improvement of our algorithm.
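The decorrelation step (FSS) used above can be sketched in isolation; the array size, subarray length, and source angles below are invented, and the point is only the rank-restoration effect that makes ESPRIT applicable to coherent sources:

```python
import numpy as np

def forward_spatial_smoothing(R, sub_len):
    """Average the covariances of overlapping subarrays; this restores the
    covariance rank that coherent (fully correlated) sources collapse."""
    M = R.shape[0]
    L = M - sub_len + 1
    return sum(R[l:l + sub_len, l:l + sub_len] for l in range(L)) / L

# Two coherent sources on a 10-element half-wavelength ULA (toy setup).
n = np.arange(10)
steer = lambda deg: np.exp(1j * np.pi * n * np.sin(np.radians(deg)))
s = steer(10.0) + 0.8 * steer(-25.0)     # same waveform, fixed gains -> coherent
R = np.outer(s, s.conj())                # rank 1 despite two sources
Rs = forward_spatial_smoothing(R, sub_len=6)
rank_before = np.linalg.matrix_rank(R)
rank_after = np.linalg.matrix_rank(Rs)
```

After smoothing, the covariance rank equals the number of coherent sources, so a subspace method (ESPRIT, MUSIC) can resolve them at the cost of the reduced subarray aperture.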
Parameter estimation of the zero inflated negative binomial beta exponential distribution
Sirichantra, Chutima; Bodhisuwan, Winai
2017-11-01
The zero inflated negative binomial-beta exponential (ZINB-BE) distribution is developed as an alternative distribution for excessive zero counts with overdispersion. The ZINB-BE distribution is a mixture of two distributions, the Bernoulli and the negative binomial-beta exponential distributions. In this work, some characteristics of the proposed distribution are presented, such as the mean and variance. Maximum likelihood estimation is applied to parameter estimation of the proposed distribution. Finally, results of a Monte Carlo simulation study suggest that the estimators have high efficiency when the sample size is large.
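The zero-inflation construction and its maximum likelihood fit can be sketched with a simplified stand-in, a plain zero-inflated negative binomial (the beta-exponential mixing layer of the ZINB-BE is omitted here, and all parameter values are invented):

```python
import numpy as np
from scipy import stats, optimize

# Zero-inflated mixture: P(0) = pi + (1 - pi) * f(0), P(k) = (1 - pi) * f(k)
# for k > 0, with f a negative binomial pmf.
def zinb_nll(params, y):
    pi, r, p = params
    f = stats.nbinom.pmf(y, r, p)
    lik = np.where(y == 0, pi + (1 - pi) * f, (1 - pi) * f)
    return -np.sum(np.log(lik))

rng = np.random.default_rng(0)
n_obs = 5000
true_pi, true_r, true_p = 0.3, 4, 0.5
y = stats.nbinom.rvs(true_r, true_p, size=n_obs, random_state=rng)
y[rng.random(n_obs) < true_pi] = 0              # inject structural zeros

res = optimize.minimize(zinb_nll, x0=[0.2, 2.0, 0.4], args=(y,),
                        method="L-BFGS-B",
                        bounds=[(1e-6, 1 - 1e-6), (0.1, 50.0), (1e-6, 1 - 1e-6)])
pi_hat, r_hat, p_hat = res.x
```

The structural-zero proportion pi is identifiable because the negative binomial itself assigns positive probability to zero; the same likelihood structure carries over when f is replaced by the NB-BE pmf.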
Oguchi, Masahiro; Fuse, Masaaki
2015-02-03
Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distributions. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
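The simplification described above, holding the shape parameter constant so the whole distribution follows from the average lifespan, can be sketched for a Weibull lifespan model (the shape value 2.6 and the 14-year average are assumed for illustration, not taken from the paper):

```python
import math

# If the Weibull shape k is fixed, the scale follows from the mean lifespan via
# mean = scale * Gamma(1 + 1/k), and the whole distribution is determined.
def weibull_scale_from_mean(mean_lifespan_yr, shape):
    return mean_lifespan_yr / math.gamma(1.0 + 1.0 / shape)

scale = weibull_scale_from_mean(14.0, 2.6)             # 14-year average, assumed
survives_20yr = math.exp(-((20.0 / scale) ** 2.6))     # share still in use at 20 yr
```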
Energy Technology Data Exchange (ETDEWEB)
Takada, Hiroshi; Kasugai, Yoshimi; Nakashima, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ino, Takashi; Kawai, Masayoshi [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Jerde, Eric; Glasgow, David [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2000-02-01
A neutronics experiment was carried out using a thick mercury target at the Alternating Gradient Synchrotron (AGS) facility of Brookhaven National Laboratory in the framework of the ASTE (AGS Spallation Target Experiment) collaboration. Reaction rate distributions around the target were measured by the activation technique at incident proton energies of 1.6, 12 and 24 GeV. Various activation detectors, such as the ¹¹⁵In(n,n')¹¹⁵ᵐIn, ⁹³Nb(n,2n)⁹²ᵐNb, and ²⁰⁹Bi(n,xn) reactions with threshold energies ranging from 0.3 to 70.5 MeV, were employed to obtain the reaction rate data for estimating the spallation source neutron characteristics of the mercury target. It was found from the measured ¹¹⁵In(n,n')¹¹⁵ᵐIn reaction rate distribution that the number of leakage neutrons becomes maximum at about 11 cm from the top of the hemisphere of the mercury target for 1.6-GeV proton incidence, and that the peak position moves toward the forward direction as the incident proton energy increases. Similar results were observed in the reaction rate distributions of the other activation detectors. The experimental procedures and a full set of experimental data in numerical form are summarized in this report. (author)
Sass, D. A.; Schmitt, T. A.; Walker, C. M.
2008-01-01
Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.; Cañ ette, Isabel
2011-01-01
to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0–∞) and any AUC(0–∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0–∞) values and the tissue-to-plasma AUC(0–∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0–∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0–∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
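For reference, the deterministic quantity that the Bayesian method above puts a posterior on is the standard NCA computation: trapezoidal AUC to the last sample plus the tail extrapolation C_last/λz. A sketch with an invented mono-exponential profile:

```python
import numpy as np

# Assumed sampling times (h) and a mono-exponential concentration profile.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
c = 10.0 * np.exp(-0.15 * t)

auc_last = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0)  # trapezoidal AUC(0.5-24)
lambda_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]    # terminal log-linear slope
auc_inf = auc_last + c[-1] / lambda_z                   # AUC(0-inf) extrapolation
```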
Hydroacoustic Estimates of Fish Density Distributions in Cougar Reservoir, 2011
Energy Technology Data Exchange (ETDEWEB)
Ploskey, Gene R.; Zimmerman, Shon A.; Hennen, Matthew J.; Batten, George W.; Mitchell, T. D.
2012-09-01
Day and night mobile hydroacoustic surveys were conducted once each month from April through December 2011 to quantify the horizontal and vertical distributions of fish throughout Cougar Reservoir, Lane County, Oregon.
Transmuted of Rayleigh Distribution with Estimation and Application on Noise Signal
Ahmed, Suhad; Qasim, Zainab
2018-05-01
This paper deals with transforming the one-parameter Rayleigh distribution into a transmuted probability distribution by introducing a new parameter (λ), since the studied distribution is useful in representing signal data distributions and failure data models. The transmuted parameter, with |λ| ≤ 1, is estimated along with the original parameter (θ) by the methods of moments and maximum likelihood using different sample sizes (n = 25, 50, 75, 100), and the estimation results are compared by a statistical measure (mean square error, MSE).
Parameter estimation of sub-Gaussian stable distributions
Czech Academy of Sciences Publication Activity Database
Omelchenko, Vadym
2014-01-01
Vol. 50, No. 6 (2014), pp. 929-949 ISSN 0023-5954 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords: stable distribution * sub-Gaussian distribution * maximum likelihood Subject RIV: AH - Economics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/omelchenko-0439707.pdf
Irradiation distribution diagrams and their use for estimating collectable energy
International Nuclear Information System (INIS)
Ronnelid, M.; Karlsson, B.
1997-01-01
A method for summarising annual or seasonal solar irradiation data in irradiation distribution diagrams, including both direct and diffuse irradiation, is outlined. The practical use of irradiation distribution diagrams is discussed in the paper. Examples are given for the calculation of collectable irradiation on flat plate collectors or trough-like concentrators like the compound parabolic concentrator (CPC), and for the calculation of overhang geometries for windows to prevent overheating of buildings. (author)
Target Tracking in 3-D Using Estimation Based Nonlinear Control Laws for UAVs
Directory of Open Access Journals (Sweden)
Mousumi Ahmed
2016-02-01
Full Text Available This paper presents an estimation-based, backstepping-like control law design for an Unmanned Aerial Vehicle (UAV) to track a moving target in 3-D space. A ground-based sensor or an onboard seeker antenna provides range, azimuth angle, and elevation angle measurements to a chaser UAV that implements an extended Kalman filter (EKF) to estimate the full state of the target. A nonlinear controller then utilizes this estimated target state and the chaser's state to provide speed, flight path, and course/heading angle commands to the chaser UAV. Tracking performance with respect to measurement uncertainty is evaluated for three cases: (1) stationary white noise; (2) stationary colored noise; and (3) non-stationary (range-correlated) white noise. Furthermore, in an effort to improve tracking performance, the measurement model is made more realistic by taking into consideration range-dependent uncertainties in the measurements, i.e., as the chaser closes in on the target, measurement uncertainties are reduced in the EKF, thus providing the UAV with more accurate control commands. Simulation results for these cases are shown to illustrate target state estimation and trajectory tracking performance.
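The measurement side of such an EKF can be sketched in isolation: the sensor reports range, azimuth and elevation, which are nonlinear in the Cartesian target position, so the update linearizes the measurement model about the current estimate. All numbers below are invented, the state is reduced to position only, and the Jacobian is taken numerically for brevity:

```python
import numpy as np

def h(p):                                   # measurement model: [range, az, el]
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    return np.array([r, np.arctan2(y, x), np.arcsin(z / r)])

def num_jacobian(f, p, eps=1e-6):           # central-difference Jacobian of h
    J = np.zeros((3, 3))
    for k in range(3):
        dp = np.zeros(3)
        dp[k] = eps
        J[:, k] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return J

target = np.array([1000.0, 500.0, 200.0])   # true position (m), assumed
x_est = np.array([900.0, 480.0, 190.0])     # prior estimate
P = np.eye(3) * 100.0                       # prior covariance
R = np.diag([25.0, 1e-4, 1e-4])             # range/angle noise variances, assumed

z = h(target)                               # noise-free measurement for clarity
H = num_jacobian(h, x_est)
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_new = x_est + K @ (z - h(x_est))          # EKF measurement update
P_new = (np.eye(3) - K @ H) @ P
```

The range-dependent uncertainty idea in the abstract corresponds to shrinking the entries of R as the estimated range decreases.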
Estimates of global cyanobacterial biomass and its distribution
Garcia-Pichel, Ferran; Belnap, Jayne; Neuer, Susanne; Schanz, Ferdinand
2003-01-01
We estimated global cyanobacterial biomass in the main reservoirs of cyanobacteria on Earth: marine and freshwater plankton, arid land soil crusts, and endoliths. Estimates were based on typical population density values as measured during our research, or as obtained from literature surveys, which were then coupled with data on global geographical area coverage. Among the marine plankton, the global biomass of Prochlorococcus reaches 120 × 1012 grams of carbon (g C), and that of Synechococcus some 43 × 1012 g C. This makes Prochlorococcus and Synechococcus, in that order, the most abundant cyanobacteria on Earth. Tropical marine blooms of Trichodesmium account for an additional 10 × 1012 g C worldwide. In terrestrial environments, the mass of cyanobacteria in arid land soil crusts is estimated to reach 54 × 1012 g C and that of arid land endolithic communities an additional 14 × 1012 g C. The global biomass of planktic cyanobacteria in lakes is estimated to be around 3 × 1012 g C. Our conservative estimates, which did not include some potentially significant biomass reservoirs such as polar and subarctic areas, topsoils in subhumid climates, and shallow marine and freshwater benthos, indicate that the total global cyanobacterial biomass is in the order of 3 × 1014 g C, surpassing a thousand million metric tons (1015 g) of wet biomass.
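The arithmetic behind the quoted global total can be reproduced directly from the reservoir figures in the abstract:

```python
# Reservoir estimates quoted above, in units of 1e12 g of carbon (teragrams C).
biomass_tg_c = {
    "Prochlorococcus": 120.0,
    "Synechococcus": 43.0,
    "Trichodesmium blooms": 10.0,
    "arid-land soil crusts": 54.0,
    "arid-land endoliths": 14.0,
    "lake plankton": 3.0,
}
# Summing the listed reservoirs gives ~2.4e14 g C, i.e. on the order of the
# 3e14 g C conservative total quoted in the abstract.
total_g_c = sum(biomass_tg_c.values()) * 1e12
```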
Estimation of current density distribution under electrodes for external defibrillation
Directory of Open Access Journals (Sweden)
Papazov Sava P
2002-12-01
Full Text Available Background: Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient's chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators presents new demanding requirements for the structure of electrodes. Method and Results: Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low-resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion: The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise.
Methodology for estimation of potential for solar water heating in a target area
International Nuclear Information System (INIS)
Pillai, Indu R.; Banerjee, Rangan
2007-01-01
Proper estimation of the potential of any renewable energy technology is essential for planning and promotion of the technology. The methods reported in the literature for estimating the potential of solar water heating in a target area are aggregate in nature. A methodology for potential estimation (technical, economic and market potential) of solar water heating in a target area is proposed in this paper. This methodology links the micro-level factors and macro-level market effects affecting the diffusion or adoption of solar water heating systems. Different sectors with end uses of low-temperature hot water are considered for potential estimation. Potential is estimated at each end use point by simulation using TRNSYS, taking micro-level factors into account. The methodology is illustrated for a synthetic area in India with an area of 2 sq. km and a population of 10,000. The end use sectors considered are residential, hospitals, nursing homes and hotels. The estimated technical potential and market potential are 1700 m² and 350 m² of collector area, respectively. The annual energy savings for the technical potential in the area are estimated as 110 kW h/capita and 0.55 million kW h/sq. km, with an annual average peak saving of 1 MW. The annual savings are 650 kW h per m² of collector area and account for approximately 3% of the total electricity consumption of the target area. Some of the salient features of the model are the factors considered for potential estimation, the estimation of the electrical usage pattern for a typical day, the amount of electricity savings, and the savings during peak load. The framework is general and enables accurate estimation of the potential of solar water heating for a city or block. Energy planners and policy makers can use this framework for tracking and promoting the diffusion of solar water heating systems. (author)
Penalized Maximum Likelihood Estimation for univariate normal mixture distributions
International Nuclear Information System (INIS)
Ridolfi, A.; Idier, J.
2001-01-01
Due to singularities of the likelihood function, the maximum likelihood approach for the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is solved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
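The effect of the inverted-gamma penalty shows up only in the variance M-step: with an IG(a, b) prior on each σ², the update gains additive constants and can never collapse to zero. A sketch of such a penalized EM for a two-component univariate mixture (the data, a, and b are illustrative choices, and the update below is the posterior-mode form under an IG prior, stated as an assumption rather than the paper's exact derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 1.0, 700)])
K, a, b = 2, 1.0, 0.5                     # components and assumed IG(a, b) prior
w = np.full(K, 1 / K)
mu = np.array([-1.0, 1.0])
var = np.ones(K)

for _ in range(200):
    # E-step: responsibilities under the current Gaussian components
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    nk = r.sum(axis=0)
    # M-step: standard weight/mean updates, penalized variance update
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    s = (r * (x[:, None] - mu) ** 2).sum(axis=0)
    var = (2 * b + s) / (2 * a + 2 + nk)  # strictly positive: no singular estimate
```

Even if a component captured a single point (s ≈ 0), the update keeps var ≥ 2b/(2a + 3) > 0, which is exactly the non-singularity property discussed in the abstract.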
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
First, new sufficient conditions for the identifiability of simultaneous input and parameter estimation are stated. Then, by means of a Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat transport model using simulated data.
Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks
Yu, Chao
2013-01-01
In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…
Linear Estimation of Standard Deviation of Logistic Distribution ...
African Journals Online (AJOL)
The paper presents a theoretical method based on order statistics and a FORTRAN program for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramér-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...
Estimating Functions of Distributions Defined over Spaces of Unknown Size
Directory of Open Access Journals (Sweden)
David H. Wolpert
2013-10-01
Full Text Available We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior's concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple "Irrelevance of Unseen Variables" (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With the hyperpriors implicitly used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations, like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
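A Monte Carlo sketch of the basic setting discussed above, with the event-space size m and concentration c held fixed rather than given the paper's hyperprior (the histogram and the c/m spreading convention are assumptions for illustration):

```python
import numpy as np

# Bayesian entropy estimation under a symmetric Dirichlet prior: sample the
# Dirichlet posterior over the distribution p and average the entropy H(p).
rng = np.random.default_rng(42)
counts = np.array([50, 30, 15, 5, 0, 0])        # observed histogram, m = 6
c = 1.0                                          # concentration, spread as c/m
alpha = counts + c / len(counts)                 # posterior Dirichlet parameters
post = rng.dirichlet(alpha, size=20000)
entropy_samples = -np.sum(post * np.log(post + 1e-300), axis=1)
h_mean, h_sd = entropy_samples.mean(), entropy_samples.std()
```

The hyperprior treatment in the paper would additionally average these posterior expectations over P(c) and P(m).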
Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie
2011-03-22
Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological events in the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by
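The network-fragility idea can be sketched on a toy graph (this invented five-node graph is a stand-in for the clotting cascade, and "global efficiency" here is the standard average of inverse shortest-path lengths):

```python
from collections import deque

def efficiency(adj):
    """Global efficiency: average of 1/d(i,j) over ordered node pairs,
    with unreachable pairs contributing zero."""
    nodes = list(adj)
    total = 0.0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:                         # breadth-first search from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    n = len(nodes)
    return total / (n * (n - 1))

# Toy stand-in for a signaling cascade; removing a node mimics full inhibition
# of that target, and the efficiency drop ranks how fragile the network is there.
cascade = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"],
           "D": ["B", "E"], "E": ["C", "D"]}
base = efficiency(cascade)
drop = {}
for node in cascade:
    sub = {u: [v for v in nb if v != node]
           for u, nb in cascade.items() if u != node}
    drop[node] = base - efficiency(sub)
most_fragile = max(drop, key=drop.get)
```

In the paper's setting, this fragility ranking (here it picks the hub node) is what singled out factor Xa and thrombin, and the efficiency drop is then weighted by docking-predicted inhibition rather than full node removal.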
Distribution Line Parameter Estimation Under Consideration of Measurement Tolerances
DEFF Research Database (Denmark)
Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena
2016-01-01
conductance that the absolute compensated error is −1.05% and −1.07% for both representations, as opposed to the expected uncompensated error of −79.68%. Identification of a laboratory distribution line using real measurement data yields a deviation of 6.75% and 4.00%, respectively, from a calculation...
Can anchovy age structure be estimated from length distribution ...
African Journals Online (AJOL)
The analysis provides a new time-series of proportions-at-age 1, together with associated standard errors, for input into assessments of the resource. The results also caution against the danger of scientists reading more information into data than is really there. Keywords: anchovy, effective sample size, length distribution, ...
A determination of parton distributions with faithful uncertainty estimation
International Nuclear Information System (INIS)
Ball, Richard D.; Del Debbio, Luigi; Forte, Stefano; Guffanti, Alberto; Latorre, Jose I.; Piccione, Andrea; Rojo, Juan; Ubiali, Maria
2009-01-01
We present the determination of a set of parton distributions of the nucleon, at next-to-leading order, from a global set of deep-inelastic scattering data: NNPDF1.0. The determination is based on a Monte Carlo approach, with neural networks used as unbiased interpolants. This method, previously discussed by us and applied to a determination of the nonsinglet quark distribution, is designed to provide a faithful and statistically sound representation of the uncertainty on parton distributions. We discuss our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the evolution equations and its benchmarking, and the method used to compute physical observables. We discuss the parametrization and fitting of neural networks, and the algorithm used to determine the optimal fit. We finally present our set of parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent parton sets. We use it to compute the benchmark W and Z cross sections at the LHC. We discuss issues of delivery and interfacing to commonly used packages such as LHAPDF.
Estimates of the Sampling Distribution of Scalability Coefficient H
Van Onna, Marieke J. H.
2004-01-01
Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
Angular distributions of target black fragments in nucleus–nucleus collisions at high energy
International Nuclear Information System (INIS)
Liu, Fuhu; Abd Allah, N.N.; Zhang, Donghai; Duan, Maiying
2003-01-01
The experimental results of space, azimuthal, and projected angular distributions of target black fragments produced in silicon-emulsion collisions at 4.5A GeV/c (the Dubna energy) are reported. A multi-source ideal gas model is suggested to describe the experimental angular distributions. The Monte Carlo calculated results are in agreement with the experimental data. (author)
Brand market positions estimation and defining the strategic targets of its development
S.M. Makhnusha
2010-01-01
In this article the author generalizes the concept of brand characteristics which influence its profitability and market positions. An approach to brand market positions estimation and defining the strategic targets of its development is proposed. Keywords: brand, brand expansion, brand extension, brand value, brand power, brand relevance, brand awareness.
Multimedia approach to estimating target cleanup levels for soils at hazardous waste sites
International Nuclear Information System (INIS)
Hwang, S.T.
1990-04-01
Contaminated soils at hazardous and nuclear waste sites pose a potential threat to human health via transport through environmental media and subsequent human intake. To minimize health risks, it is necessary to identify those risks and ensure that appropriate actions are taken to protect public health. The regulatory process may typically include identification of target cleanup levels and evaluation of the effectiveness of remedial alternatives and the corresponding reduction in risks at a site. The US Environmental Protection Agency (EPA) recommends that exposure assessments be combined with toxicity information to quantify the health risk posed by a specific site. This recommendation then forms the basis for establishing target cleanup levels. An exposure assessment must first identify the chemical concentration in a specific medium (soil, water, air, or food) and estimate the exposure potential based on human intake from that medium; the result is then combined with health criteria to estimate the upper-bound health risks for noncarcinogens and carcinogens. Estimation of target cleanup levels uses these same principles but can proceed in reverse order: the procedure starts from establishing a permissible health-effect level and ends with an estimated target cleanup level obtained through the exposure assessment process. 17 refs
Using the Pareto Distribution to Improve Estimates of Topcoded Earnings
Philip Armour; Richard V. Burkhauser; Jeff Larrimore
2014-01-01
Inconsistent censoring in the public-use March Current Population Survey (CPS) limits its usefulness in measuring labor earnings trends. Using Pareto estimation methods with less-censored internal CPS data, we create an enhanced cell-mean series to capture top earnings in the public-use CPS. We find that previous approaches for imputing topcoded earnings systematically understate top earnings. Annual earnings inequality trends since 1963 using our series closely approximate those found by Kop...
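The Pareto cell-mean construction described in this abstract can be sketched in a few lines. The sketch below is illustrative, not the authors' code: it fits the Pareto tail index by maximum likelihood (a Hill-type estimator) on synthetic earnings above a threshold, then replaces a hypothetical topcode with the conditional mean α·t/(α−1); all numbers are invented.

```python
import math
import random

def pareto_alpha_ml(values, x_m):
    """ML (Hill-type) estimate of the Pareto tail index from values >= x_m."""
    tail = [v for v in values if v >= x_m]
    return len(tail) / sum(math.log(v / x_m) for v in tail)

def cell_mean_above(topcode, alpha):
    """Mean of a Pareto(alpha) variable conditional on exceeding the topcode."""
    return alpha * topcode / (alpha - 1.0)

random.seed(1)
alpha_true, x_m = 2.5, 1.0
# Inverse-CDF draws from a Pareto distribution: X = x_m * U**(-1/alpha)
earnings = [x_m * random.random() ** (-1.0 / alpha_true) for _ in range(200_000)]
alpha_hat = pareto_alpha_ml(earnings, x_m)
# Replace a hypothetical topcode of 4.0 with the estimated conditional (cell) mean:
cell_mean = cell_mean_above(4.0, alpha_hat)
```

With a finite tail index α > 1 this conditional mean exists, which is what makes the cell-mean series a workable stand-in for censored top earnings.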
Quantification Model for Estimating Temperature Field Distributions of Apple Fruit
Zhang , Min; Yang , Le; Zhao , Huizhong; Zhang , Leijie; Zhong , Zhiyou; Liu , Yanling; Chen , Jianhua
2009-01-01
International audience; A quantification model of transient heat conduction was provided to simulate apple fruit temperature distribution in the cooling process. The model was based on the energy variation of the apple fruit at different points. It took into account heat exchange of the representative elemental volume, metabolic heat, and external heat. The following conclusions could be obtained: first, the quantification model can satisfactorily describe the tendency of apple fruit temperature dis...
Marine Biodiversity in the Caribbean: Regional Estimates and Distribution Patterns
Miloslavich, Patricia; Díaz, Juan Manuel; Klein, Eduardo; Alvarado, Juan José; Díaz, Cristina; Gobin, Judith; Escobar-Briones, Elva; Cruz-Motta, Juan José; Weil, Ernesto; Cortés, Jorge; Bastidas, Ana Carolina; Robertson, Ross; Zapata, Fernando; Martín, Alberto; Castillo, Julio; Kazandjian, Aniuska; Ortiz, Manuel
2010-01-01
This paper provides an analysis of the distribution patterns of marine biodiversity and summarizes the major activities of the Census of Marine Life program in the Caribbean region. The coastal Caribbean region is a large marine ecosystem (LME) characterized by coral reefs, mangroves, and seagrasses, but including other environments, such as sandy beaches and rocky shores. These tropical ecosystems incorporate a high diversity of associated flora and fauna, and the nations that border the Caribbean collectively encompass a major global marine biodiversity hot spot. We analyze the state of knowledge of marine biodiversity based on the geographic distribution of georeferenced species records and regional taxonomic lists. A total of 12,046 marine species are reported in this paper for the Caribbean region. These include representatives from 31 animal phyla, two plant phyla, one group of Chromista, and three groups of Protoctista. Sampling effort has been greatest in shallow, nearshore waters, where there is relatively good coverage of species records; offshore and deep environments have been less studied. Additionally, we found that the currently accepted classification of marine ecoregions of the Caribbean did not apply for the benthic distributions of five relatively well known taxonomic groups. Coastal species richness tends to concentrate along the Antillean arc (Cuba to the southernmost Antilles) and the northern coast of South America (Venezuela – Colombia), while no pattern can be observed in the deep sea with the available data. Several factors make it impossible to determine the extent to which these distribution patterns accurately reflect the true situation for marine biodiversity in general: (1) highly localized concentrations of collecting effort and a lack of collecting in many areas and ecosystems, (2) high variability among collecting methods, (3) limited taxonomic expertise for many groups, and (4) differing levels of activity in the study of
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, the calculation of these expected values involves integrals that are difficult to evaluate in closed form. Therefore, an approximation is obtained by generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
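The MCMC step described above can be made concrete with a Gibbs sampler under Jeffreys' prior for the simplest scalar analogue of the model (a normal mean μ and variance σ² in place of the full multivariate β and Σ, whose conditionals are multivariate normal and inverse Wishart). The data and all parameter values below are made up for the example.

```python
import math
import random
import statistics

random.seed(7)
# Simulated data (hypothetical): y_i = mu + noise, with mu = 3, sigma = 2.
mu_true, sigma_true, n = 3.0, 2.0, 400
y = [random.gauss(mu_true, sigma_true) for _ in range(n)]
ybar = statistics.fmean(y)

# Gibbs sampler under Jeffreys' prior p(mu, sigma^2) ∝ 1/sigma^2:
#   mu | sigma^2, y   ~  Normal(ybar, sigma^2 / n)
#   sigma^2 | mu, y   ~  Inverse-Gamma(n/2, sum_i (y_i - mu)^2 / 2)
mu, sigma2 = ybar, 1.0
draws_mu, draws_s2 = [], []
for it in range(3000):
    mu = random.gauss(ybar, math.sqrt(sigma2 / n))
    ss = sum((yi - mu) ** 2 for yi in y)
    # Inverse-gamma draw via a gamma variate: if G ~ Gamma(a, scale=1/b), 1/G ~ IG(a, b)
    sigma2 = 1.0 / random.gammavariate(n / 2.0, 2.0 / ss)
    if it >= 500:  # discard burn-in
        draws_mu.append(mu)
        draws_s2.append(sigma2)

post_mu = statistics.fmean(draws_mu)   # posterior mean of mu, near 3
post_s2 = statistics.fmean(draws_s2)   # posterior mean of sigma^2, near 4
```

The posterior means are read off the retained draws, exactly as the paper reads its estimates of β and Σ off the marginal posterior expectations.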
ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS
Directory of Open Access Journals (Sweden)
Dietrich Stoyan
2011-05-01
Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
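A minimal numerical illustration of the quantity being estimated (not of Hanisch's estimator itself, which adds a Horvitz-Thompson edge correction): a naive border-corrected estimate of the nearest-neighbour distance distribution D(r) for a simulated binomial point process, compared with the Poisson benchmark D(r) = 1 − exp(−λπr²). All parameter values are arbitrary.

```python
import math
import random

random.seed(3)
lam = 200.0   # intensity: expected points per unit area
n = 200       # binomial process with n points in the unit square (≈ Poisson(lam))
pts = [(random.random(), random.random()) for _ in range(n)]

def nn_dist(p, pts):
    """Distance from p to its nearest neighbour among the other points."""
    return min(math.dist(p, q) for q in pts if q is not p)

# Naive border correction: evaluate only at points away from the square's edge,
# so the true nearest neighbour is unlikely to lie outside the window.
inner = [p for p in pts if 0.1 < p[0] < 0.9 and 0.1 < p[1] < 0.9]
r = 0.04
d_hat = sum(nn_dist(p, pts) <= r for p in inner) / len(inner)
d_poisson = 1.0 - math.exp(-lam * math.pi * r * r)   # theoretical value, ≈ 0.63
```

The minus-sampling idea is visible here in embryo: points too close to the boundary are discarded rather than corrected, which is exactly the inefficiency that Hanisch's estimator improves on.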
International Nuclear Information System (INIS)
Salman, Abdullahi M.; Li, Yue; Stewart, Mark G.
2015-01-01
Over the years, power distribution systems have been vulnerable to extensive damage from hurricanes, which can cause power outages resulting in millions of dollars of economic losses and restoration costs. Most outages result from the failure of distribution support structures. Various methods of strengthening distribution systems have been proposed and studied. Some of these methods, such as undergrounding of the system, have been shown to be unjustified from an economic point of view. A potentially cost-effective strategy is targeted hardening of the system. This, however, requires a method of determining the critical parts of a system that, when strengthened, will have the greatest impact on reliability. This paper presents a framework for studying the effectiveness of targeted hardening strategies on power distribution systems subjected to hurricanes. The framework includes a methodology for evaluating system reliability that relates failure of poles to power delivery, determination of critical parts of a system, hurricane hazard analysis, and consideration of decay of distribution poles. The framework also incorporates cost analysis that considers economic losses due to power outages. A notional power distribution system is used to demonstrate the framework by evaluating and comparing the effectiveness of three hardening measures. - Highlights: • Risk assessment of power distribution systems subjected to hurricanes is carried out. • A framework for studying the effectiveness of targeted hardening strategies is presented. • A system reliability method is proposed. • Targeted hardening is cost effective for existing systems. • Economic losses due to power outage should be considered for cost analysis.
Zheng, Z. Y.; Zhang, S. Q.; Gao, L.; Gao, H.
2015-05-01
A "comb" structure of beam intensity distribution is designed and achieved to measure micrometer-level target displacements in laser plasma propulsion. Based on the "comb" structure, the target displacement generated by nanosecond laser ablation of a solid target is measured and discussed. It is found that the "comb" structure is more suitable for a thin-film target with a velocity lower than tens of millimeters per second. Combined with a light-electric monitor, the "comb" structure can be used to measure velocities over a large range.
Directory of Open Access Journals (Sweden)
N. K. Sajeevkumar
2014-09-01
Full Text Available In this article, we derive the Best Linear Unbiased Estimator (BLUE) of the location parameter of certain distributions with known coefficient of variation using record values. Efficiency comparisons are also made between the proposed estimator and some of the usual estimators. Finally, we use real-life data to illustrate the utility of the results developed in this article.
Branch current state estimation of three phase distribution networks suitable for paralellization
Blaauwbroek, N.; Nguyen, H.P.; Gibescu, M.; Slootweg, J.G.
2017-01-01
The evolution of distribution networks from passive to active distribution systems puts new requirements on the monitoring and control capabilities of these systems. The development of state estimation algorithms to gain insight in the actual system state of a distribution network has resulted in a
DEFF Research Database (Denmark)
Bessler, Sanford; Kemal, Mohammed Seifu; Silva, Nuno
2018-01-01
Demand management uses the interaction and information exchange between multiple control functions in order to achieve goals that can vary across application contexts. Since several stakeholders are involved, these may have diverse objectives and even use different architectures to actively manage power demand. This paper utilizes an existing distributed demand management architecture in order to provide the following contributions: (1) it develops and evaluates a set of algorithms that combine the optimization of energy costs under variable day-ahead prices with the goal of improving distribution grid operation reliability, here implemented by a total power limit; (2) it evaluates the proposed scheme as a distributed system in which flexibility information is exchanged via the existing industry standard OpenADR. A hardware-in-the-loop testbed realization demonstrates...
Order Quantity Distributions: Estimating an Adequate Aggregation Horizon
Directory of Open Access Journals (Sweden)
Eriksen Poul Svante
2016-09-01
Full Text Available In this paper an investigation into the demand faced by a company, in the form of customer orders, is performed from both explorative numerical and analytical perspectives. The aim of the research is to establish the behavior of customer orders in first-come-first-serve (FCFS) systems and the impact of order quantity variation on the planning environment. A discussion of assumptions regarding demand from various planning and control perspectives underlines that most planning methods assume that customer orders are independently identically distributed and stem from symmetrical distributions. To investigate and illustrate the need to aggregate demand to satisfy these assumptions, a simple methodological framework is developed for testing the validity of the assumptions and for analyzing the behavior of orders. The paper also presents an analytical approach to identify the aggregation horizon needed to achieve stable demand. Furthermore, a case study application of the framework is presented and conclusions are drawn.
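The analytical idea of an aggregation horizon can be illustrated under the i.i.d. assumption discussed above: if daily orders have mean μ and standard deviation σ, the coefficient of variation of an h-day aggregate is σ/(μ√h), so the horizon needed to reach a target CV follows directly. A sketch with invented numbers (this is the textbook CLT argument, not the paper's exact procedure):

```python
import math

def aggregation_horizon(mu, sigma, cv_target):
    """Smallest number of periods h such that the coefficient of variation of
    the h-period aggregate, sigma / (mu * sqrt(h)), is at most cv_target
    (assuming i.i.d. period demand)."""
    return math.ceil((sigma / (mu * cv_target)) ** 2)

# Hypothetical lumpy order stream: mean 40 units/day, standard deviation 60.
horizon = aggregation_horizon(40.0, 60.0, 0.25)   # -> 36 days
```

The quadratic dependence on σ/μ is the practical point: halving the tolerated CV quadruples the horizon, so very lumpy FCFS order streams need long aggregation windows before symmetric-demand assumptions become tenable.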
Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering
Directory of Open Access Journals (Sweden)
Fengjun Hu
2016-01-01
Full Text Available To address the problem of easily losing the target when obstacles appear during intelligent robot target tracking, this paper proposes a target tracking algorithm that integrates a reduced-dimension optimal Kalman filtering algorithm, based on the phase-path volume integral, with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, Kalman filtering is used for fusion optimization to address those defects. Then, to counter the increased amount of computation in the integrated algorithm, the dimension is reduced by replacing the Gaussian integral in the Kalman algorithm with the phase-path volume integral, which reduces the number of sampling points in the filtering process without affecting the precision of the original algorithm. Finally, the target centroid position from each Camshift iteration is set as the observation value of the improved Kalman filter to correct the predicted value, yielding an optimal estimate of the target centroid position and maintaining the track so that the robot can understand the environmental scene and react correctly and in time to changes. The experiments show that the improved algorithm performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through dimension reduction.
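A reduced sketch of the fusion step described above, assuming a plain 1-D constant-velocity Kalman filter (not the paper's phase-path volume variant) that treats each Camshift centroid as the measurement; all noise parameters are invented:

```python
import random

def kf_track(zs, dt=1.0, q=0.01, r=4.0):
    """1-D constant-velocity Kalman filter; zs are noisy centroid positions."""
    x = [zs[0], 0.0]                       # state: [position, velocity]
    P = [[r, 0.0], [0.0, 1.0]]             # state covariance
    est = []
    for z in zs:
        # Predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with the centroid measurement z (H = [1, 0], noise variance r)
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]     # Kalman gain
        innov = z - x[0]
        x = [x[0] + k[0] * innov, x[1] + k[1] * innov]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        est.append(x[0])
    return est

random.seed(5)
truth = [0.5 * t for t in range(100)]               # target moving at 0.5 px/frame
zs = [p + random.gauss(0.0, 2.0) for p in truth]    # noisy Camshift-style centroids
est = kf_track(zs)
err = sum(abs(e - t) for e, t in zip(est, truth)) / len(truth)
```

Even this minimal filter cuts the tracking error well below the raw measurement noise, which is the role the Kalman stage plays in the integrated algorithm when occlusions corrupt the Camshift observation.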
Energy distribution of the fast electron from Cu and CH targets irradiated with fs-laser pulses
International Nuclear Information System (INIS)
Cai Dafeng; Gu Yuqiu; Zheng Zhijian; Zhou Weimin; Jiao Chunye
2014-01-01
In order to investigate the effect of target material on the fast-electron energy distribution, the energy distributions of fast electrons from the front and the rear of Cu and CH targets were measured during the interaction of femtosecond laser pulses with foil targets. The results show that the fast-electron spectra from the front of the Cu and CH targets are similar, indicating that the energy distribution of fast electrons from the front depends very little on the target material. The fast-electron spectra from the rear of the Cu and CH targets are clearly dissimilar, which indicates a strong effect of target material on fast-electron transport. The spectrum from the Cu target is 'softened', owing to electron recirculation and the self-generated magnetic field produced by electrons transported in the target. The spectrum from the CH target is Maxwellian, owing to collisional effects as the electrons transport through the target. (authors)
Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target
Directory of Open Access Journals (Sweden)
Chang-jian Ru
2015-01-01
Full Text Available To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets and to improve the efficiency of UAV searching, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution for the target transition probability density function is introduced to calculate the prediction probability of moving-target existence, so that the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem is transformed into a central optimization problem. To improve computational efficiency, a distributed model predictive control method is presented, from which the control command of each UAV can be obtained. The simulation results verify that the proposed method can better avoid blind UAV searching and effectively improve the overall efficiency of the team.
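The Bayesian map update described above can be sketched per cell. Assuming a sensor with hypothetical detection probability pd and false-alarm probability pf (values invented, and omitting the Gaussian motion-prediction step), repeated "no detection" observations drive the target-presence probability of a searched cell down:

```python
def update_cell(p, detected, pd=0.9, pf=0.05):
    """Bayes update of the target-presence probability of one observed cell,
    for a sensor with detection probability pd and false-alarm probability pf
    (both values hypothetical)."""
    if detected:
        num, den = pd * p, pd * p + pf * (1.0 - p)
    else:
        num, den = (1.0 - pd) * p, (1.0 - pd) * p + (1.0 - pf) * (1.0 - p)
    return num / den

p = 0.5                       # uninformative prior for an unexplored cell
for _ in range(3):            # three passes over the cell with no detection
    p = update_cell(p, detected=False)
# p drops to about 0.0012, steering the cooperative search elsewhere
```

In the full method this per-cell posterior is then diffused with the Gaussian transition density before the next observation, so that probability mass follows the target's predicted motion.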
High-Speed Target Identification System Based on the Plume’s Spectral Distribution
Directory of Open Access Journals (Sweden)
Wenjie Lang
2015-01-01
Full Text Available In order to recognize high-speed targets quickly and accurately, an identification system was designed based on an analysis of the distribution characteristics of the plume spectrum. In the system, the target was aligned with a visible-light tracking module, and spectral analysis of the target's plume radiation was achieved by an interference module. A distinguishing-factor recognition algorithm was designed on the basis of the ratio of multi-feature band peak and valley mean values. Effective recognition of the high-speed moving target could be achieved after partitioning the active region, and the influence of target motion on spectral acquisition was analyzed. In the experiment, a small rocket's combustion was used as the target. The spectral detection experiment was conducted at different speeds, 2.0 km away from the detection system. Experimental results showed that, for the target at different speeds, the spectral distribution exhibited a significant spectral offset within the same sampling period, but was otherwise basically consistent. By calculating the inclusion relationship between the distinguishing factor and the distinction interval of the peak and valley values at the corresponding wavebands, effective identification of the target could be achieved.
Real-time measurements and their effects on state estimation of distribution power system
DEFF Research Database (Denmark)
Han, Xue; You, Shi; Thordarson, Fannar
2013-01-01
This paper aims at analyzing the potential value of using different real-time metering and measuring instruments applied in low-voltage distribution networks for state estimation. An algorithm is presented to evaluate different combinations of metering data using a tailored state estimator. It is followed by a case study based on the proposed algorithm. A real distribution grid feeder with different types of meters installed either in the cabinets or at the customer side is selected for simulation and analysis. Standard load templates are used to initiate the state estimation. The deviations between the estimated values (voltage and injected power) and the measurements are applied to evaluate the accuracy of the estimated grid states. Eventually, some suggestions are provided for distribution grid operators on placing real-time meters in the distribution grid.
Estimation of direction of arrival of a moving target using subspace based approaches
Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2016-05-01
In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares ESPRIT (TLS-ESPRIT) are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace based approaches over TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
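For contrast with the subspace methods, the conventional TDE baseline mentioned above amounts to locating the peak of a cross-correlation between two sensor channels. A brute-force sketch on a synthetic signal (GCC additionally applies a spectral weighting before the peak search, omitted here):

```python
import math

def xcorr_delay(x, y):
    """Lag (in samples) that maximizes the cross-correlation of y against x."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        v = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

# Synthetic test signal: a decaying sinusoid and a copy delayed by 7 samples.
x = [math.sin(0.3 * i) * math.exp(-0.01 * i) for i in range(200)]
d_true = 7
y = [0.0] * d_true + x[:-d_true]
d_hat = xcorr_delay(x, y)   # -> 7
```

With the inter-sensor spacing known, the recovered delay converts to a bearing angle; the subspace methods sidestep this per-pair delay search by operating on the array covariance matrix directly.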
Estimating Runoff From Roadcuts With a Distributed Hydrologic Model
Cuhaciyan, C.; Luce, C.; Voisin, N.; Lettenmaier, D.; Black, T.
2008-12-01
Roads can have a substantial effect on hydrologic patterns of forested watersheds; the most noteworthy being the resurfacing of shallow groundwater at roadcuts. The influence of roads on hydrology has compelled hydrologists to include water routing and storage routines in rainfall-runoff models, such as those in the Distributed Hydrologic Soil Vegetation Model (DHSVM). We tested the ability of DHSVM to match observed runoff in roadcuts of a watershed in the Coast Range of Oregon. Eight roadcuts were instrumented using large tipping bucket gauges designed to capture only the water entering the roadside ditch from an 80-m long roadcut. The roadcuts were categorized by the topography of the upstream hillside as either swale, planar, or ridge. The simulation was run from December 2002 to December 2003 at a relatively fine spatial resolution (10-m). Average observed soil depths are 1.8-m across the watershed, below which there lies deep and highly weathered sandstone. DHSVM was designed for relatively impermeable bedrock and shallow soils; therefore it does not provide a mechanism for deep groundwater movement and storage. In the geologic setting of the study basin, however, water is routed through the sandstone allowing water to pass under roads through the parent material. For this reason a uniformly deep soil of 6.5-m with a decreased decay in conductivity with depth was used in the model to allow water to be routed beneath roadcuts that are up to 5.5-m in height. Up to three, typically shallow, soil layers can be modeled in DHSVM. We used the lowest of the three soil layers to mimic the hydraulically-well-connected sandstone exposed at deeper roadcuts. The model was calibrated against observed discharge at the outlet of the watershed. While model results closely matched the observed hydrograph at the watershed outlet, simulated runoff at an upstream gauge and the roadside ditches were varied and often higher than those observed in the field. The timing of the field
Directory of Open Access Journals (Sweden)
Julia Chernova
2016-07-01
Full Text Available Abstract Background Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess-zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC simulations. However, these can be time-consuming and the precision of MC estimates depends on the size of the simulated data which can hinder reproducibility of results. Methods We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out as non-significant at 5 % significance level (p-value 0.062 but it was estimated as significant in the correctly specified model (p-value 0.016. Conclusions The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
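The paper's core trade-off, deterministic numerical integration versus Monte Carlo simulation whose precision depends on the number of draws, can be seen on a toy expectation with a known answer: E[exp(b)] = exp(σ²/2) for a random effect b ~ N(0, σ²). The value of σ is arbitrary, and this is only a one-dimensional stand-in for the two-part model's integrals.

```python
import math
import random

sigma = 0.8
exact = math.exp(sigma * sigma / 2.0)        # E[exp(b)] for b ~ N(0, sigma^2)

def integrand(b):
    """exp(b) times the N(0, sigma^2) density."""
    return math.exp(b) * math.exp(-b * b / (2.0 * sigma ** 2)) / (
        sigma * math.sqrt(2.0 * math.pi))

# Deterministic numerical integration: trapezoid rule over +-8 sigma.
m, lo, hi = 2000, -8.0 * sigma, 8.0 * sigma
h = (hi - lo) / m
quad = h * (0.5 * integrand(lo) + 0.5 * integrand(hi)
            + sum(integrand(lo + i * h) for i in range(1, m)))

# Monte Carlo with 10,000 draws: noisy, and the error shrinks only as 1/sqrt(draws).
random.seed(2)
mc = sum(math.exp(random.gauss(0.0, sigma)) for _ in range(10_000)) / 10_000
```

The quadrature result is reproducible to near machine precision, while the MC estimate carries sampling noise of order 1/√(draws) and changes with the seed, which is exactly the reproducibility concern the abstract raises.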
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
Full Text Available This paper is concerned with modifications of the maximum likelihood, moments, and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments, and percentile estimators with respect to bias, mean square error, and total deviation.
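For reference, the traditional (unmodified) maximum likelihood estimators that the paper modifies have a simple closed form for the power-function distribution f(x) = γx^(γ−1)/θ^γ on (0, θ): θ̂ is the sample maximum and γ̂ = n / Σ ln(θ̂/xᵢ). A quick sketch on synthetic data (parameter values invented):

```python
import math
import random

def power_mle(xs):
    """Traditional MLEs for the power function density f(x) = g*x**(g-1)/t**g
    on (0, t): t_hat is the sample maximum, g_hat = n / sum(log(t_hat / x_i))."""
    t_hat = max(xs)
    g_hat = len(xs) / sum(math.log(t_hat / x) for x in xs)
    return g_hat, t_hat

random.seed(11)
g_true, t_true = 3.0, 5.0
# Inverse-CDF sampling: F(x) = (x / t)**g  =>  x = t * U**(1 / g)
xs = [t_true * random.random() ** (1.0 / g_true) for _ in range(50_000)]
g_hat, t_hat = power_mle(xs)
```

Note that θ̂ = max(xᵢ) always underestimates θ in finite samples, which is one source of the bias that modified estimators target.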
Influence of the statistical distribution of bioassay measurement errors on the intake estimation
International Nuclear Information System (INIS)
Lee, T. Y; Kim, J. K
2006-01-01
The purpose of this study is to provide the guidance necessary for selecting an error distribution, by analyzing the influence of the statistical distribution assumed for bioassay measurement errors on intake estimation. For this purpose, intakes were estimated using the maximum likelihood method for the cases in which the error distribution is normal and lognormal, and the estimated intakes under the two distributions were compared. According to the results of this study, when measurement results for lung retention are somewhat greater than the limit of detection, the distribution type has negligible influence on the results. In contrast, for measurements of the daily excretion rate, the results obtained under the assumption of a lognormal distribution were 10% higher than those obtained under the assumption of a normal distribution. In view of these facts, when the uncertainty component is governed by counting statistics, the distribution type has no influence on intake estimation; when other components predominate, it is clearly desirable to estimate the intake assuming a lognormal distribution.
Depth-Dose and LET Distributions of Antiproton Beams in Various Target Materials
DEFF Research Database (Denmark)
Herrmann, Rochus; Olsen, Sune; Petersen, Jørgen B.B.
the annihilation process. Materials: We have investigated the impact of substituting the target material on the depth-dose distribution of pristine and spread-out antiproton beams using the FLUKA Monte Carlo transport program. Classical ICRP targets are compared to water phantoms. In addition, track-averaged unrestricted LET is calculated for all configurations. Finally, we investigate which concentrations of gadolinium and boron are needed in a water target in order to observe a significant change in the antiproton depth-dose distribution. Results: The results indicate that there is no significant change in the depth-dose distribution and average LET when substituting the materials. Adding boron and gadolinium up to concentrations of 1 per 1000 atoms to a water phantom did not change the depth-dose profile or the average LET. Conclusions: According to our FLUKA calculations, antiproton neutron capture...
Directory of Open Access Journals (Sweden)
Farhad Yahgmaei
2013-01-01
Full Text Available This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter is introduced. We then derive Bayes estimators for the scale parameter by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the proposed estimators are compared through extensive simulation studies in terms of mean square error and their risk functions.
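As a baseline for the estimators compared in this abstract, the maximum likelihood estimator of the IWD scale parameter has a closed form when the shape β is known: in the parameterization f(x) = βa^β x^(−(β+1)) exp(−(a/x)^β), it is â = (n / Σ xᵢ^(−β))^(1/β). A sketch with simulated data (parameter values invented; the Bayes estimators in the paper require the priors and loss functions and are not shown):

```python
import math
import random

def iwd_scale_mle(xs, beta):
    """Closed-form MLE of the scale a in the inverse Weibull density
    f(x) = beta * a**beta * x**-(beta + 1) * exp(-(a / x)**beta),
    assuming the shape beta is known."""
    return (len(xs) / sum(x ** (-beta) for x in xs)) ** (1.0 / beta)

random.seed(9)
a_true, beta = 2.0, 1.5
# Inverse-CDF sampling: F(x) = exp(-(a / x)**beta)  =>  x = a * (-ln U)**(-1/beta)
xs = [a_true * (-math.log(random.random())) ** (-1.0 / beta)
      for _ in range(100_000)]
a_hat = iwd_scale_mle(xs, beta)
```

The derivation is one line: setting d/da of the log-likelihood to zero gives a^β = n / Σ xᵢ^(−β), since xᵢ^(−β) enters the likelihood linearly in a^β.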
Estimation of spectral distribution of sky radiance using a commercial digital camera.
Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao
2016-01-10
Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
Distributed ISAR Subimage Fusion of Nonuniform Rotating Target Based on Matching Fourier Transform.
Li, Yuanyuan; Fu, Yaowen; Zhang, Wenpeng
2018-06-04
In real applications, the image quality of conventional monostatic Inverse Synthetic Aperture Radar (ISAR) for a maneuvering target is subject to strong fluctuation of the Radar Cross Section (RCS), as the target aspect varies enormously. Meanwhile, the maneuvering target introduces nonuniform rotation after translational motion compensation, which degrades the imaging performance of the conventional Fourier Transform (FT)-based method in the cross-range dimension. In this paper, a method combining the distributed ISAR technique and the Matching Fourier Transform (MFT) is proposed to overcome these problems. Firstly, according to the characteristics of distributed ISAR, multiple channel echoes of the nonuniformly rotating target from different observation angles can be acquired. Then, by applying the MFT to the echo of each channel, the defocusing problem of the nonuniformly rotating target, which is unavoidable with FT-based imaging methods, can be avoided. Finally, after preprocessing, scaling and rotation of all subimages, a noncoherent fusion image containing the RCS information of all channels can be obtained. The accumulation coefficients of all subimages are calculated adaptively according to their image qualities. Simulation and experimental data are used to validate the effectiveness of the proposed approach, and a fusion image with improved recognizability is obtained. Therefore, by using the distributed ISAR technique and the MFT, subimages of a high-maneuvering target from different observation angles can be obtained. Meanwhile, by employing the adaptive subimage fusion method, the RCS fluctuation can be alleviated and a more recognizable final image can be obtained.
An MCMC Algorithm for Target Estimation in Real-Time DNA Microarrays
Directory of Open Access Journals (Sweden)
Vikalo Haris
2010-01-01
Full Text Available DNA microarrays detect the presence and quantify the amounts of nucleic acid molecules of interest. They rely on a chemical attraction between the target molecules and their Watson-Crick complements, which serve as biological sensing elements (probes. The attraction between these biomolecules leads to binding, in which probes capture target analytes. Recently developed real-time DNA microarrays are capable of observing kinetics of the binding process. They collect noisy measurements of the amount of captured molecules at discrete points in time. Molecular binding is a random process which, in this paper, is modeled by a stochastic differential equation. The target analyte quantification is posed as a parameter estimation problem, and solved using a Markov Chain Monte Carlo technique. In simulation studies where we test the robustness with respect to the measurement noise, the proposed technique significantly outperforms previously proposed methods. Moreover, the proposed approach is tested and verified on experimental data.
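As a toy illustration of posing quantification as MCMC parameter estimation, the sketch below runs a random-walk Metropolis sampler for the amplitude of a saturating binding curve. The observation model, kinetic constant, and noise level are all hypothetical stand-ins for the paper's stochastic-differential-equation model:

```python
import math
import random

rng = random.Random(1)

def log_lik(x, data, k=0.5, sigma=0.1):
    # Gaussian likelihood around a toy saturating binding curve x*(1 - e^(-k t))
    return -sum((y - x * (1 - math.exp(-k * t))) ** 2 for t, y in data) / (2 * sigma ** 2)

# Synthetic noisy measurements with true target amount x = 2.0 (assumed values)
true_x = 2.0
data = [(t, true_x * (1 - math.exp(-0.5 * t)) + rng.gauss(0, 0.1)) for t in range(1, 11)]

# Random-walk Metropolis over the single parameter x, flat prior
x, samples = 1.0, []
ll = log_lik(x, data)
for i in range(5000):
    prop = x + rng.gauss(0, 0.2)
    ll_prop = log_lik(prop, data)
    if math.log(rng.random()) < ll_prop - ll:   # accept with prob min(1, ratio)
        x, ll = prop, ll_prop
    if i >= 1000:                               # discard burn-in
        samples.append(x)

estimate = sum(samples) / len(samples)
print(round(estimate, 1))
```

The posterior mean recovers the true amplitude to within the measurement noise.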
Probabilistic Reverse dOsimetry Estimating Exposure Distribution (PROcEED)
PROcEED is a web-based application used to conduct probabilistic reverse dosimetry calculations. The tool is used for estimating a distribution of exposure concentrations likely to have produced the biomarker concentrations measured in a population.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Directory of Open Access Journals (Sweden)
Chaoyang Shi
2017-12-01
Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
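For intuition, if link travel times were independent, a path's mean and variance would simply be the sums of the link means and variances; the fusion method above is needed precisely because real links are spatially correlated and point detectors are sparse. A deliberately naive sketch with hypothetical numbers:

```python
def path_travel_time(links):
    """Path mean/variance as sums of link means/variances, assuming
    independent links (the paper instead models spatial correlation
    and fuses link estimates with interval-detector path data)."""
    mean = sum(m for m, v in links)
    var = sum(v for m, v in links)
    return mean, var

# Hypothetical three-link path: (mean seconds, variance seconds^2) per link
links = [(60.0, 25.0), (45.0, 16.0), (90.0, 64.0)]
m, v = path_travel_time(links)
print(m, v)  # 195.0 105.0
```

Positive spatial correlation would push the true path variance above this independence-based sum, which is one reason the fused estimate is more reliable.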
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement that is linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the global optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the global optimal estimate in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods.
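A much-reduced sketch of the underlying idea, that sub-systems can reach the global WLS optimum by repeatedly re-solving for their own parameters using values shared by neighbors, is plain Gauss-Seidel on the normal equations. This is not the authors' algorithm, and the system below (unit weights, two parameters) is invented for illustration:

```python
def distributed_wls(A, z, iters=100):
    """Gauss-Seidel sketch: sub-system i re-solves for its own theta[i]
    using the current values of the others. Converges to the global WLS
    solution because H = A^T A is symmetric positive definite."""
    n = len(A[0])
    # Normal equations H theta = b with H = A^T A, b = A^T z (unit weights)
    H = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)] for i in range(n)]
    b = [sum(A[k][i] * z[k] for k in range(len(A))) for i in range(n)]
    theta = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(H[i][j] * theta[j] for j in range(n) if j != i)
            theta[i] = (b[i] - s) / H[i][i]
    return theta

A = [[2.0, 0.5], [0.4, 1.5]]
z = [2.0 * 1.0 + 0.5 * (-1.0), 0.4 * 1.0 + 1.5 * (-1.0)]  # exact data for theta = (1, -1)
print([round(t, 3) for t in distributed_wls(A, z)])
```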
Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Sharawi, Mohammad S.; Alouini, Mohamed-Slim
2017-01-01
Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with fewer received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithm framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and achieving better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.
On the distribution of estimators of diffusion constants for Brownian motion
International Nuclear Information System (INIS)
Boyer, Denis; Dean, David S
2011-01-01
We discuss the distribution of various estimators for extracting the diffusion constant of single Brownian trajectories obtained by fitting the squared displacement of the trajectory. The analysis of the problem can be framed in terms of quadratic functionals of Brownian motion that correspond to the Euclidean path integral for simple harmonic oscillators with time-dependent frequencies. Explicit analytical results are given for the distribution of the diffusion constant estimator in a number of cases, and our results are confirmed by numerical simulations.
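The spread of such estimators is easy to see numerically. Below, the diffusion constant of each simulated trajectory is estimated from the mean squared increment, using E[(dx)^2] = 2 D dt in one dimension; this simple increment-based estimator is our stand-in for the fitted-MSD estimators analyzed in the paper, and all numbers are assumed:

```python
import random
import statistics

def simulate_D_hat(D, n_steps, dt, rng):
    """Simulate one 1-D Brownian trajectory and estimate D from the
    average squared increment: E[(dx)^2] = 2 * D * dt."""
    sq = 0.0
    for _ in range(n_steps):
        dx = rng.gauss(0, (2 * D * dt) ** 0.5)
        sq += dx * dx
    return sq / (2 * n_steps * dt)

rng = random.Random(42)
estimates = [simulate_D_hat(D=1.0, n_steps=100, dt=0.01, rng=rng) for _ in range(2000)]
print(round(statistics.mean(estimates), 2), round(statistics.stdev(estimates), 2))
```

Even with 100 steps per trajectory the single-trajectory estimate scatters noticeably around the true D = 1, which is exactly the distributional effect the paper quantifies.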
Energy Technology Data Exchange (ETDEWEB)
Viskari, T.
2012-07-01
Atmospheric aerosol particles have several important effects on the environment and human society. The exact impact of aerosol particles is largely determined by their particle size distributions. However, no single instrument is able to measure the whole range of the particle size distribution. Estimating a particle size distribution from multiple simultaneous measurements remains a challenge in aerosol physics research. In this thesis, the Extended Kalman Filter (EKF) is presented as a promising method to estimate particle number size distributions from multiple simultaneous measurements. The particle number size distribution estimated by the EKF includes information from prior particle number size distributions as propagated by a dynamical model, and is weighted by the reliabilities of the applied information sources. Known physical processes and dynamically evolving error covariances constrain the estimate both over time and over particle size. The method was tested with measurements from a Differential Mobility Particle Sizer (DMPS), an Aerodynamic Particle Sizer (APS) and a nephelometer. The particle number concentration was chosen as the state of interest. The initial EKF implementation presented here includes simplifications, yet the results are positive and the estimate successfully incorporated information from the chosen instruments. For particle sizes smaller than 4 micrometers, the estimate fits the available measurements and smooths the particle number size distribution over both time and particle diameter. The estimate has difficulties with particles larger than 4 micrometers due to issues with both the measurements and the dynamical model in that size range. The EKF implementation appears to reduce the impact of measurement noise on the estimate, but has a delayed reaction to sudden…
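The core of the Kalman idea, weighting each new measurement by the relative reliabilities of the prior and the instrument, survives even in a scalar sketch. The state, instruments, and noise variances below are hypothetical; the thesis itself filters a full size distribution with a dynamical model:

```python
def kalman_step(x, P, z, R, Q=0.01):
    """One predict/update cycle for a scalar state under a random-walk model.
    x, P: current estimate and its variance; z, R: measurement and its variance."""
    P = P + Q                      # predict: variance grows by process noise Q
    K = P / (P + R)                # Kalman gain weighs prior vs. measurement
    x = x + K * (z - x)            # update: move toward the measurement
    P = (1 - K) * P                # updated (reduced) uncertainty
    return x, P

x, P = 0.0, 1e6                    # vague prior on a number concentration
# Fuse readings from two hypothetical instruments with different noise levels
for z, R in [(10.2, 0.5), (9.8, 0.5), (10.6, 2.0), (10.1, 0.5)]:
    x, P = kalman_step(x, P, z, R)
print(round(x, 1))
```

Note how the noisier reading (R = 2.0) pulls the estimate less than the precise ones, which is the same reliability weighting the EKF applies across instruments.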
Reduced complexity FFT-based DOA and DOD estimation for moving target in bistatic MIMO radar
Ali, Hussain
2016-06-24
In this paper, we consider a bistatic multiple-input multiple-output (MIMO) radar. We propose a reduced-complexity algorithm to estimate the direction-of-arrival (DOA) and direction-of-departure (DOD) for a moving target. We show that the parameter estimation can be expressed in terms of one-dimensional fast Fourier transforms, which drastically reduces the complexity of the optimization algorithm. The performance of the proposed algorithm is compared with the two-dimensional multiple signal classification (2D-MUSIC) and reduced-dimension MUSIC (RD-MUSIC) algorithms. Simulations show that our proposed algorithm has better estimation performance and lower computational complexity than the 2D-MUSIC and RD-MUSIC algorithms. Moreover, simulation results also show that the proposed algorithm achieves the Cramer-Rao lower bound. © 2016 IEEE.
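The reduction to one-dimensional transforms can be illustrated in the single-source, noise-free case: a plane wave across a uniform linear array has a single spatial frequency, so a zero-padded transform over the array elements peaks at a bin determined by sin(theta). The brute-force DFT below stands in for an FFT for clarity; the array geometry and numbers are assumptions, and the paper's algorithm additionally estimates DOD in a bistatic geometry:

```python
import cmath
import math

def doa_fft(snapshot, d_over_lambda=0.5, nfft=1024):
    """Estimate the arrival angle of a single plane wave on a uniform
    linear array by peaking a zero-padded DFT over the elements."""
    M = len(snapshot)
    best_k, best_p = 0, -1.0
    for k in range(nfft):
        f = k / nfft - 0.5          # candidate spatial frequency in [-0.5, 0.5)
        p = abs(sum(snapshot[m] * cmath.exp(-2j * math.pi * f * m) for m in range(M)))
        if p > best_p:
            best_k, best_p = k, p
    f = best_k / nfft - 0.5
    return math.degrees(math.asin(f / d_over_lambda))

theta = 20.0                        # true DOA in degrees (assumed)
M = 16                              # half-wavelength spaced elements
snap = [cmath.exp(2j * math.pi * 0.5 * math.sin(math.radians(theta)) * m) for m in range(M)]
print(round(doa_fft(snap), 1))
```

Zero-padding (nfft much larger than M) refines the grid, which is how an FFT search can approach the accuracy of an optimization without a 2-D scan.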
Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates.
Directory of Open Access Journals (Sweden)
Caroline A Curtis
Full Text Available Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the 'plant characteristics' information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Our results show that distribution data are consistently broader than USDA PLANTS experts' knowledge and likely provide more robust estimates of climatic tolerance, especially for…
The Spatial Distribution of Poverty in Vietnam and the Potential for Targeting
Minot, Nicholas; Baulch, Bob
2002-01-01
The authors combine household survey and census data to construct a provincial poverty map of Vietnam and evaluate the accuracy of geographically targeted antipoverty programs. First, they estimate per capita expenditure as a function of selected household and geographic characteristics using the 1998 Vietnam Living Standards Survey. Next, they combine the results with data on the same hou...
Energy flow models for the estimation of technical losses in distribution network
International Nuclear Information System (INIS)
Au, Mau Teng; Tan, Chin Hooi
2013-01-01
This paper presents energy flow models developed to estimate technical losses in distribution networks. The energy flow models applied in this paper are based on the input energy and peak demand of the distribution network, feeder length and peak demand, transformer loading capacity, and load factor. Two case studies, an urban distribution network and a rural distribution network, are used to illustrate the application of the energy flow models. The technical losses obtained for the two distribution networks are consistent and comparable to networks of similar type and characteristics. Hence, the energy flow models are suitable for practical application.
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Nonparametric estimation of the stationary M/G/1 workload distribution function
DEFF Research Database (Denmark)
Hansen, Martin Bøgsted
2005-01-01
In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1 queue can be obtained by systematically sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ…
Han, Fang; Liu, Han
2016-01-01
Correlation matrices play a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions. As a robust alternative, Han and Liu [J. Am. Stat. Assoc. 109 (2015) 275-2...
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
An aero-engine is a complex mechanical and electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model, so that the precision of the mixed-distribution reliability model is greatly improved. All of this favors the popularization of the Weibull distribution model in engineering applications.
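A mixed Weibull reliability function is simply a weighted sum of component reliabilities, one per failure mode. The sketch below evaluates a hypothetical two-mode mix (all weights, shapes and scales invented for illustration), with a wear-in mode (shape < 1) and a wear-out mode (shape > 1):

```python
import math

def weibull_reliability(t, shape, scale):
    """R(t) = exp(-(t/scale)**shape) for a two-parameter Weibull."""
    return math.exp(-((t / scale) ** shape))

def mixed_reliability(t, components):
    """Mixed Weibull: weighted sum of component reliabilities.
    components: list of (weight, shape, scale); weights sum to 1."""
    return sum(w * weibull_reliability(t, k, c) for w, k, c in components)

# Hypothetical failure modes: 30% early wear-in, 70% late wear-out (hours)
mix = [(0.3, 0.8, 500.0), (0.7, 3.5, 2000.0)]
for t in (100.0, 1000.0, 2500.0):
    print(t, round(mixed_reliability(t, mix), 3))
```

The single-distribution error the abstract mentions shows up here directly: no single (shape, scale) pair can reproduce both the early drop from the wear-in mode and the late knee from the wear-out mode.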
Estimation of rates-across-sites distributions in phylogenetic substitution models.
Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J
2003-10-01
Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
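The equal-probability discrete gamma mentioned above can be sketched by Monte Carlo: draw many rates from a mean-one Gamma(alpha, 1/alpha), sort them, and take the mean of each of k equal-probability bins. Production phylogenetics code computes these category means from incomplete-gamma quantiles instead of sampling; the sampling version below is just an illustrative stand-in:

```python
import random

def discrete_gamma_rates(alpha, k, n=200000, seed=0):
    """Monte Carlo approximation to the k-category equal-probability
    discrete gamma rates-across-sites model: sample Gamma(alpha, 1/alpha)
    rates (mean 1), sort, and average within k equal-probability bins."""
    rng = random.Random(seed)
    draws = sorted(rng.gammavariate(alpha, 1.0 / alpha) for _ in range(n))
    size = n // k
    return [sum(draws[i * size:(i + 1) * size]) / size for i in range(k)]

# A small alpha gives strong rate heterogeneity across sites
rates = discrete_gamma_rates(alpha=0.5, k=4)
print([round(r, 3) for r in rates])
```

For small alpha the top category carries a rate several times the mean, which illustrates the abstract's point: too few categories force that heavy right tail into a single average and underestimate the largest rates.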
Effects of target size on the comparison of photon and charged particle dose distributions
International Nuclear Information System (INIS)
Phillips, M.H.; Frankel, K.A.; Tjoa, T.; Lyman, J.T.; Fabrikant, J.I.; Levy, R.P.
1989-12-01
The work presented here is part of an ongoing project to quantify and evaluate the differences in the use of different radiation types and irradiation geometries in radiosurgery. We are examining dose distributions for photons using the "Gamma Knife" and linear accelerator arc methods, as well as different species of charged particles from protons to neon ions. A number of different factors need to be studied to accurately compare the different modalities, such as target size, shape and location, the irradiation geometry, and biological response. This presentation focuses on target size, which has a large effect on the dose distributions in the normal tissue surrounding the lesion. This work concentrates on dose distributions found in radiosurgery, as opposed to those usually found in radiotherapy. 5 refs., 2 figs
International Nuclear Information System (INIS)
Musleh, Rola M.; Helu, Amal
2014-01-01
In this article we consider statistical inference about the unknown parameters of the Inverse Weibull distribution based on progressively type-II censored data, using classical and Bayesian procedures. For the classical procedures we propose using the maximum likelihood, the least squares, and the approximate maximum likelihood estimators. The Bayes estimators are obtained based on both symmetric and asymmetric (Linex, General Entropy and Precautionary) loss functions. There are no explicit forms for the Bayes estimators; therefore, we propose Lindley's approximation method to compute them. A comparison between these estimators is provided using extensive simulation and three criteria, namely bias, mean squared error and Pitman nearness (PN) probability. It is concluded that the approximate Bayes estimators outperform the classical estimators most of the time. A real-life data example is provided to illustrate our proposed estimators. - Highlights: • We consider progressively type-II censored data from the Inverse Weibull distribution (IW). • We derive the MLE, approximate MLE, LS and Bayes estimation methods for the scale and shape parameters of the IW. • The Bayes estimator of the shape parameter cannot be expressed in closed form. • We suggest using Lindley's approximation. • We conclude that the Bayes estimates outperform the classical methods
Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation
Directory of Open Access Journals (Sweden)
Campos-Aranda Daniel Francisco
2015-09-01
Full Text Available Design floods are key to sizing new water works and to reviewing the hydrological security of existing ones. The most reliable method for estimating their magnitudes associated with given return periods is to fit a probabilistic model to the available records of maximum annual flows. Since such a model is a priori unknown, several models need to be tested in order to select the most appropriate one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records and, therefore, their application has been established as a norm or precept. The Johnson system has three families of distributions, one of which is the Log-Normal model with three fit parameters, which is also the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained with such a model. Having contrasted the three probability distributions established by precept on 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared to the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models in the low return periods ( 1000 years. Because of its theoretical support, the SJU model is recommended in flood estimation.
International Nuclear Information System (INIS)
Banks, H T; Davis, Jimena L; Ernstberger, Stacey L; Hu, Shuhua; Artimovich, Elena; Dhar, Arun K
2009-01-01
We discuss inverse problem results for problems involving the estimation of probability distributions using aggregate data for growth in populations. We begin with a mathematical model describing variability in the early growth process of size-structured shrimp populations and discuss a computational methodology for the design of experiments to validate the model and estimate the growth-rate distributions in shrimp populations. Parameter-estimation findings using experimental data from experiments so designed for shrimp populations cultivated at Advanced BioNutrition Corporation are presented, illustrating the usefulness of mathematical and statistical modeling in understanding the uncertainty in the growth dynamics of such populations
Directory of Open Access Journals (Sweden)
G. R. Pasha
2006-07-01
Full Text Available In this paper, we show how far the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and is therefore an attractive choice.
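The gap to the minimum variance bound is easy to probe by simulation. The sketch below compares the sampling variance of the ML estimator of the Maxwell scale, a_hat^2 = sum(x^2)/(3n), with that of the ordinary first-moment estimator, which we use here as an illustrative stand-in for the negative integer moment estimator discussed in the paper. The variance ratio typically comes out below one, reflecting the efficiency of the MLE:

```python
import math
import random

def maxwell_sample(a, n, rng):
    """Maxwell-distributed speeds: norm of a 3-D isotropic Gaussian with std a."""
    return [math.sqrt(rng.gauss(0, a) ** 2 + rng.gauss(0, a) ** 2 + rng.gauss(0, a) ** 2)
            for _ in range(n)]

def mle(xs):
    # Setting d/da of the Maxwell log-likelihood to zero gives a^2 = sum(x^2)/(3n)
    return math.sqrt(sum(x * x for x in xs) / (3 * len(xs)))

def moment(xs):
    # First-moment estimator: E[X] = 2 a sqrt(2/pi), inverted for a
    return (sum(xs) / len(xs)) / (2.0 * math.sqrt(2.0 / math.pi))

rng = random.Random(7)
mle_est, mom_est = [], []
for _ in range(2000):
    xs = maxwell_sample(a=1.0, n=50, rng=rng)
    mle_est.append(mle(xs))
    mom_est.append(moment(xs))

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

print(round(var(mle_est) / var(mom_est), 2))
```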
Investigating the impact of uneven magnetic flux density distribution on core loss estimation
DEFF Research Database (Denmark)
Niroumand, Farideh Javidi; Nymand, Morten; Wang, Yiren
2017-01-01
There are several approaches for loss estimation in magnetic cores, and all of these approaches rely heavily on accurate information about the flux density distribution in the cores. It is often assumed that the magnetic flux density distributes evenly throughout the core, and the overall core loss is calculated according to an effective flux density value and the macroscopic dimensions of the cores. However, the flux distribution in the core can be altered by core shapes and/or operating conditions due to nonlinear material properties. This paper studies the element-wise estimation of the loss in magnetic…
Directory of Open Access Journals (Sweden)
Anupam Pathak
2014-11-01
Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model with a wide variety of applications in many areas, and its main advantage over other distributions is its ability in the context of lifetime events. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
Adaptive estimation for control of uncertain nonlinear systems with applications to target tracking
Madyastha, Venkatesh Kattigari
2005-08-01
Design of nonlinear observers has received considerable attention since the early development of methods for linear state estimation. The most popular approach is the extended Kalman filter (EKF), which degrades significantly in the presence of nonlinearities, particularly if unmodeled dynamics are coupled to the process and the measurement. For uncertain nonlinear systems, adaptive observers have been introduced to estimate the unknown state variables when no a priori information about the unknown parameters is available. While establishing global results, these approaches are applicable only to systems transformable to output feedback form. Over recent years, neural network (NN) based identification and estimation schemes have been proposed that relax the assumptions on the system at the price of sacrificing the global nature of the results. However, most NN-based adaptive observer approaches in the literature require knowledge of the full dimension of the system, and therefore may not be suitable for systems with unmodeled dynamics. We first propose a novel approach to nonlinear state estimation from the perspective of augmenting a linear time-invariant observer with an adaptive element. The class of nonlinear systems treated here is finite but of otherwise unknown dimension. The objective is to improve the performance of the linear observer when applied to a nonlinear system. The approach relies on the ability of NNs to approximate the unknown dynamics from finite time histories of available measurements. Next we investigate nonlinear state estimation from the perspective of adaptively augmenting an existing time-varying observer, such as an EKF. EKFs find their application mostly in target tracking problems. The proposed approaches are robust to unmodeled dynamics, including unmodeled disturbances. Lastly, we consider the problem of adaptive estimation in the presence of feedback control for a class of uncertain nonlinear systems
International Nuclear Information System (INIS)
Chernov, N.I.; Kurbatov, V.S.; Ososkov, G.A.
1988-01-01
Parameter estimation for multivariate probability distributions is studied in experiments where data are presented as one-dimensional histograms. For this model, a statistic defined as a quadratic form of the observed frequencies, which has a limiting χ²-distribution, is proposed. The efficiency of the estimator minimizing the value of this statistic is proved within the class of all unbiased estimates obtained via minimization of quadratic forms of observed frequencies. The elaborated method was applied to the physical problem of analyzing the secondary pion energy distribution in the isobar model of pion-nucleon interactions with production of an additional pion. Numerical experiments showed that the accuracy of estimation is twice as high as with conventional methods
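As a rough illustration of the minimum quadratic-form (chi-square) idea described above, the following sketch (our own, not the authors' code) estimates a normal mean from one-dimensional histogram frequencies; the bin layout and sample sizes are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Simulate data and reduce it to a one-dimensional histogram
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=5000)

edges = np.linspace(-2.0, 6.0, 33)          # 32 histogram bins
counts, _ = np.histogram(data, bins=edges)
n_total = counts.sum()

def chi_square(mu):
    # Quadratic form of observed frequencies: Pearson chi-square
    # against expected bin counts under N(mu, 1)
    p = np.diff(norm.cdf(edges, loc=mu, scale=1.0))
    expected = n_total * p
    mask = expected > 0
    return np.sum((counts[mask] - expected[mask]) ** 2 / expected[mask])

# The estimator is the parameter value minimizing the statistic
res = minimize_scalar(chi_square, bounds=(0.0, 4.0), method="bounded")
print(round(res.x, 2))
```

With 5000 observations the minimizer lands close to the true mean of 2.0, illustrating why the quadratic-form statistic can serve double duty as both a goodness-of-fit measure and an estimation criterion.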
Hybrid fuzzy charged system search algorithm based state estimation in distribution networks
Directory of Open Access Journals (Sweden)
Sachidananda Prasad
2017-06-01
Full Text Available This paper proposes a new hybrid charged system search (CSS) algorithm based state estimation in radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and an Indian 85-bus practical radial distribution system. The obtained results have been compared with the conventional CSS algorithm, the weighted least square (WLS) algorithm and particle swarm optimization (PSO) to demonstrate the feasibility of the algorithm.
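The objective the paper minimizes, the weighted squared mismatch between measured and estimated quantities, can be sketched on a toy two-bus DC power-flow model (our own hypothetical illustration with a made-up susceptance and measurements, not the paper's CSS implementation):

```python
import numpy as np
from scipy.optimize import least_squares

b12 = 10.0                           # assumed line susceptance, p.u.
z = np.array([0.52, -0.50, -0.049])  # noisy measurements: flow 1->2, injection at bus 2, angle
w = np.array([1.0, 1.0, 100.0])      # measurement weights (angle meter trusted most)

def h(theta2):
    # Measurement model with bus 1 as slack (theta1 = 0)
    return np.array([b12 * (0.0 - theta2),   # power flow 1 -> 2
                     b12 * (theta2 - 0.0),   # injection at bus 2
                     theta2])                # direct angle measurement

def residuals(x):
    # Weighted residuals; squaring them gives the WLS objective
    return np.sqrt(w) * (z - h(x[0]))

sol = least_squares(residuals, x0=[0.0])
print(round(sol.x[0], 3))
```

Here a gradient-based solver suffices because the toy model is linear; the paper's point is that on realistic networks with nonconvex constraints, a metaheuristic such as CSS with fuzzy-tuned parameters can search the same objective more robustly.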
The Burr X Pareto Distribution: Properties, Applications and VaR Estimation
Directory of Open Access Journals (Sweden)
Mustafa Ç. Korkmaz
2017-12-01
Full Text Available In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Several estimation methods for the model parameters are examined. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR) by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative to the generalized Pareto VaR model for financial institutions.
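For context, the peaks-over-threshold (POT) baseline that the paper compares against can be sketched with the generalized Pareto model; the synthetic losses and threshold choice below are our own illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=20000)   # heavy-tailed synthetic losses

u = np.quantile(losses, 0.95)               # threshold at the 95th percentile
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0.0)   # fit GPD to threshold excesses

def var_pot(alpha, n=len(losses), n_u=len(excesses)):
    # Standard POT quantile formula:
    # VaR_alpha = u + (beta/xi) * (((n/n_u) * (1 - alpha))**(-xi) - 1)
    return u + beta / xi * (((n / n_u) * (1.0 - alpha)) ** (-xi) - 1.0)

print(round(var_pot(0.99), 3))
```

Swapping the generalized Pareto for the Burr X Pareto in the excess-fitting step is the substitution the paper proposes; the quantile-inversion logic stays the same.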
McLaren, Alexander
2011-11-01
Due to their great ecological significance and the large biomass they represent, mesopelagic fishes are attracting increasing attention. Data from the National Marine Fisheries Service (NMFS) provided the opportunity to explore an unknown region of the North-West Atlantic, adjacent to one of the most productive fisheries in the world. Acoustic data collected during the cruise required the identification of acoustically distinct scattering types to make inferences on the migrations, distributions and biomass of mesopelagic scattering layers. Six scattering types were identified in our data by the proposed method, and their migrations and distributions were traced in the top 200 m of the water column. The method was also able to detect and trace the movements of three scattering types to 1000 m depth, two of which can be further subdivided. This identification process enabled the development of three physically derived target-strength models, adapted to traceable acoustic scattering types, for the analysis of biomass and length distribution to 1000 m depth. The abundance and distribution of acoustic targets varied closely in relation to the varying physical environments associated with a warm core ring in the New England continental shelf break region. The continental shelf break produces biomass density estimates that are twice as high as those of the warm core ring, and the surrounding continental slope waters are an order of magnitude lower than either estimate. Biomass associated with distinct layers is assessed, and any benefits brought about by upwelling at the edge of the warm core ring are shown not to result in higher abundance of deepwater species. Finally, the asymmetric diurnal migrations in shelf break waters contrast markedly with the symmetry of migrating layers within the warm ring, both in structure and in density estimates, supporting a theory of predatorial and nutritional constraints on migrating pelagic species.
Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
DEFF Research Database (Denmark)
Kjeldsen, Thomas Rodding; Rosbjerg, Dan
2002-01-01
A comparison of different methods for estimating T-year events is presented, all based on the Extreme Value Type I distribution. Series of annual maximum flood from ten gauging stations at the New Zealand South Island have been used. Different methods of predicting the 100-year event and the connected uncertainty have been applied: at-site estimation and regional index-flood estimation, with and without accounting for intersite correlation, using either the method of moments or the method of probability weighted moments for parameter estimation. Furthermore, estimation at ungauged sites was considered. […] the prediction uncertainty, and the presence of intersite correlation tends to increase the uncertainty. A simulation study revealed that in regional index-flood estimation the method of probability weighted moments is preferable to method-of-moments estimation with regard to bias and RMSE.
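A minimal sketch of the probability-weighted-moments (PWM) route to a 100-year event under the Extreme Value Type I (Gumbel) distribution, using a synthetic annual-maximum series (all numbers are our own illustrative assumptions, not the New Zealand data):

```python
import numpy as np

rng = np.random.default_rng(2)
loc_true, scale_true = 100.0, 20.0
# Gumbel sample via inverse CDF: x = loc - scale * ln(-ln(U))
x = loc_true - scale_true * np.log(-np.log(rng.uniform(size=2000)))

xs = np.sort(x)
n = len(xs)
b0 = xs.mean()
b1 = np.sum((np.arange(n) / (n - 1)) * xs) / n     # sample PWM b1

scale = (2.0 * b1 - b0) / np.log(2.0)              # Gumbel scale from PWM
loc = b0 - 0.5772156649 * scale                    # Gumbel location (Euler-Mascheroni)

T = 100.0
x_T = loc - scale * np.log(-np.log(1.0 - 1.0 / T)) # T-year event quantile
print(round(x_T, 1))
```

The same quantile formula underlies both at-site and index-flood estimation; in the regional case the PWMs are computed from rescaled records pooled across sites.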
The current duration design for estimating the time to pregnancy distribution
DEFF Research Database (Denmark)
Gasbarra, Dario; Arjas, Elja; Vehtari, Aki
2015-01-01
This paper was inspired by the studies of Niels Keiding and co-authors on estimating the waiting time-to-pregnancy (TTP) distribution, and in particular on using the current duration design in that context. In this design, a cross-sectional sample of women is collected from those who are currently attempting to become pregnant, recording from each the time she has been attempting. Our aim here is to study the identifiability and the estimation of the waiting time distribution on the basis of current duration data. The main difficulty stems from the fact that very short waiting times are only rarely selected into the sample of current durations, and this renders their estimation unstable. We introduce a Bayesian method for this estimation problem, prove its asymptotic consistency, and compare the method to some variants of the non-parametric maximum likelihood estimators
A new approach to the estimation of radiopharmaceutical radiation dose distributions
International Nuclear Information System (INIS)
Hetherington, E.L.R.; Wood, N.R.
1975-03-01
For a photon energy of 150 keV, the Monte Carlo technique of photon history simulation was used to obtain estimates of the dose distribution in a human phantom for three activity distributions relevant to diagnostic nuclear medicine. In this preliminary work, the number of photon histories considered was insufficient to produce complete dose contours, and the dose distributions are presented in the form of colour-coded diagrams. The distributions obtained illustrate an important deficiency in the MIRD Schema for dose estimation. Although the Schema uses the same mathematical technique for calculating photon doses, its results are obtained as average values for the whole body and for complete organs. It is shown that the actual dose distributions, particularly those for the whole body, may differ significantly from the average values calculated using the MIRD Schema and published absorbed fractions. (author)
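The core of photon history simulation is sampling exponential free paths and tallying interaction sites. The toy sketch below (our own, with an invented attenuation coefficient and a crude first-interaction tally standing in for full history tracking) shows the idea on a one-dimensional slab:

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 0.15            # assumed total attenuation coefficient, 1/cm (illustrative only)
depth = 30.0         # slab thickness, cm
bins = np.linspace(0.0, depth, 31)   # 1 cm depth bins

n_photons = 100_000
# Depth of first interaction: exponential with mean free path 1/mu
sites = rng.exponential(scale=1.0 / mu, size=n_photons)
inside = sites < depth
hist, _ = np.histogram(sites[inside], bins=bins)
dose = hist / n_photons   # fraction of histories interacting per bin (crude dose surrogate)

print(round(dose[0], 3))
```

A real dosimetry code would additionally sample the interaction type, track scattered photons, and score energy deposited per unit mass; the exponential path sampling above is the step all such codes share.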
An algorithm for 3D target scatterer feature estimation from sparse SAR apertures
Jackson, Julie Ann; Moses, Randolph L.
2009-05-01
We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.
Zahari, Zakirah Mohd; Zubaidah Adnan, Siti; Kanthasamy, Ramesh; Saleh, Suriyati; Samad, Noor Asma Fazli Abdul
2018-03-01
The specification of a crystal product is usually given in terms of the crystal size distribution (CSD). To achieve a target CSD, an optimal cooling strategy is necessary. Direct design control involving an analytical CSD estimator is one approach that can be used to generate the set-point. However, the effects of temperature on the crystal growth rate are neglected in that estimator, so the temperature dependence of the crystal growth rate needs to be considered in order to provide an accurate set-point. The objective of this work is to extend the analytical CSD estimator by employing an Arrhenius expression to cover the effects of temperature on the growth rate. The application of this work is demonstrated through a potassium sulphate crystallisation process. Based on a specified target CSD, the extended estimator is capable of generating the required set-point, and a proposed controller successfully maintained the operation at the set-point to achieve the target CSD. Comparison with other cooling strategies shows that a reduction of up to 18.2% in the total number of undesirable crystals generated from secondary nucleation is achieved relative to a linear cooling strategy.
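The Arrhenius extension described above amounts to making the growth-rate constant temperature dependent. A minimal sketch, with invented kinetic parameters (not the potassium sulphate values from the paper):

```python
import numpy as np

R_GAS = 8.314            # gas constant, J/(mol K)
kg0 = 1.0e4              # assumed pre-exponential factor (illustrative)
Ea = 4.0e4               # assumed activation energy, J/mol (illustrative)
g = 1.5                  # assumed growth order

def growth_rate(T_kelvin, supersaturation):
    # G(T, dC) = kg0 * exp(-Ea / (R T)) * dC**g  (Arrhenius temperature dependence)
    return kg0 * np.exp(-Ea / (R_GAS * T_kelvin)) * supersaturation ** g

# Cooling from 60 C to 30 C at fixed supersaturation slows growth substantially
ratio = growth_rate(333.15, 0.05) / growth_rate(303.15, 0.05)
print(round(ratio, 2))
```

Because the ratio above is far from 1, an estimator that ignores the temperature dependence would hand the controller a systematically biased set-point over a 30-degree cooling trajectory, which is the gap the extended estimator closes.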
Directory of Open Access Journals (Sweden)
Shanker Man Shrestha
2003-11-01
Full Text Available Super-resolution is very important in the signal processing of GPR (ground penetrating radar) to resolve closely buried targets. However, it is not easy to obtain high resolution, as GPR signals are very weak and enveloped by noise. The MUSIC (multiple signal classification) algorithm, which is well known for its super-resolution capability, has been implemented for signal and image processing of GPR. In addition, a conventional spectral estimation technique, the FFT (fast Fourier transform), has also been implemented for high-precision received signal levels. In this paper, we propose CPM (combined processing method), which combines the time domain response of the MUSIC algorithm and the conventional IFFT (inverse fast Fourier transform) to obtain super-resolution and a high-precision signal level. In order to support the proposal, detailed simulation was performed analyzing SNR (signal-to-noise ratio). Moreover, a field experiment at a research field and a laboratory experiment at the University of Electro-Communications, Tokyo, were also performed for thorough investigation, and these supported the proposed method. All the simulation and experimental results are presented.
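To see why MUSIC gives the super-resolution that plain FFT processing cannot, the sketch below (our own generic illustration, not the paper's CPM code) resolves two frequencies spaced closer than the FFT resolution limit 1/m, which is analogous to resolving two closely buried GPR targets:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 16                    # snapshot vector length (FFT resolution would be 1/16)
f1, f2 = 0.20, 0.23       # normalized frequencies, closer than 1/m apart
n_snap = 200

t = np.arange(m)
snapshots = []
for _ in range(n_snap):
    p1, p2 = rng.uniform(0, 2 * np.pi, size=2)     # random phases -> uncorrelated sources
    s = np.exp(1j * (2 * np.pi * f1 * t + p1)) + np.exp(1j * (2 * np.pi * f2 * t + p2))
    snapshots.append(s + 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m)))
X = np.array(snapshots).T                          # m x n_snap data matrix

R = X @ X.conj().T / n_snap                        # sample covariance
eigval, eigvec = np.linalg.eigh(R)                 # eigenvalues ascending
En = eigvec[:, :-2]                                # noise subspace (2 sources assumed)

freqs = np.linspace(0.1, 0.35, 1001)
steering = np.exp(1j * 2 * np.pi * np.outer(t, freqs))
proj = np.linalg.norm(En.conj().T @ steering, axis=0)
pseudo = 1.0 / proj ** 2                           # MUSIC pseudospectrum

est_low = freqs[np.argmax(np.where(freqs < 0.215, pseudo, 0))]
est_high = freqs[np.argmax(np.where(freqs >= 0.215, pseudo, 0))]
print(round(est_low, 3), round(est_high, 3))
```

The pseudospectrum peaks sit near 0.20 and 0.23 even though the two tones fall inside a single FFT bin, which is the property the CPM approach exploits in the time domain.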
On the estimation of the spherical contact distribution Hs(y) for spatial point processes
International Nuclear Information System (INIS)
Doguwa, S.I.
1990-08-01
Ripley (1977, Journal of the Royal Statistical Society B, 39, 172-212) proposed an estimator for the spherical contact distribution H_s(y) of a spatial point process observed in a bounded planar region. However, this estimator is not defined for some distances of interest in this bounded region. A new estimator for H_s(y) is proposed for use with a regular grid of sampling locations. This new estimator is defined for all distances of interest. It also appears to have a smaller bias and a smaller mean squared error than the previously suggested alternative. (author). 11 refs, 4 figs, 1 tab
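The quantity being estimated, H_s(y), is the probability that a reference location lies within distance y of the nearest point of the process. A hypothetical sketch of a grid-based estimator with a simple minus (border) correction follows; it is our own illustration, not the estimator proposed in the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 1.0, size=(500, 2))     # Poisson-like pattern, intensity ~500

# Regular grid of sampling locations in the unit square
gx, gy = np.meshgrid(np.linspace(0.05, 0.95, 19), np.linspace(0.05, 0.95, 19))
grid = np.column_stack([gx.ravel(), gy.ravel()])

d, _ = cKDTree(pts).query(grid)                # distance from each grid point to nearest point

def H_s(y):
    # Minus correction: use only grid points at least y from the window edge
    border = np.minimum.reduce([grid[:, 0], 1 - grid[:, 0],
                                grid[:, 1], 1 - grid[:, 1]])
    ok = border >= y
    return np.mean(d[ok] <= y)

# For a Poisson process of intensity lam, H_s(y) = 1 - exp(-lam * pi * y**2)
print(round(H_s(0.03), 2))
```

For y = 0.03 and intensity 500, the Poisson benchmark gives about 0.76, which the grid estimate should track; the border correction is what keeps the estimator defined and unbiased near the window edge.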
Target selection and mass estimation for manned NEO exploration using a baseline mission design
Boden, Ralf C.; Hein, Andreas M.; Kawaguchi, Junichiro
2015-06-01
In recent years Near-Earth Objects (NEOs) have received an increased amount of interest as targets for human exploration. NEOs offer scientifically interesting targets, and at the same time function as a stepping stone towards future Mars missions. The aim of this research is to identify promising targets, from the large number of known NEOs, that qualify for a manned sample-return mission with a maximum duration of one year. By developing a baseline mission design and a mass estimation model, mission opportunities are evaluated based on on-orbit mass requirements, safety considerations, and the properties of the potential targets. A selection of promising NEOs is presented, and the effects of mission requirements and restrictions are discussed. Regarding safety aspects, the use of free-return trajectories provides the lowest on-orbit mass when compared to an alternative design that uses system redundancies to ensure return of the spacecraft to Earth. It is found that, although a number of targets are accessible within the analysed time frame, no NEO offers both easy access and high incentive for its exploration. Under the discussed aspects, a first human exploration mission going beyond the vicinity of Earth will require a trade-off between targets that provide easy access and those that are of scientific interest. This lack of optimal mission opportunities can be seen in the small number of only 4 NEOs that meet all requirements for a sample-return mission and remain below an on-orbit mass of 500 metric tons (mT). All of them require a mass between 315 and 492 mT. Even less ideal, smaller asteroids that are better accessible require an on-orbit mass that exceeds the launch capability of future heavy-lift vehicles (HLV) such as the SLS by at least 30 mT. These mass requirements show that additional efforts are necessary to increase the number of available targets and reduce on-orbit mass requirements through advanced mission architectures. The need for on
Atomic displacement distributions for light energetic atoms incident on heavy atom targets
International Nuclear Information System (INIS)
Brice, D.K.
1975-01-01
The depth distributions of atomic displacements produced by 4 to 100 keV H, D, and He ions incident on Cr, Mo, and W targets have been calculated using a sharp displacement threshold, E_d = 35 eV, and a previously described calculational procedure. These displacement depth distributions have been compared with the depth distributions of energy deposited into atomic processes to determine whether a proportionality (modified Kinchin-Pease relationship) can be established. Such a relationship does exist for He ions and D ions incident on these metals at energies above 4 keV and 20 keV, respectively. For H ions the two distributions have significantly different shapes at all incident energies considered
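The proportionality tested above is the modified Kinchin-Pease relationship, in which the displacement density is proportional to the energy deposited into atomic processes. A one-line sketch (the 0.8 displacement efficiency is the standard NRT value; the sample deposition density is invented for illustration):

```python
E_d = 35.0    # displacement threshold used in the abstract, eV

def displacements_per_depth(F_D_eV_per_nm):
    # Modified Kinchin-Pease: N_d(x) = 0.8 * F_D(x) / (2 * E_d),
    # where F_D is energy deposited into atomic processes per unit depth
    return 0.8 * F_D_eV_per_nm / (2.0 * E_d)

# e.g. a deposition density of 700 eV/nm yields 8 displacements per nm
print(round(displacements_per_depth(700.0), 1))
```

The abstract's finding is that this proportionality holds for He and D above threshold energies but fails for H, whose displacement and deposition depth profiles differ in shape.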
Estimation of current density distribution of PAFC by analysis of cell exhaust gas
Energy Technology Data Exchange (ETDEWEB)
Kato, S.; Seya, A. [Fuji Electric Co., Ltd., Ichihara-shi (Japan); Asano, A. [Fuji Electric Corporate, Ltd., Yokosuka-shi (Japan)
1996-12-31
Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for producing fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in a cell plane calculated by the simulation was verified.
Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi
2016-01-01
The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants.
Nishiura, Hiroshi; Yan, Ping; Sleeman, Candace K; Mode, Charles J
2012-02-07
Use of the final size distribution of minor outbreaks for the estimation of the reproduction numbers of supercritical epidemic processes has yet to be considered. We used a branching process model to derive the final size distribution of minor outbreaks, assuming a reproduction number above unity, and applied the method to final size data for pneumonic plague. Pneumonic plague is a rare disease with only one documented major epidemic in a spatially limited setting. Because the final size distribution of a minor outbreak needs to be normalized by the probability of extinction, we assume that the dispersion parameter (k) of the negative-binomial offspring distribution is known, and examine the sensitivity of the reproduction number to variation in dispersion. Assuming a geometric offspring distribution with k=1, the reproduction number was estimated at 1.16 (95% confidence interval: 0.97-1.38). When less dispersed, with k=2, the maximum likelihood estimate of the reproduction number was 1.14. These estimates agree with those published from transmission network analysis, indicating that the human-to-human transmission potential of pneumonic plague is not very high. Given only minor outbreaks, transmission potential is not sufficiently assessed by directly counting the number of offspring. Since the absence of a major epidemic does not guarantee a subcritical process, the proposed method allows us to conservatively regard epidemic data from minor outbreaks as supercritical, and to yield estimates of threshold values above unity.
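The estimation idea can be sketched for the geometric offspring case (k = 1), where the final-size probability has a closed form via the Dwass formula and, for a supercritical process, is normalized by the extinction probability q = 1/R. The outbreak sizes below are invented for illustration, not the plague dataset:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

# Hypothetical final sizes of observed minor outbreaks
sizes = np.array([1, 1, 1, 2, 1, 3, 1, 1, 5, 2, 1, 1, 2, 8, 1, 1, 2, 1, 3, 1])

def log_pmf(j, R):
    # Dwass formula for geometric offspring with mean R:
    # P(Y=j) = (1/j) * C(2j-2, j-1) * R**(j-1) / (1+R)**(2j-1)
    return (-np.log(j) + gammaln(2 * j - 1) - 2 * gammaln(j)
            + (j - 1) * np.log(R) - (2 * j - 1) * np.log1p(R))

def neg_loglik(R):
    # Supercritical normalization: condition on extinction, q = 1/R
    q = min(1.0, 1.0 / R)
    return -np.sum(log_pmf(sizes, R) - np.log(q))

# Restrict to the supercritical branch R > 1, as the abstract does
res = minimize_scalar(neg_loglik, bounds=(1.0, 10.0), method="bounded")
print(round(res.x, 2))
```

Without the restriction to R > 1 the likelihood has a dual subcritical maximum at 1/R, which is exactly why the abstract emphasizes treating minor-outbreak data as conservatively supercritical.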
Smolyar, V A; Eremin, V V
2002-01-01
Within a diffusion model of the kinetic equation for a beam of electrons normally incident on a target, analytical formulae are derived for the distributions of deposited energy and injected charge. No empirical adjustable parameters are introduced into the theory. The calculated distributions of deposited energy for an electron plane directional source within an infinite medium, for C, Al, Sn and Pb, are in good agreement with the data of Spencer, which were derived from an accurate solution of the Bethe equation, the starting point for the diffusion model as well
International Nuclear Information System (INIS)
Smolyar, V.A.; Eremin, A.V.; Eremin, V.V.
2002-01-01
Within a diffusion model of the kinetic equation for a beam of electrons normally incident on a target, analytical formulae are derived for the distributions of deposited energy and injected charge. No empirical adjustable parameters are introduced into the theory. The calculated distributions of deposited energy for an electron plane directional source within an infinite medium, for C, Al, Sn and Pb, are in good agreement with the data of Spencer, which were derived from an accurate solution of the Bethe equation, the starting point for the diffusion model as well [ru
Determination of the axial 235U distribution in target fuel rods
International Nuclear Information System (INIS)
Huettig, G.; Bernhard, G.; Niese, U.
1989-01-01
The homogeneity of the axial 235U distribution in target fuel rods is an important quality criterion for the production of 99Mo. The 235U distribution has been analyzed automatically and nondestructively by measuring the 235U gamma-ray peak at 285.7 keV. For the quantitative assessment, a calibration curve was prepared with the help of X-ray fluorescence analysis, colorimetry, and photometric titration. The accuracy of the method is ≤ 1.5% uranium per centimeter of the fuel rod
Angular distributions of particles sputtered from multicomponent targets with gas cluster ions
Energy Technology Data Exchange (ETDEWEB)
Ieshkin, A.E. [Faculty of Physics, Lomonosov Moscow State University, Leninskie Gory, Moscow 119991 (Russian Federation); Ermakov, Yu.A., E-mail: yuriermak@yandex.ru [Skobeltsyn Nuclear Physics Research Institute, Lomonosov Moscow State University, Leninskie Gory, Moscow 119991 (Russian Federation); Chernysh, V.S. [Faculty of Physics, Lomonosov Moscow State University, Leninskie Gory, Moscow 119991 (Russian Federation)
2015-07-01
The experimental angular distributions of atoms sputtered from polycrystalline W, Cd and Ni-based alloys by 10 keV Ar cluster ions are presented. RBS was used to analyze the material deposited on a collector. It has been found that the mechanism of sputtering connected with the elastic properties of the materials has a significant influence on the angular distributions of the sputtered components. The effect of non-stoichiometric sputtering at different emission angles has been found for the alloys under cluster ion bombardment. Substantial smoothing of the surface relief was observed for all targets irradiated with cluster ions.
Integrated detection, estimation, and guidance in pursuit of a maneuvering target
Dionne, Dany
The thesis focuses on efficient solutions of non-cooperative pursuit-evasion games with imperfect information on the state of the system. This problem is important in the context of interception of future maneuverable ballistic missiles. However, the theoretical developments are expected to find application to a broad class of hybrid control and estimation problems in industry. The validity of the results is nevertheless confirmed using a benchmark problem in the area of terminal guidance. A specific interception scenario between an incoming target with no information and a single interceptor missile with noisy measurements is analyzed in the form of a linear hybrid system subject to additive abrupt changes. The general research is aimed at achieving improved homing accuracy by integrating ideas from detection theory, state estimation theory and guidance. The results achieved can be summarized as follows. (i) Two novel maneuver detectors are developed to diagnose abrupt changes in a class of hybrid systems (detection and isolation of evasive maneuvers): a new implementation of the GLR detector and the novel adaptive-H0 GLR detector. (ii) Two novel state estimators for target tracking are derived using the novel maneuver detectors. The state estimators employ parameterized families of functions to describe possible evasive maneuvers. (iii) A novel adaptive Bayesian multiple model predictor of the ballistic miss is developed, which employs semi-Markov models and ideas from detection theory. (iv) A novel integrated estimation and guidance scheme that significantly improves the homing accuracy is also presented. The integrated scheme employs banks of estimators and guidance laws, a maneuver detector, and an on-line governor; the scheme is adaptive with respect to the uncertainty affecting the probability density function of the filtered state. (v) A novel discretization technique for the family of continuous-time, game-theoretic, bang-bang guidance laws is introduced. The
Hortos, William S.
2008-04-01
participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
Spatial distribution of carbon species in laser ablation of graphite target
International Nuclear Information System (INIS)
Ikegami, T.; Ishibashi, S.; Yamagata, Y.; Ebihara, K.; Thareja, R.K.; Narayan, J.
2001-01-01
We report on the temporal evolution and spatial distribution of C2 and C3 molecules produced by KrF laser ablation of a graphite target, using laser-induced fluorescence imaging and optical emission spectroscopy. Spatial density profiles of C2 were measured using two-dimensional fluorescence at various pressures of different ambient gases (vacuum, nitrogen, oxygen, hydrogen, helium, and argon), ablation laser fluences, and ablation areas. A large yield of C2 is observed in the central part of the plume and near the target surface, and its density and distribution were affected by the laser fluence and the ambient gas. Fluorescent C3 was studied in Ar gas; the yield of C3 is enhanced at higher gas pressure and longer delay times after ablation
Fast neutron forward distributions from C, Be and U thick targets bombarded by deuterons
International Nuclear Information System (INIS)
Menard, S.; Clapier, F.; Pauwels, N.; Proust, J.; Donzaud, C.; Guillemaud-Mueller, D.; Lhenry, I.; Mueller, A.C.; Scarpaci, J.A.; Sorlin, O.; Mirea, M.
1999-01-01
In principle, to produce neutron-rich radioactive beams with sufficient intensities, a source of isotopes far from the valley of β-stability can be obtained through the fission of 238U induced by fast neutrons. A very promising way to produce these very intense neutron beams is to break up an intense 2H (deuteron) beam in a dedicated converter. The main objective of the SPIRAL and PARRNe R&D projects is the investigation of the optimum parameters for a neutron-rich isotope source following the scheme presented above. In this scheme, the charged-particle energy loss is deposited in the converter, which prevents the destruction of the fission target. In the framework of these projects, special attention is dedicated to the energy and angular distributions of the neutrons emerging from a set of converters at a series of 2H incident energies. Deuteron beams at energies below 30 MeV are particularly interesting because it is expected that, after the decay in the 238U target, the neutron-rich radioactive fission products are cold enough to avoid the evaporation of too large a number of neutrons. For such purposes, one needs experimental angular distributions at given energies for different types of converters, and a theoretical tool to estimate accurately the characteristics of the secondary neutron beam. In this paper, experimental results were obtained at 17, 20 and 28 MeV deuteron energies on Be, C and U converters using the time-of-flight method. These data are compared to results given by a model valid at higher energy in order to obtain pertinent simulations over a large range of incident energies. Many theoretical tools have been developed to characterize the properties of neutron beams emerging from thick targets. In this contribution, Serber's model, with improvements accounting for the Coulomb deflection and the mean straggling of the beam in the material, is compared to experimental data in order to verify its validity
Spatial distribution of moderated neutrons along a Pb target irradiated by high-energy protons
International Nuclear Information System (INIS)
Fragopoulou, M.; Manolopoulou, M.; Stoulos, S.; Brandt, R.; Westmeier, W.; Kulakov, B.A.; Krivopustov, M.I.; Sosnin, A.N.; Debeauvais, M.; Adloff, J.C.; Zamani Valasiadou, M.
2006-01-01
High-energy protons in the range 0.5-7.4 GeV irradiated an extended Pb target covered with a paraffin moderator. The moderator was used to shift the hard Pb spallation neutron spectrum to lower energies and to increase the transmutation efficiency via (n,γ) reactions. Neutron distributions along and inside the paraffin moderator were measured. An analysis of the experimental results was performed, based on particle production by high-energy interactions with heavy targets and neutron spectrum shifting by the paraffin. Conclusions about spallation neutron production in the target and moderation through the paraffin are presented. The study of the total neutron fluence on the moderator surface as a function of proton beam energy shows that the neutron cost improves up to 1 GeV; for higher proton beam energies it remains constant, with a tendency to decline
In vivo estimation of target registration errors during augmented reality laparoscopic surgery.
Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2018-06-01
Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
Preliminary estimation of minimum target dose in intracavitary radiotherapy for cervical cancer
Energy Technology Data Exchange (ETDEWEB)
Ohara, Kiyoshi; Oishi-Tanaka, Yumiko; Sugahara, Shinji; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine
2001-08-01
In intracavitary radiotherapy (ICRT) for cervical cancer, the minimum target dose (D{sub min}) pertains to local disease control more directly than does the reference point A dose (D{sub A}). However, ICRT has traditionally been performed without specifying D{sub min}, since the target volume was not identified. We have estimated D{sub min} retrospectively by identifying tumors on magnetic resonance (MR) images. Pre- and posttreatment MR images of 31 patients treated with high-dose-rate ICRT were used. ICRT was performed once weekly at 6.0 Gy D{sub A}, and involved 2-5 insertions for each patient, 119 insertions in total. For simplicity, D{sub min} was calculated at the point A level using the tumor width (W{sub A}) for comparison with D{sub A}. W{sub A} at each insertion was estimated by regression analysis of pre- and posttreatment W{sub A}. D{sub min} for each insertion varied from 3.0 to 46.0 Gy, a 16-fold difference. The ratio of total D{sub min} to total D{sub A} for each patient varied from 0.5 to 6.5. The intrapatient D{sub min} difference between the initial and final insertions varied from 1.1 to 3.4. This preliminary estimation revealed that D{sub min} varies widely under generic dose prescription. Thorough D{sub min} specification will become possible when ICRT-applicator insertion is performed under MR imaging. (author)
Directory of Open Access Journals (Sweden)
SANKU DEY
2010-11-01
Full Text Available The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risk based on a class of non-informative priors under three loss functions, namely the quadratic loss function (QLF), the squared log-error loss function (SLELF) and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among these three. The performances of the estimators are therefore compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.
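Given posterior draws of a positive parameter, the three Bayes estimators named above take simple forms: the posterior mean under QLF, the geometric mean under SLELF, and (E[θ^-c])^(-1/c) under GELF. A minimal sketch of this relationship; the gamma draws below are a generic stand-in for a posterior, not the paper's GE-distribution posterior:

```python
import math
import random

def bayes_estimates(posterior_samples, c=1.0):
    """Bayes estimators of a positive parameter from posterior draws:
    QLF   -> posterior mean
    SLELF -> exp(E[log theta])      (geometric mean)
    GELF  -> (E[theta^-c])^(-1/c)   (c is the entropy-loss shape)
    """
    n = len(posterior_samples)
    qlf = sum(posterior_samples) / n
    slelf = math.exp(sum(math.log(t) for t in posterior_samples) / n)
    gelf = (sum(t ** (-c) for t in posterior_samples) / n) ** (-1.0 / c)
    return qlf, slelf, gelf

random.seed(1)
# stand-in posterior: Gamma(shape=5, scale=0.4), mean 2.0
draws = [random.gammavariate(5.0, 0.4) for _ in range(10000)]
qlf, slelf, gelf = bayes_estimates(draws, c=1.0)
# By Jensen's inequality (harmonic <= geometric <= arithmetic mean),
# the estimators are ordered gelf <= slelf <= qlf for c = 1.
print(qlf, slelf, gelf)
```

The ordering of the three estimates illustrates why the loss-function choice matters: asymmetric losses (SLELF, GELF) pull the estimate below the posterior mean.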
On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution
Directory of Open Access Journals (Sweden)
Navid Feroze
2015-12-01
Full Text Available Burr type III is an important distribution used to model failure-time data. The paper addresses the estimation of the parameters of the Burr type III distribution by maximum likelihood estimation (MLE) when the samples are left-censored. As closed-form expressions for the MLEs of the parameters cannot be derived, approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parameter values. A real-life example is also presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results for small to moderate samples.
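As noted above, the left-censored MLEs have no closed form and must be found iteratively. A hedged sketch of one such iterative scheme, using coordinate-wise golden-section search on the left-censored Burr III log-likelihood (F(x) = (1 + x^-c)^-k); the paper's exact algorithm and data are not reproduced, and the sample size and censoring threshold here are illustrative:

```python
import math
import random

def burr3_logpdf(x, c, k):
    # Burr III density: f(x) = c*k*x^(-c-1) * (1 + x^(-c))^(-k-1), x > 0
    return (math.log(c) + math.log(k) - (c + 1) * math.log(x)
            - (k + 1) * math.log1p(x ** (-c)))

def burr3_logcdf(x, c, k):
    # Burr III cdf: F(x) = (1 + x^(-c))^(-k)
    return -k * math.log1p(x ** (-c))

def loglik(obs, n_cens, T, c, k):
    # Left-censored points are only known to lie below the threshold T
    return n_cens * burr3_logcdf(T, c, k) + sum(burr3_logpdf(x, c, k) for x in obs)

def golden_max(f, lo, hi, iters=30):
    # Golden-section search for the maximum of a unimodal 1-D function
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    x1, x2 = b - g * (b - a), a + g * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(iters):
        if f1 < f2:
            a, x1, f1 = x1, x2, f2
            x2 = a + g * (b - a)
            f2 = f(x2)
        else:
            b, x2, f2 = x2, x1, f1
            x1 = b - g * (b - a)
            f1 = f(x1)
    return (a + b) / 2

def fit_left_censored(obs, n_cens, T, sweeps=10):
    c, k = 1.0, 1.0  # starting values
    for _ in range(sweeps):  # alternate 1-D maximizations over c and k
        c = golden_max(lambda v: loglik(obs, n_cens, T, v, k), 0.05, 10.0)
        k = golden_max(lambda v: loglik(obs, n_cens, T, c, v), 0.05, 10.0)
    return c, k

random.seed(7)
c_true, k_true = 2.0, 1.5
# inverse-cdf sampling: x = (U^(-1/k) - 1)^(-1/c)
sample = [(random.random() ** (-1.0 / k_true) - 1.0) ** (-1.0 / c_true)
          for _ in range(2000)]
T = 0.5  # left-censoring threshold (about 9% of the sample censored)
obs = [x for x in sample if x >= T]
n_cens = len(sample) - len(obs)
c_hat, k_hat = fit_left_censored(obs, n_cens, T)
print(c_hat, k_hat)
```

With a moderate sample the iterates land close to the true (c, k), consistent with the consistency claim in the abstract.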
Gardner, W. P.
2017-12-01
A model that simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay and groundwater inflow, is coupled to a lumped-parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence-time distribution. The lumped parameters, which describe the residence-time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging or other methods, and the age of groundwater can then be estimated from the previously calculated groundwater discharge and the observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional-scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to limited samples from groundwater wells in the region. Our results show that the stream can be used as an easily accessible location at which to constrain the regional-scale spatial distribution of groundwater age.
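The lumped-parameter idea can be illustrated with its simplest case: an exponential residence-time distribution, constant tracer input, and first-order radioactive decay, for which the convolution integral has a closed form. A sketch under those assumptions; the tracer values below are illustrative, not the Berland River data:

```python
import math

def exp_model_concentration(c_input, mean_age, half_life):
    """Tracer concentration in discharge for an exponential residence-time
    distribution (a common lumped-parameter model) with constant input and
    first-order radioactive decay:
        C = C_in * integral (1/tau_m) exp(-t/tau_m) exp(-lambda*t) dt
          = C_in / (1 + lambda * tau_m)
    """
    lam = math.log(2) / half_life
    return c_input / (1.0 + lam * mean_age)

def mean_age_from_ratio(ratio, half_life):
    # Invert the relation above: tau_m = (C_in/C - 1) / lambda
    lam = math.log(2) / half_life
    return (1.0 / ratio - 1.0) / lam

# Hypothetical numbers: tritium (half-life 12.32 yr), discharge
# concentration observed at 40% of the input concentration
tau = mean_age_from_ratio(0.4, 12.32)
print(tau)  # mean residence time in years
```

Fitting several tracers with different decay constants (as in the SF6/CFC/3H/222Rn surveys above) constrains the shape of the residence-time distribution, not just its mean.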
International Nuclear Information System (INIS)
Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.
2001-01-01
Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD
Directory of Open Access Journals (Sweden)
Yuri Gulbin
2011-05-01
Full Text Available The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing the expected grain size distribution on the basis of intersection-size histogram data. To review these questions, computer modeling was used to compare size distributions obtained stereologically with those of three-dimensional model aggregates of grains with a specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that the new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take state constraints into account during the optimization process, such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy with a significant reduction in the overall computation load.
A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2018-02-01
Full Text Available Composite models have received much attention in the recent actuarial literature as descriptions of heavy-tailed insurance loss data. One model that performs well for this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, this distribution is revisited to carry out estimation of parameters via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel-density-estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediary party to help in the computation.
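The perturbation idea can be illustrated with a toy additive secret-sharing sum, in which an aggregate is recovered while each individual value stays masked by random noise. This is a simplified stand-in for intuition only, not the paper's verifiable-secret-sharing protocol:

```python
import random

def share_value(v, n_parties, rng):
    """Additively split a private value into n random shares that sum to v.
    Any single share looks like uniform noise; only the full sum of shares
    reveals the aggregate (a toy version of random data perturbation)."""
    shares = [rng.uniform(-1e6, 1e6) for _ in range(n_parties - 1)]
    shares.append(v - sum(shares))
    return shares

rng = random.Random(0)
secrets = [3.5, -1.25, 7.0]          # one private value per party
all_shares = [share_value(s, 3, rng) for s in secrets]
# Party j sums the j-th share received from every participant; publishing
# these partial sums reveals only the overall total, not any single secret.
partial = [sum(all_shares[i][j] for i in range(3)) for j in range(3)]
print(sum(partial))                   # equals sum(secrets)
```

In the clustering setting, the masked quantities would be the parties' local kernel-density contributions rather than raw scalars.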
Estimation of value at risk and conditional value at risk using normal mixture distributions model
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distribution models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we present the application of the normal mixture distribution model in empirical finance, where we fit our real data. Second, we present its application in risk analysis, where we apply the model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the return distribution.
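Once the mixture parameters are fitted, VaR is the lower-tail quantile of the mixture (found here by bisection on the mixture cdf) and CVaR follows from the closed-form partial expectation of each normal component. A sketch with illustrative parameters, not the FBMKLCI estimates:

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def mixture_cdf(x, comps):
    # comps: list of (weight, mean, std) tuples
    return sum(w * norm_cdf((x - m) / s) for w, m, s in comps)

def var_cvar(comps, alpha=0.05):
    """Lower-tail alpha-quantile (VaR) and expected shortfall (CVaR) of a
    normal mixture: bisection for the quantile, then the closed-form normal
    partial expectation  E[X 1{X<=q}] = mu*Phi(z) - sigma*phi(z)."""
    lo, hi = -50.0, 50.0
    for _ in range(200):                     # bisection on the mixture cdf
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, comps) < alpha:
            lo = mid
        else:
            hi = mid
    q = 0.5 * (lo + hi)
    tail = sum(w * (m * norm_cdf((q - m) / s) - s * norm_pdf((q - m) / s))
               for w, m, s in comps)
    return q, tail / alpha                   # CVaR = E[X | X <= q]

# Illustrative (weight, mean, std): a calm regime plus a volatile regime,
# which reproduces the leptokurtosis mentioned in the abstract
comps = [(0.9, 0.01, 0.04), (0.1, -0.02, 0.12)]
var5, cvar5 = var_cvar(comps, alpha=0.05)
print(var5, cvar5)
```

CVaR is always below VaR in the lower tail, which is why it is the stricter of the two risk measures.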
On the estimation of channel power distribution for PHWRs (Paper No. HMT-66-87)
International Nuclear Information System (INIS)
Parikh, M.V.; Kumar, A.N.; Krishnamohan, B.; Bhaskara Rao, P.
1987-01-01
In the case of PHWRs, the estimation of the channel power distribution is an important safety criterion. In this paper, two methods, one based on theoretical estimation and one on a measured parameter, are described. The comparison made shows good agreement in the prediction of channel power by both methods. A parametric study of one of the measured parameters is also made, which gives better agreement in the results obtained. (author). 3 tabs
Impact of smart metering data aggregation on distribution system state estimation
Chen, Qipeng; Kaleshi, Dritan; Fan, Zhong; Armour, Simon
2016-01-01
Pseudo medium/low voltage (MV/LV) transformer loads are usually used as partial inputs to the distribution system state estimation (DSSE) in MV systems. Such pseudo load can be represented by the aggregation of smart metering (SM) data. This follows the government restriction that distribution network operators (DNOs) can only use aggregated SM data. Therefore, we assess the subsequent performance of the DSSE, which shows the impact of this restriction - it affects the voltage angle estimatio...
A New Method for the 2D DOA Estimation of Coherently Distributed Sources
Directory of Open Access Journals (Sweden)
Liang Zhou
2014-03-01
Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions of arrival (DOAs) of coherently distributed (CD) sources, one that can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices that depict these relations using the propagator technique. The central DOA estimates are then obtained from the main diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has significantly reduced computational cost compared with existing methods, and is thus beneficial for real-time processing and engineering realization. In addition, our approach is a robust estimator that does not depend on the angular distribution shape of the CD sources.
Han, Fang; Liu, Han
2017-02-01
Correlation matrices play a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix for estimating the high-dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that, after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both the spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we present for the first time a "sign subgaussian condition" that is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
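Entrywise, the transformed estimator referred to here maps Kendall's tau to the latent Pearson correlation via sin(π/2 · τ), and since tau depends only on ranks it is unchanged by monotone marginal transformations. A small numerical illustration on simulated data (not from the paper):

```python
import math
import random

def kendall_tau(x, y):
    # O(n^2) concordant/discordant pair count; fine for modest n
    n, s = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            a = (x[i] - x[j]) * (y[i] - y[j])
            s += 1 if a > 0 else (-1 if a < 0 else 0)
    return 2.0 * s / (n * (n - 1))

def latent_corr(x, y):
    # sin-transformed Kendall's tau: consistent for the latent Pearson
    # correlation under elliptical copulas, robust to heavy tails
    return math.sin(math.pi / 2 * kendall_tau(x, y))

random.seed(3)
rho = 0.6
xs, ys = [], []
for _ in range(600):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(z1)
    y = rho * z1 + math.sqrt(1 - rho * rho) * z2
    ys.append(math.exp(y))   # monotone marginal transform: tau is unchanged
r_hat = latent_corr(xs, ys)
print(r_hat)                 # close to the latent rho = 0.6
```

The Pearson correlation of (xs, ys) would be biased by the exp transform; the rank-based estimate is not, which is the robustness property the abstract highlights.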
Exact run length distribution of the double sampling x-bar chart with estimated process parameters
Directory of Open Access Journals (Sweden)
Teoh, W. L.
2016-05-01
Full Text Available Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss crucial information about a control chart's performance. It is therefore important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution of the double sampling (DS) X chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes early false alarms, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distributions of the optimal ARL-based and MRL-based DS X charts with estimated process parameters is presented. Examples of applications are given to aid practitioners in selecting the best design scheme for the DS X chart with estimated process parameters, based on their specific purpose.
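The gap between ARL and MRL can be seen in the simplest memoryless case, where the run length is geometric and its skewness pushes the median well below the mean. A sketch of percentile computation under that assumption (a simplified stand-in for the DS chart's actual run-length distribution, which requires the paper's methods):

```python
import math

def run_length_percentile(p, q):
    """q-th percentile of a geometric run-length distribution for a chart
    whose per-sample signal probability is p (smallest N with
    P(RL <= N) >= q, i.e. 1 - (1-p)^N >= q)."""
    return math.ceil(math.log(1 - q) / math.log(1 - p))

p = 0.0027                            # in-control signal probability of a 3-sigma limit
arl = 1 / p                           # mean run length, about 370
mrl = run_length_percentile(p, 0.5)   # median run length, about 257
p10 = run_length_percentile(p, 0.1)   # 10th percentile: early false alarms
print(arl, mrl, p10)
```

That roughly 10% of in-control runs signal within the first 40 samples, despite an ARL near 370, is exactly the "early false alarm" information the abstract says the ARL alone hides.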
Energy Technology Data Exchange (ETDEWEB)
Takamiya, Masanori [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501, Japan and Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Nakamura, Mitsuhiro, E-mail: m-nkmr@kuhp.kyoto-u.ac.jp; Akimoto, Mami; Ueki, Nami; Yamada, Masahiro; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Tanabe, Hiroaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047 (Japan); Kokubo, Masaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047, Japan and Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe 650-0047 (Japan); Itoh, Akio [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501 (Japan)
2016-04-15
Purpose: To assess the target localization error (TLE) in terms of the distance between the target and the localization point estimated from the surrogates (|TMD|), the average respiratory motion of the surrogates and the target (|aRM|), and the number of fiducial markers used for estimating the target (n). Methods: This study enrolled 17 lung cancer patients who subsequently underwent four fractions of real-time tumor-tracking irradiation. Four or five fiducial markers were implanted around the lung tumor. The three-dimensional (3D) distance between the tumor and markers was at maximum 58.7 mm. One of the markers was used as the target (P{sub t}), and those markers with a 3D |TMD{sub n}| ≤ 58.7 mm at end-exhalation were then selected. The estimated target position (P{sub e}) was calculated from a localization point consisting of one to three markers other than P{sub t}. Respiratory motion for P{sub t} and P{sub e} was defined as the root mean square of each displacement, and |aRM| was calculated from the mean value. TLE was defined as the root mean square of each difference between P{sub t} and P{sub e} during the monitoring of each fraction. These procedures were performed repeatedly using the remaining markers. To provide the best guidance with respect to n and |TMD|, fiducial markers with a 3D |aRM| ≥ 10 mm were selected. Finally, a total of 205, 282, and 76 TLEs that fulfilled the 3D |TMD| and 3D |aRM| criteria were obtained for n = 1, 2, and 3, respectively. Multiple regression analysis (MRA) was used to evaluate TLE as a function of |TMD| and |aRM| for each n. Results: |TMD| for n = 1 was larger than that for n = 3. Moreover, |aRM| was almost constant for all n, indicating a similar scale of marker motion near the lung tumor. MRA showed that |aRM| in the left–right direction was the major cause of TLE; however, this contribution made little difference to the 3D TLE because of the small amount of motion in the left–right direction. The TLE
Zhang, Ke; Jiang, Bin; Shi, Peng
2017-02-01
In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 criteria with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with AP.
Estimation of Bimodal Urban Link Travel Time Distribution and Its Applications in Traffic Analysis
Directory of Open Access Journals (Sweden)
Yuxiong Ji
2015-01-01
Full Text Available Vehicles travelling on urban streets are heavily influenced by traffic signal controls, pedestrian crossings, and conflicting traffic from cross streets, which results in bimodal travel time distributions, with one mode corresponding to trips without delays and the other to trips with delays. A hierarchical Bayesian bimodal travel time model is proposed to capture the interrupted nature of urban traffic flows. The travel time distributions obtained from the proposed model are then used to analyze traffic operations and estimate the travel time distribution in real time. The advantage of the proposed bimodal model is demonstrated using empirical data, and the results are encouraging.
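The bimodal idea can be sketched with a plain EM fit of a two-component normal mixture to simulated link travel times. This is a non-Bayesian stand-in for the paper's hierarchical model, and all numbers are illustrative:

```python
import math
import random

def em_two_normal(data, iters=80):
    """Plain EM for a 1-D two-component normal mixture: one mode for
    undelayed trips, one for delayed trips."""
    data = sorted(data)
    n = len(data)
    m1, m2 = data[n // 4], data[3 * n // 4]       # crude quartile init
    s1 = s2 = (data[-1] - data[0]) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in data:
            p1 = w * math.exp(-0.5 * ((x - m1) / s1) ** 2) / s1
            p2 = (1 - w) * math.exp(-0.5 * ((x - m2) / s2) ** 2) / s2
            r.append(p1 / (p1 + p2))
        # M-step: weighted means, variances and mixing weight
        n1 = sum(r)
        w = n1 / n
        m1 = sum(ri * x for ri, x in zip(r, data)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (n - n1)
        s1 = math.sqrt(sum(ri * (x - m1) ** 2 for ri, x in zip(r, data)) / n1)
        s2 = math.sqrt(sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, data)) / (n - n1))
    return w, (m1, s1), (m2, s2)

random.seed(5)
# synthetic link travel times (s): free-flow ~N(60, 5), delayed ~N(110, 15)
times = [random.gauss(60, 5) if random.random() < 0.7 else random.gauss(110, 15)
         for _ in range(3000)]
w, mode1, mode2 = em_two_normal(times)
print(w, mode1, mode2)
```

The fitted weight of the free-flow mode doubles as an estimate of the probability of traversing the link without being stopped, which is the operational quantity the abstract exploits.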
Directory of Open Access Journals (Sweden)
Tingting Jin
2017-04-01
Full Text Available Multichannel synthetic aperture radar (SAR) is a significant breakthrough against the inherent trade-off between high resolution and wide swath (HRWS) in conventional SAR. Moving target indication (MTI) is an important application of spaceborne HRWS SAR systems. In contrast to previous studies of SAR MTI, HRWS SAR mainly faces the problem of under-sampled data in each channel, making single-channel imaging and processing infeasible. In this study, the estimation of velocity is shown to be equivalent to the estimation of the cone angle, according to their relationship. A maximum likelihood (ML)-based algorithm is proposed to estimate the radial velocity in the presence of Doppler ambiguities. After that, signal reconstruction and compensation of the phase offset caused by the radial velocity are performed for the moving target. Finally, a traditional imaging algorithm is applied to obtain a focused image of the moving target. Experiments are conducted to evaluate the accuracy and effectiveness of the estimator under different signal-to-noise ratios (SNRs). Furthermore, the performance is analyzed for a moving ship subject to interference from different distributions of sea clutter. The results verify that the proposed algorithm is accurate and efficient, with low computational complexity. This paper aims at providing a solution to the velocity estimation problem in future HRWS SAR systems with multiple receive channels.
Directory of Open Access Journals (Sweden)
Dongwoo Jang
2018-03-01
Full Text Available Leaks in a water distribution network (WDS) constitute losses of water supply caused by pipeline failure, operational loss, and physical factors. This has raised the need for studies on the factors affecting the leakage ratio and on the estimation of leakage volume in a water supply system. In this study, principal component analysis (PCA) and an artificial neural network (ANN) were used to estimate the volume of water leakage in a WDS. Six main effective parameters were selected, and the data were standardized using the Z-score method. The PCA-ANN model was devised and the leakage ratio estimated. An accuracy assessment was performed to compare the measured leakage ratio to that of the simulated model. The results showed that the PCA-ANN method was more accurate for estimating the leakage ratio than a single ANN simulation. In addition, the estimation results differed according to the number of neurons in the ANN model's hidden layers. In this study, an ANN with multiple hidden layers of 12-12 neurons was found to be the best for estimating the leakage ratio. This work suggests approaches to improve the accuracy of leakage ratio estimation, as well as a scientific approach toward the sustainable management of water distribution systems.
International Nuclear Information System (INIS)
Ishitani, Kazuki; Yamane, Yoshihiro
1999-01-01
In nuclear fuel reprocessing plants, monitoring the spatial profile of the neutron flux to infer subcriticality and the distribution of fuel concentration, using detectors such as PSPCs, is very beneficial from the viewpoint of criticality safety. In this paper, a method of subcriticality and fuel concentration estimation intended for use in non-uniform systems is proposed. Its basic concept is pattern matching between the measured neutron flux distribution and distributions calculated beforehand. In any kind of subcriticality estimation, the measured neutron counts are fed into some kind of black box, which then outputs the subcriticality. We propose the use of an artificial neural network, or 'pattern matching', as a black box with no clear theoretical basis. These methods rely entirely on calculated values, made possible by recent advances in the accuracy of computer codes for criticality safety. The main difference between the indirect bias estimation method and our method is that our new approach targets unknown non-uniform systems. (J.P.N.)
Directory of Open Access Journals (Sweden)
Abass Ali K
2010-06-01
Full Text Available Abstract Background Insecticide-treated nets (ITNs) and long-lasting insecticidal nets (LLINs) are important means of malaria prevention. Although there is consensus regarding their importance, there is uncertainty as to which delivery strategies are optimal for dispensing these life-saving interventions. A targeted mass distribution of free LLINs to children under five and pregnant women was implemented in Zanzibar between August 2005 and January 2006. The outcomes of this distribution among children under five were evaluated four to nine months after implementation. Methods Two cross-sectional surveys were conducted in May 2006 in two districts of Zanzibar: Micheweni (MI) on Pemba Island and North A (NA) on Unguja Island. Household interviews were conducted with 509 caretakers of under-five children, who were surveyed for socio-economic status, the net distribution process, and perceptions and use of bed nets. Each step in the distribution process was assessed in all children one to five years of age for the unconditional and conditional proportion of success. System effectiveness (the accumulated proportion of success) and equity effectiveness were calculated, and predictors of LLIN use were identified. Results The overall proportion of children under five sleeping under any type of treated net was 83.7% (318/380) in MI and 91.8% (357/389) in NA. LLIN usage was 56.8% (216/380) in MI and 86.9% (338/389) in NA. Overall system effectiveness was 49% in MI and 87% in NA, and equity was found in the distribution scale-up in NA. In both districts, the predicting factors of a child sleeping under an LLIN were caretakers thinking that LLINs are better than conventional nets (OR = 2.8, p = 0.005 in MI and OR = 2.5, p = 0.041 in NA), in addition to receiving an LLIN (OR = 4.9, p Conclusions Targeted free mass distribution of LLINs can result in high and equitable bed net coverage among children under five. However, in order to sustain high effective coverage, there
International Nuclear Information System (INIS)
Briggs, C.K.; Tsugawa, R.T.; Hendricks, C.D.; Souers, P.C.
1975-01-01
The literature values for the 0.55-μm refractive index N of liquid and gaseous H2 and D2 are combined to yield the equation (N - 1) = (3.15 ± 0.12) × 10^-6 ρ, where ρ is the density in moles per cubic meter. This equation can be extrapolated to 300 K for use on DT in the solid, liquid, and gas phases. The equation is based on a review of solid-hydrogen densities measured in bulk and also by diffraction methods. By extrapolation, the estimated densities and 0.55-μm refractive indices for DT are given. Radiation-induced point defects could possibly cause optical absorption and a resulting increase in the refractive index of solid DT and T2. The effect of the DT refractive index on measuring glass and cryogenic DT laser targets is also described.
Kaplan-Meier estimators of distance distributions for spatial point processes
Baddeley, A.J.; Gill, R.D.
1997-01-01
When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit
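The product-limit construction the abstract builds on is the Kaplan-Meier estimator. The generic sketch below treats boundary-affected distances as censored observations, which is the spirit of the edge correction, though not the authors' exact spatial estimator:

```python
# Generic Kaplan-Meier product-limit estimator. In the spatial setting, a
# distance is "censored" (event = 0) when the window boundary is closer than
# the nearest point, so the true distance is only known to exceed it.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at event times.
    times: observed distances; events: 1 = observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out = 1.0, []
    for t, e in data:
        if e:
            surv *= 1.0 - 1.0 / n_at_risk
            out.append((t, surv))
        n_at_risk -= 1
    return out

# A distribution function such as the empty-space function F is then 1 - S.
km = kaplan_meier([1.0, 2.0, 2.5, 3.0], [1, 0, 1, 1])
```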
Energy Technology Data Exchange (ETDEWEB)
Jochem, Warren C [ORNL; Sims, Kelly M [ORNL; Bright, Eddie A [ORNL; Urban, Marie L [ORNL; Rose, Amy N [ORNL; Coleman, Phil R [ORNL; Bhaduri, Budhendra L [ORNL
2013-01-01
In recent years, uses of high-resolution population distribution databases have been increasing steadily for environmental, socioeconomic, public health, and disaster-related research and operations. With the development of daytime population distribution, the temporal resolution of such databases has been improved. However, the lack of incorporation of transitional populations, namely business and leisure travelers, leaves a significant population unaccounted for within critical infrastructure networks, such as at transportation hubs. This paper presents two general methodologies for estimating passenger populations in airport and cruise port terminals at a high temporal resolution, which can be incorporated into existing population distribution models. The methodologies are geographically scalable and demonstrate how two different transportation hubs with disparate temporal population dynamics can be modeled utilizing publicly available databases, including novel data sources of flight activity from the Internet that are updated in near-real time. The airport population estimation model shows great potential for rapid implementation for a large collection of airports on a national scale, and the results suggest reasonable accuracy in the estimated passenger traffic. By incorporating population dynamics at high temporal resolutions into population distribution models, we hope to improve estimates of populations exposed to or at risk from disasters, thereby improving emergency planning and response and leading to more informed policy decisions.
M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples
2013-01-01
Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...
Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina
2014-01-01
Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and c...
A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model
Directory of Open Access Journals (Sweden)
AMAD HAMZA
2016-10-01
Full Text Available In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the reverberation time (RT). For estimation, a well-known method is used: maximum likelihood estimation (MLE). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the sound energy or from the enclosure impulse response. In a pre-existing state-of-the-art method, the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution to model the decay rate and a spotting approach to identify regions of free decay in the reverberant signal. The motivation for the paper comes from the observation that when the RT of reverberant speech falls in a specific range, the signal's decay rate follows a Rayleigh distribution. Results of experiments carried out on numerous reverberant signals show that the performance and accuracy of the proposed method are better than those of pre-existing methods
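Once free-decay regions have been spotted, the Rayleigh fit itself has a closed form. A minimal sketch; the sample decay rates are illustrative, not measured data:

```python
import math

# MLE of the Rayleigh scale parameter: sigma^2 = sum(x^2) / (2n).
# The decay-rate extraction step is omitted; the input is assumed to be
# decay-rate samples already selected from free-decay regions.

def rayleigh_mle(samples):
    """Closed-form maximum likelihood estimate of the Rayleigh scale."""
    n = len(samples)
    return math.sqrt(sum(x * x for x in samples) / (2.0 * n))

sigma_hat = rayleigh_mle([1.0, 2.0, 2.0, 3.0])
```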
A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model
International Nuclear Information System (INIS)
Hamza, A.; Jan, T.; Ali, A.
2016-01-01
In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the reverberation time (RT). For estimation, a well-known method is used: maximum likelihood estimation (MLE). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the sound energy or from the enclosure impulse response. In a pre-existing state-of-the-art method, the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution to model the decay rate and a spotting approach to identify regions of free decay in the reverberant signal. The motivation for the paper comes from the observation that when the RT of reverberant speech falls in a specific range, the signal's decay rate follows a Rayleigh distribution. On the basis of experiments carried out on numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of other pre-existing methods. (author)
Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution
Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.
2004-01-01
The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions for the calculation of the pore sizes is that the pores are parallel and thus are not interconnected. To show that the estimated pore size
Impact of dose-distribution uncertainties on rectal ntcp modeling I: Uncertainty estimates
International Nuclear Information System (INIS)
Fenwick, John D.; Nahum, Alan E.
2001-01-01
A trial of nonescalated conformal versus conventional radiotherapy treatment of prostate cancer has been carried out at the Royal Marsden NHS Trust (RMH) and Institute of Cancer Research (ICR), demonstrating a significant reduction in the rate of rectal bleeding reported for patients treated using the conformal technique. The relationship between planned rectal dose-distributions and incidences of bleeding has been analyzed, showing that the rate of bleeding falls significantly as the extent of the rectal wall receiving a planned dose-level of more than 57 Gy is reduced. Dose-distributions delivered to the rectal wall over the course of radiotherapy treatment inevitably differ from planned distributions, due to sources of uncertainty such as patient setup error, rectal wall movement and variation in the absolute rectal wall surface area. In this paper estimates of the differences between planned and treated rectal dose-distribution parameters are obtained for the RMH/ICR nonescalated conformal technique, working from a distribution of setup errors observed during the RMH/ICR trial, movement data supplied by Lebesque and colleagues derived from repeat CT scans, and estimates of rectal circumference variations extracted from the literature. Setup errors and wall movement are found to cause only limited systematic differences between mean treated and planned rectal dose-distribution parameter values, but introduce considerable uncertainties into the treated values of some dose-distribution parameters: setup errors lead to 22% and 9% relative uncertainties in the highly dosed fraction of the rectal wall and the wall average dose, respectively, with wall movement leading to 21% and 9% relative uncertainties. Estimates obtained from the literature of the uncertainty in the absolute surface area of the distensible rectal wall are of the order of 13%-18%. In a subsequent paper the impact of these uncertainties on analyses of the relationship between incidences of bleeding
Distributed weighted least-squares estimation with fast convergence for large-scale systems
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976
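A heavily simplified illustration of the distributed estimation idea: each sub-system measures a common scalar parameter, and the global weighted least-squares estimate is reached by iterated neighborhood averaging. This consensus sketch is an assumption-laden stand-in for the paper's algorithm, which handles vector parameters, scaling and preconditioning:

```python
# Consensus-based distributed WLS for a shared scalar theta.
# Node i measures y_i = theta + noise with variance var_i; the centralized
# WLS estimate is sum(w_i*y_i)/sum(w_i) with w_i = 1/var_i. Each node keeps
# running averages of (w_i*y_i) and w_i via neighborhood communication.

def consensus_wls(y, var, neighbours, iters=200):
    num = [yi / vi for yi, vi in zip(y, var)]   # local w_i * y_i
    den = [1.0 / vi for vi in var]              # local w_i
    for _ in range(iters):
        num = [sum(num[j] for j in neighbours[i]) / len(neighbours[i])
               for i in range(len(y))]
        den = [sum(den[j] for j in neighbours[i]) / len(neighbours[i])
               for i in range(len(y))]
    return [n / d for n, d in zip(num, den)]    # each node's global estimate

# Fully connected 3-node network (each node averages itself and both peers).
nbrs = {0: [0, 1, 2], 1: [0, 1, 2], 2: [0, 1, 2]}
est = consensus_wls([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], nbrs)
```

With equal variances the global WLS estimate is the plain mean, so every node should converge to 2.0 here.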
Time difference of arrival estimation of microseismic signals based on alpha-stable distribution
Jia, Rui-Sheng; Gong, Yue; Peng, Yan-Jun; Sun, Hong-Mei; Zhang, Xing-Li; Lu, Xin-Ming
2018-05-01
Microseismic signals are generally considered to follow the Gauss distribution. A comparison of the dynamic characteristics of sample variance and the symmetry of microseismic signals with the signals which follow α-stable distribution reveals that the microseismic signals have obvious pulse characteristics and that the probability density curve of the microseismic signal is approximately symmetric. Thus, the hypothesis that microseismic signals follow the symmetric α-stable distribution is proposed. On the premise of this hypothesis, the characteristic exponent α of the microseismic signals is obtained by utilizing the fractional low-order statistics, and then a new method of time difference of arrival (TDOA) estimation of microseismic signals based on fractional low-order covariance (FLOC) is proposed. Upon applying this method to the TDOA estimation of Ricker wavelet simulation signals and real microseismic signals, experimental results show that the FLOC method, which is based on the assumption of the symmetric α-stable distribution, leads to enhanced spatial resolution of the TDOA estimation relative to the generalized cross correlation (GCC) method, which is based on the assumption of the Gaussian distribution.
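The fractional low-order covariance at the heart of the method replaces products of samples with signed fractional powers, which suppresses the impulsive outliers of alpha-stable noise. In the sketch below the exponents a = b = 0.5 are an assumption; the paper derives the fractional orders from the estimated characteristic exponent α:

```python
# TDOA estimation via fractional low-order covariance (FLOC):
#   FLOC(tau) = mean_t( <x(t)>^a * <y(t+tau)>^b ),  <z>^p = sign(z)*|z|^p.
# The lag maximizing FLOC is taken as the time difference of arrival.

def frac_pow(z, p):
    return (1.0 if z >= 0 else -1.0) * abs(z) ** p

def floc_tdoa(x, y, max_lag, a=0.5, b=0.5):
    """Return the integer lag in [-max_lag, max_lag] maximizing FLOC."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        acc, cnt = 0.0, 0
        for t in range(len(x)):
            if 0 <= t + lag < len(y):
                acc += frac_pow(x[t], a) * frac_pow(y[t + lag], b)
                cnt += 1
        val = acc / cnt
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# y is x delayed by 3 samples, so the estimated lag should be 3.
x = [0.0, 1.0, 3.0, -2.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]
y = [0.0, 0.0, 0.0, 0.0, 1.0, 3.0, -2.0, 0.5, 0.0, 0.0]
tdoa = floc_tdoa(x, y, max_lag=5)
```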
Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation.
Zhao, Wei; Wang, Han
2016-06-28
Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It takes the age of each facial image instance as a label distribution with a series of age labels rather than the single chronological age label that is commonly used. However, this methodology is deficient in its simple decision-making criterion: the final predicted age is only selected at the one with maximum description degree. In many cases, different age labels may have very similar description degrees. Consequently, blindly deciding the estimated age by virtue of the highest description degree would miss or neglect other valuable age labels that may contribute a lot to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results from the most popular aging face database, FG-NET, show the superiority and validity of all the proposed strategic decision-making learning algorithms over the existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties of SDM-LDL are further explored with more advantages.
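The decision step being criticized, next to one alternative strategy, fits in a few lines. The label distribution below is illustrative (near-tied neighboring ages), not data from FG-NET, and the expectation rule is just one example of using all description degrees rather than the peak alone:

```python
# Argmax decision vs. an expectation over the whole label distribution.
ages = [20, 21, 22, 23, 24]
degrees = [0.05, 0.30, 0.31, 0.29, 0.05]   # description degrees, near-tied peak

# Maximum-description-degree rule: keeps only the peak label.
age_argmax = ages[degrees.index(max(degrees))]

# Expectation rule: every label contributes in proportion to its degree.
age_expect = sum(a * d for a, d in zip(ages, degrees)) / sum(degrees)
```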
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem to be quite well-adapted to capture the skewness, the long tails, as well as the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of the quantiles with simulated nonnegative data and the quantiles of the individual claims distribution in a non-life insurance application.
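The transform-then-smooth pipeline can be sketched as follows, with a single exponential CDF standing in for the fitted parsimonious mixture (an assumption made for brevity) and Chen-style beta kernels on the unit interval:

```python
import math

# Beta-kernel density estimation on CDF-transformed claims.
# Chen's estimator on (0,1): f_hat(u) = mean_i BetaPDF(U_i; u/b + 1, (1-u)/b + 1).

def beta_pdf(x, a, b):
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x) - log_beta)

def beta_kernel_density(u, data_u, bw):
    """Boundary-friendly kernel density estimate at u in (0, 1)."""
    return sum(beta_pdf(xi, u / bw + 1.0, (1.0 - u) / bw + 1.0)
               for xi in data_u) / len(data_u)

lam = 1.0                                        # assumed exponential rate
claims = [0.3, 0.8, 1.2, 2.5, 0.1, 0.7]          # illustrative claim amounts
u_data = [1.0 - math.exp(-lam * c) for c in claims]   # CDF transform to (0,1)
f_unit = beta_kernel_density(0.5, u_data, bw=0.1)
# Back-transform by change of variables: f_claims(x) = f_unit(F(x)) * F'(x).
```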
Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu
2017-06-21
For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), is proposed for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using the fast Fourier transform (FFT) and complex multiplication. These measures are analyzed in the paper, including the principle, the cross terms, anti-noise performance, and computational complexity. Compared to the other three representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems: Preprint
Energy Technology Data Exchange (ETDEWEB)
Wang, Dexin; Yang, Liuqing; Florita, Anthony; Alam, S.M. Shafiul; Elgindy, Tarek; Hodge, Bri-Mathias
2016-08-01
The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral clustering based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of the AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE, and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
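A toy version of spectral regionalization: split the bus-connection graph in two by the sign pattern of the Fiedler eigenvector of its Laplacian. The 6-bus system below (two tightly coupled groups joined by a single tie line) is an illustrative assumption, not a test case from the paper, and the eigenvector is obtained by shifted power iteration to keep the sketch dependency-free:

```python
# Fiedler-vector bisection of a small graph, stdlib only.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
adj = [[0.0] * n for _ in range(n)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1.0
deg = [sum(row) for row in adj]

def fiedler_vector(adj, deg, iters=2000):
    """Power iteration on (cI - L) restricted to the subspace orthogonal to
    the constant vector; converges to the Laplacian's Fiedler eigenvector."""
    n = len(adj)
    c = 2.0 * max(deg)                      # shift making cI - L positive definite
    v = [(-1.0) ** i * (1.0 + 0.1 * i) for i in range(n)]   # generic start
    for _ in range(iters):
        mean = sum(v) / n                   # project out the constant eigenvector
        v = [x - mean for x in v]
        w = [c * v[i] - deg[i] * v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

region = [1 if x > 0 else 0 for x in fiedler_vector(adj, deg)]
```

The sign split should recover the two tightly coupled groups, cutting only the tie line.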
Directory of Open Access Journals (Sweden)
Hamdy Mohamed Salem
2018-03-01
Full Text Available This paper considers life-testing experiments and how they are affected by stress factors, namely temperature, electrical load, cycling rate and pressure. A major type of accelerated life test is the step-stress model, which allows the experimenter to increase stress levels beyond normal use during the experiment in order to observe failures of the test items. The test items are assumed to follow the Gamma Dual Weibull distribution. Different methods for estimating the parameters are discussed. These include maximum likelihood estimation and confidence interval estimation, which is based on asymptotic normality and generates narrow intervals for the unknown distribution parameters with high probability. The MathCAD (2001) program is used to illustrate the optimal time procedure through numerical examples.
An Estimation of the Gamma-Ray Burst Afterglow Apparent Optical Brightness Distribution Function
Akerlof, Carl W.; Swan, Heather F.
2007-12-01
By using recent publicly available observational data obtained in conjunction with the NASA Swift gamma-ray burst (GRB) mission and a novel data analysis technique, we have been able to make some rough estimates of the GRB afterglow apparent optical brightness distribution function. The results suggest that 71% of all burst afterglows are brighter than a limiting optical magnitude, and give a strong indication that the apparent optical magnitude distribution function peaks at mR ~ 19.5. Such estimates may prove useful in guiding future plans to improve GRB counterpart observation programs. The employed numerical techniques might find application in a variety of other data analysis problems in which the intrinsic distributions must be inferred from a heterogeneous sample.
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2017-11-01
This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
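The largest order value rule mentioned above is essentially a ranking operation: sort the components of the continuous vector sampled from the EDA's model in descending order and read off the job indices as the permutation:

```python
# Largest order value (LOV) rule: continuous vector -> job permutation.
def largest_order_value(vec):
    """Indices of components ranked in descending value form the permutation."""
    return sorted(range(len(vec)), key=lambda j: -vec[j])

perm = largest_order_value([0.2, 1.7, 0.9, -0.3])
```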
Directory of Open Access Journals (Sweden)
Álvaro Gutiérrez
2011-11-01
Full Text Available Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA, previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.
Influence of boundary effects on electron beam dose distribution formation in multilayer targets
International Nuclear Information System (INIS)
Kaluska, I.; Zimek, Z.; Lazurik, V.T.; Lazurik, V.M.; Popov, G.F.; Rogov, Y.V.
2010-01-01
Computational dosimetry plays a significant role in industrial radiation processing, for dose measurements in products irradiated with electron beams (EB), X-rays and gamma rays from radionuclide sources. Accurate and validated programs for absorbed dose calculations are required for computational dosimetry. The program ModeStEB (modelling of EB processing in three-dimensional (3D) multilayer flat targets) was designed specially for the simulation and optimization of industrial radiation processing and the calculation of 3D absorbed dose distributions within multilayer packages. The package is irradiated with a scanned EB on an industrial radiation facility based on pulsed or continuous electron accelerators in the electron energy range from 0.1 to 25 MeV. Simulation of EB dose distributions in multilayer targets was accomplished using the Monte Carlo (MC) method. Experimental verification of the MC simulation prediction for EB dose distribution formation in a stack of plates interleaved with polyvinylchloride (PVC) dosimetric films (DF), within a packing box, and irradiated with a scanned 10 MeV EB on a moving conveyer is discussed. (authors)
Estimation of CO2 flux from targeted satellite observations: a Bayesian approach
International Nuclear Information System (INIS)
Cox, Graham
2014-01-01
We consider the estimation of carbon dioxide flux at the ocean–atmosphere interface, given weighted averages of the mixing ratio in a vertical atmospheric column. In particular we examine the dependence of the posterior covariance on the weighting function used in taking observations, motivated by the fact that this function is instrument-dependent, hence one needs the ability to compare different weights. The estimation problem is considered using a variational data assimilation method, which is shown to admit an equivalent infinite-dimensional Bayesian formulation. The main tool in our investigation is an explicit formula for the posterior covariance in terms of the prior covariance and observation operator. Using this formula, we compare weighting functions concentrated near the surface of the earth with those concentrated near the top of the atmosphere, in terms of the resulting covariance operators. We also consider the problem of observational targeting, and ask if it is possible to reduce the covariance in a prescribed direction through an appropriate choice of weighting function. We find that this is not the case—there exist directions in which one can never gain information, regardless of the choice of weight. (paper)
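The explicit posterior-covariance formula the analysis rests on has the standard linear-Gaussian form C_post = C - C H^T (H C H^T + R)^{-1} H C. A finite-dimensional sketch with a single column-average observation, where the 3-level column, the diagonal prior and the surface-concentrated weights are all illustrative assumptions:

```python
# Posterior variance after one weighted-column observation y = sum_i w_i x_i + noise.
# With a diagonal prior C and scalar noise variance r, the update reduces to
#   C_post = C - (C w)(C w)^T / (w^T C w + r).
w = [0.7, 0.2, 0.1]          # weighting function, concentrated near the surface
prior_var = [1.0, 1.0, 1.0]  # diagonal of the prior covariance (assumed)
r = 0.1                      # observation-noise variance (assumed)

s = sum(wi * wi * vi for wi, vi in zip(w, prior_var)) + r   # w^T C w + r
cw = [vi * wi for vi, wi in zip(prior_var, w)]              # C w
post_var = [vi - cwi * cwi / s for vi, cwi in zip(prior_var, cw)]
var_reduction = [vi - pi for vi, pi in zip(prior_var, post_var)]
```

As the formula makes explicit, variance is reduced most in the directions the weighting function emphasizes, which is the mechanism behind the paper's observational-targeting question.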
Estimating distribution of hidden objects with drones: from tennis balls to manatees.
Directory of Open Access Journals (Sweden)
Julien Martin
Full Text Available Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95% CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.
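The core correction behind the experiment reduces to a few lines: raw image counts undercount because each object is hidden with some probability. In the sketch below the detection probability and the per-survey counts are assumed for illustration, whereas the occupancy models in the study estimate detection from the repeat surveys themselves:

```python
# Detection-corrected abundance from repeat survey counts:
#   N_hat = mean(C_k) / p,  p = per-object detection probability.
counts = [280, 285, 287]   # illustrative per-survey counts (assumed)
p_detect = 0.86            # assumed per-object detection probability

n_hat = sum(counts) / len(counts) / p_detect
```

This is why the corrected estimate (328 in the paper) can exceed the best raw count (284).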
Estimating distribution of hidden objects with drones: from tennis balls to manatees.
Martin, Julien; Edwards, Holly H; Burgess, Matthew A; Percival, H Franklin; Fagan, Daniel E; Gardner, Beth E; Ortega-Ortiz, Joel G; Ifju, Peter G; Evers, Brandon S; Rambo, Thomas J
2012-01-01
Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.
Estimation of Inflationary Expectations and the Effectiveness of Inflation Targeting Strategy
Directory of Open Access Journals (Sweden)
Amalia CRISTESCU
2011-02-01
Full Text Available The credibility and accountability of a central bank, acting in an inflation targeting regime, are essential because they allow a sustainable anchoring of the inflationary anticipations of economic agents. Their decisions and behavior will increasingly be grounded in information provided by the central bank, especially if it shows transparency in the process of communicating with the public. Thus, inflationary anticipations are one of the most important channels through which monetary policy affects economic activity. They are crucial in the formation of consumer prices among producers and traders, especially since it is relatively expensive for economic agents to adjust their prices at short intervals. That is why many central banks use response functions containing inflationary anticipations in their inflation targeting models. The most frequent problem in relation to these anticipations is that they are based on the assumption of optimal forecasts of future inflation, which are, implicitly, rational anticipations. In fact, the economic agents' inflationary anticipations are most often adaptive or even irrational. Thus, rational anticipations cannot be used to estimate equations for the Romanian economy because the agents who form their expectations do not have sufficient information or an inflationary environment stable enough to fully anticipate the inflation evolution. The inflation evolution in the Romanian economy makes it possible to calculate adaptive forecasts for which the weight of the "forward looking" component has to be rather important. The economic agents form their inflation expectations for periods of time that usually coincide with a production cycle (one year) and consider the official and unofficial inflation forecasts present on the market in order to make strategic decisions. Thus, in recent research on inflation modeling, actual inflationary anticipations of economic agents are revealed based on national
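The adaptive-expectations scheme contrasted with rational anticipations above has a standard recursive form: agents revise last period's forecast toward last observed inflation. The inflation series and adjustment speed below are illustrative, not Romanian data:

```python
# Adaptive expectations: pi_e[t] = pi_e[t-1] + lam * (pi[t-1] - pi_e[t-1]).
def adaptive_expectations(inflation, lam, pi_e0):
    """Return the expectation path aligned with the inflation series."""
    pi_e = [pi_e0]
    for pi in inflation[:-1]:
        pi_e.append(pi_e[-1] + lam * (pi - pi_e[-1]))
    return pi_e

# Disinflation path with a sticky starting expectation of 7%.
exp_path = adaptive_expectations([6.0, 5.0, 4.5, 4.0], lam=0.5, pi_e0=7.0)
```

Note how expectations lag realized inflation throughout, the defining feature distinguishing adaptive from rational anticipations.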
Targeted alpha therapy of mCRPC. Dosimetry estimate of {sup 213}bismuth-PSMA-617
Energy Technology Data Exchange (ETDEWEB)
Kratochwil, Clemens; Afshar-Oromieh, Ali; Rathke, Hendrik; Giesel, Frederik L. [University Hospital Heidelberg, Department of Nuclear Medicine, Heidelberg (Germany); Schmidt, Karl [ABX-CRO, Dresden (Germany); Bruchertseifer, Frank; Morgenstern, Alfred [European Commission - Joint Research Centre, Directorate for Nuclear Safety and Security, Karlsruhe (Germany); Haberkorn, Uwe [University Hospital Heidelberg, Department of Nuclear Medicine, Heidelberg (Germany); German Cancer Research Center (DKFZ), Cooperation Unit Nuclear Medicine, Heidelberg (Germany)
2018-01-15
PSMA-617 is a small molecule targeting the prostate-specific membrane antigen (PSMA). In this work, we estimate the radiation dosimetry for this ligand labeled with the alpha-emitter {sup 213}Bi. Three patients with metastatic prostate cancer underwent PET scans 0.1 h, 1 h, 2 h, 3 h, 4 h and 5 h after injection of {sup 68}Ga-PSMA-617. Source organs were kidneys, liver, spleen, salivary glands, bladder, red marrow and representative tumor lesions. The imaging nuclide {sup 68}Ga was extrapolated to the half-life of {sup 213}Bi. The residence times of {sup 213}Bi were forwarded to the unstable daughter nuclides. OLINDA was used for dosimetry calculation. Results are discussed in comparison to literature data for {sup 225}Ac-PSMA-617. Assuming a relative biological effectiveness of 5 for alpha radiation, the dosimetry estimate revealed equivalent doses of mean 8.1 Sv{sub RBE5}/GBq for salivary glands, 8.1 Sv{sub RBE5}/GBq for kidneys and 0.52 Sv{sub RBE5}/GBq for red marrow. Liver (1.2 Sv{sub RBE5}/GBq), spleen (1.4 Sv{sub RBE5}/GBq), bladder (0.28 Sv{sub RBE5}/GBq) and other organs (0.26 Sv{sub RBE5}/GBq) were not dose-limiting. The effective dose is 0.56 Sv{sub RBE5}/GBq. Tumor lesions were in the range 3.2-9.0 Sv{sub RBE5}/GBq (median 7.6 Sv{sub RBE5}/GBq). Kidneys would limit the cumulative treatment activity to 3.7 GBq; red marrow might limit the maximum single fraction to 2 GBq. Despite promising results, the therapeutic index was inferior compared to {sup 225}Ac-PSMA-617. Dosimetry of {sup 213}Bi-PSMA-617 is in a range traditionally considered reasonable for clinical application. Nevertheless, compared to {sup 225}Ac-PSMA-617, it suffers from higher perfusion-dependent off-target radiation and a longer biological half-life of PSMA-617 in dose-limiting organs than the physical half-life of {sup 213}Bi, rendering this nuclide a second choice radiolabel for targeted alpha therapy of prostate cancer. (orig.)
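The kidney-limited activity quoted above follows from simple arithmetic on the dose coefficient. The 30 Sv(RBE5) kidney tolerance below is an assumption chosen to reproduce the stated 3.7 GBq figure, not a value given in the record:

```python
# Cumulative activity limit from an organ dose coefficient and a tolerance dose.
dose_coeff_kidney = 8.1    # Sv_RBE5 per GBq, from the abstract
kidney_tolerance = 30.0    # Sv_RBE5, assumed tolerance behind the 3.7 GBq figure

max_activity_gbq = kidney_tolerance / dose_coeff_kidney
```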
International Nuclear Information System (INIS)
Haga, Katsuhiro; Terada, Atsuhiko; Ishikura, Shuichi; Teshigawara, Makoto; Kinoshita, Hidetaka; Kobayashi, Kaoru; Kaminaga, Masaki; Hino, Ryutaro; Susuki, Akira
1999-11-01
A solid target cooled by heavy water is presently under development under the Neutron Science Research Project of the Japan Atomic Energy Research Institute (JAERI). Target plates several millimeters thick, made of heavy metal, are used as the spallation target material; they are arranged face to face in a row with gaps of one to two millimeters in between, through which heavy water flows as the coolant. Based on the design criteria regarding target plate cooling, the volume percentage of the coolant, and the thermal stress produced in the target plates, we conducted a thermal and hydraulic analysis with a one-dimensional target plate model. We chose tungsten as the target material and determined the various target plate thicknesses. We then calculated the temperature and the thermal stress in the target plates using a two-dimensional model, and confirmed the validity of the target plate thicknesses. Based on these analytical results, we proposed a target structure in which forty target plates are divided into six groups, each group cooled by a single pass of coolant. In order to investigate the relationship between the coolant flow distribution, the pressure drop, and the coolant velocity, we conducted a hydraulic analysis using a general-purpose hydraulic analysis code. As a result, we found that a uniform coolant flow distribution can be achieved over a wide range of flow velocities in the target plate cooling channels, from 1 m/s to 10 m/s. The pressure drop along the coolant path was 0.09 MPa and 0.17 MPa at coolant flow velocities of 5 m/s and 7 m/s, respectively, which are the velocities required to cool the 1.5 MW and 2.5 MW solid targets. (author)
The Spatial Distribution of Forest Biomass in the Brazilian Amazon: A Comparison of Estimates
Houghton, R. A.; Lawrence, J. L.; Hackler, J. L.; Brown, S.
2001-01-01
The amount of carbon released to the atmosphere as a result of deforestation is determined, in part, by the amount of carbon held in the biomass of the forests converted to other uses. Uncertainty in forest biomass is responsible for much of the uncertainty in current estimates of the flux of carbon from land-use change. We compared several estimates of forest biomass for the Brazilian Amazon, based on spatial interpolations of direct measurements, relationships to climatic variables, and remote sensing data. We asked three questions. First, do the methods yield similar estimates? Second, do they yield similar spatial patterns of distribution of biomass? And, third, what factors need most attention if we are to predict more accurately the distribution of forest biomass over large areas? Estimates of the total biomass of Amazonian forests (including dead and below-ground biomass) vary by more than a factor of two, from a low of 39 PgC to a high of 93 PgC. Furthermore, the estimates disagree as to the regions of high and low biomass. The lack of agreement among estimates confirms the need for reliable determination of aboveground biomass over large areas. Potential methods include direct measurement of biomass through forest inventories with improved allometric regression equations, dynamic modeling of forest recovery following observed stand-replacing disturbances (the approach used in this research), and estimation of aboveground biomass from airborne or satellite-based instruments sensitive to the vertical structure of plant canopies.
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
The TL-moments approach has been used to identify the best-fitting distributions for the annual maximum streamflow series at seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4, was compared with that of L-moments through Monte Carlo simulation and streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution were the most appropriate at most stations of the annual maximum streamflow series in Johor, Malaysia.
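The sample TL-moments referred to above have a direct combinatorial estimator built from order statistics; a minimal sketch in Python, where `t1` and `t2` are the lower and upper trimming orders (the function name and interface are illustrative, not from the paper):

```python
from math import comb

def tl_moment(data, r, t1=0, t2=0):
    """Sample TL-moment of order r with lower/upper trimming (t1, t2),
    via the direct order-statistic estimator; (t1, t2) = (0, 0)
    recovers the ordinary sample L-moment."""
    xs = sorted(data)
    n = len(xs)
    total = 0.0
    for k in range(r):
        # comb(a, b) is 0 whenever b > a, which silently restricts the
        # sum to admissible order statistics
        inner = sum(comb(i - 1, r + t1 - 1 - k) * comb(n - i, t2 + k) * x
                    for i, x in enumerate(xs, start=1))
        total += (-1) ** k * comb(r - 1, k) * inner
    return total / (r * comb(n, r + t1 + t2))
```

For `r = 1` without trimming this reduces to the sample mean; with symmetric trimming `(1, 1)` it becomes a trimmed-mean-like statistic that downweights the extreme order statistics.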
Estimation of monthly solar radiation distribution for solar energy system analysis
International Nuclear Information System (INIS)
Coskun, C.; Oktay, Z.; Dincer, I.
2011-01-01
The concept of probability density frequency, which has been successfully used to analyze wind speed and outdoor temperature distributions, is modified and proposed here for estimating solar radiation distributions for the design and analysis of solar energy systems. In this study, the global solar radiation distribution is comprehensively analyzed for photovoltaic (PV) panel and thermal collector systems. In this regard, a case study is conducted with actual global solar irradiation data from the last 15 years recorded by the Turkish State Meteorological Service. It is found that the intensity of global solar irradiance greatly affects energy and exergy efficiencies and hence the performance of collectors. -- Research highlights: → The first study to apply the global solar radiation distribution to solar energy system analyses. → The first study to present the global solar radiation distribution as a function of solar irradiance intensity. → Time probability intensity frequency and probability power distributions do not have similar distribution patterns for each month. → There is no relation between the distribution of annual time lapse and solar energy with the intensity of solar irradiance.
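The underlying construction is an empirical frequency distribution of measured irradiance: bin the hourly global-irradiance record into intensity classes and normalize. A sketch with synthetic data standing in for the 15-year measured record (the gamma-shaped sample and the 100 W/m^2 class width are purely illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic daylight-hour global irradiance samples in W/m^2
# (stand-in for a multi-year measured record)
irradiance = rng.gamma(shape=2.0, scale=250.0, size=5000).clip(max=1100.0)

edges = np.arange(0.0, 1200.0, 100.0)        # 100 W/m^2 intensity classes
density, edges = np.histogram(irradiance, bins=edges, density=True)
class_fraction = density * np.diff(edges)    # fraction of hours per class
```

Weighting a collector's intensity-dependent efficiency curve by `class_fraction` then gives a distribution-aware performance estimate rather than one based on the mean irradiance alone.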
International Nuclear Information System (INIS)
El-Shanshoury, Gh.I.
2015-01-01
Assessing the adequacy of probability distributions for estimating extreme events of air temperature in the Dabaa region is one of the prerequisites for any design purpose at the Dabaa site, which can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied to estimate the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests involving Kolmogorov-Smirnov and Anderson-Darling are used to check the adequacy of fitting the method/distribution for the estimation of maximum/minimum temperature. The Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated as performance indicators to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicates that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable, accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators shows the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution with the Lieblein technique is the best for October.
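For the Gumbel (EV-I) maximum distribution, the Method of Moments step is closed-form: beta = s*sqrt(6)/pi and mu = mean - gamma*beta (gamma the Euler-Mascheroni constant), after which the T-year design value is x_T = mu - beta*ln(-ln(1 - 1/T)). A sketch on toy annual maxima (the sample values are illustrative, not the Dabaa data):

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def gumbel_mom_fit(sample):
    """Method-of-moments fit of the Gumbel (EV-I) distribution for
    block maxima. Returns (location mu, scale beta)."""
    beta = statistics.stdev(sample) * math.sqrt(6.0) / math.pi
    mu = statistics.fmean(sample) - EULER_GAMMA * beta
    return mu, beta

def gumbel_design_value(mu, beta, T):
    """Design value exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

mu, beta = gumbel_mom_fit([30.0, 32.0, 34.0, 36.0, 38.0])  # toy maxima
x100 = gumbel_design_value(mu, beta, 100.0)
```

The same two-step pattern (moment fit, then inverse-CDF design value) applies to the Frechet and Weibull cases, with their respective quantile functions.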
Research on Key Technologies of Network Centric System Distributed Target Track Fusion
Directory of Open Access Journals (Sweden)
Yi Mao
2017-01-01
Full Text Available To realize a common tactical picture in a network-centric system, this paper proposes a layered architecture for distributed information processing and a method for distributed track fusion on the basis of an analysis of the characteristics of network-centric systems. Based on the non-correlation of the three-dimensional measurements of surveillance and reconnaissance sensors in polar coordinates, it also puts forward an algorithm for evaluating track quality (TQ) using statistical decision theory. According to simulation results, the TQ value is associated with the measurement accuracy of the sensors and the motion state of the targets, which matches well with the convergence process of the tracking filters. Besides, the proposed algorithm has good reliability and timeliness in track quality evaluation.
A revival of the autoregressive distributed lag model in estimating energy demand relationships
Energy Technology Data Exchange (ETDEWEB)
Bentzen, J.; Engsted, T.
1999-07-01
The findings in the recent energy economics literature that energy economic variables are non-stationary have led to an implicit or explicit dismissal of the standard autoregressive distributed lag (ARDL) model in estimating energy demand relationships. However, Pesaran and Shin (1997) show that the ARDL model remains valid when the underlying variables are non-stationary, provided the variables are co-integrated. In this paper we use the ARDL approach to estimate a demand relationship for Danish residential energy consumption, and the ARDL estimates are compared to the estimates obtained using co-integration techniques and error-correction models (ECMs). It turns out that both quantitatively and qualitatively, the ARDL approach and the co-integration/ECM approach give very similar results. (au)
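As a sketch of the mechanics (not the authors' Danish dataset), an ARDL(1,1) relationship y_t = a + b*y_{t-1} + c*x_t + d*x_{t-1} + e_t can be estimated by ordinary least squares, and the implied long-run coefficient is (c + d)/(1 - b). Here on synthetic data with known coefficients:

```python
import numpy as np

# Synthetic data generated from a known ARDL(1,1) process:
#   y_t = 0.2 + 0.5*y_{t-1} + 0.8*x_t + 0.3*x_{t-1} + e_t
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = (0.2 + 0.5 * y[t - 1] + 0.8 * x[t] + 0.3 * x[t - 1]
            + 0.05 * rng.normal())

# OLS regression of y_t on [1, y_{t-1}, x_t, x_{t-1}]
X = np.column_stack([np.ones(n - 1), y[:-1], x[1:], x[:-1]])
a, b, c, d = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# long-run coefficient implied by the ARDL estimates
long_run = (c + d) / (1.0 - b)
```

With co-integrated levels data, this long-run coefficient is the quantity the ARDL and co-integration/ECM approaches both target, which is why the two sets of estimates can be compared directly.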
Anthropogenic CO2 in the oceans estimated using transit time distributions
International Nuclear Information System (INIS)
Waugh, D.W.; McNeil, B.I.
2006-01-01
The distribution of anthropogenic carbon (Cant) in the oceans is estimated using the transit time distribution (TTD) method applied to global measurements of chlorofluorocarbon-12 (CFC-12). Unlike most other inference methods, the TTD method does not assume a single ventilation time and avoids the large uncertainty incurred by attempts to correct for the large natural carbon background in dissolved inorganic carbon measurements. The highest concentrations and deepest penetration of anthropogenic carbon are found in the North Atlantic and Southern Oceans. The estimated total inventory in 1994 is 134 Pg-C. To evaluate uncertainties, the TTD method is applied to output from an ocean general circulation model (OGCM) and the results are compared to the directly simulated Cant. Outside of the Southern Ocean the predicted Cant closely matches the directly simulated distribution, but in the Southern Ocean the TTD concentrations are biased high due to the assumption of 'constant disequilibrium'. The net result is a TTD overestimate of the global inventory by about 20%. Accounting for this bias and other centered uncertainties, an inventory range of 94-121 Pg-C is obtained. This agrees with the inventory of Sabine et al., who applied the DeltaC* method to the same data. There are, however, significant differences in the spatial distributions: the TTD estimates are smaller than DeltaC* in the upper ocean and larger at depth, consistent with biases expected in DeltaC* given its assumption of a single parcel ventilation time.
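The TTD is commonly parameterized as an inverse Gaussian in a mean age Gamma and width Delta (the form used in the TTD literature); interior concentrations then follow by convolving this distribution with the tracer's surface-water history. A minimal sketch of the distribution itself, with the parameter values chosen for illustration only:

```python
import math

def inverse_gaussian_ttd(t, gamma, delta):
    """Inverse-Gaussian transit time distribution G(t) with mean age
    gamma and width delta; an interior tracer concentration is the
    convolution of G with the tracer's surface history."""
    return (math.sqrt(gamma ** 3 / (4.0 * math.pi * delta ** 2 * t ** 3))
            * math.exp(-gamma * (t - gamma) ** 2 / (4.0 * delta ** 2 * t)))

# crude midpoint-rule check of normalization and mean age
# (gamma = 20 yr, delta = 10 yr, integrated over 0-400 yr)
dt = 0.01
ts = [dt * (k + 0.5) for k in range(int(400 / dt))]
mass = sum(inverse_gaussian_ttd(t, 20.0, 10.0) for t in ts) * dt
mean_age = sum(t * inverse_gaussian_ttd(t, 20.0, 10.0) for t in ts) * dt
```

Fitting Gamma (and an assumed Gamma/Delta ratio) to a measured CFC-12 concentration, then convolving the same TTD with the anthropogenic-CO2 surface history, is the essence of the inference described in the abstract.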
Directory of Open Access Journals (Sweden)
Federico Scarpa
2015-01-01
Full Text Available The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP). The inverse technique demands knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensors at all) are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, propagates through the estimation process, and the accuracy of the reconstructed thermophysical property values can deteriorate. In this work the effect of errors in the initial temperature distribution on the estimated thermophysical properties is investigated, along with a practical method to quantify this effect. Furthermore, a technique for compensating for this kind of bias is proposed. The method consists in including the initial temperature distribution among the unknown functions to be estimated. In this way the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is greatly improved.
Release the BEESTS: Bayesian Estimation of Ex-Gaussian STop-Signal Reaction Time Distributions
Directory of Open Access Journals (Sweden)
Dora eMatzke
2013-12-01
Full Text Available The stop-signal paradigm is frequently used to study response inhibition. In this paradigm, participants perform a two-choice response time task where the primary task is occasionally interrupted by a stop-signal that prompts participants to withhold their response. The primary goal is to estimate the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Recently, Matzke, Dolan, Logan, Brown, and Wagenmakers (in press) have developed a Bayesian parametric approach that allows for the estimation of the entire distribution of SSRTs. The Bayesian parametric approach assumes that SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to estimate the parameters of the SSRT distribution. Here we present an efficient and user-friendly software implementation of the Bayesian parametric approach, BEESTS, that can be applied to individual as well as hierarchical stop-signal data. BEESTS comes with an easy-to-use graphical user interface and provides users with summary statistics of the posterior distribution of the parameters as well as various diagnostic tools to assess the quality of the parameter estimates. The software is open source and runs on Windows and OS X operating systems. In sum, BEESTS allows experimental and clinical psychologists to estimate entire distributions of SSRTs and hence facilitates the more rigorous analysis of stop-signal data.
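BEESTS estimates the ex-Gaussian parameters by MCMC; as a much simpler illustration of the distributional assumption itself, an ex-Gaussian variate is a Normal(mu, sigma) plus an independent Exponential with mean tau, and crude moment-matching point estimates follow from the first three sample moments (the third central moment of an ex-Gaussian is 2*tau^3). A sketch, not the BEESTS procedure:

```python
import random
import statistics

def ex_gaussian_sample(mu, sigma, tau, n, seed=7):
    """Draw n ex-Gaussian variates: Normal(mu, sigma) plus an
    independent Exponential with mean tau."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)
            for _ in range(n)]

def ex_gaussian_moment_fit(data):
    """Moment-matching point estimates (mu, sigma, tau):
    tau_hat = (m3/2)^(1/3), sigma_hat^2 = var - tau_hat^2,
    mu_hat = mean - tau_hat."""
    n = len(data)
    m = statistics.fmean(data)
    var = statistics.pvariance(data, mu=m)
    m3 = sum((v - m) ** 3 for v in data) / n
    tau = (m3 / 2.0) ** (1.0 / 3.0) if m3 > 0 else 0.0
    sigma = max(var - tau * tau, 0.0) ** 0.5
    return m - tau, sigma, tau

ssrt = ex_gaussian_sample(0.20, 0.03, 0.08, 50000)   # seconds
mu_hat, sigma_hat, tau_hat = ex_gaussian_moment_fit(ssrt)
```

The Bayesian approach improves on such point estimates by returning full posterior distributions and by handling the censoring inherent in stop-signal data.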
Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations
Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong
2017-08-01
Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observation probability differs depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses fracture simulations and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5 per square degree, then maximum accuracy occurs at a grid size of 1° × 1°.
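The Terzaghi correction itself is a simple geometric weighting: a fracture whose plane makes an acute angle delta with the scanline is intersected with probability proportional to sin(delta), so each observation is up-weighted by 1/sin(delta), with the weight capped inside the blind zone to keep it bounded. A sketch (the 20° blind-zone cutoff is a conventional choice for illustration, not a value prescribed by the paper):

```python
import math

def terzaghi_weight(delta_deg, blind_zone_deg=20.0):
    """Terzaghi correction weight for one scanline observation.

    delta_deg: acute angle (degrees) between the fracture plane and
    the scanline. The observation frequency is proportional to
    sin(delta), so the de-biasing weight is 1/sin(delta); inside the
    blind zone the weight is capped at 1/sin(blind_zone)."""
    delta = max(delta_deg, blind_zone_deg)
    return 1.0 / math.sin(math.radians(delta))
```

Accumulating these weights in orientation-grid cells, rather than raw counts, is what produces the corrected 3D frequency distribution whose grid-size sensitivity the paper analyzes.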
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
International Nuclear Information System (INIS)
Thakur, Amit; Singh, Baltej; Gupta, Anurag; Duggal, Vibhuti; Bhatt, Kislay; Krishnani, P.D.
2016-01-01
Highlights: • EDA has been applied to optimize the initial core of AHWR-LEU. • Suitable values of the weighting factor ‘α’ and the population size in EDA were estimated. • The effect of varying the initial distribution function on the optimized solution was studied. • For comparison, a genetic algorithm was also applied. - Abstract: Population-based evolutionary algorithms now form an integral part of fuel management in nuclear reactors and are frequently used for fuel loading pattern optimization (LPO) problems. In this paper we apply the estimation of distribution algorithm (EDA) to optimize the initial core loading pattern (LP) of AHWR-LEU. In EDA, new solutions are generated by sampling the probability distribution model estimated from the selected best candidate solutions. The weighting factor ‘α’ decides the fraction of the current best solutions used to update the probability distribution function after each generation. Wider use of EDA warrants a comprehensive study of parameters such as the population size, the weighting factor ‘α’ and the initial probability distribution function. In the present study, we have carried out an extensive analysis of these parameters in EDA. It is observed that choosing a very small value of ‘α’ may limit the search for optimized solutions to the near vicinity of the initial probability distribution function, so that better loading patterns far from the initial distribution function may not be considered with due weightage. It is also observed that increasing the population size improves the optimized loading pattern; however, the algorithm still fails if the initial distribution function is not close to the expected optimized solution. We have tried to find suitable values of ‘α’ and the population size for the AHWR-LEU initial core loading pattern optimization problem. For the sake of comparison and completeness, we have also addressed the
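The update rule described above (mix a fraction α of the elite-sample statistics into the sampling model each generation) is the core of simple binary EDAs such as PBIL. A minimal sketch on the OneMax toy problem; the reactor-physics objective and encoding are of course far more involved, and all names here are illustrative:

```python
import random

def binary_eda(fitness, n_bits, pop_size=50, n_elite=10, alpha=0.3,
               generations=100, seed=0):
    """Minimal binary EDA (PBIL-style): sample a population from an
    independent-bit probability vector p, then blend the elite bit
    frequencies into p with weighting factor alpha."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    best = None
    for _ in range(generations):
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[:n_elite]
        for i in range(n_bits):
            freq = sum(ind[i] for ind in elite) / n_elite
            p[i] = (1 - alpha) * p[i] + alpha * freq   # the alpha update
        if best is None or fitness(pop[0]) > fitness(best):
            best = list(pop[0])
    return best, p

best, p = binary_eda(sum, 20)   # OneMax: fitness = number of ones
```

A small `alpha` keeps `p` close to its initial state, which is exactly the stagnation effect the abstract reports for loading patterns far from the initial distribution function.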
A "total parameter estimation" method in the verification of distributed hydrological models
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, so model parameters are commonly estimated from discharge measurements at the catchment outlets. With the advancement of hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in hydrology. However, the assessment of distributed hydrological models and the determination of their parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, it can give a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so estimating parameters from runoff alone is inherently problematic, and the accuracy of the result is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is concentrated in the rainy season from June to August each year. During the other months, many of the perennial rivers within the basin dry up. Thus, runoff-only calibration does not make full use of a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models against the various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe river basin in
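One simple way to realize such multi-variable verification is to score the model against every observed water-cycle variable at once, e.g. as a weighted sum of normalized errors. A sketch of that idea only; the paper's actual objective formulation is not given in the abstract, and the function and variable names here are hypothetical:

```python
import math

def total_objective(series, weights):
    """Weighted sum of normalized RMSEs over several simulated vs.
    observed water-cycle variables (runoff, ET, groundwater, soil
    water, ...). Lower is better; 0 means a perfect match."""
    total = 0.0
    for name, (sim, obs) in series.items():
        rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs))
                         / len(obs))
        mean_obs = sum(obs) / len(obs)
        total += weights.get(name, 1.0) * rmse / abs(mean_obs)
    return total

perfect = total_objective(
    {"runoff": ([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]),
     "et": ([2.0, 2.0], [2.0, 2.0])},
    {"runoff": 0.5, "et": 0.5})
```

Normalizing each RMSE by the observed mean keeps variables of very different magnitudes (e.g. discharge vs. soil moisture) commensurable before weighting.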
Distortion-Rate Bounds for Distributed Estimation Using Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Nihar Jindal
2008-03-01
Full Text Available We deal with centralized and distributed rate-constrained estimation of random signal vectors performed using a network of wireless sensors (encoders) communicating with a fusion center (decoder). For this context, we determine lower and upper bounds on the corresponding distortion-rate (D-R) function. The non-achievable lower bound is obtained by considering centralized estimation with a single sensor which has all observation data available, and by determining the associated D-R function in closed form. Interestingly, this D-R function can be achieved using an estimate-first, compress-afterwards (EC) approach, where the sensor (i) forms the minimum mean-square error (MMSE) estimate of the signal of interest; and (ii) optimally (in the MSE sense) compresses and transmits it to the FC, which reconstructs it. We further derive a novel alternating scheme to numerically determine an achievable upper bound on the D-R function for general distributed estimation using multiple sensors. The proposed algorithm tackles an analytically intractable minimization problem while accounting for sensor data correlations. The obtained upper bound is tighter than the one determined by having each sensor perform MSE-optimal encoding independently of the others. Numerical examples indicate that the algorithm performs well and yields D-R upper bounds which are relatively tight with respect to analytical alternatives obtained without taking into account the cross-correlations among sensor data.
Directory of Open Access Journals (Sweden)
Kaifeng Yang
2014-01-01
Full Text Available A novel hybrid multiobjective algorithm is presented in this paper, which combines a new multiobjective estimation of distribution algorithm, an efficient local searcher, and ε-dominance. In addition, two multiobjective problems with variable linkages, strictly based on manifold distributions, are proposed. The Pareto set of a continuous multiobjective optimization problem is, in the decision space, a piecewise low-dimensional continuous manifold. Regularity-based models exploiting this manifold feature build the probability distribution model only from global statistical information of the population, so the information carried by promising individuals is not well exploited, which is detrimental to the search and optimization process. Hence, an incremental tournament local searcher is designed to exploit local information efficiently and accelerate convergence to the true Pareto-optimal front. Moreover, since ε-dominance is a strategy that helps a multiobjective algorithm obtain well-distributed solutions and has low computational complexity, ε-dominance and the incremental tournament local searcher are combined here. The resulting memetic multiobjective estimation of distribution algorithm, MMEDA, is proposed accordingly. The algorithm is validated by experiments on twenty-two test problems, with and without variable linkages, of diverse complexities. Compared with three state-of-the-art multiobjective optimization algorithms, our algorithm achieves comparable results in terms of convergence and diversity metrics.
Inverse estimation of the particle size distribution using the Fruit Fly Optimization Algorithm
International Nuclear Information System (INIS)
He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming
2015-01-01
The Fruit Fly Optimization Algorithm (FOA) is applied to retrieve the particle size distribution (PSD) for the first time. The direct problems are solved by the modified Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. First, three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R), normal (N-N) and logarithmic normal (L-N) distributions, and the bimodal Rosin-Rammler distribution function are estimated under the dependent model. All the results show that the FOA can be used as an effective technique to estimate PSDs under the dependent model. Then, an optimal wavelength selection technique is proposed to improve the retrieval results for the bimodal PSD. Finally, combined with two general functions, i.e. the Johnson's S{sub B} (J-S{sub B}) function and the modified beta (M-β) function, the FOA is employed to recover actual measured aerosol PSDs over Beijing and Hangzhou obtained from the Aerosol Robotic Network (AERONET). All the numerical simulations and experimental results demonstrate that the FOA can be used to retrieve actual measured PSDs, and more reliable and accurate results can be obtained if the J-S{sub B} function is employed.
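In its basic form, FOA perturbs a swarm location, converts each fly's distance to the origin into a "smell concentration" s = 1/distance, evaluates the objective at s, and relocates the swarm to the best-smelling fly. A minimal sketch on a toy quadratic misfit; the actual PSD retrieval objective (ADA forward model vs. measured spectra) is far richer, and this generic scheme is an illustration, not the paper's exact variant:

```python
import math
import random

def foa_minimize(f, n_flies=20, iters=300, seed=3):
    """Basic Fruit Fly Optimization Algorithm sketch for a scalar
    objective f(s), s > 0. Smell phase: random perturbations around
    the swarm location; vision phase: the swarm flies to the best."""
    rng = random.Random(seed)
    x_axis, y_axis = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)
    best_val, best_s = float("inf"), None
    for _ in range(iters):
        flies = []
        for _ in range(n_flies):
            xi = x_axis + rng.uniform(-1.0, 1.0)
            yi = y_axis + rng.uniform(-1.0, 1.0)
            d = math.hypot(xi, yi)
            if d > 0.0:
                s = 1.0 / d             # smell concentration judgment
                flies.append((f(s), xi, yi, s))
        val, xi, yi, s = min(flies)
        if val < best_val:
            best_val, best_s = val, s
            x_axis, y_axis = xi, yi     # vision phase
    return best_s, best_val

# toy retrieval: recover the s minimizing a quadratic misfit
s_best, misfit = foa_minimize(lambda s: (s - 0.5) ** 2)
```

For PSD retrieval, each candidate s (or a vector of them) parameterizes a trial distribution whose predicted extinction spectrum is compared against measurements.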
Directory of Open Access Journals (Sweden)
Sanjay Kumar Singh
2011-06-01
Full Text Available In this paper, we propose Bayes estimators of the parameters of the exponentiated exponential distribution and its reliability function under the general entropy loss function for Type-II censored samples. The proposed estimators are compared with the corresponding Bayes estimators obtained under the squared error loss function and with maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).
Directory of Open Access Journals (Sweden)
Gustavo Sanchez
2012-01-01
Full Text Available This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos, together with an efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm that increases ME quality when compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high-definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS reached a complexity reduction of more than 45 times compared to FS. The quality gains over DS come with an expected increase in complexity: the DMPDS uses 6.4 times more calculations than DS. The DMPDS architecture was designed for high performance and low cost, targeting the processing of Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles needed to process each block.
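For reference, the Full Search baseline that DS and DMPDS approximate is simply an exhaustive SAD minimization over the search window; fast algorithms visit only a small, adaptively chosen subset of these candidates. A minimal sketch on synthetic frames (software illustration only; the paper's contribution is a VHDL hardware design):

```python
import random

def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block of `cur` at
    (bx, by) and the reference block displaced by (dx, dy)."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def full_search(cur, ref, bx, by, n, r):
    """Exhaustive block matching over a +/- r search window: the
    optimal (and most expensive) baseline for motion estimation."""
    return min(((sad(cur, ref, bx, by, dx, dy, n), (dx, dy))
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)),
               key=lambda t: t[0])

rng = random.Random(0)
W = H = 32
ref = [[rng.randrange(256) for _ in range(W)] for _ in range(H)]
# current frame = reference shifted so the block at (8, 8) moved by (3, 2)
cur = [[ref[min(y + 2, H - 1)][min(x + 3, W - 1)] for x in range(W)]
       for y in range(H)]
best_sad, mv = full_search(cur, ref, 8, 8, 8, 4)
```

Diamond-search-style algorithms replace the (2r+1)^2 candidate sweep with a handful of pattern-guided probes, which is where the 45x complexity reduction over FS comes from.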
OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.
Xie, Xianchao; Kou, S C; Brown, Lawrence
2016-03-01
This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.
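As a point of reference for the shrinkage ideas above, the classical normal-means case (positive-part James–Stein, a member of this quadratic-variance family) can be simulated in a few lines; the true means, dimension, and replication count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
p, reps = 20, 2000
theta = rng.normal(0.0, 0.5, p)          # true means, fixed across replications

se_mle = se_js = 0.0
for _ in range(reps):
    x = theta + rng.normal(0.0, 1.0, p)  # X_i ~ N(theta_i, 1)
    shrink = max(0.0, 1.0 - (p - 2) / (x @ x))   # positive-part James-Stein factor
    se_mle += ((x - theta) ** 2).sum()
    se_js += ((shrink * x - theta) ** 2).sum()

print(se_js / reps, se_mle / reps)       # JS risk is well below the MLE risk (= p)
```

The simultaneous-estimation gain is largest when the true means cluster near the shrinkage target, as here; the paper's semi-parametric estimators extend this effect beyond the Gaussian case.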
Re-estimation of Motion and Reconstruction for Distributed Video Coding
DEFF Research Database (Denmark)
Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren
2014-01-01
Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low-complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion...
Doughty, Austin; Hasanjee, Aamr; Pettitt, Alex; Silk, Kegan; Liu, Hong; Chen, Wei R.; Zhou, Feifan
2016-03-01
Laser immunotherapy is a novel cancer treatment modality that has seen much success in treating many different types of cancer, both in animal studies and in clinical trials. The treatment consists of the synergistic interaction between photothermal laser irradiation and the local injection of an immunoadjuvant. As a result of the therapy, the host immune system launches a systemic antitumor response. The photothermal effect induced by the laser irradiation has multiple effects at different temperature elevations, all of which are required for an optimal response. Therefore, determining the temperature distribution in the target tumor during laser irradiation is crucial for facilitating the treatment of cancers with laser immunotherapy. To investigate the temperature distribution in the target tumor, female Wistar Furth rats were injected with metastatic mammary tumor cells and, upon sufficient tumor growth, underwent laser irradiation while being monitored using thermocouples connected to locally inserted needle probes and infrared thermography. From the study, we determined that the maximum central tumor temperature was higher for smaller tumors. Additionally, we determined that the temperature near the edge of the tumor as measured with a thermocouple correlated strongly with the maximum temperature in the infrared camera measurement.
Energy Technology Data Exchange (ETDEWEB)
Westover, B. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California 92093 (United States); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Chen, C. D.; Patel, P. K.; McLean, H. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Beg, F. N., E-mail: fbeg@ucsd.edu [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California 92093 (United States)
2014-03-15
Experiments on the Titan laser (∼150 J, 0.7 ps, 2 × 10²⁰ W cm⁻²) at the Lawrence Livermore National Laboratory were carried out to study the properties of fast electrons produced by a high-intensity, short-pulse laser interacting with matter under conditions relevant to Fast Ignition. Bremsstrahlung x-rays produced by these fast electrons were measured by a set of compact filter-stack-based x-ray detectors placed at three angles with respect to the target. The measured bremsstrahlung signal allows a characterization of the fast electron beam spectrum, the conversion efficiency of laser energy into fast electron kinetic energy, and the angular distribution. The Monte Carlo code Integrated Tiger Series was used to model the bremsstrahlung signal and infer a laser-to-fast-electron conversion efficiency of 30%, an electron slope temperature of about 2.2 MeV, and a mean divergence angle of 39°. Simulations were also performed with the hybrid transport code ZUMA, which includes fields in the target. In this case, a conversion efficiency of laser energy to fast electron energy of 34% and a slope temperature between 1.5 MeV and 4 MeV, depending on the angle between the target normal direction and the measuring spectrometer, are found. The observed temperature of the bremsstrahlung spectrum, and therefore the inferred electron spectrum, are found to be angle dependent.
Directory of Open Access Journals (Sweden)
Hamza Benzerrouk
2018-03-01
Full Text Available Multi-Unmanned Aerial Vehicle (UAV) Doppler-based target tracking has not been widely investigated, specifically when using modern nonlinear information filters. A high-degree Gauss–Hermite information filter, as well as a seventh-degree cubature information filter (CIF), is developed to improve on the fifth-degree and third-degree CIFs proposed in the most recent related literature. These algorithms are applied to maneuvering target tracking based on radar Doppler range/range-rate signals. To this end, different measurement models such as range-only, range-rate, and bearing-only tracking are used in the simulations. In this paper, the mobile sensor target tracking problem is addressed and solved by a higher-degree class of quadrature information filters (HQIFs). A centralized fusion architecture based on distributed information filtering is proposed and yields excellent results. Three highly dynamic UAVs are simulated, with synchronized Doppler measurements broadcast in parallel channels to the control center for global information fusion. Interesting results are obtained, demonstrating the superiority of certain classes of higher-degree quadrature information filters.
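The cubature rules underlying these filters are easy to state: the third-degree spherical-radial rule (the building block that fifth- and seventh-degree filters extend with more points) uses 2n symmetric points with equal weights. A sketch of that point set, exact for the Gaussian mean and covariance:

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n points at
    mean +/- sqrt(n) * L e_i, with L the Cholesky factor of cov."""
    n = len(mean)
    L = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # n x 2n directions
    return mean[:, None] + L @ xi, np.full(2 * n, 1.0 / (2 * n))

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = cubature_points(mean, cov)

# The rule integrates polynomials up to degree 3 exactly, so it reproduces
# the Gaussian's mean and covariance.
d = pts - mean[:, None]
print(pts @ w)        # = mean
print((d * w) @ d.T)  # = cov
```

Higher-degree rules trade more points per state dimension for exactness on higher-order moments, which is the accuracy/complexity trade-off the abstract's fifth- and seventh-degree filters explore.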
International Nuclear Information System (INIS)
Mairs, R.J.; Gaze, M.N.; Murray, T.; Reid, R.; McSharry, C.; Babich, J.W.
1991-01-01
This study aims to select the radiopharmaceutical vehicle for targeted radiotherapy of neuroblastoma that is most likely to readily penetrate the centre of micrometastases in vivo. The human neuroblastoma cell line NB1-G, grown as multicellular spheroids, provided an in vitro model for micrometastases. The radiopharmaceuticals studied were the catecholamine analogue metaiodobenzylguanidine (mIBG), a specific neuroectodermal monoclonal antibody (UJ13A) and β nerve growth factor (βNGF). Following incubation of each drug with neuroblastoma spheroids, autoradiographs of frozen sections were prepared to demonstrate their relative distributions. mIBG and βNGF were found to penetrate the centre of spheroids readily, although the concentration of mIBG greatly exceeded that of βNGF. In contrast, UJ13A was only bound peripherally. We conclude that mIBG is the best available vehicle for targeted radiotherapy of neuroblastoma cells with active uptake mechanisms for catecholamines. It is suggested that radionuclides with a shorter range of emissions than ¹³¹I may be conjugated to benzylguanidine to constitute more effective targeting agents with potentially less toxicity to adjacent normal tissues. (author)
Energy Technology Data Exchange (ETDEWEB)
Song, Tae Kwang; Bae, Hong Yeol; Chun, Yun Bae; Oh, Chang Young; Kim, Yun Jae [Korea University, Seoul (Korea, Republic of); Lee, Kyoung Soo; Park, Chi Yong [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)
2008-08-15
In nuclear power plants, a ferritic low-alloy steel nozzle is connected to the austenitic stainless steel piping system through an Alloy 82/182 butt weld. Accurate estimation of the residual stress in the weldment is important because Alloy 82/182 is susceptible to stress corrosion cracking. Many studies have predicted the residual stress distribution for Alloy 82/182 welds between nozzle and pipe. However, the nozzle and piping system are usually connected through a safe end of short length. In this paper, the residual stress distribution for a pressurizer nozzle of the Kori nuclear power plant was predicted using FE analysis that considered the safe end. As a result, the existing residual stress profile was redistributed, and the residual stress on the inner surface in particular was decreased. This means that the safe end should be considered to reduce conservatism when estimating residual stresses in the piping system.
Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian
2014-07-01
This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized to improve performance in distributed parameter systems. Each moving actuator/sensor agent velocity is obtained by the Lyapunov stability theorem. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms, which penalize disagreement among the estimates, are considered. Both filters result in well-posedness of the collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance and show that the consensus filters converge faster to the plant state when consensus terms are included. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
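The consensus terms that penalize disagreement can be illustrated with the simplest consensus iteration, where each agent moves toward its neighbours' values; this is a stand-in for the paper's consensus filters, with an arbitrary graph and step size:

```python
import numpy as np

# Each agent nudges its estimate toward its neighbours', so all estimates
# converge to the network average (illustrative path-graph topology).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)     # adjacency matrix
x = np.array([4.0, 0.0, 2.0, 10.0])     # initial local estimates
eps = 0.2                               # step size < 1 / max degree

for _ in range(200):
    x = x + eps * (A @ x - A.sum(1) * x)   # x_i += eps * sum_j a_ij (x_j - x_i)

print(x)   # all entries -> 4.0, the average of the initial estimates
```

In the filters above, an analogous disagreement penalty is added to each agent's state-estimate update, which is why including it speeds convergence of all local estimates to the plant state.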
Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino
2012-03-15
We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and to estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how further analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as demonstrating the importance of these individuals' communal knowledge of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.
International Nuclear Information System (INIS)
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-01-01
The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of
Reconsidering the smart metering data collection frequency for distribution state estimation
Chen, Qipeng; Kaleshi, Dritan; Armour, Simon; Fan, Zhong
2015-01-01
The current UK Smart Metering Technical Specification requires smart meter readings to be collected once a day, primarily to support accurate billing without violating users' privacy. In this paper we consider the use of Smart Metering data for Distribution State Estimation (DSE), and compare the effectiveness of daily data collection strategy with a more frequent, half-hourly SM data collection strategy. We first assess the suitability of using the data for load forecasting at Low Voltage (L...
International Nuclear Information System (INIS)
Adam, J.; Barabanov, M.Yu.; Bradnova, V.
2002-01-01
The distribution of neutrons emitted during the irradiation of a lead target (⌀ = 8 cm, l = 20 cm) with 0.65, 1.0 and 1.5 GeV protons, moderated by a surrounding 6 cm thick paraffin moderator, was studied with a radiochemical sensor along the beam axis on top of the moderator. Small ¹³⁹La sensors of approximately 1 g were used to measure essentially the thermal neutron fluence at different depths near the surface: i.e., on top of the moderator, in 10 mm deep holes and in 20 mm deep holes. The reaction ¹³⁹La(n,γ)¹⁴⁰La (τ₁/₂ = 40.27 h) was studied using standard procedures of gamma spectroscopy and data analysis. The neutron-induced activity of ¹⁴⁰La increases strongly with the depth of the hole inside the moderator, its activity distribution along the beam direction on top of the moderator has its maximum about 10 cm downstream of the entrance of the protons into the lead, and the induced activity increases approximately linearly with the proton energy. Some comparisons of the experimental results with model estimations based on the LAHET code are also presented. The experiments were carried out using the Nuclotron accelerator of the Laboratory of High Energies (JINR)
Moving-Target Position Estimation Using GPU-Based Particle Filter for IoT Sensing Applications
Directory of Open Access Journals (Sweden)
Seongseop Kim
2017-11-01
Full Text Available A particle filter (PF) has been introduced for effective position estimation of moving targets in non-Gaussian and nonlinear systems. The time difference of arrival (TDOA) method using an acoustic sensor array has commonly been used to estimate the location of a moving target, especially underwater. In this paper, we propose a GPU-based acceleration of target position estimation using a PF and propose an efficient system and software architecture. The proposed graphics processing unit (GPU)-based algorithm has advantages in applying PF signal processing to a target system consisting of large-scale Internet of Things (IoT)-driven sensors because of its scalable parallelization. For the TDOA measurement from the acoustic sensor array, we use the generalized cross-correlation phase transform (GCC-PHAT) method to obtain the correlation coefficient of the signal using the Fast Fourier Transform (FFT), and we accelerate the GCC-PHAT-based TDOA measurements using FFT with the GPU compute unified device architecture (CUDA). The proposed approach utilizes a parallelization method in the target position estimation algorithm using GPU-based PF processing. In addition, it can efficiently estimate sudden movement changes of the target using GPU-based parallel computing, which can also be used for multiple target tracking. It also provides scalability in extending the detection algorithm as the number of sensors increases. Therefore, the proposed architecture can be applied in IoT sensing applications with a large number of sensors. The target estimation algorithm was verified using MATLAB and implemented using GPU CUDA. We implemented the proposed signal processing acceleration system on a target GPU and analyzed its execution time. The execution time of the algorithm is reduced by 55% compared to standalone CPU operation on the target embedded board, an NVIDIA Jetson TX1. Also, to apply large
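The GCC-PHAT step itself is compact: whiten the cross-power spectrum so only phase (i.e., delay) information remains, then pick the correlation peak. A CPU sketch with NumPy (the CUDA version described above parallelizes the same FFTs); signal length, sampling rate, and delay are arbitrary:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """GCC-PHAT time-delay estimate of sig relative to ref, in seconds."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-15), n)   # phase-transform weighting
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

fs = 8000
rng = np.random.default_rng(0)
ref = rng.normal(size=4096)
delay = 25                                          # samples
sig = np.concatenate((np.zeros(delay), ref))[:4096]

print(gcc_phat(sig, ref, fs) * fs)                  # recovers the 25-sample delay
```

Because the PHAT weighting discards magnitude, the peak stays sharp for broadband sources even in reverberant conditions, which is why it is the standard TDOA front end for acoustic arrays.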
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm
Institute of Scientific and Technical Information of China (English)
Haidong Xu; Mingyan Jiang; Kun Xu
2015-01-01
The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and search precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses this information to help the artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global-best-guided ABC (GABC) algorithm in most of the experiments.
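The baseline ABC search step being hybridized here is the neighbour-based update v_ij = x_ij + φ(x_ij − x_kj) with greedy selection; the ACABC variant would replace this with sampling from an Archimedean-copula model of the population. A bare-bones sketch of the ABC step on a benchmark sphere function (population size and iteration count are arbitrary, and the scout/onlooker phases are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(v):
    return (v ** 2).sum()

dim, n_bees, iters = 5, 20, 300
food = rng.uniform(-5, 5, (n_bees, dim))           # candidate solutions
fit = np.apply_along_axis(sphere, 1, food)

for _ in range(iters):
    for i in range(n_bees):
        k = rng.integers(n_bees - 1)
        k = k + (k >= i)                           # random partner, k != i
        j = rng.integers(dim)                      # one random dimension
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
        if sphere(cand) < fit[i]:                  # greedy selection
            food[i], fit[i] = cand, sphere(cand)

print(fit.min())   # close to 0, the sphere-function optimum
```

Note how the step size shrinks as the population converges (it is proportional to the spread between bees), which is the slow-convergence behaviour the copula-based distribution model is meant to mitigate.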
International Nuclear Information System (INIS)
Mizoguchi, Asumi; Arimura, Hidetaka; Shioyama, Yoshiyuki
2013-01-01
We are developing a method to evaluate the four-dimensional radiation dose distribution in a patient's body based on the EPID (electronic portal imaging device) animated image, which is an image along the beam direction acquired during irradiation. First, we obtained the image of the dose emitted from the patient's body at therapy planning, using the therapy-planning CT image and a dose evaluation algorithm. Second, we estimated the emission dose image at irradiation using the EPID animated image obtained during irradiation. Third, we obtained an affine transformation matrix, including respiratory movement in the body, by performing linear registration of the emission dose image at therapy planning against that at irradiation. Fourth, we applied the affine transformation matrix to the therapy-planning CT image to estimate the CT image 'at irradiation'. Finally, we evaluated the four-dimensional dose distribution by calculating the dose distribution in the estimated CT image 'at irradiation' for each frame of the EPID animated image. This scheme may be useful for evaluating therapy results and for risk management. (author)
Directory of Open Access Journals (Sweden)
Peng Fangfang
2014-01-01
Full Text Available This paper studies the fusion estimation problem for a class of multisensor multirate systems with observation multiplicative noises. The dynamic system is sampled uniformly. The sampling period of each sensor is uniform and an integer multiple of the state update period. Moreover, different sensors have different sampling rates, and the observations of the sensors are subject to stochastic uncertainties in the form of multiplicative noises. First, local filters at the observation sampling points are obtained based on the observations of each sensor. Then, local estimators at the state update points are obtained by prediction from the local filters at the observation sampling points; they have reduced computational cost and good real-time performance. Next, the cross-covariance matrices between any two local estimators are derived at the state update points. Finally, using the matrix-weighted optimal fusion estimation algorithm in the linear minimum variance sense, the distributed optimal fusion estimator is obtained based on the local estimators and the cross-covariance matrices. An example shows the effectiveness of the proposed algorithms.
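The final fusion step can be illustrated in the scalar case: given two unbiased local estimates with known variances P1, P2 and cross-covariance P12, the linear-minimum-variance combination has a closed-form weight. The numbers below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unbiased local estimators of a scalar x, with correlated errors.
x_true, n = 3.0, 200_000
e = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 2.0]], n)
x1, x2 = x_true + e[:, 0], x_true + e[:, 1]

# Optimal weight minimizing Var(a*x1 + (1-a)*x2):
# a = (P2 - P12) / (P1 + P2 - 2*P12)
P1, P2, P12 = 1.0, 2.0, 0.3
a = (P2 - P12) / (P1 + P2 - 2 * P12)
fused = a * x1 + (1 - a) * x2

for est in (x1, x2, fused):
    print(np.mean((est - x_true) ** 2))
# The fused MSE lies below both local MSEs (theoretical value ~ 0.80 here).
```

The matrix-weighted algorithm in the abstract generalizes this weight to vector states, which is why the cross-covariance matrices between local estimators must be derived first.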
Estimation of Distributed Fermat-Point Location for Wireless Sensor Networking
Directory of Open Access Journals (Sweden)
Yanuarius Teofilus Larosa
2011-04-01
Full Text Available This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies a triangular location-estimation area formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total path length to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
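The Fermat point used by DFPLE can be computed numerically by the Weiszfeld iteration for the geometric median; for a triangle whose angles are all below 120°, the geometric median is exactly the Fermat point. This sketch is a generic numerical method, not the paper's geometric construction:

```python
import numpy as np

def fermat_point(pts, iters=200):
    """Weiszfeld iteration: the fixed point minimizes the total Euclidean
    distance to the given points."""
    x = pts.mean(axis=0)                        # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < 1e-12):                   # landed on a vertex: stop
            break
        w = 1.0 / d
        x = (w[:, None] * pts).sum(axis=0) / w.sum()
    return x

tri = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
fp = fermat_point(tri)
print(fp)   # total distance to the three vertices is minimized here
```

A quick optimality check: at the Fermat point of such a triangle, the three unit vectors toward the vertices sum to zero (the 120° property).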
Estimation of distributed Fermat-point location for wireless sensor networking.
Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien
2011-01-01
This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies a triangular location-estimation area formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total path length to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
Directory of Open Access Journals (Sweden)
Zhou Hao
2015-06-01
Full Text Available The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. As such, a novel MUSIC approach is proposed on the basis of an Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. Virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and establish a full-order correlation matrix. ASGSO optimizes the objective function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely without loss of effective radar aperture.
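The baseline MUSIC pseudospectrum whose grid search ASGSO replaces can be sketched for a uniform linear array. This is the textbook uncorrelated-source case with no multipath or spatial smoothing; array size, SNR, and DOAs are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def steering(theta_deg, m, d=0.5):
    """ULA steering vector; element spacing d in wavelengths."""
    k = np.arange(m)
    return np.exp(-2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

m, snaps, doas = 8, 200, [-20.0, 35.0]
A = np.column_stack([steering(t, m) for t in doas])
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
N = 0.1 * (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))
X = A @ S + N

R = X @ X.conj().T / snaps                  # sample covariance matrix
w, V = np.linalg.eigh(R)
En = V[:, :m - 2]                           # noise subspace (2 sources assumed known)

grid = np.arange(-90.0, 90.0, 0.5)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2 for t in grid])

is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
cand, vals = grid[1:-1][is_peak], p[1:-1][is_peak]
print(np.sort(cand[np.argsort(vals)[-2:]]))   # close to the true DOAs
```

The dense grid evaluation of p(θ) is exactly the cost that motivates swarm-based optimizers such as ASGSO, and coherent multipath breaks the rank assumption behind En, which is what the virtual spatial smoothing restores.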
Zhan, Hanyu; Voelz, David G.
2016-12-01
The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.
On the effect of correlated measurements on the performance of distributed estimation
Ahmed, Mohammed
2013-06-01
We address the distributed estimation of an unknown scalar parameter in Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations over a multiple access channel to a Fusion Center (FC) that reconstructs the source parameter. The received signal is corrupted by noise and channel fading, so the FC objective is to minimize the Mean-Square Error (MSE) of the estimate. In this paper, we assume sensor node observations to be correlated with the source signal and correlated with each other as well. The correlation coefficient between two observations decays exponentially with the separation distance. The effect of this distance-based correlation on the estimation quality is demonstrated and compared with the case of fully correlated observations. Moreover, a closed-form expression for the outage probability is derived and its dependency on the correlation coefficients is investigated. Numerical simulations are provided to verify our analytic results. © 2013 IEEE.
Estimation of dislocations density and distribution of dislocations during ECAP-Conform process
Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza
2018-01-01
The dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm) severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) is studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated. The total dislocation densities were then calculated, and the dislocation distributions are presented as contour maps. The estimated average dislocation density increases from about 2×10¹² m⁻² for the annealed sample to 4×10¹³ m⁻² at the middle of the groove (135° from the entrance), reaching 6.4×10¹³ m⁻² at the end of the groove just before the ECAP region. The calculated average dislocation density for the one-pass severely deformed Al sample reached 6.2×10¹⁴ m⁻². At the micrometer scale, the behavior of metals, especially their mechanical properties, largely depends on the dislocation density and distribution. Yield stresses at different conditions were therefore estimated based on the calculated dislocation densities and compared with experimental results, with good agreement. Although the grain size of the material did not change appreciably, the yield stress showed a marked increase due to the development of a cell structure. The considerable increase in dislocation density during this process justifies the formation of subgrains and cell structures, which explains the increase in yield stress.
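The usual link from dislocation density to yield stress is the Taylor hardening relation σ_y = σ₀ + M α G b √ρ. A sketch with typical textbook constants for aluminium (assumed values for illustration; the paper's fitted constants are not given in the abstract) shows the expected trend across the densities reported above:

```python
import math

# Taylor hardening: sigma_y = sigma_0 + M * alpha * G * b * sqrt(rho)
M, alpha = 3.06, 0.3          # Taylor factor and Taylor constant (typical FCC)
G, b = 26e9, 2.86e-10         # shear modulus [Pa], Burgers vector [m] for Al
sigma0 = 10e6                 # friction stress [Pa], assumed

for rho in (2e12, 4e13, 6.4e13, 6.2e14):   # densities from the abstract [1/m^2]
    sigma_y = sigma0 + M * alpha * G * b * math.sqrt(rho)
    print(f"rho = {rho:.1e} m^-2  ->  sigma_y ~ {sigma_y / 1e6:.0f} MPa")
```

The square-root dependence is why a ~300-fold density increase over the process yields a large but not proportionate strength increase, consistent with the strengthening attributed to cell-structure development above.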
Directory of Open Access Journals (Sweden)
Sonia Aïssa
2008-05-01
Full Text Available This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.
International Nuclear Information System (INIS)
Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup
2015-01-01
In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF
Methods for obtaining distributions of uranium occurrence from estimates of geologic features
International Nuclear Information System (INIS)
Ford, C.E.; McLaren, R.A.
1980-04-01
The problem addressed in this paper is the determination of a quantitative estimate of a resource from estimates of fundamental variables which describe the resource. Due to uncertainty about the estimates, these basic variables are stochastic. The evaluation of random equations involving these variables is the core of the analysis process. The basic variables are originally described in terms of a low and a high percentile (the 5th and 95th, for example) and a central value (the mode, mean or median). The variable thus described is then generally assumed to be represented by a three-parameter lognormal distribution. Expressions involving these variables are evaluated by computing the first four central moments of the random functions (which are usually products and sums of variables). Stochastic independence is discussed. From the final set of moments a Pearson distribution is obtained; the high values of skewness and kurtosis resulting from uranium data require obtaining Pearson curves beyond those described in published tables. A cubic spline solution to the Pearson differential equation accomplishes this task. A sample problem is used to illustrate the application of the process; sensitivity to the estimated values of the basic variables is discussed. Appendices contain details of the methods and descriptions of computer programs.
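The percentile-based description of a basic variable can be made concrete. Assuming the median is used as the central value, the three-parameter lognormal X = γ + exp(μ + σZ) has a closed-form fit to the 5th, 50th and 95th percentiles; the sketch below (with invented percentile values, not data from the paper) derives γ from the requirement that the three percentile equations be mutually consistent.

```python
import math

Z95 = 1.6448536269514722  # standard normal 95th percentile

def lognormal3_from_percentiles(p5, p50, p95):
    """Fit X = gamma + exp(mu + sigma*Z) to the 5th/50th/95th percentiles.

    Illustrative: requires positive skew (p5 + p95 > 2*p50); the paper
    also allows the mode or mean as the central value.
    """
    # The percentile equations imply (p95-g)*(p5-g) = (p50-g)^2, giving g directly.
    gamma = (p5 * p95 - p50 ** 2) / (p5 + p95 - 2.0 * p50)
    mu = math.log(p50 - gamma)
    sigma = math.log((p95 - gamma) / (p50 - gamma)) / Z95
    return gamma, mu, sigma

def percentile(gamma, mu, sigma, z):
    """Quantile of the fitted distribution at standard-normal score z."""
    return gamma + math.exp(mu + sigma * z)

# Hypothetical expert estimates for one geologic variable.
g, mu, s = lognormal3_from_percentiles(2.0, 5.0, 20.0)
```

The central moments of products and sums of such variables can then be propagated to select the matching Pearson curve.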
Estimation of dose distribution in occupationally exposed individuals to FDG-¹⁸F
Energy Technology Data Exchange (ETDEWEB)
Lacerda, Isabelle V. Batista de; Cabral, Manuela O. Monteiro; Vieira, Jose Wilson, E-mail: ilacerda.bolsista@cnen.gov.br, E-mail: manuela.omc@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Oliveira, Mercia Liane de; Andrade Lima, Fernando R. de, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2014-07-01
The use of unsealed radiation sources in nuclear medicine can lead to significant incorporation of radionuclides, especially for occupationally exposed individuals (OEIs) during the production and handling of radiopharmaceuticals. In this study, computer simulation was proposed as an alternative methodology for evaluating the absorbed dose distribution and the effective dose in OEIs. For this purpose, the Exposure Computational Model (ECM) named FSUP (Female Adult Mesh - supine) was used. This ECM is composed of the voxel phantom FASH (Female Adult MeSH) in the supine position, the MC code EGSnrc, and a general internal-source simulation algorithm. This algorithm was modified to meet the specific needs of the positron emission of FDG-¹⁸F. The results are presented as absorbed dose per accumulated activity. To obtain the absorbed dose distribution, it was necessary to use accumulated activity data from the in vivo bioassay. The absorbed dose distribution and the effective dose estimated in this study did not exceed the limits for occupational exposure. Therefore, the creation of a database with the distribution of accumulated activity is suggested in order to estimate the absorbed dose in radiosensitive organs and the effective dose for OEIs in similar environments. (author)
DEFF Research Database (Denmark)
Chen, Xiaoshuang; Lin, Jin; Wan, Can
2016-01-01
State estimation (SE) in distribution networks is not as accurate as that in transmission networks. Traditionally, distribution networks (DNs) lack direct measurements owing to limited investment and the difficulties of maintenance. Therefore, it is critical to improve the accuracy of SE in distribution networks by placing additional physical meters. For state-of-the-art SE models, it is difficult to clearly quantify measurements' influences on SE errors, so the problems of optimal meter placement for reducing SE errors are mostly solved by heuristic or suboptimal algorithms. Against this background, this paper proposes a circuit representation model to represent SE errors. Based on the matrix formulation of the circuit representation model, the problem of optimal meter placement can be transformed into a mixed-integer linear programming (MILP) problem via the disjunctive model...
Huff, David D; Lindley, Steven T; Wells, Brian K; Chai, Fei
2012-01-01
The green sturgeon (Acipenser medirostris), which is found in the eastern Pacific Ocean from Baja California to the Bering Sea, tends to be highly migratory, moving long distances among estuaries, spawning rivers, and distant coastal regions. Factors that determine the oceanic distribution of green sturgeon are unclear, but broad-scale physical conditions interacting with migration behavior may play an important role. We estimated the distribution of green sturgeon by modeling species-environment relationships using oceanographic and migration behavior covariates with maximum entropy modeling (MaxEnt) of species geographic distributions. The primary concentration of green sturgeon was estimated from approximately 41-51.5° N latitude in the coastal waters of Washington, Oregon, and Vancouver Island and in the vicinity of San Francisco and Monterey Bays from 36-37° N latitude. Unsuitably cold water temperatures in the far north and energetic efficiencies associated with prevailing water currents may provide the best explanation for the range-wide marine distribution of green sturgeon. Independent trawl records, fisheries observer records, and tagging studies corroborated our findings. However, our model also delineated patchily distributed habitat south of Monterey Bay, though there are few records of green sturgeon from this region. Green sturgeon are likely influenced by countervailing pressures governing their dispersal. They are behaviorally directed to revisit natal freshwater spawning rivers and persistent overwintering grounds in coastal marine habitats, yet they are likely physiologically bounded by abiotic and biotic environmental features. Impacts of human activities on green sturgeon or their habitat in coastal waters, such as bottom-disturbing trawl fisheries, may be minimized through marine spatial planning that makes use of high-quality species distribution information.
Pei, Huiqin; Chen, Shiming; Lai, Qiang
2016-12-01
This paper studies the multi-target consensus pursuit problem of multi-agent systems. To solve the problem, a distributed multi-flocking method is designed based on partial information exchange; it is employed to realise the pursuit of multiple targets and a uniform distribution of the number of pursuing agents over the dynamic targets. Combined with the proposed circle formation control strategy, agents can adaptively choose a target and form different circle formation groups, accomplishing a multi-target pursuit, and the speed states of the pursuing agents in each group converge to the same value. A Lyapunov approach is utilised to analyse the stability of the multi-agent system. In addition, a sufficient condition for achieving dynamic-target consensus pursuit is given and analysed. Finally, simulation results verify the effectiveness of the proposed approaches.
Kalwij, Jesse M; Robertson, Mark P; Ronk, Argo; Zobel, Martin; Pärtel, Meelis
2014-01-01
Much ecological research relies on existing multispecies distribution datasets. Such datasets, however, can vary considerably in quality, extent, resolution or taxonomic coverage. We provide a framework for a spatially-explicit evaluation of geographical representation within large-scale species distribution datasets, using the comparison of an occurrence atlas with a range atlas dataset as a working example. Specifically, we compared occurrence maps for 3773 taxa from the widely-used Atlas Florae Europaeae (AFE) with digitised range maps for 2049 taxa of the lesser-known Atlas of North European Vascular Plants. We calculated the level of agreement at a 50-km spatial resolution using average latitudinal and longitudinal species range, and area of occupancy. Agreement in species distribution was calculated and mapped using the Jaccard similarity index and a reduced major axis (RMA) regression analysis of species richness between the entire atlases (5221 taxa in total) and between co-occurring species (601 taxa). We found no difference in distribution ranges or in the area-of-occupancy frequency distribution, indicating that the atlases overlapped sufficiently for a valid comparison. The similarity index map showed high levels of agreement for central, western, and northern Europe. The RMA regression confirmed that geographical representation of the AFE was low in areas with a sparse data recording history (e.g., Russia, Belarus and Ukraine). For co-occurring species in south-eastern Europe, however, the Atlas of North European Vascular Plants showed markedly higher richness estimates. Geographical representation of atlas data can be much more heterogeneous than often assumed. The level of agreement between datasets can be used to evaluate geographical representation within datasets. Merging atlases into a single dataset is worthwhile in spite of methodological differences, and helps to fill gaps in our knowledge of species distribution ranges.
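As a minimal illustration of the agreement measure, the Jaccard index for one taxon can be computed from the sets of occupied grid cells in each atlas; the cell IDs below are invented, not atlas data.

```python
def jaccard(cells_a, cells_b):
    """Jaccard similarity between two sets of occupied 50-km grid cells.

    Toy version of the per-species agreement measure; the full atlas
    comparison also uses range statistics and RMA regression.
    """
    a, b = set(cells_a), set(cells_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical occupied-cell coordinates for one taxon in two atlases.
afe_cells = {(10, 4), (10, 5), (11, 5), (12, 5)}
anevp_cells = {(10, 5), (11, 5), (12, 5), (12, 6)}
similarity = jaccard(afe_cells, anevp_cells)  # 3 shared cells of 5 total
```

Mapping this index per grid cell across all shared taxa yields the kind of agreement map described above.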
Directory of Open Access Journals (Sweden)
Li Chenlei
2014-10-01
Full Text Available Estimating cross-range velocity is a challenging task for space-borne synthetic aperture radar (SAR), and is important for ground moving target indication (GMTI). Because the velocity of a target is very small compared with that of the satellite, it is difficult to estimate correctly using a conventional monostatic platform algorithm. To overcome this problem, a novel method employing multistatic SAR is presented in this letter. The proposed hybrid method, which is based on an extended space-time model (ESTIM) of the azimuth signal, has two steps: first, a set of finite impulse response (FIR) filter banks based on a fractional Fourier transform (FrFT) is used to separate multiple targets within a range gate; second, a cross-correlation spectrum weighted subspace fitting (CSWSF) algorithm is applied to each of the separated signals in order to estimate their respective parameters. As verified through computer simulation with the Cartwheel, Pendulum and Helix constellations, the proposed time-frequency-subspace method effectively improves the estimation precision of the cross-range velocities of multiple targets.
International Nuclear Information System (INIS)
Futami, Hikaru; Arai, Tsunenori; Yashiro, Hideki; Nakatsuka, Seishi; Kuribayashi, Sachio; Izumi, Youtaro; Tsukada, Norimasa; Kawamura, Masafumi
2006-01-01
To develop an evaluation method for the curative field when using X-ray CT imaging during percutaneous transthoracic cryoablation for lung cancer, we constructed a finite-element heat conduction simulator to estimate the temperature distribution in the lung during cryo-treatment. We calculated the temperature distribution using a simple two-dimensional finite element model, although the actual temperature distribution spreads in three dimensions. Temperature time-histories were measured within 10 minutes under experimental ex vivo and in vivo lung cryoablation conditions. We adjusted the specific heat and thermal conductivity in the heat conduction calculation and compared them with the measured temperature time-histories ex vivo. The adjusted lung specific heat was 3.7 J/(g·°C) for unfrozen lung and 1.8 J/(g·°C) for frozen lung. The adjusted lung thermal conductivity in our finite element model fitted proportionally to an exponential function of lung density. We considered the heat input by blood flow circulation and metabolic heat when we calculated the temperature time-histories during in vivo cryoablation of the lung. We assumed that the blood flow varies in inverse proportion to the change in blood viscosity, up to the maximum blood flow predicted from cardiac output. Metabolic heat was set as heat generation in the calculation. The measured temperature time-histories of in vivo cryoablation were then estimated with an accuracy of ±3 °C when calculated under this assumption. Therefore, we successfully constructed a two-dimensional heat conduction simulator that is capable of estimating the temperature distribution in the lung at the time of first freezing during cryoablation. (author)
Distributed Channel Estimation and Pilot Contamination Analysis for Massive MIMO-OFDM Systems
Zaib, Alam
2016-07-22
By virtue of large antenna arrays, massive MIMO systems have a potential to yield higher spectral and energy efficiency in comparison with the conventional MIMO systems. This paper addresses uplink channel estimation in massive MIMO-OFDM systems with frequency selective channels. We propose an efficient distributed minimum mean square error (MMSE) algorithm that can achieve near optimal channel estimates at low complexity by exploiting the strong spatial correlation among antenna array elements. The proposed method involves solving a reduced dimensional MMSE problem at each antenna followed by a repetitive sharing of information through collaboration among neighboring array elements. To further enhance the channel estimates and/or reduce the number of reserved pilot tones, we propose a data-aided estimation technique that relies on finding a set of most reliable data carriers. Furthermore, we use stochastic geometry to quantify the pilot contamination, and in turn use this information to analyze the effect of pilot contamination on channel MSE. The simulation results validate our analysis and show near optimal performance of the proposed estimation algorithms.
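The benefit of exploiting spatial correlation in MMSE channel estimation can be illustrated with a toy linear model. The sketch below is not the paper's distributed per-antenna algorithm; it compares a correlation-aware linear MMSE estimator against plain least squares, averaged over Monte Carlo draws, under an assumed exponential antenna-correlation model and invented dimensions.

```python
import numpy as np

def trial(rng, N=8, P=16, sigma2=0.1, rho=0.9):
    """One Monte Carlo draw: MMSE vs least-squares channel estimation
    with spatially correlated antennas (toy exponential-correlation
    model; N coefficients, P pilot observations)."""
    R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    Lc = np.linalg.cholesky(R)
    cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
    h = Lc @ cn(N)                         # correlated channel, h ~ CN(0, R)
    X = cn(P, N)                           # known pilot matrix
    y = X @ h + np.sqrt(sigma2) * cn(P)    # noisy pilot observations
    # Linear MMSE estimator exploiting the known correlation R.
    G = R @ X.conj().T @ np.linalg.inv(X @ R @ X.conj().T + sigma2 * np.eye(P))
    h_mmse = G @ y
    # Least-squares baseline that ignores the spatial correlation.
    h_ls = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.linalg.norm(h - h_mmse) ** 2, np.linalg.norm(h - h_ls) ** 2

rng = np.random.default_rng(7)
errs = np.array([trial(rng) for _ in range(200)])
mse_mmse, mse_ls = errs.mean(axis=0)
```

The stronger the correlation (rho closer to 1), the larger the MMSE gain, which is the effect the reduced-dimensional per-antenna scheme exploits.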
Lee, Yu; Yu, Chanki; Lee, Sang Wook
2018-01-10
We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.
Pilots' Visual Scan Patterns and Attention Distribution During the Pursuit of a Dynamic Target.
Yu, Chung-San; Wang, Eric Min-Yang; Li, Wen-Chin; Braithwaite, Graham; Greaves, Matthew
2016-01-01
The current research investigated pilots' visual scan patterns in order to assess attention distribution during air-to-air maneuvers. A total of 30 qualified mission-ready fighter pilots participated in this research. Eye movement data were collected by a portable head-mounted eye-tracking device combined with a jet fighter simulator. To complete the task, pilots had to search for, pursue, and lock on to a moving target while performing air-to-air tasks. There were significant differences in pilots' saccade duration (ms) across the three operating phases: searching (M = 241, SD = 332), pursuing (M = 311, SD = 392), and lock-on (M = 191, SD = 226). Also, there were significant differences in pilots' pupil sizes (pixel²), of which the lock-on phase was the largest (M = 27,237, SD = 6457), followed by pursuit (M = 26,232, SD = 6070), then searching (M = 25,858, SD = 6137). Furthermore, there were significant differences between expert and novice pilots in the percentage of fixation on the head-up display (HUD), time spent looking outside the cockpit, and the performance of situational awareness (SA). Experienced pilots had better SA performance and paid more attention to the HUD, but focused less outside the cockpit compared with novice pilots. Furthermore, pilots with better SA performance exhibited a smaller pupil size during the operational phase of locking on to a dynamic target. Understanding pilots' visual scan patterns and attention distribution is beneficial to the design of interface displays in the cockpit and to developing human factors training syllabi to improve the safety of flight operations.
International Nuclear Information System (INIS)
Liu, K.-S.; Chang, Y.-L.; Hayward, S.B.; Gadgil, A.J.; Nero, A.V.
1992-01-01
Data on residential radon concentrations in California, together with information on California residents' house moves and time-activity patterns, have been used to estimate the distribution of lifetime cumulative exposures to ²²²Rn. This distribution was constructed using Monte Carlo techniques to simulate the lifetime occupancy histories and associated radon exposures of 10,000 California residents. For standard male and female lifespans, the simulation sampled from transition probability matrices representing changes of residence within and between six regions of California, as well as into and out of the rest of the United States, and then sampled from the appropriate regional (or national) distribution of indoor concentrations. The resulting distribution of lifetime cumulative exposures has a significantly narrower relative width than the distribution of California indoor concentrations, with only a small fraction (less than 0.2%) of the population having lifetime exposures equivalent to living their entire lives in a single home with a radon concentration of 148 Bq·m⁻³ or more. (author)
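The Monte Carlo construction can be sketched in a few lines. The version below is heavily simplified and uses hypothetical parameters (exponential residence durations, a single lognormal concentration distribution) in place of the study's regional transition matrices and measured distributions; it only illustrates why averaging exposure over several homes narrows the relative width of the lifetime distribution.

```python
import numpy as np

def lifetime_exposure(rng, lifespan_years=75.0, mean_stay=8.0, gm=40.0, gsd=2.6):
    """Simulate one resident's cumulative radon exposure (Bq·m^-3·yr).

    Illustrative assumptions: exponential residence durations with mean
    `mean_stay` years, and a lognormal indoor concentration with
    hypothetical geometric mean `gm` and geometric SD `gsd`.
    """
    t, exposure = 0.0, 0.0
    while t < lifespan_years:
        stay = min(rng.exponential(mean_stay), lifespan_years - t)
        conc = rng.lognormal(np.log(gm), np.log(gsd))   # Bq/m^3 in this home
        exposure += conc * stay
        t += stay
    return exposure

rng = np.random.default_rng(42)
exposures = np.array([lifetime_exposure(rng) for _ in range(10000)])
# Relative width (coefficient of variation) of the lifetime distribution
# vs. that of the single-home concentration distribution.
rel_width = exposures.std() / exposures.mean()
single_home_rel_width = np.sqrt(np.exp(np.log(2.6) ** 2) - 1)
```

Because each simulated lifetime averages over several homes, `rel_width` comes out well below the single-home value, mirroring the narrowing reported above.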
Estimating interevent time distributions from finite observation periods in communication networks
Kivelä, Mikko; Porter, Mason A.
2015-11-01
A diverse variety of processes—including recurrent disease episodes, neuron firing, and communication patterns among humans—can be described using interevent time (IET) distributions. Many such processes are ongoing, although event sequences are only available during a finite observation window. Because the observation time window is more likely to begin or end during long IETs than during short ones, the analysis of such data is susceptible to a bias induced by the finite observation period. In this paper, we illustrate how this length bias arises and how it can be corrected without assuming any particular shape for the IET distribution. To do this, we model event sequences using stationary renewal processes, and we formulate simple heuristics for determining the severity of the bias. To illustrate our results, we focus on the example of empirical communication networks, which are temporal networks that are constructed from communication events. The IET distributions of such systems guide efforts to build models of human behavior, and the variance of IETs is very important for estimating the spreading rate of information in networks of temporal interactions. We analyze several well-known data sets from the literature, and we find that the resulting bias can lead to systematic underestimates of the variance in the IET distributions and that correcting for the bias can lead to qualitatively different results for the tails of the IET distributions.
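The origin of the length bias is the classic inspection paradox: an interval that happens to cover a sampled observation instant is, on average, longer than a typical interval. A small simulation with exponential IETs (an illustrative choice; for exponential IETs of mean 1 the covering interval has mean 2) makes this visible.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stationary renewal process with exponential IETs of mean 1
# (an assumption for illustration; empirical IETs are often heavier-tailed).
n = 200000
events = np.cumsum(rng.exponential(1.0, size=n))
iets = np.diff(events)

# Sample many observation instants well inside the sequence; the IET
# covering each instant is length-biased (inspection paradox).
t_points = rng.uniform(events[1000], events[-1000], size=5000)
idx = np.searchsorted(events, t_points)   # events[idx-1] < t <= events[idx]
covering_mean = iets[idx - 1].mean()      # biased toward long IETs, ~2
overall_mean = iets.mean()                # unbiased sample mean, ~1
```

A window boundary behaves like such an observation instant, which is why the truncated IETs at the edges of a finite window systematically distort variance estimates.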
Distribution of base rock depth estimated from Rayleigh wave measurement by forced vibration tests
International Nuclear Information System (INIS)
Hiroshi Hibino; Toshiro Maeda; Chiaki Yoshimura; Yasuo Uchiyama
2005-01-01
This paper shows an application of Rayleigh wave methods to a real site, performed to determine the spatial distribution of base rock depth from the ground surface. At a site in the Sagami Plain in Japan, the base rock depth from the surface is estimated from boring investigations to reach up to 10 m. An accurate picture of the base rock depth distribution was needed for pile design and construction. In order to measure Rayleigh wave phase velocity, forced vibration tests were conducted with a 500 N vertical shaker and linear arrays of three vertical sensors situated at several points in two zones around the edges of the site. Then, an inversion analysis for the soil profile was carried out by a genetic algorithm, matching the measured Rayleigh wave phase velocity with its computed counterpart. The distribution of the base rock depth obtained from the inversion was consistent with the roughly estimated inclination of the base rock obtained from the boring tests; that is, the base rock is shallow around the edge of the site and gradually deepens towards its center. By the inversion analysis, the depth of the base rock was determined as 5 m to 6 m at the edge of the site and 10 m at the center. The determined distribution of the base rock depth agreed well at most of the points where boring investigations were performed. As a result, it was confirmed that forced vibration tests with Rayleigh wave methods can be a practical technique for estimating surface soil profiles to depths of up to 10 m. (authors)
TRMM Satellite Algorithm Estimates to Represent the Spatial Distribution of Rainstorms
Directory of Open Access Journals (Sweden)
Patrick Marina
2017-01-01
Full Text Available On-site measurements from rain gauges provide important information for the design, construction, and operation of water resources engineering projects, groundwater potential, and water supply and irrigation systems. A dense gauging network is needed to accurately characterize the variation of rainfall over a region, which is impractical where networks are limited, as in Sarawak, Malaysia. Hence, satellite-based algorithm estimates are introduced as an innovative solution to these challenges. With dataset retrievals accessible from public domain websites, they have become a useful source of rainfall measurements covering a wider area at finer temporal resolution. This paper aims to investigate the rainfall estimates produced by the Tropical Rainfall Measuring Mission (TRMM) to determine whether they are suitable to represent the distribution of extreme rainfall in the Sungai Sarawak Basin. Based on the findings, more uniform correlations for the investigated storms can be observed for low to medium altitudes (>40 MASL). For the investigated events of Jan 05-11, 2009, the normalized root mean square error was NRMSE = 36.7 % with good correlation (CC = 0.9). These findings suggest that satellite algorithm estimates from TRMM are suitable to represent the spatial distribution of extreme rainfall.
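The two skill scores used above are easy to reproduce. The sketch below computes NRMSE (normalised here by the mean gauge observation, one common convention; the paper may normalise differently) and the Pearson correlation coefficient for invented gauge/TRMM rainfall pairs.

```python
import numpy as np

def nrmse_percent(observed, estimated):
    """Root-mean-square error as a percentage of the mean observation."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    rmse = np.sqrt(np.mean((estimated - observed) ** 2))
    return 100.0 * rmse / observed.mean()

def correlation(observed, estimated):
    """Pearson correlation coefficient (CC)."""
    return float(np.corrcoef(observed, estimated)[0, 1])

# Hypothetical daily gauge vs TRMM rainfall totals (mm) for one storm event.
gauge = [12.0, 30.5, 55.0, 80.2, 41.0, 18.3, 6.1]
trmm = [10.2, 26.0, 60.3, 70.9, 45.5, 20.1, 4.0]
nrmse = nrmse_percent(gauge, trmm)
cc = correlation(gauge, trmm)
```

Comparing these scores across stations at different altitudes reproduces the kind of evaluation described for the Sungai Sarawak Basin.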
Directory of Open Access Journals (Sweden)
Le Bihan Guillaume
2016-01-01
Full Text Available Flash flood monitoring systems developed up to now generally enable a real-time assessment of potential flash-flood magnitudes based on highly distributed hydrological models and weather radar records. The approach presented here aims to go one step further by offering a direct assessment of the potential impacts of flash floods on inhabited areas. This approach is based on an a priori analysis of the considered area in order (1) to evaluate, based on a semi-automatic hydraulic approach (the Cartino method), the potentially flooded areas for different discharge levels, and (2) to identify the associated buildings and/or population at risk based on geographic databases. This preliminary analysis makes it possible to build a simplified impact model (discharge-impact curve) for each river reach, which can be used to directly estimate the importance of potentially affected assets based on the outputs of a distributed rainfall-runoff model. This article presents a first case study conducted in the Gard region (south-eastern France). The first validation results are presented in terms of (1) the accuracy of the delineation of the flooded areas estimated with the Cartino method and a high-resolution DTM, and (2) the relevance and usefulness of the impact model obtained. The impacts estimated at the event scale will now be evaluated in the near future based on insurance claim data provided by CCR (Caisse Centrale de Réassurance).
International Nuclear Information System (INIS)
Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi
2016-01-01
Highlights: • The effectiveness of six numerical methods for determining wind power density is evaluated. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The most appropriate parameter estimation method was not identical among the examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine k and c. The four methods EMJ, EML, EPF and ML show very favorable efficiency, while the GP method shows weak ability for all stations. However, the most effective method is not the same among stations, owing to differences in the wind characteristics.
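As one concrete example of the compared estimators, the empirical method of Justus (EMJ) obtains k from the coefficient of variation of the wind speed and c from the mean speed, after which the mean wind power density follows from the Weibull moments. The air density and station statistics below are assumed values, not data from the study.

```python
import math

def weibull_emj(mean_speed, std_speed):
    """Empirical method of Justus: k from the coefficient of variation,
    c from the mean speed via the gamma function."""
    k = (std_speed / mean_speed) ** -1.086
    c = mean_speed / math.gamma(1.0 + 1.0 / k)
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) of a Weibull wind speed model,
    0.5 * rho * E[v^3] with E[v^3] = c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Hypothetical monthly wind statistics (m/s) for one station.
k, c = weibull_emj(mean_speed=6.0, std_speed=2.5)
wpd = wind_power_density(k, c)
```

The other moment-based methods (EML, EPF) differ mainly in how k is extracted from the same summary statistics, which is why their wind power density estimates tend to agree closely.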
International Nuclear Information System (INIS)
Shimoura, S.
1992-01-01
The relation between nuclear density distribution and interaction cross section is discussed in terms of the Glauber model. Based on the model, the density distributions of the neutron drip-line nuclei ¹¹Be and ¹¹Li are determined experimentally from the incident energy dependence of the interaction cross sections of ¹¹Be and ¹¹Li on light targets. The obtained distributions have long tails corresponding to neutron halos of loosely bound neutrons. (Author)
Comparing performance level estimation of safety functions in three distributed structures
International Nuclear Information System (INIS)
Hietikko, Marita; Malm, Timo; Saha, Heikki
2015-01-01
The capability of a machine control system to perform a safety function is expressed using performance levels (PL). This paper presents the results of a study in which PL estimation was carried out for a safety function implemented using three different distributed control system structures. Challenges relating to the process of estimating PLs for safety-related distributed machine control functions are highlighted. One of these concerns the use of different cabling schemes in the implementation of a safety function and their effect on the PL evaluation. The safety function used as a generic example in the PL calculations relates to a mobile work machine. It is a safety stop function in which different technologies (electrical, hydraulic and pneumatic) can be utilized. It was found that by replacing analogue cables with digital communication, the system structure becomes simpler with fewer failing components, which can improve the PL of the safety function. - Highlights: • Integration in distributed systems enables systems with fewer components. • It offers high reliability and diagnostic properties. • Analogue signals create uncertainty in signal reliability and make diagnostics difficult
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Directory of Open Access Journals (Sweden)
Qingyang Xu
2014-01-01
Full Text Available Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian model come from the statistical information of the best individuals, obtained by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to verify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian model come from the statistical information of the best individuals, obtained by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to verify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, where FEGEDA exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.
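The core loop of a Gaussian EDA with elitism, as described in the abstract above, is compact: fit a normal distribution to the selected best individuals, resample the population from it, and carry a few elites forward unchanged. The sketch below is a generic illustration on a sphere benchmark, not the authors' FEGEDA implementation; in particular, the fast learning rule is replaced here by plain per-dimension mean/standard-deviation estimation:

```python
import random
import statistics

def gaussian_eda(objective, dim, pop_size=60, sel_frac=0.3, elites=2,
                 gens=100, seed=0):
    """Generic Gaussian EDA: fit a per-dimension normal to the selected best
    individuals, resample, and carry a few elites forward unchanged."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    n_sel = max(2, int(sel_frac * pop_size))
    for _ in range(gens):
        pop.sort(key=objective)            # minimization
        best = pop[:n_sel]                 # truncation selection
        mu = [statistics.fmean(ind[d] for ind in best) for d in range(dim)]
        sd = [max(statistics.pstdev([ind[d] for ind in best]), 1e-9)
              for d in range(dim)]
        # Elitism: the top individuals survive; the rest are resampled
        pop = pop[:elites] + [[rng.gauss(mu[d], sd[d]) for d in range(dim)]
                              for _ in range(pop_size - elites)]
    return min(pop, key=objective)

sphere = lambda x: sum(v * v for v in x)   # benchmark; minimum 0 at the origin
best = gaussian_eda(sphere, dim=5)
```

On this unimodal benchmark the sampling variance shrinks as the selected set tightens, so the population contracts onto the optimum within a few dozen generations.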
Distributed Input and State Estimation Using Local Information in Heterogeneous Sensor Networks
Directory of Open Access Journals (Sweden)
Dzung Tran
2017-07-01
Full Text Available A new distributed input and state estimation architecture is introduced and analyzed for heterogeneous sensor networks. Specifically, nodes of a given sensor network are allowed to have heterogeneous information roles, in the sense that a subset of nodes can be active (that is, subject to observations of a process of interest) and the rest can be passive (that is, subject to no observation). Both fixed and varying active and passive roles of sensor nodes in the network are investigated. In addition, these nodes are allowed to have non-identical sensor modalities under the common underlying assumption that they have complementary properties distributed over the sensor network so as to achieve collective observability. The key feature of our framework is that it utilizes local information not only during the execution of the proposed distributed input and state estimation architecture but also in its design, in that global uniform ultimate boundedness of the error dynamics is guaranteed once each node satisfies given local stability conditions independent of the graph topology and neighboring information of these nodes. As a special case (e.g., when all nodes are active and a positive real condition is satisfied), asymptotic stability can be achieved with our algorithm. Several illustrative numerical examples are provided to demonstrate the efficacy of the proposed architecture.
W-phase estimation of first-order rupture distribution for megathrust earthquakes
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2014-05-01
Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainty can be crucial for meaningful estimation, it is often ignored. In this work we develop a finite-fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple-time-window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
Dugal, Cherie J; van Beest, Floris M; Vander Wal, Eric; Brook, Ryan K
2013-10-01
Endemic and emerging diseases are rarely uniform in their spatial distribution or prevalence among cohorts of wildlife. Spatial models that quantify risk-driven differences in resource selection and hunter mortality of animals at fine spatial scales can assist disease management by identifying high-risk areas and individuals. We used resource selection functions (RSFs) and selection ratios (SRs) to quantify sex- and age-specific resource selection patterns of collared (n = 67) and hunter-killed (n = 796) nonmigratory elk (Cervus canadensis manitobensis) during the hunting season between 2002 and 2012, in southwestern Manitoba, Canada. Distance to protected area was the most important covariate influencing resource selection and hunter-kill sites of elk (AICw = 1.00). Collared adult males (which are most likely to be infected with bovine tuberculosis (Mycobacterium bovis) and chronic wasting disease) rarely selected for sites outside of parks during the hunting season, in contrast to adult females and juvenile males. The RSFs showed selection by adult females and juvenile males to be negatively associated with landscape-level forest cover, high road density, and water cover, whereas hunter-kill sites of these cohorts were positively associated with landscape-level forest cover and increasing distance to streams and negatively associated with high road density. Local-level forest cover was positively associated with collared animal locations and hunter-kill sites; however, selection was stronger for collared juvenile males and hunter-killed adult females. In instances where disease infects a metapopulation and eradication is infeasible, a principal goal of management is to limit the spread of disease among infected animals. We map high-risk areas that are regularly used by potentially infectious hosts but currently underrepresented in the distribution of kill sites. We present a novel application of widely available data to target hunter distribution based on host resource
2015-03-26
Estimating single and multiple target locations using K-means clustering: K-means clustering is an algorithm that has been used in data mining applications such as machine learning, pattern recognition, and hyperspectral imagery analysis.
Louvaris, Evangelos E.; Karnezi, Eleni; Kostenidou, Evangelia; Kaltsonoudis, Christos; Pandis, Spyros N.
2017-10-01
A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60-75 % of the cooking OA (COA) at concentrations around 500 µg m-3 consisted of low-volatility organic compounds (LVOCs), 20-30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol-1 and the effective accommodation coefficient was 0.06-0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.
Directory of Open Access Journals (Sweden)
E. E. Louvaris
2017-10-01
Full Text Available A method is developed following the work of Grieshop et al. (2009) for the determination of the organic aerosol (OA) volatility distribution combining thermodenuder (TD) and isothermal dilution measurements. The approach was tested in experiments that were conducted in a smog chamber using organic aerosol (OA) produced during meat charbroiling. A TD was operated at temperatures ranging from 25 to 250 °C with a 14 s centerline residence time, coupled to a high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) and a scanning mobility particle sizer (SMPS). In parallel, a dilution chamber filled with clean air was used to dilute isothermally the aerosol of the larger chamber by approximately a factor of 10. The OA mass fraction remaining was measured as a function of temperature in the TD and as a function of time in the isothermal dilution chamber. These two sets of measurements were used together to estimate the volatility distribution of the OA and its effective vaporization enthalpy and accommodation coefficient. In the isothermal dilution experiments approximately 20 % of the OA evaporated within 15 min. Almost all the OA evaporated in the TD at approximately 200 °C. The resulting volatility distributions suggested that around 60–75 % of the cooking OA (COA) at concentrations around 500 µg m−3 consisted of low-volatility organic compounds (LVOCs), 20–30 % of semivolatile organic compounds (SVOCs), and around 10 % of intermediate-volatility organic compounds (IVOCs). The estimated effective vaporization enthalpy of COA was 100 ± 20 kJ mol−1 and the effective accommodation coefficient was 0.06–0.07. Addition of the dilution measurements to the TD data results in a lower uncertainty of the estimated vaporization enthalpy as well as the SVOC content of the OA.
DEFF Research Database (Denmark)
Mukul, Sharif A.; Biswas, Shekhar R.; Rashid, A. Z. M. Manzoor
2014-01-01
In tropical developing countries, reducing emissions from deforestation and forest degradation (REDD+) is becoming an important mechanism for conserving forests and protecting biodiversity. A key prerequisite for any successful REDD+ project, however, is obtaining baseline estimates of carbon … in forest ecosystems. Using available published data, we provide here a new and more reliable estimate of carbon in Bangladesh forest ecosystems, along with their geo-spatial distribution. Our study reveals great variability in carbon density in different forests and higher carbon stock in the mangrove … ecosystems, followed by in hill forests and in inland Sal (Shorea robusta) forests in the country. Due to its coverage, degraded nature, and diverse stakeholder engagement, the hill forests of Bangladesh can be used to obtain maximum REDD+ benefits. Further research on carbon and biodiversity in under…
Estimation of soil-soil solution distribution coefficient of radiostrontium using soil properties.
Ishikawa, Nao K; Uchida, Shigeo; Tagami, Keiko
2009-02-01
We propose a new approach for estimation of soil-soil solution distribution coefficient (K(d)) of radiostrontium using some selected soil properties. We used 142 Japanese agricultural soil samples (35 Andosol, 25 Cambisol, 77 Fluvisol, and 5 others) for which Sr-K(d) values had been determined by a batch sorption test and listed in our database. Spearman's rank correlation test was carried out to investigate correlations between Sr-K(d) values and soil properties. Electrical conductivity and water soluble Ca had good correlations with Sr-K(d) values for all soil groups. Then, we found a high correlation between the ratio of exchangeable Ca to Ca concentration in water soluble fraction and Sr-K(d) values with correlation coefficient R=0.72. This pointed us toward a relatively easy way to estimate Sr-K(d) values.
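The Spearman rank test used in the study above needs no statistics package; it is the Pearson correlation of the ranks, with ties sharing an average rank. This is a generic sketch, and the sample Kd values and soil-property ratios below are hypothetical illustrations, not data from the study:

```python
def ranks(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of ties
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical Kd values vs. exchangeable-Ca ratios (not the study's data):
rho = spearman([120.0, 340.0, 95.0, 410.0], [0.8, 2.1, 0.6, 2.9])
```

Because it works on ranks, the coefficient captures any monotone relationship between Kd and a soil property, not only a linear one.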
Optimal allocation of sensors for state estimation of distributed parameter systems
International Nuclear Information System (INIS)
Sunahara, Yoshifumi; Ohsumi, Akira; Mogami, Yoshio.
1978-01-01
The purpose of this paper is to present a method for finding the optimal allocation of sensors for state estimation of linear distributed parameter systems. This method is based on the criterion that the error covariance associated with the state estimate becomes minimal with respect to the allocation of the sensors. A theorem is established giving the sufficient condition for optimizing the allocation of sensors so as to minimize the error covariance approximated by a modal expansion. The remainder of this paper is devoted to illustrating important phases of the general theory of the optimal measurement allocation problem. To this end, several examples are demonstrated, including extensive discussions of the mutual relation between the optimal allocation and the dynamics of sensors. (author)
Khan, Fahd Ahmed
2012-04-01
New coherent receivers are derived for a pilot-symbol-aided distributed space-time block-coded system with imperfect channel state information, which do not perform channel estimation at the destination but instead use the received pilot signals directly for decoding. The derived receivers are based on new metrics that use the distribution of the channels and the noise to achieve improved symbol-error-rate (SER) performance. The SER performance of the derived receivers is further improved by utilizing the decision history in the receivers. The decision history is also incorporated in the existing Euclidean metric to improve its performance. Simulation results show that, for 16-quadrature-amplitude modulation in a Rayleigh fading channel, a performance gain of up to 2.5 dB can be achieved with the new receivers compared with the conventional mismatched coherent receiver. © 2012 IEEE.
Rajakaruna, Harshana; VandenByllaardt, Julie; Kydd, Jocelyn; Bailey, Sarah
2018-03-01
The International Maritime Organization (IMO) has set limits on allowable plankton concentrations in ballast water discharge to minimize aquatic invasions globally. Previous guidance on ballast water sampling and compliance decision thresholds was based on the assumption that probability distributions of plankton are Poisson when spatially homogeneous, or negative binomial when heterogeneous. We propose a hierarchical probability model, which incorporates distributions at the level of particles (i.e., discrete individuals plus colonies per unit volume) and also within particles (i.e., individuals per particle) to estimate the average plankton concentration in ballast water. We examined the performance of the models using data for plankton in the size class ≥ 10 μm and tested ballast water compliance using the above models.
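The hierarchical idea above explains why plankton counts are overdispersed relative to a Poisson: when individuals arrive clustered in particles, the variance of the total count exceeds its mean. A minimal sketch with hypothetical particle and colony rates (not the paper's fitted model or data):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative method for a Poisson random variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def hierarchical_count(rng, mean_particles=5.0, mean_extras=2.0):
    """Individuals per sample volume: a Poisson number of particles, each
    carrying one individual plus a Poisson number of extras (a colony)."""
    return sum(1 + poisson(mean_extras, rng)
               for _ in range(poisson(mean_particles, rng)))

rng = random.Random(42)
counts = [hierarchical_count(rng) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
dispersion = var / mean   # 1 for a pure Poisson; >1 here (overdispersion)
```

The dispersion index lands well above 1, which is the behavior a negative binomial (or the paper's hierarchical model) captures and a plain Poisson does not.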
HIROSE,Hideo
1998-01-01
Types of the distribution: Normal distribution (2-parameter); Uniform distribution (2-parameter); Exponential distribution (2-parameter); Weibull distribution (2-parameter); Gumbel distribution (2-parameter); Weibull/Frechet distribution (3-parameter); Generalized extreme-value distribution (3-parameter); Gamma distribution (3-parameter); Extended gamma distribution (3-parameter); Log-normal distribution (3-parameter); Extended log-normal distribution (3-parameter); Generalized ...
International Nuclear Information System (INIS)
Wang, X.; Horiguchi, I.; Takeda, T.; Yazawa, M.; Liu, X.; Yang, Y.; Wang, Q.
1999-01-01
The distribution and zoning of air temperature over Liaoning Province, China were examined using values of air temperature calculated from satellite data (GMS data) as well as from altitude data. The results are summarized as follows. At 02:00 LST the correlation coefficients for the air temperatures calculated from altitude, compared with the observed air temperatures, were the same as those of the air temperatures derived from GMS data. At 14:00 LST, however, the correlation coefficients for air temperatures calculated from altitude were less than those of the air temperatures derived from GMS data. This verifies that the distribution of air temperature in the daytime is affected by factors other than altitude. The distribution of air temperature in a cell of approximately 5' (latitude) x 7.5' (longitude) over Liaoning Province, China was estimated by using the regression equations between surface temperature derived from GMS and the observed air temperature. The distribution of air temperature was classified into 5 types; the types obtained at 14:00 LST are seasonal, but the types at 02:00 LST are not related to season. Also, the regional classification for the air temperature was examined using this distribution of air temperature. This regional classification for the air temperature was similar to the published zoning of the agricultural climate. It became clear that the characteristic distribution of air temperature in a cell unit can be obtained from satellite data, and it is possible to define the zoning of air temperature for a cell unit by the accumulated analyses of satellite data over an extended period.
Inverse Estimation of Heat Flux and Temperature Distribution in 3D Finite Domain
International Nuclear Information System (INIS)
Muhammad, Nauman Malik
2009-02-01
Inverse heat conduction problems occur in many theoretical and practical applications where it is difficult or practically impossible to measure the input heat flux and the temperature of the layer conducting the heat flux to the body. It thus becomes imperative to devise some means to handle such a problem and estimate the heat flux inversely. The Adaptive State Estimator is one such technique, which works by incorporating the semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The problem presented in this study deals with a three-dimensional system, a cube with one face conducting heat flux while all the other sides are insulated, and the temperatures are measured on the accessible faces of the cube. The measurements taken on these accessible faces are fed into the estimation algorithm, and the input heat flux and the temperature distribution at each point in the system are calculated. A variety of input heat flux scenarios have been examined to demonstrate the robustness of the estimation algorithm and hence ensure its usability in practical applications. These include a sinusoidal input flux; a combination of rectangular, linearly changing and sinusoidal input fluxes; and finally a step-changing input flux. The estimator's performance limitations have been examined in these input set-ups, and the error associated with each set-up is compared to assess the realistic applicability of the estimation algorithm in such scenarios. Different sensor arrangements, that is, different numbers of sensors and their locations, are also examined to show the importance of the number of measurements and their location (i.e., closer to or farther from the input area). Since it is both economically and physically tedious to install a larger number of measurement sensors, an optimized number and location are important to determine for making the study more
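The bank-of-filters idea behind the Adaptive State Estimator can be illustrated in one dimension: several Kalman filters, each assuming a different constant input, run in parallel, and their model probabilities are updated from the likelihoods of their innovations (multiple-model adaptive estimation). This is a hypothetical scalar sketch, not the study's 3-D heat-conduction implementation; all parameter values are illustrative:

```python
import math
import random

def mmae_input_estimate(measurements, candidates, q=0.01, r=0.25):
    """Bank of scalar Kalman filters, one per hypothesized constant input u;
    model weights are updated from the innovation likelihoods (MMAE)."""
    filters = [{"x": 0.0, "p": 1.0, "w": 1.0 / len(candidates), "u": u}
               for u in candidates]
    for z in measurements:
        for f in filters:
            f["x"] += f["u"]          # predict: x_k = x_{k-1} + u
            f["p"] += q
            s = f["p"] + r            # innovation variance
            gain = f["p"] / s
            innov = z - f["x"]
            f["x"] += gain * innov    # update state with the Kalman gain
            f["p"] *= 1.0 - gain
            # Gaussian likelihood of this filter's innovation
            f["w"] *= math.exp(-0.5 * innov ** 2 / s) / math.sqrt(2 * math.pi * s)
        total = sum(f["w"] for f in filters)
        for f in filters:
            f["w"] /= total           # renormalize model probabilities
    return max(filters, key=lambda f: f["w"])["u"]

# Synthetic ramp observed in noise; the true slope is among the candidates.
random.seed(7)
true_u, x, zs = 0.5, 0.0, []
for _ in range(60):
    x += true_u
    zs.append(x + random.gauss(0.0, 0.5))
u_hat = mmae_input_estimate(zs, candidates=[0.1, 0.3, 0.5, 0.7, 0.9])
```

Filters with the wrong input hypothesis accumulate large innovations, so their weights collapse and the correct input dominates, which is the mechanism the adaptive weighting exploits.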
Milk cow feed intake and milk production and distribution estimates for Phase 1
International Nuclear Information System (INIS)
Beck, D.M.; Darwin, R.F.; Erickson, A.R.; Eckert, R.L.
1992-04-01
This report provides initial information on milk production and distribution in the Hanford Environmental Dose Reconstruction (HEDR) Project Phase I study area. The Phase I study area consists of eight counties in central Washington and two counties in northern Oregon. The primary objective of the HEDR Project is to develop estimates of the radiation doses populations could have received from Hanford operations. The objective of Phase I of the project was to determine the feasibility of reconstructing data and models and developing preliminary estimates of the doses received by people living in the ten counties surrounding Hanford from 1944 to 1947. One of the most important contributors to radiation doses from Hanford during the period of interest was radioactive iodine. Consumption of milk from cows that ate vegetation contaminated with iodine is likely the dominant pathway of human exposure. To estimate the doses people could have received from this pathway, it is necessary to estimate the amount of milk that the people living in the Phase I area consumed, the source of the milk, and the type of feed that the milk cows ate. The objective of the milk model subtask is to identify the sources of milk supplied to residents of each community in the study area as well as the sources of feeds that were fed to the milk cows. In this report, we focus on Grade A cow's milk (fresh milk used for human consumption).
Estimating alarm thresholds and the number of components in mixture distributions
Energy Technology Data Exchange (ETDEWEB)
Burr, Tom, E-mail: tburr@lanl.gov [Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States); Hamada, Michael S. [Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)
2012-09-01
Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and in solution monitoring (SM) for nuclear safeguards. SM data is increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background for mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals defined as tank-to-tank transfer differences, and wait-mode residuals defined as changes during non-transfer modes. A previous paper investigated impacts on transfer-mode and wait-mode residuals of event marking errors which arise when the estimated start and/or stop times of tank events such as transfers are somewhat different from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
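Characterizing a mixture distribution, as discussed above, typically means estimating component weights, means, and spreads from data. A minimal expectation-maximization (EM) sketch for a 1-D two-component Gaussian mixture on synthetic data (not SM residuals; the means, weights, and sample sizes are illustrative):

```python
import math
import random

def em_two_gaussians(data, iters=150):
    """Minimal EM sketch for a 1-D two-component Gaussian mixture."""
    data = sorted(data)
    n = len(data)
    mu = [data[n // 4], data[3 * n // 4]]  # crude quartile initialization
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in data:
            p = [w[j] / sd[j] * math.exp(-0.5 * ((x - mu[j]) / sd[j]) ** 2)
                 for j in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: responsibility-weighted moment estimates
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            sd[j] = max(math.sqrt(sum(r[j] * (x - mu[j]) ** 2
                                      for r, x in zip(resp, data)) / nj), 1e-6)
    return w, mu, sd

random.seed(3)
sample = ([random.gauss(0.0, 1.0) for _ in range(2000)]
          + [random.gauss(6.0, 1.0) for _ in range(1000)])
w, mu, sd = em_two_gaussians(sample)
```

With well-separated components and a few thousand observations, EM recovers the weights and means closely; as the paper notes, the required sample size grows quickly as components overlap.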
Multi-objective optimization with estimation of distribution algorithm in a noisy environment.
Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah
2013-01-01
Many real-world optimization problems are subjected to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deals with noisy information. This paper studies the potential of EDAs, particularly an EDA based on restricted Boltzmann machines that handles multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state-of-the-art algorithms.
A three-dimensional dose-distribution estimation system using computerized image reconstruction
International Nuclear Information System (INIS)
Nishijima, Akihiko; Kidoya, Eiji; Komuro, Hiroyuki; Tanaka, Masato; Asada, Naoki.
1990-01-01
In radiotherapy planning, three-dimensional (3-D) estimation of dose distribution has been very troublesome and time-consuming. To solve this problem, a simple and fast 3-D dose distribution imaging method using a computer and a charge-coupled device (CCD) camera was developed. A series of X-ray films inserted in a phantom was exposed using a linear accelerator unit. The film density was digitized with a CCD camera and a minicomputer (VAX 11-750). These results were then compared with depth doses obtained by a JARP-type dosimeter, with the dose error being less than 2%. The 3-D dose distribution image could accurately depict the density changes created by aluminum and air put into the phantom. The contrast resolution of the CCD camera appeared to be superior to the conventional densitometer in the low-to-intermediate contrast range. In conclusion, our method is very fast and simple for obtaining 3-D dose distribution images and is very effective when compared with the conventional method. (author)
Estimating the spatial distribution of artificial groundwater recharge using multiple tracers.
Moeck, Christian; Radny, Dirk; Auckenthaler, Adrian; Berg, Michael; Hollender, Juliane; Schirmer, Mario
2017-10-01
Stable isotopes of water, organic micropollutants and hydrochemistry data are powerful tools for identifying different water types in areas where knowledge of the spatial distribution of different groundwater is critical for water resource management. An important question is how the assessments change if only one or a subset of these tracers is used. In this study, we estimate spatial artificial infiltration along an infiltration system with stage-discharge relationships and classify different water types based on the mentioned hydrochemistry data for a drinking water production area in Switzerland. Managed aquifer recharge via surface water that feeds into the aquifer creates a hydraulic barrier between contaminated groundwater and drinking water wells. We systematically compare the information from the aforementioned tracers and illustrate differences in distribution and mixing ratios. Despite uncertainties in the mixing ratios, we found that the overall spatial distribution of artificial infiltration is very similar for all the tracers. The highest infiltration occurred in the eastern part of the infiltration system, whereas infiltration in the western part was the lowest. More balanced infiltration within the infiltration system could cause the elevated groundwater mound to be distributed more evenly, preventing the natural inflow of contaminated groundwater. Dedicated to Professor Peter Fritz on the occasion of his 80th birthday.
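The mixing ratios discussed above follow from a two-endmember mass balance for any conservative tracer. A minimal sketch with hypothetical δ18O endmember values (illustrative numbers, not the study's data):

```python
def mixing_fraction(c_sample, c_recharge, c_groundwater):
    """Two-endmember mixing: fraction of artificially recharged water in a
    sample, from a conservative tracer (e.g., the stable isotope delta-18O)."""
    return (c_sample - c_groundwater) / (c_recharge - c_groundwater)

# Hypothetical delta-18O values in per mil (not from the study):
f = mixing_fraction(c_sample=-10.0, c_recharge=-12.0, c_groundwater=-8.0)
```

Each tracer (stable isotopes, micropollutants, major ions) yields its own fraction estimate; comparing them, as the study does, reveals how sensitive the inferred infiltration pattern is to the choice of tracer.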
International Nuclear Information System (INIS)
Ayodele, T.R.; Jimoh, A.A.; Munda, J.L.; Agee, J.T.
2012-01-01
Highlights: ► We evaluate capacity factor of some commercially available wind turbines. ► Wind speed in the sites studied can best be modelled using Weibull distribution. ► Site WM05 has the highest wind power potential while site WM02 has the lowest. ► More wind power can be harnessed during the day period compared to the night. ► Turbine K seems to be the best turbine for the coastal region of South Africa. - Abstract: The operating curve parameters of a wind turbine should match the local wind regime optimally to ensure maximum exploitation of available energy in a mass of moving air. This paper provides estimates of the capacity factor of 20 commercially available wind turbines, based on the local wind characteristics of ten different sites located in the Western Cape region of South Africa. Ten-minute average time series wind-speed data for a period of 1 year are used for the study. First, the wind distribution that best models the local wind regime of the sites is determined. This is based on the root mean square error (RMSE) and the coefficient of determination (R²), which are used to test goodness of fit. Next, annual, seasonal, diurnal and peak-period capacity factors are estimated analytically. Then, the influence of turbine power curve parameters on the capacity factor is investigated. Some of the key results show that the wind distribution of the entire site can best be modelled statistically using the Weibull distribution. Site WM05 (Napier) presents the highest capacity factor for all the turbines. This indicates that this site has the highest wind power potential of all the available sites. Site WM02 (Calvinia) has the lowest capacity factor, i.e., the lowest wind power potential. This paper can assist in the planning and development of large-scale wind power-generating sites in South Africa.
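The analytical capacity-factor estimate pairs the fitted Weibull density with a turbine power curve. A sketch with a generic cubic-ramp power curve and hypothetical cut-in, rated, and cut-out speeds (not the parameters of the 20 turbines studied):

```python
import math

def weibull_pdf(v, k, c):
    """Weibull probability density for wind speed v."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_fraction(v, v_ci=3.0, v_r=12.0, v_co=25.0):
    """Normalized power curve: cubic ramp from cut-in to rated, flat to cut-out."""
    if v < v_ci or v >= v_co:
        return 0.0
    if v >= v_r:
        return 1.0
    return (v ** 3 - v_ci ** 3) / (v_r ** 3 - v_ci ** 3)

def capacity_factor(k, c, dv=0.01, v_max=40.0):
    """Midpoint-rule integration of the power curve against the Weibull pdf."""
    total, v = 0.0, 0.0
    while v < v_max:
        vm = v + dv / 2.0
        total += power_fraction(vm) * weibull_pdf(vm, k, c) * dv
        v += dv
    return total

cf = capacity_factor(k=2.0, c=8.0)
```

The same integral, evaluated per season or per time-of-day window with the corresponding fitted (k, c), gives the seasonal, diurnal, and peak-period capacity factors the abstract refers to.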
OligoRAP - an Oligo Re-Annotation Pipeline to improve annotation and estimate target specificity
Neerincx, P.B.T.; Rauwerda, H.; Nie, H.; Groenen, M.A.M.; Breit, T.M.; Leunissen, J.A.M.
2009-01-01
Background: High-throughput gene expression studies using oligonucleotide microarrays depend on the specificity of each oligonucleotide (oligo or probe) for its target gene. However, target-specific probes can only be designed when the reference genome of the species at hand has been completely sequenced,
Directory of Open Access Journals (Sweden)
Jong-Wuu Wu
2013-01-01
Full Text Available We propose weighted moments estimators (WMEs) of the location and scale parameters of the extreme value distribution based on a multiply type II censored sample. Simulated mean squared errors (MSEs) of the best linear unbiased estimator (BLUE) and exact MSEs of the WMEs are compared to study the behavior of the different estimation methods. The results identify the best estimator among the WMEs and the BLUE under different combinations of censoring schemes.
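For intuition, the moments idea behind such estimators can be illustrated on a complete (uncensored) sample from the extreme value (Gumbel) distribution; the censored-sample weighting studied in the paper is more involved. A Python sketch with hypothetical parameter values:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def gumbel_moments(sample):
    """Method-of-moments estimates (mu, beta) for the Gumbel (extreme value)
    distribution, using mean = mu + gamma*beta and var = (pi*beta)**2 / 6."""
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)
    beta = s * math.sqrt(6) / math.pi
    mu = xbar - EULER_GAMMA * beta
    return mu, beta

# Synthetic complete sample drawn via the Gumbel quantile function,
# with (hypothetical) true parameters mu = 10, beta = 2.
u = [(i + 0.5) / 1000 for i in range(1000)]
sample = [10.0 - 2.0 * math.log(-math.log(ui)) for ui in u]
mu_hat, beta_hat = gumbel_moments(sample)
```

On a complete sample the moment estimates recover the true parameters closely; the WMEs of the paper generalise this idea to samples where some order statistics are censored.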
Dinges, Andrew J.; Webb, Elisabeth B.; Vrtiska, Mark P.
2015-01-01
The Light Goose Conservation Order (LGCO) was initiated in 1999 to reduce mid-continent populations of light geese (lesser snow geese Chen caerulescens and Ross's geese C. rossii). However, concern about the potential for LGCO activities (i.e. hunting) to negatively impact non-target waterfowl species during spring migration in the Rainwater Basin (RWB) of Nebraska prompted agency personnel to limit the number of hunt days each week and to close multiple public wetlands to LGCO activities entirely. To evaluate the effects of the LGCO in the RWB, we quantified waterfowl density at wetlands open and closed to LGCO hunting and recorded all hunter encounters during springs 2011 and 2012. We encountered a total of 70 hunting parties on 22 study wetlands, with over 90% of these encounters occurring during the early season, when the majority of waterfowl used the RWB region. We detected greater overall densities of dabbling ducks Anas spp., as well as of mallards A. platyrhynchos and northern pintails A. acuta, on wetlands closed to the LGCO. We detected no effects of hunt day in the analyses of dabbling duck densities. We detected no differences in mean weekly dabbling duck densities among wetlands open to hunting, regardless of weekly or cumulative hunting encounter frequency throughout the early season. Additionally, hunting category was not a predictor of the presence of greater white-fronted geese Anser albifrons in a logistic regression model. Given that dabbling duck densities were greater on wetlands closed to hunting, providing wetlands free from hunting disturbance as refugia during the LGCO remains an important management strategy at migration stopover sites. However, given that we did not detect an effect of hunt day or hunting frequency on dabbling duck density, our results suggest increased hunting frequency at sites already open to hunting would likely have minimal impacts on the distribution of non-target waterfowl species using the region for spring
Target-fragment angular distributions for the interaction of 86 MeV/A 12C with 197Au
International Nuclear Information System (INIS)
Kraus, R.H. Jr.; Loveland, W.; McGaughey, P.L.; Seaborg, G.T.; Morita, Y.; Hageboe, E.; Haldorsen, I.R.; Sugihara, T.T.
1985-01-01
Target-fragment angular distributions were measured using radiochemical techniques for 69 different fragments from the interaction of 86 MeV/A 12C with 197Au. The angular distributions in the laboratory system are forward-peaked, with some distributions also showing a backward peaking. The shapes of the laboratory-system distributions were compared with the predictions of the nuclear firestreak model. The measured angular distributions differed markedly from the predictions of the firestreak model in most cases. This discrepancy could be due, in part, to overestimation of the transferred longitudinal momentum by the firestreak model, the assumption of isotropic angular distributions for fission and particle emission in the moving frame, and incorrect assumptions about how the lightest (A 145) fragment distributions were symmetric about 90°. (orig.)
Importance of exposure model in estimating impacts when a water distribution system is contaminated
International Nuclear Information System (INIS)
Davis, M. J.; Janke, R.; Environmental Science Division; USEPA
2008-01-01
The quantity of a contaminant ingested by individuals using tap water drawn from a water distribution system during a contamination event depends on the concentration of the contaminant in the water and the volume of water ingested. If the concentration varies with time, the actual time of exposure affects the quantity ingested. The influence of the timing of exposure and of individual variability in the volume of water ingested on the estimated impacts of a contamination event has received limited attention. We examine the significance of ingestion timing and variability in the volume of water ingested by using a number of models for ingestion timing and volume. Contaminant concentrations were obtained from simulations of an actual distribution system for cases involving contaminant injections lasting from 1 to 24 h. We find that assumptions about exposure can significantly influence estimated impacts, especially when injection durations are short and impact thresholds are high. The influence of ingestion timing and volume should be considered when assessing impacts for contamination events.
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Directory of Open Access Journals (Sweden)
Qi Liu
2016-08-01
Full Text Available Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates the allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed execution data for each task were collected and analyzed in depth. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
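The paper's TPR formulas are not reproduced in this record, but a two-phase regression of task progress against time can be sketched as a one-breakpoint piecewise-linear fit whose second segment is extrapolated to predict the finishing time. An illustrative Python sketch on synthetic data:

```python
import numpy as np

def two_phase_fit(t, y):
    """Fit y against t with two linear segments and a single breakpoint,
    chosen by scanning candidate split points and minimising the total
    squared error of the two least-squares lines."""
    best = None
    for i in range(2, len(t) - 1):          # keep >= 2 points per segment
        c1 = np.polyfit(t[:i], y[:i], 1)
        c2 = np.polyfit(t[i:], y[i:], 1)
        sse = (np.sum((np.polyval(c1, t[:i]) - y[:i]) ** 2)
               + np.sum((np.polyval(c2, t[i:]) - y[i:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, t[i], c1, c2)
    return best[1], best[2], best[3]

def predict_finish(c2, target=1.0):
    """Extrapolate the second-phase line to progress == target (task done)."""
    a, b = c2
    return (target - b) / a

# Synthetic task-progress trace: a slow first phase, then a faster one.
t = np.arange(10.0)
y = np.where(t <= 4, 0.04 * t, 0.16 + 0.1 * (t - 4))
bp, c1, c2 = two_phase_fit(t, y)
finish = predict_finish(c2)
```

The finishing-time prediction only uses the second-phase slope, mirroring the intuition that a task's late-phase progress rate is the better predictor of its completion time.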
A practical algorithm for distribution state estimation including renewable energy sources
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)
2009-11-15
Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are commonly divided into five principal renewable sources: the sun, the wind, flowing water, biomass and heat from within the earth. According to studies carried out by research institutes, about 25% of new generation will come from Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of the Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by a Weighted Least-Square (WLS) approach. The practical considerations include var compensators, Voltage Regulators (VRs) and Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. Comparison results with other evolutionary optimization algorithms, such as the original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO) and Genetic Algorithm (GA), on a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
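The hybrid idea, a global particle-swarm search polished by a local direct search, can be sketched as follows. For self-containment this sketch substitutes a simple compass (pattern) search for the Nelder-Mead step, and minimises a toy weighted-least-squares cost rather than a real DSE objective; all numbers are illustrative:

```python
import numpy as np

def pso(obj, bounds, n_particles=20, iters=50, seed=0):
    """Plain particle swarm optimisation returning the best position found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([obj(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

def compass_polish(obj, x, step=0.1, tol=1e-6):
    """Local polish by compass (pattern) search, standing in for Nelder-Mead."""
    x = np.asarray(x, dtype=float)
    fx = obj(x)
    while step > tol:
        moved = False
        for j in range(len(x)):
            for d in (step, -step):
                y = x.copy()
                y[j] += d
                fy = obj(y)
                if fy < fx:
                    x, fx, moved = y, fy, True
        if not moved:
            step *= 0.5
    return x

# Toy weighted least-squares cost standing in for the DSE objective.
H = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
w = np.array([1.0, 2.0, 1.0])
z = H @ np.array([0.5, -0.25])
obj = lambda s: float(np.sum(w * (z - H @ s) ** 2))
s_hat = compass_polish(obj, pso(obj, bounds=[(-2.0, 2.0), (-2.0, 2.0)]))
```

The design motivation is the one the abstract alludes to: the swarm handles the nonconvex, discrete-feature landscape globally, and the local simplex-type search sharpens the final estimate cheaply.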
Directory of Open Access Journals (Sweden)
Siti Masitoh Kartikawati
2014-04-01
Full Text Available Pasak bumi (Eurycoma longifolia Jack) is one of the non-timber forest products with "indeterminate" conservation status that is commercially traded in West Kalimantan. The research objective was to determine the potential of pasak bumi root per hectare and its ecological condition in its natural habitat. The root weight of E. longifolia Jack was estimated using simple linear regression and an exponential equation, with stem diameter and height as independent variables. The results showed that the population numbered 114 individuals, the majority in the seedling stage with 71 individuals (62.28%). The distribution followed a clumped pattern. Conditions of the habitat can be described as follows: daily average temperature of 25.6 °C, daily average relative humidity of 73.6%, light intensity of 0.9 klx, and red-yellow podsolic soil with texture ranging from clay to sandy clay. The selected estimator model for E. longifolia Jack root weight was the exponential equation with stem height as the independent variable, Y = 21.99T^0.010, with a coefficient of determination of 0.97. After the height variable was added, the potential minimum root weight of E. longifolia Jack that could be harvested per hectare was 0.33 kg. Keywords: Eurycoma longifolia, habitat preference, distribution pattern, root weight
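The reported model Y = 21.99T^0.010 is a power law, which can be fitted by ordinary least squares after a log-log transform. A small Python sketch with made-up stem heights; the root weights are generated from the published coefficients simply to show that the fit recovers them:

```python
import numpy as np

def fit_power_law(T, Y):
    """Fit Y = a * T**b by ordinary least squares on the log-log scale,
    the same functional form as the root-weight model above."""
    b, log_a = np.polyfit(np.log(T), np.log(Y), 1)
    return float(np.exp(log_a)), float(b)

# Hypothetical stem heights; weights generated from Y = 21.99 * T**0.010.
T = np.array([1.0, 2.0, 4.0, 8.0])
Y = 21.99 * T ** 0.010
a, b = fit_power_law(T, Y)
```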
Johnson, Nathan C.; Haig, Susan M.; Mosher, Stephen M.
2018-01-01
We described past and present distribution and abundance data to evaluate the status of the endangered Mariana Swiftlet (Aerodramus bartschi), a little-known echolocating cave swiftlet that currently inhabits 3 of 5 formerly occupied islands in the Mariana archipelago. We then evaluated the survey methods used to attain these estimates via fieldwork carried out on an introduced population of Mariana Swiftlets on the island of O'ahu, Hawaiian Islands, to derive better methods for future surveys. We estimate the range-wide population of Mariana Swiftlets to be 5,704 individuals occurring in 15 caves on Saipan, Aguiguan, and Guam in the Marianas; and 142 individuals occupying one tunnel on O'ahu. We further confirm that swiftlets have been extirpated from Rota and Tinian and have declined on Aguiguan. Swiftlets have remained relatively stable on Guam and Saipan in recent years. Our assessment of survey methods used for Mariana Swiftlets suggests overestimates depending on the technique used. We suggest the use of night vision technology and other changes to more accurately reflect their distribution, abundance, and status.
MIPAS ESA v7 carbon tetrachloride data: distribution, trend and atmospheric lifetime estimation
Valeri, M.; Barbara, F.; Boone, C. D.; Ceccherini, S.; Gai, M.; Maucher, G.; Raspollini, P.; Ridolfi, M.; Sgheri, L.; Wetzel, G.; Zoppetti, N.
2017-12-01
Carbon tetrachloride (CCl4) is a strong ozone-depleting atmospheric gas regulated by the Montreal Protocol. Recently it has received increasing interest due to the so-called "mystery of CCl4": its atmospheric concentration at the surface was found to decline at a rate significantly smaller than its lifetime-limited rate. Indeed, there is a discrepancy between atmospheric observations and the distribution estimated from the reported production and consumption. Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) measurements are used to estimate the CCl4 distribution, its trend, and its atmospheric lifetime in the upper troposphere / lower stratosphere (UTLS) region. In particular, here we use the MIPAS product generated with Version 7 of the Level 2 algorithm operated by the European Space Agency. The CCl4 distribution shows features typical of long-lived species of anthropogenic origin: higher concentrations in the troposphere, decreasing with altitude due to photolysis. We compare MIPAS CCl4 data with independent observations from the Atmospheric Chemistry Experiment - Fourier Transform Spectrometer (ACE-FTS) and the stratospheric balloon version of MIPAS (MIPAS-B). The comparison shows generally good agreement between the different datasets. CCl4 trends are evaluated as a function of both latitude and altitude: negative trends (-10/-15 pptv/decade, -10/-30 %/decade) are found at all latitudes in the UTLS, apart from a region in the Southern mid-latitudes between 50 and 10 hPa where the trend is slightly positive (5/10 pptv/decade, 15/20 %/decade). At the lowest altitudes sounded by the MIPAS scan we find trend values consistent with those determined on the basis of the Advanced Global Atmospheric Gases Experiment (AGAGE) and the National Oceanic and Atmospheric Administration / Earth System Research Laboratory / Halocarbons and other Atmospheric Trace Species (NOAA / ESRL / HATS) networks. A CCl4 global average lifetime of 47 (39 - 61) years has been
Estimating the Grain Size Distribution of Mars based on Fragmentation Theory and Observations
Charalambous, C.; Pike, W. T.; Golombek, M.
2017-12-01
We present here a fundamental extension to the fragmentation theory [1] which yields estimates of the distribution of particle sizes of a planetary surface. The model is valid within the size regimes of surfaces whose genesis is best reflected by the evolution of fragmentation phenomena governed by either the process of meteoritic impacts, or by a mixture with aeolian transportation at the smaller sizes. The key parameter of the model, the regolith maturity index, can be estimated as an average of that observed at a local site using cratering size-frequency measurements, orbital and surface image-detected rock counts and observations of sub-mm particles at landing sites. Through validation of ground truth from previous landed missions, the basis of this approach has been used at the InSight landing ellipse on Mars to extrapolate rock size distributions in HiRISE images down to 5 cm rock size, both to determine the landing safety risk and the subsequent probability of obstruction by a rock of the deployed heat flow mole down to 3-5 m depth [2]. Here we focus on a continuous extrapolation down to 600 µm coarse sand particles, the upper size limit that may be present through aeolian processes [3]. The parameters of the model are first derived for the fragmentation process that has produced the observable rocks via meteorite impacts over time, and therefore extrapolation into a size regime that is affected by aeolian processes has limited justification without further refinement. Incorporating thermal inertia estimates, size distributions observed by the Spirit and Opportunity Microscopic Imager [4] and Atomic Force and Optical Microscopy from the Phoenix Lander [5], the model's parameters in combination with synthesis methods are quantitatively refined further to allow transition within the aeolian transportation size regime. In addition, due to the nature of the model emerging in fractional mass abundance, the percentage of material by volume or mass that resides
Eppelbaum, Lev
2013-04-01
magnetic field for the models of thin bed, thick bed and horizontal circular cylinder (some of these procedures demand performing measurements at two levels over the earth's surface), (6) advanced 3D magnetic-gravity modeling for complex media, and (7) development of a 3D physical-archaeological (or magnetic-archaeological) model of the studied area. ROV observations also permit a multimodel approach to magnetic data analysis (Eppelbaum, 2005). Results of the performed 3D modeling confirm the effectiveness of the proposed ROV low-altitude survey. Khesin's methodology (Khesin et al., 2006) for estimating the magnetization of the upper geological section consists of land magnetic observations along a profile located on inclined relief, with consequent data processing (this method cannot be applied on flat topography). The improved modification of this approach is based on a combination of straight and inclined ROV observations, which will help to obtain parameters of the medium magnetization in areas of flat terrain relief. ACKNOWLEDGEMENT This investigation is funded by the Tel Aviv University - Cyprus Research Institute combined project "Advanced coupled electric-magnetic archaeological prospecting in Cyprus and Israel". REFERENCES Eppelbaum, L.V., 2005. Multilevel observations of magnetic field at archaeological sites as additional interpreting tool. Proceed. of the 6th Conference of Archaeological Prospection, Roma, Italy, 1-4. Eppelbaum, L.V., 2010. Archaeological geophysics in Israel: Past, Present and Future. Advances of Geosciences, 24, 45-68. Eppelbaum, L.V., 2011. Study of magnetic anomalies over archaeological targets in urban conditions. Physics and Chemistry of the Earth, 36, No. 16, 1318-1330. Eppelbaum, L.V., Alperovich, L., Zheludev, V. and Pechersky, A., 2011. Application of informational and wavelet approaches for integrated processing of geophysical data in complex environments. Proceed.
of the 2011 SAGEEP Conference, Charleston, South Carolina
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are usually modeled as multicomponent quadratic frequency modulation (QFM) signals. Chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals at reasonable computational cost.
Qu, Long; Nettleton, Dan; Dekkers, Jack C M
2012-12-01
Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
Directory of Open Access Journals (Sweden)
MOHAMMAD H. FATEMI
2011-07-01
Full Text Available Quantitative structure–activity relationship (QSAR) approaches were used to estimate the volume of distribution (Vd) using an artificial neural network (ANN). The data set consisted of the volumes of distribution of 129 pharmacologically important compounds, i.e., benzodiazepines, barbiturates, nonsteroidal anti-inflammatory drugs (NSAIDs), tricyclic anti-depressants and some antibiotics, such as betalactams, tetracyclines and quinolones. The descriptors, which were selected by stepwise variable selection methods, were: the Moriguchi octanol–water partition coefficient; the 3D-MoRSE signal 30, weighted by atomic van der Waals volumes; the fragment-based polar surface area; the d COMMA2 value, weighted by atomic masses; the Geary autocorrelation, weighted by the atomic Sanderson electronegativities; the 3D-MoRSE signal 02, weighted by atomic masses; and the Geary autocorrelation lag 5, weighted by the atomic van der Waals volumes. These descriptors were used as inputs for developing multiple linear regression (MLR) and artificial neural network models as linear and non-linear feature mapping techniques, respectively. The standard errors in the estimation of Vd by the MLR model were 0.104, 0.103 and 0.076, and for the ANN model 0.029, 0.087 and 0.082, for the training, internal and external validation tests, respectively. The robustness of these models was also evaluated by a leave-5-out cross-validation procedure, which gives the statistics Q² = 0.72 for the MLR model and Q² = 0.82 for the ANN model. Moreover, the results of the Y-randomization test revealed that there were no chance correlations in the data matrix. In conclusion, the results of this study indicate the applicability of estimating the Vd value of drugs from their structural molecular descriptors. Furthermore, the statistics of the developed models indicate the superiority of the ANN over the MLR model.
Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Rahmani, Amirreza
2011-01-01
Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation-flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are proposed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms rely on a local information-exchange network, relaxing the assumptions of existing algorithms. Distributed space systems rely on a signal transmission network among multiple spacecraft for their operation. Control and coordination among multiple spacecraft in a formation is facilitated via a network of relative sensing and inter-spacecraft communications. Guidance, navigation, and control rely on this sensing network, which becomes more complex as more spacecraft are added or as mission requirements become more complex. The observability of the formation state was analyzed in terms of a set of local observations from a particular node in the formation. Formation observability can be parameterized in terms of the matrices appearing in the formation dynamics and observation matrices. An agreement protocol was used as a mechanism for observing formation states from local measurements. An agreement protocol is essentially an unforced dynamic system whose trajectory is governed by the interconnection geometry and the initial condition of each node, with the goal of reaching a common value of interest. The observability of the interconnected system depends on the geometry of the network, as well as on the position of the observer relative to the topology. For the first time, critical GN&C (guidance, navigation, and control) estimation subsystems are synthesized by bringing the contribution of the spacecraft information-exchange network to the forefront of algorithmic analysis and design. The result is a
Directory of Open Access Journals (Sweden)
Steve Wathen
Full Text Available Evidence for significant losses of species richness or biodiversity, even within protected natural areas, is mounting. Managers are increasingly being asked to monitor biodiversity, yet estimating biodiversity is often prohibitively expensive. As a cost-effective option, we estimated the spatial and temporal distribution of species richness for four taxonomic groups (birds, mammals, herpetofauna (reptiles and amphibians), and plants) within Sequoia and Kings Canyon National Parks using only existing biological studies undertaken within the Parks and the Parks' long-term wildlife observation database. We used a rarefaction approach to model species richness for the four taxonomic groups and analyzed those groups by habitat type, elevation zone, and time period. We then mapped the spatial distributions of species richness values for the four taxonomic groups, as well as total species richness, for the Parks. We also estimated changes in species richness for birds, mammals, and herpetofauna since 1980. The modeled patterns of species richness either peaked at mid elevations (mammals, plants, and total species richness) or declined consistently with increasing elevation (herpetofauna and birds). Plants reached maximum species richness values at much higher elevations than did vertebrate taxa, and non-flying mammals reached maximum species richness values at higher elevations than did birds. Alpine plant communities, including sagebrush, had higher species richness values than did the subalpine plant communities located below them in elevation. These results are supported by other papers published in the scientific literature. Perhaps reflecting climate change, birds and herpetofauna displayed declines in species richness since 1980 at low and middle elevations, and mammals displayed declines in species richness since 1980 at all elevations.
International Nuclear Information System (INIS)
Gu Yuqiu; Zheng Zhijian; Zhou Weimin; Wen Tianshu; Chunyu Shutai; Cai Dafeng; Sichuan Univ., Chengdu; Neijiang Teachers College, Neijiang; Jiao Chunye; Chen Hao; Sichuan Univ., Chengdu; Yang Xiangdong
2005-01-01
This paper reports the results of an experiment on hot electron energy distributions during femtosecond laser-solid target interaction. The hot electrons formed an anisotropic energy distribution. In the direction of the target normal, the energy spectrum of the hot electrons was a Maxwellian-like distribution with an effective temperature of 206 keV, which was due to resonance absorption. In the direction of the specular reflection of the laser, the energy spectrum initially showed a local plateau, which then decreased gradually; this may be produced by several acceleration mechanisms. The effective temperature and the yield of hot electrons in the direction of the target normal are larger than those in the direction of the specular reflection of the laser, which indicates that the resonance absorption mechanism is more effective than the others. (authors)
DEFF Research Database (Denmark)
Pertl, Michael; Douglass, Philip James; Heussen, Kai
2018-01-01
The installation of measurements in distribution grids enables the development of data driven methods for the power system. However, these methods have to be validated in order to understand the limitations and capabilities for their use. This paper presents a systematic validation of a neural network approach for voltage estimation in active distribution grids by means of measured data from two feeders of a real low voltage distribution grid. The approach enables a real-time voltage estimation at locations in the distribution grid, where otherwise only non-real-time measurements are available.
Directory of Open Access Journals (Sweden)
Ping Wang
2016-04-01
Full Text Available Industry structure adjustment is an effective measure for achieving the carbon intensity target of Guangdong Province. Accurately evaluating the contribution of industry structure adjustment to the carbon intensity target helps the government implement more flexible and effective policies and measures for CO2 emissions reduction. In this paper, we attempt to evaluate the contribution of industry structure adjustment to the carbon intensity target. First, we predict the gross domestic product (GDP) with scenario forecasting, the industry structure with a Markov chain model, and CO2 emissions with a novel correlation model based on a least squares support vector machine; we then assess the contribution of industry structure adjustment to the carbon intensity target of Guangdong during the period 2011–2015 under nine scenarios. The obtained results show that, in the ideal scenario, the economy will grow at a high speed and the industry structure will be significantly adjusted, so that the carbon intensity in 2015 will decrease by 25.53% compared to that in 2010, making a 130.94% contribution to the carbon intensity target. Meanwhile, in the conservative scenario, the economy will grow at a low speed and the industry structure will be slightly adjusted, so that the carbon intensity in 2015 will decrease by 23.89% compared to that in 2010, making a 122.50% contribution to the carbon intensity target.
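The Markov chain step, projecting industry-structure shares forward with a transition matrix, amounts to repeated vector-matrix multiplication. A Python sketch; the three-sector shares and transition probabilities below are hypothetical, not values from the paper:

```python
import numpy as np

def project_shares(shares, P, years):
    """Project industry-structure shares forward one year at a time using a
    Markov transition matrix P (each row sums to 1)."""
    s = np.asarray(shares, dtype=float)
    for _ in range(years):
        s = s @ P
    return s

# Hypothetical three-sector economy (primary, secondary, tertiary).
s0 = np.array([0.05, 0.50, 0.45])
P = np.array([[0.90, 0.05, 0.05],
              [0.00, 0.92, 0.08],
              [0.00, 0.02, 0.98]])
s5 = project_shares(s0, P, years=5)
```

Because each row of P sums to one, the projected shares always remain a valid composition, which is what makes the Markov model convenient for structure-adjustment scenarios.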
Estimating the formation age distribution of continental crust by unmixing zircon ages
Korenaga, Jun
2018-01-01
Continental crust provides first-order control on Earth's surface environment, enabling the presence of stable dry landmasses surrounded by deep oceans. The evolution of continental crust is important for atmospheric evolution, because continental crust is an essential component of the deep carbon cycle and is likely to have played a critical role in the oxygenation of the atmosphere. Geochemical information stored in the mineral zircon, known for its resilience to diagenesis and metamorphism, has been central to ongoing debates on the genesis and evolution of continental crust. However, correction for crustal reworking, which is the most critical step when estimating original formation ages, has been incorrectly formulated, undermining the significance of previous estimates. Here I suggest a simple yet promising approach for reworking correction using the global compilation of zircon data. The present-day distribution of crustal formation age estimated by the new "unmixing" method serves as a lower bound on true crustal growth, and large deviations from growth models based on mantle depletion imply an important role for crustal recycling through Earth's history.
Vupparaboina, Kiran Kumar; Nizampatnam, Srinath; Chhablani, Jay; Richhariya, Ashutosh; Jana, Soumya
2015-12-01
A variety of vision ailments are indicated by anomalies in the choroid layer of the posterior visual section. Consequently, choroidal thickness and volume measurements, usually performed by experts based on optical coherence tomography (OCT) images, have assumed diagnostic significance. Now, to save precious expert time, it has become imperative to develop automated methods. To this end, one requires choroid outer boundary (COB) detection as a crucial step, where difficulty arises as the COB divides the choroidal granularity and the scleral uniformity only notionally, without marked brightness variation. In this backdrop, we measure the structural dissimilarity between choroid and sclera by structural similarity (SSIM) index, and hence estimate the COB by thresholding. Subsequently, smooth COB estimates, mimicking manual delineation, are obtained using tensor voting. On five datasets, each consisting of 97 adult OCT B-scans, automated and manual segmentation results agree visually. We also demonstrate close statistical match (greater than 99.6% correlation) between choroidal thickness distributions obtained algorithmically and manually. Further, quantitative superiority of our method is established over existing results by respective factors of 27.67% and 76.04% in two quotient measures defined relative to observer repeatability. Finally, automated choroidal volume estimation, being attempted for the first time, also yields results in close agreement with that of manual methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
O. Jakubov
2013-09-01
Full Text Available Common techniques for position-velocity-time estimation in satellite navigation, iterative least squares and the extended Kalman filter, involve matrix operations. The matrix inversion and the inclusion of a matrix library place requirements on the computational power and operating platform of the navigation processor. In this paper, we introduce a novel distributed algorithm suitable for implementation in simple parallel processing units, one for each tracked satellite. Such a unit performs only scalar sum, subtraction, multiplication, and division. The algorithm can be efficiently implemented in hardware logic. Given the fast position-velocity-time estimator, frequent estimates can improve the dynamic performance of a vector tracking receiver. The algorithm has been designed from a factor graph representing the extended Kalman filter by splitting vector nodes into scalar ones, resulting in a cyclic graph that requires only a few iterations. Monte Carlo simulations have been conducted to investigate convergence and accuracy. Simulation case studies for a vector tracking architecture and experimental measurements with a real-time software receiver developed at CTU in Prague were also carried out. The algorithm offers compromises in stability, accuracy, and complexity depending on the number of iterations. In scenarios with a large number of tracked satellites, it can outperform the traditional methods at low complexity.
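The scalar-only flavor of such an estimator can be illustrated with a sketch that solves the linearized pseudorange problem using Gauss-Seidel sweeps built from scalar operations. This is not the paper's factor-graph algorithm; it is a hedged stand-in showing matrix-free, per-scalar updates (and the range evaluation still uses a square root, kept here for clarity). Satellite coordinates and pseudoranges are hypothetical.

```python
def solve_pvt_scalar(sats, pr, x0=(0.0, 0.0, 0.0, 0.0), outer=20, inner=60):
    """Gauss-Newton on linearized pseudoranges; the 4x4 normal equations
    are solved by Gauss-Seidel sweeps using only scalar +, -, *, /."""
    x = list(x0)                                   # x, y, z, clock bias
    for _ in range(outer):
        A = [[0.0] * 4 for _ in range(4)]          # normal matrix H^T H
        g = [0.0] * 4                              # right-hand side H^T r
        for s, rho in zip(sats, pr):
            dx, dy, dz = x[0] - s[0], x[1] - s[1], x[2] - s[2]
            dist = (dx * dx + dy * dy + dz * dz) ** 0.5
            h = [dx / dist, dy / dist, dz / dist, 1.0]
            res = rho - (dist + x[3])              # pseudorange residual
            for i in range(4):
                g[i] += h[i] * res
                for j in range(4):
                    A[i][j] += h[i] * h[j]
        d = [0.0] * 4                              # Gauss-Seidel solve of A d = g
        for _ in range(inner):
            for i in range(4):
                acc = g[i]
                for j in range(4):
                    if j != i:
                        acc -= A[i][j] * d[j]
                d[i] = acc / A[i][i]
        for i in range(4):
            x[i] += d[i]
    return x
```

Gauss-Seidel converges here because the normal matrix is symmetric positive definite; the number of inner sweeps is the stability/complexity trade-off the abstract alludes to.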
Directory of Open Access Journals (Sweden)
Tudor DRUGAN
2003-08-01
Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables, and the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the key medical parameters reported in the medical literature and expressing them in contingency-table units based on their mathematical expressions reduced the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of deriving different information from the computed confidence interval for a specified method, such as the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level, was solved through the implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. Graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, posed a real problem, because most software uses interpolation in graphical representation and produces quadratic rather than triangular surface maps. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All the implementations described above were used to compute the confidence intervals and to estimate their performance across binomial distribution sample sizes and variables.
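As a minimal illustration of binomial confidence intervals (not the authors' PHP implementation), the Wald and Wilson intervals can be computed as follows; note that the Wald interval degenerates at k = 0, one of the normal-approximation limits discussed above.

```python
import math

def wald_ci(k, n, z=1.96):
    """Normal-approximation (Wald) interval for a binomial proportion."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(k, n, z=1.96):
    """Wilson score interval; behaves much better near 0/1 and for small n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

For k = 0 successes in n = 10 trials the Wald interval collapses to a point, while the Wilson interval still gives a usable upper bound.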
Duffy, Sean; Smith, John
2017-10-18
Duffy, Huttenlocher, Hedges, and Crawford (Psychonomic Bulletin & Review, 17(2), 224-230, 2010) report on experiments where participants estimate the lengths of lines. These studies were designed to test the category adjustment model (CAM), a Bayesian model of judgments. The authors report that their analysis provides evidence consistent with CAM: that there is a bias toward the running mean and not recent stimuli. We reexamine their data. First, we attempt to replicate their analysis, and we obtain different results. Second, we conduct a different statistical analysis. We find significant recency effects, and we identify several specifications where the running mean is not significantly related to judgment. Third, we conduct tests of auxiliary predictions of CAM. We do not find evidence that the bias toward the mean increases with exposure to the distribution. We also do not find that responses longer than the maximum of the distribution or shorter than the minimum become less likely with greater exposure to the distribution. Fourth, we produce a simulated dataset that is consistent with key features of CAM, and our methods correctly identify it as consistent with CAM. We conclude that the Duffy et al. (2010) dataset is not consistent with CAM. We also discuss how conventions in psychology do not sufficiently reduce the likelihood of these mistakes in future research. We hope that the methods that we employ will be used to evaluate other datasets.
Estimation of cost-effectiveness of the Finnish electricity distribution utilities
Energy Technology Data Exchange (ETDEWEB)
Kopsakangas-Savolainen, Maria; Svento, Rauli [Department of Economics, University of Oulu (Finland)
2008-03-15
This paper examines the cost-effectiveness of Finnish electricity distribution utilities. We estimate several panel data stochastic frontier specifications using both Cobb-Douglas and Translog model specifications. The conventional models are extended in order to model observed heterogeneity explicitly in the cost frontier models. The true fixed effects model has been used as a representative of the models which account for unobserved heterogeneity, and extended conventional random effects models have been used in analysing the impact of observed heterogeneity. A comparison between the conventional random effects model and models where the heterogeneity component is entered either into the mean or into the variance of the inefficiency term shows that relative efficiency scores diminish when heterogeneity is added to the analysis. The true fixed effects model, on the other hand, gives clearly smaller inefficiency scores than the random effects models. In the paper we also show that the relative inefficiency scores and rankings are not sensitive to the cost function specification. Our analysis points out the importance of the efficient use of the existing distribution network. The economies of scale results suggest that firms could reduce their operating costs by using networks more efficiently. According to our results, average-size firms which have high load factors are the most efficient ones. All firms have unused capacity, so they can improve cost-effectiveness by increasing average distributed volumes rather than by mergers. (author)
Estimation of cost-effectiveness of the Finnish electricity distribution utilities
International Nuclear Information System (INIS)
Kopsakangas-Savolainen, Maria; Svento, Rauli
2008-01-01
This paper examines the cost-effectiveness of Finnish electricity distribution utilities. We estimate several panel data stochastic frontier specifications using both Cobb-Douglas and Translog model specifications. The conventional models are extended in order to model observed heterogeneity explicitly in the cost frontier models. The true fixed effects model has been used as a representative of the models which account for unobserved heterogeneity, and extended conventional random effects models have been used in analysing the impact of observed heterogeneity. A comparison between the conventional random effects model and models where the heterogeneity component is entered either into the mean or into the variance of the inefficiency term shows that relative efficiency scores diminish when heterogeneity is added to the analysis. The true fixed effects model, on the other hand, gives clearly smaller inefficiency scores than the random effects models. In the paper we also show that the relative inefficiency scores and rankings are not sensitive to the cost function specification. Our analysis points out the importance of the efficient use of the existing distribution network. The economies of scale results suggest that firms could reduce their operating costs by using networks more efficiently. According to our results, average-size firms which have high load factors are the most efficient ones. All firms have unused capacity, so they can improve cost-effectiveness by increasing average distributed volumes rather than by mergers
Ziegler, Hannes Moritz
Planners and managers often rely on coarse population distribution data from the census to address various social, economic, and environmental problems. In the analysis of physical vulnerability to sea-level rise, census units such as blocks or block groups are coarse relative to the required decision-making application. This study explores the benefits of integrating image classification and dasymetric mapping at the household level to provide detailed small-area population estimates at the scale of residential buildings. In a case study of Boca Raton, FL, a sea-level rise inundation grid based on NOAA mapping methods is overlaid on the highly detailed population distribution data to identify vulnerable residences and estimate population displacement. The enhanced spatial detail offered through this method has the potential to better guide targeted strategies for future development, mitigation, and adaptation efforts.
Rosetti, Marcos Francisco; Pacheco-Cobos, Luis; Larralde, Hernán; Hudson, Robyn
2010-11-01
This work explores the search trajectories of children attempting to find targets distributed on a playing field. This task, of a ludic nature, was developed to test the effect of the conspicuity and spatial distribution of targets on the searcher's performance. The searcher's path was recorded by a Global Positioning System (GPS) device attached to the child's waist. Participants were neither rewarded nor rated on their performance. Variation in the conspicuity of the targets influenced search performance as expected; cryptic targets resulted in slower searches and longer, more tortuous paths. Extracting the main features of the paths showed that the children: (1) paid little attention to the spatial distribution and, at least in the conspicuous condition, approximately followed a nearest-neighbor pattern of target collection; and (2) were strongly influenced by the conspicuity of the targets. We implemented a simple statistical model of the search rules mimicking the children's behavior at the level of individual (coarsened) steps. The model reproduced the main features of the children's paths without invoking memory or planning.
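The nearest-neighbor collection pattern mentioned above can be sketched as a greedy path over hypothetical target coordinates (illustrative only, not the study's GPS data):

```python
import numpy as np

def nearest_neighbour_path(start, targets):
    """Greedy collection order: repeatedly walk to the closest remaining target."""
    targets = np.asarray(targets, dtype=float)
    remaining = list(range(len(targets)))
    pos = np.asarray(start, dtype=float)
    order, length = [], 0.0
    while remaining:
        dists = [float(np.linalg.norm(targets[i] - pos)) for i in remaining]
        k = remaining.pop(dists.index(min(dists)))
        length += np.linalg.norm(targets[k] - pos)
        pos = targets[k]
        order.append(k)
    return order, float(length)
```

Comparing a child's recorded collection order against this greedy order (and its path length) is one simple way to quantify "approximately nearest-neighbor" behavior.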
Characterizing subcritical assemblies with time of flight fixed by energy estimation distributions
Monterial, Mateusz; Marleau, Peter; Pozzi, Sara
2018-04-01
We present the Time of Flight Fixed by Energy Estimation (TOFFEE) as a measure of the fission chain dynamics in subcritical assemblies. TOFFEE is the time between correlated gamma rays and neutrons, minus the estimated travel time of the incident neutron inferred from its proton recoil. The measured subcritical assembly was the BeRP ball, a 4.482 kg sphere of α-phase weapons-grade plutonium metal, in five configurations: bare and with close-fitting shell reflectors of 0.5, 1, and 1.5 in. iron and 1 in. nickel. We extend the measurement with MCNPX-PoliMi simulations of shells up to 6 inches thick and two further reflector materials: aluminum and tungsten. We also simulated the BeRP ball with masses ranging from 1 to 8 kg. Two-region and single-region point-kinetics models were used to describe the behavior of the positive side of the TOFFEE distribution from 0 to 100 ns. The single-region model of the bare cases gave positive linear correlations between estimated and expected neutron decay constants and leakage multiplications. The two-region model provided a way to estimate neutron multiplication for the reflected cases, which correlated positively with expected multiplication, but the nature of the correlation (sub- or superlinear) changed between material types. Finally, we found that the areal density of the reflector shells had a linear correlation with the integral of the two-region model fit. Therefore, we expect that with knowledge of reflector composition, one could determine the shell thickness, or vice versa. Furthermore, up to a certain reflector amount and thickness, the two-region model provides a way of distinguishing bare and reflected plutonium assemblies.
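A single-region point-kinetics fit of the kind described above reduces to fitting a decaying exponential to the positive TOFFEE tail. The sketch below uses synthetic counts; the amplitude, decay constant, and noise level are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_region(t, amplitude, alpha):
    """Single-region point-kinetics tail: correlated counts decay as exp(-alpha*t)."""
    return amplitude * np.exp(-alpha * t)

# Hypothetical timing histogram over the 0-100 ns window (illustrative values).
t = np.linspace(0.0, 100.0, 101)
rng = np.random.default_rng(1)
counts = single_region(t, 500.0, 0.05) + rng.normal(0.0, 2.0, t.size)

(amp_hat, alpha_hat), _ = curve_fit(single_region, t, counts, p0=(300.0, 0.01))
```

The fitted decay constant plays the role of the estimated neutron decay constant that the abstract correlates with leakage multiplication.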
Rigit, A. R. H.; Shrimpton, John S.
2009-06-01
The majority of scientific and industrial electrical spray applications make use of sprays that contain a range of drop diameters. Indirect evidence suggests the mean drop diameter and the mean drop charge level are usually correlated. In addition, within each drop diameter class there is every reason to suspect that a distribution of charge levels exists. This paper presents an experimental method that uses the joint PDF of drop velocity and diameter, obtained from phase Doppler anemometry measurements, together with directly obtained, spatially resolved distributions of the mass and charge flux, to obtain a drop diameter and charge frequency distribution. The method is demonstrated using several datasets obtained from experimental measurements of steady poly-disperse sprays of an electrically insulating liquid produced with the charge injection technique. The space charge repulsion in the spray plume produces a hollow cone spray structure. In addition, an approximate self-similarity is observed, with the maximum radial mass and charge flow occurring at r/d ≈ 200. The charge flux profile is slightly offset from the mass flux profile, and this gives direct evidence that the spray specific charge increases from approximately 20% of the bulk mean spray specific charge on the spray axis to approximately 200% of the bulk mean specific charge in the periphery of the spray. The results from the drop charge estimation model suggest a complex picture of the correlation between drop charge and drop diameter, with spray specific charge, injection velocity and orifice diameter all contributing to the shape of the drop diameter-charge distribution. Mean drop charge as a function of the Rayleigh limit is approximately 0.2, and is invariant with drop diameter and also across the spray cases tested.
Energy Technology Data Exchange (ETDEWEB)
Rigit, A.R.H. [University of Sarawak, Faculty of Engineering, Kota Samarahan, Sarawak (Malaysia); Shrimpton, John S. [University of Southampton, Energy Technology Research Group, School of Engineering Sciences, Southampton (United Kingdom)
2009-06-15
The majority of scientific and industrial electrical spray applications make use of sprays that contain a range of drop diameters. Indirect evidence suggests the mean drop diameter and the mean drop charge level are usually correlated. In addition, within each drop diameter class there is every reason to suspect that a distribution of charge levels exists. This paper presents an experimental method that uses the joint PDF of drop velocity and diameter, obtained from phase Doppler anemometry measurements, together with directly obtained, spatially resolved distributions of the mass and charge flux, to obtain a drop diameter and charge frequency distribution. The method is demonstrated using several datasets obtained from experimental measurements of steady poly-disperse sprays of an electrically insulating liquid produced with the charge injection technique. The space charge repulsion in the spray plume produces a hollow cone spray structure. In addition, an approximate self-similarity is observed, with the maximum radial mass and charge flow occurring at r/d ≈ 200. The charge flux profile is slightly offset from the mass flux profile, and this gives direct evidence that the spray specific charge increases from approximately 20% of the bulk mean spray specific charge on the spray axis to approximately 200% of the bulk mean specific charge in the periphery of the spray. The results from the drop charge estimation model suggest a complex picture of the correlation between drop charge and drop diameter, with spray specific charge, injection velocity and orifice diameter all contributing to the shape of the drop diameter-charge distribution. Mean drop charge as a function of the Rayleigh limit is approximately 0.2, and is invariant with drop diameter and also across the spray cases tested. (orig.)
Larson, Steven J.; Crawford, Charles G.; Gilliom, Robert J.
2004-01-01
Regression models were developed for predicting atrazine concentration distributions in rivers and streams, using the Watershed Regressions for Pesticides (WARP) methodology. Separate regression equations were derived for each of nine percentiles of the annual distribution of atrazine concentrations and for the annual time-weighted mean atrazine concentration. In addition, seasonal models were developed for two specific periods of the year--the high season, when the highest atrazine concentrations are expected in streams, and the low season, when concentrations are expected to be low or undetectable. Various nationally available watershed parameters were used as explanatory variables, including atrazine use intensity, soil characteristics, hydrologic parameters, climate and weather variables, land use, and agricultural management practices. Concentration data from 112 river and stream stations sampled as part of the U.S. Geological Survey's National Water-Quality Assessment and National Stream Quality Accounting Network Programs were used for computing the concentration percentiles and mean concentrations used as the response variables in regression models. Tobit regression methods, using maximum likelihood estimation, were used for developing the models because some of the concentration values used for the response variables were censored (reported as less than a detection threshold). Data from 26 stations not used for model development were used for model validation. The annual models accounted for 62 to 77 percent of the variability in concentrations among the 112 model development stations. Atrazine use intensity (the amount of atrazine used in the watershed divided by watershed area) was the most important explanatory variable in all models, but additional watershed parameters significantly increased the amount of variability explained by the models. Predicted concentrations from all 10 models were within a factor of 10 of the observed concentrations at most stations.
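Tobit (censored) regression of the kind used here can be sketched by maximizing the censored-normal likelihood directly: uncensored observations contribute a density term, censored ones the probability mass below the detection limit. This is an illustrative toy on simulated data, not the USGS WARP model; all coefficients are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_nll(params, X, y, censored, limit):
    """Negative log-likelihood of a left-censored (Tobit) regression."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    ll_obs = norm.logpdf(y[~censored], mu[~censored], sigma).sum()
    ll_cen = norm.logcdf((limit - mu[censored]) / sigma).sum()
    return -(ll_obs + ll_cen)

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_latent = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 1.0, n)
limit = 0.0                                    # hypothetical detection limit
censored = y_latent < limit
y = np.where(censored, limit, y_latent)        # reported as "< limit"

res = minimize(tobit_nll, x0=np.zeros(3), args=(X, y, censored, limit))
beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
```

Ignoring the censoring (ordinary least squares on the reported values) would bias both the slope and the intercept; the censored likelihood recovers the latent relationship.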
Directory of Open Access Journals (Sweden)
W. Castaings
2009-04-01
Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.
In this contribution, the potential of variational methods for distributed catchment-scale hydrology is considered. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.
It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.
For the estimation of model parameters, adjoint-based derivatives were found to be exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.
Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found to be very promising but should be combined with another regularization strategy in order to prevent overfitting.
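The SVD-of-the-Jacobian idea can be illustrated with a toy model whose response is dominated by one parameter direction. The model, parameter values, and finite-difference step below are assumptions for illustration only, not the flash flood model above.

```python
import numpy as np

def jacobian_fd(model, p, eps=1e-6):
    """Forward-difference Jacobian of model outputs with respect to parameters."""
    f0 = model(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        pj = p.copy()
        pj[j] += eps
        J[:, j] = (model(pj) - f0) / eps
    return J

# Toy "response" whose sensitivity lives almost entirely in p[0] + p[1].
def model(p):
    t = np.linspace(0.0, 1.0, 50)
    return (p[0] + p[1]) * np.exp(-t) + 0.01 * p[2] * t

J = jacobian_fd(model, np.array([1.0, 0.5, 2.0]))
U, s, Vt = np.linalg.svd(J, full_matrices=False)
```

The leading right singular vector (a row of `Vt`) identifies the dominant parameter combination, here roughly (1, 1, 0)/sqrt(2): this is the kind of direction an SVD-based parametrization would retain while discarding the near-null directions that cause overfitting.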
International Nuclear Information System (INIS)
Kobayashi, T.; Tanihata, I.; Suzuki, T.
1992-01-01
Transverse momentum distributions of the projectile fragments from β-unstable nuclei have been measured with various projectile and target combinations. The momentum correlation of the two neutrons in the neutron halo is extracted from the transverse-momentum distribution of ⁹Li and that of the neutrons. It is found that the two neutrons move in the same direction on average, which strongly suggests the formation of a di-neutron in ¹¹Li. (Author)
Some estimates of mirror plasma startup by neutral beam heating of pellet and gas cloud targets
International Nuclear Information System (INIS)
Shearer, J.W.; Willmann, P.A.
1978-01-01
Hot plasma buildup by neutral beam injection into an initially cold solid or gaseous target is found to be conceivable in large mirror machine experiments such as 2XIIB or MFTF. A simple analysis shows that existing neutral beam intensities are sufficient to ablate suitable targets to form a gas or vapor cloud. An approximate rate equation model is used to follow the subsequent processes of ionization, heating, and hot plasma formation. Solutions of these rate equations are obtained by means of the "GEAR" techniques for solving "stiff" systems of differential equations. These solutions are in rough agreement with the 2XIIB stream plasma buildup experiment. They also predict that buildup on a suitable nitrogen-like target will occur in the MFTF geometry. In 2XIIB the solutions are marginal; buildup may be possible, but is not certain.
Estimating cost ratio distribution between fatal and non-fatal road accidents in Malaysia
Hamdan, Nurhidayah; Daud, Noorizam
2014-07-01
Road traffic crashes are a major global problem and should be treated as a shared responsibility. In Malaysia, road accidents killed 6,917 people and injured or disabled 17,522 people in 2012, and the government spent about RM9.3 billion in 2009; losses of approximately 1 to 2 percent of gross domestic product (GDP) are reported annually. The current cost ratio for fatal and non-fatal accidents used by the Ministry of Works Malaysia is simply based on an arbitrary value of 6:4 (equivalently 1.5:1), reflecting the fact that six factors are involved in the accident-cost calculation for fatal accidents and four for non-fatal accidents. This simple indication used by the authority is questionable, since there is a lack of mathematical and conceptual evidence for how the ratio was determined. The main aim of this study is to determine a new accident cost ratio for fatal and non-fatal accidents in Malaysia based on a quantitative statistical approach. The cost ratio distributions are estimated based on the Weibull distribution. Due to the unavailability of official accident cost data, insurance claim data for both fatal and non-fatal accidents have been used as proxy information for the actual accident cost. Two types of parameter estimates are used in this study: maximum likelihood estimation (MLE) and robust estimation. The findings reveal that the accident cost ratio for fatal to non-fatal claims is 1.33 under MLE, while under robust estimation it is slightly higher, at 1.51. This study will help the authority determine a more accurate cost ratio between fatal and non-fatal accidents than the official ratio set by the government, since the cost ratio is an important weighting element in modeling road-accident-related data. Therefore, this study provides some guidance tips to revise the insurance claim set by the Malaysia road authority, hence the appropriate method
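The Weibull-based cost-ratio idea can be sketched as follows: fit a Weibull distribution to each claim sample by MLE and compare the fitted means. The synthetic claim amounts and parameters below are hypothetical, not the Malaysian insurance data.

```python
import numpy as np
from math import gamma
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical claim amounts (arbitrary units): fatal claims tend to be larger.
fatal = stats.weibull_min.rvs(1.5, scale=60.0, size=2000, random_state=rng)
nonfatal = stats.weibull_min.rvs(1.5, scale=40.0, size=2000, random_state=rng)

# MLE of shape and scale with the location fixed at zero.
c_f, _, s_f = stats.weibull_min.fit(fatal, floc=0)
c_n, _, s_n = stats.weibull_min.fit(nonfatal, floc=0)

# Weibull mean is scale * Gamma(1 + 1/shape); their ratio estimates the cost ratio.
ratio = (s_f * gamma(1 + 1 / c_f)) / (s_n * gamma(1 + 1 / c_n))
```

A robust variant (as in the study) would replace the MLE fit with estimators less sensitive to extreme claims, which is why the two approaches can yield somewhat different ratios.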
International Nuclear Information System (INIS)
Tabak, M.; Callahan-Miller, D.
1997-01-01
We describe the status of a distributed radiator heavy ion target design. In integrated calculations this target ignited and produced 390-430 MJ of yield when driven with 5.8-6.5 MJ of 3-4 GeV Pb ions. The target has cylindrical symmetry with disk endplates. The ions uniformly illuminate these endplates in a 5 mm radius spot. We discuss the considerations which led to this design together with some previously unused design features: low density hohlraum walls in approximate pressure balance with internal low-Z fill materials, radiation symmetry determined by the position of the radiator materials and particle ranges, and early time pressure symmetry possibly influenced by radiation shims. We discuss how this target scales to lower input energy or to lower beam power. Variant designs with more realistic beam focusing strategies are also discussed. We show the tradeoffs required for targets which accept higher particle energies
CATCH ESTIMATION AND SIZE DISTRIBUTION OF BILLFISHES LANDED IN PORT OF BENOA, BALI
Directory of Open Access Journals (Sweden)
Bram Setyadji
2012-06-01
Full Text Available Billfishes are generally considered a by-product of the tuna longline fishery and have high economic value in the market. By far, information on Indian Ocean billfish biology and fisheries, especially in Indonesia, is very limited. This research aimed to estimate the production and size distribution of billfishes landed in the port of Benoa during 2010 (February-December) through daily observation at the processing plants. The results showed that the landings were dominated by swordfish (Xiphias gladius, 54.9%), blue marlin (Makaira mazara, 17.8%) and black marlin (Makaira indica, 13.0%), followed by small amounts of striped marlin (Tetrapturus audax), sailfish (Istiophorus platypterus), and shortbill spearfish (Tetrapturus angustirostris). Generally, individual billfish ranged between 68 and 206 cm (PFL) and showed a negative allometric pattern, except for swordfish, which was isometric. Most of the billfish landed had not yet reached first sexual maturity.
Iwafune, Yumiko; Ogimoto, Kazuhiko; Yagita, Yoshie
Energy management systems (EMS) on the demand side are expected to enhance the supply-demand balancing capability of a power system under the anticipated penetration of renewable energy generation such as photovoltaics (PV). Elucidating the structure of energy consumption in a building is an important element in realizing EMS and contributes to identifying potential energy savings. In this paper, we propose a method for estimating the operating condition of household appliances using circuit current data from an electric distribution board. Circuit current data are classified by waveform shape using a self-organizing map and aggregated by appliance based on customer information about the appliances they possess. The proposed method is verified using data from a residential energy consumption measurement survey.
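A minimal self-organizing map for grouping current waveforms by shape might look like the following. The waveforms, map size, and training schedule are illustrative assumptions, not the paper's survey data or configuration.

```python
import numpy as np

def train_som(data, n_units=2, epochs=100, lr0=0.5, radius0=1.0, seed=0):
    """Tiny 1-D self-organizing map: units compete for each waveform, and the
    winning unit (plus its neighbours, weighted by h) moves toward it."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                    # decaying learning rate
        radius = max(radius0 * (1.0 - e / epochs), 0.5)  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * radius ** 2))
            w += lr * h[:, None] * (x - w)
    return w

def assign(data, w):
    """Best-matching unit index for each waveform."""
    return np.array([int(np.argmin(((w - x) ** 2).sum(axis=1))) for x in data])
```

After training, each map unit holds a prototype waveform; aggregating circuits by their best-matching unit is the "break down by shape" step described above.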
Wenger, Seth J; Freeman, Mary C
2008-10-01
Researchers have developed methods to account for imperfect detection of species with either occupancy (presence-absence) or count data using replicated sampling. We show how these approaches can be combined to simultaneously estimate occurrence, abundance, and detection probability by specifying a zero-inflated distribution for abundance. This approach may be particularly appropriate when patterns of occurrence and abundance arise from distinct processes operating at differing spatial or temporal scales. We apply the model to two data sets: (1) previously published data for a species of duck, Anas platyrhynchos, and (2) data for a stream fish species, Etheostoma scotti. We show that in these cases, an incomplete-detection zero-inflated modeling approach yields a superior fit to the data compared with other models. We propose that zero-inflated abundance models accounting for incomplete detection be considered when replicate count data are available.
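A minimal zero-inflated Poisson fit (occurrence probability psi plus conditional abundance lambda) conveys the core idea; the paper's full model is hierarchical and also estimates detection probability, which this sketch omits. The simulated data and true parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_nll(params, y):
    """Negative log-likelihood of a zero-inflated Poisson: with probability
    (1 - psi) a site is unoccupied (count 0); otherwise counts are Poisson(lam)."""
    psi = 1 / (1 + np.exp(-params[0]))   # occupancy probability, logit scale
    lam = np.exp(params[1])              # Poisson mean, log scale
    zero = y == 0
    ll_zero = np.log((1 - psi) + psi * np.exp(-lam))      # structural or sampling zero
    ll_pos = np.log(psi) - lam + y[~zero] * np.log(lam) - gammaln(y[~zero] + 1)
    return -(zero.sum() * ll_zero + ll_pos.sum())

rng = np.random.default_rng(3)
n = 1000
occupied = rng.random(n) < 0.6                    # true psi = 0.6
y = np.where(occupied, rng.poisson(2.5, n), 0)    # true lambda = 2.5

res = minimize(zip_nll, x0=np.array([0.0, 0.0]), args=(y,))
psi_hat = 1 / (1 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

The zero term mixes structural zeros (unoccupied sites) with sampling zeros (occupied sites where no individuals were counted), which is exactly what lets occurrence and abundance be separated.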
Françoise Benz
2004-01-01
ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require getting an optimum or near-to-optimum solution. Optimization can be used to solve a lot of different problems such as network design, sets and partitions, storage and retrieval or scheduling. On the other hand, in nature, there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years several attempts have been made to develop optimization algorithms, which simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on natural annealing processes or Evolutionary Computation, based on biological evolution processes. Geneti...
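A minimal estimation-of-distribution algorithm of the kind covered in the lecture is PBIL (population-based incremental learning) applied to the OneMax problem; the parameters below are arbitrary illustrative choices, not material from the lecture itself.

```python
import numpy as np

def pbil_onemax(n_bits=30, pop=60, elite=10, lr=0.3, gens=60, seed=0):
    """PBIL, a simple estimation-of-distribution algorithm: maintain a
    probability vector, sample a population from it, and shift the vector
    toward the per-bit mean of the best samples."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    for _ in range(gens):
        population = rng.random((pop, n_bits)) < p
        fitness = population.sum(axis=1)              # OneMax: count of ones
        best = population[np.argsort(fitness)[-elite:]]
        p = (1 - lr) * p + lr * best.mean(axis=0)     # distribution update
    return p

p = pbil_onemax()
```

Unlike a genetic algorithm, no crossover or mutation operators appear: the explicit probability model over solutions is what gets evolved.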
International Nuclear Information System (INIS)
Hans, J.M.; Hall, J.B.; Moore, W.E.
1986-08-01
Population distributions and tailings areas were estimated from aerial photography for each of 21 licensed uranium millsites. Approximately 11,600 persons live within 5 kilometers of the tailings impoundments at the millsites. About 82% of these persons live near five of the millsites. No persons were found living within 5 kilometers of six of the millsites. Tailings area measurements include the surface area of tailings in impoundments, heap-leached ore, and carryover tailings in evaporation ponds. Approximately 4,000 acres of tailings surfaces were measured for the 21 millsites. About 55% of the tailings surfaces were dry, 11% wet, and the remainder ponded. The average tailings surface area for the millsites is about 200 acres and ranges from 7 to 813 acres
Directory of Open Access Journals (Sweden)
Shichao Mi
2016-02-01
Full Text Available Heterogeneous wireless sensor networks (HWSNs) can accomplish more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or from malicious nodes. This paper is concerned with a secure consensus scheme for HWSNs consisting of two types of sensor nodes. Sensor nodes (SNs) have more computation power, while low-power relay nodes (RNs) can only relay information for the sensor nodes. To address the security issues of distributed estimation in HWSNs, we exploit the heterogeneity of responsibilities between the two types of sensors and propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of the malicious node. Finally, the convergence property is proven to be guaranteed, and the simulation results validate the effectiveness and efficiency of PACS.
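PACS itself is not reproduced here; as a generic flavor of consensus that limits a malicious node's influence, a trimmed (W-MSR-style) averaging rule can be sketched. The network, parameters, and the faulty node's behavior are illustrative assumptions.

```python
import numpy as np

def trimmed_consensus(x0, neighbours, steps=300, eps=0.4, faulty=()):
    """Consensus averaging where each healthy node discards its highest and
    lowest neighbour values before averaging, so a single extreme
    (malicious) value cannot drag the network toward it."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        nxt = x.copy()
        for i in range(len(x)):
            if i in faulty:
                continue                              # faulty node ignores the protocol
            vals = np.sort(x[list(neighbours[i])])[1:-1]  # trim both extremes
            if vals.size:
                nxt[i] = x[i] + eps * (vals.mean() - x[i])
        x = nxt
    return x
```

With plain averaging, a stubborn node broadcasting an extreme value pulls every honest node to it; with trimming, the honest nodes still agree on a value inside their own initial range.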
Smith, T.; McLaughlin, D.
2017-12-01
Growing more crops to provide a secure food supply to an increasing global population will further stress land and water resources that have already been significantly altered by agriculture. The connection between production and resource use depends on crop yields and unit evapotranspiration (UET) rates that vary greatly, over both time and space. For regional and global analyses of food security it is appropriate to treat yield and UET as uncertain variables conditioned on climatic and soil properties. This study describes how probability distributions of these variables can be estimated by combining remotely sensed land use and evapotranspiration data with in situ agronomic and soils data, all available at different resolutions and coverages. The results reveal the influence of water and temperature stress on crop yield at large spatial scales. They also provide a basis for stochastic modeling and optimization procedures that explicitly account for uncertainty in the environmental factors that affect food production.
Energy Technology Data Exchange (ETDEWEB)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, a reconstruction corrected with a constant scatter estimate, and a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson’s correlation, r, proved to be a
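As a toy illustration of the fitting step, the snippet below fits a frequency-limited sum of sines and cosines to noisy 1-D samples by linear least squares; the signal, frequencies, and noise level are invented stand-ins for the aggregated MC scatter response, not the authors' actual basis or goodness-of-fit criterion:

```python
import numpy as np

def fourier_design(x, n_freq, period):
    """Design matrix for a frequency-limited sum of sines and cosines."""
    cols = [np.ones_like(x)]
    for k in range(1, n_freq + 1):
        w = 2 * np.pi * k / period
        cols.append(np.cos(w * x))
        cols.append(np.sin(w * x))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = 3 + 2 * np.cos(2 * np.pi * x) + 0.5 * np.sin(4 * np.pi * x)
noisy = truth + rng.normal(0.0, 0.2, x.size)        # noisy "MC scatter" samples

A = fourier_design(x, n_freq=3, period=1.0)
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)    # least-squares fit
fit = A @ coef
rms_error = np.sqrt(np.mean((fit - truth) ** 2))    # fit smooths out the noise
```

Because the basis is frequency-limited, the fit acts as a low-pass filter: it recovers the smooth underlying signal at a small fraction of the per-sample noise level.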
May, Henry
2014-01-01
Interest in variation in program impacts--How big is it? What might explain it?--has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random or fixed effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
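The iterative procedure in question is the familiar EM fixed-point iteration for normal mixtures. A minimal sketch for a two-component mixture with unit variances (a simplification of the paper's general setting, run on synthetic data) is:

```python
import numpy as np

def em_two_normals(x, mu_init, n_iter=100):
    """EM for a 1-D two-component normal mixture with unit variances:
    alternate posterior responsibilities (E) and parameter updates (M)."""
    pi = 0.5                              # mixing weight of component 1
    mu = np.array(mu_init, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        p0 = (1 - pi) * np.exp(-0.5 * (x - mu[0]) ** 2)
        p1 = pi * np.exp(-0.5 * (x - mu[1]) ** 2)
        r = p1 / (p0 + p1)
        # M-step: re-estimate mixing weight and component means
        pi = r.mean()
        mu[0] = ((1 - r) * x).sum() / (1 - r).sum()
        mu[1] = (r * x).sum() / r.sum()
    return pi, mu

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
pi, mu = em_two_normals(x, mu_init=[-1.0, 1.0])
```

Consistent with the local-convergence result above, the iteration converges to the nearby maximum-likelihood solution; a poor initialization can reach a different fixed point.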
Kaasenbrood, Lotte; Boekholdt, S. Matthijs; van der Graaf, Yolanda; Ray, Kausik K.; Peters, Ron J. G.; Kastelein, John J. P.; Amarenco, Pierre; LaRosa, John C.; Cramer, Maarten J. M.; Westerink, Jan; Kappelle, L. Jaap; de Borst, Gert J.; Visseren, Frank L. J.
2016-01-01
Among patients with clinically manifest vascular disease, the risk of recurrent vascular events is likely to vary. We assessed the distribution of estimated 10-year risk of recurrent vascular events in a secondary prevention population. We also estimated the potential risk reduction and residual
Kaasenbrood, Lotte; Boekholdt, S. Matthijs; Van Der Graaf, Yolanda; Ray, Kausik K.; Peters, Ron J G; Kastelein, John J P; Amarenco, Pierre; Larosa, John C.; Cramer, Maarten J M; Westerink, Jan; Kappelle, L. Jaap; De Borst, Gert J.; Visseren, Frank L J
2016-01-01
Background: Among patients with clinically manifest vascular disease, the risk of recurrent vascular events is likely to vary. We assessed the distribution of estimated 10-year risk of recurrent vascular events in a secondary prevention population. We also estimated the potential risk reduction and
Sierdsema, H.; van Loon, E.E.
2008-01-01
Birds play an increasingly prominent role in politics, nature conservation and nature management. As a consequence, up-to-date and reliable spatial estimates of bird distributions over large areas are in high demand. The requested bird distribution maps are however not easily obtained. Intensive
Energy Technology Data Exchange (ETDEWEB)
Gagnon, Pieter [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barbose, Galen L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stoll, Brady [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ehlen, Ali [National Renewable Energy Lab. (NREL), Golden, CO (United States); Zuboy, Jarret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mills, Andrew D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2018-05-15
Misforecasting the adoption of customer-owned distributed photovoltaics (DPV) can have operational and financial implications for utilities; forecasting capabilities can be improved, but generally at a cost. This paper informs this decision space by using a suite of models to explore the capacity expansion and operation of the Western Interconnection over a 15-year period across a wide range of DPV growth rates and misforecast severities. The system costs under a misforecast are compared against the costs under a perfect forecast to quantify the costs of misforecasting. Using a simplified probabilistic method applied to these modeling results, an analyst can make a first-order estimate of the financial benefit of improving a utility’s forecasting capabilities, and thus be better informed about whether to make such an investment. For example, under our base assumptions, a utility with 10 TWh per year of retail electric sales that initially estimates that DPV growth could range from 2% to 7.5% of total generation over the next 15 years could expect total present-value savings of approximately $4 million if it could reduce the severity of misforecasting to within ±25%. Utility resource planners can compare those savings against the costs needed to achieve that level of precision, to guide their decision on whether to make an investment in tools or resources.
International Nuclear Information System (INIS)
Konstam, M.A.; Strauss, H.W.; Alpert, N.M.; Miller, S.W.; Murphy, R.X.; Greene, R.E.; McKusick, K.A.
1979-01-01
To determine whether a correlation exists between pulmonary arterial (PA) pressure (Pₐ) and the distribution of pulmonary blood flow, this distribution was measured in four upright dogs in the control state and during intravenous infusions of epinephrine or prostaglandin F₂α. During suspension of respiration, 15 mCi of Xe-133 were injected intravenously, and perfusion and equilibration lung images were recorded with a scintillation camera. The procedure was performed several times on each dog, with and without pharmacological elevation of PA pressure by 5 to 50 cm H₂O. For each scintigram, the relative blood flow per unit ventilated lung volume (F) was plotted against centimeters above the hilum (h). Pulmonary arterial pressure was derived from each curve, assuming the relation F = B(Pₐ − hD)², where B = constant and D = specific gravity of blood. Calculated PA pressure correlated strongly (r = 0.83) with measured PA pressure, suggesting a possible means of noninvasive estimation of PA pressure
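Under the stated model, √F is linear in h, so Pₐ can be recovered by an ordinary linear fit of √F against height; a sketch with synthetic data (the pressure, B, and height range are invented for illustration, not the dogs' measurements) is:

```python
import numpy as np

D = 1.055  # approximate specific gravity of blood

def estimate_pa(h, F, D=D):
    """Fit sqrt(F) = sqrt(B) * (Pa - h * D), which is linear in h,
    then recover Pa from the intercept and slope."""
    slope, intercept = np.polyfit(h, np.sqrt(F), 1)
    sqrt_B = -slope / D          # slope = -sqrt(B) * D
    return intercept / sqrt_B    # intercept = sqrt(B) * Pa

# Synthetic flow-vs-height curve with a known Pa of 20 cm H2O
Pa_true, B = 20.0, 0.01
h = np.linspace(-5.0, 10.0, 30)            # cm above (+) / below (-) the hilum
F = B * (Pa_true - h * D) ** 2
Pa_est = estimate_pa(h, F)
```

On noise-free data the fit recovers Pa exactly; on real scintigram curves the same regression gives the noninvasive estimate described above.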
Shao, Zhongshi; Pi, Dechang; Shao, Weishi
2018-05-01
This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
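P-EDA's model and operators are specialized to blocking flow-shops; the sketch below shows only the core EDA loop (sample from a probabilistic model, select superior individuals, re-estimate the model) on the toy OneMax problem, as a deliberately simplified stand-in rather than the authors' algorithm:

```python
import numpy as np

def simple_eda(n_bits=30, pop=100, elite=20, iters=100, lr=0.2, seed=2):
    """Univariate EDA on OneMax: maximize the number of 1-bits by
    iteratively re-estimating per-bit probabilities from elite samples."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                  # probabilistic model
    best_x, best_f = None, -1
    for _ in range(iters):
        X = rng.random((pop, n_bits)) < p     # sample a population
        fitness = X.sum(axis=1)
        i = int(np.argmax(fitness))
        if fitness[i] > best_f:
            best_f, best_x = int(fitness[i]), X[i]
        top = X[np.argsort(fitness)[-elite:]]     # superior individuals
        p = (1 - lr) * p + lr * top.mean(axis=0)  # model update
        p = np.clip(p, 0.05, 0.95)            # diversity maintenance
    return best_x, best_f

best_x, best_f = simple_eda()                 # best_f approaches 30
```

The clip step plays the role of the paper's diversity-maintaining scheme: it stops the model from collapsing onto a single solution too early.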
An overview of distributed microgrid state estimation and control for smart grids.
Rana, Md Masud; Li, Li
2015-02-12
Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid therefore forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.
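The accuracy-dependent weighting is not detailed in the abstract; as a baseline, the textbook discrete-time Kalman filter that such a scheme extends can be sketched for a scalar, hypothetical microgrid state:

```python
import numpy as np

def kalman_filter(zs, A, C, Q, R, x0, P0):
    """Textbook scalar Kalman filter: predict with the model,
    then correct with each incoming measurement z."""
    x, P, estimates = x0, P0, []
    for z in zs:
        x, P = A * x, A * P * A + Q          # predict
        K = P * C / (C * P * C + R)          # Kalman gain
        x = x + K * (z - C * x)              # measurement update
        P = (1 - K * C) * P
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(3)
true_state = 1.0                             # assumed constant system state
zs = true_state + rng.normal(0.0, 0.5, 200)  # noisy sensor readings
est = kalman_filter(zs, A=1.0, C=1.0, Q=1e-6, R=0.25, x0=0.0, P0=1.0)
```

With a small process noise Q, the gain K shrinks over time and the estimate settles near the true state despite the noisy measurements.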
An Overview of Distributed Microgrid State Estimation and Control for Smart Grids
Rana, Md Masud; Li, Li
2015-01-01
Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid therefore forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method. PMID:25686316
Spatial Distribution of Estimated Wind-Power Royalties in West Texas
Directory of Open Access Journals (Sweden)
Christian Brannstrom
2015-12-01
Full Text Available Wind-power development in the U.S. occurs primarily on private land, producing royalties for landowners through private contracts with wind-farm operators. Texas, the U.S. leader in wind-power production with well-documented support for wind power, has virtually all of its ~12 GW of wind capacity sited on private lands. Determining the spatial distribution of royalty payments from wind energy is a crucial first step to understanding how renewable power may alter land-based livelihoods of some landowners and, as a result, possibly encourage land-use changes. We located ~1700 wind turbines (~2.7 GW) on 241 landholdings in Nolan and Taylor counties, Texas, a major wind-development region. We estimated total royalties to be ~$11.5 million per year, with a mean annual royalty per landowner of $47,879, but with significant differences among quintiles and between two sub-regions. The unequal distribution of royalties results from land-tenure patterns established before wind-power development because of a “property advantage,” defined as the pre-existing land-tenure patterns that benefit the fraction of rural landowners who receive wind turbines. A “royalty paradox” describes the observation that royalties flow to a small fraction of landowners even though support for wind power exceeds 70 percent.
CSIR Research Space (South Africa)
Maliage, M
2012-05-01
Full Text Available The purpose of this paper is to validate SolTrace for concentrating solar investigations at CSIR by means of a test case: the comparison of the flux distribution in the focal spot of a 1.25 m² target-aligned heliostat predicted by the ray tracing...
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo (MCMC) sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope γ − 1 of the temperature-density relation, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
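The full PCA-regulated continuum model is beyond a short sketch, but the MCMC machinery it rests on can be illustrated with a minimal random-walk Metropolis sampler for a single parameter (a toy Gaussian-mean problem, not the quasar continuum model):

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis: propose x' = x + N(0, step), accept
    with probability min(1, post(x') / post(x))."""
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        proposal = x + rng.normal(0.0, step)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = proposal, lp_prop
        chain.append(x)
    return np.array(chain)

rng = np.random.default_rng(4)
data = rng.normal(2.0, 1.0, 500)                     # toy data, true mean 2
log_post = lambda m: -0.5 * np.sum((data - m) ** 2)  # flat prior on m
chain = metropolis(log_post, x0=0.0, n_steps=5000, step=0.1, rng=rng)
posterior_mean = chain[2000:].mean()                 # discard burn-in
```

In the paper's setting, the nuisance PCA coefficients would be sampled alongside the parameter of interest, so averaging the chain marginalizes over continuum uncertainty automatically.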
The Social Distribution of Health: Estimating Quality-Adjusted Life Expectancy in England.
Love-Koh, James; Asaria, Miqdad; Cookson, Richard; Griffin, Susan
2015-07-01
To model the social distribution of quality-adjusted life expectancy (QALE) in England by combining survey data on health-related quality of life with administrative data on mortality. Health Survey for England data sets for 2010, 2011, and 2012 were pooled (n = 35,062) and used to model health-related quality of life as a function of sex, age, and socioeconomic status (SES). Office for National Statistics mortality rates were used to construct life tables for age-sex-SES groups. These quality-of-life and length-of-life estimates were then combined to predict QALE as a function of these characteristics. Missing data were imputed, and Monte-Carlo simulation was used to estimate standard errors. Sensitivity analysis was conducted to explore alternative regression models and measures of SES. Socioeconomic inequality in QALE at birth was estimated at 11.87 quality-adjusted life-years (QALYs), with a sex difference of 1 QALY. When the socioeconomic-sex subgroups are ranked by QALE, a differential of 10.97 QALYs is found between the most and least healthy quintile groups. This differential can be broken down into a life expectancy difference of 7.28 years and a quality-of-life adjustment of 3.69 years. The methods proposed in this article refine simple binary quality-adjustment measures such as the widely used disability-free life expectancy, providing a more accurate picture of overall health inequality in society than has hitherto been available. The predictions also lend themselves well to the task of evaluating the health inequality impact of interventions in the context of cost-effectiveness analysis. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
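The quality-weighting of life-table years can be illustrated with a Sullivan-style calculation; the age bands, years lived, and quality weights below are invented for illustration, and unlike the article the sketch ignores SES differences in mortality (so it captures only the quality-of-life part of the gap):

```python
import numpy as np

def qale(years_lived, quality):
    """Quality-adjusted life expectancy: years lived in each age band,
    weighted by that band's health-related quality-of-life score."""
    return float(np.sum(np.asarray(years_lived) * np.asarray(quality)))

# Hypothetical 4-band life table (ages 0-19, 20-39, 40-59, 60+)
years_lived = [19.8, 19.5, 18.0, 12.0]
quality_hi = [0.95, 0.92, 0.88, 0.78]   # healthiest quintile (assumed weights)
quality_lo = [0.92, 0.86, 0.78, 0.65]   # least healthy quintile (assumed weights)

gap = qale(years_lived, quality_hi) - qale(years_lived, quality_lo)
```

The article's full method also builds separate life tables per SES group, so its 10.97-QALY differential combines a length-of-life gap with the quality-of-life gap sketched here.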
Estimation of Speciation and Distribution of ¹³¹I in Urban and Natural Environments
Energy Technology Data Exchange (ETDEWEB)
Hormann, Volker; Fischer, Helmut W. [University of Bremen, Institute of Environmental Physics, Otto-Hahn-Allee 1, 28359 Bremen (Germany)
2014-07-01
¹³¹I is a radionuclide that may be introduced into natural and urban environments via several pathways. As a result of nuclear accidents, it may be washed out of the air or settle onto the ground by dry deposition. In urban landscapes it is again washed off by rain, partly introduced into the sewer system and thus transported to the nearest wastewater treatment plant, where it may accumulate in certain compartments. In rural landscapes it may penetrate the soil and be more or less available for plant uptake, depending on chemical and physical conditions. On a regular basis, ¹³¹I is released into the urban sewer system in the course of therapeutic and diagnostic treatment of patients with thyroid diseases. The speciation of iodine in the environment is complex. Depending on redox state and biological activity, it may appear as I⁻, IO₃⁻, I₂ or bound to organic molecules (e.g. humic acids). Moreover, some of these species are bound to surfaces of particles suspended in water or present in soil, e.g. hydrous ferric oxides (HFO). It is to be expected that the speciation and solid-liquid distribution of iodine strongly depend on environmental conditions. In this study, the speciation and solid-liquid distribution of iodine in environmental samples such as waste water, sewage sludge and soil are estimated with the help of the geochemical code PHREEQC. The calculations are carried out using chemical equilibrium and sorption data from the literature and chemical analyses of the media. We present the results of these calculations and compare them with experimental results of medical ¹³¹I in waste water and sewage sludge. The output of this study will be used in future work, where the transport and distribution of iodine in wastewater treatment plants and in irrigated agricultural soils will be modeled. (authors)
A Novel Methodology to Estimate Metabolic Flux Distributions in Constraint-Based Models
Directory of Open Access Journals (Sweden)
Francesco Alessandro Massucci
2013-09-01
Full Text Available Quite generally, constraint-based metabolic flux analysis describes the space of viable flux configurations for a metabolic network as a high-dimensional polytope defined by the linear constraints that enforce the balancing of production and consumption fluxes for each chemical species in the system. In some cases, the complexity of the solution space can be reduced by performing an additional optimization, while in other cases, knowing the range of variability of fluxes over the polytope provides a sufficient characterization of the allowed configurations. There are cases, however, in which the thorough information encoded in the individual distributions of viable fluxes over the polytope is required. Obtaining such distributions is known to be a highly challenging computational task when the dimensionality of the polytope is sufficiently large, and the problem of developing cost-effective ad hoc algorithms has recently seen a major surge of interest. Here, we propose a method that allows us to perform the required computation heuristically in a time scaling linearly with the number of reactions in the network, overcoming some limitations of similar techniques employed in recent years. As a case study, we apply it to the analysis of the human red blood cell metabolic network, whose solution space can be sampled by different exact techniques, like Hit-and-Run Monte Carlo (scaling roughly like the third power of the system size). Remarkably accurate estimates for the true distributions of viable reaction fluxes are obtained, suggesting that, although further improvements are desirable, our method enhances our ability to analyze the space of allowed configurations for large biochemical reaction networks.
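For contrast, the Hit-and-Run sampler cited as an exact baseline is compact to state. A minimal version over a 2-D triangle (a stand-in for the flux polytope; the real application works in the much higher-dimensional space cut out by the stoichiometric constraints) is:

```python
import numpy as np

def hit_and_run(x, n_samples, rng):
    """Hit-and-Run over the triangle {x >= 0, y >= 0, x + y <= 1}:
    pick a random direction, find the feasible chord, jump uniformly."""
    A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])  # faces: A @ x <= b
    b = np.array([0.0, 0.0, 1.0])
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=2)
        d /= np.linalg.norm(d)                 # random unit direction
        t_lo, t_hi = -np.inf, np.inf
        for a_i, b_i in zip(A, b):             # clip the line to each face
            ad, slack = a_i @ d, b_i - a_i @ x
            if ad > 1e-12:
                t_hi = min(t_hi, slack / ad)
            elif ad < -1e-12:
                t_lo = max(t_lo, slack / ad)
        x = x + rng.uniform(t_lo, t_hi) * d    # uniform jump along the chord
        samples.append(x)
    return np.array(samples)

rng = np.random.default_rng(5)
samples = hit_and_run(np.array([0.2, 0.2]), 20000, rng)
# marginal means approach the uniform-triangle value of 1/3
```

Each iteration costs one pass over the constraints, but mixing slows badly as the dimension grows, which is the cubic-in-size cost the heuristic method above avoids.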
Reduced complexity FFT-based DOA and DOD estimation for moving target in bistatic MIMO radar
Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2016-01-01
classification (2D-MUSIC) and reduced-dimension MUSIC (RD-MUSIC) algorithms. It is shown by simulations, our proposed algorithm has better estimation performance and lower computational complexity compared to the 2D-MUSIC and RD-MUSIC algorithms. Moreover
Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot
CSIR Research Space (South Africa)
Burke, Michael G
2010-11-01
Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...
Bandyopadhyay, Saptarshi
guidance algorithms using results from numerical simulations and closed-loop hardware experiments on multiple quadrotors. In the second part of this dissertation, we present two novel discrete-time algorithms for distributed estimation, which track a single target using a network of heterogeneous sensing agents. In the Distributed Bayesian Filtering (DBF) algorithm, the sensing agents combine their normalized likelihood functions using the logarithmic opinion pool and the discrete-time dynamic average consensus algorithm. Each agent's estimated likelihood function converges to an error ball centered on the joint likelihood function of the centralized multi-sensor Bayesian filtering algorithm. Using a new proof technique, the convergence, stability, and robustness properties of the DBF algorithm are rigorously characterized. The explicit bounds on the time step of the robust DBF algorithm are shown to depend on the time scale of the target dynamics. Furthermore, the DBF algorithm for linear-Gaussian models can be cast into a modified form of the Kalman information filter. In the Bayesian Consensus Filtering (BCF) algorithm, the agents combine their estimated posterior pdfs multiple times within each time step using the logarithmic opinion pool scheme. Thus, each agent's consensual pdf minimizes the sum of Kullback-Leibler divergences with the local posterior pdfs. The performance and robustness properties of these algorithms are validated using numerical simulations. In the third part of this dissertation, we present an attitude control strategy and a new nonlinear tracking controller for a spacecraft carrying a large object, such as an asteroid or a boulder. If the captured object is larger than or comparable in size to the spacecraft and has significant modeling uncertainties, conventional nonlinear control laws that use exact feed-forward cancellation are not suitable because they exhibit a large resultant disturbance torque.
The proposed nonlinear tracking control law guarantees
International Nuclear Information System (INIS)
Karamyan, S.A.; Adam, J.; Belov, A.G.; Chaloun, P.; Norseev, Yu.V.; Stegajlov, V.I.
1997-01-01
Fission-fragment mass distribution has been measured via the cumulative yields of radionuclides detected in the ²³²Th(γ,f) reaction at Bremsstrahlung endpoint energies of 12 and 24 MeV. Upper limits on the yields have been estimated for the light nuclei ²⁴Na, ²⁸Mg, ³⁸S etc. for Th and Ta targets exposed to the 24 MeV Bremsstrahlung. The results are discussed in terms of multimodal fission phenomena and cluster emission from a deformed fissioning system or from a compound nucleus
Fencl, Martin; Jörg, Rieckermann; Vojtěch, Bareš
2015-04-01
Commercial microwave links (MWL) are point-to-point radio systems which are used in backhaul networks of cellular operators. For several years, they have been suggested as rainfall sensors complementary to rain gauges and weather radars, because, first, they operate at frequencies where rain drops represent significant source of attenuation and, second, cellular networks almost completely cover urban and rural areas. Usually, path-average rain rates along a MWL are retrieved from the rain-induced attenuation of received MWL signals with a simple model based on a power law relationship. The model is often parameterized based on the characteristics of a particular MWL, such as frequency, polarization and the drop size distribution (DSD) along the MWL. As information on the DSD is usually not available in operational conditions, the model parameters are usually considered constant. Unfortunately, this introduces bias into rainfall estimates from MWL. In this investigation, we propose a generic method to eliminate this bias in MWL rainfall estimates. Specifically, we search for attenuation statistics which makes it possible to classify rain events into distinct groups for which same power-law parameters can be used. The theoretical attenuation used in the analysis is calculated from DSD data using T-Matrix method. We test the validity of our approach on observations from a dedicated field experiment in Dübendorf (CH) with a 1.85-km long commercial dual-polarized microwave link transmitting at a frequency of 38 GHz, an autonomous network of 5 optical distrometers and 3 rain gauges distributed along the path of the MWL. The data is recorded at a high temporal resolution of up to 30s. It is further tested on data from an experimental catchment in Prague (CZ), where 14 MWLs, operating at 26, 32 and 38 GHz frequencies, and reference rainfall from three RGs is recorded every minute. Our results suggest that, for our purpose, rain events can be nicely characterized based on
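The power-law retrieval step itself is a one-liner; in the sketch below, the coefficients a and b are placeholders (real values depend on frequency, polarization, and the DSD, e.g. via the ITU-R P.838 tables), so the numbers are purely illustrative:

```python
def rain_rate(attenuation_db, length_km, a=0.3, b=0.9):
    """Invert the power law A = a * R**b * L for the path-average
    rain rate R (mm/h), given total rain-induced attenuation A (dB)."""
    specific_attenuation = attenuation_db / length_km   # dB/km
    return (specific_attenuation / a) ** (1.0 / b)

# A 1.85 km link (as in the Duebendorf experiment) observing 10 dB
# of rain-induced attenuation, with the placeholder coefficients:
r = rain_rate(10.0, 1.85)
```

Because a and b vary with the drop size distribution, holding them fixed is exactly the bias source that the event classification proposed above is meant to remove.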
Directory of Open Access Journals (Sweden)
T. Viskari
2012-12-01
Full Text Available An Extended Kalman Filter (EKF) is used to estimate particle size distributions from observations. The focus here is on the practical application of EKF to simultaneously merge information from different types of experimental instruments. Every 10 min, the prior state estimate is updated with size-segregating measurements from a Differential Mobility Particle Sizer (DMPS) and an Aerodynamic Particle Sizer (APS) as well as integrating measurements from a nephelometer. Error covariances are approximate in our EKF implementation. The observation operator assumes a constant particle density and refractive index. The state estimates are compared to particle size distributions that are a composite of DMPS and APS measurements. The impact of each instrument on the size distribution estimate is studied. Kalman filtering of DMPS and APS yielded a temporally consistent state estimate. This state estimate is continuous over the overlapping size range of DMPS and APS. Inclusion of the integrating measurements further reduces the effect of measurement noise. Even with the present approximations, EKF is shown to be a very promising method to estimate particle size distributions with observations from different types of instruments.
International Nuclear Information System (INIS)
Clementi, Luis A.; Vega, Jorge R.; Gugliotta, Luis M.; Quirantes, Arturo
2012-01-01
A numerical method is proposed for the characterization of core–shell spherical particles from static light scattering (SLS) measurements. The method is able to estimate the core size distribution (CSD) and the particle size distribution (PSD) through the following two-step procedure: (i) the estimation of the bivariate core–particle size distribution (C–PSD), by solving a linear ill-conditioned inverse problem through a generalized Tikhonov regularization strategy, and (ii) the calculation of the CSD and the PSD from the estimated C–PSD. First, the method was evaluated on the basis of several simulated examples, with polystyrene–poly(methyl methacrylate) core–shell particles of different CSDs and PSDs. Then, two samples of hematite–yttrium basic carbonate core–shell particles were successfully characterized. In all analyzed examples, acceptable estimates of the PSD and the average diameter of the CSD were obtained. Based on the single-scattering Mie theory, the proposed method is an effective tool for characterizing core–shell colloidal particles larger than their Rayleigh limits without requiring any a priori assumption on the shapes of the size distributions. Under such conditions, the PSDs can always be adequately estimated, while acceptable CSD estimates are obtained when the core–shell particles exhibit either a high optical contrast, or a moderate optical contrast but with a high ‘average core diameter’/‘average particle diameter’ ratio. -- Highlights: ► Particles with core–shell morphology are characterized by static light scattering. ► Core size distribution and particle size distribution are successfully estimated. ► Simulated and experimental examples are used to validate the numerical method. ► The positive effect of a large core–shell optical contrast is investigated. ► No a priori assumption on the shapes of the size distributions is required.
Estimation of peak heat flux onto the targets for CFETR with extended divertor leg
International Nuclear Information System (INIS)
Zhang, Chuanjia; Chen, Bin; Xing, Zhe; Wu, Haosheng; Mao, Shifeng; Luo, Zhengping; Peng, Xuebing; Ye, Minyou
2016-01-01
Highlights: • A hypothetical geometry is assumed to extend the outer divertor leg in CFETR. • A density-scan SOLPS simulation is done to study the peak heat flux onto the targets. • The attached–detached regime transition in the outer divertor occurs at a lower puffing rate. • An unexpected delay of the attached–detached regime transition occurs in the inner divertor. - Abstract: The China Fusion Engineering Test Reactor (CFETR) is now in its conceptual design phase. CFETR is proposed as a complement to ITER for demonstrating fusion energy. The divertor is a crucial component which faces the plasma and handles huge heat power for CFETR and future fusion reactors. To explore an effective way to exhaust heat, various methods to reduce the heat flux to the divertor targets should be considered for CFETR. In this work, the effect of an extended outer divertor leg on the peak heat flux is studied. The magnetic configuration of the long-leg divertor is obtained by EFIT and the Tokamak Simulation Code (TSC), while a hypothetical geometry is assumed to extend the outer divertor leg as far as possible inside the vacuum vessel. A SOLPS simulation is performed to study the peak heat flux of the long-leg divertor for CFETR. D₂ gas puffing is used, and increasing the puffing rate increases the plasma density. Peak heat fluxes below 10 MW/m² are achieved on both the inner and outer targets. A comparison of the peak heat flux between the long-leg and conventional divertors shows that the attached–detached regime transition of the outer divertor occurs at a lower gas puffing rate for the long-leg divertor. For the inner divertor, even though the configuration is almost the same, the situation is the opposite.
van der Laan, Mark J
2011-01-01
The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often comprising hundreds of thousands of measurements for a single subject. The field is ready to move toward clear, objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool, so that the subjective choices made by humans are now made by the machine, and (2) the targeting of the fit of the probability distribution of the data toward the target parameter.
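Point (1), cross-validation as a machine-driven estimator selector, can be sketched as follows; the candidate estimators (polynomial fits of increasing degree) and the data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)

def cv_risk(degree, k=5):
    """Mean squared prediction error of a degree-`degree` polynomial fit,
    estimated by k-fold cross-validation."""
    idx = np.arange(len(x)) % k
    errs = []
    for fold in range(k):
        tr, te = idx != fold, idx == fold
        coef = np.polyfit(x[tr], y[tr], degree)
        errs.append(np.mean((np.polyval(coef, x[te]) - y[te]) ** 2))
    return np.mean(errs)

# The machine, not the analyst, picks the estimator with lowest CV risk.
candidates = [1, 3, 5, 9]
best = min(candidates, key=cv_risk)
```

The same pattern generalizes to arbitrary libraries of candidate estimators, which is the sense in which cross-validation replaces subjective model choice.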
Directory of Open Access Journals (Sweden)
Chris Bambey Guure
2012-01-01
Full Text Available The Weibull distribution is one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Many studies in the literature have sought to determine the best method for estimating its parameters. Recently, much attention has been given to Bayesian estimation, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and Bayesian estimators using an extension of the Jeffreys prior with three loss functions, namely the linear exponential loss, the general entropy loss, and the squared-error loss, for estimating the two-parameter Weibull failure-time distribution. These methods are compared by mean squared error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of the Jeffreys prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter α and the shape parameter β, for the given values of the extension of the Jeffreys prior.
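The maximum likelihood side of the comparison can be sketched directly: for the two-parameter Weibull, the shape MLE solves a one-dimensional profile-score equation, after which the scale follows in closed form. The Bayesian estimators under the three loss functions would require posterior integration and are not reproduced here.

```python
import numpy as np

def weibull_mle(x, lo=0.01, hi=50.0, tol=1e-10):
    """Maximum likelihood estimates (shape, scale) for the two-parameter
    Weibull, via bisection on the monotone profile score for the shape."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)

    def score(k):
        xk = x ** k
        return np.sum(xk * logx) / np.sum(xk) - 1.0 / k - logx.mean()

    a, b = lo, hi              # score(a) < 0 < score(b) brackets the root
    while b - a > tol:
        m = 0.5 * (a + b)
        if score(m) > 0:
            b = m
        else:
            a = m
    k = 0.5 * (a + b)
    lam = np.mean(x ** k) ** (1.0 / k)   # scale in closed form given shape
    return k, lam

rng = np.random.default_rng(2)
sample = rng.weibull(2.0, size=5000) * 1.5   # true shape 2.0, true scale 1.5
beta_hat, alpha_hat = weibull_mle(sample)
```

With 5000 observations the estimates land close to the true shape and scale, which is the baseline the Bayesian estimators are judged against in the paper's simulations.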
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of integrating flexible propensity score modeling (nonparametric or machine learning approaches) into semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how collaborative targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence-function-based variance estimators underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Costa, Anna; Molnar, Peter; Anghileri, Daniela
2017-04-01
Suspended sediment is associated with nutrient and contaminant transport in water courses. Estimating suspended sediment load is relevant for water-quality assessment, recreational activities, reservoir sedimentation issues, and ecological habitat assessment. Suspended sediment concentration (SSC) along channels is usually reproduced by suspended sediment rating curves, which relate SSC to discharge with a power-law equation. Large uncertainty characterizes rating curves based only on discharge, because sediment supply is not explicitly accounted for. The aim of this work is to develop a source-oriented formulation of suspended sediment dynamics and to estimate suspended sediment yield at the outlet of a large Alpine catchment (upper Rhône basin, Switzerland). We propose a novel modelling approach for suspended sediment which accounts for sediment supply by considering the variety of sediment sources in an Alpine environment, i.e. the spatial location of sediment sources (e.g. distance from the outlet and lithology) and the different processes of sediment production and transport (e.g. by rainfall, overland flow, snowmelt). Four main sediment sources, typical of Alpine environments, are included in our model: glacial erosion, hillslope erosion, channel erosion, and erosion by mass-wasting processes. The predictive model is based on gridded datasets of precipitation and air temperature, which drive spatially distributed degree-day models to simulate snowmelt and ice melt and determine erosive rainfall. A mass balance at the grid scale determines daily runoff. Each cell belongs to a different sediment source (e.g. hillslope, channel, glacier cell). The amount of sediment entrained and transported in suspension is simulated through non-linear functions of runoff, specific to the sediment production and transport processes occurring at the grid scale (e.g. rainfall erosion, snowmelt-driven overland flow). Erodibility factors identify different lithological units
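Two of the building blocks named above, a power-law rating curve and a degree-day melt model, can be sketched briefly; the coefficients are illustrative, not calibrated values for the Rhône basin.

```python
import numpy as np

rng = np.random.default_rng(4)

# --- Power-law rating curve SSC = a * Q^b, fitted by log-log least squares.
q = rng.uniform(5, 200, 300)                                    # discharge [m^3/s]
ssc = 0.8 * q ** 1.6 * np.exp(0.2 * rng.standard_normal(300))   # noisy SSC [mg/L]
b_hat, log_a_hat = np.polyfit(np.log(q), np.log(ssc), 1)
a_hat = np.exp(log_a_hat)

# --- Degree-day melt model: melt = DDF * max(T - T0, 0), applied per grid
# cell in the kind of spatially distributed model described above.
def degree_day_melt(temp_c, ddf=4.0, t0=0.0):
    """Daily melt [mm w.e.] from air temperature [deg C]; ddf is the
    degree-day factor [mm w.e. / (deg C * day)]."""
    return ddf * np.maximum(temp_c - t0, 0.0)

melt = degree_day_melt(np.array([-5.0, 0.0, 3.0]))
```

The source-oriented model replaces the single discharge-only rating curve with process-specific non-linear runoff functions per source, but the fitting mechanics are analogous.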
International Nuclear Information System (INIS)
Sekowski, M.; Burenkov, A.; Martinez-Limia, A.; Hernandez-Mangas, J.; Ryssel, H.
2008-01-01
Angular distributions of ion-sputtered germanium and silicon atoms are investigated in this work. Experiments are performed at grazing ion incidence angles, where the resulting angular distributions are asymmetrical with respect to the polar angle of the sputtered atoms. The experiments are compared with Monte Carlo simulations from different programs. We present an improved model for the angular distribution that includes an additional dependence on the ion incidence angle.
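As a baseline for such comparisons, the idealized cosine (Knudsen) emission law, from which grazing-incidence distributions deviate, can be sampled by CDF inversion; this sketch does not include the paper's incidence-angle-dependent correction.

```python
import numpy as np

# Under the cosine emission law the polar-angle density over the hemisphere
# is f(theta) = 2*sin(theta)*cos(theta), so F(theta) = sin(theta)**2 and
# inversion gives theta = arcsin(sqrt(u)) for u uniform on [0, 1).
rng = np.random.default_rng(8)
u = rng.uniform(size=100_000)
theta = np.arcsin(np.sqrt(u))

mean_theta = theta.mean()   # analytic mean for this law is pi/4
```

Monte Carlo sputtering codes effectively tabulate the deviation of the simulated emission angles from this symmetric baseline.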
Estimates of the Size Distribution of Meteoric Smoke Particles From Rocket-Borne Impact Probes
Antonsen, Tarjei; Havnes, Ove; Mann, Ingrid
2017-11-01
Ice particles populating noctilucent clouds and being responsible for polar mesospheric summer echoes exist around the mesopause in the altitude range from 80 to 90 km during polar summer. The particles are observed when temperatures around the mesopause reach a minimum, and it is presumed that they consist of water ice with inclusions of smaller mesospheric smoke particles (MSPs). This work provides estimates of the mean size distribution of MSPs through analysis of collision fragments of the ice particles populating the mesospheric dust layers. We have analyzed data from two triplets of mechanically identical rocket probes, MUltiple Dust Detector (MUDD), which are Faraday bucket detectors with impact grids that partly fragment incoming ice particles. The MUDD probes were launched from Andøya Space Center (69°17'N, 16°1'E) on two payloads during the MAXIDUSTY campaign on 30 June and 8 July 2016, respectively. Our analysis shows that it is unlikely that ice particles produce significant current to the detector, and that MSPs dominate the recorded current. The size distributions obtained from these currents, which reflect the MSP sizes, are described by inverse power laws with exponents of k ≈ [3.3 ± 0.7, 3.7 ± 0.5] and k ≈ [3.6 ± 0.8, 4.4 ± 0.3] for the respective flights. We derived two k values for each flight depending on whether the charging probability is assumed proportional to the area or the volume of the fragments. We also confirm that MSPs are probably abundant inside mesospheric ice particles larger than a few nanometers, and that the volume filling factor can be a few percent under reasonable assumptions about particle properties.
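Inverse power-law exponents like the k values quoted above can be estimated from measured sizes with the standard continuous (Pareto) maximum-likelihood estimator; the synthetic sample and `r_min` below are illustrative assumptions, not MUDD data.

```python
import numpy as np

def power_law_exponent(sizes, r_min):
    """MLE (Hill-type) estimator of k for a density f(r) ~ r**(-k), r >= r_min."""
    r = np.asarray(sizes, dtype=float)
    r = r[r >= r_min]
    return 1.0 + r.size / np.sum(np.log(r / r_min))

# Synthetic check: draw from a power law with exponent k = 3.6 (within the
# range reported for the flights) by inverse-CDF sampling.
rng = np.random.default_rng(5)
k_true, r_min = 3.6, 0.5                           # r_min illustrative
u = rng.uniform(size=20_000)
r = r_min * (1.0 - u) ** (-1.0 / (k_true - 1.0))
k_hat = power_law_exponent(r, r_min)
```

The estimator's standard error scales as (k − 1)/√n, so a few thousand fragments suffice for uncertainties comparable to those quoted.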
Distribution of near-surface permafrost in Alaska: estimates of present and future conditions
Pastick, Neal J.; Jorgenson, M. Torre; Wylie, Bruce K.; Nield, Shawn J.; Johnson, Kristofer D.; Finley, Andrew O.
2015-01-01
High-latitude regions are experiencing rapid and extensive changes in ecosystem composition and function as the result of increases in average air temperature. Increasing air temperatures have led to widespread thawing and degradation of permafrost, which in turn has affected ecosystems, socioeconomics, and the carbon cycle of high latitudes. Here we account for complex interactions among surface and subsurface conditions to map near-surface permafrost through decision and regression tree approaches that statistically and spatially extend field observations using remotely sensed imagery, climatic data, and thematic maps of a wide range of surface and subsurface biophysical characteristics. The data fusion approach generated medium-resolution (30-m pixels) maps of near-surface (within 1 m) permafrost, active-layer thickness, and associated uncertainty estimates throughout mainland Alaska. Our calibrated models (overall test accuracy of ~85%) were used to quantify changes in permafrost distribution under varying future climate scenarios, assuming no other changes in biophysical factors. Models indicate that near-surface permafrost underlies 38% of mainland Alaska and that near-surface permafrost will disappear on 16 to 24% of the landscape by the end of the 21st century. Simulations suggest that near-surface permafrost degradation is more probable in central regions of Alaska than in more northerly regions. Taken together, these results have obvious implications for potential remobilization of frozen soil carbon pools under warmer temperatures. Additionally, warmer and drier conditions may increase fire activity and severity, which may exacerbate rates of permafrost thaw and carbon remobilization relative to climate alone. The mapping of permafrost distribution across Alaska is important for land-use planning, environmental assessments, and a wide array of geophysical studies.
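The decision-tree idea, thresholding biophysical predictors to classify permafrost presence, can be reduced to a one-split sketch on synthetic data; the temperature threshold and noise level are hypothetical, not the calibrated Alaska models.

```python
import numpy as np

def best_stump(x, y):
    """One-split decision stump: choose the threshold on feature x that
    minimizes misclassification of binary labels y (0/1)."""
    best_t, best_err = None, np.inf
    for t in np.unique(x):
        pred = (x < t).astype(int)   # e.g. permafrost where temperature < t
        err = min(np.mean(pred != y), np.mean((1 - pred) != y))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Toy data: permafrost (1) mostly where mean annual air temperature < -2 C,
# with 5% label noise standing in for field-observation error.
rng = np.random.default_rng(7)
temp = rng.uniform(-10, 10, 500)
permafrost = ((temp < -2.0) ^ (rng.uniform(size=500) < 0.05)).astype(int)
threshold, err = best_stump(temp, permafrost)
```

A full decision or regression tree applies this split search recursively across many predictors, which is how the data fusion approach extends sparse field observations spatially.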
Directory of Open Access Journals (Sweden)
J. Szilagyi
2009-05-01
Full Text Available Under simplifying conditions, the catchment-scale vapor pressure at the drying land surface can be calculated as a function of its watershed-representative temperature (T_s) by the wet-surface equation (WSE) of the Complementary Relationship of evaporation, similar to the wet-bulb equation used in meteorology to calculate the vapor pressure at the dry-bulb thermometer temperature. The corresponding watershed ET rate,
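The closely related psychrometric computation can be sketched numerically: given air temperature and vapor pressure, find the wet-bulb temperature Tw satisfying e_sat(Tw) = e_a + γ(Ta − Tw), using the Magnus saturation formula and bisection. The constants are standard textbook values and only analogous to the WSE described above, which is not reproduced in this excerpt.

```python
import numpy as np

def e_sat(t_c):
    """Saturation vapor pressure [kPa], Magnus formula over water."""
    return 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))

def wet_bulb(t_air, e_air, gamma=0.066):
    """Solve e_sat(Tw) = e_air + gamma * (t_air - Tw) for Tw by bisection;
    gamma is the psychrometric constant [kPa/degC] (illustrative value)."""
    lo, hi = -40.0, t_air   # the residual is increasing in Tw on this bracket
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if e_sat(mid) - e_air - gamma * (t_air - mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

tw = wet_bulb(t_air=25.0, e_air=1.5)   # 25 degC air, 1.5 kPa vapor pressure
```

The WSE plays the same role at the catchment scale: an implicit energy-balance equation solved for a representative surface state.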
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled with the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and improves comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between the information gained, the accuracy of the estimates, and the cost expended when assessing biological diversity. The sample-size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample, and the evenness of the community.
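The classical Poisson log-normal pmf that the authors modify can be sketched via Gauss-Hermite quadrature over the latent log-abundance; the incidence-data modification of the likelihood is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np
from math import lgamma

def pln_pmf(k, mu, sigma, n_quad=60):
    """P(K = k) under the Poisson log-normal model: a Poisson count whose
    log-mean is Normal(mu, sigma^2), integrated out by Gauss-Hermite
    quadrature (probabilists' weight exp(-t^2/2))."""
    t, w = np.polynomial.hermite_e.hermegauss(n_quad)
    lam = np.exp(mu + sigma * t)
    log_pois = k * np.log(lam) - lam - lgamma(k + 1)
    return np.sum(w * np.exp(log_pois)) / np.sqrt(2 * np.pi)

probs = np.array([pln_pmf(k, mu=1.0, sigma=1.0) for k in range(200)])
total = probs.sum()
```

A species abundance distribution likelihood multiplies such pmf values over observed species counts; the authors' extension adds presence/absence terms for species recorded only as incidences.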
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity tests, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple methodological packages and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS, with multiple computing resources and proper communication between the server and the clients on each processor. It was shown that even when a large amount of data is considered in the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS and its graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, the Chi-square linearity test, and sensitivity analysis implemented in the toolkit, with results obtained by each module of the software. The parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized.
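The dispatch pattern described above, independent engineering-code evaluations farmed out to workers plus one-at-a-time sensitivities, can be sketched with a thread pool standing in for the server-client framework; the toy model and parameter names are hypothetical, not PAPIRUS's interface.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def model(theta):
    """Stand-in for an expensive simulation run: returns a scalar response
    for a parameter vector theta = (a, b)."""
    a, b = theta
    return a ** 2 + 3.0 * b

# Evaluate many parameter samples concurrently, as a server/worker framework
# would dispatch independent engineering-code runs.
rng = np.random.default_rng(6)
samples = [tuple(s) for s in rng.uniform(0, 1, size=(32, 2))]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(model, samples))

# One-at-a-time sensitivity of the response at a nominal point,
# via central finite differences.
def sensitivity(theta0, h=1e-6):
    theta0 = np.asarray(theta0, dtype=float)
    grads = []
    for i in range(theta0.size):
        up, dn = theta0.copy(), theta0.copy()
        up[i] += h
        dn[i] -= h
        grads.append((model(up) - model(dn)) / (2 * h))
    return np.array(grads)

grad = sensitivity([1.0, 2.0])   # analytic gradient of a^2 + 3b is (2a, 3)
```

For genuinely CPU-bound simulation codes a process pool or remote clients would replace the threads, but the map-over-samples structure is the same.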