WorldWideScience

Sample records for optimal estimation technique

  1. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously undermine the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
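
    The abstract does not reproduce the variance formulas, but for stratified designs the classical benchmark for minimum-variance allocation is Neyman allocation, n_h proportional to N_h*S_h. The sketch below illustrates only that generic idea, not the IST-specific optimal allocation derived in the paper; the function name and example figures are invented for illustration.

```python
import numpy as np

def neyman_allocation(N, S, n_total):
    """Allocate a total sample size across strata in proportion to N_h * S_h,
    the allocation that minimizes the variance of the stratified mean."""
    N = np.asarray(N, dtype=float)   # stratum population sizes N_h
    S = np.asarray(S, dtype=float)   # stratum standard deviations S_h
    w = N * S
    return np.rint(n_total * w / w.sum()).astype(int)

# Hypothetical example: three strata of a student population
print(neyman_allocation(N=[5000, 3000, 2000], S=[2.1, 3.4, 1.2], n_total=400))
```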

  2. Software for the grouped optimal aggregation technique

    Science.gov (United States)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix, based on historical acreages, provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  3. Application of PSO (particle swarm optimization) and GA (genetic algorithm) techniques on demand estimation of oil in Iran

    International Nuclear Information System (INIS)

    Assareh, E.; Behrang, M.A.; Assari, M.R.; Ghanbarzadeh, A.

    2010-01-01

    This paper presents the application of PSO (Particle Swarm Optimization) and GA (Genetic Algorithm) techniques to estimate oil demand in Iran, based on socio-economic indicators. The models are developed in two forms (exponential and linear) and applied to forecast oil demand in Iran. PSO-DEM and GA-DEM (PSO and GA demand estimation models) are developed to estimate the future oil demand values based on population, GDP (gross domestic product), import and export data. Oil consumption in Iran from 1981 to 2005 is considered as the case of this study. The available data is partly used for finding the optimal, or near optimal, values of the weighting parameters (1981-1999) and partly for testing the models (2000-2005). For the best results of GA, the average relative errors on the testing data were 2.83% and 1.72% for GA-DEM (exponential) and GA-DEM (linear), respectively. The corresponding values for PSO were 1.40% and 1.36% for PSO-DEM (exponential) and PSO-DEM (linear), respectively. Oil demand in Iran is forecasted up to the year 2030. (author)
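
    As a rough illustration of what a linear-form "PSO demand estimation model" involves, the sketch below fits an intercept plus indicator weights by minimizing mean squared error with a basic particle swarm. All names, hyperparameters, and data shapes are placeholders; the paper's exact model forms and PSO settings are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_fit_linear(X, y, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Fit y ~ b0 + X @ b by minimizing mean squared error with a basic PSO."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])   # design matrix with intercept
    dim = A.shape[1]
    pos = rng.uniform(-1, 1, (n_particles, dim))   # candidate weight vectors
    vel = np.zeros_like(pos)

    def mse(p):
        return np.mean((A @ p - y) ** 2)

    pbest = pos.copy()
    pbest_f = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([mse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```

    With X holding columns for population, GDP, import, and export, the returned vector would play the role of the weighting parameters tuned on the 1981-1999 data.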

  4. Acceleration techniques in the univariate Lipschitz global optimization

    Science.gov (United States)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, along with traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
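
    A standard way to exploit a known or estimated Lipschitz constant in univariate global optimization is the Piyavskii-Shubert saw-tooth lower bound, sketched minimally below. This is generic background, assuming minimization and a valid Lipschitz constant L; the paper's local tuning techniques, which estimate L adaptively and locally, are not reproduced here.

```python
import numpy as np

def piyavskii(f, a, b, L, n_iter=50):
    """Piyavskii-Shubert: minimize a univariate function on [a, b] with
    Lipschitz constant L by refining a piecewise-linear lower bound."""
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        xs = [xs[i] for i in order]
        fs = [fs[i] for i in order]
        best_lb, best_x = np.inf, None
        # In each interval the saw-tooth lower bound attains its minimum at
        # the intersection of the two cones anchored at the endpoints.
        for (x1, f1), (x2, f2) in zip(zip(xs, fs), zip(xs[1:], fs[1:])):
            x_new = 0.5 * (x1 + x2) + (f1 - f2) / (2 * L)
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if lb < best_lb:
                best_lb, best_x = lb, x_new
        xs.append(best_x)
        fs.append(f(best_x))
    i = int(np.argmin(fs))
    return xs[i], fs[i]
```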

  5. Optimal state estimation theory applied to safeguards accounting

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.

    1977-01-01

    This paper presents a unified theory for the application of modern state estimation techniques to nuclear material accountability. First, a summary of the current MUF/LEMUF approach is detailed. It is shown that when inventory measurement error is large in comparison to transfer measurement error, improved estimates of the losses can be achieved using the cumulative summation technique. However, the optimal estimator is shown to be the Kalman filter. An enhancement of the retrospective estimation of losses can be achieved using linear smoothing. State space models are developed for a mixed oxide fuel fabrication facility and examples are presented.
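
    The abstract does not spell out its state-space models, so the following is only a minimal scalar sketch of the idea: treat a slowly varying per-period loss as the state of a random walk and the MUF sequence as noisy observations of it. The process and measurement variances q and r are placeholder values, not the paper's.

```python
import numpy as np

def kalman_loss_estimate(muf, q=0.01, r=1.0):
    """Scalar Kalman filter: the state is a random-walk per-period loss,
    and each MUF value is a noisy measurement of that loss."""
    x, p = 0.0, 1e3           # initial loss estimate and its variance
    estimates = []
    for z in muf:
        p += q                # predict: loss drifts as a random walk
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update with the new MUF observation
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)
```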

  6. Optimal Design for Reactivity Ratio Estimation: A Comparison of Techniques for AMPS/Acrylamide and AMPS/Acrylic Acid Copolymerizations

    Directory of Open Access Journals (Sweden)

    Alison J. Scott

    2015-11-01

    Water-soluble polymers of acrylamide (AAm) and acrylic acid (AAc) have significant potential in enhanced oil recovery, as well as in other specialty applications. To improve the shear strength of the polymer, a third comonomer, 2-acrylamido-2-methylpropane sulfonic acid (AMPS), can be added to the pre-polymerization mixture. Copolymerization kinetics of AAm/AAc are well studied, but little is known about the other comonomer pairs (AMPS/AAm and AMPS/AAc). Hence, reactivity ratios for AMPS/AAm and AMPS/AAc copolymerization must be established first. A key aspect in the estimation of reliable reactivity ratios is design of experiments, which minimizes the number of experiments and provides increased information content (resulting in more precise parameter estimates). However, design of experiments is hardly ever used during copolymerization parameter estimation schemes. In the current work, copolymerization experiments for both AMPS/AAm and AMPS/AAc are designed using two optimal techniques (Tidwell-Mortimer and the error-in-variables-model (EVM)). From these optimally designed experiments, accurate reactivity ratio estimates are determined for AMPS/AAm (r_AMPS = 0.18, r_AAm = 0.85) and AMPS/AAc (r_AMPS = 0.19, r_AAc = 0.86).
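
    For background, the reactivity ratios being estimated enter through the standard Mayo-Lewis copolymer composition equation, which underlies both the Tidwell-Mortimer and EVM design criteria; with monomer feed mole fractions f_1, f_2 and instantaneous copolymer fraction F_1:

```latex
F_1 = \frac{r_1 f_1^2 + f_1 f_2}{r_1 f_1^2 + 2 f_1 f_2 + r_2 f_2^2}
```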

  7. Biologically Inspired Stochastic Optimization Technique (PSO) for DOA and Amplitude Estimation of Antenna Arrays Signal Processing in RADAR Communication System

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for joint estimation of the amplitude and direction of arrival (DOA) of targets in a RADAR communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used and the targets are assumed to be in the far field of the array. Formulation of the fitness function is based on the mean square error, and this function requires a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all of the results are taken by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.

  8. Estimation of Valve Stiction Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    S. Sivagamasundari

    2011-06-01

    This paper presents a procedure for quantifying valve stiction in control loops based on particle swarm optimization. Measurements of the Process Variable (PV) and Controller Output (OP) are used to estimate the parameters of a Hammerstein system, consisting of a nonlinear control valve stiction model connected to a linear process model. The parameters of the Hammerstein model are estimated using particle swarm optimization from the input-output data by minimizing the error between the true model output and the identified model output. Using particle swarm optimization, Hammerstein models with known nonlinear structure and unknown parameters can be identified. A cost-effective optimization technique is adopted to find the best valve stiction models representing a more realistic valve behavior in the oscillating loop. Simulation and practical laboratory control system results are included, which demonstrate the effectiveness and robustness of the identification scheme.

  9. Robust subspace estimation using low-rank optimization theory and applications

    CERN Document Server

    Oreifej, Omar

    2014-01-01

    Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. Increasing interest has recently been placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multipliers. In this book, the authors discuss fundame...
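
    One widely used instance of the approach described above is Robust PCA solved by the inexact Augmented Lagrange Multiplier (IALM) method: decompose a data matrix D into a low-rank part L and a sparse part S. The sketch below follows the standard IALM recipe with the conventional default lambda = 1/sqrt(max(m, n)); the step sizes and tolerances are common choices, not values taken from the book.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D ~ L + S with L low-rank and S sparse via inexact ALM."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual variable
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Z = D - L - S
        Y += mu * Z
        mu *= rho
        if np.linalg.norm(Z, 'fro') / norm_D < tol:
            break
    return L, S
```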

  10. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel based regression. Various bandwidth selection methods for KDE and local least square regression have been developed in the past decade. It has been known that scale and location parameters are proportional to density functionals ∫γ(x)f^2(x)dx with appropriate choice of γ(x), and furthermore equality of scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f^2(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for KDFE of ∫γ(x)f^2(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f^2(x)dx are provided: normal scale bandwidth selection (namely, “Rule of Thumb”) and direct plug-in bandwidth selection. Simulation studies display that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
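
    A minimal sketch of the two ingredients named above, for the special case γ(x) = 1 with a Gaussian kernel: a normal-scale ("rule of thumb") bandwidth and the pairwise plug-in estimate of ∫f^2(x)dx, which uses the fact that the convolution of two Gaussian kernels is again Gaussian. The constants follow the usual density-estimation rule; the paper's point is precisely that the MSE-optimal bandwidth for the functional differs from this.

```python
import numpy as np

def rot_bandwidth(x):
    """Normal-scale ("rule of thumb") bandwidth for a Gaussian kernel."""
    n = x.size
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    sigma = min(x.std(ddof=1), iqr / 1.349)
    return 1.06 * sigma * n ** (-1 / 5)

def density_functional(x, h=None):
    """Plug-in estimate of int f(x)^2 dx (the gamma(x)=1 functional):
    integrate the squared KDE exactly, since K*K for a standard Gaussian
    kernel is the N(0, 2) density."""
    n = x.size
    h = h or rot_bandwidth(x)
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-d ** 2 / 4) / np.sqrt(4 * np.pi)
    return k.sum() / (n ** 2 * h)
```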

  11. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency domain pilot aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS based techniques are computationally less complex. Unlike MMSE ones, they do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of the channel estimator incorporating MMSE based techniques is better than that obtained with LS based techniques. To enhance the MSE performance using LS based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS estimated channel impulse response (CIR). The advantage of denoising threshold based LS techniques is that they do not require KCS but still render near-optimal MSE performance similar to that of MMSE based techniques. In this paper, a detailed survey of various existing denoising strategies, with a comparative discussion of these strategies, is presented.
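
    A minimal sketch of the LS-plus-thresholding idea surveyed above, assuming pilots on every subcarrier so the LS estimate reduces to an element-wise division; in practice the threshold would be derived from a noise-level estimate, which is omitted here.

```python
import numpy as np

def ls_channel_estimate(Y, X):
    """LS channel estimate at pilot subcarriers: H_LS = Y / X."""
    return Y / X

def denoise_cir(H_ls, n_fft, threshold):
    """Transform the LS estimate to the time domain, zero the taps whose
    magnitude falls below the noise threshold, and transform back."""
    h = np.fft.ifft(H_ls, n_fft)        # channel impulse response (CIR)
    h[np.abs(h) < threshold] = 0.0      # suppress noise-only taps
    return np.fft.fft(h, n_fft)
```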

  12. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  13. Optimal power allocation for SM-OFDM systems with imperfect channel estimation

    International Nuclear Information System (INIS)

    Yu, Feng; Song, Lijun; Lei, Xia; Xiao, Yue; Jiang, Zhao Xiang; Jin, Maozhu

    2016-01-01

    This paper analyses the bit error rate (BER) of the spatial modulation orthogonal frequency division multiplexing (SM-OFDM) system and derives the optimal power allocation between the data and the pilot symbols by minimizing the upper bound on the BER under imperfect channel estimation. Furthermore, we prove that the proposed optimal power allocation scheme applies to all generalized linear interpolation techniques with minimum mean square error (MMSE) channel estimation. Simulation results show that employing the proposed optimal power allocation provides a substantial gain in terms of the average BER performance for the SM-OFDM system compared to its equal-power-allocation counterpart.

  14. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  15. Quantitative Portfolio Optimization Techniques Applied to the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    André Alves Portela Santos

    2012-09-01

    In this paper we assess the out-of-sample performance of two alternative quantitative portfolio optimization techniques, mean-variance and minimum variance optimization, and compare their performance with respect to a naive 1/N (or equally-weighted) portfolio and also to the market portfolio given by the Ibovespa. We focus on short selling-constrained portfolios and consider alternative estimators for the covariance matrices: the sample covariance matrix, RiskMetrics, and three covariance estimators proposed by Ledoit and Wolf (2003), Ledoit and Wolf (2004a) and Ledoit and Wolf (2004b). Taking into account alternative portfolio re-balancing frequencies, we compute out-of-sample performance statistics which indicate that the quantitative approaches delivered improved results in terms of lower portfolio volatility and better risk-adjusted returns. Moreover, the use of more sophisticated estimators for the covariance matrix generated optimal portfolios with lower turnover over time.
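
    For reference, the unconstrained global minimum-variance portfolio has a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the paper's short-selling-constrained variant instead requires a quadratic-programming solver, and its shrinkage covariance estimators are not reproduced here. A minimal sketch:

```python
import numpy as np

def min_variance_weights(cov):
    """Unconstrained global minimum-variance portfolio weights.
    (Short-selling constraints, as in the paper, need a QP solver instead.)"""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Sigma^-1 * 1
    return w / w.sum()               # normalize so weights sum to one
```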

  16. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed

  17. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  18. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By...

  19. Optimization of Mangala Hydropower Station, Pakistan, using Optimization Techniques

    Directory of Open Access Journals (Sweden)

    Zaman Muhammad

    2017-01-01

    Hydropower generation is one of the key elements in the economy of a country. The present study focuses on optimal electricity generation from the Mangla reservoir in Pakistan. A mathematical model has been developed for the Mangla hydropower station, and particle swarm optimization and genetic algorithm techniques were applied to this model for optimal electricity generation. Results revealed that electricity production increases with the application of optimization techniques to the proposed mathematical model. The Genetic Algorithm can produce more electricity than Particle Swarm Optimization, but the execution time of Particle Swarm Optimization is much less than that of the Genetic Algorithm. The Mangla hydropower station can produce up to 59*10^9 kWh of electricity by using the flows optimally, compared with the 47*10^8 kWh obtained from traditional methods.

  20. Estimation of time-varying reactivity by the H∞ optimal linear filter

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Shimazaki, Junya; Watanabe, Koiti

    1995-01-01

    The problem of estimating the time-varying net reactivity from flux measurements is solved for a point reactor kinetics model using a linear filtering technique in an H∞ setting. In order to use this technique, an appropriate dynamical model of the reactivity is constructed that can be embedded into the reactor model as one of its variables. A filter, which minimizes the H∞ norm of the estimation error power spectrum, operates on neutron density measurements corrupted by noise and provides an estimate of the dynamic net reactivity. Computer simulations are performed to reveal the basic characteristics of the H∞ optimal filter. The results of the simulation indicate that the filter can be used to determine the time-varying reactivity from neutron density measurements that have been corrupted by noise.

  1. Essays on variational approximation techniques for stochastic optimization problems

    Science.gov (United States)

    Deride Silva, Julio A.

    This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence

  2. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Seo, Dong-Jun; Siddique, Ridwan; Zhang, Yu; Kim, Dongsoo

    2014-11-01

    A new technique for gauge-only precipitation analysis for improved estimation of heavy-to-extreme precipitation is described and evaluated. The technique is based on a novel extension of classical optimal linear estimation theory in which, in addition to error variance, Type-II conditional bias (CB) is explicitly minimized. When cast in the form of well-known kriging, the methodology yields a new kriging estimator, referred to as CB-penalized kriging (CBPK). CBPK, however, tends to yield negative estimates in areas of no or light precipitation. To address this, an extension of CBPK, referred to herein as extended conditional bias penalized kriging (ECBPK), has been developed which combines the CBPK estimate with a trivial estimate of zero precipitation. To evaluate ECBPK, we carried out real-world and synthetic experiments in which ECBPK and the gauge-only precipitation analysis procedure used in the NWS's Multisensor Precipitation Estimator (MPE) were compared for estimation of point precipitation and mean areal precipitation (MAP), respectively. The results indicate that ECBPK improves hourly gauge-only estimation of heavy-to-extreme precipitation significantly. The improvement is particularly large for estimation of MAP for a range of combinations of basin size and rain gauge network density. This paper describes the technique, summarizes the results and shares ideas for future research.

  3. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-consuming that in practice the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
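
    The two criteria discussed above have compact standard definitions: with information matrix M(ξ) for a design ξ and prediction (kriging) variance σ²(x; ξ) over the design region X,

```latex
\xi_D^{*} = \arg\max_{\xi} \det M(\xi), \qquad
\xi_G^{*} = \arg\min_{\xi} \max_{x \in X} \sigma^{2}(x;\xi).
```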

  4. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only considering the variety of the length of the rotational semi-axis.

  5. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    International Nuclear Information System (INIS)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R.

    2013-01-01

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  6. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

    The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the...
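
    As one concrete example of the adaptive filters the book covers, the sketch below is a textbook least-mean-squares (LMS) filter; the tap count and step size mu are arbitrary illustrative values.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """LMS adaptive filter: adjust the weights so that the filtered input x
    tracks the desired signal d (x and d must have the same length)."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]     # most recent samples first
        y[n] = w @ u                  # filter output
        e[n] = d[n] - y[n]            # estimation error
        w += 2 * mu * e[n] * u        # stochastic-gradient weight update
    return y, e, w
```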

  7. Cost Engineering Techniques and Their Applicability for Cost Estimation of Organic Rankine Cycle Systems

    Directory of Open Access Journals (Sweden)

    Sanne Lemmens

    2016-06-01

    The potential of organic Rankine cycle (ORC) systems is acknowledged by both considerable research and development efforts and an increasing number of applications. Most research aims at improving ORC systems through technical performance optimization of various cycle architectures and working fluids. The assessment and optimization of technical feasibility is at the core of ORC development. Nonetheless, economic feasibility is often decisive when it comes down to considering practical instalments, and therefore an increasing number of publications include an estimate of the costs of the designed ORC system. Various methods are used to estimate ORC costs, but the resulting values are rarely discussed with respect to accuracy and validity. The aim of this paper is to provide insight into the methods used to estimate these costs and to open the discussion about the interpretation of these results. A review of cost engineering practices shows there has been a long tradition of industrial cost estimation. Several techniques have been developed, but the expected accuracy range of the best techniques used in research varies between 10% and 30%. The quality of the estimates could be improved by establishing up-to-date correlations for the ORC industry in particular. Secondly, the rapidly growing ORC cost literature is briefly reviewed. A graph summarizing the estimated ORC investment costs displays a pattern of decreasing costs for increasing power output. Knowledge of the actual costs of real ORC modules and projects remains scarce. Finally, the investment costs of a known heat recovery ORC system are discussed and the methodologies and accuracies of several approaches are demonstrated using this case as a benchmark. The best results are obtained with factorial estimation techniques such as the module costing technique, but the accuracies may diverge by up to +30%. Development of correlations and multiplication factors for ORC technology in particular is...
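
    A minimal sketch of the factorial/module-costing flavor of estimate discussed above: scale a known reference equipment cost with a capacity exponent (the classic "six-tenths rule") and apply a bare-module installation factor. Every number here is a placeholder, not a validated ORC correlation; establishing such correlations is exactly what the paper calls for.

```python
def scaled_equipment_cost(c_ref, s, s_ref, n=0.6, f_module=2.0):
    """Capacity-exponent ("six-tenths rule") cost scaling with a bare-module
    factor. All coefficients are illustrative placeholders."""
    return f_module * c_ref * (s / s_ref) ** n

# e.g. scale a hypothetical 100 kW expander cost estimate to a 250 kW unit
print(scaled_equipment_cost(c_ref=80_000, s=250, s_ref=100))
```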

  8. Mechanical Design Optimization Using Advanced Optimization Techniques

    CERN Document Server

    Rao, R Venkata

    2012-01-01

    Mechanical design includes an optimization process in which designers always consider objectives such as strength, deflection, weight, wear, corrosion, etc., depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is good practice to apply optimization techniques to individual components or intermediate assemblies rather than to a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large and their influence on the value to be optimized (the goal function) can be very complicated, having nonlinear character. In these complex cases, advanced optimization algorithms offer solutions to the problems, because they find a solution near to the global optimum within reasonable time and computational ...

  9. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.

  10. Reexamination of optimal quantum state estimation of pure states

    International Nuclear Information System (INIS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-01-01

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVM's) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition for POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input
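
    For reference, the optimal mean fidelity in question, for estimating a d-dimensional pure state from N identically prepared copies, takes the well-known closed form

```latex
\bar{F}_{\mathrm{opt}} = \frac{N+1}{N+d}.
```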

  11. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    Science.gov (United States)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
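
    The optical flow model referred to above combines the brightness-constancy constraint with a smoothness penalty; in the classic Horn-Schunck formulation (one common choice, not necessarily the exact model used in the paper), the regularization weight α² is the smoothness parameter whose tuning the abstract discusses:

```latex
I_x u + I_y v + I_t = 0, \qquad
E(u,v) = \iint \left[ (I_x u + I_y v + I_t)^2
       + \alpha^2 \left( \lVert\nabla u\rVert^2 + \lVert\nabla v\rVert^2 \right) \right] dx\,dy.
```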

  12. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  13. Estimating the Celestial Reference Frame via Intra-Technique Combination

    Science.gov (United States)

    Iddink, Andreas; Artz, Thomas; Halsig, Sebastian; Nothnagel, Axel

    2016-12-01

    One of the primary goals of Very Long Baseline Interferometry (VLBI) is the determination of the International Celestial Reference Frame (ICRF). Currently the third realization of the internationally adopted CRF, the ICRF3, is under preparation. In this process, various optimizations are planned to realize a CRF that does not benefit only from the increased number of observations since the ICRF2 was published. The new ICRF can also benefit from an intra-technique combination as is done for the Terrestrial Reference Frame (TRF). Here, we aim at estimating an optimized CRF by means of an intra-technique combination. The solutions are based on the input to the official combined product of the International VLBI Service for Geodesy and Astrometry (IVS), also providing the radio source parameters. We discuss the differences in the setup using a different number of contributions and investigate the impact on TRF and CRF as well as on the Earth Orientation Parameters (EOPs). Here, we investigate the differences between the combined CRF and the individual CRFs from the different analysis centers.

  14. Using simulation-optimization techniques to improve multiphase aquifer remediation

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    The T2VOC computer model for simulating the transport of organic chemical contaminants in non-isothermal multiphase systems has been coupled to the ITOUGH2 code, which solves parameter optimization problems. This allows one to use linear programming and simulated annealing techniques to solve groundwater management problems, i.e. the optimization of operations for multiphase aquifer remediation. A cost function has to be defined, containing the actual and hypothetical expenses of a cleanup operation which depend - directly or indirectly - on the state variables calculated by T2VOC. Subsequently, the code iteratively determines a remediation strategy (e.g. pumping schedule) which minimizes, for instance, pumping and energy costs, the time for cleanup, and residual contamination. We present an illustrative sample problem to discuss potential applications of the code. The study shows that the techniques developed for estimating model parameters can be successfully applied to the solution of remediation management problems. The resulting optimum pumping scheme depends, however, on the formulation of the remediation goals and the relative weighting between individual terms of the cost function.

  15. Research reactor loading pattern optimization using estimation of distribution algorithms

    International Nuclear Information System (INIS)

    Jiang, S.; Ziver, K.; Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H.; Franklin, S. J.; Phillips, H. J.

    2006-01-01

    A new evolutionary search based approach for solving nuclear reactor loading pattern optimization problems is presented, based on Estimation of Distribution Algorithms. The optimization technique developed is then applied to the maximization of the effective multiplication factor (K_eff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence, together with some problem-dependent information based on 'stand-alone' K_eff with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
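
    The abstract does not say which EDA variant was used, so the sketch below shows the simplest member of the family, the Univariate Marginal Distribution Algorithm (UMDA): sample candidate bit-strings, select an elite fraction, and refit independent per-bit probabilities. The toy fitness stands in for the expensive K_eff evaluation a real loading-pattern study would call.

```python
import numpy as np

rng = np.random.default_rng(1)

def umda(fitness, n_bits, pop=100, elite=0.3, iters=60):
    """Univariate Marginal Distribution Algorithm, the simplest EDA:
    sample, select the best, and refit independent bit probabilities."""
    p = np.full(n_bits, 0.5)
    best, best_f = None, -np.inf
    for _ in range(iters):
        X = (rng.random((pop, n_bits)) < p).astype(int)   # sample population
        f = np.array([fitness(x) for x in X])
        top = X[np.argsort(f)[-int(elite * pop):]]        # elite fraction
        p = 0.5 * p + 0.5 * top.mean(axis=0)              # smoothed update
        if f.max() > best_f:
            best_f, best = f.max(), X[f.argmax()].copy()
    return best, best_f

# Toy stand-in for a loading-pattern evaluation (real case: K_eff from a core solver)
print(umda(lambda x: x.sum(), n_bits=20))
```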

  16. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open...

  17. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.

  18. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    Science.gov (United States)

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.

  19. Optimal probabilistic energy management in a typical micro-grid based on robust optimization and point estimate method

    International Nuclear Information System (INIS)

    Alavi, Seyed Arash; Ahmadian, Ali; Aliakbar-Golkar, Masoud

    2015-01-01

    Highlights:
    • Energy management is necessary in the active distribution network to reduce operation costs.
    • Uncertainty modeling is essential in energy management studies in active distribution networks.
    • The point estimate method is a suitable method for uncertainty modeling due to its lower computation time and acceptable accuracy.
    • In the absence of a Probability Distribution Function (PDF), robust optimization has a good ability for uncertainty modeling.
    Abstract: Uncertainty can be defined as the probability of difference between the forecasted value and the real value. As this probability is small, the operation cost of the power system will be less. This purpose necessitates modeling of system random variables (such as the output power of renewable resources and the load demand) with appropriate and practicable methods. In this paper, an adequate procedure is proposed in order to perform optimal energy management in a typical micro-grid with regard to the relevant uncertainties. The point estimate method is applied for modeling the wind power and solar power uncertainties, and a robust optimization technique is utilized to model load demand uncertainty. Finally, a comparison is made between deterministic and probabilistic management in different scenarios, and their results are analyzed and evaluated.

  1. Optimal estimation and control in nuclear power plants

    International Nuclear Information System (INIS)

    Purviance, J.E.; Tylee, J.L.

    1982-08-01

    Optimal estimation and control theories offer the potential for more precise control and diagnosis of nuclear power plants. The important element of these theories is that a mathematical plant model is used in conjunction with the actual plant data to optimize some performance criteria. These criteria involve important plant variables and incorporate a sense of the desired plant performance. Several applications of optimal estimation and control to nuclear systems are discussed

  2. Research reactor loading pattern optimization using estimation of distribution algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, S. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Ziver, K. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); AMCG Group, RM Consultants, Abingdon (United Kingdom); Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Franklin, S. J.; Phillips, H. J. [Imperial College, Reactor Centre, Silwood Park, Buckhurst Road, Ascot, Berkshire, SL5 7TE (United Kingdom)

    2006-07-01

    A new evolutionary-search-based approach for solving nuclear reactor loading pattern optimization problems is presented, based on Estimation of Distribution Algorithms (EDAs). The optimization technique developed is then applied to the maximization of the effective multiplication factor (K{sub eff}) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence, together with some problem-dependent information based on 'stand-alone K{sub eff} with fuel coupling' calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
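
    The flavor of an Estimation of Distribution Algorithm can be shown in a few lines. The sketch below is a generic univariate EDA (UMDA-style) on a toy binary placement problem; the surrogate fitness simply rewards matching a hidden target pattern and stands in for an actual K_eff calculation.

    ```python
    # Univariate EDA sketch: sample from a probability model, select the
    # elite, re-estimate the model, repeat. Not a reactor physics model.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pos, pop_size, n_elite = 20, 100, 30
    target = rng.integers(0, 2, n_pos)              # hidden "best" pattern
    fitness = lambda x: -np.sum(x != target)        # surrogate for K_eff

    p = np.full(n_pos, 0.5)                         # Bernoulli probabilities
    for _ in range(50):
        pop = (rng.random((pop_size, n_pos)) < p).astype(int)
        order = np.argsort([fitness(x) for x in pop])[::-1]
        elite = pop[order[:n_elite]]
        p = 0.9 * elite.mean(axis=0) + 0.1 * 0.5    # smooth re-estimation
    print("best pattern found:", (p > 0.5).astype(int))
    print("matches target:", np.array_equal((p > 0.5).astype(int), target))
    ```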

  3. Parameters estimation online for Lorenz system by a novel quantum-behaved particle swarm optimization

    International Nuclear Information System (INIS)

    Gao Fei; Tong Hengqing; Li Zhuoqiu

    2008-01-01

    This paper proposes a novel quantum-behaved particle swarm optimization (NQPSO) for estimating the unknown parameters of chaotic systems by transforming the task into a nonlinear function optimization problem. By means of techniques in three aspects (contracting the search space self-adaptively, a boundary restriction strategy, and substituting the particles' convex combination for their centre of mass), this paper achieves a quite effective search mechanism with a fine equilibrium between exploitation and exploration. Details of applying the proposed method and other methods to Lorenz systems are given, and the experiments show that NQPSO has better adaptability, dependability and robustness. It is a successful approach to online unknown-parameter estimation, especially in cases with white noise.
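
    To make the setting concrete, the sketch below uses a plain PSO (not the paper's NQPSO refinements) to recover the Lorenz parameters by minimizing the mismatch between a simulated and an "observed" trajectory; the integrator, horizon and swarm settings are illustrative assumptions.

    ```python
    # Plain PSO estimating Lorenz parameters (sigma, rho, beta) from a
    # short trajectory; the cost is the mean squared trajectory mismatch.
    import numpy as np

    def lorenz_traj(p, x0=(1.0, 1.0, 1.0), dt=0.01, steps=200):
        s, r, b = p
        x = np.array(x0, dtype=float)
        out = []
        for _ in range(steps):                      # simple Euler stepping
            dx = np.array([s * (x[1] - x[0]),
                           x[0] * (r - x[2]) - x[1],
                           x[0] * x[1] - b * x[2]])
            x = x + dt * dx
            out.append(x.copy())
        return np.array(out)

    rng = np.random.default_rng(2)
    obs = lorenz_traj((10.0, 28.0, 8.0 / 3.0))      # "measured" data
    err = lambda p: np.mean((lorenz_traj(p) - obs) ** 2)

    lo = np.array([1.0, 10.0, 1.0])
    hi = np.array([20.0, 40.0, 5.0])
    pos = rng.uniform(lo, hi, (30, 3))
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([err(p) for p in pos])
    for _ in range(100):
        g = pbest[pcost.argmin()]                   # global best particle
        vel = (0.7 * vel
               + 1.5 * rng.random((30, 3)) * (pbest - pos)
               + 1.5 * rng.random((30, 3)) * (g - pos))
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([err(p) for p in pos])
        improved = c < pcost
        pbest[improved], pcost[improved] = pos[improved], c[improved]
    print("estimated (sigma, rho, beta):", pbest[pcost.argmin()])
    ```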

  4. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    Science.gov (United States)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique, which is optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique in which measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similarly slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification for precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold.

  5. Optimal phase estimation with arbitrary a priori knowledge

    International Nuclear Information System (INIS)

    Demkowicz-Dobrzanski, Rafal

    2011-01-01

    The optimal phase-estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (the local approach based on Fisher information) and no a priori knowledge (the global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  6. Bearing Fault Detection Based on Maximum Likelihood Estimation and Optimized ANN Using the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Behrooz Attaran

    2015-01-01

    Rotating machinery is the most common machinery in industry, and the root of its faults is often faulty rolling element bearings. This paper presents a technique using an artificial neural network optimized by the Bees Algorithm for automated diagnosis of localized faults in rolling element bearings. The inputs of this technique are a number of features (maximum likelihood estimation values) derived from the vibration signals of the test data. The results show that the performance of the proposed optimized system is better than that of most previous studies, even though it uses only two features. The effectiveness of the above method is illustrated using the bearing vibration data obtained.

  7. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  8. Operation optimization of distributed generation using artificial intelligent techniques

    Directory of Open Access Journals (Sweden)

    Mahmoud H. Elkazaz

    2016-06-01

    Future smart grids will require an observable, controllable and flexible network architecture for reliable and efficient energy delivery. The use of artificial intelligence and advanced communication technologies is essential in building a fully automated system. This paper introduces a new technique for online optimal operation of distributed generation (DG) resources, i.e. a hybrid fuel cell (FC) and photovoltaic (PV) system for residential applications. The proposed technique aims to minimize the total daily operating cost of a group of residential homes by managing the operation of embedded DG units remotely from a control centre. The target is formulated as an objective function that is solved using a genetic algorithm (GA) optimization technique. The optimal settings of the DG units obtained from the optimization process are sent to each DG unit through a fully automated system. The results show that the proposed technique succeeded in defining the optimal operating points of the DGs, which directly affect the total operating cost of the entire system.

  9. Optimization of Phasor Measurement Unit (PMU Placement in Supervisory Control and Data Acquisition (SCADA-Based Power System for Better State-Estimation Performance

    Directory of Open Access Journals (Sweden)

    Mohammad Shoaib Shahriar

    2018-03-01

    Present-day power systems are mostly equipped with conventional meters and are intended for the installation of highly accurate phasor measurement units (PMUs) to ensure better protection, monitoring and control of the network. The PMU is a deliberate choice due to its unique capacity to provide accurate phasor readings of bus voltages and currents. However, due to the high expense and the requirement for communication facilities, installing only a limited number of PMUs in a network is common practice. This paper presents an optimal approach to selecting the locations of the PMUs to be installed, with the objective of ensuring maximum accuracy of the state estimation (SE). The optimization technique ensures that the critical locations of the system will be covered by PMU meters, which lowers the negative impact of bad data on state-estimation performance. One of the well-known intelligent optimization techniques, the genetic algorithm (GA), is used to search for the optimal set of PMUs. The proposed technique is compared with a heuristic approach to PMU placement. The weighted least squares (WLS) method, with a modified Jacobian to deal with the phasor quantities, is used to compute the estimation accuracy. IEEE 30-bus and 118-bus systems are used to demonstrate the suggested technique.
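
    The state-estimation accuracy that drives the placement objective comes from a weighted least squares fit; a minimal linear sketch is given below. Real power-system SE uses nonlinear power-flow measurement equations, so the two-state model and covariances here are purely illustrative.

    ```python
    # Weighted least squares (WLS) state estimation on a linear model
    # z = H x + e: PMU-quality rows get much smaller error variances.
    import numpy as np

    H = np.array([[1.0, 0.0],        # PMU-style direct reading of state 1
                  [0.0, 1.0],        # PMU-style direct reading of state 2
                  [1.0, -1.0]])      # conventional "flow-like" measurement
    R = np.diag([1e-4, 1e-4, 1e-2])  # PMU rows are far more accurate

    rng = np.random.default_rng(3)
    x_true = np.array([1.02, 0.98])
    z = H @ x_true + rng.multivariate_normal(np.zeros(3), R)

    W = np.linalg.inv(R)             # weight by inverse error covariance
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    print("WLS state estimate:", x_hat)
    ```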

  10. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
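
    The differencing idea behind such estimators is easy to demonstrate. Below is the classical first-order difference-based estimator of the residual variance in a nonparametric regression; the paper's optimal-sequence and regression-combined estimators refine this basic construction.

    ```python
    # First-order difference-based variance estimator for y_i = f(t_i) + e_i:
    # differencing (nearly) removes the smooth f, so Var(y_{i+1} - y_i) ~ 2*sigma^2.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 500
    t = np.linspace(0.0, 1.0, n)
    y = np.sin(4 * np.pi * t) + rng.normal(0.0, 0.3, n)   # true sigma = 0.3

    sigma2_hat = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
    print("estimated sigma:", np.sqrt(sigma2_hat))
    ```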

  11. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  12. Inverse estimation of the particle size distribution using the Fruit Fly Optimization Algorithm

    International Nuclear Information System (INIS)

    He, Zhenzong; Qi, Hong; Yao, Yuchen; Ruan, Liming

    2015-01-01

    The Fruit Fly Optimization Algorithm (FOA) is applied to retrieve the particle size distribution (PSD) for the first time. The direct problems are solved by the modified Anomalous Diffraction Approximation (ADA) and the Lambert–Beer Law. Firstly, three commonly used monomodal PSDs, i.e. the Rosin–Rammler (R–R) distribution, the normal (N–N) distribution and the logarithmic normal (L–N) distribution, and the bimodal Rosin–Rammler distribution function are estimated in the dependent model. All the results show that the FOA can be used as an effective technique to estimate the PSDs under the dependent model. Then, an optimal wavelength selection technique is proposed to improve the retrieval results for the bimodal PSD. Finally, combined with two general functions, i.e. the Johnson's S_B (J-S_B) function and the modified beta (M-β) function, the FOA is employed to recover actual measured aerosol PSDs over Beijing and Hangzhou obtained from the Aerosol Robotic Network (AERONET). All the numerical simulations and experimental results demonstrate that the FOA can be used to retrieve actual measured PSDs, and more reliable and accurate results can be obtained if the J-S_B function is employed

  13. Optimal estimation of entanglement in optical qubit systems

    International Nuclear Information System (INIS)

    Brida, Giorgio; Degiovanni, Ivo P.; Florio, Angela; Genovese, Marco; Meda, Alice; Shurupov, Alexander P.; Giorda, Paolo; Paris, Matteo G. A.

    2011-01-01

    We address the experimental determination of entanglement for systems made of a pair of polarization qubits. We exploit quantum estimation theory to derive optimal estimators, which are then implemented to achieve the ultimate bound to precision. In particular, we present a set of experiments aimed at measuring the amount of entanglement for states belonging to different families of pure and mixed two-qubit two-photon states. Our scheme is based on visibility measurements of quantum correlations and achieves the ultimate precision allowed by quantum mechanics in the limit of a Poissonian distribution of coincidence counts. Although optimal estimation of entanglement does not require full tomography of the states, we have also performed state reconstruction using two different sets of tomographic projectors and explicitly shown that they provide a less precise determination of entanglement. The use of optimal estimators also allows us to compare and statistically assess the different noise models used to describe decoherence effects occurring in the generation of entanglement.

  14. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2013-05-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator and without using parabolic-type duality techniques, optimal L2-error estimates are derived for semidiscrete approximations when the initial condition is in L2. Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and, therefore, it unifies both theories, i.e., the one for smooth data and the one for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.

  15. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model-based control techniques to enhance the operation of lithium-ion batteries safely. An overview of the contributions that address the challenges that arise is provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first-principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the

  16. Modern optimization algorithms for fault location estimation in power systems

    Directory of Open Access Journals (Sweden)

    A. Sanad Ahmed

    2017-10-01

    This paper presents a fault location estimation approach for two-terminal transmission lines using the Teaching Learning Based Optimization (TLBO) technique and the Harmony Search (HS) technique. Previous methods such as the Genetic Algorithm (GA), Artificial Bee Colony (ABC), Artificial Neural Networks (ANN) and Cause & Effect (C&E) are also discussed, along with the advantages and disadvantages of all methods. The inputs to the proposed techniques are post-fault measured voltages and currents from both ends, along with the line parameters. This paper deals with several types of faults: L-L-L, L-L-L-G, L-L-G and L-G. Simulation of the model was performed in SIMULINK, with the initial inputs extracted from SIMULINK to MATLAB, where the objective function specifies the fault location with very high accuracy and precision within a very short time. Future work on the benefit of using the Differential Learning TLBO (DLTLBO) is discussed as well.

  17. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
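
    The random decrement idea itself is compact: average many response segments that all start at the same triggering condition, so the random part cancels and a free-decay-like signature remains. A hedged sketch follows, with the SDOF response simulated by an AR(2) filter as a stand-in for the paper's ARMA simulation.

    ```python
    # Random decrement (RDD) signature: average segments that begin at an
    # up-crossing of a trigger level; the average approximates a free decay.
    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(5)
    fs, f0, zeta, n = 100.0, 2.0, 0.02, 60000
    w0 = 2 * np.pi * f0 / fs
    r = np.exp(-zeta * w0)                       # AR(2) resonator pole radius
    a = [1.0, -2 * r * np.cos(w0 * np.sqrt(1 - zeta**2)), r**2]
    x = lfilter([1.0], a, rng.normal(0.0, 1.0, n))   # white-noise-driven SDOF

    trig = x.std()                               # trigger level: one sigma
    seg_len = 400
    starts = np.where((x[:-1] < trig) & (x[1:] >= trig))[0]
    starts = starts[starts + seg_len < n]
    rdd = np.mean([x[s:s + seg_len] for s in starts], axis=0)
    print(f"averaged {len(starts)} segments into an RDD signature")
    # Frequency and damping can now be fitted to the decaying signature rdd.
    ```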

  18. Spectral Estimation by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, J. Laigaard; Krenk, S.

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...

  19. Application of Feedback System Control Optimization Technique in Combined Use of Dual Antiplatelet Therapy and Herbal Medicines

    Directory of Open Access Journals (Sweden)

    Wang Liu

    2018-05-01

    Aim: The combined use of herbal medicines in patients who underwent dual antiplatelet therapy (DAPT) might cause bleeding or thrombosis, because herbal medicines with anti-platelet activities may exhibit interactions with DAPT. In this study, we tried to use a feedback system control (FSC) optimization technique to optimize the dose strategy and clarify possible interactions in the combined use of DAPT and herbal medicines. Methods: Herbal medicines with reported anti-platelet activities were selected by searching related references in PubMed. The experimental anti-platelet activities of representative compounds originating from these herbal medicines were investigated using an in vitro assay, namely ADP-induced aggregation of rat platelet-rich plasma. The FSC scheme hybridizes artificial intelligence calculation and bench experiments to iteratively optimize 4-drug and 2-drug combinations from these drug candidates. Results: In total, 68 herbal medicines were reported to have anti-platelet activities. In the present study, 7 representative compounds from these herbal medicines were selected to study combinatorial drug optimization together with DAPT, i.e., aspirin and ticagrelor. The FSC technique first down-selected the 9 drug candidates to the 5 most significant drugs. Then, FSC further secured 4 drugs in the optimal combination, including aspirin, ticagrelor, ferulic acid from DangGui, and forskolin from MaoHouQiaoRuiHua. Finally, FSC quantitatively estimated the possible interactions between aspirin:ticagrelor, aspirin:ferulic acid, ticagrelor:forskolin, and ferulic acid:forskolin. The estimation was further verified by experimentally determined Combination Index (CI) values. Conclusion: The results of the present study suggest that the FSC optimization technique can be used in the optimization of anti-platelet drug combinations and might be helpful in designing personal anti-platelet therapy strategies. Furthermore, FSC analysis could also identify interactions between different

  20. 9th International Conference on Optimization : Techniques and Applications

    CERN Document Server

    Wang, Song; Wu, Soon-Yi

    2015-01-01

    This book presents the latest research findings and state-of-the-art solutions on optimization techniques and provides new research directions and developments. Both the theoretical and practical aspects of the book will be of much benefit to experts and students in the optimization and operations research community. It selects high-quality papers from the International Conference on Optimization: Techniques and Applications (ICOTA2013). The conference is an official conference series of POP (the Pacific Optimization Research Activity Group, which has over 500 active members). The state-of-the-art works in this book, authored by recognized experts, will contribute to the development of optimization and its applications.

  1. Optimal estimation of the intensity function of a spatial point process

    DEFF Research Database (Denmark)

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in case of a Poisson process. We discuss...

  2. Optimal Data Interval for Estimating Advertising Response

    OpenAIRE

    Gerard J. Tellis; Philip Hans Franses

    2006-01-01

    The abundance of highly disaggregate data (e.g., at five-second intervals) raises the question of the optimal data interval to estimate advertising carryover. The literature assumes that (1) the optimal data interval is the interpurchase time, (2) too disaggregate data causes a disaggregation bias, and (3) recovery of true parameters requires assumption of the underlying advertising process. In contrast, we show that (1) the optimal data interval is what we call , (2) too disaggregate data do...

  3. Optimal Estimation of Sea Surface Temperature from AMSR-E

    Directory of Open Access Journals (Sweden)

    Pia Nielsen-Englyst

    2018-02-01

    The Optimal Estimation (OE) technique is developed within the European Space Agency Climate Change Initiative (ESA-CCI) to retrieve subskin Sea Surface Temperature (SST) from AQUA's Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E). A comprehensive matchup database with drifting buoy observations is used to develop and test the OE setup. It is shown that it is essential to update the first-guess atmospheric and oceanic state variables and to perform several iterations to reach an optimal retrieval. The optimal number of iterations is typically three to four in the current setup. In addition, updating the forward model using a multivariate regression model is shown to improve the capability of the forward model to reproduce the observations. The average sensitivity of the OE retrieval is 0.5 and shows a latitudinal dependency, with smaller sensitivity for cold waters and larger sensitivity for warmer waters. The OE SSTs are evaluated against drifting buoy measurements during 2010. The results show an average difference of 0.02 K with a standard deviation of 0.47 K when considering the 64% of matchups where the simulated and observed brightness temperatures are most consistent. The corresponding mean uncertainty is estimated at 0.48 K, including the in situ and sampling uncertainties. An independent validation against Argo observations from 2009 to 2011 shows an average difference of 0.01 K, a standard deviation of 0.50 K and a mean uncertainty of 0.47 K when considering the best 62% of retrievals. The satellite-versus-in situ discrepancies are highest in the dynamic oceanic regions due to the large satellite footprint size and the associated sampling effects. Uncertainty estimates are available for all retrievals and have been validated to be accurate. They can thus be used to obtain very good retrieval results. In general, the results from the OE retrieval are very encouraging and demonstrate that passive microwave
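
    The core of such a retrieval is the iterated optimal-estimation update, shown schematically below on a toy two-channel linear forward model. The prior state, covariances and Jacobian are invented for illustration; the actual CCI processor wraps a radiative transfer model and first-guess atmospheric fields around the same update.

    ```python
    # Rodgers-style optimal estimation retrieval: Gauss-Newton iterations
    # that blend prior knowledge (x_a, S_a) with observations (y, S_e).
    import numpy as np

    K = np.array([[0.6, 0.3],            # toy Jacobian dF/dx (constant here)
                  [0.4, -0.1]])

    def F(x):                            # toy forward model: state -> brightness temps
        return np.array([150.0, 180.0]) + K @ x

    x_a = np.array([285.0, 7.0])         # prior [SST (K), wind speed (m/s)]
    S_a = np.diag([4.0, 9.0])            # prior covariance
    S_e = np.diag([0.25, 0.25])          # observation-error covariance
    y = F(np.array([288.0, 5.0])) + np.array([0.1, -0.2])   # "observed" TBs

    x = x_a.copy()
    for _ in range(4):                   # a few iterations usually suffice
        Sa_i, Se_i = np.linalg.inv(S_a), np.linalg.inv(S_e)
        S_hat = np.linalg.inv(Sa_i + K.T @ Se_i @ K)   # posterior covariance
        x = x_a + S_hat @ K.T @ Se_i @ (y - F(x) + K @ (x - x_a))
    print("retrieved state [SST, wind]:", x)
    ```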

  4. Joint fundamental frequency and order estimation using optimal filtering

    Directory of Open Access Journals (Sweden)

    Jakobsson Andreas

    2011-01-01

    In this paper, the problem of jointly estimating the number of harmonics and the fundamental frequency of periodic signals is considered. We show how this problem can be solved using a number of methods that either are or can be interpreted as filtering methods in combination with a statistical model selection criterion. The methods in question are the classical comb filtering method, a maximum likelihood method, and some filtering methods based on optimal filtering that have recently been proposed, while the model selection criterion is derived herein from the maximum a posteriori principle. The asymptotic properties of the optimal filtering methods are analyzed and an order-recursive efficient implementation is derived. Finally, the estimators have been compared in computer simulations that show that the optimal filtering methods perform well under various conditions. It has previously been demonstrated that the optimal filtering methods perform extremely well with respect to fundamental frequency estimation under adverse conditions, and this fact, combined with the new results on model order estimation and efficient implementation, suggests that these methods form an appealing alternative to classical methods for analyzing multi-pitch signals.
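
    A crude stand-in for the joint estimate can be written with harmonic summation plus an order penalty, as sketched below; the paper's comb/MVDR filters and MAP-derived criterion are far more principled, and the penalty weight here is an ad hoc assumption.

    ```python
    # Joint pitch/order toy estimator: score candidate (f0, L) pairs by the
    # summed harmonic spectral power minus a per-harmonic noise penalty.
    import numpy as np

    fs, N = 8000, 2048
    rng = np.random.default_rng(6)
    t = np.arange(N) / fs
    f0_true, L_true = 155.0, 4
    x = sum(np.cos(2 * np.pi * f0_true * l * t + l) for l in range(1, L_true + 1))
    x = x + rng.normal(0.0, 0.5, N)

    X = np.abs(np.fft.rfft(x * np.hanning(N))) ** 2
    freqs = np.fft.rfftfreq(N, 1.0 / fs)
    noise = np.median(X)                       # crude noise-floor estimate

    best, best_score = (0.0, 0), -np.inf
    for f0 in np.arange(60.0, 400.0, 0.5):
        for L in range(1, 8):
            power = sum(X[np.argmin(np.abs(freqs - l * f0))]
                        for l in range(1, L + 1))
            score = power - 10.0 * L * noise   # ad hoc model-order penalty
            if score > best_score:
                best_score, best = score, (f0, L)
    print("estimated (f0, order):", best)      # should be near (155.0, 4)
    ```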

  5. Pareto-Optimal Estimates of California Precipitation Change

    Science.gov (United States)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.

  6. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...

  7. Estimation of optimal educational cost per medical student.

    Science.gov (United States)

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003~2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish the educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter or one-half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost for training to be a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.

  8. Multi-objective optimization with estimation of distribution algorithm in a noisy environment.

    Science.gov (United States)

    Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah

    2013-01-01

    Many real-world optimization problems are subjected to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deals with noisy information. This paper studies the potential of EDAs, particularly an EDA based on restricted Boltzmann machines that handles multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state-of-the-art algorithms.

  9. An ant colony optimization algorithm for phylogenetic estimation under the minimum evolution principle

    Directory of Open Access Journals (Sweden)

    Milinkovitch Michel C

    2007-11-01

    Background: Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with the shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and the minimum evolution principle is known to belong to the NP-hard class of problems. Results: In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems. Conclusion: We show that the ACO algorithm is potentially competitive in comparison with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.

  10. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

    Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
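
    Of the two biased estimators, principal components regression is the simpler to sketch: discard the small-singular-value directions that collinearity makes unstable and regress on the leading components only. The synthetic data below are an assumption for illustration, not the flight test data.

    ```python
    # Principal components regression (PCR) vs OLS on nearly collinear data.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100
    x1 = rng.normal(0.0, 1.0, n)
    x2 = x1 + rng.normal(0.0, 0.01, n)        # nearly collinear regressor
    X = np.column_stack([x1, x2])
    y = x1 + x2 + rng.normal(0.0, 0.1, n)

    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)   # unstable under collinearity

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 1                                     # keep only the leading component
    b_pcr = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T @ y
    print("OLS coefficients:", b_ols)
    print("PCR coefficients:", b_pcr)         # biased but far more stable
    ```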

  11. Forest parameter estimation using polarimetric SAR interferometry techniques at low frequencies

    International Nuclear Information System (INIS)

    Lee, Seung-Kuk

    2013-01-01

    Polarimetric Synthetic Aperture Radar Interferometry (Pol-InSAR) is an active radar remote sensing technique based on the coherent combination of both polarimetric and interferometric observables. The Pol-InSAR technique provided a step forward in quantitative forest parameter estimation. In the last decade, airborne SAR experiments evaluated the potential of Pol-InSAR techniques to estimate forest parameters (e.g., forest height and biomass) with high accuracy over various local forest test sites. This dissertation addresses the current status, potential and limitations of Pol-InSAR inversion techniques for 3-D forest parameter estimation on a global scale using lower frequencies such as L- and P-band. The multi-baseline Pol-InSAR inversion technique is applied to optimize the performance with respect to the actual level of the vertical wave number and to mitigate the impact of temporal decorrelation on the Pol-InSAR forest parameter inversion. Temporal decorrelation is a critical issue for successful Pol-InSAR inversion in the case of repeat-pass Pol-InSAR data, as provided by conventional satellites or airborne SAR systems. Despite its limiting impact on Pol-InSAR inversion, temporal decorrelation remains a poorly understood factor in forest height inversion. Therefore, the main goal of this dissertation is to provide a quantitative estimation of the temporal decorrelation effects by using multi-baseline Pol-InSAR data. A new approach to quantify the different temporal decorrelation components is proposed and discussed. Temporal decorrelation coefficients are estimated for temporal baselines ranging from 10 minutes to 54 days and are converted to height inversion errors. In addition, the potential of Pol-InSAR forest parameter estimation techniques is addressed and projected onto future spaceborne system configurations and mission scenarios (the Tandem-L and BIOMASS satellite missions at L- and P-band). The impact of the system parameters (e.g., bandwidth

  12. Performance optimization in electro- discharge machining using a suitable multiresponse optimization technique

    Directory of Open Access Journals (Sweden)

    I. Nayak

    2017-06-01

    In the present research work, four different multiresponse optimization techniques, viz. the multiple response signal-to-noise (MRSN) ratio, the weighted signal-to-noise (WSN) ratio, grey relational analysis (GRA) and the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje, in Serbian) method, have been used to optimize the electro-discharge machining (EDM) performance characteristics, namely material removal rate (MRR), tool wear rate (TWR) and surface roughness (SR), simultaneously. Experiments have been planned on a D2 steel specimen based on an L9 orthogonal array. The experimental results are analyzed using the standard procedure. The optimum level combinations of input process parameters such as voltage, current, pulse-on time and pulse-off time, and the percentage contributions of each process parameter, have been determined using the ANOVA technique. Different correlations have been developed between the various input process parameters and output performance characteristics. Finally, the optimum performances of these four methods are compared, and the results show that the WSN ratio method is the best multiresponse optimization technique for this process. From the analysis, it is also found that the current has the maximum effect on the overall performance of the EDM operation as compared to other process parameters.

  13. Fusion blanket design and optimization techniques

    International Nuclear Information System (INIS)

    Gohar, Y.

    2005-01-01

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, dimensions of the different blanket zones, and different requirements of the selected materials for a satisfactory performance are the main parameters, which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be simultaneously considered in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design and satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design techniques of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide the state-of-the-art techniques and tools for performing blanket design and analysis. This report describes some of the BSDOS techniques and demonstrates its use. In addition, the use of the optimization technique of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this report, examples are presented, which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design techniques

  14. Doubly Robust Estimation of Optimal Dynamic Treatment Regimes

    DEFF Research Database (Denmark)

    Barrett, Jessica K; Henderson, Robin; Rosthøj, Susanne

    2014-01-01

    We compare methods for estimating optimal dynamic decision rules from observational data, with particular focus on estimating the regret functions defined by Murphy (in J. R. Stat. Soc., Ser. B, Stat. Methodol. 65:331-355, 2003). We formulate a doubly robust version of the regret-regression approach of Almirall et al. (in Biometrics 66:131-139, 2010) and Henderson et al. (in Biometrics 66:1192-1201, 2010) and demonstrate that it is equivalent to a reduced form of Robins' efficient g-estimation procedure (Robins, in Proceedings of the Second Symposium on Biostatistics. Springer, New York, pp. 189-326, 2004). Simulation studies suggest that while the regret-regression approach is most efficient when there is no model misspecification, in the presence of misspecification the efficient g-estimation procedure is more robust. The g-estimation method can be difficult to apply in complex...

  15. Inverse Optimization and Forecasting Techniques Applied to Decision-making in Electricity Markets

    DEFF Research Database (Denmark)

    Saez Gallego, Javier

    This thesis deals with the development of new mathematical models that support the decision-making processes of market players. It addresses the problems of demand-side bidding, price-responsive load forecasting and reserve determination. From a methodological point of view, we investigate a novel approach to model the response of aggregate price-responsive load as a constrained optimization model, whose parameters are estimated from data by using inverse optimization techniques. The problems tackled in this dissertation are motivated, on one hand, by the increasing penetration of renewable energy ... patterns that the load traditionally exhibited. On the other hand, this thesis is motivated by the decision-making processes of market players. In response to these challenges, this thesis provides mathematical models for decision-making under uncertainty in electricity markets. Demand-side bidding refers...

  16. Optimizing Probability of Detection Point Estimate Demonstration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws, and a reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the Point Estimate Method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
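
    The binomial arithmetic behind the familiar 29-of-29 demonstration is quickly verified, as in the sketch below: zero misses in 29 trials gives at least 90% POD with roughly 95% confidence, while the probability of actually passing depends sharply on the true POD.

    ```python
    # Binomial point-estimate logic for a zero-miss POD demonstration.
    from math import comb

    def pass_prob(n, max_miss, pod):
        """P(at most max_miss misses in n independent trials | true POD)."""
        return sum(comb(n, k) * (1 - pod) ** k * pod ** (n - k)
                   for k in range(max_miss + 1))

    # Confidence that POD >= 0.90 after 29 hits out of 29: 1 - 0.90^29.
    print("confidence:", 1 - 0.90 ** 29)                     # ~0.953
    print("P(pass | POD = 0.95):", pass_prob(29, 0, 0.95))   # ~0.23
    print("P(pass | POD = 0.99):", pass_prob(29, 0, 0.99))   # ~0.75
    ```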

  17. OPTIMAL ESTIMATES FOR THE SEMIDISCRETE GALERKIN METHOD APPLIED TO PARABOLIC INTEGRO-DIFFERENTIAL EQUATIONS WITH NONSMOOTH DATA

    KAUST Repository

    GOSWAMI, DEEPJYOTI; PANI, AMIYA K.; YADAV, SANGITA

    2014-01-01

    We propose and analyse an alternate approach to a priori error estimates for the semidiscrete Galerkin approximation to a time-dependent parabolic integro-differential equation with nonsmooth initial data. The method is based on energy arguments combined with repeated use of time integration, but without using parabolic-type duality techniques. An optimal L2-error estimate is derived for the semidiscrete approximation when the initial data is in L2. A superconvergence result is obtained and then used to prove a maximum norm estimate for parabolic integro-differential equations defined on a two-dimensional bounded domain. © 2014 Australian Mathematical Society.

  18. Comparison of particle swarm optimization and other metaheuristics on electricity demand estimation: A case study of Iran

    International Nuclear Information System (INIS)

    Askarzadeh, Alireza

    2014-01-01

    The importance of energy demand estimation stems from energy planning, formulating strategies and recommending energy policies. Most often, energy demand is mathematically formulated by socio-economic indicators. The challenging problem is to determine the optimal or near optimal weighting factors. Inspired by the social behavior of bird flocking or fish schooling, PSO (particle swarm optimization) is a population-based search technique which has attracted significant attention for tackling the complexity of difficult optimization problems. This paper studies the performance of different PSO variants for estimating Iran's electricity demand. Seven PSO variants, namely the original PSO, PSO-w (PSO with weighting factor), PSO-cf (PSO with constriction factor), PSO-rf (PSO with repulsion factor), PSO-vc (PSO with velocity control), CLPSO (comprehensive learning PSO) and a MPSO (modified PSO), are used to find the unknown weighting factors based on the data from 1982 to 2003. The validation process is then conducted by testing the optimized models using the data from 2004 to 2009. It is seen that PSO-vc produces more promising results than the other variants and the HS (harmony search) and ABSO (artificial bee swarm optimization) algorithms in terms of MAPE (mean absolute percentage error). This value is 2.47 and 2.50 for the exponential and quadratic models, respectively. - Highlights: • Electricity demand estimation is modelled using socio-economic indicators. • Different PSO variants are investigated in terms of accuracy. • The exponential model can estimate Iran's electricity demand with high accuracy. • PSO with velocity control produces more accurate results than the others

  19. On-line computer control of a nuclear reactor using optimal control and state estimation methods

    International Nuclear Information System (INIS)

    Tye, C.

    1980-01-01

    This paper describes the experimental implementation of a nuclear reactor control system using combined optimal state feedback based on the Quadratic Regulator and state estimation using Kalman filtering techniques. The results obtained from the experiments indicate that a reactor control loop designed using this approach has improved stability margins, greater speed of response and noise filtering properties compared with a conventional reactor control loop. 11 refs

  20. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

    Microgrids are expected to become part of the next electric power system evolution, not only in rural and remote areas but also in urban communities. Since microgrids are expected to coexist with traditional power grids (as district heating does with traditional heating systems), their planning process must be oriented toward economic feasibility, as a long-term stability guarantee. Planning a microgrid is a complex process due to existing alternatives, goals, constraints and uncertainties. Usually planning goals conflict with each other and, as a consequence, different optimization problems appear along the planning process. In this context, the technical literature on optimization techniques applied to microgrid planning has been reviewed, and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new

  1. Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey

    Directory of Open Access Journals (Sweden)

    Abdelrahman Osman Elfaki

    2014-01-01

    Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion's share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation over the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen specialist journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set. Which intelligent technique is used? How have the data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the definition of the research gap in this area, which has not been fully covered by previous proposals for construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming proposals, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the second part of our methodology as one of our contributions in this paper. This methodology has been proposed as a standard benchmark for construction cost estimation proposals.

  2. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    Science.gov (United States)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.

  3. Muscle optimization techniques impact the magnitude of calculated hip joint contact forces

    NARCIS (Netherlands)

    Wesseling, M.; Derikx, L.C.; de Groote, F.; Bartels, W.; Meyer, C.; Verdonschot, Nicolaas Jacobus Joseph; Jonkers, I.

    2015-01-01

    In musculoskeletal modelling, several optimization techniques are used to calculate muscle forces, which strongly influence resultant hip contact forces (HCF). The goal of this study was to calculate muscle forces using four different optimization techniques, i.e., two different static optimization

  4. Multi-objective optimization in quantum parameter estimation

    Science.gov (United States)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  5. Optimization of freeform surfaces using intelligent deformation techniques for LED applications

    Science.gov (United States)

    Isaac, Annie Shalom; Neumann, Cornelius

    2018-04-01

    For many years, optical designers have had great interest in designing efficient optimization algorithms to bring significant improvement to their initial designs. However, the optimization is limited due to the large number of parameters present in Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid. The vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, which is the same problem faced by any optimization technique while creating freeform surfaces. Therefore, this research addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street lighting lens and a stop lamp for automobiles.

  6. Optimal state estimation over communication channels with random delays

    KAUST Repository

    Mahmoud, Magdi S.; Liu, Bo

    2013-01-01

    This paper is concerned with the optimal estimation of linear systems over unreliable communication channels with random delays. The measurements are delivered without time stamp, and the probabilities of time delays are assumed to be known. Since the estimation is time-driven, the actual time delays are converted into virtual time delays within the formulation. The receiver of the estimation node stores the sum of arrived measurements between two adjacent processing time instants and also counts the number of arrived measurements. The original linear system is modeled as an extended system with uncertain observations to capture the features of the communication channel; then the optimal estimation algorithm for systems with uncertain observations is proposed. Additionally, a numerical simulation is presented to show the performance of this work. © 2013 The Franklin Institute.

  7. Optimal state estimation over communication channels with random delays

    KAUST Repository

    Mahmoud, Magdi S.

    2013-04-01

    This paper is concerned with the optimal estimation of linear systems over unreliable communication channels with random delays. The measurements are delivered without time stamp, and the probabilities of time delays are assumed to be known. Since the estimation is time-driven, the actual time delays are converted into virtual time delays within the formulation. The receiver of the estimation node stores the sum of arrived measurements between two adjacent processing time instants and also counts the number of arrived measurements. The original linear system is modeled as an extended system with uncertain observations to capture the features of the communication channel; then the optimal estimation algorithm for systems with uncertain observations is proposed. Additionally, a numerical simulation is presented to show the performance of this work. © 2013 The Franklin Institute.

  8. Pilot power optimization for AF relaying using maximum likelihood channel estimation

    KAUST Repository

    Wang, Kezhi

    2014-09-01

    Bit error rates (BERs) for amplify-and-forward (AF) relaying systems with two different pilot-symbol-aided channel estimation methods, disintegrated channel estimation (DCE) and cascaded channel estimation (CCE), are derived in Rayleigh fading channels. Based on these BERs, the pilot powers at the source and at the relay are optimized when their total transmitting powers are fixed. Numerical results show that the optimized system has a better performance than other conventional nonoptimized allocation systems. They also show that the optimal pilot power in variable gain is nearly the same as that in fixed gain for similar system settings. © 2014 IEEE.

  9. Potential utilities of optimal estimation and control in nuclear power plants

    International Nuclear Information System (INIS)

    Tylee, J.L.; Purviance, J.E.

    1983-01-01

    Optimal estimation and control theories offer the potential for more precise control and diagnosis of nuclear power plants. The important element of these theories is that a mathematical plant model is used in conjunction with the actual plant data to optimize some performance criteria. These criteria involve important plant variables and incorporate a sense of the desired plant performance. Several applications of optimal estimation and control to nuclear systems are discussed.

  10. Optimized support vector regression for drilling rate of penetration estimation

    Science.gov (United States)

    Bodaghi, Asadollah; Ansari, Hamid Reza; Gholami, Mahsa

    2015-12-01

    In the petroleum industry, drilling optimization involves the selection of operating conditions for achieving the desired depth with the minimum expenditure while requirements of personal safety, environment protection, adequate information on penetrated formations, and productivity are fulfilled. Since drilling optimization is highly dependent on the rate of penetration (ROP), estimation of this parameter is of great importance during well planning. In this research, a novel approach called `optimized support vector regression' is employed for building a mapping between input variables and ROP. The algorithms used for optimizing the support vector regression are the genetic algorithm (GA) and the cuckoo search algorithm (CS). Optimization improved the support vector regression performance by selecting proper values for its parameters. In order to evaluate the ability of the optimization algorithms to enhance SVR performance, their results were compared to the hybrid of pattern search and grid search (HPG), which is conventionally employed for optimizing SVR. The results demonstrated that the CS algorithm achieved greater improvement in the prediction accuracy of SVR than both the GA and HPG. Moreover, a predictive model derived from a back propagation neural network (BPNN), the traditional approach for estimating ROP, is selected for comparison with CSSVR. The comparative results revealed the superiority of CSSVR. This study indicates that CSSVR is a viable option for precise estimation of ROP.
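    As a rough illustration of the workflow, the sketch below tunes SVR hyperparameters by minimizing cross-validated error; plain random search stands in for the paper's GA and cuckoo search, and the data are synthetic stand-ins for drilling variables.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for drilling inputs (e.g. weight on bit, RPM) and ROP.
    X = rng.normal(size=(200, 4))
    y = X @ np.array([1.5, -0.8, 0.3, 0.6]) + 0.1 * rng.normal(size=200)

    def fitness(params):
        """Cross-validated error of one (C, gamma, epsilon) candidate."""
        C, gamma, eps = params
        model = SVR(C=C, gamma=gamma, epsilon=eps)
        return -cross_val_score(model, X, y, cv=5,
                                scoring="neg_mean_squared_error").mean()

    # A metaheuristic (GA or cuckoo search in the paper) would evolve this
    # population; plain random search keeps the sketch short.
    candidates = np.column_stack([10 ** rng.uniform(-1, 3, 50),    # C
                                  10 ** rng.uniform(-3, 0, 50),    # gamma
                                  10 ** rng.uniform(-3, -1, 50)])  # epsilon
    best = min(map(tuple, candidates), key=fitness)
    print("best (C, gamma, epsilon):", best)
    ```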

  11. A Note on optimal estimation in the presence of outliers

    Directory of Open Access Journals (Sweden)

    John N. Haddad

    2017-06-01

    Full Text Available Haddad, J. 2017. A Note on optimal estimation in the presence of outliers. Lebanese Science Journal, 18(1): 136-141. The basic estimation problem of the mean and standard deviation of a random normal process in the presence of an outlying observation is considered. The value of the outlier is taken as a constraint imposed on the maximization problem of the log likelihood. It is shown that the optimal solution of the maximization problem exists, and expressions for the estimates are given. Applications to estimation in the presence of outliers and outlier detection are discussed and illustrated through a simulation study and an analysis of trade data

  12. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode and codes are optimized for single core computing only. However, the increasing complexity in the power grid models requires more intensive computation. The traditional simulation tools will soon not be able to meet the grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large size state estimation problems within one second and achieve a near-linear speedup of 9,800 with 10,000 cores for contingency analysis application. The performance evaluation is presented to show its effectiveness.

  13. A comparative analysis of particle swarm optimization and differential evolution algorithms for parameter estimation in nonlinear dynamic systems

    International Nuclear Information System (INIS)

    Banerjee, Amit; Abu-Mahfouz, Issam

    2014-01-01

    The use of evolutionary algorithms has been popular in recent years for solving the inverse problem of identifying system parameters given the chaotic response of a dynamical system. The inverse problem is reformulated as a minimization problem, and population-based optimizers such as evolutionary algorithms have been shown to be efficient solvers of the minimization problem. However, to the best of our knowledge, there has been no published work that evaluates the efficacy of the two most popular evolutionary techniques, particle swarm optimization and the differential evolution algorithm, on a wide range of parameter estimation problems. In this paper, the two methods along with their variants (for a total of seven algorithms) are applied to fifteen different parameter estimation problems of varying degrees of complexity. Estimation results are analyzed using nonparametric statistical methods to identify whether an algorithm is statistically superior to others over the class of problems analyzed. Results based on parameter estimation quality suggest that there are significant differences between the algorithms, with the newer, more sophisticated algorithms performing better than their canonical versions. More importantly, significant differences were also found among variants of the particle swarm optimizer and the best performing differential evolution algorithm.
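    To make the reformulation concrete, the sketch below recovers the parameter of a chaotic logistic map by minimizing the mismatch between simulated and observed responses with SciPy's differential evolution; the map and parameter value are illustrative, not one of the paper's fifteen test problems.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def logistic_series(r, x0=0.2, n=60):
        """Simulate the chaotic logistic map x_{k+1} = r * x_k * (1 - x_k)."""
        x = np.empty(n)
        x[0] = x0
        for k in range(n - 1):
            x[k + 1] = r * x[k] * (1.0 - x[k])
        return x

    observed = logistic_series(r=3.9)            # r plays the "unknown" parameter

    def cost(theta):
        """Inverse problem reformulated as minimizing the response mismatch."""
        return np.sum((logistic_series(theta[0]) - observed) ** 2)

    result = differential_evolution(cost, bounds=[(3.5, 4.0)], seed=1)
    print(result.x)                              # should recover r close to 3.9
    ```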

  14. A PSO–GA optimal model to estimate primary energy demand of China

    International Nuclear Information System (INIS)

    Yu Shiwei; Wei Yiming; Wang Ke

    2012-01-01

    To improve estimation efficiency for future projections, the present study proposes a hybrid algorithm, the Particle Swarm Optimization and Genetic Algorithm optimal Energy Demand Estimating (PSO–GA EDE) model, for China. The coefficients of the three forms of the model (linear, exponential, and quadratic) are optimized by PSO–GA using factors that affect demand, such as GDP, population, economic structure, urbanization rate, and energy consumption structure. Based on 20 years of historical data between 1990 and 2009, the simulation results of the proposed model have greater accuracy and reliability than other single optimization methods. Moreover, it can be used with optimal coefficients for the energy demand projections of China. The departure coefficient method is applied to obtain the weights of the three forms of the model and produce a combined prediction. The energy demand of China is projected to be 4.79, 4.04, and 4.48 billion tce ("standard" tons coal equivalent) in 2015, and 6.91, 5.03, and 6.11 billion tce in 2020 under three different scenarios. Further, the projection results are compared with other estimating methods. - Highlights: ► A hybrid PSO–GA optimal energy demand estimating model for China. ► The energy demand of China is estimated up to 2020 under three different scenarios. ► The projection results are compared with other estimating methods.

  15. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions … into the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi…

  16. Optimal estimate of a pure qubit state from Uhlmann-Josza fidelity

    Energy Technology Data Exchange (ETDEWEB)

    Aoki, Manuel Avila, E-mail: manvlk@yahoo.com [Centro Universitario UAEM Valle de Chalco, UAEMex, Edo. de Mexico (Mexico)

    2012-04-15

    In the framework of collective measurements, efforts have been made to reconstruct one-qubit states. Such schemes face an obstacle in the no-cloning theorem, which prevents full reconstruction of a quantum state. Quantum mechanics thus restricts us to estimates of the reconstruction of a pure qubit. We discuss the optimal estimate on the basis of the Uhlmann-Josza fidelity, respecting the limitations imposed by the no-cloning theorem, and derive a realistic optimal expression for the average fidelity. Our formalism also introduces an optimization parameter L. Values close to zero imply full reconstruction of the qubit (i.e., the classical limit), while larger values of L represent good quantum optimization of the qubit estimate. The parameter L is interpreted as the degree of quantumness of the average fidelity associated with the reconstruction. (author)

  17. Minimum K-S estimator using PH-transform technique

    Directory of Open Access Journals (Sweden)

    Somchit Boonthiem

    2016-07-01

    Full Text Available In this paper, we propose an improvement of the minimum Kolmogorov-Smirnov (K-S) estimator using the proportional hazards transform (PH-transform) technique. The experimental data are 47 fire accident records from an insurance company in Thailand. The experiment has two stages: in the first, we minimize the K-S statistic using a grid search technique over nine distributions (Rayleigh, gamma, Pareto, log-logistic, logistic, normal, Weibull, lognormal, and exponential); in the second, we improve the K-S statistic using the PH-transform. The results show that the PH-transform technique can improve the minimum K-S estimator. The algorithm gives a better minimum K-S estimator for seven distributions (Rayleigh, gamma, Pareto, log-logistic, Weibull, lognormal, and exponential), while the minimum K-S estimators of the normal and logistic distributions are unchanged

  18. An optimization planning technique for Suez Canal Network in Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Abou El-Ela, A.A.; El-Zeftawy, A.A.; Allam, S.M.; Atta, Gasir M. [Electrical Engineering Dept., Faculty of Eng., Shebin El-Kom (Egypt)

    2010-02-15

    This paper introduces a proposed optimization technique (POT) for predicting peak load demand and planning transmission line systems. Many traditional methods have been presented for long-term load forecasting of electrical power systems, but their results are approximate. Therefore, the artificial neural network (ANN) technique for long-term peak load forecasting is modified and discussed as a modern technique for long-term load forecasting. The modified technique is applied to the Egyptian electrical network, using its historical data to predict the electrical peak load demand up to the year 2017. This technique is compared with extrapolation of trend curves as a traditional method. The POT is also applied to obtain the optimal planning of transmission lines for the 220 kV Suez Canal Network (SCN) using the ANN technique. Minimization of the transmission network costs is taken as the objective function, while the transmission line (TL) planning constraints are satisfied. The Zafarana site on the Red Sea coast is considered an optimal site for installing large wind farm (WF) units in Egypt. So, the POT is applied to plan both the peak load and the electrical transmission of SCN with and without WF, to assess the impact of WF units on the Egyptian transmission system, considering the reliability constraints which were taken as a separate model in previous techniques. The application to SCN shows the capability and efficiency of the proposed techniques in obtaining the predicted peak load demand and the optimal planning of transmission lines of SCN up to the year 2017. (author)

  19. FORTRAN subroutine for computing the optimal estimate of f(x)

    International Nuclear Information System (INIS)

    Gaffney, P.W.

    1980-10-01

    A FORTRAN subroutine called RANGE is presented that is designed to compute the optimal estimate of a function f given values of the function at n distinct points x_1 < x_2 < ... < x_n and given a bound on one of the derivatives of f. We denote this estimate by Ω. It is optimal in the sense that the error |f - Ω| has the smallest possible error bound

  20. Optimal and sub-optimal post-detection timing estimators for PET

    International Nuclear Information System (INIS)

    Hero, A.O.; Antoniadis, N.; Clinthorne, N.; Rogers, W.L.; Hutchins, G.D.

    1990-01-01

    In this paper the authors derive linear and non-linear approximations to the post-detection likelihood function for scintillator interaction time in nuclear particle detection systems. The likelihood function is the optimal statistic for performing detection and estimation of scintillator events and event times. The authors derive the likelihood function approximations from a statistical model for the post-detection waveform which is common in the optical communications literature and takes account of finite detector bandwidth, random gains, and thermal noise. They then present preliminary simulation results for the associated approximate maximum likelihood timing estimators which indicate that significant MSE improvements may be achieved for low post-detection signal-to-noise ratio

  1. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization has been proposed. Pneumatic control valves are widely used in the process industry. A control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction; to capture the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm inspired by the natural trail-following behaviour of ants, has proven effective in various fields. The parameters of the Stenman model are estimated using ant colony optimization from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, the Stenman model, with known nonlinear structure and unknown parameters, can be estimated.
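    For illustration, the sketch below simulates a common statement of the one-parameter Stenman-type stiction model and recovers its band parameter d by minimizing the output mismatch; a coarse grid search stands in for the paper's ant colony optimizer, and the signals are synthetic.

    ```python
    import numpy as np

    def stenman_valve(u, d):
        """One-parameter stiction model: the valve stem stays stuck until the
        controller output departs from the stuck position by more than d."""
        x = np.empty_like(u)
        x[0] = u[0]
        for k in range(1, len(u)):
            x[k] = u[k] if abs(u[k] - x[k - 1]) > d else x[k - 1]
        return x

    u = np.sin(np.linspace(0.0, 20.0, 400))      # controller output (synthetic)
    measured = stenman_valve(u, d=0.25)          # "plant" data with unknown d

    # A coarse grid search stands in for the paper's ant colony optimizer.
    grid = np.linspace(0.0, 1.0, 201)
    errors = [np.sum((stenman_valve(u, d) - measured) ** 2) for d in grid]
    print("estimated d:", grid[int(np.argmin(errors))])
    ```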

  2. Channel Estimation and Optimal Power Allocation for a Multiple-Antenna OFDM System

    Directory of Open Access Journals (Sweden)

    Yao Kung

    2002-01-01

    Full Text Available We propose combined channel estimation and optimal power allocation approaches for a multiple-antenna orthogonal frequency division multiplexing (OFDM) system in high-speed transmission applications. We develop a least-squares channel estimation approach, derive the performance bound of the estimator, and investigate the optimal training sequences for initial channel acquisition. Based on the channel estimates, the optimal power allocation solution that maximizes the bandwidth efficiency is derived under power and quality of service (QoS) (symbol error rate) constraints. It is shown that combining channel tracking and adaptive power allocation can dramatically enhance the outage capacity of an OFDM multiple-antenna system when severe fading occurs.
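    The least-squares estimate from known training symbols has the familiar closed form h_hat = (X^H X)^{-1} X^H y; a minimal single-antenna sketch (the paper's multiple-antenna OFDM case stacks such systems per subcarrier, and the pilot design and noise level here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 4                                            # channel taps (illustrative)
    h_true = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2.0)

    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2.0)
    train = rng.choice(qpsk, size=64)                # known pilot symbols
    X = np.array([np.roll(train, k) for k in range(L)]).T   # circular convolution matrix
    y = X @ h_true + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))

    # Least-squares channel estimate: h_hat = (X^H X)^{-1} X^H y
    h_hat = np.linalg.solve(X.conj().T @ X, X.conj().T @ y)
    print(np.abs(h_hat - h_true).max())
    ```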

  3. A new Bayesian recursive technique for parameter estimation

    Science.gov (United States)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.

  4. An adaptive dual-optimal path-planning technique for unmanned air vehicles

    Directory of Open Access Journals (Sweden)

    Whitfield Clifford A.

    2016-01-01

    Full Text Available A multi-objective technique for unmanned air vehicle path-planning generation through task allocation has been developed. The dual-optimal path-planning technique generates real-time adaptive flight paths based on available flight windows and environmentally influenced objectives. The environmentally influenced flight condition determines the aircraft's optimal orientation within a downstream virtual window of possible vehicle destinations that is based on the vehicle's kinematics. The intermediate results are then pursued by a dynamic optimization technique to determine the flight path. This path-planning technique is a multi-objective optimization procedure consisting of two goals that does not require additional information to combine the conflicting objectives into a single objective. The technique was applied to solar-regenerative high-altitude long-endurance flight, which can benefit significantly from an adaptive real-time path-planning technique. The objectives were to determine the minimum-power flight paths while maintaining maximum solar power for continual surveillance over an area of interest (AOI). The simulated path generation technique prolonged the flight duration over a sustained-turn loiter flight path by approximately 2 months for a year of flight. The potential for prolonged solar powered flight was consistent for all latitude locations, including 2 months of available flight at 60° latitude, where sustained-turn flight was no longer capable.

  5. Electrostatic afocal-zoom lens design using computer optimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Sise, Omer, E-mail: omersise@gmail.com

    2014-12-15

    Highlights: • We describe the detailed design of a five-element electrostatic afocal-zoom lens. • The simplex optimization is used to optimize lens voltages. • The method can be applied to multi-element electrostatic lenses. - Abstract: Electron optics is the key to the successful operation of electron collision experiments, where well-designed electrostatic lenses are needed to drive the electron beam before and after the collision. In this work, the imaging properties and aberration analysis of an electrostatic afocal-zoom lens design were investigated using a computer optimization technique. We have found a whole new range of voltage combinations that had gone unnoticed until now. A full range of voltage ratios and spherical and chromatic aberration coefficients were systematically analyzed over a range of magnifications between 0.3 and 3.2. The grid-shadow evaluation was also employed to show the effect of spherical aberration. The technique is found to be useful for searching for the optimal configuration in a multi-element lens system.
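    The simplex (Nelder-Mead) step can be sketched generically as minimizing a merit function over electrode voltage ratios; the merit function below is a toy stand-in, not the paper's aberration model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def merit(v):
        """Toy figure of merit over electrode voltage ratios: penalize deviation
        from a target magnification plus a proxy for aberration growth."""
        mag = v[0] / (v[1] + 1e-9)          # illustrative magnification proxy
        aberration = 0.05 * np.sum(v ** 2)  # illustrative aberration proxy
        return (mag - 1.0) ** 2 + aberration

    v0 = np.array([2.0, 1.0, 1.5])          # initial voltage ratios
    res = minimize(merit, v0, method="Nelder-Mead", options={"xatol": 1e-6})
    print(res.x, res.fun)
    ```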

  6. Genetic Spot Optimization for Peak Power Estimation in Large VLSI Circuits

    Directory of Open Access Journals (Sweden)

    Michael S. Hsiao

    2002-01-01

    Full Text Available Estimating peak power involves optimization of the circuit's switching function. The switching of a given gate depends not only on the output capacitance of the node, but also heavily on the gate delays in the circuit, since multiple switching events can result from uneven circuit delay paths. Genetic spot expansion and optimization are proposed in this paper to estimate tight peak power bounds for large sequential circuits. The optimization spot shifts and expands dynamically based on the maximum power potential (MPP) of the nodes under optimization. Four genetic spot optimization heuristics are studied for sequential circuits. Experimental results showed that, on average, 70.7% tighter peak power bounds were achieved for large sequential benchmark circuits in short execution times.

  7. Efficient reanalysis techniques for robust topology optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Sigmund, Ole; Lazarov, Boyan Stefanov

    2012-01-01

    … efficient robust topology optimization procedures based on reanalysis techniques. The approach is demonstrated on two compliant mechanism design problems where robust design is achieved by employing either a worst case formulation or a stochastic formulation. It is shown that the time spent on finite…

  8. Estimating the optimal growth-maximising public debt threshold for ...

    African Journals Online (AJOL)

    This paper attempts to estimate an optimal growth-maximising public debt threshold for Zimbabwe. The public debt threshold is estimated by assessing the relationship between public debt and economic growth. The analysis is undertaken to determine the tipping point beyond which increases in public debt adversely affect ...

  9. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    Science.gov (United States)

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  10. Optimizing estimates of annual variations and trends in geocenter motion and J2 from a combination of GRACE data and geophysical models

    Science.gov (United States)

    Sun, Yu; Riva, Riccardo; Ditmar, Pavel

    2016-11-01

    The focus of the study is optimizing the technique for estimating geocenter motion and variations in J2 by combining data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission with output from an Ocean Bottom Pressure model and a Glacial Isostatic Adjustment (GIA) model. First, we conduct an end-to-end numerical simulation study. We generate input time-variable gravity field observations by perturbing a synthetic Earth model with realistically simulated errors. We show that it is important to avoid large errors at short wavelengths and signal leakage from land to ocean, as well as to account for self-attraction and loading effects. Second, the optimal implementation strategy is applied to real GRACE data. We show that the estimates of annual amplitude in geocenter motion are in line with estimates from other techniques, such as satellite laser ranging (SLR) and global GPS inversion. At the same time, annual amplitudes of C10 and C11 are increased by about 50% and 20%, respectively, compared to estimates based on Swenson et al. (2008). Estimates of J2 variations are about 15% larger than SLR results in terms of annual amplitude. Linear trend estimates are dependent on the adopted GIA model but still comparable to some SLR results.

  11. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operations. Correct state estimation improves navigation accuracy and helps complete the flight mission safely. One sensor configuration used in UAV state estimation is the Attitude and Heading Reference System (AHRS), with either an Extended Kalman Filter (EKF) or a feedback controller applied. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
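    A minimal Kalman filter sketch in the AHRS spirit: a rate gyro drives the prediction and an accelerometer-derived tilt angle drives the update. The two-state linear model and noise levels are illustrative; a full AHRS/EKF would estimate 3D attitude.

    ```python
    import numpy as np

    dt = 0.01
    F = np.array([[1.0, -dt],               # angle integrates (rate - bias)
                  [0.0, 1.0]])              # gyro bias modeled as a random walk
    B = np.array([dt, 0.0])                 # gyro rate enters as a control input
    H = np.array([[1.0, 0.0]])              # accelerometer observes the angle
    Q = np.diag([1e-5, 1e-7])               # illustrative noise covariances
    R = np.array([[1e-2]])

    def kf_step(x, P, gyro_rate, accel_angle):
        # Predict using the gyro measurement as the control input.
        x = F @ x + B * gyro_rate
        P = F @ P @ F.T + Q
        # Update with the accelerometer-derived tilt angle.
        y = accel_angle - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.zeros(2), np.eye(2)           # state: [angle, gyro_bias]
    x, P = kf_step(x, P, gyro_rate=0.1, accel_angle=0.001)
    print(x)
    ```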

  13. Optimal causal inference: estimating stored information and approximating causal architecture.

    Science.gov (United States)

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.

  14. Field Application of Cable Tension Estimation Technique Using the h-SI Method

    Directory of Open Access Journals (Sweden)

    Myung-Hyun Noh

    2015-01-01

    Full Text Available This paper investigates the field applicability of a new system identification technique for estimating the tensile force in cables of long-span bridges. The newly proposed h-SI method, combining a sensitivity-updating algorithm with an advanced hybrid microgenetic algorithm, not only avoids the trap of local minima at the initial search stage but also finds the optimal solution with better numerical efficiency than existing methods. First, this paper reviews the tension estimation procedure through a theoretical formulation. Secondly, the validity of the proposed technique is numerically examined using a set of dynamic data obtained from benchmark numerical samples considering the effects of sag extensibility and bending stiffness of a sag-cable system. Finally, the feasibility of the proposed method is investigated using actual field data from the cable-stayed Seohae Bridge. The test results show that existing methods require precise initial data in advance, whereas the proposed method is not affected by such initial information. In particular, the proposed method improves accuracy and the convergence rate toward final values. Consequently, the proposed method can be more effective than existing methods in characterizing the tensile force variation of cable structures.
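    For context, a common first-cut tension estimate ignores the sag and bending-stiffness effects the paper addresses and treats the cable as a taut string, whose n-th natural frequency satisfies f_n = (n / 2L) * sqrt(T / m); refined identification methods such as h-SI improve on this baseline. A minimal sketch with illustrative values:

    ```python
    def taut_string_tension(f1_hz, length_m, mass_per_m):
        """First-mode taut-string estimate: f1 = (1 / (2 L)) * sqrt(T / m),
        so T = m * (2 * L * f1)**2 (sag and bending stiffness neglected)."""
        return mass_per_m * (2.0 * length_m * f1_hz) ** 2

    # Illustrative stay-cable values, not data from the Seohae Bridge tests.
    print(taut_string_tension(f1_hz=1.2, length_m=100.0, mass_per_m=60.0))
    ```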

  15. Application of optimal estimation techniques to FFTF decay heat removal analysis

    International Nuclear Information System (INIS)

    Nutt, W.T.; Additon, S.L.; Parziale, E.A.

    1979-01-01

    The verification and adjustment of plant models for decay heat removal analysis using a mix of engineering judgment and formal techniques from control theory are discussed. The formal techniques facilitate dealing with typical test data which are noisy, redundant and do not measure all of the plant model state variables directly. Two pretest examples are presented. 5 refs

  16. Estimating Global Seafloor Total Organic Carbon Using a Machine Learning Technique and Its Relevance to Methane Hydrates

    Science.gov (United States)

    Lee, T. R.; Wood, W. T.; Dale, J.

    2017-12-01

    Empirical and theoretical models of sub-seafloor organic matter transformation, degradation, and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions, may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse, and large regions of the seafloor remain unmeasured. To provide estimates in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. The global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and their uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data-deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
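    The KNN interpolation pattern is straightforward to sketch; the predictors and target below are synthetic stand-ins for the actual gridded stack, and the nearest-neighbor distance at the end mirrors the "inexperience" idea.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    # Hypothetical gridded predictors (e.g. bathymetry, distance to coast,
    # crustal heat flow) standing in for the actual predictor stack.
    X_known = rng.normal(size=(500, 3))
    toc_known = (1.0 + 0.5 * X_known[:, 0] - 0.3 * X_known[:, 1]
                 + 0.1 * rng.normal(size=500))

    knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
    knn.fit(X_known, toc_known)

    X_unsampled = rng.normal(size=(5, 3))     # cells lacking TOC observations
    print(knn.predict(X_unsampled))

    # "Inexperience": distance in predictor space to the single nearest
    # observation flags where new measurements would help most.
    dist, _ = knn.kneighbors(X_unsampled, n_neighbors=1)
    print(dist.ravel())
    ```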

  17. Remote optimal state estimation over communication channels with random delays

    KAUST Repository

    Mahmoud, Magdi S.

    2014-01-22

    This paper considers the optimal estimation of linear systems over unreliable communication channels with random delays. In this work, it is assumed that the system to be estimated is far away from the filter. The observations of the system are encapsulated without time stamps and then transmitted to the network node at which the filter is located. The probabilities of the time delays are assumed to be known. An event-driven estimation scheme is applied, and the estimate of the states is updated only at those time instants when a measurement arrives. To capture the features of the communication channel, the system considered is augmented, and the arrived measurements are regarded as uncertain observations of the augmented system. The corresponding optimal estimation algorithm is proposed, and a numerical simulation demonstrates the performance of this work. © 2014 The authors. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  18. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    Science.gov (United States)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

    A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.

  19. Optimal quantum state estimation with use of the no-signaling principle

    International Nuclear Information System (INIS)

    Han, Yeong-Deok; Bae, Joonwoo; Wang Xiangbin; Hwang, Won-Young

    2010-01-01

    A simple derivation of the optimal state estimation of a quantum bit was obtained by using the no-signaling principle. In particular, the no-signaling principle determines a unique form of the guessing probability independent of figures of merit, such as the fidelity or information gain. This proves that the optimal estimation for a quantum bit can be achieved by the same measurement for almost all figures of merit.

  20. Optimal allocation of sensors for state estimation of distributed parameter systems

    International Nuclear Information System (INIS)

    Sunahara, Yoshifumi; Ohsumi, Akira; Mogami, Yoshio.

    1978-01-01

    The purpose of this paper is to present a method for finding the optimal allocation of sensors for state estimation of linear distributed parameter systems. The method is based on the criterion that the error covariance associated with the state estimate be minimal with respect to the allocation of the sensors. A theorem is established giving the sufficient condition for optimizing the allocation of sensors so as to minimize the error covariance approximated by a modal expansion. The remainder of the paper is devoted to illustrating important phases of the general theory of the optimal measurement allocation problem. To this end, several examples are demonstrated, including extensive discussions of the mutual relation between the optimal allocation and the dynamics of the sensors. (author)

  2. A novel technique for active vibration control, based on optimal

    Indian Academy of Sciences (India)

    In the last few decades, researchers have proposed many control techniques to suppress unwanted vibrations in a structure. In this work, a novel and simple technique is proposed for the active vibration control. In this technique, an optimal tracking control is employed to suppress vibrations in a structure by simultaneously ...

  3. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  4. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
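    Brown's double exponential smoothing, used at the prediction step above, cascades two exponential smoothers and extrapolates a level-plus-trend forecast; a minimal sketch with an illustrative smoothing constant:

    ```python
    def brown_des_forecast(series, alpha=0.3, horizon=1):
        """Brown's double exponential smoothing: two cascaded exponential
        smoothers give a level and trend, extrapolated 'horizon' steps ahead."""
        s1 = s2 = series[0]
        for x in series[1:]:
            s1 = alpha * x + (1.0 - alpha) * s1
            s2 = alpha * s1 + (1.0 - alpha) * s2
        level = 2.0 * s1 - s2
        trend = alpha / (1.0 - alpha) * (s1 - s2)
        return level + horizon * trend

    print(brown_des_forecast([1.0, 1.2, 1.4, 1.6, 1.8], alpha=0.5))
    ```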

  5. Tabu search, a versatile technique for the functions optimization

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    2003-01-01

    The basic elements of the Tabu search technique are presented, with emphasis on its qualities in comparison with traditional descent-type optimization methods. Some modifications that have been implemented in the technique over time to make it more robust are then sketched. Finally, some areas where this technique has been applied with successful results are presented. (Author)
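    A bare-bones sketch of the core mechanism: always move to the best non-tabu neighbor, even uphill, so the search can escape the local minima that trap pure descent methods (the objective and neighborhood here are illustrative):

    ```python
    import math

    def tabu_search(f, x0, neighbors, iters=200, tenure=10):
        """Bare-bones tabu search over a discrete neighborhood."""
        best = current = x0
        tabu = [x0]                            # short-term memory of visited moves
        for _ in range(iters):
            candidates = [n for n in neighbors(current) if n not in tabu]
            if not candidates:
                break
            current = min(candidates, key=f)   # best neighbor, even if uphill
            tabu.append(current)
            if len(tabu) > tenure:
                tabu.pop(0)                    # oldest entry leaves the tabu list
            if f(current) < f(best):
                best = current
        return best

    f = lambda x: 0.01 * x * x + 5.0 * math.cos(x)   # multimodal test function
    print(tabu_search(f, x0=40, neighbors=lambda x: [x - 2, x - 1, x + 1, x + 2]))
    ```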

  6. COMPARISON OF RECURSIVE ESTIMATION TECHNIQUES FOR POSITION TRACKING RADIOACTIVE SOURCES

    International Nuclear Information System (INIS)

    Muske, K.; Howse, J.

    2000-01-01

    This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity

  7. A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections

    International Nuclear Information System (INIS)

    Zhang, You; Yin, Fang-Fang; Ren, Lei; Segars, W. Paul

    2013-01-01

    Purpose: To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy.Methods: Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and “ground-truth” onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy.Results: For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in prior images and “ground-truth” onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm). Using orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion

  8. Estimation of optimal nasotracheal tube depth in adult patients.

    Science.gov (United States)

    Ji, Sung-Mi

    2017-12-01

    The aim of this study was to estimate the optimal depth of nasotracheal tube placement. We enrolled 110 patients scheduled to undergo oral and maxillofacial surgery, requiring nasotracheal intubation. After intubation, the depth of tube insertion was measured. The neck circumference and distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch were measured. To estimate optimal tube depth, correlation and regression analyses were performed using clinical and anthropometric parameters. The mean tube depth was 28.9 ± 1.3 cm in men (n = 62), and 26.6 ± 1.5 cm in women (n = 48). Tube depth significantly correlated with height (r = 0.735, P < 0.001). Distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch correlated with depth of the endotracheal tube (r = 0.363, r = 0.362, and r = 0.546, P < 0.05). The tube depth also correlated with the sum of these distances (r = 0.646, P < 0.001). We devised the following formula for estimating tube depth: 19.856 + 0.267 × sum of the three distances (R² = 0.432, P < 0.001). The optimal tube depth for nasotracheally intubated adult patients correlated with height and sum of the distances from nares to tragus, tragus to angle of the mandible, and angle of the mandible to sternal notch. The proposed equation would be a useful guide to determine optimal nasotracheal tube placement.
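    The regression equation is easy to apply directly; a short worked sketch, using the study's coefficients but illustrative distances rather than patient data:

    ```python
    def nasotracheal_depth_cm(nares_tragus, tragus_mandible, mandible_sternal):
        """Study's regression: depth = 19.856 + 0.267 * (sum of distances, cm)."""
        return 19.856 + 0.267 * (nares_tragus + tragus_mandible + mandible_sternal)

    # Illustrative distances in cm, not patient data from the study.
    print(nasotracheal_depth_cm(11.0, 7.5, 14.0))    # about 28.5 cm
    ```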

  9. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    Science.gov (United States)

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method can mitigate estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the single DBN-DNN regression estimator, the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These…
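    The bootstrap-plus-CI pattern can be sketched generically; a linear base learner and synthetic data stand in for the DBN-DNN and the oscillometric features, and the percentile interval mirrors the CI construction described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical oscillometric features and reference SBP values; sizes and
    # the linear base learner are illustrative stand-ins for the DBN-DNN.
    X = rng.normal(size=(40, 6))
    y = 120.0 + X @ rng.normal(size=6) + rng.normal(scale=2.0, size=40)

    def bootstrap_predictions(X, y, x_new, n_boot=500):
        """Bagging: each bootstrap resample trains one learner and yields one
        prediction; the spread of predictions gives a confidence interval."""
        preds = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, len(y), size=len(y))
            coef, *_ = np.linalg.lstsq(X[idx], y[idx] - y[idx].mean(), rcond=None)
            preds[b] = y[idx].mean() + x_new @ coef
        return preds

    preds = bootstrap_predictions(X, y, x_new=np.zeros(6))
    lo, hi = np.percentile(preds, [2.5, 97.5])
    print(f"SBP estimate {preds.mean():.1f} mmHg, 95% CI ({lo:.1f}, {hi:.1f})")
    ```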

  10. Airfoil shape optimization using non-traditional optimization technique and its validation

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2014-07-01

    Full Text Available Computational fluid dynamics (CFD) is one of the computer-based solution methods most widely employed in aerospace engineering. The computational power and time required to carry out the analysis increase as the fidelity of the analysis increases. Aerodynamic shape optimization has become a vital part of aircraft design in recent years. Generally, to optimize an airfoil we have to describe it, and for that we need at least a hundred points of x and y co-ordinates. It is really difficult to optimize airfoils with this large number of co-ordinates. Nowadays many different parameterization schemes are used to describe a general airfoil, such as B-spline and PARSEC. The main goal of these parameterization schemes is to reduce the number of needed parameters as far as possible while controlling the important aerodynamic features effectively. Here the work has been done on the PARSEC geometry representation method. The objective of this work is to introduce a way of describing a general airfoil using twelve parameters by representing its shape as a polynomial function, and to apply a genetic algorithm to optimize the aerodynamic characteristics of a general airfoil for specific conditions. A MATLAB program has been developed to implement PARSEC, the panel technique, and the genetic algorithm. This program has been tested on a standard NACA 2411 airfoil, which was optimized to improve its coefficient of lift. Pressure distributions and coefficients of lift for the airfoil geometries have been calculated using the panel method. The optimized airfoil has an improved coefficient of lift compared to the original one. The optimized airfoil is validated using wind tunnel data.

  11. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    Science.gov (United States)

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several key design aspects that have been considered in the literature on wireless sensor networks (WSNs). It targets researchers who are new to mathematical optimization tools and wish to apply them to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and an overview of some of its techniques that can be helpful in WSN design problems. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and the experimental procedures in some of these papers. The analyses and assessments, provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a design tool.

  12. Optimal replacement time estimation for machines and equipment based on cost function

    Directory of Open Access Journals (Sweden)

    J. Šebo

    2013-01-01

    Full Text Available The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines for which the optimization method is usable are those of metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). Parameters of the models were calculated through the least squares method. Model testing shows that all are good enough, so for estimation of the optimal replacement time it is sufficient to use the simpler models. In addition to testing the models, we developed a method (tested on a selected simple model) which enables us, in real time and with a limited data set, to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.

  13. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and performs well for parameter estimation of nonlinear systems.

  14. Size-exclusion chromatography (HPLC-SEC) technique optimization by simplex method to estimate molecular weight distribution of agave fructans.

    Science.gov (United States)

    Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María

    2017-12-15

    Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional food, so practical analysis tools to characterize them are needed. In view of the importance of molecular weight for the functional properties of agave fructans, this study set out to optimize a method for determining their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7°C, tri-distilled water without salt adjusted to pH 5.4, and a flow rate of 0.36 mL/min. The exclusion range covers degrees of polymerization from 1 to 49 (180–7966 Da). This proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. Industrial applications of this technique include quality control, the study of fractionation processes, and determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Analysis of wireless sensor network topology and estimation of optimal network deployment by deterministic radio channel characterization.

    Science.gov (United States)

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2015-02-05

    One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer on the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.

  17. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems.

    Science.gov (United States)

    Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano

    2012-05-10

    The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima into which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to the global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.

  18. Material saving by means of CWR technology using optimization techniques

    Science.gov (United States)

    Pérez, Iñaki; Ambrosio, Cristina

    2017-10-01

    Material saving is currently a must for forging companies, as material costs account for up to 50% of the cost of parts made of steel and up to 90% for other materials such as titanium. For long products, cross wedge rolling (CWR) technology can be used to obtain forging preforms with a suitable distribution of material along the axis. However, defining the correct preform dimensions is not an easy task and may require an intensive trial-and-error campaign. To speed up the preform definition, it is necessary to apply optimization techniques to Finite Element Models (FEM) able to reproduce the material behaviour when being rolled. Meta-model Assisted Evolution Strategies (MAES), which combine evolutionary algorithms with Kriging meta-models, are implemented in the FORGE® software and significantly reduce the computational cost of optimization. The paper shows the application of these optimization techniques to the definition of the right preform for a shaft from a vehicle in the agricultural sector. First, the current forging process, in which the forging preform is obtained by an open die forging operation, is shown. Then, the CWR preform optimization is carried out using the above-mentioned optimization techniques. The objective is to reduce the initial billet weight as much as possible, so the reduction in flash weight due to the use of the proposed preform is calculated. Finally, a simulation of the CWR process for the defined preform is carried out to check that the most common CWR failures (necking, spirals, etc.) do not appear in this case.

  19. Novel optimization technique of isolated microgrid with hydrogen energy storage.

    Science.gov (United States)

    Beshr, Eman Hassan; Abdelghany, Hazem; Eteiba, Mahmoud

    2018-01-01

    This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG) and Photovoltaic (PV) arrays, supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization through the non-dominated sorting genetic algorithm is used to suit the load requirements under the given constraints. A novel multi-objective flower pollination algorithm is utilized to check the results. The pros and cons of the two optimization techniques are compared and evaluated. An isolated microgrid is modelled using the MATLAB software package, and dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed under both summer and winter conditions, and three case studies are presented for each condition. The modified IEEE 15-bus system is used to validate the proposed algorithm.
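
    The core of the NSGA-II approach mentioned above is non-dominated sorting of candidate solutions. A minimal sketch, with made-up (fuel cost, line loss) objective pairs standing in for the paper's microgrid dispatch model:

        # Minimal sketch of the non-dominated sorting step at the heart of
        # NSGA-II, applied to invented (fuel cost, line loss) objective pairs.
        import numpy as np

        def pareto_front(F):
            """Return indices of non-dominated rows of F (all objectives minimized)."""
            idx = []
            for i, fi in enumerate(F):
                dominated = any(np.all(fj <= fi) and np.any(fj < fi) for fj in F)
                if not dominated:
                    idx.append(i)
            return idx

        rng = np.random.default_rng(0)
        F = rng.random((50, 2))                     # 50 candidate dispatch solutions
        print(pareto_front(F))                      # trade-off set: cost vs. losses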

  20. Novel optimization technique of isolated microgrid with hydrogen energy storage.

    Directory of Open Access Journals (Sweden)

    Eman Hassan Beshr

    Full Text Available This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG) and Photovoltaic (PV) arrays, supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization through the non-dominated sorting genetic algorithm is used to suit the load requirements under the given constraints. A novel multi-objective flower pollination algorithm is utilized to check the results. The pros and cons of the two optimization techniques are compared and evaluated. An isolated microgrid is modelled using the MATLAB software package, and dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed under both summer and winter conditions, and three case studies are presented for each condition. The modified IEEE 15-bus system is used to validate the proposed algorithm.

  1. Evaluation of mfcc estimation techniques for music similarity

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Christensen, Mads Græsbøll; Murthi, Manohar

    2006-01-01

    Spectral envelope parameters in the form of mel-frequency cepstral coefficients are often used for capturing timbral information of music signals in connection with genre classification applications. In this paper, we evaluate mel-frequency cepstral coefficient (MFCC) estimation techniques … Independent linear prediction and MVDR spectral estimators did not exhibit any statistically significant improvement over MFCCs based on the simpler FFT.

  2. TECHNIQUE OF OPTIMAL AUDIT PLANNING FOR INFORMATION SECURITY MANAGEMENT SYSTEM

    Directory of Open Access Journals (Sweden)

    F. N. Shago

    2014-03-01

    Full Text Available The growing complexity of information security management systems (ISMS) calls for improvements in the scientific and methodological apparatus for auditing these systems. Planning is an important and determining part of ISMS auditing. Audit efficiency is defined by the ratio of the achieved quality indicators to the resources spent. Thus, developing methods and techniques for optimizing audit planning, making it possible to increase audit effectiveness, is an important and urgent task. The proposed technique makes it possible to distribute planning time and material resources optimally across audit stages on the basis of a dynamics model of ISMS quality. A special feature of the proposed approach is the use of both a priori and a posteriori data for the initial audit planning, as well as adjustment of the plan after each audit event. This makes it possible to optimize the use of audit resources in accordance with the selected criteria. Application examples of the technique are given for planning an audit of an organization's information security management system. Computational experiments based on the proposed technique showed that audit time (cost) can be reduced by 10-15% and, consequently, the quality assessments obtained through audit resource allocation can be improved relative to well-known audit planning methods.

  3. Energy, exergy, economic (3E) analyses and multi-objective optimization of vapor absorption heat transformer using NSGA-II technique

    International Nuclear Information System (INIS)

    Jain, Vaibhav; Sachdeva, Gulshan

    2017-01-01

    Highlights: • The study includes energy, exergy and economic analyses of an absorption heat transformer. • It addresses a multi-objective optimization study using the NSGA-II technique. • Total annual cost and total exergy destruction are optimized simultaneously. • Results with the multi-objective optimized design are more acceptable than the others. - Abstract: The present paper addresses the energy, exergy and economic (3E) analyses of an absorption heat transformer (AHT) working with the LiBr-H2O fluid pair. The heat exchangers, namely the absorber, condenser, evaporator, generator and solution heat exchanger, are designed for the size and cost estimation of the AHT. Later, the effect of the operating variables on system performance, size and cost is examined. Simulation studies showed a conflict between the thermodynamic and economic performance of the system. Heat exchangers with lower investment cost showed high irreversible losses and vice versa. Thus, the operating variables of the system are determined economically as well as thermodynamically by implementing the non-dominated sorting genetic algorithm-II (NSGA-II) technique of multi-objective optimization. In the present work, if the cost-based optimized design is chosen, total exergy destruction is 2.4% higher than its minimum possible value; whereas, if the exergy-based optimized design is chosen, total annual cost is 6.1% higher than its minimum possible value. On the other hand, total annual cost and total exergy destruction are only 1.0% and 0.8%, respectively, above their minimum possible values with the multi-objective optimized design. Thus, the multi-objective optimized design of the AHT is a better outcome than any single-objective optimized design.

  4. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique are compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.

  5. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard

    1991-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique are compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.

  6. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    1992-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique are compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.

  7. Optimal complex exponentials BEM and channel estimation in doubly selective channel

    International Nuclear Information System (INIS)

    Song, Lijun; Lei, Xia; Yu, Feng; Jin, Maozhu

    2016-01-01

    Over doubly selective channels, an optimal complex exponential basis expansion model (CE-BEM) is required to characterize the transmission in the transform domain, in order to reduce the huge number of parameters that must be estimated when the impulse response is estimated directly in the time domain. This paper proposes an improved CE-BEM to alleviate the high-frequency sampling error caused by the conventional CE-BEM. On the one hand, with the improved CE-BEM the sampling points fall within the Doppler spread spectrum and the maximum sampling frequency equals the maximum Doppler shift. On the other hand, we optimize the basis functions and the basis dimension of the CE-BEM, and obtain a closed-form solution of the EM-based channel estimation operator by exploiting the optimal BEM. Finally, the numerical results and theoretical analysis show that the basis dimension depends mainly on the maximum Doppler shift and the signal-to-noise ratio (SNR). For a fixed number of pilot symbols, a higher basis dimension gives a smaller modeling error but reduced parameter estimation accuracy, so a tradeoff between modeling error and estimation accuracy is needed; once the basis dimension is fixed, the basis functions determine how accurately the Doppler spread spectrum is described.
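
    A rough sketch of the basis-expansion idea: a time-varying channel tap is projected onto a small set of complex exponentials, so only a few coefficients need estimating per block. The block length, basis order and Doppler frequencies below are invented for illustration, and a plain least-squares fit replaces the paper's EM-based estimator.

        # Sketch of a complex-exponential basis expansion model (CE-BEM) fit:
        # Q+1 coefficients describe N samples of a Doppler-spread channel tap.
        import numpy as np

        N, Q = 128, 4                               # block length, basis order
        n = np.arange(N)
        rng = np.random.default_rng(0)

        # synthetic Doppler-spread tap: a few random complex sinusoids
        f_dopp = np.array([-0.012, 0.004, 0.010])
        h = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) @ \
            np.exp(2j * np.pi * np.outer(f_dopp, n))

        # CE-BEM basis: exponentials at frequencies (q - Q/2)/N, q = 0..Q
        B = np.exp(2j * np.pi * np.outer(n, np.arange(Q + 1) - Q / 2) / N)
        c = np.linalg.lstsq(B, h, rcond=None)[0]    # BEM coefficients
        print(np.linalg.norm(h - B @ c) / np.linalg.norm(h))  # modeling error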

  8. Application of Advanced Particle Swarm Optimization Techniques to Wind-thermal Coordination

    DEFF Research Database (Denmark)

    Singh, Sri Niwas; Østergaard, Jacob; Yadagiri, J.

    2009-01-01

    … wind-thermal coordination algorithm is necessary to determine the optimal proportion of wind and thermal generator capacity that can be integrated into the system. In this paper, four versions of Particle Swarm Optimization (PSO) techniques are proposed for solving the wind-thermal coordination problem …

  9. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

    Full Text Available In this paper, a sensitivity-constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain a state estimation solution. In this approach, sensitive buses along with zero injection buses (ZIB) are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are identified from the mean of bus voltages as load is increased consistently by up to 50%, and are ranked in order to place PMUs. Sensitivity-constrained optimal PMU allocation under single-line and no-line contingencies is considered in the observability analysis to ensure protection and control of the power system under abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMU allocations in the network. This paper presents optimal allocation of PMUs at sensitive buses with zero injection modeling, considering cost criteria and redundancy, to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on the IEEE 14-, 30- and 57-bus systems, and the results obtained are compared with traditional and other state estimation methods available in the literature to demonstrate the effectiveness of the proposed method.
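
    A minimal sketch of PMU placement as an integer linear program: place the fewest PMUs so that every bus is observed by itself or a neighbor. The 7-bus topology is made up, and the sensitivity ranking, ZIB modeling and contingency constraints of the paper are omitted.

        # Sketch of optimal PMU placement as an integer linear program.
        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 5)]
        n = 7
        A = np.eye(n)                               # a PMU observes its own bus...
        for i, j in edges:                          # ...and all adjacent buses
            A[i, j] = A[j, i] = 1

        res = milp(c=np.ones(n),                    # minimize number of PMUs
                   constraints=LinearConstraint(A, lb=1, ub=np.inf),
                   integrality=np.ones(n),
                   bounds=Bounds(0, 1))
        print(np.flatnonzero(res.x > 0.5))          # chosen PMU buses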

  10. Simulation of microcirculatory hemodynamics: estimation of boundary condition using particle swarm optimization.

    Science.gov (United States)

    Pan, Qing; Wang, Ruofan; Reglin, Bettina; Fang, Luping; Pries, Axel R; Ning, Gangmin

    2014-01-01

    Estimation of the boundary conditions is a critical problem in simulating hemodynamics in microvascular networks. This paper proposes a boundary estimation strategy based on a particle swarm optimization (PSO) algorithm, which aims to minimize the number of vessels whose flow direction is inverted with respect to the experimental observation. The algorithm takes the boundary values as the particle swarm and updates the positions of the particles iteratively to approach the optimization target. The method was tested on a real rat mesenteric network. With random initial boundary values, the method achieved a minimum of 9 segments with inverted flow direction in a network of 546 vessels. Compared with the reported literature, the current work has the advantage of a better fit with experimental observations and is more suitable for the boundary estimation problem in pulsatile hemodynamic models owing to its experiment-based choice of optimization target.
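
    A minimal sketch of the PSO update rule itself, minimizing a generic test function; reproducing the paper's objective (the count of inverted flow directions) would require a full network hemodynamics simulator.

        # Minimal particle swarm optimization with the standard velocity rule.
        import numpy as np

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (n, dim))        # particle positions (boundaries)
            v = np.zeros_like(x)
            pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
            for _ in range(iters):
                g = pbest[np.argmin(pval)]          # global best
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pval
                pbest[better], pval[better] = x[better], fx[better]
            return pbest[np.argmin(pval)]

        print(pso(lambda z: np.sum(z ** 2), dim=4))   # converges near the zero vector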

  11. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
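
    A sketch of the one-step local linear approximation for a SCAD-penalized linear regression: start from an initial Lasso fit, weight each coefficient by the SCAD penalty derivative, and solve the resulting weighted Lasso by column rescaling. The tuning constants and data are illustrative only, not the paper's.

        # One-step local linear approximation (LLA) for SCAD-penalized regression.
        import numpy as np
        from sklearn.linear_model import Lasso

        def scad_deriv(b, lam, a=3.7):
            b = np.abs(b)
            return np.where(b <= lam, lam,
                            np.maximum(a * lam - b, 0) / (a - 1))

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 10))
        beta = np.array([3.0, -2.0, 1.5] + [0.0] * 7)
        y = X @ beta + rng.standard_normal(200)

        lam = 0.1
        b0 = Lasso(alpha=lam).fit(X, y).coef_       # initial estimator
        # LLA weights; the small floor avoids divide-by-zero for coefficients
        # that SCAD leaves (essentially) unpenalized
        w = np.maximum(scad_deriv(b0, lam), 1e-2 * lam) / lam
        Xw = X / w                                  # weighted Lasso via rescaling
        b1 = Lasso(alpha=lam).fit(Xw, y).coef_ / w
        print(np.round(b1, 2))                      # near-oracle sparse estimate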

  12. THD Minimization from H-Bridge Cascaded Multilevel Inverter Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    MUDASIR AHMED MEMON

    2017-01-01

    Full Text Available In this paper, a PSO (Particle Swarm Optimization) based technique is proposed to derive optimized switching angles that minimize the THD (Total Harmonic Distortion) and reduce the effect of selected low-order non-triplen harmonics at the output of the multilevel inverter. Conventional harmonic elimination techniques have plenty of limitations, and other heuristic techniques also fail to provide satisfactory results. In this paper, a single-phase symmetrical cascaded H-bridge 11-level multilevel inverter is considered, and the proposed algorithm is utilized to obtain the optimized switching angles that reduce the effect of the 5th, 7th, 11th and 13th non-triplen harmonics in the output voltage of the multilevel inverter. Simulation results indicate that this technique outperforms other methods in terms of minimizing THD and provides a high-quality output voltage waveform.
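
    A rough sketch of the optimization problem behind this record: the staircase output of a 5-cell (11-level) inverter has harmonic amplitudes proportional to (1/h)·Σ cos(hθi), and the switching angles are chosen to minimize THD. SciPy's differential evolution is used here as a stand-in for the paper's PSO.

        # THD minimization over switching angles of an 11-level CHB inverter.
        import numpy as np
        from scipy.optimize import differential_evolution

        H = np.arange(3, 50, 2)                     # odd harmonics 3..49

        def thd(theta):
            theta = np.sort(theta)                  # keep 0 < t1 < ... < t5 < pi/2
            v1 = np.sum(np.cos(theta))              # fundamental (up to 4Vdc/pi)
            vh = np.cos(np.outer(H, theta)).sum(axis=1) / H
            return np.sqrt(np.sum(vh ** 2)) / v1

        res = differential_evolution(thd, bounds=[(0.01, np.pi / 2 - 0.01)] * 5,
                                     seed=1)
        print(np.degrees(np.sort(res.x)), res.fun)  # angles [deg] and THD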

  13. THD minimization from h-bridge cascaded multilevel inverter using particle swarm optimization technique

    International Nuclear Information System (INIS)

    Memon, M.A.; Memon, S.; Khan, S.

    2017-01-01

    In this paper, a PSO (Particle Swarm Optimization) based technique is proposed to derive optimized switching angles that minimize the THD (Total Harmonic Distortion) and reduce the effect of selected low-order non-triplen harmonics at the output of the multilevel inverter. Conventional harmonic elimination techniques have plenty of limitations, and other heuristic techniques also fail to provide satisfactory results. In this paper, a single-phase symmetrical cascaded H-bridge 11-level multilevel inverter is considered, and the proposed algorithm is utilized to obtain the optimized switching angles that reduce the effect of the 5th, 7th, 11th and 13th non-triplen harmonics in the output voltage of the multilevel inverter. Simulation results indicate that this technique outperforms other methods in terms of minimizing THD and provides a high-quality output voltage waveform. (author)

  14. Comparison of techniques for estimating herbage intake by grazing dairy cows

    NARCIS (Netherlands)

    Smit, H.J.; Taweel, H.Z.; Tas, B.M.; Tamminga, S.; Elgersma, A.

    2005-01-01

    For estimating herbage intake during grazing, the traditional sward cutting technique was compared in grazing experiments in 2002 and 2003 with the recently developed n-alkanes technique and with the net energy method. The first method estimates herbage intake by the difference between the herbage …

  15. Operator support through modern optimal estimation and control

    International Nuclear Information System (INIS)

    Burdick, G.R.

    1980-01-01

    Applications of Modern Optimal Estimation and Control Theories are late in coming to the nuclear industry. Some features of the theories that might be exploited in nuclear systems applications are described. Activities at the Idaho National Engineering Laboratory relating to operator support using those theories are identified and some implementation challenges are discussed.

  16. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    … for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations …

  17. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.

  18. A new estimation technique of sovereign default risk

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Soytaş

    2016-12-01

    Full Text Available Using the fixed-point theorem, sovereign default models are solved by numerical value function iteration and calibration methods, which, due to their computational constraints, greatly limit the models' quantitative performance and forgo country-specific quantitative projection. By applying the Hotz-Miller estimation technique (Hotz and Miller, 1993), often used in the applied microeconometrics literature, to dynamic general equilibrium models of sovereign default, one can estimate the ex-ante default probability of economies, given structural parameter values obtained from country-specific business-cycle statistics and the relevant literature. With this technique we thus offer an alternative solution method for dynamic general equilibrium models of sovereign default that improves their quantitative inference ability.

  19. Optimal time-domain technique for pulse width modulation in power electronics

    Directory of Open Access Journals (Sweden)

    I. Mayergoyz

    2018-05-01

    Full Text Available An optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimality criteria are discussed and illustrated by numerical examples.

  20. Solar photovoltaic power forecasting using optimized modified extreme learning machine technique

    Directory of Open Access Journals (Sweden)

    Manoja Kumar Behera

    2018-06-01

    Full Text Available Prediction of photovoltaic power is a significant research area that uses different forecasting techniques to mitigate the effects of the uncertainty of photovoltaic generation. Increasingly high penetration levels of photovoltaic (PV) generation arise in the smart grid and microgrid concepts. The solar source is irregular in nature; as a result, PV power is intermittent and highly dependent on irradiance, temperature level and other atmospheric parameters. Large-scale photovoltaic generation and penetration into the conventional power system introduce significant challenges to microgrid and smart grid energy management. Accurate forecasting of solar power/irradiance is critical to secure the economic operation of the microgrid and smart grid. In this paper an extreme learning machine (ELM) technique is used for PV power forecasting of a real-time model whose location is given in Table 1. The model incorporates the incremental conductance (IC) maximum power point tracking (MPPT) technique based on a proportional-integral (PI) controller, simulated in the MATLAB/SIMULINK software. To train the single-layer feed-forward network (SLFN), the ELM algorithm is implemented, with weights updated by different particle swarm optimization (PSO) techniques, and its performance is compared with existing models such as the back-propagation (BP) forecasting model. Keywords: PV array, Extreme learning machine, Maximum power point tracking, Particle swarm optimization, Craziness particle swarm optimization, Accelerated particle swarm optimization, Single layer feed-forward network
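
    A minimal sketch of the ELM training step: hidden-layer weights are drawn at random and only the output weights are solved for, in a single pseudo-inverse step. The toy inputs stand in for the irradiance/temperature features implied by the abstract.

        # Minimal extreme learning machine (ELM): random hidden weights,
        # output weights from one least-squares solve.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, (500, 3))             # e.g. irradiance, temp, hour
        y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]     # synthetic PV power target

        n_hidden = 50
        W = rng.standard_normal((3, n_hidden))      # random input weights (fixed)
        b = rng.standard_normal(n_hidden)

        H = np.tanh(X @ W + b)                      # hidden layer activations
        beta = np.linalg.pinv(H) @ y                # one-shot output weights
        print(np.mean((H @ beta - y) ** 2))         # training MSE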

  1. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), to estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

  2. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), to estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

  3. Empirical Estimates in Economic and Financial Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Houda, Michal; Kaňková, Vlasta

    2012-01-01

    Roč. 19, č. 29 (2012), s. 50-69 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/1610; GA ČR GAP402/11/0150; GA ČR GAP402/10/0956 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic programming * empirical estimates * moment generating functions * stability * Wasserstein metric * L1-norm * Lipschitz property * consistence * convergence rate * normal distribution * Pareto distribution * Weibull distribution * distribution tails * simulation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/E/houda-empirical estimates in economic and financial optimization problems.pdf

  4. Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques

    Science.gov (United States)

    2017-11-01

    Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques, by Canh Ly, Nghia Tran, and Ozlem Kilic. Approved for public release; distribution is …

  5. Tuning of PID controller using optimization techniques for a MIMO process

    Science.gov (United States)

    Thulasi dharan, S.; Kavyarasan, K.; Bagyaveereswaran, V.

    2017-11-01

    In this paper, two processes are considered: the quadruple tank process and the CSTR (Continuous Stirred Tank Reactor) process. Both are widely used in industrial applications across various domains, the CSTR especially in chemical plants. First, a mathematical model of each process is derived and the system is linearized, as both are MIMO processes. The controllers are the key element that drives the whole process to the desired operating point for a given application, so controller tuning plays a major role in the overall process. For tuning the controller parameters we use two optimization techniques, Particle Swarm Optimization and Genetic Algorithm, which are widely applied to find the best values among many candidates. Finally, we compare the performance of each process under both techniques.

  6. Optimal Colored Noise for Estimating Phase Response Curves

    Science.gov (United States)

    Morinaga, Kazuhiko; Miyata, Ryota; Aonishi, Toru

    2015-09-01

    The phase response curve (PRC) is an important measure representing the interaction between oscillatory elements. To understand synchrony in biological systems, many research groups have sought to measure PRCs directly from biological cells, including neurons. Ermentrout et al. and Ota et al. showed that PRCs can be identified through measurement of white-noise spike-triggered averages. The disadvantage of this method is that one has to collect more than ten thousand spikes to ensure the accuracy of the estimate. In this paper, to achieve a more accurate estimation of PRCs with a limited sample size, we use colored noise, which has recently drawn attention because of its unique effect on dynamical systems. We numerically show that there is an optimal colored noise for estimating PRCs in the most rigorous fashion.

  7. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    D V A N Ravi Kumar

    2017-07-04

    Jul 4, 2017 … In this paper, two novel methods based on the Estimate Merge Technique … mentioned advantages of the proposed novel methods are shown by carrying out Monte Carlo simulation in … equations are converted to sequential equations to make … estimation error and low convergence time) at feasibly high …

  8. Loading pattern optimization by multi-objective simulated annealing with screening technique

    International Nuclear Information System (INIS)

    Tong, K. P.; Hyun, C. L.; Hyung, K. J.; Chang, H. K.

    2006-01-01

    This paper presents a new multi-objective function which is made up of the main objective term as well as penalty terms related to the constraints. All the terms are represented in the same functional form and the coefficient of each term is normalized so that each term has equal weighting in the subsequent simulated annealing optimization calculations. The screening technique introduced in previous work is also adopted in order to save computer time in the 3-D neutronics evaluation of trial loading patterns. For a numerical test of the new multi-objective function in loading pattern optimization, the optimum loading patterns for the initial core and the cycle 7 reload PWR core of Yonggwang Unit 4 are calculated by the simulated annealing algorithm with the screening technique. A total of 10 optimum loading patterns are obtained for the initial core through 10 independent simulated annealing optimization runs. For the cycle 7 reload core, one optimum loading pattern has been obtained from a single simulated annealing optimization run. More SA optimization runs will be conducted to obtain optimum loading patterns for the cycle 7 reload core, and the results will be presented in future work. (authors)
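
    A minimal sketch of the simulated annealing loop underlying such loading pattern searches, reduced to a toy permutation problem; the 3-D neutronics evaluation and screening technique of the paper are not reproduced.

        # Simulated annealing over permutations: accept uphill moves with
        # probability exp(-delta/T) under a geometric cooling schedule.
        import numpy as np

        rng = np.random.default_rng(0)
        weights = rng.random(20)                    # toy assembly "reactivities"
        target = np.sort(weights)                   # toy optimum: sorted order

        def cost(perm):
            return np.sum((weights[perm] - target) ** 2)

        perm = rng.permutation(20)
        T = 1.0
        for step in range(20000):
            i, j = rng.integers(20, size=2)
            cand = perm.copy()
            cand[i], cand[j] = cand[j], cand[i]     # swap two positions
            d = cost(cand) - cost(perm)
            if d < 0 or rng.random() < np.exp(-d / T):
                perm = cand
            T *= 0.9995                             # cooling schedule
        print(cost(perm))                           # ~0 for the toy problem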

  9. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    Science.gov (United States)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  10. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  11. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance.

    Science.gov (United States)

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; Ep Mundhofir, Farmaditya; Mh Faradz, Sultana; Hisatome, Ichiro

    2017-03-01

    High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple and efficient and has a high throughput, particularly for screening large numbers of samples. APOA1 encodes apolipoprotein A1 (apoA1), which is a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variance. Genomic DNA was isolated from a peripheral blood sample using the salting-out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by inspecting the melting curve. Five sets of primers covering the translated region of the APOA1 exons were designed, with expected PCR product sizes of 100-400 bp. The amplified segments of DNA were amplicons 2, 3, 4A, 4B and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C with 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C with 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C with 50 PCR cycles. In addition to suitable procedures for DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study showed that an appropriate annealing temperature and number of PCR cycles were important factors in the optimization of the HRM technique for variant screening in APOA1.

  12. Rovibrational controlled-NOT gates using optimized stimulated Raman adiabatic passage techniques and optimal control theory

    International Nuclear Information System (INIS)

    Sugny, D.; Bomble, L.; Ribeyre, T.; Dulieu, O.; Desouter-Lecomte, M.

    2009-01-01

    Implementation of quantum controlled-NOT (CNOT) gates in realistic molecular systems is studied using stimulated Raman adiabatic passage (STIRAP) techniques optimized in the time domain by genetic algorithms or coupled with optimal control theory. In the first case, with an adiabatic solution (a series of STIRAP processes) as starting point, we optimize in the time domain different parameters of the pulses to obtain a high fidelity in two realistic cases under consideration. A two-qubit CNOT gate constructed from different assignments in rovibrational states is considered in diatomic (NaCs) or polyatomic (SCCl2) molecules. The difficulty of encoding logical states in pure rotational states with STIRAP processes is illustrated. In such circumstances, the gate can be implemented by optimal control theory and the STIRAP sequence can then be used as an interesting trial field. We discuss the relative merits of the two methods for rovibrational computing (structure of the control field, duration of the control, and efficiency of the optimization).

  13. Optimized inspection techniques and structural analysis in lifetime management

    International Nuclear Information System (INIS)

    Aguado, M.T.; Marcelles, I.

    1993-01-01

    Preservation of the option of extending the service lifetime of a nuclear power plant beyond its normal design lifetime requires correct remaining lifetime management from the very beginning of plant operation. The methodology used in plant remaining lifetime management is essentially based on the use of standard inspections, surveillance and monitoring programs, and calculations such as thermal-stress and fracture mechanics analysis. The inspection techniques should be continuously optimized, in order to be able to detect and dimension existing defects with the highest possible degree of accuracy. The information obtained during the inspection is combined with the historical data of the components (design, quality, operation, maintenance, and transients) and with the results of destructive testing, fracture mechanics and thermal fatigue analysis. These data are used to estimate the remaining lifetime of nuclear power plant components, systems and structures with the highest possible degree of accuracy. The use of this methodology allows component repairs and replacements to be reduced or avoided and increases the safety levels and availability of the nuclear power plant. Use of this strategy avoids the need for heavy investments at the end of the licensing period.

  14. Intelligent Heuristic Techniques for the Optimization of the Transshipment and Storage Operations at Maritime Container Terminals

    Directory of Open Access Journals (Sweden)

    Christopher Expósito-Izquierdo

    2017-02-01

    Full Text Available This paper summarizes the main contributions of the Ph.D. thesis of Christopher Expósito-Izquierdo. The thesis seeks to develop a wide set of intelligent heuristic and meta-heuristic algorithms aimed at solving some of the most prominent optimization problems associated with the transshipment and storage of containers at conventional maritime container terminals. Under the premise that no optimization technique can perform better than any other technique under all possible assumptions, the main point of interest in the domain of maritime logistics is to propose optimization techniques superior, in terms of effectiveness and computational efficiency, to previous proposals found in the scientific literature when solving individual optimization problems under realistic scenarios. Simultaneously, these optimization techniques should be competitive enough to be potentially implemented in practice.

  15. Optimization of coronary optical coherence tomography imaging using the attenuation-compensated technique: a validation study.

    NARCIS (Netherlands)

    Teo, Jing Chun; Foin, Nicolas; Otsuka, Fumiyuki; Bulluck, Heerajnarain; Fam, Jiang Ming; Wong, Philip; Low, Fatt Hoe; Leo, Hwa Liang; Mari, Jean-Martial; Joner, Michael; Girard, Michael J A; Virmani, Renu

    2016-01-01

    PURPOSE: To optimize conventional coronary optical coherence tomography (OCT) images using the attenuation-compensated technique to improve identification of plaques and the external elastic lamina (EEL) contour. METHOD: The attenuation-compensated technique was optimized by manipulating contrast …

  16. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2013-01-01

    Full Text Available It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties, such as high carrier mobility, thermal conductivity and biocompatibility. Graphene is a semimetallic zero-gap nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption and to examine the DNA concentration in an analyte solution. In particular, there is an essential need for developing cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.

  17. Direction of Arrival Estimation Accuracy Enhancement via Using Displacement Invariance Technique

    Directory of Open Access Journals (Sweden)

    Youssef Fayad

    2015-01-01

    Full Text Available A new algorithm for improving Direction of Arrival Estimation (DOAE) accuracy is presented. Two contributions are introduced. First, the Doppler frequency shift that results from the target's movement is estimated using the displacement invariance technique (DIT). Second, the effect of the Doppler frequency is modeled and incorporated into the ESPRIT algorithm in order to increase the estimation accuracy. It is worth mentioning that the subspace approach is employed in the ESPRIT and DIT methods to reduce the computational complexity and the effect of the model's nonlinearity. The DOAE accuracy is verified against the closed-form Cramér-Rao bound (CRB). The simulation results of the proposed algorithm are better than those of previous estimation techniques, demonstrating the enhanced performance of the estimator.
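
    For reference, a minimal sketch of standard ESPRIT on a uniform linear array with half-wavelength spacing; the Doppler modeling via DIT that the paper adds is not included, and all scenario parameters are invented.

        # Standard ESPRIT direction-of-arrival estimation on a ULA.
        import numpy as np

        M, N, d = 8, 400, 0.5                       # sensors, snapshots, spacing/lambda
        doas = np.deg2rad([-20.0, 35.0])
        rng = np.random.default_rng(0)

        A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(doas)))
        S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
        X = A @ S + 0.1 * (rng.standard_normal((M, N)) +
                           1j * rng.standard_normal((M, N)))

        R = X @ X.conj().T / N                      # sample covariance
        w, V = np.linalg.eigh(R)
        Es = V[:, -2:]                              # signal subspace (2 sources)
        Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]      # rotational invariance
        phases = np.angle(np.linalg.eigvals(Psi))
        print(np.rad2deg(np.arcsin(phases / (2 * np.pi * d))))  # ~ [-20, 35]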

  18. How to apply the optimal estimation method to your lidar measurements for improved retrievals of temperature and composition

    Science.gov (United States)

    Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.

    2018-04-01

    The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show how to use the OEM for temperature and composition retrievals with Rayleigh-scatter, Raman-scatter and DIAL lidars.
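
    For the linear-forward-model case, the OEM retrieval has a closed form that combines the prior state and the measurements through their covariances. A minimal sketch with invented dimensions and covariances, not a lidar forward model:

        # Linear optimal estimation retrieval: maximum a posteriori state and
        # posterior covariance from prior + measurements.
        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 20, 5                                # measurements, state size
        K = rng.standard_normal((m, n))             # linear forward-model Jacobian
        x_true = rng.standard_normal(n)
        Se = 0.1 * np.eye(m)                        # measurement error covariance
        Sa = 1.0 * np.eye(n)                        # prior covariance
        x_a = np.zeros(n)                           # prior state

        y = K @ x_true + rng.multivariate_normal(np.zeros(m), Se)

        S_hat = np.linalg.inv(np.linalg.inv(Sa) + K.T @ np.linalg.inv(Se) @ K)
        x_hat = x_a + S_hat @ K.T @ np.linalg.inv(Se) @ (y - K @ x_a)
        print(x_hat, np.sqrt(np.diag(S_hat)))       # retrieval and its uncertainty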

  19. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate the single-plane unbalance parameters (amplitude and phase angle) of a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that a limited set of response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity and rotational response. The effects of measurement noise level, filter parameters (process noise covariance and forgetting factor) and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
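
    A minimal sketch of the Kalman filter predict/update recursion on which such input estimation schemes are built; a generic two-state model stands in for the paper's reduced-order (SEREP) rotor model.

        # Kalman filter recursion on a toy two-state linear system.
        import numpy as np

        A = np.array([[1.0, 0.01], [0.0, 1.0]])     # state transition (toy)
        Hm = np.array([[1.0, 0.0]])                 # position measured only
        Q, Rm = 1e-5 * np.eye(2), np.array([[1e-2]])

        rng = np.random.default_rng(0)
        x_true = np.array([0.0, 1.0])
        x_est, P = np.zeros(2), np.eye(2)
        for _ in range(200):
            x_true = A @ x_true + rng.multivariate_normal([0, 0], Q)
            z = Hm @ x_true + rng.multivariate_normal([0], Rm)
            # predict
            x_est = A @ x_est
            P = A @ P @ A.T + Q
            # update
            S = Hm @ P @ Hm.T + Rm
            Kk = P @ Hm.T @ np.linalg.inv(S)
            x_est = x_est + Kk @ (z - Hm @ x_est)
            P = (np.eye(2) - Kk @ Hm) @ P
        print(x_est, x_true)                        # estimate tracks the state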

  20. Asymptotic optimality of RESTART estimators in highly dependable systems

    International Nuclear Information System (INIS)

    Villén-Altamirano, J.

    2014-01-01

    We consider a wide class of models that includes the highly reliable Markovian systems (HRMS) often used to represent the evolution of multi-component systems in reliability settings. Repair times and component lifetimes are random variables that follow a general distribution, and the repair service adopts a priority repair rule based on system failure risk. Since crude simulation has proved to be inefficient for highly-dependable systems, the RESTART method is used for the estimation of steady-state unavailability and other reliability measures. In this method, a number of simulation retrials are performed when the process enters regions of the state space where the chance of occurrence of a rare event (e.g., a system failure) is higher. The main difficulty involved in applying this method is finding a suitable function, called the importance function, to define the regions. In this paper we introduce an importance function which, for unbalanced systems, represents a great improvement over the importance function used in previous papers. We also demonstrate the asymptotic optimality of RESTART estimators in these models. Several examples are presented to show the effectiveness of the new approach, and probabilities up to the order of 10^-42 are accurately estimated with little computational effort. - Highlights: • Rare event probabilities of highly reliable systems are estimated by simulation. • The asymptotic optimality of the application is proved. • A better importance function for highly reliable systems is provided in the paper.

  1. Estimation of water demand in water distribution systems using particle swarm optimization

    CSIR Research Space (South Africa)

    Letting, LK

    2017-08-01

    Full Text Available … and an evolutionary algorithm is a potential solution to the demand estimation problem. This paper presents a detailed process simulation model for water demand estimation using the particle swarm optimization (PSO) algorithm. Nodal water demands and pipe flows …

  2. Comparative Study of Online Open Circuit Voltage Estimation Techniques for State of Charge Estimation of Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Hicham Chaoui

    2017-04-01

    Full Text Available Online estimation techniques are extensively used to determine the parameters of various uncertain dynamic systems. In this paper, online estimation of the open-circuit voltage (OCV) of lithium-ion batteries is proposed by two different adaptive filtering methods (i.e., recursive least squares, RLS, and least mean squares, LMS), along with an adaptive observer. The proposed techniques use the battery's terminal voltage and current to estimate the OCV, which is correlated with the state of charge (SOC). Experimental results highlight the effectiveness of the proposed methods in online estimation at different charge/discharge conditions and temperatures. The comparative study illustrates the advantages and limitations of each online estimation method.
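
    A minimal sketch of RLS with forgetting, fitting the deliberately simplified battery relation V = OCV - R·I online; the parameter values and noise levels are invented, and the paper's adaptive observer is not shown.

        # Recursive least squares (RLS) with a forgetting factor for online
        # OCV estimation on synthetic battery data.
        import numpy as np

        rng = np.random.default_rng(0)
        ocv_true, r_true = 3.7, 0.05                # volts, ohms (made up)
        lam = 0.99                                  # forgetting factor
        theta = np.zeros(2)                         # [OCV, R] estimate
        P = 1e3 * np.eye(2)

        for _ in range(500):
            i_k = rng.uniform(-2, 2)                # measured current
            v_k = ocv_true - r_true * i_k + 0.005 * rng.standard_normal()
            phi = np.array([1.0, -i_k])             # regressor: V = OCV - R*I
            k = P @ phi / (lam + phi @ P @ phi)     # gain
            theta = theta + k * (v_k - phi @ theta)
            P = (P - np.outer(k, phi) @ P) / lam
        print(theta)                                # ~ [3.7, 0.05]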

  3. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    Science.gov (United States)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.

  4. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, building on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary.
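
    A small sketch of the LHS step that the abstract credits with keeping the model count flat as parameter dimensions grow; the parameter ranges below are invented placeholders.

    ```python
    # Latin hypercube sampling: every 1-D projection hits each stratum once.
    import numpy as np

    def latin_hypercube(n, bounds, rng):
        dim = len(bounds)
        strata = np.column_stack([rng.permutation(n) for _ in range(dim)])
        u = (strata + rng.random((n, dim))) / n       # jitter within strata
        lo, hi = np.array(bounds, dtype=float).T
        return lo + u * (hi - lo)

    rng = np.random.default_rng(1)
    # e.g., two uncertain plant parameters for the parallel filter models
    samples = latin_hypercube(8, [(0.5, 2.0), (0.01, 0.2)], rng)
    print(samples)    # 8 models span both ranges; no 8 x 8 grid is needed
    ```

    Each row would seed one filter model in the bank; resampling repeats the call around the current parameter estimate.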

  5. Optimal Orientation Planning and Control Deviation Estimation on FAST Cable-Driven Parallel Robot

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-03-01

    Full Text Available This paper gives a theoretical treatment of optimal orientation planning and control deviation estimation for the FAST cable-driven parallel robot. Given the robot's characteristics, the solutions are obtained from two constrained optimizations, both of which are based on the equilibrium of the cabin and on the allocation of force among the six cable tensions. A control algorithm is proposed based on position and force feedback. The analysis proves that the orientation control depends on force feedback and on the optimal tension solution corresponding to the planned orientation. Finally, the orientation deviation is estimated under bounded tension errors.

  6. Efficiency Optimization Control of IPM Synchronous Motor Drives with Online Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Sadegh Vaez-Zadeh

    2011-04-01

    Full Text Available This paper describes an efficiency optimization control method for high performance interior permanent magnet synchronous motor drives with online estimation of motor parameters. The control system is based on an input-output feedback linearization method which provides high performance control and simultaneously ensures the minimization of the motor losses. The controllable electrical loss can be minimized by the optimal control of the armature current vector. It is shown that parameter variations, except near nominal conditions, have an undesirable effect on controller performance. Therefore, a parameter estimation method based on the second method of Lyapunov is presented, which guarantees the stability and convergence of the estimation. Extensive simulation results show the feasibility of the proposed controller and observer and their desirable performance.

  7. Joint sensor location/power rating optimization for temporally-correlated source estimation

    KAUST Repository

    Bushnaq, Osama M.

    2017-12-22

    The optimal sensor selection for scalar state parameter estimation in wireless sensor networks is studied in this paper. A subset of the N candidate sensing locations is selected to measure a state parameter and send the observation to a fusion center via a wireless AWGN channel. In addition to selecting the optimal sensing locations, the sensor type to be placed at these locations is selected from a pool of T sensor types, where different sensor types have different power ratings and costs. The sensor transmission power is limited based on the amount of energy harvested at the sensing location and the type of the sensor. The Kalman filter is used to efficiently obtain the MMSE estimator at the fusion center. Sensors are selected such that the MMSE estimator error is minimized subject to a prescribed system budget. This goal is achieved using convex relaxation and greedy algorithm approaches.
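
    The following toy sketch mimics the greedy half of the approach for a scalar state: each candidate (location, type) pair carries a noise variance and a cost, and candidates are added by best information-per-cost until the budget is exhausted. All numbers, and the scalar simplification, are assumptions.

    ```python
    # Greedy budgeted sensor selection for a scalar parameter: maximize total
    # Fisher information sum(1/sigma_i^2) subject to sum(cost_i) <= budget.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 20
    noise_var = rng.uniform(0.05, 1.0, n)   # per-candidate measurement noise
    cost = rng.uniform(1.0, 5.0, n)         # power rating / price proxy
    budget = 12.0
    info = 1.0 / 4.0                        # prior information (prior var = 4)

    chosen, spent, remaining = [], 0.0, set(range(n))
    while True:
        affordable = [i for i in remaining if spent + cost[i] <= budget]
        if not affordable:
            break
        best = max(affordable, key=lambda i: 1.0 / (noise_var[i] * cost[i]))
        chosen.append(best)
        remaining.remove(best)
        spent += cost[best]
        info += 1.0 / noise_var[best]

    print(f"{len(chosen)} sensors, cost {spent:.1f}, MMSE variance {1/info:.4f}")
    ```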

  8. A Novel Analytical Technique for Optimal Allocation of Capacitors in Radial Distribution Systems

    Directory of Open Access Journals (Sweden)

    Sarfaraz Nawaz

    2017-07-01

    Full Text Available In this paper, a novel analytical technique is proposed to determine the optimal size and location of shunt capacitor units in radial distribution systems. An objective function is formulated to reduce real power loss, improve the voltage profile and increase annual cost savings. A new constant, the Loss Sensitivity Constant (LSC), is proposed here; its value decides the location and size of candidate buses. The technique is demonstrated on the IEEE 33-bus system at different load levels and on the 130-bus distribution system of Jamawa Ramgarh village, Jaipur city. The obtained results are compared with the latest optimization techniques to show the effectiveness and robustness of the proposed technique.

  9. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  10. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    Science.gov (United States)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods, arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain, and there is little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells, and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  11. Connection between optimal control theory and adiabatic-passage techniques in quantum systems

    Science.gov (United States)

    Assémat, E.; Sugny, D.

    2012-08-01

    This work explores the relationship between optimal control theory and adiabatic passage techniques in quantum systems. The study is based on a geometric analysis of the Hamiltonian dynamics constructed from Pontryagin's maximum principle. In a three-level quantum system, we show that the stimulated Raman adiabatic passage technique can be associated with a peculiar Hamiltonian singularity. One deduces that the adiabatic pulse is a solution of the optimal control problem only for a specific cost functional. This analysis is extended to the case of a four-level quantum system.

  12. Virtual Power Plant and Microgrids controller for Energy Management based on optimization techniques

    Directory of Open Access Journals (Sweden)

    Maher G. M. Abdolrasol

    2017-06-01

    Full Text Available This paper discusses a virtual power plant (VPP) and Microgrid controller for an energy management system (EMS) based on two optimization techniques, the backtracking search algorithm (BSA) and particle swarm optimization (PSO). The research proposes the use of multiple Microgrids in the distribution networks to aggregate the power from distributed generation, and lets these Microgrids deal directly with a central organizer called the virtual power plant. The VPP's duties are price forecasting, demand forecasting, weather forecasting, production forecasting, load shedding, intelligent decision-making, and data aggregation and optimization. The whole system has been simulated and tested using MATLAB Simulink. The results show that both optimization methods are effective, but BSA outperforms PSO in searching for parameters that yield larger power savings, as shown in the results and the discussion.

  13. Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model

    OpenAIRE

    Shulgin, A.

    2015-01-01

    Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and a pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule decrease the volatility estimated on Russian data for 2001-2012 by about 20%. The degree of exchange rate flexibility parameter was found to be low...

  14. Reliability analysis of large scaled structures by optimization technique

    International Nuclear Information System (INIS)

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

    This paper presents a reliability analysis based on an optimization technique using the PNET (Probabilistic Network Evaluation Technique) method for highly redundant structures having a large number of collapse modes. This approach exploits the merits of the optimization technique while incorporating the idea of the PNET method. The analytical process involves the minimization of the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails the sequential solution of a series of NLP (nonlinear programming) problems, where the PNET correlation condition pertaining to the representative mode is taken as an additional constraint in the next analysis. Over succeeding iterations, the final analysis is reached when the collapse probability of the subsequent mode is negligibly small compared with that of the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, a conventional Monte Carlo simulation is also revised by using the collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)

  15. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    Science.gov (United States)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modeling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high resolution land surface models, weather forecast models such as WRF, and intermediate complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
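
    The core loop of an adaptive surrogate method can be sketched in a few lines; here a radial-basis-function surrogate and a toy two-parameter "expensive model" stand in for the authors' ingredients, which is an assumption on our part rather than their exact algorithm.

    ```python
    # Adaptive surrogate optimization: fit a cheap surrogate, optimize it,
    # run the expensive model at the proposed point, refit, repeat.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive_model(x):      # stand-in for a land-surface model run
        return (x[0] - 1.2) ** 2 + 3 * np.sin(x[0]) + (x[1] + 0.5) ** 2

    rng = np.random.default_rng(3)
    X = rng.uniform(-3, 3, size=(8, 2))        # initial design (e.g., LHS)
    y = np.array([expensive_model(x) for x in X])

    for _ in range(15):                        # one expensive run per loop
        surrogate = RBFInterpolator(X, y, smoothing=1e-6)
        res = minimize(lambda x: surrogate(x.reshape(1, -1))[0],
                       x0=X[np.argmin(y)], bounds=[(-3, 3)] * 2)
        X = np.vstack([X, res.x])
        y = np.append(y, expensive_model(res.x))

    print(f"best point {X[np.argmin(y)]} after {len(y)} expensive runs")
    ```

    The budget here is 8 + 15 = 23 model runs, the same kind of drastic reduction in expensive evaluations the abstract refers to.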

  16. Motion estimation of tagged cardiac magnetic resonance images using variational techniques

    Czech Academy of Sciences Publication Activity Database

    Carranza-Herrezuelo, N.; Bajo, A.; Šroubek, Filip; Santamarta, C.; Cristóbal, G.; Santos, A.; Ledesma-Carbayo, M.J.

    2010-01-01

    Vol. 34, No. 6 (2010), pp. 514-522. ISSN 0895-6111 Institutional research plan: CEZ:AV0Z10750506 Keywords: medical imaging processing * motion estimation * variational techniques * tagged cardiac magnetic resonance images * optical flow Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.110, year: 2010 http://library.utia.cas.cz/separaty/2010/ZOI/sroubek- motion estimation of tagged cardiac magnetic resonance images using variational techniques.pdf

  17. Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mohayai, Tanaz Angelina [IIT, Chicago]; Snopok, Pavel [IIT, Chicago]; Neuffer, David [Fermilab]; Rogers, Chris [Rutherford]

    2017-10-12

    The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.

  18. Query optimization over crowdsourced data

    KAUST Repository

    Park, Hyunjung

    2013-08-26

    Deco is a comprehensive system for answering declarative queries posed over stored relational data together with data obtained on-demand from the crowd. In this paper we describe Deco's cost-based query optimizer, building on Deco's data model, query language, and query execution engine presented earlier. Deco's objective in query optimization is to find the best query plan to answer a query, in terms of estimated monetary cost. Deco's query semantics and plan execution strategies require several fundamental changes to traditional query optimization. Novel techniques incorporated into Deco's query optimizer include a cost model distinguishing between "free" existing data versus paid new data, a cardinality estimation algorithm coping with changes to the database state during query execution, and a plan enumeration algorithm maximizing reuse of common subplans in a setting that makes reuse challenging. We experimentally evaluate Deco's query optimizer, focusing on the accuracy of cost estimation and the efficiency of plan enumeration.

  19. Learning-curve estimation techniques for nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.

  20. Learning-curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  1. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year
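
    A compact sketch of the maximum-likelihood half of such a fit: accidents are modeled as a nonhomogeneous Poisson process whose rate decays exponentially with cumulative operating experience. The functional form and the synthetic accident epochs are assumptions for illustration; the paper's learning models and actuarial data differ.

    ```python
    # MLE for lambda(T) = a * exp(-b * T) from event times of a Poisson process:
    # log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt.
    import numpy as np
    from scipy.optimize import minimize

    t_events = np.array([5., 30., 80., 160., 300., 500., 900., 1400., 2100.])
    T_total = 3000.0                    # cumulative reactor-years observed

    def neg_log_lik(log_params):
        a, b = np.exp(log_params)       # keep both parameters positive
        lam = a * np.exp(-b * t_events)
        integral = a * (1.0 - np.exp(-b * T_total)) / b
        return -(np.log(lam).sum() - integral)

    res = minimize(neg_log_lik, x0=np.log([0.01, 1e-3]), method="Nelder-Mead")
    a, b = np.exp(res.x)
    print(f"initial rate {a:.4f}/ry, current rate {a * np.exp(-b * T_total):.2e}/ry")
    ```

    The method of moments mentioned in the abstract would instead match the observed event count and mean event time to their model expectations.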

  2. Estimating the concentration of gold nanoparticles incorporated on natural rubber membranes using multi-level starlet optimal segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, A. F. de, E-mail: siqueiraaf@gmail.com; Cabrera, F. C., E-mail: flavioccabrera@yahoo.com.br [UNESP – Univ Estadual Paulista, Dep de Física, Química e Biologia (Brazil)]; Pagamisse, A., E-mail: aylton@fct.unesp.br [UNESP – Univ Estadual Paulista, Dep de Matemática e Computação (Brazil)]; Job, A. E., E-mail: job@fct.unesp.br [UNESP – Univ Estadual Paulista, Dep de Física, Química e Biologia (Brazil)]

    2014-12-15

    This study consolidates multi-level starlet segmentation (MLSS) and multi-level starlet optimal segmentation (MLSOS) techniques for photomicrograph segmentation, based on starlet wavelet detail levels to separate areas of interest in an input image. Several segmentation levels can be obtained using MLSS; after that, the Matthews correlation coefficient is used to choose an optimal segmentation level, giving rise to MLSOS. In this paper, MLSOS is employed to estimate the concentration of gold nanoparticles with diameter around 47 nm, reduced on natural rubber membranes. These samples were used for the construction of SERS/SERRS substrates and in the study of the influence of natural rubber membranes with incorporated gold nanoparticles on the physiology of Leishmania braziliensis. Precision, recall, and accuracy are used to evaluate the segmentation performance, and MLSOS presents an accuracy greater than 88% for this application.
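
    The selection step of MLSOS reduces to scoring each candidate segmentation level with the Matthews correlation coefficient and keeping the maximizer. The tiny masks below are stand-ins for real starlet-level segmentations and a ground-truth mask.

    ```python
    # Pick the segmentation level with the highest MCC against ground truth.
    import numpy as np

    def mcc(pred, truth):
        tp = np.sum(pred & truth)
        tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return (tp * tn - fp * fn) / denom if denom else 0.0

    truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]], dtype=bool)
    levels = [np.array([[1, 1, 1], [1, 1, 1], [0, 0, 0]], bool),  # too coarse
              np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]], bool),  # misses a pixel
              np.array([[0, 1, 1], [0, 1, 1], [0, 1, 0]], bool)]  # extra pixel

    scores = [mcc(seg, truth) for seg in levels]
    print(f"MCC per level: {np.round(scores, 3)} -> level {np.argmax(scores) + 1}")
    ```

    The nanoparticle concentration then follows from the pixel fraction of the chosen segmentation.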

  3. Estimation of power lithium-ion battery SOC based on fuzzy optimal decision

    Science.gov (United States)

    He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin

    2018-06-01

    In order to improve vehicle performance and safety, the state of charge (SOC) of the power lithium-ion battery needs to be estimated accurately. After analyzing common SOC estimation methods, and based on open-circuit voltage characteristics and the Kalman filter algorithm, a lithium battery SOC estimation method using a T-S fuzzy model and fuzzy optimal decision is established. Simulation results show that the battery model accuracy can be improved.

  4. A novel technique for optimal integration of active steering and differential braking with estimation to improve vehicle directional stability.

    Science.gov (United States)

    Mirzaeinejad, Hossein; Mirzaei, Mehdi; Rafatnia, Sadra

    2018-06-11

    This study deals with enhancing the directional stability of a vehicle turning at high speed on various road conditions using integrated active steering and differential braking systems. In this respect, minimal usage of intentional asymmetric braking force to compensate for the drawbacks of active steering control, with only a small reduction of vehicle longitudinal speed, is desired. To this aim, a new optimal multivariable controller is analytically developed for the integrated steering and braking systems based on the prediction of vehicle nonlinear responses. A fuzzy programming scheme extracted from nonlinear phase plane analysis is also used for managing the two control inputs in various driving conditions. With the proposed fuzzy programming, the weight factors of the control inputs are automatically tuned and softly changed. In order to simulate a real-world control system, some required information about the system states and parameters which cannot be directly measured is estimated using the Unscented Kalman Filter (UKF). Finally, simulation studies are carried out using a validated vehicle model to show the effectiveness of the proposed integrated control system in the presence of model uncertainties and estimation errors. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Estimation of component redundancy in optimal age maintenance

    OpenAIRE

    Siopa, Jorge; Garção, José; Silva, Júlio

    2012-01-01

    The classical Optimal Age-Replacement defines the maintenance strategy based on the equipment failure consequences. For severe consequences an early equipment replacement is recommended. For minor consequences the repair after failure is proposed. One way of reducing the failure consequences is the use of redundancies, especially if the equipment failure rate is decreasing over time, since in this case the preventive replacement does not reduce the risk of failure. The estimation of an ac...

  6. Phase estimation for global defocus correction in optical coherence tomography

    DEFF Research Database (Denmark)

    Jensen, Mikkel; Israelsen, Niels Møller; Podoleanu, Adrian

    2017-01-01

    In this work we investigate three techniques for estimation of the non-linear phase present due to defocus in optical coherence tomography, and apply them with the angular spectrum method. The techniques are: least squares fitting of the unwrapped phase of the angular spectrum, iterative optimization, and sub-aperture correlations. The estimated phase of a single en-face image is used to extrapolate the non-linear phase at all depths, which in the end can be used to correct the entire 3-D tomogram, and any other tomogram from the same system.

  7. Comparison of process estimation techniques for on-line calibration monitoring

    International Nuclear Information System (INIS)

    Shumaker, B. D.; Hashemian, H. M.; Morton, G. W.

    2006-01-01

    The goal of on-line calibration monitoring is to reduce the number of unnecessary calibrations performed each refueling cycle on pressure, level, and flow transmitters in nuclear power plants. The effort requires a baseline for determining calibration drift and thereby the need for a calibration. There are two ways to establish the baseline: averaging and modeling. Averaging techniques have proven to be highly successful in applications where there are a large number of redundant transmitters, but for systems with little or no redundancy, averaging methods are not always reliable. That is, for non-redundant transmitters, more sophisticated process estimation techniques are needed to augment or replace the averaging techniques. This paper explores three well-known process estimation techniques, namely Independent Component Analysis (ICA), Auto-Associative Neural Networks (AANN), and Auto-Associative Kernel Regression (AAKR). Using experience and data from an operating nuclear plant, the paper presents an evaluation of the effectiveness of these methods in detecting transmitter drift in actual plant conditions. (authors)

  8. Optimization Techniques for Improving the Performance of Silicone-Based Dielectric Elastomers

    DEFF Research Database (Denmark)

    Skov, Anne Ladegaard; Yu, Liyun

    2017-01-01

    Techniques for improving the electro-mechanical performance of dielectric elastomers are highlighted. Various optimization methods for improved energy transduction are investigated and discussed, with special emphasis placed on the promise each method holds. The compositing and blending of elastomers are shown to be simple, versatile methods that can solve a number of optimization issues. More complicated methods, involving chemical modification of the silicone backbone as well as controlling the network structure for improved mechanical properties, are shown to solve yet more issues. From the analysis, it is obvious that there is not a single optimization technique that will lead to the universal optimization of dielectric elastomer films, though each method may lead to elastomers with certain features, and thus certain potentials.

  9. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    Science.gov (United States)

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms remain unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and that an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of the parameter estimate and optimization of the control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle inapplicable under those conditions. The hypothesis that not only the action, but also perception itself is biased by the above deviation of the parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
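
    The quantitative claim is easy to reproduce numerically. In this toy demonstration (our construction, not the paper's model), the parameter has a standard normal posterior and underestimates cost ten times more than overestimates, so the cost-minimizing estimate shifts away from the posterior mean.

    ```python
    # Asymmetric error cost biases the optimal estimate away from the MLE/mean.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    posterior = rng.normal(0.0, 1.0, 100_000)    # samples of the parameter

    def expected_cost(est, k_under=10.0):
        err = est - posterior
        # underestimates (err < 0) are k_under times costlier than overestimates
        return np.mean(np.where(err < 0, k_under * err**2, err**2))

    opt = minimize_scalar(expected_cost, bounds=(-3, 3), method="bounded")
    print(f"posterior mean 0.0, cost-optimal estimate {opt.x:+.3f}")
    ```

    The positive shift is exactly the systematic deviation the abstract argues should also show up in perception.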

  10. An entropy flow optimization technique for helium liquefaction cycles

    International Nuclear Information System (INIS)

    Minta, M.; Smith, J.L.

    1984-01-01

    This chapter proposes a new method of analyzing thermodynamic cycles based on a continuous distribution of precooling over the temperature range of the cycle. The method gives the optimum distribution of precooling over the temperature range of the cycle by specifying the mass flow to be expanded at each temperature. The result is used to select a cycle configuration with discrete expansions and to initialize the independent variables for final optimization. Topics considered include the continuous precooling model, the results for ideal gas, the results for real gas, and the application to the design of a saturated vapor compression (SVC) cycle. The optimization technique for helium liquefaction cycles starts with the minimization of the generated entropy in a cycle model with continuous precooling. The pressure ratio, the pressure level and the distribution of the heat exchange are selected based on the results of the continuous precooling analysis. It is concluded that the technique incorporates the non-ideal behavior of helium in the procedure and allows the trade-off between heat exchange area and losses to be determined

  11. Joint optimization of MIMO radar waveform and biased estimator with prior information in the presence of clutter

    Directory of Open Access Journals (Sweden)

    Liu Hongwei

    2011-01-01

    Full Text Available In this article, we consider the problem of joint optimization of multi-input multi-output (MIMO) radar waveform and biased estimator with prior information on targets of interest in the presence of signal-dependent noise. A novel constrained biased Cramer-Rao bound (CRB) based method is proposed to optimize the waveform covariance matrix (WCM) and biased estimator such that the performance of parameter estimation can be improved. Under a simplifying assumption, the resultant nonlinear optimization problem is solved by resorting to a convex relaxation that belongs to the semidefinite programming (SDP) class. An optimal solution of the initial problem is then constructed through a suitable approximation to an optimal solution of the relaxed one (in a least squares (LS) sense). Numerical results show that the performance of parameter estimation can be improved considerably by the proposed method compared to uncorrelated waveforms.

  12. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
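
    A hedged sketch of the computation such a selection routine must repeat: for each candidate tuner set, form the reduced model, solve the discrete algebraic Riccati equation for the steady-state Kalman error covariance, and rank candidates by its trace. The three-state toy system and the subset enumeration are assumptions, not the paper's methodology itself.

    ```python
    # Rank candidate tuner subsets by steady-state Kalman estimation error.
    from itertools import combinations
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.diag([0.9, 0.95, 0.8])            # toy augmented dynamics (stable)
    Q = np.diag([0.01, 0.02, 0.05])          # process noise covariance
    C = np.array([[1.0, 0.5, 0.2]])          # one sensor, three unknowns
    R = np.array([[0.1]])                    # measurement noise covariance

    def error_trace(keep):
        idx = np.ix_(list(keep), list(keep))
        a, q, c = A[idx], Q[idx], C[:, list(keep)]
        P = solve_discrete_are(a.T, c.T, q, R)   # estimator DARE via duality
        return np.trace(P)

    # underdetermined case: only 2 of 3 parameters can serve as tuners
    for keep in combinations(range(3), 2):
        print(keep, f"steady-state error trace = {error_trace(keep):.4f}")
    ```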

  13. A low tritium hydride bed inventory estimation technique

    Energy Technology Data Exchange (ETDEWEB)

    Klein, J.E.; Shanahan, K.L.; Baker, R.A. [Savannah River National Laboratory, Aiken, SC (United States)]; Foster, P.J. [Savannah River Nuclear Solutions, Aiken, SC (United States)]

    2015-03-15

    Low tritium hydride beds were developed and deployed into tritium service at the Savannah River Site. Process beds to be used for low concentration tritium gas were not fitted with instrumentation to perform the steady-state, flowing gas calorimetric inventory measurement method, and low tritium beds contain less than the detection limit of the In-Bed Accountability (IBA) technique used for tritium inventory. This paper describes two techniques for estimating tritium content and its uncertainty for low tritium content beds, for use in the facility's physical inventory (PI); PIs are performed periodically to assess the quantity of nuclear material used in a facility. The first approach, the mid-point approximation method (MPA), assumes the bed is half-full and uses a gas composition measurement to estimate the tritium inventory and uncertainty. The second approach utilizes the bed's hydride material pressure-composition-temperature (PCT) properties and a gas composition measurement to reduce the uncertainty in the calculated bed inventory.
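
    The mid-point approximation is essentially one line of arithmetic once a gas sample has been analyzed; the sketch below uses entirely invented numbers to show the bookkeeping.

    ```python
    # Mid-point approximation (MPA): assume the bed is half full, then scale
    # by the measured tritium fraction of a gas sample. Numbers are invented.
    bed_capacity_mol = 100.0      # hydride capacity in moles of Q2 (assumed)
    fill_fraction = 0.5           # the MPA assumption: bed is half full
    tritium_fraction = 0.02       # from the gas composition measurement
    molar_mass_t2 = 6.032         # g/mol for T2

    grams_t = bed_capacity_mol * fill_fraction * tritium_fraction * molar_mass_t2
    print(f"estimated inventory: {grams_t:.2f} g tritium")
    ```

    The half-full assumption dominates the uncertainty, which is why the second, PCT-based approach in the paper is attractive.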

  14. Efficiency Optimization by Considering the High Voltage Flyback Transformer Parasitics using an Automatic Winding Layout Technique

    DEFF Research Database (Denmark)

    Thummala, Prasanth; Schneider, Henrik; Zhang, Zhe

    2015-01-01

    The energy efficiency is optimized using a proposed new automatic winding layout (AWL) technique and a comprehensive loss model. The AWL technique generates a large number of transformer winding layouts. The transformer parasitics such as dc resistance, leakage inductance and self-capacitance are calculated for each winding layout. An optimization technique is formulated to minimize the sum of energy losses during charge and discharge operations. The efficiency and energy loss distribution results from the optimization routine provide a deep insight into the high voltage transformer design and its impact...

  15. Optimization of Hydraulic Machinery Bladings by Multilevel CFD Techniques

    Directory of Open Access Journals (Sweden)

    Thum Susanne

    2005-01-01

    Full Text Available The numerical design optimization for complex hydraulic machinery bladings requires a high number of design parameters and the use of a precise CFD solver, yielding high computational costs. To reduce the CPU time needed, a multilevel CFD method has been developed. First of all, the 3D blade geometry is parametrized by means of a geometric design tool to reduce the number of design parameters. To keep geometric accuracy, a special B-spline modification technique has been developed. On the first optimization level, a quasi-3D Euler code (EQ3D) is applied. To guarantee a sufficiently accurate result, the code is calibrated by a Navier-Stokes recalculation of the initial design and can be recalibrated after a number of optimization steps by another Navier-Stokes computation. After a convergent solution is obtained, the optimization process is repeated on the second level using a full 3D Euler code, yielding a more accurate flow prediction. Finally, a 3D Navier-Stokes code is applied on the third level to search for the optimum optimorum by means of fine-tuning of the geometrical parameters. To show the potential of the developed optimization system, the runner blading of a water turbine having a specific speed n_q = 41 min^-1 was optimized applying the multilevel approach.

  16. Optimal Wavelength Selection in Ultraviolet Spectroscopy for the Estimation of Toxin Reduction Ratio during Hemodialysis

    Directory of Open Access Journals (Sweden)

    Amir Ghanifar

    2016-06-01

    Full Text Available Introduction The concentration of substances, including urea, creatinine, and uric acid, can be used as an index to measure toxic uremic solutes in the blood during dialysis and interdialytic intervals. The on-line monitoring of toxin concentration allows for the clearance measurement of some low-molecular-weight solutes at any time during hemodialysis. The aim of this study was to determine the optimal wavelength for estimating the changes in urea, creatinine, and uric acid in dialysate, using ultraviolet (UV) spectroscopy. Materials and Methods In this study, nine uremic patients were investigated, using on-line spectrophotometry. The on-line absorption measurements (UV radiation) were performed with a spectrophotometer module, connected to the fluid outlet of the dialysis machine. Dialysate samples were obtained and analyzed, using standard biochemical methods. Optimal wavelengths for both creatinine and uric acid were selected by using a combination of genetic algorithms (GAs), i.e., GA-partial least squares (GA-PLS) and interval partial least squares (iPLS). Results The artificial neural network (ANN) sensitivity analysis determined the wavelengths of the UV band most suitable for estimating the concentration of creatinine and uric acid. The two optimal wavelengths were 242 and 252 nm for creatinine, and 295 and 298 nm for uric acid. Conclusion It can be concluded that the reduction ratio of creatinine and uric acid (dialysis efficiency) could be continuously monitored during hemodialysis by UV spectroscopy. Compared to the conventional method, which is particularly sensitive to the sampling technique and involves post-dialysis blood sampling, iterative measurements throughout the dialysis session can yield more reliable data.

  17. Evolutionary optimization technique for site layout planning

    KAUST Repository

    El Ansary, Ayman M.

    2014-02-01

    Solving the site layout planning problem is a challenging task. It requires an iterative approach to satisfy design requirements (e.g. energy efficiency, skyview, daylight, roads network, visual privacy, and clear access to favorite views). These design requirements vary from one project to another based on location and client preferences. In the Gulf region, the most important socio-cultural factor is visual privacy in indoor space. Hence, most of the residential houses in this region are surrounded by high fences to provide privacy, which has a direct impact on other requirements (e.g. daylight and direction to a favorite view). This paper introduces a novel technique to optimally locate and orient residential buildings to satisfy a set of design requirements. The developed technique is based on a genetic algorithm which explores the search space for possible solutions. This study considers two-dimensional site planning problems; however, it can be extended to solve three-dimensional cases. A case study is presented to demonstrate the efficiency of this technique in solving the site layout planning of simple residential dwellings. © 2013 Elsevier B.V. All rights reserved.

  18. The Value Estimation of an HFGW Frequency Time Standard for Telecommunications Network Optimization

    Science.gov (United States)

    Harper, Colby; Stephenson, Gary

    2007-01-01

    The emerging technology of gravitational wave control is used to augment a communication system using a development roadmap suggested in Stephenson (2003) for applications emphasized in Baker (2005). In the present paper consideration is given to the value of a High Frequency Gravitational Wave (HFGW) channel purely as providing a method of frequency and time reference distribution for use within conventional Radio Frequency (RF) telecommunications networks. Specifically, the native value of conventional telecommunications networks may be optimized by using an unperturbed frequency time standard (FTS) to (1) improve terminal navigation and Doppler estimation performance via improved time difference of arrival (TDOA) from a universal time reference, and (2) improve acquisition speed, coding efficiency, and dynamic bandwidth efficiency through the use of a universal frequency reference. A model utilizing a discounted cash flow technique provides an estimation of the additional value using HFGW FTS technology could bring to a mixed technology HFGW/RF network. By applying a simple net present value analysis with supporting reference valuations to such a network, it is demonstrated that an HFGW FTS could create a sizable improvement within an otherwise conventional RF telecommunications network. Our conservative model establishes a low-side value estimate of approximately 50B USD Net Present Value for an HFGW FTS service, with reasonable potential high-side values to significant multiples of this low-side value floor.
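
    For readers unfamiliar with the valuation mechanics, a discounted cash flow reduces to the sketch below; the cash flows, discount rate, and horizon are invented placeholders, not the paper's figures.

    ```python
    # Net present value of a cash-flow stream at a fixed annual discount rate.
    rate = 0.12                                    # assumed discount rate
    cash_flows = [-500, 120, 250, 400, 600, 800]   # $M: year-0 outlay, returns

    npv = sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))
    print(f"NPV = {npv:.1f} $M")
    ```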

  19. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision based techniques and spectral signature are described. The vision instruments for food analysis as well as datasets of the food items... used in this thesis are described. The methodological strategies are outlined including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis and linear versus non-linear approaches. One supervised feature selection algorithm... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods together with some other state-of-the-art statistical and mathematical analysis techniques are applied on datasets of different food items; meat, dairy products, fruits...

  20. Application of chaos-based chaotic invasive weed optimization techniques for environmental OPF problems in the power system

    International Nuclear Information System (INIS)

    Ghasemi, Mojtaba; Ghavidel, Sahand; Aghaei, Jamshid; Gitizadeh, Mohsen; Falah, Hasan

    2014-01-01

    Highlights:
    • Chaotic invasive weed optimization techniques based on chaos.
    • Nonlinear environmental OPF problem considering non-smooth fuel cost curves.
    • A comparative study of CIWO techniques for the environmental OPF problem.
    Abstract: This paper presents efficient chaotic invasive weed optimization (CIWO) techniques based on chaos for solving optimal power flow (OPF) problems with non-smooth generator fuel cost functions (non-smooth OPF) with minimum pollution level (environmental OPF) in electric power systems. The OPF problem is used for developing corrective strategies and performing least cost dispatches. However, cost-based OPF problem solutions usually result in an unattractive system gas emission issue (environmental OPF). In the present paper, the OPF problem is formulated by considering the emission issue. The total emission can be expressed as a non-linear function of power generation, giving a multi-objective optimization problem where optimal control settings for simultaneous minimization of fuel cost and gas emission are obtained. The IEEE 30-bus test power system is used to illustrate the application of the environmental OPF problem using CIWO techniques. Our experimental results suggest that CIWO techniques hold immense promise as efficient and powerful algorithms for optimization in power systems.

  1. Cost analysis and estimating tools and techniques

    CERN Document Server

    Nussbaum, Daniel

    1990-01-01

    Changes in production processes reflect the technological advances permeating our products and services. U.S. industry is modernizing and automating. In parallel, direct labor is fading as the primary cost driver while engineering and technology related cost elements loom ever larger. Traditional, labor-based approaches to estimating costs are losing their relevance. Old methods require augmentation with new estimating tools and techniques that capture the emerging environment. This volume represents one of many responses to this challenge by the cost analysis profession. The Institute of Cost Analysis (ICA) is dedicated to improving the effectiveness of cost and price analysis and enhancing the professional competence of its members. We encourage and promote exchange of research findings and applications between the academic community and cost professionals in industry and government. The 1990 National Meeting in Los Angeles, jointly sponsored by ICA and the National Estimating Society (NES),...

  2. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

    Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power... the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig with a blood mimicking fluid flowing in the tube. The scanning section is submerged in water to allow ultrasound data acquisition. Data was recorded using a BK8804 linear array transducer and the RASMUS ultrasound scanner. The controlled experiments showed that the OW could be significantly reduced when...
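
    To fix ideas, the baseline (Welch) estimator can be sketched on a synthetic slow-time signal whose Doppler shift encodes the velocity; the carrier, pulse repetition frequency, angle, and velocity below are illustrative values, not the experiment's settings.

    ```python
    # Welch periodogram velocity estimate from a synthetic slow-time signal.
    import numpy as np
    from scipy.signal import welch

    c, f0, prf, theta = 1540.0, 7e6, 4e3, np.deg2rad(45)   # m/s, Hz, Hz, rad
    v_true = 0.3                                           # blood speed, m/s
    fd = 2 * v_true * f0 * np.cos(theta) / c               # Doppler shift, Hz

    rng = np.random.default_rng(7)
    n = 128                                                # short OW
    t = np.arange(n) / prf
    sig = (np.exp(2j * np.pi * fd * t)
           + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

    f, pxx = welch(sig, fs=prf, nperseg=32, return_onesided=False)
    f_hat = f[np.argmax(pxx)]
    v_hat = f_hat * c / (2 * f0 * np.cos(theta))
    print(f"true fd {fd:.0f} Hz, estimate {f_hat:.0f} Hz, v ~ {v_hat:.3f} m/s")
    ```

    The adaptive methods in the paper aim to resolve the same spectral peak from a much shorter observation window than the averaged periodogram needs.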

  3. Comparison of various spring analogy related mesh deformation techniques in two-dimensional airfoil design optimization

    Science.gov (United States)

    Yang, Y.; Özgen, S.

    2017-06-01

    During the last few decades, CFD (Computational Fluid Dynamics) has developed greatly and has become a more reliable tool for the conceptual phase of aircraft design. This tool is generally combined with an optimization algorithm. In the optimization phase, the need for regenerating the computational mesh might become cumbersome, especially when the number of design parameters is high. For this reason, several mesh generation and deformation techniques have been developed in the past decades. One of the most widely used techniques is the spring analogy. There are numerous spring analogy related techniques reported in the literature: linear spring analogy, torsional spring analogy, semitorsional spring analogy, and ball vertex spring analogy. This paper explains the linear spring analogy method and the inclusion of angles in the spring analogy method. In the latter case, two different solution methods are proposed. The best feasible method is later used for two-dimensional (2D) airfoil design optimization, with the objective function being to minimize sectional drag for a required lift coefficient at different speeds. Design variables used in the optimization include the camber and thickness distribution of the airfoil. SU2 CFD is chosen as the flow solver during the optimization procedure. The optimization is done by using the Phoenix ModelCenter Optimization Tool.
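
    Stripped to one dimension, the linear spring analogy is a small linear-statics solve, as the hedged sketch below shows; real design loops assemble the same system per coordinate over the whole mesh. The node coordinates and boundary motion are assumptions.

    ```python
    # Linear spring analogy on a 1-D node chain: edge stiffness 1/length,
    # imposed boundary displacement, interior nodes from equilibrium K u = f.
    import numpy as np

    x = np.array([0.0, 0.8, 1.5, 2.1, 3.0])   # mesh node coordinates (assumed)
    n = len(x)
    k = 1.0 / np.diff(x)                      # stiffer springs on short edges

    K = np.zeros((n, n))
    for e, ke in enumerate(k):                # assemble global stiffness
        K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

    u = np.zeros(n)
    u[-1] = 0.4                               # boundary node moves by 0.4
    free = np.arange(1, n - 1)                # interior (unknown) nodes
    fixed = np.array([0, n - 1])
    rhs = -K[np.ix_(free, fixed)] @ u[fixed]
    u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    print("deformed nodes:", np.round(x + u, 3))
    ```

    Because short edges are stiffer, they deform least, which is what protects the smallest cells of the mesh from inversion during the deformation.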

  4. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Abstract Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of...

  5. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    Science.gov (United States)

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and
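
    As a concrete, hedged illustration of the best-performing method, the sketch below uses differential evolution to recover two parameters of a toy logistic ODE from noisy observations; the Rab5/Rab7 model of the paper is richer, so this shows only the shape of the procedure.

    ```python
    # Differential evolution fitting ODE parameters by least squares.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def simulate(r, K, t):
        sol = solve_ivp(lambda _, y: r * y * (1 - y / K),
                        (t[0], t[-1]), [0.1], t_eval=t)
        return sol.y[0]

    t = np.linspace(0, 10, 25)
    rng = np.random.default_rng(5)
    data = simulate(0.9, 3.0, t) + 0.05 * rng.standard_normal(t.size)

    def sse(params):                        # objective: squared residuals
        return np.sum((simulate(*params, t) - data) ** 2)

    res = differential_evolution(sse, bounds=[(0.1, 2.0), (1.0, 5.0)], seed=5)
    print(f"estimated r = {res.x[0]:.3f}, K = {res.x[1]:.3f}")
    ```

    Partial observability, as studied in the paper, amounts to computing the residuals only over the measured components of the system.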

  6. Machine Learning Techniques in Optimal Design

    Science.gov (United States)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution

  7. A new slit lamp-based technique for anterior chamber angle estimation.

    Science.gov (United States)

    Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A

    2014-06-01

    To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlation between ACA values as determined by gonioscopy and SLACE was 0.81 (p < 0.001), with gonioscopy graded using the Spaeth classification. The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.

  8. Hybrid Firefly Variants Algorithm for Localization Optimization in WSN

    Directory of Open Access Journals (Sweden)

    P. SrideviPonmalar

    2017-01-01

    Full Text Available Localization is one of the key issues in wireless sensor networks. Several algorithms and techniques have been introduced for localization, which is a procedural technique of estimating the sensor node location. In this paper, three novel hybrid algorithms based on the firefly algorithm are proposed for the localization problem. The Hybrid Genetic Algorithm-Firefly Localization Algorithm (GA-FFLA), Hybrid Differential Evolution-Firefly Localization Algorithm (DE-FFLA) and Hybrid Particle Swarm Optimization-Firefly Localization Algorithm (PSO-FFLA) are analyzed, designed and implemented to optimize the localization error. The localization algorithms are compared based on accuracy of estimation of location, time complexity and iterations required to achieve the accuracy. All the algorithms achieve one hundred percent estimation accuracy, but vary in the number of fireflies required, in time complexity, and in the number of iterations required. Keywords: Localization; Genetic Algorithm; Differential Evolution; Particle Swarm Optimization

  9. An improved technique for the prediction of optimal image resolution ...

    African Journals Online (AJOL)

    2010-10-04

    Oct 4, 2010 ... Available online at http://www.academicjournals.org/AJEST ... robust technique for predicting optimal image resolution for the mapping of savannah ecosystems was developed. .... whether to purchase multi-spectral imagery acquired by GeoEye-2 ..... Analysis of the spectral behaviour of the pasture class in.

  10. Observation of lens aberrations for high resolution electron microscopy II: Simple expressions for optimal estimates

    Energy Technology Data Exchange (ETDEWEB)

    Saxton, W. Owen, E-mail: wos1@cam.ac.uk

    2015-04-15

    This paper lists simple closed-form expressions estimating aberration coefficients (defocus, astigmatism, three-fold astigmatism, coma / misalignment, spherical aberration) on the basis of image shift or diffractogram shape measurements as a function of injected beam tilt. Simple estimators are given for a large number of injected tilt configurations, optimal in the sense of least-squares fitting of all the measurements, and so better than most reported previously. Standard errors are given for most, allowing different approaches to be compared. Special attention is given to the measurement of the spherical aberration, for which several simple procedures are given, and the effect of foreknowledge of this on other aberration estimates is noted. Details and optimal expressions are also given for a new and simple method of analysis, requiring measurements of the diffractogram mirror axis direction only, which are simpler to make than the focus and astigmatism measurements otherwise required. - Highlights: • Optimal estimators for CTEM lens aberrations are more accurate and/or use fewer observations. • Estimators have been found for defocus, astigmatism, three-fold astigmatism, coma and spherical aberration. • Estimators have been found relying on diffractogram shape, image shift and diffractogram orientation only, for a variety of beam tilts. • The standard error for each estimator has been found.

  11. The optimal injection technique for the osteoarthritic ankle: A randomized, cross-over trial

    NARCIS (Netherlands)

    Witteveen, Angelique G. H.; Kok, Aimee; Sierevelt, Inger N.; Kerkhoffs, Gino M. M. J.; van Dijk, C. Niek

    2013-01-01

    Background: To optimize the injection technique for the osteoarthritic ankle in order to enhance the effect of intra-articular injections and minimize adverse events. Methods: Randomized cross-over trial. Comparing two injection techniques in patients with symptomatic ankle osteoarthritis. Patients

  12. Efficient optimal joint channel estimation and data detection for massive MIMO systems

    KAUST Repository

    Alshamary, Haider Ali Jasim

    2016-08-15

    In this paper, we propose an efficient optimal joint channel estimation and data detection algorithm for massive MIMO wireless systems. Our algorithm is optimal in terms of the generalized likelihood ratio test (GLRT). For massive MIMO systems, we show that the expected complexity of our algorithm grows polynomially in the channel coherence time. Simulation results demonstrate significant performance gains of our algorithm compared with suboptimal non-coherent detection algorithms. To the best of our knowledge, this is the first algorithm which efficiently achieves GLRT-optimal non-coherent detection for massive MIMO systems with general constellations.

  13. Optimization-based scatter estimation using primary modulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao@sjtu.edu.cn [School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Song, Ying [Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu 610041 (China)

    2016-08-15

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.

  14. New Techniques for Optimal Treatment Planning for LINAC-based Sterotactic Radiosurgery

    International Nuclear Information System (INIS)

    Suh, Tae Suk

    1992-01-01

    Since LINAC-based stereotactic radiosurgery uses multiple noncoplanar arcs, three-dimensional dose evaluation and many beam parameters, a lengthy computation time is required to optimize even the simplest case by trial and error. The basic approach presented in this paper is to show promising methods using an experimental optimization and an analytic optimization. The purpose of this paper is not to describe the methods in detail, but to briefly introduce research proceeding currently or in the near future. A more detailed description will be given in forthcoming papers. Experimental optimization is based on two approaches. One is shaping the target volumes through the use of multiple isocenters determined from dose experience and testing. The other method is conformal therapy using a beam's eye view technique and field shaping. The analytic approach is to adapt computer-aided design optimization to finding optimum irradiation parameters automatically

  15. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  16. Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm

    International Nuclear Information System (INIS)

    Oliva, Diego; Abd El Aziz, Mohamed; Ella Hassanien, Aboul

    2017-01-01

    Highlights: •We modify the whale algorithm using chaotic maps. •We apply a chaotic algorithm to estimate parameters of photovoltaic cells. •We perform a study of chaos in the whale algorithm. •Several comparisons and metrics support the experimental results. •We test the method with data from real solar cells. -- Abstract: The use of solar energy has increased since it is a clean source of energy. Accordingly, the design of photovoltaic cells has attracted the attention of researchers around the world. There are two main problems in this field: finding a useful model to characterize the solar cells, and the absence of data about photovoltaic cells. This situation even affects the performance of photovoltaic modules (panels). The current-versus-voltage characteristics are used to describe the behavior of solar cells. Considering such values, the design problem involves the solution of complex non-linear and multi-modal objective functions. Different algorithms have been proposed to identify the parameters of photovoltaic cells and panels, and most of them commonly fail to find the optimal solutions. This paper proposes the Chaotic Whale Optimization Algorithm (CWOA) for the parameter estimation of solar cells. The main advantage of the proposed approach is the use of chaotic maps to compute and automatically adapt the internal parameters of the optimization algorithm. This is beneficial in complex problems, because along the iterative process the algorithm improves its capability to search for the best solution, and the modified method is able to optimize complex and multimodal objective functions such as the one arising in the estimation of solar cell parameters. To illustrate the capabilities of the proposed algorithm in solar cell design, it is compared with other optimization methods over different datasets. Moreover, the experimental results support the improved performance of the proposed approach
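    A rough sketch of the chaotic-map idea, under stated assumptions: a logistic map (one common choice of chaotic map) replaces the uniform random draw that sets the whale algorithm's internal parameters A and C, and the sphere function stands in for the solar-cell objective. This is a generic minimizer illustrating the mechanism, not the paper's exact CWOA.

        import numpy as np

        def sphere(x):
            return float(np.sum(x ** 2))

        def cwoa(obj, dim=5, n_whales=20, iters=200, lo=-5.0, hi=5.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (n_whales, dim))
            best = min(X, key=obj).copy()
            c = 0.7                                # chaotic state
            for t in range(iters):
                a = 2.0 - 2.0 * t / iters          # decreases linearly 2 -> 0
                for i in range(n_whales):
                    c = 4.0 * c * (1.0 - c)        # logistic map, chaotic regime
                    A, C = 2.0 * a * c - a, 2.0 * c
                    if rng.random() < 0.5:
                        if abs(A) < 1.0:           # encircle the best solution
                            X[i] = best - A * np.abs(C * best - X[i])
                        else:                      # explore around a random whale
                            rand = X[rng.integers(n_whales)]
                            X[i] = rand - A * np.abs(C * rand - X[i])
                    else:                          # log-spiral move toward the best
                        l = rng.uniform(-1.0, 1.0)
                        X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
                    np.clip(X[i], lo, hi, out=X[i])
                    if obj(X[i]) < obj(best):
                        best = X[i].copy()
            return best, obj(best)

        print(cwoa(sphere))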

  17. Resizing Technique-Based Hybrid Genetic Algorithm for Optimal Drift Design of Multistory Steel Frame Buildings

    Directory of Open Access Journals (Sweden)

    Hyo Seon Park

    2014-01-01

    Full Text Available Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for convergence, a genetic algorithm is combined with a resizing technique, an efficient optimization technique that controls the drift of buildings without repetitive structural analysis. The resizing technique-based hybrid genetic algorithm proposed in this paper is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.

  18. A CMOS-compatible silicon substrate optimization technique and its application in radio frequency crosstalk isolation

    International Nuclear Information System (INIS)

    Li Chen; Liao Huailin; Huang Ru; Wang Yangyuan

    2008-01-01

    In this paper, a complementary metal-oxide semiconductor (CMOS)-compatible silicon substrate optimization technique is proposed to achieve effective isolation. The selective growth of porous silicon is used to effectively suppress substrate crosstalk. The isolation structures are fabricated in a standard CMOS process, and this post-CMOS substrate optimization technique is then carried out to greatly improve crosstalk isolation performance. Three-dimensional electromagnetic simulation is implemented to verify the clear effect of the substrate optimization technique. The morphology and growth conditions of the fabricated porous silicon have been investigated in detail. Furthermore, a thick selectively grown porous silicon (SGPS) trench for crosstalk isolation has been formed, and an improvement of about 20 dB in substrate isolation is achieved. These results demonstrate that the post-CMOS SGPS technique is very promising for RF IC applications. (cross-disciplinary physics and related areas of science and technology)

  19. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In a Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data, using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there are cases in which we are mainly interested in finding the point in parameter space at which the probability distribution has its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, like the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
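    For reference, a minimal canonical PSO of the kind the record applies, assuming the standard inertia-weight update rule; the two-dimensional quadratic below merely stands in for the WMAP likelihood surface.

        import numpy as np

        def pso(obj, bounds, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, (n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest, pval = x.copy(), np.apply_along_axis(obj, 1, x)
            g = pbest[np.argmin(pval)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2,) + x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
                x = np.clip(x + v, lo, hi)                             # position update
                val = np.apply_along_axis(obj, 1, x)
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[np.argmin(pval)].copy()
            return g, pval.min()

        # Toy "likelihood": a quadratic bowl in two parameters.
        print(pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 70.0) ** 2 / 100.0,
                  bounds=[(0.0, 1.0), (50.0, 90.0)]))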

  20. Application of Nontraditional Optimization Techniques for Airfoil Shape Optimization

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2012-01-01

    Full Text Available The choice of optimization algorithm is one of the most important factors that strongly influence the fidelity of the solution in an aerodynamic shape optimization problem. Nowadays, various optimization methods, such as the genetic algorithm (GA), simulated annealing (SA), and particle swarm optimization (PSO), are widely employed to solve aerodynamic shape optimization problems. In addition to the optimization method, the geometry parameterization is an important factor to be considered during the aerodynamic shape optimization process. The objective of this work is to introduce an approach for describing general airfoil geometry using twelve parameters, representing its shape as a polynomial function, and to couple this approach with a flow solver and optimization algorithms. An aerodynamic shape optimization problem is formulated for the NACA 0012 airfoil and solved using simulated annealing and the genetic algorithm for a 5.0 deg angle of attack. The results show that the simulated annealing optimization scheme is more effective in finding the optimum among the various possible solutions. It is also found that SA shows more exploitation characteristics, whereas GA is the more effective explorer.

  1. Optimization long hole blast fragmentation techniques and detonating circuit underground uranium mine stope

    International Nuclear Information System (INIS)

    Li Qin; Yang Lizhi; Song Lixia; Qin De'en; Xue Yongshe; Wang Zhipeng

    2012-01-01

    Aiming at the high rate of large blast fragments, a major difficulty in long-hole drilling and blasting in underground uranium mine stopes, it is pointed out that, alongside integrated technical management measures, the key is to optimize the drilling and blasting parameters and ensure safe priming: adopt the 'minimum burden' blasting technique, renew the stope fragmentation process with the new process of hole-bottom indirect initiation, and optimize the detonating circuit using a safe, reliable and economically rational duplex non-electric detonating circuit. Production practice shows that, under strictly controlled construction quality, the application of the optimized blast fragmentation technique has enhanced the reliability of safe detonation and largely solved the problem of a high rate of large blast fragments. (authors)

  2. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal

  3. Good Manufacturing Practices (GMP) manufacturing of advanced therapy medicinal products: a novel tailored model for optimizing performance and estimating costs.

    Science.gov (United States)

    Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra

    2013-03-01

    Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  4. The Use of Coupled Code Technique for Best Estimate Safety Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2006-01-01

    Issues connected with the thermal-hydraulics and neutronics of nuclear plants still challenge the design, safety and operation of Light Water Reactors (LWR). The lack of full understanding of the complex mechanisms arising from the interaction between these issues has imposed the adoption of conservative safety limits. Those safety margins put restrictions on the optimal exploitation of the plants and consequently reduce their economic profit. In the light of the sustained development of computer technology, the capabilities of computational codes have been enlarged substantially. Consequently, advanced safety evaluations and design optimizations that were not possible a few years ago can now be performed. During the last decades, Best Estimate (BE) neutronic and thermal-hydraulic calculations were carried out along rather parallel paths with only few interactions between them. Nowadays it has become possible to switch to a new generation of computational tools, namely the coupled code technique. The application of such a method is mandatory for the analysis of accident conditions involving strong coupling between the core neutronics and the primary circuit thermal-hydraulics, especially when asymmetrical processes take place in the core, leading to local space-dependent power generation. The current study demonstrates the maturity level achieved in the calculation of 3-D core performance during complex accident scenarios in NPPs. Typical applications are outlined and discussed, showing the main features and limitations of this technique. (author)

  5. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from it. The algorithm is applied in a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
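    A schematic sketch of the proposal-generation idea, assuming Gaussian process and measurement noise: each particle's prediction is pushed a few steepest-descent steps toward a mode of the one-step posterior, and the proposal is sampled around the refined point. The benchmark model, step sizes, and noise levels are illustrative, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        Q, R, S = 1.0, 0.5, 0.09       # process, measurement, proposal variances
        f = lambda x, t: 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * t)
        h = lambda x: x ** 2 / 20.0

        def grad_neg_log_post(x, x_pred, y):
            # d/dx of [(x - x_pred)^2 / 2Q + (h(x) - y)^2 / 2R]
            return (x - x_pred) / Q + (h(x) - y) * (x / 10.0) / R

        n, T, x_true = 500, 50, 0.1
        particles = rng.normal(0.0, 1.0, n)
        for t in range(T):
            x_true = f(x_true, t) + rng.normal(0.0, np.sqrt(Q))
            y = h(x_true) + rng.normal(0.0, np.sqrt(R))
            x_pred = f(particles, t)
            mode = x_pred.copy()
            for _ in range(5):                     # steepest-descent refinement
                mode -= 0.1 * grad_neg_log_post(mode, x_pred, y)
            particles = mode + rng.normal(0.0, np.sqrt(S), n)   # sample proposal
            logw = (-0.5 * (y - h(particles)) ** 2 / R          # likelihood
                    - 0.5 * (particles - x_pred) ** 2 / Q       # transition prior
                    + 0.5 * (particles - mode) ** 2 / S)        # minus log-proposal
            wgt = np.exp(logw - logw.max())
            wgt /= wgt.sum()
            estimate = np.sum(wgt * particles)                  # posterior mean
            particles = rng.choice(particles, n, p=wgt)         # resample
        print("final estimate vs truth:", estimate, x_true)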

  6. Constrained Optimization of MIMO Training Sequences

    Directory of Open Access Journals (Sweden)

    Coon Justin P

    2007-01-01

    Full Text Available Multiple-input multiple-output (MIMO) systems have shown a huge potential for increased spectral efficiency and throughput. With an increasing number of transmitting antennas comes the burden of providing training for channel estimation for coherent detection. In some special cases optimal, in the sense of mean-squared error (MSE), training sequences have been designed. However, in many practical systems it is not feasible to analytically find optimal solutions and numerical techniques must be used. In this paper, two systems (unique word (UW) single carrier and OFDM with nulled subcarriers) are considered and a method of designing near-optimal training sequences using nonlinear optimization techniques is proposed. In particular, interior-point (IP) algorithms such as the barrier method are discussed. Although the two systems seem unrelated, the cost function, which is the MSE of the channel estimate, is shown to be effectively the same for each scenario. Also, additional constraints, such as peak-to-average power ratio (PAPR), are considered and shown to be easily included in the optimization process. Numerical examples illustrate the effectiveness of the designed training sequences, both in terms of MSE and bit-error rate (BER).

  7. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    Full Text Available This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain accurate matching costs, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments, to further improve the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches.

  8. A comparison of small-area estimation techniques to estimate selected stand attributes using LiDAR-derived auxiliary variables

    Science.gov (United States)

    Michael E. Goerndt; Vicente J. Monleon; Hailemariam. Temesgen

    2011-01-01

    One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...

  9. Better Drumming Through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization

    OpenAIRE

    Murphy, Jim; Kapur, Ajay; Carnegie, Dale

    2012-01-01

    A problem with many contemporary musical robotic percussion systems lies in the fact that solenoids fail to respond linearly to linear increases in input velocity. This nonlinearity forces performers to individually tailor their compositions to specific robotic drummers. To address this problem, we introduce a method of pre-performance calibration using metaheuristic search techniques. A variety of such techniques are introduced and evaluated and the results of the optimized solenoid-based p...

  10. Optimization of a Fuzzy-Logic-Control-Based MPPT Algorithm Using the Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Po-Chen Cheng

    2015-06-01

    Full Text Available In this paper, an asymmetrical fuzzy-logic-control (FLC)-based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT methods are then proposed. The first method can quickly determine the input MF setting values via the power–voltage (P–V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value, therefore, it can successfully address the tracking speed/tracking accuracy dilemma compared with the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the obtained optimal asymmetrical FLC-based MPPT can improve the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.

  11. Application of the control variate technique to estimation of total sensitivity indices

    International Nuclear Information System (INIS)

    Kucherenko, S.; Delpuech, B.; Iooss, B.; Tarantola, S.

    2015-01-01

    Global sensitivity analysis is widely used in many areas of science, biology, sociology and policy planning. The variance-based method also known as Sobol' sensitivity indices has become the method of choice among practitioners due to its efficiency and ease of interpretation. For complex practical problems, estimation of Sobol' sensitivity indices generally requires a large number of function evaluations to achieve reasonable convergence. To improve the efficiency of the Monte Carlo estimates of the Sobol' total sensitivity indices, we apply the control variate reduction technique and develop a new formula for the evaluation of total sensitivity indices. Presented results using well-known test functions show the efficiency of the developed technique. - Highlights: • We analyse the efficiency of the Monte Carlo estimates of Sobol' sensitivity indices. • The control variate technique is applied for estimation of total sensitivity indices. • We develop a new formula for evaluation of Sobol' total sensitivity indices. • We present test results demonstrating the high efficiency of the developed formula
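    The control variate principle itself fits in a few lines; the sketch below uses an illustrative one-dimensional integrand rather than a Sobol' total-index estimator, and shows the variance reduction obtained by subtracting a correlated control whose mean is known exactly.

        import numpy as np

        rng = np.random.default_rng(0)
        u = rng.random(100_000)

        f = np.exp(u)                 # target: E[exp(U)] = e - 1
        g = u                         # control: E[U] = 0.5, known exactly
        C = np.cov(f, g)
        beta = C[0, 1] / C[1, 1]      # variance-optimal coefficient
        cv = f - beta * (g - 0.5)     # control variate estimator samples

        print("plain MC:       ", f.mean())
        print("control variate:", cv.mean())
        print("variance ratio: ", cv.var() / f.var())   # well below 1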

  12. THE OPTIMIZATION OF TECHNOLOGICAL MINING PARAMETERS IN QUARRY FOR DIMENSION STONE BLOCKS QUALITY IMPROVEMENT BASED ON PHOTOGRAMMETRIC TECHNIQUES OF MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Ruslan Sobolevskyi

    2018-01-01

    Full Text Available This research focuses on patterns of change in the quality of commercial dimension stone blocks, based on previously identified and measured geometrical parameters of natural cracks, and on modelling and planning the final dimensions of stone products and finished goods using the proposed digital photogrammetric techniques. The optimal parameters of surveying are investigated, and the influence of surveying distance on crack length and area is estimated. Rational technological parameters of dimension stone block production are taken into account.

  13. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

    A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.

  14. Parameter estimation for chaotic systems with a Drift Particle Swarm Optimization method

    International Nuclear Information System (INIS)

    Sun Jun; Zhao Ji; Wu Xiaojun; Fang Wei; Cai Yujie; Xu Wenbo

    2010-01-01

    Inspired by the motion of electrons in metal conductors in an electric field, we propose a variant of Particle Swarm Optimization (PSO), called Drift Particle Swarm Optimization (DPSO) algorithm, and apply it in estimating the unknown parameters of chaotic dynamic systems. The principle and procedure of DPSO are presented, and the algorithm is used to identify Lorenz system and Chen system. The experiment results show that for the given parameter configurations, DPSO can identify the parameters of the systems accurately and effectively, and it may be a promising tool for chaotic system identification as well as other numerical optimization problems in physics.

  15. A characteristic study of CCF modeling techniques and optimization of CCF defense strategies

    International Nuclear Information System (INIS)

    Kim, Min Chull

    2000-02-01

    Common Cause Failures (CCFs ) are among the major contributors to risk and core damage frequency (CDF ) from operating nuclear power plants (NPPs ). Our study on CCF focused on the following aspects : 1) a characteristic study on the CCF modeling techniques and 2) development of the optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through sensitivity study of CCF occurrence probability upon system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., beta factor, MGL, alpha factor, and binomial failure rate models. We found that MGL and alpha factor models are essentially identical in terms of the CCF probability. Secondly, in the study for CCF defense, the various methods identified in the previous studies for defending against CCF were classified into five different categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the use of analytic hierarchical process (AHP). We applied this method to two motor-driven valves for containment sump isolation in Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective

  16. Optimal Mass Transport for Statistical Estimation, Image Analysis, Information Geometry, and Control

    Science.gov (United States)

    2017-01-10

    advances on formulating and solving optimal transport problems on discrete spaces (networks) while ensuring robustness of the transportation plan. Related publications include: "Metric Uncertainty for Spectral Estimation based on Nevanlinna-Pick Interpolation" (with J. Karlsson), Intern. Symp. on the Math. Theory of Networks and Systems, Melbourne, 2012; and "Geometric tools for the estimation of structured covariances" (with L. Ning, X. Jiang), Intern. Symposium on the Math. Theory

  17. A concise account of techniques available for shipboard sea state estimation

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2017-01-01

    This article gives a review of techniques applied to make sea state estimation on the basis of measured responses on a ship. The general concept of the procedures is similar to that of a classical wave buoy, which exploits a linear assumption between waves and the associated motions.

  18. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.

  19. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO; this characteristic increases the computation performed in each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.

  20. Parallel halftoning technique using dot diffusion optimization

    Science.gov (United States)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

    In this paper, a novel approach for halftone images is proposed and implemented for images obtained by the Dot Diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm: new versions of the class matrix are generated that contain no baron and near-baron entries, in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, each designed for one of two different applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates halftone images and inverse halftone images of good quality. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented for real-time processing.

  1. Studies Regarding Design and Optimization of Mechanisms Using Modern Techniques of CAD and CAE

    Directory of Open Access Journals (Sweden)

    Marius Tufoi

    2010-01-01

    Full Text Available The paper presents applications of modern techniques of CAD (Computer Aided Design) and CAE (Computer Aided Engineering) to the design and optimization of mechanisms used in mechanical engineering. The use of these techniques is exemplified by designing and optimizing parts of a drawing installation for the horizontal continuous casting of metals. Applying these design methods and using the finite element method in simulations of the designed mechanisms yields a number of advantages over traditional methods of drawing and design: speed in drawing, design and optimization of parts and mechanisms; the option of kinematic, kinetostatic and dynamic analysis through simulation, without requiring physical realization of the part or mechanism; the determination by the finite element method of stresses, elongations, displacements and safety factors; and the possibility of optimizing these quantities to ensure the mechanical strength of each piece separately. These studies were carried out using the SolidWorks 2009 software suite.

  2. Learning curve estimation techniques for the nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year
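    A toy sketch of the maximum-likelihood side of such an analysis, assuming a Poisson count model with an exponentially decaying occurrence rate; the yearly counts below are synthetic, not the nine-accident data base used in the paper.

        import numpy as np
        from scipy.optimize import minimize

        years = np.arange(25.0)                 # operating experience, years
        rng = np.random.default_rng(1)
        counts = rng.poisson(0.04 * np.exp(-0.1 * years) * 100)  # assume 100 reactors

        def neg_log_lik(theta):
            a, b = theta
            lam = 100 * a * np.exp(-b * years)  # expected events per calendar year
            # Poisson negative log-likelihood, dropping the constant term.
            return float(np.sum(lam - counts * np.log(lam)))

        fit = minimize(neg_log_lik, x0=[0.05, 0.05],
                       bounds=[(1e-6, 1.0), (0.0, 1.0)])
        print("per-reactor rate a, learning rate b:", fit.x)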

  3. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    Science.gov (United States)

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher compared to a classical optimization problem with a relative mean error of 4% on cost function evaluation.

  4. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    Science.gov (United States)

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  5. Greenhouse Environmental Control Using Optimized MIMO PID Technique

    Directory of Open Access Journals (Sweden)

    Fateh BOUNAAMA

    2011-10-01

    Full Text Available Climate control for protected crops brings the added dimension of a biological system into a physical system control situation. The thermally dynamic nature of a greenhouse suggests that disturbance attenuation (load control) of external temperature, humidity, and sunlight is far more important than is the case for controlling other types of buildings. This paper investigates the application of a multi-input multi-output (MIMO) PID controller to a MIMO greenhouse environmental model with actuation constraints. The method is based on decoupling the system at a low-frequency point. The optimal tuning values are determined using genetic algorithm optimization (GA). The inside/outside climate model of the greenhouse and automatically collected data sets from Avignon, France, are used to simulate and test this technique. The control objective is to maintain the highly coupled inside air temperature and relative humidity of a strongly perturbed greenhouse at specified set-points by means of ventilation/cooling and moisturizing operations.

  6. Photon attenuation correction technique in SPECT based on nonlinear optimization

    International Nuclear Information System (INIS)

    Suzuki, Shigehito; Wakabayashi, Misato; Okuyama, Keiichi; Kuwamura, Susumu

    1998-01-01

    Photon attenuation correction in SPECT was made using a nonlinear optimization theory, in which an optimum image is searched so that the sum of square errors between observed and reprojected projection data is minimized. This correction technique consists of optimization and step-width algorithms, which determine at each iteration a pixel-by-pixel directional value of search and its step-width, respectively. We used the conjugate gradient and quasi-Newton methods as the optimization algorithm, and Curry rule and the quadratic function method as the step-width algorithm. Statistical fluctuations in the corrected image due to statistical noise in the emission projection data grew as the iteration increased, depending on the combination of optimization and step-width algorithms. To suppress them, smoothing for directional values was introduced. Computer experiments and clinical applications showed a pronounced reduction in statistical fluctuations of the corrected image for all combinations. Combinations using the conjugate gradient method were superior in noise characteristic and computation time. The use of that method with the quadratic function method was optimum if noise property was regarded as important. (author)
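    The optimization core, minimizing the sum of squared errors between observed and reprojected projections, can be sketched on a toy linear system. The random matrix below stands in for the (nonlinear, attenuated) SPECT projector, and the conjugate gradient method, the better-performing choice in the study, is applied to the normal equations.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(0)
        n_pix, n_proj = 64, 96
        A = rng.random((n_proj, n_pix)) / n_pix       # stand-in projector
        x_true = rng.random(n_pix)                    # "emission image"
        p = A @ x_true + 0.001 * rng.standard_normal(n_proj)  # noisy projections

        # Normal equations A^T A x = A^T p, solved matrix-free with CG.
        op = LinearOperator((n_pix, n_pix), matvec=lambda v: A.T @ (A @ v))
        x_hat, info = cg(op, A.T @ p, maxiter=500)
        err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        print("CG info:", info, "relative error:", err)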

  7. Optimal covariate designs theory and applications

    CERN Document Server

    Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar

    2015-01-01

    This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...

  8. Parameter estimation for an expanding universe

    Directory of Open Access Journals (Sweden)

    Jieci Wang

    2015-03-01

    Full Text Available We study parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the rate estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.
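    For orientation, the precision statements rest on the quantum Cramér–Rao bound, a standard result (not specific to this record) linking the best attainable variance over M independent runs to the quantum Fisher information F_Q of the probe state:

        \mathrm{Var}(\hat{\theta}) \;\ge\; \frac{1}{M\, F_Q(\rho_{\theta})}

    Maximizing F_Q over probe states is what motivates the choice of Hamiltonian eigenvectors reported above.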

  9. Optimized evaporation technique for leachate treatment: Small scale implementation.

    Science.gov (United States)

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature. Copyright © 2016. Published by Elsevier Ltd.

  10. Optimal heavy tail estimation – Part 1: Order selection

    Directory of Open Access Journals (Sweden)

    M. Mudelsee

    2017-12-01

    Full Text Available The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
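    To make the order-selection problem concrete, the sketch below computes the classical Hill estimator of α as a function of the order k on data simulated from a Pareto distribution with known exponent; the paper's contribution, the data-adaptive selector for k, is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.sort(rng.pareto(1.5, 10_000) + 1.0)   # Pareto tail, alpha = 1.5

        def hill_alpha(x_sorted, k):
            # alpha_hat(k) = 1 / mean(log X_(n-i) - log X_(n-k)), i = 0..k-1
            return 1.0 / np.mean(np.log(x_sorted[-k:]) - np.log(x_sorted[-k - 1]))

        for k in (50, 200, 1000):                    # estimate varies with order k
            print(k, hill_alpha(x, k))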

  11. Comparison of optimization techniques for MRR and surface roughness in wire EDM process for gear cutting

    Directory of Open Access Journals (Sweden)

    K.D. Mohapatra

    2016-11-01

    Full Text Available The objective of the present work is to use a suitable method that can optimize process parameters like pulse-on time (TON), pulse-off time (TOFF), wire feed rate (WF), wire tension (WT) and servo voltage (SV) to attain the maximum value of MRR and the minimum value of surface roughness during the production of a fine-pitch spur gear made of copper. The spur gear has a pressure angle of 20° and a pitch circle diameter of 70 mm. The wire has a diameter of 0.25 mm and is made of brass. Experiments were conducted according to Taguchi's orthogonal array concept with five factors and two levels. Thus, the Taguchi quality loss design technique is used to optimize the output responses obtained from the experiments. Another optimization technique, desirability with grey Taguchi, has been used to optimize the process parameters. Both optimized results are compared to find the best combination of MRR and surface roughness. A confirmation test was carried out to identify the significant improvement in machining performance in the case of Taguchi quality loss. Finally, it was concluded that the desirability with grey Taguchi technique produced a better result than the Taguchi quality loss technique in the case of MRR, and Taguchi quality loss gives a better result in the case of surface roughness. The quality of the wire after the cutting operation is shown in the scanning electron microscopy (SEM) figure.

  12. Rotor Pole Shape Optimization of Permanent Magnet Brushless DC Motors Using the Reduced Basis Technique

    Directory of Open Access Journals (Sweden)

    GHOLAMIAN, A. S.

    2009-06-01

    Full Text Available In this paper, a magnet shape optimization method for the reduction of cogging torque and torque ripple in Permanent Magnet (PM) brushless DC motors is presented, using the reduced basis technique coupled with finite element and design-of-experiments methods. The primary objective of the method is to reduce the enormous number of design variables required to define the magnet shape. The reduced basis technique represents the shape as a weighted combination of several basis shapes, and the aim of the method is to find the best combination, using the weights for each shape as the design variables. A multi-level design process is developed to find suitable basis shapes, or trial shapes, at each level that can be used in the reduced basis technique. Each level is treated as a separate optimization problem until the required objective is achieved. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. The method is demonstrated on the magnet shape optimization of a 6-pole/18-slot PM BLDC motor.

  13. Optimization of Thermal Aspects of Friction Stir Welding – Initial Studies Using a Space Mapping Technique

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup; Bendsøe, Martin P.; Schmidt, Henrik Nikolaj Blicher

    2007-01-01

    The aim of this paper is to optimize a thermal model of a friction stir welding process. The optimization is performed using a space mapping technique in which an analytical model is used along with the FEM model to be optimized. The results are compared to traditional gradient-based optimization...

  14. Development and comparison of techniques for estimating design basis flood flows for nuclear power plants

    International Nuclear Information System (INIS)

    1980-05-01

    Estimation of the design basis flood for Nuclear Power Plants can be carried out using either deterministic or stochastic techniques. Stochastic techniques, while widely used for the solution of a variety of hydrological and other problems, have not been used to date (1980) in connection with the estimation of design basis flood for NPP siting. This study compares the two techniques against one specific river site (Galt on the Grand River, Ontario). The study concludes that both techniques lead to comparable results, but that stochastic techniques have the advantage of extracting maximum information from available data and presenting the results (flood flow) as a continuous function of probability together with estimation of confidence limits. (author)

  15. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Science.gov (United States)

    Hernandez, Wilmar

    2007-01-01

    In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  16. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  17. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields that require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, to give readers an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions.

  18. Performance of Estimation of distribution algorithm for initial core loading optimization of AHWR-LEU

    International Nuclear Information System (INIS)

    Thakur, Amit; Singh, Baltej; Gupta, Anurag; Duggal, Vibhuti; Bhatt, Kislay; Krishnani, P.D.

    2016-01-01

    Highlights: • EDA has been applied to optimize the initial core of AHWR-LEU. • Suitable values of the weighting factor ‘α’ and population size in EDA were estimated. • The effect of varying the initial distribution function on the optimized solution was studied. • For comparison, a Genetic Algorithm was also applied. - Abstract: Population based evolutionary algorithms now form an integral part of fuel management in nuclear reactors and are frequently being used for fuel loading pattern optimization (LPO) problems. In this paper we have applied the Estimation of distribution algorithm (EDA) to optimize the initial core loading pattern (LP) of AHWR-LEU. In EDA, new solutions are generated by sampling the probability distribution model estimated from the selected best candidate solutions. The weighting factor ‘α’ decides the fraction of the current best solution used for updating the probability distribution function after each generation. A wider use of EDA warrants a comprehensive study of parameters like population size, weighting factor ‘α’ and initial probability distribution function. In the present study, we have done an extensive analysis of these parameters (population size, weighting factor ‘α’ and initial probability distribution function) in EDA. It is observed that choosing a very small value of ‘α’ may limit the search for optimized solutions to the near vicinity of the initial probability distribution function, so that better loading patterns which are away from the initial distribution function may not be considered with due weightage. It is also observed that increasing the population size improves the optimized loading pattern; however, the algorithm still fails if the initial distribution function is not close to the expected optimized solution. We have tried to find the suitable values of ‘α’ and population size to be considered for the AHWR-LEU initial core loading pattern optimization problem. For the sake of comparison and completeness, we have also addressed the
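
    To make the EDA update concrete, here is a minimal univariate (PBIL-style) sketch in Python; the binary encoding, the fitness interface and all parameter values are illustrative assumptions, not the AHWR-LEU implementation. The parameter alpha plays the role of the weighting factor ‘α’ described above.

    ```python
    import numpy as np

    def eda_optimize(fitness, n_bits, pop_size=100, n_best=20, alpha=0.3,
                     generations=50, seed=0):
        """Univariate EDA: sample candidates, select the best, and blend them
        into the probability model with weighting factor alpha."""
        rng = np.random.default_rng(seed)
        p = np.full(n_bits, 0.5)                      # initial distribution function
        for _ in range(generations):
            pop = rng.random((pop_size, n_bits)) < p  # sample candidate patterns
            scores = np.array([fitness(ind) for ind in pop])
            best = pop[np.argsort(scores)[-n_best:]]  # keep the best candidates
            p = (1 - alpha) * p + alpha * best.mean(axis=0)
        return p
    ```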

  19. Optimization of AFP-radioimmunoassay using Antibody Capture Technique

    International Nuclear Information System (INIS)

    Moustafa, K.A.

    2003-01-01

    Alpha-fetoprotein (AFP) is a substance produced by the unborn baby. When the neural tube is not properly formed, large amounts of AFP pass into the amniotic fluid and reach the mother's blood. By measuring AFP in the mother's blood and amniotic fluid, it is possible to tell whether or not there is a chance that the unborn baby has a neural tube defect. AFP is also used as a tumor marker for hepatocellular carcinoma. There are many different techniques for measuring AFP in blood, but the most accurate is the immunoassay technique. Immunoassays can be classified on the basis of methodology into three classes: (1) antibody capture assays, (2) antigen capture assays, (3) two-antibody sandwich assays. In the present study, the antibody capture assay, in which the antigen is attached to a solid support and labeled antibody is allowed to bind, will be optimized

  20. A Feedback Optimal Control Algorithm with Optimal Measurement Time Points

    Directory of Open Access Journals (Sweden)

    Felix Jost

    2017-02-01

    Full Text Available Nonlinear model predictive control has been established as a powerful methodology to provide feedback for dynamic processes over the last decades. In practice it is usually combined with parameter and state estimation techniques, which makes it possible to cope with uncertainty on many levels. To reduce the uncertainty it has also been suggested to include optimal experimental design in the sequential process of estimation and control calculation. Most of the focus so far has been on dual control approaches, i.e., on using the controls to simultaneously excite the system dynamics (learning) as well as minimize a given objective (performing). We propose a new algorithm, which sequentially solves robust optimal control, optimal experimental design, and state and parameter estimation problems. Thus, we decouple the control and the experimental design problems. This has the advantage that we can analyze the impact of measurement timing (sampling) independently, and it is practically relevant for applications with either an ethical limitation on system excitation (e.g., chemotherapy treatment) or the need for fast feedback. The algorithm shows promising results with a 36% reduction of parameter uncertainties for the Lotka-Volterra fishing benchmark example.

  1. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    Science.gov (United States)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. The approach addresses the challenge of better fitting riverine ecosystem requirements to existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve operational strategies to produce downstream flows that meet both human and ecosystem needs. What makes this methodology attractive to water resources managers is the wide spread of Pareto-optimal solutions, allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.
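
    For illustration only, a minimal Python sketch of the non-dominated (Pareto) filtering at the core of NSGA-II-style multiobjective search; this extracts just the first front for a minimization problem and is not the full NSGA-II used in the study.

    ```python
    def pareto_front(points):
        """Indices of non-dominated points, all objectives to be minimized."""
        front = []
        for i, p in enumerate(points):
            dominated = any(
                all(q[k] <= p[k] for k in range(len(p))) and
                any(q[k] < p[k] for k in range(len(p)))
                for j, q in enumerate(points) if j != i)
            if not dominated:
                front.append(i)
        return front

    # e.g. points could be (water-supply deficit, negative fish diversity) pairs
    ```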

  2. PERFORMANCE ANALYSIS OF PILOT BASED CHANNEL ESTIMATION TECHNIQUES IN MB OFDM SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Madheswaran

    2011-12-01

    Full Text Available Ultra wideband (UWB) communication is mainly used for short range communication in wireless personal area networks. Orthogonal Frequency Division Multiplexing (OFDM) is being used as a key physical layer technology for Fourth Generation (4G) wireless communication. OFDM based communication gives high spectral efficiency and mitigates Inter-symbol Interference (ISI) in a wireless medium. In this paper the IEEE 802.15.3a based Multiband OFDM (MB OFDM) system is considered. Pilot based channel estimation techniques are considered to analyze the performance of MB OFDM systems over Linear Time Invariant (LTI) channel models. In this paper, pilot based Least Square (LS) and Linear Minimum Mean Square Error (LMMSE) channel estimation techniques have been considered for the UWB OFDM system. In the proposed method, the estimated Channel Impulse Responses (CIRs) are filtered in the time domain to account for the channel delay spread. The performance of the proposed system has also been analyzed for different modulation techniques with various pilot density patterns.
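
    As a hedged sketch of the pilot-based LS step (the array shapes and the linear interpolation across subcarriers are assumptions of this illustration; the paper additionally applies time-domain filtering for the delay spread):

    ```python
    import numpy as np

    def ls_channel_estimate(y_pilot, x_pilot, n_fft, pilot_idx):
        """LS estimate at the pilot subcarriers, interpolated to all carriers."""
        h_ls = y_pilot / x_pilot                 # per-pilot least-squares estimate
        k = np.arange(n_fft)
        h_re = np.interp(k, pilot_idx, h_ls.real)
        h_im = np.interp(k, pilot_idx, h_ls.imag)
        return h_re + 1j * h_im
    ```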

  3. A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters

    Science.gov (United States)

    Beattie, J. R.; Garvin, H. L.

    1982-01-01

    The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.

  4. Qualitative performance comparison of reactivity estimation between the extended Kalman filter technique and the inverse point kinetic method

    International Nuclear Information System (INIS)

    Shimazu, Y.; Rooijen, W.F.G. van

    2014-01-01

    Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, the selection of which is described in this paper. • The EKF algorithm is preferred if the signal to noise ratio is low (low flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality with good noise filtering and accuracy. The Inverse Point Kinetic (IPK) method has also been widely used for reactivity estimation. The important parameters for the EKF estimation are the process noise covariance and the measurement noise covariance; however, their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, and its selection is quite easy. Thus, some guidance is needed on which method should be selected and how to choose the required parameters. From this point of view, a qualitative performance comparison is carried out
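
    A minimal one-delayed-group sketch of the IPK method with its single tuning parameter, the time constant tau of the first-order delay filter; the kinetics constants below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def ipk_reactivity(t, n, beta=0.0065, lam=0.08, Lam=1e-4, tau=0.5):
        """One-group inverse point kinetics on a measured flux trace n(t)."""
        dt = np.diff(t)
        n_f = np.empty_like(n); n_f[0] = n[0]
        for i in range(1, len(n)):                # first-order delay (low-pass) filter
            a = dt[i-1] / (tau + dt[i-1])
            n_f[i] = (1 - a) * n_f[i-1] + a * n[i]
        C = np.empty_like(n); C[0] = beta * n_f[0] / (lam * Lam)  # equilibrium precursors
        for i in range(1, len(n)):                # integrate the precursor balance
            C[i] = C[i-1] + dt[i-1] * (beta / Lam * n_f[i-1] - lam * C[i-1])
        dndt = np.gradient(n_f, t)
        return beta + Lam / n_f * (dndt - lam * C)  # rho = beta + (Lambda/n)(dn/dt - lam*C)
    ```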

  5. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  6. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  7. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    Full Text Available This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulated sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of the time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The results show the necessity of taking care of the time dependency of the value function.
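
    A hedged sketch of the AR(1) residual-bootstrap step underlying the method; the least-squares fit and uniform residual resampling are assumptions made for this illustration.

    ```python
    import numpy as np

    def ar1_bootstrap_paths(returns, horizon, n_paths, seed=0):
        """Fit AR(1) by least squares, then resample residuals to build paths."""
        rng = np.random.default_rng(seed)
        x, y = returns[:-1], returns[1:]
        phi = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # AR(1) coefficient
        c = y.mean() - phi * x.mean()                  # intercept
        resid = y - (c + phi * x)
        paths = np.empty((n_paths, horizon))
        for p in range(n_paths):
            r = returns[-1]
            for h in range(horizon):
                r = c + phi * r + rng.choice(resid)    # residual bootstrap step
                paths[p, h] = r
        return paths
    ```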

  8. Design refinement of multilayer optical thin film devices with two optimization techniques

    International Nuclear Information System (INIS)

    Apparao, K.V.S.R.

    1992-01-01

    The design efficiency of two different optimization techniques for designing multilayer optical thin film devices is compared. Ten different devices of varying complexity are chosen as design examples for the comparison. The design refinement efficiency and the design parameter characteristics of all the sample designs obtained with the two techniques are compared. The results of the comparison demonstrate that the new design method, developed using the damped least squares technique with indirect derivatives, gives superior and more efficient designs than the method developed with direct derivatives. (author). 23 refs., 4 tabs., 14 figs

  9. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization: “Big Bang - Big Crunch”, “Fireworks Algorithm”, “Grenade Explosion Method” in estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. The parameter values are obtained by minimizing a criterion that describes the total squared error between the observed state vector coordinates and the deduced ones at different time points. A parallelepiped-type restriction is imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving such problems do not guarantee an exact result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in the two populations. For each of the examples, calculation results from all three optimization methods are given, along with some recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The deduced parameter approximations differ only slightly from the best known solutions, which were obtained differently. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and

  10. Estimation of fatigue life using electromechanical impedance technique

    Science.gov (United States)

    Lim, Yee Yan; Soh, Chee Kiong

    2010-04-01

    Fatigue induced damage is often progressive and gradual in nature. Structures subjected to a large number of fatigue load cycles undergo a process of progressive crack initiation, propagation and finally fracture. Monitoring of structural health, especially for critical components, is therefore essential for early detection of potentially harmful cracks. Recently developed smart-material approaches, such as piezo-impedance transducers adopting the electromechanical impedance (EMI) technique and the wave propagation technique, are well proven to be effective in incipient damage detection and characterization. Exceptional advantages such as autonomous, real-time, online and remote monitoring may provide a cost-effective alternative to conventional structural health monitoring (SHM) techniques. In this study, the main focus is to investigate the feasibility of characterizing a propagating fatigue crack in a structure using the EMI technique as well as estimating its remaining fatigue life using the linear elastic fracture mechanics (LEFM) approach. Uniaxial cyclic tensile load is applied to a lab-sized aluminum beam up to failure. The progressive shift in admittance signatures measured by the piezo-impedance transducer (PZT patch) with increasing loading cycles reflects the effectiveness of the EMI technique in tracing the process of fatigue damage progression. With the use of LEFM, prediction of the remaining life of the structure at different loading cycles is possible.
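
    To make the LEFM life-estimation step concrete, a hedged sketch that integrates the Paris crack-growth law da/dN = C·(ΔK)^m from an initial to a critical crack length; the geometry factor Y and all parameter values are illustrative assumptions, not data from the paper.

    ```python
    import math

    def paris_remaining_life(a0, ac, C, m, delta_sigma, Y=1.12, n_steps=100_000):
        """Remaining cycles by integrating da/dN = C*(Y*ds*sqrt(pi*a))**m."""
        a, N = a0, 0.0
        da = (ac - a0) / n_steps
        for _ in range(n_steps):
            dK = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
            N += da / (C * dK ** m)                        # cycles for this increment
            a += da
        return N
    ```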

  11. Towards Real-Time Maneuver Detection: Automatic State and Dynamics Estimation with the Adaptive Optimal Control Based Estimator

    Science.gov (United States)

    Lubey, D.; Scheeres, D.

    Tracking objects in Earth orbit is fraught with complications. This is due to the large population of orbiting spacecraft and debris that continues to grow, passive (i.e. no direct communication) and data-sparse observations, and the presence of maneuvers and dynamics mismodeling. Accurate orbit determination in this environment requires an algorithm to capture both a system's state and its state dynamics in order to account for mismodelings. Previous studies by the authors yielded an algorithm called the Optimal Control Based Estimator (OCBE) - an algorithm that simultaneously estimates a system's state and optimal control policies that represent dynamic mismodeling in the system for an arbitrary orbit-observer setup. The stochastic properties of these estimated controls are then used to determine the presence of mismodelings (maneuver detection), as well as characterize and reconstruct the mismodelings. The purpose of this paper is to develop the OCBE into an accurate real-time orbit tracking and maneuver detection algorithm by automating the algorithm and removing its linear assumptions. This results in a nonlinear adaptive estimator. In its original form the OCBE had a parameter called the assumed dynamic uncertainty, which is selected by the user with each new measurement to reflect the level of dynamic mismodeling in the system. This human-in-the-loop approach precludes real-time application to orbit tracking problems due to their complexity. This paper focuses on the Adaptive OCBE, a version of the estimator where the assumed dynamic uncertainty is chosen automatically with each new measurement using maneuver detection results to ensure that state uncertainties are properly adjusted to account for all dynamic mismodelings. The paper also focuses on a nonlinear implementation of the estimator. Originally, the OCBE was derived from a nonlinear cost function then linearized about a nominal trajectory, which is assumed to be ballistic (i.e. the nominal optimal

  12. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.

    1982-12-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to nonconservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10⁶–10⁷ year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made and the extension of the stochastic technique to ungauged sites and other design parameters is discussed

  13. Population estimation techniques for routing analysis

    International Nuclear Information System (INIS)

    Sathisan, S.K.; Chagari, A.K.

    1994-01-01

    A number of on-site and off-site factors affect the potential siting of a radioactive materials repository at Yucca Mountain, Nevada. Transportation related issues such as route selection and design are among them. These involve evaluation of potential risks and impacts, including those related to population. Population characteristics (total population and density) are critical factors in the risk assessment, emergency preparedness and response planning, and ultimately in route designation. This paper presents an application of Geographic Information System (GIS) technology to facilitate such analyses. Specifically, techniques to estimate critical population information are presented. A case study using the highway network in Nevada is used to illustrate the analyses. TIGER coverages are used as the basis for population information at the block level. The data are then synthesized at tract, county and state levels of aggregation. Of particular interest are population estimates for various corridor widths along transport corridors -- ranging from 0.5 miles to 20 miles in this paper. A sensitivity analysis based on the level of data aggregation is also presented. The results of these analyses indicate that specific characteristics of the area and its population could be used as indicators to aggregate data appropriately for the analysis

  14. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used to replace each corrupted pixel based on an estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from a visual point of view and in Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  15. Optimization models and techniques for implementation and pricing of electricity markets

    International Nuclear Information System (INIS)

    Madrigal Martinez, M.

    2001-01-01

    The operation and planning of vertically integrated electric power systems can be optimized using models that simulate solutions to problems. As the electric power industry is going through a period of restructuring, there is a need for new optimization tools. This thesis describes the importance of optimization tools and presents techniques for implementing them. It also presents methods for pricing primary electricity markets. Three modeling groups are studied. The first considers a simplified continuous and discrete model for power pool auctions. The second considers the unit commitment problem, and the third makes use of a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. The newly proposed model considers bids for supply and demand and bilateral contracts. It is a direct current model for the transmission network

  16. Estimated correlation matrices and portfolio optimization

    Science.gov (United States)

    Pafka, Szilárd; Kondor, Imre

    2004-11-01

    Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematical testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices which will inevitably contain a certain amount of noise, due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. As in our artificial world the only source of error is the finite length of the time series and, in addition, the “true” model, hence also the “true” correlation matrix, are precisely known, therefore in sharp contrast with empirical studies, we can precisely compare the performance of the various noise reduction techniques. One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs
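
    As a concrete illustration of the random-matrix-theory filtering referred to at the end of the abstract, a hedged Python sketch that clips correlation eigenvalues below the Marchenko-Pastur noise edge; replacing the noise band by its mean (which preserves the trace) is one common convention, assumed here for illustration.

    ```python
    import numpy as np

    def rmt_filter(returns):
        """Clip correlation eigenvalues below the Marchenko-Pastur edge."""
        T, N = returns.shape                        # T observations of N assets
        C = np.corrcoef(returns, rowvar=False)
        lam_max = (1 + np.sqrt(N / T)) ** 2         # MP upper edge for q = N/T
        w, V = np.linalg.eigh(C)
        noise = w < lam_max
        w[noise] = w[noise].mean()                  # replace the noise band by its mean
        C_f = V @ np.diag(w) @ V.T
        d = np.sqrt(np.diag(C_f))
        return C_f / np.outer(d, d)                 # renormalize to unit diagonal
    ```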

  17. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.

  18. An Economical Approach to Estimate a Benchmark Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2003-01-01

    There are alternative methods of estimating capital stock for a benchmark year. However, these methods are costly and time-consuming, requiring the gathering of much basic information as well as the use of some convenient assumptions and guesses. In addition, a way is needed of checking whether the estimated benchmark is at the correct level. This paper proposes an optimal consistency method (OCM), which enables a capital stock to be estimated for a benchmark year, and which can also be used ...

  19. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2007-01-01

    Full Text Available In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today’s cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers’ interest in the fusion of intelligent sensors and optimal signal processing techniques.

  20. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Techniques for clustering high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper develops a method to cluster data using high-dimensional similarity based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring knowledge of the number of clusters from the user. The PCM is made similarity based by combining it with the mountain method. Although this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. The resulting scalable clustering technique is evaluated on synthetic datasets.

  1. Optimization of analytical techniques to characterize antibiotics in aquatic systems

    International Nuclear Information System (INIS)

    Al Mokh, S.

    2013-01-01

    Antibiotics are considered pollutants when present in aquatic ecosystems, the ultimate receptacles of anthropogenic substances. These compounds are studied for their persistence in the environment and their effects on natural organisms. Numerous efforts have been made worldwide to assess the environmental quality of different water resources, for the survival of aquatic species but also for human consumption and the related health risks. Towards this goal, the optimization of analytical techniques for these compounds in aquatic systems remains a necessity. Our objective is to develop extraction and detection methods for 12 molecules of aminoglycosides and colistin in sewage treatment plant and hospital waters. The lack of analytical methods for these compounds and the scarcity of studies on their detection in water is the reason for studying them. Solid Phase Extraction (SPE) in classic (offline) or online mode followed by Liquid Chromatography coupled with Mass Spectrometry (LC/MS/MS) is the method most commonly used for this type of analysis. The parameters are optimized and validated to ensure the best conditions for environmental analysis. This technique was applied to real samples from wastewater treatment plants in Bordeaux and Lebanon. (author)

  2. Radioactive tracer technique in process optimization: applications in the chemical industry

    International Nuclear Information System (INIS)

    Charlton, J.S.

    1989-01-01

    Process optimization is concerned with the selection of the most appropriate technological design of the process and with controlling its operation to obtain maximum benefit. The role of radioactive tracers in process optimization is discussed and the various circumstances under which such techniques may be beneficially applied are identified. Case studies are presented which illustrate how radioisotopes may be used to monitor plant performance under dynamic conditions to improve production efficiency and to investigate the cause of production limitations. In addition, the use of sealed sources to provide information complementary to the tracer study is described. (author)

  3. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi

    2014-10-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB's in effective signal-to-noise ratio.

  4. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi; Chen, Yunfei; Alouini, Mohamed-Slim; Xu, Feng

    2014-01-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB's in effective signal-to-noise ratio.

  5. Chaotic invasive weed optimization algorithm with application to parameter estimation of chaotic systems

    International Nuclear Information System (INIS)

    Ahmadi, Mohamadreza; Mojallali, Hamed

    2012-01-01

    Highlights: ► A new meta-heuristic optimization algorithm. ► Integration of invasive weed optimization and chaotic search methods. ► A novel parameter identification scheme for chaotic systems. - Abstract: This paper introduces a novel hybrid optimization algorithm that takes advantage of the stochastic properties of chaotic search and the invasive weed optimization (IWO) method. In order to deal with the weaknesses of the conventional method, the proposed chaotic invasive weed optimization (CIWO) algorithm is presented, which incorporates the capabilities of chaotic search methods. The performance of the proposed optimization algorithm is investigated on several benchmark multi-dimensional functions. Furthermore, an identification technique for chaotic systems based on the CIWO algorithm is outlined and validated by several examples. Results obtained with the proposed scheme are also provided, demonstrating superior performance with respect to other conventional methods.

  6. WE-G-204-08: Optimized Digital Radiographic Technique for Lost Surgical Devices/Needle Identification

    International Nuclear Information System (INIS)

    Gorman, A; Seabrook, G; Brakken, A; Dubois, M; Marn, C; Wilson, C; Jacobson, D; Liu, Y

    2015-01-01

    Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. Small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thicknesses of 2–8 inches, and an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2–8 inches of PMMA. However, when they are imaged with the abdomen phantom plus 2 inches of PMMA, needles smaller than 9.3 mm in length cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography units. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display of a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm. Sample DR images of various small surgical devices/needles made available on the diagnostic workstation for comparison may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles

  7. Optimal deep neural networks for sparse recovery via Laplace techniques

    OpenAIRE

    Limmer, Steffen; Stanczak, Slawomir

    2017-01-01

    This paper introduces Laplace techniques for designing a neural network, with the goal of estimating simplex-constrained sparse vectors from compressed measurements. To this end, we recast the problem of MMSE estimation (w.r.t. a pre-defined uniform input distribution) as the problem of computing the centroid of some polytope that results from the intersection of the simplex and an affine subspace determined by the measurements. Owing to the specific structure, it is shown that the centroid ca...

  8. Improved dose–volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    International Nuclear Information System (INIS)

    Cheng Lishui; Hobbs, Robert F; Sgouros, George; Frey, Eric C; Segars, Paul W

    2013-01-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose–volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator–detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
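
    For concreteness, a minimal sketch of how a cumulative DVH can be computed from a dose (or dose-rate) array and an organ mask; the array layout and the binning are illustrative assumptions.

    ```python
    import numpy as np

    def cumulative_dvh(dose, mask, n_bins=200):
        """Fraction of the masked volume receiving at least each dose level."""
        d = dose[mask]                                # voxel doses inside the organ
        bins = np.linspace(0.0, d.max(), n_bins)
        frac = np.array([(d >= b).mean() for b in bins])
        return bins, frac
    ```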

  9. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Science.gov (United States)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  10. Application of Genetic Algorithm and Particle Swarm Optimization techniques for improved image steganography systems

    Directory of Open Access Journals (Sweden)

    Jude Hemanth Duraisamy

    2016-01-01

    Full Text Available Image steganography is one of the ever-growing computational approaches which has found its application in many fields. The frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform based approaches, the secret data is embedded in random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of the stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) have been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as Bandelet Transform (BT) and Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
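
    As a hedged sketch of the kind of PSO search that could pick transform coefficients, a generic global-best PSO minimizer; the fitness function, bounds and hyperparameters are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    def pso_minimize(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5,
                     bounds=(-1.0, 1.0), seed=0):
        """Global-best PSO: velocities pulled toward personal and global bests."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([f(xi) for xi in x])
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(xi) for xi in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()
    ```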

  11. Empirical Estimates in Optimization Problems: Survey with Special Regard to Heavy Tails and Dependent Data

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2012-01-01

    Roč. 19, č. 30 (2012), s. 92-111 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150; GA ČR GAP402/10/1610 Institutional support: RVO:67985556 Keywords : Stochastic optimization * empirical estimates * thin and heavy tails * independent and weak dependent random samples Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/kankova-empirical estimates in optimization problems survey with special regard to heavy tails and dependent data.pdf

  12. A Standalone PV System with a Hybrid P&O MPPT Optimization Technique

    Directory of Open Access Journals (Sweden)

    S. Hota

    2017-12-01

    Full Text Available In this paper a maximum power point tracking (MPPT) design for a photovoltaic (PV) system using a hybrid optimization technique is proposed. For maximum power transfer, the maximum harvestable power from a PV cell in a dynamically changing environment should be known. The proposed technique is compared with the conventional Perturb and Observe (P&O) technique. A comparative analysis of the power-voltage and current-voltage characteristics of a PV cell with and without the MPPT module when connected to the grid was performed in SIMULINK, to demonstrate the increase in efficiency of the PV module when using the MPPT module.
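
    A minimal sketch of one conventional P&O update step, the baseline against which the hybrid technique is compared; the step size and sign convention are illustrative assumptions.

    ```python
    def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
        """One P&O iteration: perturb the operating voltage toward higher power."""
        if p >= p_prev:
            dv = step if v >= v_prev else -step   # power rose: keep the same direction
        else:
            dv = -step if v >= v_prev else step   # power fell: reverse direction
        return v + dv
    ```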

  13. [Research Progress of Vitreous Humor Detection Technique on Estimation of Postmortem Interval].

    Science.gov (United States)

    Duan, W C; Lan, L M; Guo, Y D; Zha, L; Yan, J; Ding, Y J; Cai, J F

    2018-02-01

    Estimation of the postmortem interval (PMI) plays a crucial role in forensic study and identification work. Because of its unique anatomical location, vitreous humor is considered suitable for estimating PMI, which has aroused interest among scholars, and some research has been carried out. The detection techniques for vitreous humor are constantly being developed and improved and have gradually been applied in forensic science; meanwhile, the study of PMI estimation using vitreous humor is being updated rapidly. This paper reviews various techniques and instruments applied to vitreous humor detection, such as ion selective electrodes, capillary ion analysis, spectroscopy, chromatography, nano-sensing technology, automatic biochemical analysers, flow cytometry, etc., as well as the related research progress on PMI estimation in recent years. To provide a research direction for scholars and to promote more accurate and efficient PMI estimation by vitreous humor analysis, some open problems are also analysed. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  14. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  15. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.; Asmis, G.J.K.

    1983-01-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to non-conservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10^6 to 10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made, and the extension of the stochastic technique to ungauged sites and other design parameters is discussed.

  16. On-line safeguards design: an application of estimation/detection

    International Nuclear Information System (INIS)

    Candy, J.V.; Dunn, D.R.; Rozsa, R.B.

    1979-01-01

    The applicability of modern signal processing techniques to the safeguards problem for a plutonium nitrate storage tank and concentrator is addressed. The techniques involve mathematical modeling, optimal estimation of process variables, and the detection of abnormal changes in these variables due to adversary diversion. The performance of these techniques is presented for various diversion scenarios.

  17. Simple robust technique using time delay estimation for the control and synchronization of Lorenz systems

    International Nuclear Information System (INIS)

    Jin, Maolin; Chang, Pyung Hun

    2009-01-01

    This work presents two simple and robust techniques based on time delay estimation for the respective control and synchronization of chaos systems. First, one of these techniques is applied to the control of a chaotic Lorenz system with both matched and mismatched uncertainties. The nonlinearities in the Lorenz system are cancelled by time delay estimation and the desired error dynamics are inserted. Second, the other technique is applied to the synchronization of the Lü system and the Lorenz system with uncertainties. The synchronization input consists of three elements that have transparent and clear meanings. Since time delay estimation enables a very effective and efficient cancellation of disturbances and nonlinearities, the techniques turn out to be simple and robust. Numerical simulation results show fast, accurate and robust performance of the proposed techniques, thereby demonstrating their effectiveness for the control and synchronization of Lorenz systems.

  18. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Full Text Available Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals via a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.

  19. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals via a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.
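    The core loop of an elitist Gaussian EDA, as described in the two records above, can be sketched as follows; the population sizes, sphere objective, and learning details are illustrative assumptions rather than the paper's exact FEGEDA update rule.

```python
# Minimal sketch of an elitist Gaussian EDA: fit a Gaussian to the best
# individuals, resample the population from it, and carry the elite forward.
import numpy as np

def gaussian_eda(f, dim=10, pop=100, elite_frac=0.3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(pop, dim))
    for _ in range(iters):
        fitness = np.apply_along_axis(f, 1, x)
        order = np.argsort(fitness)                      # minimization
        elite = x[order[: int(elite_frac * pop)]]
        # Gaussian model parameters estimated from the elite individuals.
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
        x = rng.normal(mu, sigma, size=(pop, dim))       # resample population
        x[0] = elite[0]                                  # elitism: keep the best
    return x[0], f(x[0])

best, value = gaussian_eda(lambda v: float(np.sum(v ** 2)))  # toy sphere objective
```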

  20. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem

    OpenAIRE

    Muller, Antoine; Pontonnier, Charles; Dumont, Georges

    2018-01-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists in interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions – two polynomial criteria and a min/max criterion – were tested on a planar musculoskeletal model. The MusIC method provides a computation frequenc...

  1. A reduced scale two loop PWR core designed with particle swarm optimization technique

    International Nuclear Information System (INIS)

    Lima Junior, Carlos A. Souza; Pereira, Claudio M.N.A; Lapa, Celso M.F.; Cunha, Joao J.; Alvim, Antonio C.M.

    2007-01-01

    Reduced scale experiments are often employed in engineering projects because they are much cheaper than real scale testing. Unfortunately, designing a reduced scale thermal-hydraulic circuit or equipment, with the capability of reproducing, both accurately and simultaneously, all physical phenomena that occur at real scale and at operating conditions, is a difficult task. To solve this problem, advanced optimization techniques, such as Genetic Algorithms, have been applied. Following this research line, we have performed investigations, using the Particle Swarm Optimization (PSO) technique, to design a reduced scale two loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power and non-accidental operating conditions. The obtained results show that the proposed methodology is a promising approach for forced flow reduced scale experiments. (author)

  2. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    OpenAIRE

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2016-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experi...
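    For reference, the Cramér-Rao bound invoked in this record is the standard estimation-theoretic bound: the covariance of any unbiased estimator is bounded below by the inverse Fisher information matrix,

```latex
\[
  \operatorname{Cov}(\hat{\boldsymbol{\theta}}) \;\succeq\; \mathbf{I}(\boldsymbol{\theta})^{-1},
  \qquad
  [\mathbf{I}(\boldsymbol{\theta})]_{ij}
  = \mathbb{E}\!\left[
      \frac{\partial \ln p(\mathbf{y};\boldsymbol{\theta})}{\partial \theta_i}\,
      \frac{\partial \ln p(\mathbf{y};\boldsymbol{\theta})}{\partial \theta_j}
    \right],
\]
```

    where p(y; θ) is the likelihood of the acquired data given the tissue parameters; an experiment design can then be chosen to minimize some scalarization of the bound (for example, its trace).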

  3. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques, because the results of the study can be generalized to the target population.
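    As a concrete anchor for the factors listed above, the standard sample size formula for estimating a single proportion is

```latex
\[
  n \;=\; \frac{z_{1-\alpha/2}^{2}\; p\,(1-p)}{d^{2}},
\]
```

    where z_{1-α/2} is the normal deviate for the chosen confidence level, p the expected proportion, and d the desired margin of error; for a numerical outcome, p(1-p) is replaced by the variance σ². Halving d quadruples the required n, which is the sense in which more precision demands a larger sample.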

  4. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE ...

    African Journals Online (AJOL)

    development of a mean of median absolute derivation technique based on ... the noise mean to estimate the speckle noise variance. Noise mean property ... Foraging Optimization,” International Journal of Advanced ...

  5. Optimal State Estimation for Discrete-Time Markov Jump Systems with Missing Observations

    Directory of Open Access Journals (Sweden)

    Qing Sun

    2014-01-01

    Full Text Available This paper is concerned with optimal linear estimation for a class of discrete-time Markov jump systems with missing observations. An observer-based approach to fault detection and isolation (FDI) is investigated as a mechanism for detecting fault cases. For systems with known information, a conditional prediction of observations is applied and faulty observations are replaced and isolated; then, an FDI linear minimum mean square error estimator (LMMSE) can be developed by comprehensively utilizing the correct information offered by the system. A recursive filtering equation based on geometric arguments can be obtained. Meanwhile, the stability of the state estimator is guaranteed under appropriate assumptions.

  6. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki Kimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of the likelihood or a (local) minimum of the SSE can be efficiently identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
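    A minimal sketch of the SSE-minimization step described above follows; the exponential-decay model and the synthetic data are illustrative assumptions, not from the article.

```python
# Minimal sketch: fit model parameters by minimizing the sum of squared
# errors with a gradient-based optimizer. Model and data are toy examples.
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    a, k = params
    return a * np.exp(-k * t)

t = np.linspace(0, 10, 50)
true_params = np.array([2.0, 0.4])
rng = np.random.default_rng(1)
y = model(true_params, t) + 0.05 * rng.standard_normal(t.size)

# least_squares minimizes the sum of squared residuals from a starting guess.
fit = least_squares(lambda p: model(p, t) - y, x0=[1.0, 1.0])
print(fit.x)  # estimated (a, k)
```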

  7. A decoupled power flow algorithm using particle swarm optimization technique

    International Nuclear Information System (INIS)

    Acharjee, P.; Goswami, S.K.

    2009-01-01

    A robust, nondivergent power flow method has been developed using the particle swarm optimization (PSO) technique. The decoupling properties between the power system quantities have been exploited in developing the power flow algorithm. The speed of the power flow algorithm has been improved using a simple perturbation technique. The basic power flow algorithm and the improvement scheme have been designed to retain the simplicity of the evolutionary approach. The power flow is rugged, can determine critical loading conditions, and can also handle flexible alternating current transmission system (FACTS) devices efficiently. Test results on standard test systems show that the proposed method can find the solution when standard power flows fail.

  8. Anatomy-based transmission factors for technique optimization in portable chest x-ray

    Science.gov (United States)

    Liptak, Christopher L.; Tovey, Deborah; Segars, William P.; Dong, Frank D.; Li, Xiang

    2015-03-01

    Portable x-ray examinations often account for a large percentage of all radiographic examinations. Currently, portable examinations do not employ automatic exposure control (AEC). To aid in the design of a size-specific technique chart, acrylic slabs of various thicknesses are often used to estimate x-ray transmission for patients of various body thicknesses. This approach, while simple, does not account for patient anatomy, tissue heterogeneity, and the attenuation properties of the human body. To better account for these factors, in this work, we determined x-ray transmission factors using computational patient models that are anatomically realistic. A Monte Carlo program was developed to model a portable x-ray system. Detailed modeling was done of the x-ray spectrum, detector positioning, collimation, and source-to-detector distance. Simulations were performed using 18 computational patient models from the extended cardiac-torso (XCAT) family (9 males, 9 females; age range: 2-58 years; weight range: 12-117 kg). The ratio of air kerma at the detector with and without a patient model was calculated as the transmission factor. Our study showed that the transmission factor decreased exponentially with increasing patient thickness. For the range of patient thicknesses examined (12-28 cm), the transmission factor ranged from approximately 21% to 1.9% when the air kerma used in the calculation represented an average over the entire imaging field of view. The transmission factor ranged from approximately 21% to 3.6% when the air kerma used in the calculation represented the average signals from two discrete AEC cells behind the lung fields. These exponential relationships may be used to optimize imaging techniques for patients of various body thicknesses to aid in the design of clinical technique charts.
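    The exponential relationship reported above can be written as T(t) = T₀ e^{-kt} for patient thickness t. As a hedged back-of-the-envelope check using the field-of-view-averaged endpoints quoted in the abstract (about 21% at 12 cm and 1.9% at 28 cm), the effective attenuation coefficient would be

```latex
\[
  k \;=\; \frac{\ln(0.21/0.019)}{28\ \mathrm{cm} - 12\ \mathrm{cm}} \;\approx\; 0.15\ \mathrm{cm}^{-1},
\]
```

    so each additional centimetre of patient thickness cuts the transmitted signal by roughly 14%. This is an illustration derived from the quoted endpoints, not a fit reported by the authors.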

  9. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still insufficient in its speed of convergence and quality of solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning based optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, which is a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. There are a large number of methods for effort estimation, of which COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. The experiments were conducted on a NASA software project dataset, and the obtained results indicated that the improved parameters provided better estimation capabilities compared to the original COCOMO II model.
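    For context, the COCOMO II post-architecture effort model whose parameters are being tuned has the form

```latex
\[
  \mathrm{PM} \;=\; A \cdot \mathrm{Size}^{E} \cdot \prod_{i=1}^{17} \mathrm{EM}_i,
  \qquad
  E \;=\; B + 0.01 \sum_{j=1}^{5} \mathrm{SF}_j,
\]
```

    where Size is in KSLOC, the EM_i are effort multipliers, the SF_j are scale factors, and the nominal calibration sets A ≈ 2.94 and B ≈ 0.91; constants such as A and B are what a meta-heuristic like the hybrid above would re-estimate against local project data.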

  10. Search method optimization technique for thermal design of high power RFQ structure

    International Nuclear Information System (INIS)

    Sharma, N.K.; Joshi, S.C.

    2009-01-01

    RRCAT has taken up the development of a 3 MeV RFQ structure for the low energy part of a 100 MeV H⁻ ion injector linac. The RFQ is a precision machined resonating structure designed for high rf duty factor. RFQ structural stability during high rf power operation is an important design issue. The thermal analysis of the RFQ has been performed using the ANSYS finite element analysis software, and optimization of various parameters is attempted using the Search Method optimization technique. It is an effective optimization technique for systems governed by a large number of independent variables. The method involves examining a number of combinations of values of the independent variables and drawing conclusions from the magnitude of the objective function at these combinations. In these methods there is a continuous improvement in the objective function throughout the course of the search, and hence these methods are very efficient. The method has been employed in the optimization of various parameters (called independent variables) of the RFQ involved in its thermal design, such as cooling water flow rate, cooling water inlet temperatures, and cavity thickness. The temperature rise within the RFQ structure is the objective function during the thermal design. Using the ANSYS Parametric Design Language (APDL), various iterative programmes are written and the analyses are performed to minimize the objective function. The dependency of the objective function on the various independent variables is established and the optimum values of the parameters are evaluated. The results of the analysis are presented in the paper. (author)

  11. Radar rainfall image repair techniques

    Directory of Open Access Journals (Sweden)

    Stephen M. Wesson

    2004-01-01

    Full Text Available There are various quality problems associated with radar rainfall data viewed in images that include ground clutter, beam blocking and anomalous propagation, to name a few. To obtain the best rainfall estimate possible, techniques for removing ground clutter (non-meteorological echoes that influence radar data quality on 2-D radar rainfall image data sets are presented here. These techniques concentrate on repairing the images in both a computationally fast and accurate manner, and are nearest neighbour techniques of two sub-types: Individual Target and Border Tracing. The contaminated data is estimated through Kriging, considered the optimal technique for the spatial interpolation of Gaussian data, where the 'screening effect' that occurs with the Kriging weighting distribution around target points is exploited to ensure computational efficiency. Matrix rank reduction techniques in combination with Singular Value Decomposition (SVD are also suggested for finding an efficient solution to the Kriging Equations which can cope with near singular systems. Rainfall estimation at ground level from radar rainfall volume scan data is of interest and importance in earth bound applications such as hydrology and agriculture. As an extension of the above, Ordinary Kriging is applied to three-dimensional radar rainfall data to estimate rainfall rate at ground level. Keywords: ground clutter, data infilling, Ordinary Kriging, nearest neighbours, Singular Value Decomposition, border tracing, computation time, ground level rainfall estimation
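    For reference, the Ordinary Kriging estimator used for the infilling above predicts the value at a target point s₀ as a weighted sum of neighbouring observations, with weights solving the standard system

```latex
\[
  \hat{Z}(s_0) = \sum_{i=1}^{n} \lambda_i\, Z(s_i),
  \qquad
  \sum_{j=1}^{n} \lambda_j\, \gamma(s_i - s_j) + \mu = \gamma(s_i - s_0)
  \;\; (i = 1,\dots,n),
  \qquad
  \sum_{j=1}^{n} \lambda_j = 1,
\]
```

    where γ is the semivariogram and μ a Lagrange multiplier enforcing unbiasedness; the near-zero weights produced by the screening effect are what the article exploits for computational efficiency.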

  12. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend has been accompanied by increases in model complexity and in the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used in uncertainty analysis for hydrological models, based on the Monte Carlo method coupled with Bayesian estimation. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high dimensional parameter spaces. Heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.

  13. Combinatorial techniques to efficiently investigate and optimize organic thin film processing and properties.

    Science.gov (United States)

    Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner

    2013-04-08

    In this article we present several developed and improved combinatorial techniques to optimize the processing conditions and material properties of organic thin films. The combinatorial approach allows investigations of multi-variable dependencies and is an ideal tool for investigating organic thin films intended for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore we demonstrate the smart application of combinations of composition and processing gradients to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is carried out in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow identifying precise trends for the optimization of multi-variable dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction and thus the interdependency of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.

  14. Combinatorial Techniques to Efficiently Investigate and Optimize Organic Thin Film Processing and Properties

    Directory of Open Access Journals (Sweden)

    Hans-Werner Schmidt

    2013-04-01

    Full Text Available In this article we present several developed and improved combinatorial techniques to optimize the processing conditions and material properties of organic thin films. The combinatorial approach allows investigations of multi-variable dependencies and is an ideal tool for investigating organic thin films intended for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore we demonstrate the smart application of combinations of composition and processing gradients to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is carried out in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow identifying precise trends for the optimization of multi-variable dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction and thus the interdependency of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.

  15. Robust design of decentralized power system stabilizers using meta-heuristic optimization techniques for multimachine systems

    Directory of Open Access Journals (Sweden)

    Jeevanandham Arumugam

    2009-01-01

    Full Text Available In this paper, a classical lead-lag power system stabilizer is used for demonstration. The stabilizer parameters are selected in such a manner as to damp the rotor oscillations. The problem of selecting the stabilizer parameters is converted to a simple optimization problem with an eigenvalue-based objective function, and it is proposed to employ simulated annealing and particle swarm optimization for solving it. The objective function allows the selection of the stabilizer parameters to optimally place the closed-loop eigenvalues in the left half of the complex s-plane. A single-machine infinite-bus system and a 10-machine 39-bus system are considered for this study. The effectiveness of the stabilizer tuned using the best technique in enhancing power system stability is confirmed through eigenvalue analysis and simulation results, and the most suitable heuristic technique is selected for the best performance of the system.

  16. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    Directory of Open Access Journals (Sweden)

    Giancarmine Fasano

    2013-09-01

    Full Text Available An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information even when attitude determination is not possible, as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than that of the other two components.

  17. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    Science.gov (United States)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  18. High-resolution temperature-based optimization for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Kok, H P; Haaren, P M A van; Kamer, J B Van de; Wiersma, J; Dijk, J D P Van; Crezee, J

    2005-01-01

    In regional hyperthermia, optimization techniques are valuable in order to obtain amplitude/phase settings for the applicators to achieve maximal tumour heating without toxicity to normal tissue. We implemented a temperature-based optimization technique and maximized tumour temperature with constraints on normal tissue temperature to prevent hot spots. E-field distributions are the primary input for the optimization method. Due to computer limitations we are restricted to a resolution of 1 × 1 × 1 cm³ for E-field calculations, too low for reliable treatment planning. A major problem is the fact that hot spots at low-resolution (LR) do not always correspond to hot spots at high-resolution (HR), and vice versa. Thus, HR temperature-based optimization is necessary for adequate treatment planning and satisfactory results cannot be obtained with LR strategies. To obtain HR power density (PD) distributions from LR E-field calculations, a quasi-static zooming technique has been developed earlier at the UMC Utrecht. However, quasi-static zooming does not preserve phase information and therefore it does not provide the HR E-field information required for direct HR optimization. We combined quasi-static zooming with the optimization method to obtain a millimetre resolution temperature-based optimization strategy. First we performed a LR (1 cm) optimization and used the obtained settings to calculate the HR (2 mm) PD and corresponding HR temperature distribution. Next, we performed a HR optimization using an estimation of the new HR temperature distribution based on previous calculations. This estimation is based on the assumption that the HR and LR temperature distributions, though strongly different, respond in a similar way to amplitude/phase steering. To verify the newly obtained settings, we calculate the corresponding HR temperature distribution. This method was applied to several clinical situations and found to work very well. Deviations of this estimation method for...

  19. Optimal capacity and buffer size estimation under Generalized Markov Fluids Models and QoS parameters

    International Nuclear Information System (INIS)

    Bavio, José; Marrón, Beatriz

    2014-01-01

    Quality of service (QoS) for internet traffic management requires good traffic models and good estimation of shared network resources. A network link processes all traffic and is designed with a certain capacity C and buffer size B. A Generalized Markov Fluid model (GMFM), introduced by Marrón (2011), is assumed for the sources because it describes the traffic in a versatile way, allows estimation based on traffic traces, and also permits consistent effective bandwidth estimation. QoS, interpreted as buffer overflow probability, can be estimated for GMFM through the effective bandwidth estimation and by solving the optimization problem presented in Courcoubetis (2002), the so-called inf-sup formulas. In this work we implement a code to solve the inf-sup problem and other optimizations related to it, which allow us to do traffic engineering on links of data networks to calculate both the minimum capacity required when QoS and buffer size are given and the minimum buffer size required when QoS and capacity are given.

  20. Artificial intelligence search techniques for optimization of the cold source geometry

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness which produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometrical shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape which is the unknown in such a study. We draw an analogy between this problem and a state space search, then we use a simple Artificial Intelligence (AI) search technique to determine the optimum cold source shape based on a two-group, r-z diffusion model. We implemented this AI design concept in the computer program AID which consists of two modules, a physical model module and a search module, which can be independently modified, improved, or made more sophisticated. 7 refs., 1 fig

  1. Artificial intelligence search techniques for the optimization of cold source geometry

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness that produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometric shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape, which is the unknown in such a study. An analogy is drawn between this problem and a state space search, then a simple artificial intelligence (AI) search technique is used to determine the optimum cold source shape based on a two-group, r-z diffusion model. This AI design concept was implemented in the computer program AID, which consists of two modules, a physical model module, and a search module, which can be independently modified, improved, or made more sophisticated

  2. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    Science.gov (United States)

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.

  3. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Eric Frick

    2018-04-01

    Full Text Available The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.

  4. Sound Power Estimation by Laser Doppler Vibration Measurement Techniques

    Directory of Open Access Journals (Sweden)

    G.M. Revel

    1998-01-01

    Full Text Available The aim of this paper is to propose simple and quick methods for the determination of the sound power emitted by a vibrating surface, by using non-contact vibration measurement techniques. In order to calculate the acoustic power by vibration data processing, two different approaches are presented. The first is based on the method proposed in the Standard ISO/TR 7849, while the second is based on the superposition theorem. A laser-Doppler scanning vibrometer has been employed for vibration measurements. Laser techniques open up new possibilities in this field because of their high spatial resolution and their non-intrusivity. The technique has been applied here to estimate the acoustic power emitted by a loudspeaker diaphragm. Results have been compared with those from a commercial Boundary Element Method (BEM) software and experimentally validated by acoustic intensity measurements. Predicted and experimental results seem to be in agreement (differences lower than 1 dB), thus showing that the proposed techniques can be employed as rapid solutions for many practical and industrial applications. Uncertainty sources are addressed and their effect is discussed. Keywords: ground clutter, data infilling, Ordinary Kriging, nearest neighbours, Singular Value Decomposition, border tracing, computation time, ground level rainfall estimation

  5. Artificial intelligent techniques for optimizing water allocation in a reservoir watershed

    Science.gov (United States)

    Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung

    2014-05-01

    This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm and an adaptive-network based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages; then search for the optimal reservoir operating histogram using a genetic algorithm (GA) based on given demands and hydrological conditions, which can serve as the optimal basis of input-output training patterns for modelling; and finally build a suitable water allocation scheme by constructing an adaptive neuro-fuzzy inference system (ANFIS) model that learns the mechanism between designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area, to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme reliably helps water managers determine a suitable discount rate on water supply for both the irrigation and public sectors, and thus can reduce the drought risk and the compensation amount induced by imposing restrictions on agricultural water use.

  6. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  7. Space-mapping techniques applied to the optimization of a safety isolating transformer

    NARCIS (Netherlands)

    T.V. Tran; S. Brisset; D. Echeverria (David); D.J.P. Lahaye (Domenico); P. Brochet

    2007-01-01

    Space-mapping optimization techniques allow one to align low-fidelity and high-fidelity models in order to reduce the computational time and increase the accuracy of the solution. The main idea is to build an approximate model from the difference in response between the two models. Therefore...

  8. Minimum deltaV Burn Planning for the International Space Station Using a Hybrid Optimization Technique, Level 1

    Science.gov (United States)

    Brown, Aaron J.

    2015-01-01

    The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG) and magnitude (i.e., deltaV) that meet these constraints. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e. solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems, one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two aforementioned levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, or the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, or the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints. The total deltaV from Level 1 is then used as a fitness function by the genetic...
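    A toy sketch of the two-level structure described above follows; the grid of candidate times, the burn-effectiveness weighting, and the single equality constraint are invented stand-ins for the ISS trajectory model, intended only to show how a genetic outer loop can wrap an SQP inner solve.

```python
# Toy sketch of the hybrid scheme: a genetic outer loop picks burn times on a
# grid; an SLSQP inner solve finds the minimum total deltaV meeting a single
# equality constraint. The "dynamics" here are invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
GRID = np.linspace(0.0, 1.0, 20)      # candidate burn times (normalized)
TARGET = 1.0                          # required net effect of all burns (toy)

def inner_min_dv(times):
    """Level 1: minimum total deltaV at fixed burn times (SLSQP)."""
    w = 1.0 + times                   # toy burn effectiveness vs. time
    cons = {"type": "eq", "fun": lambda dv: w @ dv - TARGET}
    res = minimize(lambda dv: dv.sum(), x0=np.full(times.size, 0.1),
                   bounds=[(0.0, None)] * times.size,
                   constraints=[cons], method="SLSQP")
    return res.fun if res.success else np.inf

def outer_ga(n_burns=3, pop=30, gens=40):
    """Level 2: GA over which grid times to burn at, scored by Level 1."""
    idx = rng.integers(0, GRID.size, size=(pop, n_burns))
    for _ in range(gens):
        cost = np.array([inner_min_dv(GRID[row]) for row in idx])
        parents = idx[np.argsort(cost)[: pop // 2]]           # keep best half
        children = parents[rng.integers(0, len(parents), size=pop - len(parents))].copy()
        mutate = rng.random(children.shape) < 0.1             # random TIG mutation
        children[mutate] = rng.integers(0, GRID.size, size=int(mutate.sum()))
        idx = np.vstack([parents, children])
    best = idx[0]
    return GRID[best], inner_min_dv(GRID[best])

times, total_dv = outer_ga()
```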

  9. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  10. Optimization of Transverse Oscillating Fields for Vector Velocity Estimation with Convex Arrays

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    A method for making Vector Flow Images using the transverse oscillation (TO) approach on a convex array is presented. The paper presents optimization schemes for TO fields for convex probes and evaluates their performance using Field II simulations and measurements using the SARUS experimental...... from 90 to 45 degrees in steps of 15 degrees. The optimization routine changes the lateral oscillation period lx to yield the best possible estimates based on the energy ratio between positive and negative spatial frequencies in the ultrasound field. The basic equation for lx gives 1.14 mm at 40 mm...

  11. Optimizing cone beam CT scatter estimation in egs_cbct for a clinical and virtual chest phantom

    DEFF Research Database (Denmark)

    Slot Thing, Rune; Mainegra-Hing, Ernesto

    2014-01-01

    improving techniques (EITs) implemented in egs_cbct were varied. Simulation efficiencies were compared to analog simulations performed without using any EITs. The resulting scatter distributions were confirmed to be unbiased against the analog simulations. RESULTS: The optimal EIT parameter selection depends ... reduction techniques with a built-in denoising algorithm, efficiency improvements of 4 orders of magnitude were achieved. CONCLUSIONS: Using the built-in EITs in egs_cbct can improve scatter calculation efficiencies by more than 4 orders of magnitude. To achieve this, the user must optimize the input...

  12. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.

  13. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    Science.gov (United States)

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  14. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ye

    2015-01-01

    Full Text Available Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
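    The global transformation step that both records describe can be sketched as below; SciPy's regularized incomplete Beta function stands in for the transformation, and the (a, b) values are arbitrary illustrations of the parameters CS-PSO would tune against the fitness criterion.

```python
# Minimal sketch of the contrast transformation: the normalized image is
# remapped through the regularized incomplete Beta function I_x(a, b); the
# (a, b) shape parameters are what the optimizer would tune.
import numpy as np
from scipy.special import betainc

def beta_transform(image, a, b):
    """Global intensity transform g = I_x(a, b) on an image scaled to [0, 1]."""
    lo, hi = image.min(), image.max()
    x = (image - lo) / (hi - lo + 1e-12)
    return betainc(a, b, x)

img = np.random.default_rng(3).random((64, 64))
enhanced = beta_transform(img, a=2.0, b=4.0)  # illustrative parameter values
```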

  15. Physical optimization of afterloading techniques

    International Nuclear Information System (INIS)

    Anderson, L.L.

    1985-01-01

    Physical optimization in brachytherapy refers to the process of determining the radioactive-source configuration which yields a desired dose distribution. In manually afterloaded intracavitary therapy for cervix cancer, discrete source strengths are selected iteratively to minimize the sum of squares of differences between trial and target doses. For remote afterloading with a stepping-source device, optimized (continuously variable) dwell times are obtained, either iteratively or analytically, to give least squares approximations to dose at an arbitrary number of points; in vaginal irradiation for endometrial cancer, the objective has included dose uniformity at applicator surface points in addition to a tapered contour of target dose at depth. For template-guided interstitial implants, seed placement at rectangular-grid mesh points may be least squares optimized within target volumes defined by computerized tomography; effective optimization is possible only for (uniform) seed strength high enough that the desired average peripheral dose is achieved with a significant fraction of empty seed locations. (orig.)
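
    A minimal sketch of the least-squares dwell-time idea, using a nonnegativity-constrained solver so that dwell times cannot go negative; the inverse-square kernel and geometry below are toy assumptions, not a clinical dose model:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical geometry: kernel A[i, j] = dose contribution of unit dwell
# time at source position j to calculation point i (inverse-square toy
# kernel; a clinical kernel would include anisotropy and attenuation).
rng = np.random.default_rng(1)
points = rng.uniform(-2.0, 2.0, size=(40, 3))        # dose points (cm)
dwells = np.linspace(-1.5, 1.5, 10)                  # source stops on a line
positions = np.stack([dwells, np.zeros(10), np.zeros(10)], axis=1)
r2 = ((points[:, None, :] - positions[None]) ** 2).sum(-1)
A = 1.0 / np.maximum(r2, 0.25)

target = np.full(40, 7.0)                            # prescribed dose (Gy)
t, residual = nnls(A, target)                        # nonnegative dwell times
print("dwell times:", np.round(t, 3), " residual:", round(residual, 3))
```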

  16. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    International Nuclear Information System (INIS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-01-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment. (paper)

  17. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku Kose

    2017-01-01

    In the fields which require finding the most appropriate value, optimization became a vital approach to employ effective solutions. With the use of optimization techniques, many different fields in the modern life have found solutions to their real-world based problems. In this context, classical optimization techniques have had an important popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an...

  18. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    Science.gov (United States)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  19. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes

    2012-01-01

    In chromatography, complex inverse problems arise in parameter estimation and process optimization. Metaheuristic methods are general-purpose approximate algorithms that seek, and often find, good solutions at a reasonable computational cost. These methods are iterative processes that perform a robust search of a solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms is investigated to estimate parameters in liquid chromatography
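
    A compact, generic real-coded genetic algorithm for parameter estimation of the kind described, fitting a stand-in two-parameter response curve by minimizing the sum of squared errors; the chromatography model itself is not specified in the record, so a simple bi-exponential is assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

def model(t, k1, k2):
    # Stand-in response curve; a chromatography column model would go here.
    return np.exp(-k1 * t) - np.exp(-k2 * t)

t_obs = np.linspace(0.1, 5.0, 50)
y_obs = model(t_obs, 1.2, 3.5) + rng.normal(0, 0.01, t_obs.size)

def sse(params):
    return np.sum((model(t_obs, *params) - y_obs) ** 2)

bounds = np.array([[0.01, 10.0], [0.01, 10.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 2))

for gen in range(100):
    fit = np.array([sse(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:30]]                  # truncation selection
    mates = parents[rng.permutation(30)]
    alpha = rng.random((30, 1))
    children = alpha * parents + (1 - alpha) * mates     # arithmetic crossover
    children += rng.normal(0, 0.05, children.shape)      # Gaussian mutation
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([parents, children])

best = pop[np.argmin([sse(ind) for ind in pop])]
print("estimated (k1, k2):", np.round(best, 3))
```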

  20. A technique for the radar cross-section estimation of axisymmetric plasmoid

    International Nuclear Information System (INIS)

    Naumov, N D; Petrovskiy, V P; Sasinovskiy, Yu K; Shkatov, O Yu

    2015-01-01

    A model for the radio waves backscattering from both penetrable plasma and reflecting plasma is developed. The technique proposed is based on Huygens's principle and reduces the radar cross-section estimation to numerical integrations. (paper)

  1. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

    DEFF Research Database (Denmark)

    Wang, Yong; Cai, Zixing; Zhou, Yuren

    2009-01-01

    A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on current population state. Experiments on 13 benchmark test functions and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...

  2. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    Science.gov (United States)

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  3. A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media

    Science.gov (United States)

    Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...

  4. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    International Nuclear Information System (INIS)

    Barut, Murat

    2010-01-01

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation of uncertainties in the stator and rotor resistances, which are inherent to the speed-sensorless high efficiency control of induction motors (IMs) over a wide speed range; it also extends the limited number of state and parameter estimations possible with a conventional single EKF algorithm. To this end, the introduced estimation technique utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models based on stator resistance and rotor resistance estimation. This differs from the approaches in past studies, which require two separate EKF algorithms operating in a switching or braided manner, and thus has superiority over previous EKF schemes in this regard. The proposed EKF based estimation technique, which performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque involving the viscous friction term together with the rotor and stator resistances, is also used in combination with the speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated instantaneously via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high and zero speed ranges, assuming that the measured stator phase currents and voltages are available. Even under those variations, the performance of the speed-sensorless direct vector control system established on the novel EKF based estimation technique is observed to be quite good.

  5. Bi Input-extended Kalman filter based estimation technique for speed-sensorless control of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Barut, Murat, E-mail: muratbarut27@yahoo.co [Nigde University, Department of Electrical and Electronics Engineering, 51245 Nigde (Turkey)

    2010-10-15

    This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation of uncertainties in the stator and rotor resistances, which are inherent to the speed-sensorless high efficiency control of induction motors (IMs) over a wide speed range; it also extends the limited number of state and parameter estimations possible with a conventional single EKF algorithm. To this end, the introduced estimation technique utilizes a single EKF algorithm with the consecutive execution of two inputs derived from two individual extended IM models based on stator resistance and rotor resistance estimation. This differs from the approaches in past studies, which require two separate EKF algorithms operating in a switching or braided manner, and thus has superiority over previous EKF schemes in this regard. The proposed EKF based estimation technique, which performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque involving the viscous friction term together with the rotor and stator resistances, is also used in combination with the speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated instantaneously via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance in the high and zero speed ranges, assuming that the measured stator phase currents and voltages are available. Even under those variations, the performance of the speed-sensorless direct vector control system established on the novel EKF based estimation technique is observed to be quite good.
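
    The two-input consecutive-execution scheme is specific to the extended IM models and is not reproduced here; the underlying idea that both records rely on, joint state-and-parameter estimation with a single EKF over an augmented state, can be sketched on a scalar toy system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint state/parameter EKF: x[k+1] = a*x[k] + u[k] + w, y = x + v,
# with the unknown parameter a appended to the state (random-walk model).
a_true, Q, R = 0.85, 1e-4, 0.04
z = np.array([0.0, 0.5])                  # estimate of [x, a]
P = np.diag([1.0, 1.0])
x = 1.0

for k in range(300):
    u = np.sin(0.05 * k)
    x = a_true * x + u + rng.normal(0, np.sqrt(Q))   # true plant
    y = x + rng.normal(0, np.sqrt(R))                # measurement

    # --- predict (Jacobian of f([x, a]) = [a*x + u, a]) ---
    F = np.array([[z[1], z[0]],
                  [0.0,  1.0]])
    z = np.array([z[1] * z[0] + u, z[1]])
    P = F @ P @ F.T + np.diag([Q, 1e-6])

    # --- update with H = [1, 0] ---
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated a:", round(z[1], 3), " true a:", a_true)
```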

  6. Republic of Georgia estimates for prevalence of drug use: Randomized response techniques suggest under-estimation.

    Science.gov (United States)

    Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C

    2018-06-01

    Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys in the general population are a novelty, and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. To apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the Country of Georgia, we used the multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs such as heroin also were larger than the standard self-report estimates. We remain unsure about what is the "true" value for prevalence of using illegal psychotropic drugs in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce 'under-estimates' or at best, highly conservative, lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
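
    For readers unfamiliar with how RRT yields a prevalence estimate, a sketch of one common design (the unrelated-question method with known innocuous-question probability) follows; the survey's exact pairing scheme and figures are not assumed, and the numbers below are hypothetical:

```python
import numpy as np

def rrt_prevalence(yes_rate, n, p_sensitive=0.7, q_innocuous=0.5):
    """Unrelated-question RRT estimator: with probability p_sensitive the
    respondent answers the sensitive question, otherwise an innocuous
    question with known 'yes' probability q_innocuous.
    yes_rate: observed overall proportion of 'yes' answers."""
    pi_hat = (yes_rate - (1 - p_sensitive) * q_innocuous) / p_sensitive
    var = yes_rate * (1 - yes_rate) / (n * p_sensitive**2)
    half = 1.96 * np.sqrt(var)
    return pi_hat, (pi_hat - half, pi_hat + half)

est, ci = rrt_prevalence(yes_rate=0.40, n=4000)
print(f"prevalence: {est:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```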

  7. Accuracy in estimation of timber assortments and stem distribution - A comparison of airborne and terrestrial laser scanning techniques

    Science.gov (United States)

    Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2014-11-01

    Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.

  8. Electron Irradiation of Conjunctival Lymphoma-Monte Carlo Simulation of the Minute Dose Distribution and Technique Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo, E-mail: lorenzo.brualla@uni-due.de [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany); Zaragoza, Francisco J.; Sempau, Josep [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Barcelona (Spain); Wittig, Andrea [Department of Radiation Oncology, University Hospital Giessen and Marburg, Philipps-University Marburg, Marburg (Germany); Sauerwein, Wolfgang [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany)

    2012-07-15

    Purpose: External beam radiotherapy is the only conservative curative approach for Stage I non-Hodgkin lymphomas of the conjunctiva. The target volume is geometrically complex because it includes the eyeball and lid conjunctiva. Furthermore, the target volume is adjacent to radiosensitive structures, including the lens, lacrimal glands, cornea, retina, and papilla. The radiotherapy planning and optimization requires accurate calculation of the dose in these anatomical structures that are much smaller than the structures traditionally considered in radiotherapy. Neither conventional treatment planning systems nor dosimetric measurements can reliably determine the dose distribution in these small irradiated volumes. Methods and Materials: The Monte Carlo simulations of a Varian Clinac 2100 C/D and human eye were performed using the PENELOPE and PENEASYLINAC codes. Dose distributions and dose volume histograms were calculated for the bulbar conjunctiva, cornea, lens, retina, papilla, lacrimal gland, and anterior and posterior hemispheres. Results: The simulated results allow choosing the most adequate treatment setup configuration, which is an electron beam energy of 6 MeV with additional bolus and collimation by a cerrobend block with a central cylindrical hole of 3.0 cm diameter and central cylindrical rod of 1.0 cm diameter. Conclusions: Monte Carlo simulation is a useful method to calculate the minute dose distribution in ocular tissue and to optimize the electron irradiation technique in highly critical structures. Using a voxelized eye phantom based on patient computed tomography images, the dose distribution can be estimated with a standard statistical uncertainty of less than 2.4% in 3 min using a computing cluster with 30 cores, which makes this planning technique clinically relevant.

  9. Optimal experiment design for magnetic resonance fingerprinting.

    Science.gov (United States)

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
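
    A generic numerical version of the CRB computation described, using a finite-difference Jacobian of the signal model under white Gaussian noise; the toy exponential-recovery model stands in for an actual MR fingerprinting sequence:

```python
import numpy as np

def crb(signal, theta, sigma=0.01, eps=1e-6):
    """Cramér-Rao bound for unbiased estimation of theta from
    y = signal(theta) + N(0, sigma^2 I), via a finite-difference
    Jacobian.  signal: R^p -> R^m."""
    theta = np.asarray(theta, float)
    s0 = signal(theta)
    J = np.empty((s0.size, theta.size))
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps
        J[:, j] = (signal(theta + d) - signal(theta - d)) / (2 * eps)
    fim = J.T @ J / sigma**2        # Fisher information, white Gaussian noise
    return np.linalg.inv(fim)       # CRB: lower bound on estimator covariance

# Toy two-parameter exponential-recovery "sequence".
t = np.linspace(0.05, 3.0, 40)
model = lambda th: th[0] * (1.0 - np.exp(-t / th[1]))
print(np.sqrt(np.diag(crb(model, [1.0, 0.8]))))  # std-dev bound per parameter
```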

  10. Improved Atmospheric Correction Over the Indian Subcontinent Using Fast Radiative Transfer and Optimal Estimation

    Science.gov (United States)

    Natraj, V.; Thompson, D. R.; Mathur, A. K.; Babu, K. N.; Kindel, B. C.; Massie, S. T.; Green, R. O.; Bhattacharya, B. K.

    2017-12-01

    Remote Visible / ShortWave InfraRed (VSWIR) spectroscopy, typified by the Next-Generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG), is a powerful tool to map the composition, health, and biodiversity of Earth's terrestrial and aquatic ecosystems. These studies must first estimate surface reflectance, removing the atmospheric effects of absorption and scattering by water vapor and aerosols. Since atmospheric state varies spatiotemporally, and is insufficiently constrained by climatological models, it is important to estimate it directly from the VSWIR data. However, water vapor and aerosol estimation is a significant ongoing challenge for existing atmospheric correction models. Conventional VSWIR atmospheric correction methods evolved from multi-band approaches and do not fully utilize the rich spectroscopic data available. We use spectrally resolved (line-by-line) radiative transfer calculations, coupled with optimal estimation theory, to demonstrate improved accuracy of surface retrievals. These spectroscopic techniques are already pervasive in atmospheric remote sounding disciplines but have not yet been applied to imaging spectroscopy. Our analysis employs a variety of scenes from the recent AVIRIS-NG India campaign, which spans various climes, elevation changes, a wide range of biomes and diverse aerosol scenarios. A key aspect of our approach is joint estimation of surface and aerosol parameters, which allows assessment of aerosol distortion effects using spectral shapes across the entire measured interval from 380-2500 nm. We expect that this method would outperform band ratio approaches, and enable evaluation of subtle aerosol parameters where in situ reference data is not available, or for extreme aerosol loadings, as is observed in the India scenarios. The results are validated using existing in-situ reference spectra, reflectance measurements from assigned partners in India, and objective spectral quality metrics for scenes without any

  11. A simple model to estimate the optimal doping of p - Type oxide superconductors

    Directory of Open Access Journals (Sweden)

    Adir Moysés Luiz

    2008-12-01

    Full Text Available Oxygen doping of superconductors is discussed. Doping high-Tc superconductors with oxygen seems to be more efficient than other doping procedures. Using the assumption of double valence fluctuations, we present a simple model to estimate the optimal doping of p-type oxide superconductors. The experimental values of oxygen content for optimal doping of the most important p-type oxide superconductors can be accounted for adequately using this simple model. We expect that our simple model will encourage further experimental and theoretical researches in superconducting materials.

  12. Active load sharing technique for on-line efficiency optimization in DC microgrids

    DEFF Research Database (Denmark)

    Sanseverino, E. Riva; Zizzo, G.; Boscaino, V.

    2017-01-01

    Recently, DC power distribution is gaining more and more importance over its AC counterpart, achieving increased efficiency, greater flexibility, reduced volumes and capital cost. In this paper, a 24-120-325V two-level DC distribution system for home appliances, each including three parallel DC-DC converters, is modeled. An active load sharing technique is proposed for the on-line optimization of the global efficiency of the DC distribution network. The algorithm aims at the instantaneous efficiency optimization of the whole DC network, based on the on-line load current sampling. A Look Up Table is created to store the real efficiencies of the converters taking into account components tolerances. A MATLAB/Simulink model of the DC distribution network has been set up and a Genetic Algorithm has been employed for the global efficiency optimization. Simulation results are shown to validate the proposed...

  13. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.

  14. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    Science.gov (United States)

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471

  15. The importance of the chosen technique to estimate diffuse solar radiation by means of regression

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Talha; Altyn Yavuz, Arzu [Department of Statistics. Science and Literature Faculty. Eskisehir Osmangazi University (Turkey)], email: mtarslan@ogu.edu.tr, email: aaltin@ogu.edu.tr; Acikkalp, Emin [Department of Mechanical and Manufacturing Engineering. Engineering Faculty. Bilecik University (Turkey)], email: acikkalp@gmail.com

    2011-07-01

    The Ordinary Least Squares (OLS) method is one of the most frequently used methods for estimation of diffuse solar radiation. The data set must satisfy certain assumptions for the OLS method to work, the most important being that the error terms of the regression equation offered by OLS must follow a normal distribution. Utilizing an alternative robust estimator to obtain parameter estimates is highly effective in solving problems where normality fails due to the presence of outliers or some other factor. The purpose of this study is to investigate the importance of the chosen technique for the estimation of diffuse radiation. This study describes alternative robust methods frequently used in applications and compares them with the OLS method. Comparing the analysis of the data set by OLS with that by the M-regression (Huber, Andrews and Tukey) techniques, the study found that robust regression techniques are preferable to OLS because of the smoother explanation values.
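
    A minimal sketch of Huber M-regression via iteratively reweighted least squares, one of the robust alternatives the study compares against OLS; the data are synthetic, and the tuning constant c = 1.345 is the conventional choice, not necessarily the study's:

```python
import numpy as np

def huber_regression(X, y, c=1.345, n_iter=50):
    """M-regression with Huber weights via iteratively reweighted
    least squares (IRLS), starting from the OLS solution."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X1 @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = r / (c * max(s, 1e-12))
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X1 * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 80)                  # e.g., a clearness index
y = 0.9 - 0.8 * x + rng.normal(0, 0.03, 80)
y[:5] += 0.5                               # a few outliers
print("robust (intercept, slope):", np.round(huber_regression(x, y), 3))
```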

  16. Optimization of Training Signal Transmission for Estimating MIMO Channel under Antenna Mutual Coupling Conditions

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2010-01-01

    Full Text Available This paper reports investigations on the effect of antenna mutual coupling on the performance of training-based Multiple-Input Multiple-Output (MIMO) channel estimation. The influence of mutual coupling is assessed for two training-based channel estimation methods, Scaled Least Square (SLS) and Minimum Mean Square Error (MMSE). It is shown that the accuracy of MIMO channel estimation is governed by the sum of eigenvalues of the channel correlation matrix, which in turn is influenced by the mutual coupling in the transmitting and receiving array antennas. A water-filling-based procedure is proposed to optimize the training signal transmission to minimize the MIMO channel estimation errors.
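
    The classic water-filling recipe over the eigenvalues of a channel correlation matrix can be sketched as follows; the paper's training-optimization objective will differ in its constants, so this shows only the general pattern:

```python
import numpy as np

def water_filling(eigvals, power):
    """Allocate total 'power' across modes with gains 'eigvals' so that
    p_k = max(mu - 1/g_k, 0) and sum(p_k) = power (classic water-filling)."""
    g = np.sort(np.asarray(eigvals, float))[::-1]
    for k in range(g.size, 0, -1):
        mu = (power + np.sum(1.0 / g[:k])) / k        # candidate water level
        if mu - 1.0 / g[k - 1] > 0:                   # all k allocations positive?
            return np.maximum(mu - 1.0 / g, 0.0)
    return np.zeros_like(g)

p = water_filling([2.0, 1.0, 0.4, 0.05], power=3.0)
print(np.round(p, 3), "sum =", p.sum())               # weak modes get nothing
```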

  17. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    Science.gov (United States)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is one of the most widely used artificial intelligence algorithms at present. This study introduced the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designed a vehicle routing optimization model based on ACO; a vehicle routing optimization simulation system was then developed in the C++ programming language, and sensitivity analyses, estimations, and improvements of the three key parameters of ACO were carried out. The results indicated that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of the VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and shows good robustness.
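
    A compact illustration of the ACO mechanics and its three key parameters (pheromone weight alpha, heuristic weight beta, evaporation rate rho), applied to a single-route special case of the VRP on synthetic locations; this is not the paper's simulation system:

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.uniform(0, 100, (12, 2))                    # 12 customer locations
D = np.linalg.norm(pts[:, None] - pts[None], axis=-1) + np.eye(12) * 1e9
eta = 1.0 / D                                         # heuristic visibility
tau = np.ones((12, 12))                               # pheromone trails

alpha, beta, rho, n_ants = 1.0, 3.0, 0.5, 20          # the three key parameters
best_len, best_tour = np.inf, None

for it in range(100):
    tours = []
    for _ in range(n_ants):
        tour = [0]
        while len(tour) < 12:
            i = tour[-1]
            w = (tau[i] ** alpha) * (eta[i] ** beta)
            w[tour] = 0.0                             # mask visited nodes
            tour.append(rng.choice(12, p=w / w.sum()))
        length = sum(D[tour[k], tour[(k + 1) % 12]] for k in range(12))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1.0 - rho)                                # evaporation
    for length, tour in tours:                        # pheromone deposit
        for k in range(12):
            a, b = tour[k], tour[(k + 1) % 12]
            tau[a, b] += 1.0 / length
            tau[b, a] += 1.0 / length

print("best route length:", round(best_len, 1))
```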

  18. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  19. Enabling Incremental Query Re-Optimization.

    Science.gov (United States)

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.

  20. Linear triangular optimization technique and pricing scheme in residential energy management systems

    Science.gov (United States)

    Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad

    2018-06-01

    This paper presents a new linear optimization algorithm for the power scheduling of electric appliances. The proposed system is applied in a smart home community, in which the community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and inclining block rates. The results of the proposed optimization algorithm demonstrate that, by applying the anticipated technique, end users can not only minimise the consumption cost but also reduce the peak-to-average power ratio, which will be beneficial for the utilities as well.
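
    A linear-optimization scheduling step of this kind can be sketched with a stock LP solver; the prices, appliance demand, and slot limits below are hypothetical, and the paper's community-level pricing interaction is not modeled:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical day-ahead prices for 8 scheduling slots ($/kWh) and one
# shiftable appliance needing 6 kWh in total, at most 2 kW in any slot.
prices = np.array([0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.32, 0.35])
T, total_kwh, max_kw = len(prices), 6.0, 2.0

res = linprog(c=prices,                       # minimize total energy cost
              A_eq=np.ones((1, T)), b_eq=[total_kwh],
              bounds=[(0.0, max_kw)] * T,
              method="highs")

print("schedule (kWh per slot):", np.round(res.x, 2))
print("cost: $%.2f  vs flat schedule: $%.2f"
      % (res.fun, prices.mean() * total_kwh))
```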

  1. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry

    Energy Technology Data Exchange (ETDEWEB)

    Manios, G.E.; Mazonakis, M.; Damilakis, J. [University of Crete, Department of Medical Physics, Faculty of Medicine, Heraklion, Crete (Greece); Voulgaris, C.; Karantanas, A. [University of Crete, Department of Radiology, Faculty of Medicine, Heraklion, Crete (Greece)

    2016-03-15

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95 % limits of agreement were also acceptable (VAF: -16.5 %, 16.1 %; SAF: -10.8 %, 10.7 %) and the repeatability of stereology was good (VAF: CV = 4.5 %, SAF: CV = 3.2 %). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. (orig.)

  2. Using Genetic Algorithm to Estimate Hydraulic Parameters of Unconfined Aquifers

    Directory of Open Access Journals (Sweden)

    Asghar Asghari Moghaddam

    2009-03-01

    Full Text Available Nowadays, optimization techniques such as Genetic Algorithms (GA) have attracted wide attention among scientists for solving complicated engineering problems. In this article, pumping test data are used to assess the efficiency of GA in estimating unconfined aquifer parameters and a sensitivity analysis is carried out to propose an optimal arrangement of GA. For this purpose, hydraulic parameters of three sets of pumping test data are calculated by GA and they are compared with the results of graphical methods. The results indicate that the GA technique is an efficient, reliable, and powerful method for estimating the hydraulic parameters of unconfined aquifers and, further, that in cases of deficiency in pumping test data, it has a better performance than graphical methods.

  3. Comparison of heuristic optimization techniques for the enrichment and gadolinia distribution in BWR fuel lattices and decision analysis

    International Nuclear Information System (INIS)

    Castillo, Alejandro; Martín-del-Campo, Cecilia; Montes-Tadeo, José-Luis; François, Juan-Luis; Ortiz-Servin, Juan-José; Perusquía-del-Cueto, Raúl

    2014-01-01

    Highlights: • Different metaheuristic optimization techniques were compared. • The optimal enrichment and gadolinia distribution in a BWR fuel lattice was studied. • A decision making tool based on the Position Vector of Minimum Regret was applied. • Similar results were found for the different optimization techniques. - Abstract: In the present study a comparison of the performance of five heuristic techniques for optimization of combinatorial problems is shown. The techniques are: Ant Colony System, Artificial Neural Networks, Genetic Algorithms, Greedy Search and a hybrid of Path Relinking and Scatter Search. They were applied to obtain an “optimal” enrichment and gadolinia distribution in a fuel lattice of a boiling water reactor. All techniques used the same objective function for qualifying the different distributions created during the optimization process as well as the same initial conditions and restrictions. The parameters included in the objective function are the k-infinite multiplication factor, the maximum local power peaking factor, the average enrichment and the average gadolinia concentration of the lattice. The CASMO-4 code was used to obtain the neutronic parameters. The criteria for qualifying the optimization techniques include also the evaluation of the best lattice with burnup and the number of evaluations of the objective function needed to obtain the best solution. In conclusion all techniques obtain similar results, but there are methods that found better solutions faster than others. A decision analysis tool based on the Position Vector of Minimum Regret was applied to aggregate the criteria in order to rank the solutions according to three functions: neutronic grade at 0 burnup, neutronic grade with burnup and global cost which aggregates the computing time in the decision. According to the results Greedy Search found the best lattice in terms of the neutronic grade at 0 burnup and also with burnup. However, Greedy Search is

  4. A parameter estimation for DC servo motor by using optimization process

    International Nuclear Information System (INIS)

    Arjoni Amir

    2010-01-01

    Modeling and simulation of DC servo motor parameters using Matlab Simulink software have been carried out. The objective of DC servo motor parameter estimation is to obtain parameter values (B, La, Ra, Km, J) that are significant and can be used in the actuation process of control systems. In control system analysis, the DC servo motor is expressed by a transfer function equation so that it can be analyzed more quickly as an actuator component. To obtain the model parameters and initial conditions of the DC servo motor, modeling and simulation are then carried out in which the DC servo motor is combined with other components. The initial values of the DC servo motor parameters, used as the starting point of the estimation, are taken from the motor's factory data. These initial parameters are applied in an optimization process that uses a nonlinear least squares algorithm and minimizes the cost function value, so that significant DC servo motor parameter values are obtained. The results of the optimization process are B = 0.039881, J = 1.2608e-007, Km = 0.069648, La = 2.3242e-006 and Ra = 1.8837. (author)
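
    The optimization step can be illustrated with a nonlinear least-squares fit of a step response; a reduced first-order model stands in here for the full (B, La, Ra, Km, J) transfer-function model, and the data are synthetic rather than factory values:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a motor speed step response by nonlinear least squares.  The
# first-order model K*(1 - exp(-t/tau)) is a stand-in for the full
# DC servo transfer function.
rng = np.random.default_rng(5)
t = np.linspace(0, 0.5, 200)
omega_meas = 150.0 * (1 - np.exp(-t / 0.06)) + rng.normal(0, 1.0, t.size)

def residuals(p):
    K, tau = p
    return K * (1 - np.exp(-t / tau)) - omega_meas  # cost to be minimized

fit = least_squares(residuals, x0=[100.0, 0.1],
                    bounds=([0.0, 1e-4], [1e4, 1.0]))
print("K = %.1f rad/s, tau = %.4f s" % tuple(fit.x))
```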

  5. Calculational techniques for estimating population doses from radioactivity in natural gas from nuclearly stimulated wells

    International Nuclear Information System (INIS)

    Barton, C.J.; Moore, R.E.; Rohwer, P.S.; Kaye, S.V.

    1975-01-01

    Techniques for estimating radiation doses from exposure to combustion products of natural gas obtained from wells created by use of nuclear explosives were first developed in the Gasbuggy Project. These techniques were refined and extended by development of a number of computer codes in studies related to the Rulison Project, the second in the series of joint government-industry efforts to demonstrate the feasibility of increasing natural gas production from low-permeability rock formations by use of nuclear explosives. These techniques are described and dose estimates that illustrate their use are given. These dose estimation studies have been primarily theoretical, but we have tried to make our hypothetical exposure conditions correspond as closely as possible with conditions that could exist if nuclearly stimulated natural gas is used commercially. (author)

  6. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation (FE)) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems. Copyright...

  7. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    Science.gov (United States)

    Gao, Qian

    compared with the conventional decoupled system with the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control non-linear distortion, the optical peak-to-average-power ratio (PAPR) of LEDs can be individually constrained. With a SVD-based pre-equalizer designed and employed, our scheme can achieve lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay to destination (RtD) channel in phase 1 could affect the estimate of the source to relay (StR) channel in phase 2, making it erroneous. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the length of source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of relay training and source training slots, the relay amplification gain, and the channel prior information respectively. The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as a multiple access scheme and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, then it operates in a full

  8. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE.

    Science.gov (United States)

    Xie, Xianchao; Kou, S C; Brown, Lawrence

    2016-03-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results.

  9. CC-MUSIC: An Optimization Estimator for Mutual Coupling Correction of L-Shaped Nonuniform Array with Single Snapshot

    Directory of Open Access Journals (Sweden)

    Yuguan Hou

    2015-01-01

    Full Text Available In the single-snapshot case, the integrated SNR gain of multiple snapshots cannot be obtained, which degrades mutual coupling correction performance at lower SNR. In this paper, a Convex Chain MUSIC (CC-MUSIC) algorithm is proposed for the mutual coupling correction of an L-shaped nonuniform array with a single snapshot. It is an online self-calibration algorithm and does not require prior knowledge of the correction matrix initialization or a calibration source with known position. An optimization is derived for the approximation between the mutual-coupling-free covariance matrix without the interpolated transformation and the covariance matrix with mutual coupling and the interpolated transformation. A global optimization problem is formed for the mutual coupling correction and the spatial spectrum estimation. Furthermore, the nonconvex problem of this global optimization is transformed into a chain of convex optimizations, which is essentially an alternating optimization routine. The simulation results demonstrate the effectiveness of the proposed method, which improves the resolution ability and the estimation accuracy for multiple sources with a single snapshot.

  10. Stochastic global optimization as a filtering problem

    International Nuclear Information System (INIS)

    Stinis, Panos

    2012-01-01

    We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
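
    A minimal particle-filter-flavored optimizer in the spirit described: realizations of the noisy objective are weighted, resampled toward promising regions, and jittered; the temperature schedule and noise level are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(11)

def noisy_objective(x):
    # Rastrigin-like function observed through noise (the "stochastic map").
    clean = np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)
    return clean + rng.normal(0, 0.5, clean.shape)

N, dim, temp = 200, 2, 1.0
particles = rng.uniform(-5, 5, (N, dim))

for step in range(80):
    f = noisy_objective(particles)
    w = np.exp(-(f - f.min()) / temp)                # importance weights
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                 # resample promising particles
    particles = particles[idx] + rng.normal(0, 0.2, (N, dim))  # jitter
    temp *= 0.97                                     # sharpen the filter

best = particles[np.argmin(noisy_objective(particles))]
print("approximate minimizer:", np.round(best, 2))   # true optimum is the origin
```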

  11. Stochastic Optimal Estimation with Fuzzy Random Variables and Fuzzy Kalman Filtering

    Institute of Scientific and Technical Information of China (English)

    FENG Yu-hu

    2005-01-01

    By constructing a mean-square performance index in the case of fuzzy random variables, the optimal estimation theorem for an unknown fuzzy state using fuzzy observation data is given. The state and output of a linear discrete-time dynamic fuzzy system with Gaussian noise are Gaussian fuzzy random variable sequences. An approach to fuzzy Kalman filtering is discussed. Fuzzy Kalman filtering contains two parts: a real-valued non-random recurrence equation and the standard Kalman filtering.

  12. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    Science.gov (United States)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demand of the rapidly-growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level and qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  13. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    Science.gov (United States)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two output drone flight control system.

  14. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  15. Economic Optimization of Spray Dryer Operation using Nonlinear Model Predictive Control with State Estimation

    DEFF Research Database (Denmark)

    Petersen, Lars Norbert; Jørgensen, John Bagterp; Rawlings, James B.

    2015-01-01

    In this paper, we develop an economically optimizing Nonlinear Model Predictive Controller (E-NMPC) for a complete spray drying plant with multiple stages. In the E-NMPC the initial state is estimated by an extended Kalman Filter (EKF) with noise covariances estimated by an autocovariance least squares method (ALS). We present a model for the spray drying plant and use this model for simulation as well as for prediction in the E-NMPC. The open-loop optimal control problem in the E-NMPC is solved using the single-shooting method combined with a quasi-Newton Sequential Quadratic Programming (SQP) algorithm and the adjoint method for computation of gradients. We evaluate the economic performance when unmeasured disturbances are present. By simulation, we demonstrate that the E-NMPC improves the profit of spray drying by 17% compared to conventional PI control.

  16. BRAIN Journal - Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku Kose

    2016-01-01

    ABSTRACT In the fields which require finding the most appropriate value, optimization became a vital approach to employ effective solutions. With the use of optimization techniques, many different fields in the modern life have found solutions to their real-world based problems. In this context, classical optimization techniques have had an important popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Sc...

  17. Global Optimization of Nonlinear Blend-Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Pedro A. Castillo Castillo

    2017-04-01

    Full Text Available The scheduling of gasoline-blending operations is an important problem in the oil refining industry. This problem not only exhibits the combinatorial nature that is intrinsic to scheduling problems, but also non-convex nonlinear behavior, due to the blending of various materials with different quality properties. In this work, a global optimization algorithm is proposed to solve a previously published continuous-time mixed-integer nonlinear scheduling model for gasoline blending. The model includes blend recipe optimization, the distribution problem, and several important operational features and constraints. The algorithm employs piecewise McCormick relaxation (PMCR) and the normalized multiparametric disaggregation technique (NMDT) to compute estimates of the global optimum. These techniques partition the domain of one of the variables in a bilinear term and generate convex relaxations for each partition. By increasing the number of partitions and reducing the domain of the variables, the algorithm is able to refine the estimates of the global solution. The algorithm is compared to two commercial global solvers and two heuristic methods by solving four examples from the literature. Results show that the proposed global optimization algorithm performs on par with commercial solvers but is not as fast as heuristic approaches.
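
    For the bilinear building block, the McCormick envelope replaces w = x*y with four linear inequalities over a box; the sketch below builds them explicitly and minimizes w over the relaxation (PMCR would additionally partition the box):

```python
import numpy as np
from scipy.optimize import linprog

# McCormick relaxation of the bilinear term w = x*y on the box
# [xL, xU] x [yL, yU]; decision variables are (x, y, w).  Minimizing w
# over the envelope gives a valid lower bound on min x*y.
xL, xU, yL, yU = 0.0, 2.0, 1.0, 3.0

# Four envelope inequalities, written as A @ (x, y, w) <= b:
#   w >= xL*y + yL*x - xL*yL   ->   yL*x + xL*y - w <= xL*yL
#   w >= xU*y + yU*x - xU*yU   ->   yU*x + xU*y - w <= xU*yU
#   w <= xU*y + yL*x - xU*yL   ->  -yL*x - xU*y + w <= -xU*yL
#   w <= xL*y + yU*x - xL*yU   ->  -yU*x - xL*y + w <= -xL*yU
A = np.array([[ yL,  xL, -1.0],
              [ yU,  xU, -1.0],
              [-yL, -xU,  1.0],
              [-yU, -xL,  1.0]])
b = np.array([xL * yL, xU * yU, -xU * yL, -xL * yU])

res = linprog(c=[0.0, 0.0, 1.0], A_ub=A, b_ub=b,
              bounds=[(xL, xU), (yL, yU), (None, None)], method="highs")
print("lower bound on x*y:", res.fun)   # 0.0 here, tight at (x, y) = (0, 1)
```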

  18. A comprehensive review of prostate cancer brachytherapy: defining an optimal technique

    International Nuclear Information System (INIS)

    Vicini, Frank A.; Kini, Vijay R.; Edmundson, Gregory B.S.; Gustafson, Gary S.; Stromberg, Jannifer; Martinez, Alvaro

    1999-01-01

    Purpose: A comprehensive review of prostate cancer brachytherapy literature was performed to determine if an optimal method of implantation could be identified, and to compare and contrast techniques currently in use. Methods and Materials: A MEDLINE search was conducted to obtain all articles in the English language on prostate cancer brachytherapy from 1985 through 1998. Articles were reviewed and grouped to determine the primary technique of implantation, the method or philosophy of source placement and/or dose specification, the technique to evaluate implant quality, overall treatment results (based upon pretreatment prostate specific antigen, (PSA), and biochemical control) and clinical, pathological or biochemical outcome based upon implant quality. Results: A total of 178 articles were identified in the MEDLINE database. Of these, 53 studies discussed evaluable techniques of implantation and were used for this analysis. Of these studies, 52% used preoperative ultrasound to determine the target volume to be implanted, 16% used preoperative computerized tomography (CT) scans, and 18% placed seeds with an open surgical technique. An additional 11% of studies placed seeds or needles under ultrasound guidance using interactive real-time dosimetry. The number and distribution of radioactive sources to be implanted or the method used to prescribe dose was determined using nomograms in 27% of studies, a least squares optimization technique in 11%, or not stated in 35%. In the remaining 26%, sources were described as either uniformly, differentially, or peripherally placed in the gland. To evaluate implant quality, 28% of studies calculated some type of dose-volume histogram, 21% calculated the matched peripheral dose, 19% the minimum peripheral dose, 14% used some type of CT-based qualitative review and, in 18% of studies, no implant quality evaluation was mentioned. Six studies correlated outcome with implant dose. One study showed an association of implant dose

  19. Techniques for optimizing inerting in electron processors

    International Nuclear Information System (INIS)

    Rangwalla, I.J.; Korn, D.J.; Nablo, S.V.

    1993-01-01

    The design of an ''inert gas'' distribution system in an electron processor must satisfy a number of requirements. The first of these is the elimination or control of beam-produced ozone and NOx, which can be transported from the process zone by the product into the work area. Since the tolerable levels for O3 in occupied areas around the processor are low, this requires either destruction of the O3 in the beam heated process zone, or exhausting and dilution of the gas at the processor exit. The second requirement of the inerting system is to provide a suitable environment for completing efficient, free radical initiated addition polymerization. The competition between radical loss through de-excitation and that from O2 quenching must be understood. This group has used gas chromatographic analysis of electron cured coatings to study the trade-offs of delivered dose, dose rate and O2 concentration in the process zone, to determine the tolerable ranges of parameter excursions for production quality control purposes. These techniques are described for an ink coating system on paperboard, where a broad range of process parameters has been studied (dose, dose rate, O2 concentration). It is then shown how the technique is used to optimize the use of higher purity (10-100 ppm O2) nitrogen gas for inerting, in combination with lower purity (2-20,000 ppm O2) non-cryogenically produced gas, as from membrane or pressure swing adsorption generators. (author)

  20. A review of optimization techniques used in the design of fibre composite structures for civil engineering applications

    International Nuclear Information System (INIS)

    Awad, Ziad K.; Aravinthan, Thiru; Zhuge, Yan; Gonzalez, Felipe

    2012-01-01

    Highlights: → We reviewed existing optimization techniques for fibre composite structures. → Proposed an improved methodology for design optimization. → Comparison showed the MRDO is most suitable. -- Abstract: Fibre composite structures have become an attractive candidate for civil engineering applications. Fibre reinforced polymer (FRP) composite materials have been used to rehabilitate or replace old, degrading traditional structures and to build new structures. However, the lack of design standards for civil infrastructure limits their structural applications. The majority of existing applications have been designed based on research and guidelines provided by the fibre composite manufacturers, or on the designer's experience, with the result that the final structure is generally over-designed. This paper provides a review of the available studies related to the design optimization of fibre composite structures used in civil engineering, such as plates, beams, box beams, sandwich panels, bridge girders, and bridge decks. Various optimization methods are presented and compared. In addition, the importance of using the appropriate optimization technique is discussed. An improved methodology, which considers experimental testing, numerical modelling, and design constraints, is proposed in the paper for the design optimization of composite structures.

  1. Academic Training: Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms - Lecture series

    CERN Multimedia

    Françoise Benz

    2004-01-01

    ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require an optimum or near-optimum solution. Optimization can be used to solve many different problems, such as network design, sets and partitions, storage and retrieval, or scheduling. On the other hand, in nature there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years, several attempts have been made to develop optimization algorithms that simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on natural annealing processes, or Evolutionary Computation, based on biological evolution processes. Geneti...

  2. Academic Training: Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms - Lecture series

    CERN Multimedia

    Françoise Benz

    2004-01-01

    ENSEIGNEMENT ACADEMIQUE ACADEMIC TRAINING Françoise Benz 73127 academic.training@cern.ch ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require an optimum or near-optimum solution. Optimization can be used to solve many different problems, such as network design, sets and partitions, storage and retrieval, or scheduling. On the other hand, in nature there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years, several attempts have been made to develop optimization algorithms that simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on nat...
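
    As a toy illustration of the evolutionary computation paradigm the lectures describe (not material from the course itself), the sketch below implements a bare-bones real-valued genetic algorithm with tournament selection, uniform crossover and Gaussian mutation; the objective and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective to maximize; a real problem would replace this.
    return -np.sum((x - 0.7) ** 2, axis=-1)

def genetic_algorithm(dim=5, pop_size=40, generations=100,
                      mutation_rate=0.1, mutation_scale=0.1):
    pop = rng.random((pop_size, dim))                 # initial population
    for _ in range(generations):
        fit = fitness(pop)
        # Tournament selection: the fitter of two random individuals survives.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # Uniform crossover between consecutive parents.
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation on a small fraction of genes.
        mutate = rng.random((pop_size, dim)) < mutation_rate
        pop = children + mutate * rng.normal(0, mutation_scale, (pop_size, dim))
    return pop[np.argmax(fitness(pop))]

print(genetic_algorithm())   # should approach [0.7, 0.7, 0.7, 0.7, 0.7]
```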

  3. Neoliberal Optimism: Applying Market Techniques to Global Health.

    Science.gov (United States)

    Mei, Yuyang

    2017-01-01

    Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.

  4. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic state estimators for energy management in electric power systems. Various dynamic estimators have been developed, but have never been implemented. This is primarily because of dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses on how to circumvent this high dimensionality, which is especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of high voltage electric transmission systems.
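
    The core predict/update cycle of the extended Kalman filter that such estimators rely on is sketched below; this is a textbook EKF step, not the paper's hierarchical decomposition-aggregation scheme, and all function arguments are placeholders.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    x, P : previous state estimate and covariance
    z    : new measurement vector
    f, h : nonlinear process and measurement models
    F_jac, H_jac : functions returning their Jacobians at a given state
    Q, R : process and measurement noise covariances"""
    # Prediction step: in the paper, this is where the new state
    # variables and the load forecast enter the process model.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Filtering (update) step: the dimensionality bottleneck that the
    # decomposition-aggregation hierarchy is designed to break up.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```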

  5. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Hao

    2015-06-01

    Full Text Available The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for the Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. As such, a novel MUSIC approach is proposed on the basis of the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. The virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and to establish a full-order correlation matrix. ASGSO optimizes the function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely, without loss of effective radar aperture.
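
    For reference, a minimal classical MUSIC pseudospectrum for a uniform linear array is sketched below; the paper's contributions (virtual spatial smoothing and the ASGSO search that replaces the costly grid scan) are deliberately omitted, and the array geometry and noise levels are made up.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Classical MUSIC pseudospectrum for a uniform linear array.
    X : (n_antennas, n_snapshots) complex snapshot matrix
    d : element spacing in wavelengths"""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    eigval, eigvec = np.linalg.eigh(R)           # ascending eigenvalues
    En = eigvec[:, : n - n_sources]              # noise subspace
    theta = np.deg2rad(angles)
    k = np.arange(n)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(theta)[None, :])  # steering vectors
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return angles, 1.0 / denom                   # peaks at source directions

# Example: two uncorrelated sources at -20 and 30 degrees, 8-element array.
rng = np.random.default_rng(0)
doas = np.deg2rad([-20.0, 30.0])
steer = np.exp(-2j * np.pi * 0.5 * np.arange(8)[:, None] * np.sin(doas))
sig = rng.normal(size=(2, 200)) + 1j * rng.normal(size=(2, 200))
noise = 0.1 * (rng.normal(size=(8, 200)) + 1j * rng.normal(size=(8, 200)))
ang, P = music_spectrum(steer @ sig + noise, n_sources=2)
print(ang[np.argmax(P)])   # one of the two peaks; inspect P for both
```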

  6. Optimal algorithm switching for the estimation of systole period from cardiac microacceleration signals (SonR).

    Science.gov (United States)

    Giorgis, L; Frogerais, P; Amblard, A; Donal, E; Mabo, P; Senhadji, L; Hernández, A I

    2012-11-01

    Previous studies have shown that cardiac microacceleration signals, recorded either cutaneously, or embedded into the tip of an endocardial pacing lead, provide meaningful information to characterize the cardiac mechanical function. This information may be useful to personalize and optimize the cardiac resynchronization therapy, delivered by a biventricular pacemaker, for patients suffering from chronic heart failure (HF). This paper focuses on the improvement of a previously proposed method for the estimation of the systole period from a signal acquired with a cardiac microaccelerometer (SonR sensor, Sorin CRM SAS, France). We propose an optimal algorithm switching approach, to dynamically select the best configuration of the estimation method, as a function of different control variables, such as the signal-to-noise ratio or heart rate. This method was evaluated on a database containing recordings from 31 patients suffering from chronic HF and implanted with a biventricular pacemaker, for which various cardiac pacing configurations were tested. Ultrasound measurements of the systole period were used as a reference and the improved method was compared with the original estimator. A reduction of 11% on the absolute estimation error was obtained for the systole period with the proposed algorithm switching approach.

  7. Optimal non-linear health insurance.

    Science.gov (United States)

    Blomqvist, A

    1997-06-01

    Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.

  8. Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques

    International Nuclear Information System (INIS)

    Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.

    2015-01-01

    Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended
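
    The core of the dynalog analysis reduces to simple counting. Below is a hedged sketch (not the authors' in-house software): given planned and actual leaf positions as parsed from dynalog files, compute the fraction of samples whose deviation exceeds each candidate tolerance; the parsing and hold-off logic are omitted and the data are synthetic.

```python
import numpy as np

def fault_fraction(planned, actual, tolerances):
    """Fraction of MLC position samples whose deviation exceeds each
    candidate tolerance (mm). planned/actual: arrays with one row per
    control sample and one column per leaf."""
    dev = np.abs(np.asarray(planned) - np.asarray(actual))
    return {tol: float(np.mean(dev > tol)) for tol in tolerances}

# Hypothetical positions; real values would come from dynalog files.
rng = np.random.default_rng(1)
planned = rng.uniform(-50, 50, size=(2000, 120))
actual = planned + rng.normal(0, 0.6, size=planned.shape)
print(fault_fraction(planned, actual, tolerances=[1.0, 1.5, 2.0, 2.5]))
```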

  9. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    Science.gov (United States)

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
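
    For background, range-based localization rests on a mapping from RSSI to distance; the log-distance path-loss model below is the classical baseline that the ANFIS and hybrid ANN models effectively replace with a learned mapping. The reference RSSI and path-loss exponent are hypothetical and would be calibrated per velodrome.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_d0=-40.0, d0=1.0, n=2.2):
    """Invert the log-distance path-loss model to estimate range (m).
    rssi_d0 : RSSI measured at reference distance d0
    n       : path-loss exponent (environment dependent)."""
    return d0 * 10 ** ((rssi_d0 - np.asarray(rssi_dbm)) / (10 * n))

print(rssi_to_distance([-40, -55, -70]))   # roughly [1.0, 4.8, 23.1] m
```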

  10. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
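
    A compact sketch of the loop described above, assuming a one-dimensional target, an RBF Gaussian process and an upper-confidence-bound acquisition evaluated on a grid; the paper's exact acquisition and its combination with steepest descent are not reproduced, and all settings are assumptions.

```python
import numpy as np

def rbf_kernel(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v ** 2, axis=0)
    return Ks.T @ alpha, np.sqrt(np.maximum(var, 0))

def bayesian_optimize(log_prob, bounds=(0.0, 1.0), n_init=3, n_iter=15, kappa=2.0):
    """Maximize an expensive (log-)probability with few evaluations."""
    rng = np.random.default_rng(0)
    X = rng.uniform(*bounds, n_init)
    y = np.array([log_prob(x) for x in X])
    grid = np.linspace(*bounds, 500)
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mu + kappa * sd)]   # explore/exploit trade-off
        X, y = np.append(X, x_next), np.append(y, log_prob(x_next))
    return X[np.argmax(y)], y.max()

# Toy stand-in for a computationally extensive posterior.
print(bayesian_optimize(lambda x: -100 * (x - 0.42) ** 2))
```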

  11. Ant-Based Phylogenetic Reconstruction (ABPR): A new distance algorithm for phylogenetic estimation based on ant colony optimization

    Directory of Open Access Journals (Sweden)

    Karla Vittori

    2008-12-01

    Full Text Available We propose a new distance algorithm for phylogenetic estimation based on Ant Colony Optimization (ACO), named Ant-Based Phylogenetic Reconstruction (ABPR). ABPR joins two taxa iteratively based on evolutionary distance among sequences, while also accounting for the quality of the phylogenetic tree built according to the total length of the tree. Similar to optimization algorithms for phylogenetic estimation, the algorithm allows exploration of a larger set of nearly optimal solutions. We applied the algorithm to four empirical data sets of mitochondrial DNA ranging from 12 to 186 sequences, and from 898 to 16,608 base pairs, and covering taxonomic levels from populations to orders. We show that ABPR performs better than the commonly used Neighbor-Joining algorithm, except when sequences are too closely related (e.g., population-level sequences). The phylogenetic relationships recovered at and above species level by ABPR agree with conventional views. However, like other algorithms of phylogenetic estimation, the proposed algorithm failed to recover expected relationships when distances are too similar or when rates of evolution are very variable, leading to the problem of long-branch attraction. ABPR, as well as other ACO-based algorithms, is emerging as a fast and accurate alternative method of phylogenetic estimation for large data sets.

  12. Analysis on the Metrics used in Optimizing Electronic Business based on Learning Techniques

    Directory of Open Access Journals (Sweden)

    Irina-Steliana STAN

    2014-09-01

    Full Text Available The present paper proposes a methodology for analyzing the metrics related to electronic business. Draft optimization models include KPIs that can highlight the specifics of the business, provided they are integrated using learning-based techniques. Having identified the most important, high-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human resources will increasingly collaborate with the optimization models, which will translate into higher-quality decisions followed by increased profitability.

  13. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A. [Physics and Astronomy Department, University of California, Los Angeles, California 90095 (United States); Bobrek, M. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6006 (United States)

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  14. Optimized Estimation of Surface Layer Characteristics from Profiling Measurements

    Directory of Open Access Journals (Sweden)

    Doreene Kang

    2016-01-01

    Full Text Available New sampling techniques such as tethered-balloon-based measurements or small unmanned aerial vehicles are capable of providing multiple profiles of the Marine Atmospheric Surface Layer (MASL in a short time period. It is desirable to obtain surface fluxes from these measurements, especially when direct flux measurements are difficult to obtain. The profiling data is different from the traditional mean profiles obtained at two or more fixed levels in the surface layer from which surface fluxes of momentum, sensible heat, and latent heat are derived based on Monin-Obukhov Similarity Theory (MOST. This research develops an improved method to derive surface fluxes and the corresponding MASL mean profiles of wind, temperature, and humidity with a least-squares optimization method using the profiling measurements. This approach allows the use of all available independent data. We use a weighted cost function based on the framework of MOST with the cost being optimized using a quasi-Newton method. This approach was applied to seven sets of data collected from the Monterey Bay. The derived fluxes and mean profiles show reasonable results. An empirical bias analysis is conducted using 1000 synthetic datasets to evaluate the robustness of the method.
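
    A stripped-down sketch of the profile-fitting idea, assuming neutral stability so the mean wind follows the log law; the actual method fits wind, temperature and humidity jointly with MOST stability corrections, and BFGS stands in here for the quasi-Newton optimizer.

```python
import numpy as np
from scipy.optimize import minimize

KAPPA = 0.4   # von Karman constant

def wind_profile(z, u_star, z0):
    """Neutral-stability log-law mean wind profile."""
    return (u_star / KAPPA) * np.log(z / z0)

def fit_surface_layer(z_obs, u_obs, sigma_u=0.1):
    """Weighted least-squares fit of friction velocity u* and roughness
    length z0 to profiling measurements."""
    def cost(p):
        u_star, log_z0 = p
        resid = (u_obs - wind_profile(z_obs, u_star, np.exp(log_z0))) / sigma_u
        return np.sum(resid ** 2)
    res = minimize(cost, x0=[0.3, np.log(1e-3)], method="BFGS")
    return res.x[0], float(np.exp(res.x[1]))   # (u*, z0)

# Synthetic profile: u* = 0.25 m/s, z0 = 0.2 mm, heights 2-50 m.
z = np.linspace(2, 50, 20)
u = wind_profile(z, 0.25, 2e-4) + np.random.default_rng(2).normal(0, 0.05, z.size)
print(fit_surface_layer(z, u))
```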

  15. Advances in estimation methods of vegetation water content based on optical remote sensing techniques

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Quantitative estimation of vegetation water content (VWC) using optical remote sensing techniques is helpful in forest fire assessment, agricultural drought monitoring and crop yield estimation. This paper reviews the research advances in VWC retrieval using spectral reflectance, spectral water index and radiative transfer model (RTM) methods. It also evaluates the reliability of VWC estimation using spectral water indices from observation data and the RTM. Focusing on two main definitions of VWC, the fuel moisture content (FMC) and the equivalent water thickness (EWT), the retrieval accuracies of FMC and EWT using vegetation water indices are analyzed. Moreover, measured information and datasets are used to estimate VWC; the results show significant correlations among the vegetation water indices (WSI, NDII, NDWI1640, WI/NDVI) and canopy FMC of winter wheat (n=45). Finally, future development directions of VWC detection based on optical remote sensing techniques are summarized.
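
    As a concrete example of a spectral water index, the sketch below computes NDWI from near-infrared and shortwave-infrared reflectances; the regression from index to FMC or EWT is empirical, so the mapping would be calibrated against field data rather than taken from this review.

```python
import numpy as np

def ndwi(r_nir, r_swir):
    """Normalized Difference Water Index from NIR (~860 nm) and
    SWIR (~1640 nm) reflectances; wetter canopies give higher values."""
    r_nir, r_swir = np.asarray(r_nir, float), np.asarray(r_swir, float)
    return (r_nir - r_swir) / (r_nir + r_swir)

print(ndwi(0.45, 0.30))   # 0.2 for this hypothetical pixel
```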

  16. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    International Nuclear Information System (INIS)

    Chao, Ming; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi; Wei, Jie; Li, Tianfang

    2016-01-01

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error between the estimated breaths per minute (bpm) and the reference waveform bpm for the enrolled patients can be as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using CBCT projections for thoracic and abdominal patients. (paper)

  17. Water temperature forecasting and estimation using fourier series and communication theory techniques

    International Nuclear Information System (INIS)

    Long, L.L.

    1976-01-01

    Fourier series and statistical communication theory techniques are utilized in the estimation of river water temperature increases caused by external thermal inputs. An example estimate assuming a constant thermal input is demonstrated. A regression fit of the Fourier series approximation of temperature is then used to forecast daily average water temperatures. Also, a 60-day prediction of daily average water temperature is made with the aid of the Fourier regression fit by using significant Fourier components
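
    A minimal sketch of the Fourier-regression idea: fit a truncated Fourier series to daily average temperatures by least squares, then evaluate the fitted series beyond the data to forecast. The harmonic count, period and synthetic data are assumptions.

```python
import numpy as np

def fit_fourier(t, y, n_harmonics=2, period=365.0):
    """Least-squares fit of a truncated Fourier series; the returned
    function can be evaluated past the data to forecast."""
    def design(tt):
        cols = [np.ones_like(tt)]
        for k in range(1, n_harmonics + 1):
            w = 2 * np.pi * k / period
            cols += [np.cos(w * tt), np.sin(w * tt)]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    return lambda tt: design(tt) @ coef

t = np.arange(365.0)
y = 15 + 8 * np.sin(2 * np.pi * (t - 120) / 365)     # synthetic annual cycle
model = fit_fourier(t, y)
print(model(np.array([400.0, 430.0])))               # 60-day-ahead forecast
```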

  18. Optimal Pipe Size Design for Looped Irrigation Water Supply System Using Harmony Search: Saemangeum Project Area

    Science.gov (United States)

    Lee, Ho Min; Sadollah, Ali

    2015-01-01

    Water supply systems are mainly classified into branched and looped network systems. The main difference between these two systems is that, in a branched network system, the flow within each pipe is a known value, whereas in a looped network system, the flow in each pipe is considered an unknown value. Therefore, an analysis of a looped network system is a more complex task. This study aims to develop a technique for estimating the optimal pipe diameter for a looped agricultural irrigation water supply system using a harmony search algorithm, which is an optimization technique. This study mainly serves two purposes. The first is to develop an algorithm and a program for estimating a cost-effective pipe diameter for agricultural irrigation water supply systems using optimization techniques. The second is to validate the developed program by applying the proposed optimized cost-effective pipe diameter to an actual study region (Saemangeum project area, zone 6). The results suggest that the optimal design program, which applies an optimization theory and enhances user convenience, can be effectively applied for the real systems of a looped agricultural irrigation water supply. PMID:25874252

  19. Optimal Pipe Size Design for Looped Irrigation Water Supply System Using Harmony Search: Saemangeum Project Area

    Directory of Open Access Journals (Sweden)

    Do Guen Yoo

    2015-01-01

    Full Text Available Water supply systems are mainly classified into branched and looped network systems. The main difference between these two systems is that, in a branched network system, the flow within each pipe is a known value, whereas in a looped network system, the flow in each pipe is considered an unknown value. Therefore, an analysis of a looped network system is a more complex task. This study aims to develop a technique for estimating the optimal pipe diameter for a looped agricultural irrigation water supply system using a harmony search algorithm, which is an optimization technique. This study mainly serves two purposes. The first is to develop an algorithm and a program for estimating a cost-effective pipe diameter for agricultural irrigation water supply systems using optimization techniques. The second is to validate the developed program by applying the proposed optimized cost-effective pipe diameter to an actual study region (Saemangeum project area, zone 6). The results suggest that the optimal design program, which applies an optimization theory and enhances user convenience, can be effectively applied for the real systems of a looped agricultural irrigation water supply.
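
    A bare-bones continuous harmony search is sketched below to show the memory-consideration, pitch-adjustment and random-selection mechanics; a real run would evaluate a hydraulic solver with penalty terms and snap diameters to commercial sizes, which this toy objective omits.

```python
import numpy as np

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=5000, seed=0):
    """Minimal continuous harmony search minimizing `cost`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    memory = rng.uniform(lo, hi, (hms, len(lo)))       # harmony memory
    costs = np.array([cost(h) for h in memory])
    for _ in range(iters):
        pick = memory[rng.integers(hms, size=len(lo)), np.arange(len(lo))]
        new = np.where(rng.random(len(lo)) < hmcr,     # memory consideration
                       pick, rng.uniform(lo, hi))      # or random selection
        adjust = rng.random(len(lo)) < par             # pitch adjustment
        new = np.clip(new + adjust * rng.uniform(-bw, bw, len(lo)) * (hi - lo),
                      lo, hi)
        worst = np.argmax(costs)
        c = cost(new)
        if c < costs[worst]:                           # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = np.argmin(costs)
    return memory[best], costs[best]

# Toy stand-in objective over four pipe diameters (m).
print(harmony_search(lambda d: float(np.sum((d - 0.3) ** 2)),
                     bounds=[(0.1, 1.0)] * 4))
```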

  20. Exploring Embedded Path Capacity Estimation in TCP Receiver

    NARCIS (Netherlands)

    Marcondes, Cesar; Sanadidi, M.Y.; Gerla, Mario; Martinello, Magnos; de Souza Schwartz, Ramon

    2007-01-01

    Accurate estimation of network characteristics, such as capacity, based on non-intrusive measurements is a fundamental desire of several applications. For instance, P2P applications that build overlay networks can use path capacity for optimizing network performance. We present a simple technique to

  1. Optimal placement of FACTS devices using optimization techniques: A review

    Science.gov (United States)

    Gaur, Dipesh; Mathew, Lini

    2018-03-01

    Modern power systems must deal with overloading, especially in transmission networks that operate near their maximum limits. Today's power system networks tend to become unstable and prone to collapse under disturbances. Flexible AC Transmission Systems (FACTS) provide solutions to problems such as line overloading, voltage stability, losses, and power flow. FACTS can play an important role in improving the static and dynamic performance of power systems. FACTS devices require high initial investment; therefore, their location, type and rating are vital and should be optimized for maximum benefit when placed in the network. In this paper, different optimization methods such as Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) are discussed and compared for optimal location, type and rating of devices. FACTS devices such as the Thyristor Controlled Series Compensator (TCSC), Static Var Compensator (SVC) and Static Synchronous Compensator (STATCOM) are considered here. The effects of these FACTS controllers on different IEEE bus network parameters, such as generation cost, active power loss and voltage stability, are analyzed and compared.

  2. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    Science.gov (United States)

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
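
    The point-counting estimator behind the method reduces to the Cavalieri formula sketched below: points hitting the fat compartment, times the area one point represents, times the spacing between sampled slices. The counts and grid constants shown are invented for illustration.

```python
import numpy as np

def stereology_volume(point_counts, area_per_point_mm2, slice_spacing_mm):
    """Cavalieri point-counting volume estimate from sampled CT slices."""
    return np.sum(point_counts) * area_per_point_mm2 * slice_spacing_mm

# Hypothetical counts of grid points hitting visceral fat on sampled slices.
counts = [34, 41, 45, 44, 38, 29]
vol_mm3 = stereology_volume(counts, area_per_point_mm2=120.0,
                            slice_spacing_mm=40.0)
print(vol_mm3 / 1000.0, "cm^3")
```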

  3. Fitness Estimation Based Particle Swarm Optimization Algorithm for Layout Design of Truss Structures

    Directory of Open Access Journals (Sweden)

    Ayang Xiao

    2014-01-01

    Full Text Available Because vastly different variables and constraints must be considered simultaneously, truss layout optimization is a typical, difficult constrained mixed-integer nonlinear program. Moreover, the computational cost of truss analysis is often quite expensive. In this paper, a novel fitness-estimation-based particle swarm optimization algorithm with an adaptive penalty function approach (FEPSO-AP) is proposed to handle this problem. FEPSO-AP adopts a special fitness estimation strategy to evaluate similar particles in the current population, with the purpose of reducing the computational cost. Furthermore, a concise adaptive penalty function is employed by FEPSO-AP, which can handle multiple constraints effectively by making good use of historical iteration information. Four benchmark examples with fixed topologies and up to 44 design dimensions were studied to verify the generality and efficiency of the proposed algorithm. Numerical results of the present work, compared with results of other state-of-the-art hybrid algorithms in the literature, demonstrate that the convergence rate and solution quality of FEPSO-AP are competitive.

  4. Analog fault diagnosis by inverse problem technique

    KAUST Repository

    Ahmed, Rania F.

    2011-12-01

    A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thus to detect and diagnose a single fault in analog circuits. The validation of the algorithm is illustrated by applying it to a Sallen-Key second-order band-pass filter; the results show that the fault detection efficiency was 100% and that the maximum error in estimating the parameter values was 0.7%. This technique can be applied to any other linear circuit, and it can also be extended to non-linear circuits. © 2011 IEEE.

  5. Innovative Techniques for Estimating Illegal Activities in a Human-Wildlife-Management Conflict

    Science.gov (United States)

    Cross, Paul; St. John, Freya A. V.; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management. PMID:23341973

  6. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    Directory of Open Access Journals (Sweden)

    Paul Cross

    Full Text Available Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management.

  7. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    Science.gov (United States)

    Cross, Paul; St John, Freya A V; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management.
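
    For context, the classical Warner form of the randomized response estimator is sketched below; the exact RRT design used in the badger study may differ, so treat this as the generic mechanics rather than the paper's protocol.

```python
import numpy as np

def warner_rrt_estimate(yes_responses, n, p=0.75):
    """Warner's randomized response estimator. Each respondent answers the
    sensitive statement with probability p and its negation with probability
    1 - p (decided privately, e.g. by a spinner), so no single answer is
    incriminating; prevalence is recovered in aggregate."""
    lam = yes_responses / n                       # observed 'yes' proportion
    pi_hat = (lam + p - 1) / (2 * p - 1)          # prevalence estimate
    se = np.sqrt(lam * (1 - lam) / (n * (2 * p - 1) ** 2))
    return pi_hat, se

# Hypothetical survey: 130 'yes' answers out of 400 respondents.
print(warner_rrt_estimate(yes_responses=130, n=400))
```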

  8. Switching EKF technique for rotor and stator resistance estimation in speed sensorless control of IMs

    International Nuclear Information System (INIS)

    Barut, Murat; Bogosyan, Seta; Gokasan, Metin

    2007-01-01

    High performance speed sensorless control of induction motors (IMs) calls for estimation and control schemes that offer solutions to parameter uncertainties as well as to difficulties involved with accurate flux/velocity estimation at very low and zero speed. In this study, a new EKF based estimation algorithm is proposed for the solution of both problems and is applied in combination with speed sensorless direct vector control (DVC). The technique is based on the consecutive execution of two EKF algorithms, switching from one algorithm to the other every n sampling periods. The number of sampling periods, n, is determined based on the desired system performance. The switching EKF approach, thus applied, provides accurate estimation of more parameters than would be possible with a single EKF algorithm. The simultaneous and accurate estimation of the rotor resistance R_r' and stator resistance R_s, both in the transient and steady state, is an important challenge in speed sensorless IM control, and reported studies achieving satisfactory results are few, if any. With the technique proposed in this study, sensorless estimation of R_r' and R_s is achieved in transient and steady state, and in both high and low speed operation, while also estimating the unknown load torque, velocity, flux and current components. The performance demonstrated by the simulation results at zero speed, as well as at low and high speed operation, is very promising when compared with individual EKF algorithms performing either R_r' or R_s estimation, or with the few other approaches taken in past studies, which require either signal injection and/or a change of algorithms based on the speed range. The results also motivate utilization of the technique for multiple parameter estimation in a variety of control methods.

  9. Using support vector machines in the multivariate state estimation technique

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Gross, K.C.

    1999-01-01

    One approach to validate nuclear power plant (NPP) signals makes use of pattern recognition techniques. This approach often assumes that there is a set of signal prototypes that are continuously compared with the actual sensor signals. These signal prototypes are often computed based on empirical models with little or no knowledge about physical processes. A common problem of all data-based models is their limited ability to make predictions on the basis of available training data. Another problem is related to suboptimal training algorithms. Both of these potential shortcomings with conventional approaches to signal validation and sensor operability validation are successfully resolved by adopting a recently proposed learning paradigm called the support vector machine (SVM). The work presented here is a novel application of SVM for data-based modeling of system state variables in an NPP, integrated with a nonlinear, nonparametric technique called the multivariate state estimation technique (MSET), an algorithm developed at Argonne National Laboratory for a wide range of nuclear plant applications

  10. Simultaneous identification of unknown groundwater pollution sources and estimation of aquifer parameters

    Science.gov (United States)

    Datta, Bithin; Chakrabarty, Dibakar; Dhar, Anirban

    2009-09-01

    Pollution source identification is a commonly encountered problem. In the absence of prior information about flow and transport parameters, the performance of source identification models depends on the accuracy in estimation of these parameters. A methodology is developed for simultaneous pollution source identification and parameter estimation in groundwater systems. The groundwater flow and transport simulator is linked to the nonlinear optimization model as an external module. The simulator defines the flow and transport processes, and serves as a binding equality constraint. The Jacobian matrix, which determines the search direction in the nonlinear optimization model, links the groundwater flow-transport simulator and the optimization method. Performance of the proposed methodology using spatiotemporal hydraulic head values and pollutant concentration measurements is evaluated by solving illustrative problems. Two different decision model formulations are developed. The computational efficiency of these models is compared using two nonlinear optimization algorithms. The proposed methodology addresses some of the computational limitations of the embedded optimization technique, which embeds the discretized flow and transport equations as equality constraints for optimization. Solution results obtained are also found to be better than those obtained using the embedded optimization technique. The performance evaluations reported here demonstrate the potential applicability of the developed methodology for a fairly large aquifer study area with multiple unknown pollution sources.

  11. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  12. Simulation-Based Optimization of Camera Placement in the Context of Industrial Pose Estimation

    DEFF Research Database (Denmark)

    Jørgensen, Troels Bo; Iversen, Thorbjørn Mosekjær; Lindvig, Anders Prier

    2018-01-01

    In this paper, we optimize the placement of a camera in simulation in order to achieve a high success rate for a pose estimation problem. This is achieved by simulating 2D images from a stereo camera in a virtual scene. The stereo images are then used to generate 3D point clouds based on two diff...

  13. Quantum optimization for training support vector machines.

    Science.gov (United States)

    Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo

    2003-01-01

    Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterize the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, and SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments in both a synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.

  14. Use of tracer technique in estimation of methane (green house gas) from ruminant

    International Nuclear Information System (INIS)

    Singh, G.P.

    1996-01-01

    Several methods developed to estimate methane emission by ruminant livestock, such as feed-fermentation-based techniques, the use of radioisotopes as tracers, and respiration chambers, are discussed. 6 refs., 3 figs.

  15. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...

  16. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    Science.gov (United States)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. Because of this, we conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.

  17. Data Analysis Techniques for Physical Scientists

    Science.gov (United States)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  18. A study of optimization techniques in HDR brachytherapy for the prostate

    Science.gov (United States)

    Pokharel, Ghana Shyam

    . Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The optimization strategy was to create several Pareto optimal solutions by scanning the clinically relevant part of the Pareto front. This strategy was adopted to decouple optimization from decision making, so that the user could select the final solution from a pool of alternative solutions based on his/her clinical goals. The overall quality of the treatment plan improved using this approach compared to the traditional class solution approach. In fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. The simulated annealing algorithm was used to find the optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in clinically reasonable time. As this algorithm was able to create clinically acceptable plans within clinically reasonable time automatically, it is appealing for real-time procedures. Next, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy of the prostate. The algorithm, with properly tuned algorithm-specific parameters, was able to create clinically acceptable plans within clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, which is generally not considered optimal for such algorithms. This was

  19. Evidence-based optimal number of radiotherapy fractions for cancer: A useful tool to estimate radiotherapy demand.

    Science.gov (United States)

    Wong, Karen; Delaney, Geoff P; Barton, Michael B

    2016-04-01

    The recently updated optimal radiotherapy utilisation model estimated that 48.3% of all cancer patients should receive external beam radiotherapy at least once during their disease course. Adapting this model, we constructed an evidence-based model to estimate the optimal number of fractions for notifiable cancers in Australia to determine equipment and workload implications. The optimal number of fractions was calculated based on the frequency of specific clinical conditions where radiotherapy is indicated and the evidence-based recommended number of fractions for each condition. Sensitivity analysis was performed to assess the impact of variables on the model. Of the 27 cancer sites, the optimal number of fractions for the first course of radiotherapy ranged from 0 to 23.3 per cancer patient, and 1.5 to 29.1 per treatment course. Brain, prostate and head and neck cancers had the highest average number of fractions per course. Overall, the optimal number of fractions was 9.4 per cancer patient (range 8.7-10.0) and 19.4 per course (range 18.0-20.7). These results provide valuable data for radiotherapy services planning and comparison with actual practice. The model can be easily adapted by inserting population-specific epidemiological data thus making it applicable to other jurisdictions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    Science.gov (United States)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while also being fast.
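
    A minimal PSO of the kind used in the optimization stage is sketched below, with a made-up analytic surrogate standing in for the ELM model of the turning process; the bounds, coefficients and objective are all assumptions.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer minimizing `cost`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]
    return g, float(pcost.min())

# Toy surrogate over (cutting speed, feed, depth of cut).
print(pso(lambda p: (p[0] - 180) ** 2 / 1e4 + (p[1] - 0.15) ** 2 * 100
                    + (p[2] - 1.2) ** 2,
          bounds=[(100, 300), (0.05, 0.4), (0.5, 2.5)]))
```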

  1. Estimation of total Effort and Effort Elapsed in Each Step of Software Development Using Optimal Bayesian Belief Network

    Directory of Open Access Journals (Sweden)

    Fatemeh Zare Baghiabad

    2017-09-01

    Full Text Available Accurately estimating the effort needed for software development is a challenging issue. Besides estimating total effort, determining the effort expended in each software development step is very important, because mistakes in enterprise resource planning can lead to project failure. In this paper, a Bayesian belief network is proposed based on effective components and the software development process. In this model, feedback loops are considered between development steps, with return rates that differ for each project. The different return rates help determine the percentage of effort expended in each software development step distinctively. Moreover, the error measure resulting from the optimized effort estimation and the optimal coefficients to modify the model are sought. Comparison between the proposed model and other models showed that the model can estimate total effort with high accuracy (with a marginal error of about 0.114) and can estimate the effort expended in each software development step.

  2. Location estimation in wireless sensor networks using spring-relaxation technique.

    Science.gov (United States)

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
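
    A minimal sketch of our reading of the spring-relaxation idea, on synthetic data: range measurements act as spring rest lengths, and nodes with unknown positions are relaxed iteratively while anchor nodes stay fixed.

```python
import numpy as np

# Spring-relaxation sketch (synthetic data): each unknown node is pulled
# along the line to every neighbour by a force proportional to the difference
# between the measured range and the current inter-node distance.
rng = np.random.default_rng(1)
n = 8
true_pos = rng.uniform(0, 10, size=(n, 2))
anchors = {0, 1, 2}                               # nodes with known locations
unknown = [i for i in range(n) if i not in anchors]

# Noisy pairwise ranges, a stand-in for RSSI-derived distance estimates.
d_meas = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1)
d_meas += 0.05 * rng.standard_normal((n, n))

pos = true_pos.copy()
pos[unknown] = rng.uniform(0, 10, size=(len(unknown), 2))  # random initial guess

step = 0.05
for _ in range(1000):
    for i in unknown:
        force = np.zeros(2)
        for j in range(n):
            if j == i:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff) + 1e-9
            force += (dist - d_meas[i, j]) * diff / dist   # spring toward rest length
        pos[i] += step * force

print("mean error of unknown nodes:",
      np.mean(np.linalg.norm(pos[unknown] - true_pos[unknown], axis=1)))
```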

  3. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  4. Optimizing CT radiation dose based on patient size and image quality: the size-specific dose estimate method

    Energy Technology Data Exchange (ETDEWEB)

    Larson, David B. [Stanford University School of Medicine, Department of Radiology, Stanford, CA (United States)

    2014-10-15

    The principle of ALARA (dose as low as reasonably achievable) calls for dose optimization rather than dose reduction, per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently maintain their ATCM algorithms as secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach. (orig.)
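
    The SSDE computation itself is a one-liner. The sketch below uses the published exponential form of the size-conversion factor with illustrative placeholder coefficients, not the tabulated AAPM Report 204 values, which should be used in practice.

```python
import math

# Size-specific dose estimate (SSDE) sketch. SSDE rescales the scanner's
# reported CTDIvol by a size-dependent conversion factor; AAPM Report 204
# tabulates factors well fit by f = a*exp(-b*d_eff). The coefficients below
# are illustrative placeholders, not the report's tabulated values.
A, B = 3.7, 0.037

def ssde(ctdi_vol_mgy, effective_diameter_cm):
    """Approximate SSDE (mGy) from CTDIvol and patient effective diameter."""
    return ctdi_vol_mgy * A * math.exp(-B * effective_diameter_cm)

# A dose-optimization pass would compute ssde() for a sample of examinations,
# plot it against patient size, and compare with a radiologist-derived
# image-quality threshold curve.
for diameter in (15, 25, 35):
    print(diameter, "cm ->", round(ssde(10.0, diameter), 1), "mGy")
```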

  5. Optimization Techniques for Dimensionally Truncated Sparse Grids on Heterogeneous Systems

    KAUST Repository

    Deftu, A.

    2013-02-01

    Given the existing heterogeneous processor landscape dominated by CPUs and GPUs, topics such as programming productivity and performance portability have become increasingly important. In this context, an important question is how we can develop optimization strategies that cover both CPUs and GPUs. We answer this for fastsg, a library that provides functionality for handling high-dimensional functions efficiently. As it can be employed for compressing and decompressing large-scale simulation data, it finds itself at the core of a computational steering application which serves us as a test case. We describe our experience with implementing fastsg's time-critical routines for Intel CPUs and Nvidia Fermi GPUs. We show the differences and especially the similarities between our optimization strategies for the two architectures. With regard to our test case, for which achieving high speedups is a "must" for real-time visualization, we report a speedup of up to 6.2x compared to the state-of-the-art implementation of the sparse grid technique for GPUs. © 2013 IEEE.

  6. Radiation dose optimization research: Exposure technique approaches in CR imaging – A literature review

    International Nuclear Information System (INIS)

    Seeram, Euclid; Davidson, Rob; Bushong, Stewart; Swan, Hans

    2013-01-01

    The purpose of this paper is to review the literature on exposure technique approaches in Computed Radiography (CR) imaging as a means of radiation dose optimization. Specifically, the review assessed three approaches: optimization of kVp, optimization of mAs, and optimization of the Exposure Indicator (EI) in practice. Only papers dating back to 2005 were described in this review. The major themes, patterns, and common findings from the literature showed that the important features of radiation dose management strategies for digital radiography include identification of the EI as a dose control mechanism and as a "surrogate for dose management". In addition, the use of the EI has been viewed as an opportunity for dose optimization. Furthermore, optimization research has focussed mainly on optimizing the kVp in CR imaging as a means of implementing the ALARA philosophy, and studies have concentrated mainly on chest imaging using different CR systems such as those commercially available from Fuji, Agfa, Kodak, and Konica-Minolta. These studies have produced "conflicting results". In addition, a common pattern was the use of automatic exposure control (AEC), the measurement of constant effective dose, and the use of a dose-area product (DAP) meter.

  7. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram, as prior information for the design of additional sampling schemes, was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
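
    The sketch below shows spatial simulated annealing in miniature. Computing a true kriging variance is beyond a short example, so the mean distance from a fine prediction grid to its nearest sample is used as a proxy objective under the same perturb/accept/cool logic; all coordinates are synthetic.

```python
import numpy as np

# Spatial simulated annealing (SSA) sketch. The study minimizes mean kriging
# variance; here a mean nearest-sample distance serves as a cheap proxy that
# likewise drives extra points into poorly covered areas.
rng = np.random.default_rng(2)
existing = rng.uniform(0, 1, size=(50, 2))       # stand-in for the regular grid
extra = rng.uniform(0, 1, size=(10, 2))          # additional points to optimize
grid = np.stack(np.meshgrid(np.linspace(0, 1, 30),
                            np.linspace(0, 1, 30)), -1).reshape(-1, 2)

def objective(extra_pts):
    samples = np.vstack([existing, extra_pts])
    d = np.linalg.norm(grid[:, None] - samples[None], axis=-1)
    return d.min(axis=1).mean()

temp, cur = 0.05, objective(extra)
for _ in range(2000):
    cand = extra.copy()
    k = rng.integers(len(cand))
    cand[k] = np.clip(cand[k] + 0.05 * rng.standard_normal(2), 0, 1)  # jitter one point
    val = objective(cand)
    if val < cur or rng.random() < np.exp(-(val - cur) / temp):  # anneal acceptance
        extra, cur = cand, val
    temp *= 0.999                                 # geometric cooling
print("proxy objective after SSA:", round(cur, 4))
```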

  8. Thermo-economic and environmental analyses based multi-objective optimization of vapor compression–absorption cascaded refrigeration system using NSGA-II technique

    International Nuclear Information System (INIS)

    Jain, Vaibhav; Sachdeva, Gulshan; Kachhwaha, Surendra Singh; Patel, Bhavesh

    2016-01-01

    Highlights: • It addresses a multi-objective optimization study of a cascaded refrigeration system. • The cascaded system is a promising decarbonizing and energy-efficient technology. • The NSGA-II technique is used for multi-objective optimization. • Total annual product cost and irreversibility rate are simultaneously optimized. - Abstract: The present work optimizes the performance of a 170 kW vapor compression–absorption cascaded refrigeration system (VCACRS) based on combined thermodynamic, economic and environmental parameters using the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) technique. Two objective functions, the total irreversibility rate (as a thermodynamic criterion) and the total product cost (as an economic criterion) of the system, are considered simultaneously for multi-objective optimization of the VCACRS. The capital and maintenance costs of the system components, the operational cost, and the penalty cost due to CO2 emission are included in the total product cost of the system. Three optimized systems, a single-objective thermodynamic optimized, a single-objective economic optimized and a multi-objective optimized system, are analyzed and compared. The results showed that the multi-objective design balances the combined thermodynamic and total product cost criteria better than the two individual single-objective optimized designs.
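
    The core of NSGA-II, fast non-dominated sorting over the two criteria, can be sketched compactly; the objective values below are synthetic, not the VCACRS results.

```python
import numpy as np

# Non-dominated sorting in miniature: rank candidate designs scored on two
# minimization criteria (total irreversibility rate, total product cost).
rng = np.random.default_rng(3)
F = rng.uniform(0, 1, size=(20, 2))   # columns: irreversibility, product cost

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

fronts, remaining = [], set(range(len(F)))
while remaining:
    front = {i for i in remaining
             if not any(dominates(F[j], F[i]) for j in remaining if j != i)}
    fronts.append(sorted(front))
    remaining -= front

print("Pareto front (rank 0):", fronts[0])
```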

  9. Optimization and characterization of liposome formulation by mixture design.

    Science.gov (United States)

    Maherani, Behnoush; Arab-tehrany, Elmira; Kheirolomoom, Azadeh; Reshetov, Vadzim; Stebe, Marie José; Linder, Michel

    2012-02-07

    This study presents the application of the mixture design technique to develop an optimal liposome formulation by varying the type and percentage of different lipids (DOPC, POPC and DPPC) in the liposome composition. Ten lipid mixtures were generated by the simplex-centroid design technique and liposomes were prepared by the extrusion method. Liposomes were characterized with respect to size, phase transition temperature, ζ-potential, lamellarity, fluidity and efficiency in loading calcein. The results were then applied to estimate the coefficients of the mixture design model and to find the optimal lipid composition with improved entrapment efficiency, size, transition temperature, fluidity and ζ-potential. The optimized formulation was DOPC: 46%, POPC: 12% and DPPC: 42%. The optimal liposome formulation had an average diameter of 127.5 nm, a phase-transition temperature of 11.43 °C, a ζ-potential of -7.24 mV, a fluidity (1/P TMA-DPH) value of 2.87 and an encapsulation efficiency of 20.24%. The experimental characterization results of the optimal liposome formulation were in good agreement with those predicted by the mixture design technique.
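
    For a three-component mixture, the simplex-centroid design enumerates the centroids of all non-empty component subsets, giving seven base runs; the paper's ten mixtures presumably augment these with interior check points. A minimal generator:

```python
from itertools import combinations

# Simplex-centroid design for a three-component lipid mixture: one blend at
# the centroid of every non-empty subset of {DOPC, POPC, DPPC}.
components = ["DOPC", "POPC", "DPPC"]
design = []
for r in range(1, len(components) + 1):
    for subset in combinations(range(len(components)), r):
        point = [1.0 / r if i in subset else 0.0 for i in range(len(components))]
        design.append(point)

for point in design:
    print({c: round(x, 3) for c, x in zip(components, point)})
```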

  10. Optimal technique for deep breathing exercises after cardiac surgery.

    Science.gov (United States)

    Westerdahl, E

    2015-06-01

    Cardiac surgery patients often develop a restrictive pulmonary impairment and gas exchange abnormalities in the early postoperative period. Chest physiotherapy is routinely prescribed in order to reduce or prevent these complications. Besides early mobilization, positioning and shoulder girdle exercises, various breathing exercises have been implemented as a major component of postoperative care. A variety of deep breathing manoeuvres are recommended to the spontaneously breathing patient to reduce atelectasis and to improve lung function in the early postoperative period. Different breathing exercises are recommended in different parts of the world, and there is no consensus about the most effective breathing technique after cardiac surgery. Arbitrary instructions are given, and recommendations on performance and duration vary between hospitals. Deep breathing exercises are a major part of this therapy, but scientific evidence for their efficacy has been lacking until recently, and there is a lack of trials describing how postoperative breathing exercises actually should be performed. The purpose of this review is to provide a brief overview of postoperative breathing exercises for patients undergoing cardiac surgery via sternotomy, and to discuss and suggest an optimal technique for the performance of deep breathing exercises.

  11. Fusion of neural computing and PLS techniques for load estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, M.; Xue, H.; Cheng, X. [Northwestern Polytechnical Univ., Xi'an (China); Zhang, W. [Xi'an Inst. of Post and Telecommunication, Xi'an (China)

    2007-07-01

    A method to predict the electric load of a power system in real time was presented. The method is based on neurocomputing and partial least squares (PLS). Short-term load forecasts for power systems are generally determined by conventional statistical methods and Computational Intelligence (CI) techniques such as neural computing. However, statistical modeling methods often require questionable distributional assumptions as input, and neural computing is weak, particularly in determining topology. In order to overcome the problems associated with conventional techniques, the authors developed a CI hybrid model based on neural computation and PLS techniques. The theoretical foundation for the designed CI hybrid model was presented along with its application in a power system. The hybrid model is suitable for nonlinear modeling and latent structure extraction, and it can automatically determine the optimal topology to maximize generalization. The CI hybrid model provides faster convergence and better prediction results compared to the abductive networks model because it incorporates a load conversion technique as well as new transfer functions. In order to demonstrate the effectiveness of the hybrid model, load forecasting was performed on a data set obtained from the Puget Sound Power and Light Company. Compared with the abductive networks model, the CI hybrid model reduced the forecast error by 32.37 per cent on workdays, and by an average of 27.18 per cent on weekends. It was concluded that the CI hybrid model has a more powerful predictive ability. 7 refs., 1 tab., 3 figs.
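
    The abstract does not spell out the fusion, so the sketch below shows one plausible reading: PLS extracts latent components from collinear load predictors, and a small neural network maps them to load. Data and model sizes are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

# A plausible neural/PLS hybrid (not necessarily the paper's exact scheme):
# PLS handles the latent-structure extraction, the network the nonlinearity.
# Data are synthetic stand-ins for weather/calendar features and load.
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 12))                  # correlated raw predictors
X[:, 6:] = X[:, :6] + 0.1 * rng.standard_normal((300, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)

pls = PLSRegression(n_components=4).fit(X, y)
Z = pls.transform(X)                                # latent components

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(Z, y)        # nonlinear mapping
print("hybrid R^2 on training data:", net.score(Z, y))
```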

  12. Electrical Resistance Imaging of Two-Phase Flow With a Mesh Grouping Technique Based On Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Lee, Bo An; Kim, Bong Seok; Ko, Min Seok; Kim, Kyung Young; Kim, Sin

    2014-01-01

    An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images.
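
    The deterministic half of the reconstruction reduces to a damped Gauss-Newton update; the sketch below exercises it on a toy two-parameter forward model standing in for a real ERT solver.

```python
import numpy as np

# Regularized Gauss-Newton update for parameters x given measured data y and
# forward model f with Jacobian jac. f and jac below are toy stand-ins; a
# real ERT solver would supply the electrode-potential forward model.
def gauss_newton_step(x, y, f, jac, lam=1e-3):
    r = y - f(x)                          # residual: measured minus modeled data
    J = jac(x)
    H = J.T @ J + lam * np.eye(len(x))    # Levenberg-style damping for ill-posedness
    return x + np.linalg.solve(H, J.T @ r)

# Toy quadratic forward model to exercise the update.
f = lambda x: np.array([x[0]**2 + x[1], x[0] - x[1]**2])
jac = lambda x: np.array([[2*x[0], 1.0], [1.0, -2*x[1]]])

x = np.array([0.5, 0.5])
for _ in range(20):
    x = gauss_newton_step(x, np.array([2.0, -1.0]), f, jac)
print("recovered parameters:", x)
```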

  13. ELECTRICAL RESISTANCE IMAGING OF TWO-PHASE FLOW WITH A MESH GROUPING TECHNIQUE BASED ON PARTICLE SWARM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    BO AN LEE

    2014-02-01

    An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images.

  14. Electrical Resistance Imaging of Two-Phase Flow With a Mesh Grouping Technique Based On Particle Swarm Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bo An; Kim, Bong Seok; Ko, Min Seok; Kim, Kyung Young; Kim, Sin [Jeju National Univ., Jeju (Korea, Republic of)

    2014-02-15

    An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images.

  15. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which yields LES results very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation that controls both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.

  16. Estimates of optimal vitamin D status

    NARCIS (Netherlands)

    Dawson-Hughes, B.; Heaney, R.P.; Holick, M.F.; Lips, P.T.A.M.; Meunier, P.J.; Vieth, R.

    2005-01-01

    Vitamin D has captured attention as an important determinant of bone health, but there is no common definition of optimal vitamin D status. Herein, we address the question: What is the optimal circulating level of 25-hydroxyvitamin D [25(OH)D] for the skeleton? The opinions of the authors on the

  17. A new generation of the optimization techniques offers higher profits, visibility and faster reaction to market conditions

    International Nuclear Information System (INIS)

    Woolstencroft, W.

    2004-01-01

    The pace of change in the energy utility world is accelerating. New political, environmental, and competitive pressures in all European countries mandate new ways to operate and to find efficiencies. We propose a much broader use of optimization technologies, as they are starting to be practiced by leading-edge energy companies. We present a holistic case for optimization techniques at the global and local level that are integrated with distributed control systems and with each other. They yield a very high degree of transparency, high-speed optimization and fast reaction capability with complete insight into profit. This case addresses most of the pressures facing modern utility companies, and it is most appropriate for companies that operate a wide variety of generating technologies and support central processes like asset management, portfolio optimization, and utilities production planning. We present best-practice examples from industry and give indications of the gains made by those already practicing these techniques. Gains of 3 to 5% of variable operating costs are standard for fairly small IT and organizational-behaviour adjustments. (author)

  18. A Comparison of Regression Techniques for Estimation of Above-Ground Winter Wheat Biomass Using Near-Surface Spectroscopy

    Directory of Open Access Journals (Sweden)

    Jibo Yue

    2018-01-01

    Above-ground biomass (AGB) provides a vital link between solar energy consumption and yield, so its correct estimation is crucial to accurately monitor crop growth and predict yield. In this work, we estimate AGB by using 54 vegetation indexes (e.g., the Normalized Difference Vegetation Index and the Soil-Adjusted Vegetation Index) and eight statistical regression techniques: artificial neural network (ANN), multivariable linear regression (MLR), decision-tree regression (DT), boosted binary regression tree (BBRT), partial least squares regression (PLSR), random forest regression (RF), support vector machine regression (SVM), and principal component regression (PCR), which are used to analyze hyperspectral data acquired by using a field spectrophotometer. The vegetation indexes (VIs) determined from the spectra were first used to train the regression techniques for modeling and validation to select the best VI input, and then summed with white Gaussian noise to study how remote sensing errors affect the regression techniques. Next, the VIs were divided into groups of different sizes by using various sampling methods for modeling and validation to test the stability of the techniques. Finally, the AGB was estimated by using a leave-one-out cross validation with these powerful techniques. The results of the study demonstrate that, of the eight techniques investigated, PLSR and MLR perform best in terms of stability and are most suitable when high-accuracy and stable estimates are required from relatively few samples. In addition, RF is extremely robust against noise and is best suited to deal with repeated observations involving remote-sensing data (i.e., data affected by atmosphere, clouds, observation times, and/or sensor noise). Finally, the leave-one-out cross-validation method indicates that PLSR provides the highest accuracy (R2 = 0.89, RMSE = 1.20 t/ha, MAE = 0.90 t/ha, NRMSE = 0.07, CV(RMSE) = 0.18); thus, PLSR is best suited for works requiring high
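
    A leave-one-out evaluation of PLSR in the style of the study can be sketched with scikit-learn; the vegetation-index matrix and biomass values below are synthetic stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Leave-one-out cross-validation of PLSR. X stands in for the 54 vegetation
# indexes and y for measured above-ground biomass (t/ha) -- synthetic data.
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(80, 54))
y = 10 * X[:, 0] + 5 * X[:, 1] + 0.5 * rng.standard_normal(80)

y_hat = cross_val_predict(PLSRegression(n_components=5), X, y, cv=LeaveOneOut())
y_hat = np.ravel(y_hat)

rmse = np.sqrt(np.mean((y - y_hat)**2))
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(f"LOO R2 = {r2:.2f}, RMSE = {rmse:.2f} t/ha")
```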

  19. Techniques for Optimizing Surgical Scars, Part 2: Hypertrophic Scars and Keloids.

    Science.gov (United States)

    Potter, Kathryn; Konda, Sailesh; Ren, Vicky Zhen; Wang, Apphia Lihan; Srinivasan, Aditya; Chilukuri, Suneel

    2017-01-01

    Surgical management of benign or malignant cutaneous tumors may result in noticeable scars that are of great concern to patients, regardless of sex, age, or ethnicity. Techniques to optimize surgical scars are discussed in this three-part review. Part 2 focuses on scar revision for hypertrophic and keloid scars. Scar revision options for hypertrophic and keloid scars include corticosteroids, bleomycin, fluorouracil, verapamil, avotermin, hydrogel scaffold, nonablative fractional lasers, ablative and fractional ablative lasers, pulsed dye laser (PDL), flurandrenolide tape, imiquimod, onion extract, silicone, and scar massage.

  20. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    Science.gov (United States)

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological system behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by solving two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis process, the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of

  1. Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska

    Science.gov (United States)

    Bonin, J. A.; Chambers, D. P.

    2012-12-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
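
    The forward-modeling step amounts to a weighted least-squares fit of fixed basin patterns to the gridded GRACE signal; the sketch below uses toy patterns and weights.

```python
import numpy as np

# Weighted least-squares forward modeling skeleton: express the gridded
# signal d as a linear combination of fixed basin patterns (columns of G)
# and solve for basin mass trends m. G, W and d are toy stand-ins; in
# practice G holds basin kernels smoothed exactly as the GRACE fields are,
# and W down-weights noisy grid cells.
rng = np.random.default_rng(6)
n_cells, n_basins = 200, 4
G = rng.random((n_cells, n_basins))                  # basin "leakage" patterns
m_true = np.array([3.0, -1.0, 0.5, 2.0])             # cm/yr equivalent water height
d = G @ m_true + 0.2 * rng.standard_normal(n_cells)  # observed trend map
W = np.eye(n_cells)                                  # observation weights

m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
print("recovered basin trends:", m_hat.round(2))
```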

  2. Estimating of aquifer parameters from the single-well water-level measurements in response to advancing longwall mine by using particle swarm optimization

    Science.gov (United States)

    Buyuk, Ersin; Karaman, Abdullah

    2017-04-01

    We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face involves a semi-analytical function that is not suitable for conventional inversion schemes because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model creates difficulty in obtaining an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods have been used to find optimum conditions consisting of either the minimum or the maximum of a given objective function with regard to some criteria. Unlike PSO, traditional nonlinear optimization methods have been used for many hydrogeologic and geophysical engineering problems. These methods exhibit difficulties such as dependence on the initial model, the evaluation of partial derivatives required when linearizing the model, and trapping at local optima. Recently, particle swarm optimization (PSO) has become a prominent modern global optimization method, inspired by the social behaviour of bird flocks, and appears to be a reliable and powerful algorithm for complex engineering applications. PSO, which does not depend on an initial model and is a derivative-free stochastic process, appears capable of searching all possible solutions in the model space around either local or global optimum points.
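
    A minimal global-best PSO fit can be sketched end to end. Since the paper's semi-analytical mine-advance response is not given here, the classical Theis drawdown solution is used as a stand-in forward model, with assumed pumping-rate and geometry values.

```python
import numpy as np
from scipy.special import exp1

# PSO sketch for aquifer-parameter estimation, fitting the classical Theis
# solution s = Q/(4*pi*T) * W(r^2*S/(4*T*t)) to synthetic drawdown data;
# Q, r and the noise level are assumed values, not the paper's.
rng = np.random.default_rng(7)
Q, r = 0.01, 50.0                                    # m^3/s, m (assumed)
t = np.linspace(1e3, 1e5, 40)                        # s

def drawdown(T, S):
    return Q / (4 * np.pi * T) * exp1(r**2 * S / (4 * T * t))

s_obs = drawdown(5e-4, 1e-4) + 1e-3 * rng.standard_normal(t.size)

def misfit(p):
    T, S = 10.0**p                                   # search in log10 space
    return np.sum((drawdown(T, S) - s_obs)**2)

# Global-best PSO over log10(T) in [-5,-2] and log10(S) in [-6,-2].
lo, hi = np.array([-5.0, -6.0]), np.array([-2.0, -2.0])
x = rng.uniform(lo, hi, size=(30, 2)); v = np.zeros_like(x)
pbest, pval = x.copy(), np.array([misfit(p) for p in x])
for _ in range(200):
    g = pbest[pval.argmin()]                         # swarm's global best
    v = 0.7*v + 1.5*rng.random(x.shape)*(pbest - x) + 1.5*rng.random(x.shape)*(g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([misfit(p) for p in x])
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
print("estimated (T, S):", 10.0**pbest[pval.argmin()])
```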

  3. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Bojan Nemec

    2014-10-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform those of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing.

  4. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    Science.gov (United States)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method of multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator and another Bayesian method developed precisely for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Prominently, the main advantage of the new AF-based method is that it succeeds in accurately estimating the DOAs from short data snapshots and even from a single snapshot, outperforming by far the state-of-the-art techniques in both DOA estimation accuracy and computational cost.
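
    For a noiseless single snapshot, the annihilating-filter idea reduces to a short computation: the filter that annihilates the snapshot has the source modes as its polynomial roots. The sketch below illustrates this core only; the paper's estimator additionally handles noise and short snapshot blocks.

```python
import numpy as np

# Annihilating-filter DOA sketch for a ULA and one noiseless snapshot. The
# snapshot x[n] is a sum of K complex exponentials in antenna index n; a
# length-(K+1) filter h with sum_l h[l]*x[n-l] = 0 has the source modes
# z_k = exp(j*2*pi*(d/lambda)*sin(theta_k)) as polynomial roots.
K, M = 2, 16                                     # sources, antennas
d_over_lambda = 0.5
theta_true = np.deg2rad([12.0, -25.0])
n = np.arange(M)
x = sum(np.exp(2j * np.pi * d_over_lambda * np.sin(th) * n) for th in theta_true)

# Convolution system: rows are [x[n], x[n-1], ..., x[n-K]] for n = K..M-1.
A = np.array([x[i + K - np.arange(K + 1)] for i in range(M - K)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()                                # null vector = filter taps

mu = np.angle(np.roots(h))                       # spatial frequencies of the modes
theta_hat = np.rad2deg(np.arcsin(mu / (2 * np.pi * d_over_lambda)))
print("estimated DOAs (deg):", np.sort(theta_hat))
```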

  5. Estimation of the optimal number of radiotherapy fractions for breast cancer: A review of the evidence.

    Science.gov (United States)

    Wong, Karen; Delaney, Geoff P; Barton, Michael B

    2015-08-01

    There is variation in radiotherapy fractionation practice, however, there is no evidence-based benchmark for appropriate activity. An evidence-based model was constructed to estimate the optimal number of fractions for the first course of radiotherapy for breast cancer to aid in services planning and performance benchmarking. The published breast cancer radiotherapy utilisation model was adapted. Evidence-based number of fractions was added to each radiotherapy indication. The overall optimal number of fractions was calculated based on the frequency of specific clinical conditions where radiotherapy is indicated and the recommended number of fractions for each condition. Sensitivity analysis was performed to assess the impact of uncertainties on the model. For the entire Australian breast cancer patient population, the estimated optimal number of fractions per patient was 16.8, 14.6, 13.7 and 0.8 for ductal carcinoma in situ, early, advanced and metastatic breast cancer respectively. Overall, the optimal number of fractions per patient was 14.4 (range 14.4-18.7). These results allow comparison with actual practices, and workload prediction to aid in services planning. The model can be easily adapted to other countries by inserting population-specific epidemiological data, and to future changes in cancer incidence, stage distribution and fractionation recommendations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Retrieval of volcanic SO2 from HIRS/2 using optimal estimation

    Science.gov (United States)

    Miles, Georgina M.; Siddans, Richard; Grainger, Roy G.; Prata, Alfred J.; Fisher, Bradford; Krotkov, Nickolay

    2017-07-01

    We present an optimal-estimation (OE) retrieval scheme for stratospheric sulfur dioxide from the High-Resolution Infrared Radiation Sounder 2 (HIRS/2) instruments on the NOAA and MetOp platforms, an infrared radiometer that has been operational since 1979. This algorithm is an improvement upon a previous method based on channel brightness temperature differences, which demonstrated the potential for monitoring volcanic SO2 using HIRS/2. The Prata method is fast but of limited accuracy. This algorithm uses an optimal-estimation retrieval approach yielding increased accuracy for only moderate computational cost. This is principally achieved by fitting the column water vapour and accounting for its interference in the retrieval of SO2. A cloud and aerosol model is used to evaluate the sensitivity of the scheme to the presence of ash and water/ice cloud. This identifies that cloud or ash above 6 km limits the accuracy of the water vapour fit, increasing the error in the SO2 estimate. Cloud top height is also retrieved. The scheme is applied to a case study event, the 1991 eruption of Cerro Hudson in Chile. The total erupted mass of SO2 is estimated to be 2300 kT ± 600 kT. This confirms it as one of the largest events since the 1991 eruption of Pinatubo, and of comparable scale to the Northern Hemisphere eruption of Kasatochi in 2008. This retrieval method yields a minimum mass per unit area detection limit of 3 DU, which is slightly less than that for the Total Ozone Mapping Spectrometer (TOMS), the only other instrument capable of monitoring SO2 from 1979 to 1996. We show an initial comparison to TOMS for part of this eruption, with broadly consistent results. Operating in the infrared (IR), HIRS has the advantage of being able to measure both during the day and at night, and there have frequently been multiple HIRS instruments operated simultaneously for better than daily sampling. If applied to all data from the series of past and future HIRS instruments, this
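
    The linear-Gaussian update at the heart of any optimal-estimation retrieval can be stated in a few lines; the Jacobian, covariances and two-element state below are toy values, not those of the HIRS scheme.

```python
import numpy as np

# Bare-bones linear optimal-estimation step (Rodgers-style): combine a prior
# state x_a with a measurement y through the Jacobian K, weighted by prior
# and measurement-noise covariances. State: hypothetical [SO2, water vapour].
K = np.array([[1.0, 0.3],            # channel sensitivities (toy Jacobian)
              [0.2, 1.1],
              [0.5, 0.8]])
S_e = np.diag([0.05, 0.05, 0.05])    # measurement-noise covariance
S_a = np.diag([4.0, 4.0])            # prior covariance
x_a = np.array([0.0, 1.0])           # prior state
x_true = np.array([2.0, 1.5])
y = K @ x_true                        # noiseless synthetic measurements

S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - K @ x_a)
print("retrieved state:", x_hat)
print("posterior covariance diagonal:", np.diag(S_hat))
```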

  7. Location Estimation in Wireless Sensor Networks Using Spring-Relaxation Technique

    Directory of Open Access Journals (Sweden)

    Qing Zhang

    2010-05-01

    Full Text Available Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN. Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS is not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, light weight distributed algorithms based on the spring-relaxation technique for location computation, and the cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.

  8. A new technique based on Artificial Bee Colony Algorithm for optimal sizing of stand-alone photovoltaic system

    OpenAIRE

    Mohamed, Ahmed F.; Elarini, Mahdi M.; Othman, Ahmed M.

    2013-01-01

    One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony Algorithm (ABC). The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first one is the PV module output power, which is to be maximized, and the second one is the life cycle cost (LCC), whic...

  9. Optimization of MKID noise performance via readout technique for astronomical applications

    Science.gov (United States)

    Czakon, Nicole G.; Schlaerth, James A.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Glenn, Jason; Golwala, Sunil R.; Hollister, Matt I.; LeDuc, Henry G.; Mazin, Benjamin A.; Maloney, Philip R.; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Siegel, Seth; Vaillancourt, John E.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2010-07-01

    Detectors employing superconducting microwave kinetic inductance detectors (MKIDs) can be read out by measuring changes in either the resonator frequency or dissipation. We will discuss the pros and cons of both methods, in particular the readout strategies being explored for the Multiwavelength Sub/millimeter Inductance Camera (MUSIC) to be commissioned at the CSO in 2010. As predicted theoretically and observed experimentally, the frequency responsivity is larger than the dissipation responsivity, by a factor of 2-4 under typical conditions. In the absence of any other noise contributions, it should be easier to overcome amplifier noise by simply using frequency readout. The resonators, however, exhibit excess frequency noise which has been ascribed to a surface distribution of two-level fluctuators sensitive to specific device geometries and fabrication techniques. Impressive dark noise performance has been achieved using modified resonator geometries employing interdigitated capacitors (IDCs). To date, our noise measurement and modeling efforts have assumed an on-resonance readout, with the carrier power set well below the nonlinear regime. Several experimental indicators suggested to us that the optimal readout technique may in fact require a higher readout power, with the carrier tuned somewhat off resonance, and that a careful systematic study of the optimal readout conditions was needed. We will present the results of such a study, and discuss the optimum readout conditions as well as the performance that can be achieved relative to BLIP.

  10. A Suboptimal PTS Algorithm Based on Particle Swarm Optimization Technique for PAPR Reduction in OFDM Systems

    Directory of Open Access Journals (Sweden)

    Ho-Lung Hung

    2008-08-01

    A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it comes with an exhaustive search over all combinations of allowed phase weighting factors, and the search complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability: the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
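
    The quantity PSO approximates is the exhaustive PTS search, which is small enough to run directly for V = 4 subblocks. A sketch with synthetic QPSK subcarriers:

```python
import numpy as np
from itertools import product

# PAPR and a tiny exhaustive PTS baseline: partition the subcarriers into V
# disjoint subblocks, weight each subblock's time-domain signal by a phase
# factor from {1,-1,j,-j}, and keep the combination with the lowest PAPR.
rng = np.random.default_rng(8)
N, V = 64, 4
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK subcarriers

sub = np.zeros((V, N), complex)                      # disjoint subblocks
for v in range(V):
    sub[v, v*(N//V):(v+1)*(N//V)] = X[v*(N//V):(v+1)*(N//V)]
time_sub = np.fft.ifft(sub, axis=1)                  # per-subblock IFFTs (reused)

def papr_db(x):
    p = np.abs(x)**2
    return 10 * np.log10(p.max() / p.mean())

def combine(b):
    return sum(bv * tv for bv, tv in zip(b, time_sub))

best = min(product([1, -1, 1j, -1j], repeat=V), key=lambda b: papr_db(combine(b)))
print(f"PAPR without PTS: {papr_db(time_sub.sum(axis=0)):.2f} dB")
print(f"PAPR with PTS:    {papr_db(combine(best)):.2f} dB")
```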

  11. A Suboptimal PTS Algorithm Based on Particle Swarm Optimization Technique for PAPR Reduction in OFDM Systems

    Directory of Open Access Journals (Sweden)

    Lee Shu-Hong

    2008-01-01

    A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it comes with an exhaustive search over all combinations of allowed phase weighting factors, and the search complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability: the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.

  12. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, Milton; Manos, George C.

    1994-01-01

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to obtain an estimate of the free rocking response from the measured random response using the Random Decrement (RDD) Technique, and then estimate the coefficient of restitution from this free response estimate. In the paper this approach is investigated by simulating the response of a single degree
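
    A sketch of the Random Decrement idea on a simulated, randomly excited oscillator (the rocking dynamics of the paper are more complex): averaging trigger-aligned segments of the random response approximates the free decay, from which decay parameters can then be fitted.

```python
import numpy as np

# Random Decrement (RDD) sketch on a randomly excited single-degree-of-
# freedom oscillator: segments starting at every up-crossing of a trigger
# level are averaged, and the average estimates the free decay.
rng = np.random.default_rng(9)
dt, n = 0.01, 100_000
w0, zeta = 2 * np.pi * 1.5, 0.02             # 1.5 Hz, 2% damping (assumed)
y = np.zeros(n)
v = 0.0
for i in range(1, n):                         # simple Euler integration
    a = -2 * zeta * w0 * v - w0**2 * y[i - 1] + rng.standard_normal()
    v += a * dt
    y[i] = y[i - 1] + v * dt

trig, seg_len = y.std(), 400
starts = np.flatnonzero((y[:-1] < trig) & (y[1:] >= trig))   # up-crossings
starts = starts[starts < n - seg_len]
rdd = np.mean([y[s:s + seg_len] for s in starts], axis=0)    # free-decay estimate
print(f"averaged {len(starts)} segments; RDD[0] = {rdd[0]:.3f} (trigger {trig:.3f})")
```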

  13. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    Science.gov (United States)

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain imperative in the identification process in spite of the advent and accomplishments of molecular techniques. A constant boost in the use of imaging techniques in forensic anthropology research has facilitated deriving as well as revising the available population data. These methods, however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches (morphological, metric, molecular and radiographic) in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones, and hence such direct methods of sex estimation are considered to be more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods have been shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers for more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. DATA MINING WORKSPACE AS AN OPTIMIZATION PREDICTION TECHNIQUE FOR SOLVING TRANSPORT PROBLEMS

    Directory of Open Access Journals (Sweden)

    Anastasiia KUPTCOVA

    2016-09-01

    This article addresses forecasting with high-speed decision making based on careful modelling of time series data. The study uses data-mining modelling for the algorithmic optimization of transport goals. Our findings point to adequate techniques for fitting a prediction model. This model will be used to analyze future transaction costs at the frontiers of the Czech Republic. The time series prediction methods considered for the performance of prediction models in the statistics package are exponential smoothing, ARIMA and neural network approaches. The primary target of a predictive scenario in the data mining workspace is to provide modelling data faster and with more versatility than other management techniques.

  15. INCREASING OF PRECISE ESTIMATION OF OPTIMAL CRITERIA BOILER FUNCTIONING

    Directory of Open Access Journals (Sweden)

    Y. M. Skakovsk

    2016-08-01

    The results of laboratory and industrial research allowed us to offer a way to improve the accuracy of estimating the optimal criterion of boiler operation depending on fuel quality. The criterion is calculated continuously during boiler operation as the ratio of the heat delivered to production with superheated steam to the thermal energy obtained by combusting fuel (natural gas) in the boiler's furnace. The nonlinear dependence of steam enthalpy on its temperature and pressure is taken into account in the calculation, as are changes in the calorific value of natural gas depending on variations in its nitrogen content. A control algorithm and a program for the Ukrainian PLC MIC-52 are offered. The program implements two search modes for the criterion maximum: automated and automatic. The results are going to be used for upgrading the existing control system at a sugar factory.

  16. Dynamical optimization techniques for the calculation of electronic structure in solids

    International Nuclear Information System (INIS)

    Benedek, R.; Min, B.I.; Garner, J.

    1989-01-01

    The method of dynamical simulated annealing, recently introduced by Car and Parrinello, provides a new tool for electronic structure computation as well as for molecular dynamics simulation. In this paper, we explore an optimization technique that is complementary to dynamical simulated annealing, the method of steepest descents (SD). As an illustration, SD is applied to calculate the total energy of diamond-Si, a system previously treated by Car and Parrinello. The adaptation of SD to treat metallic systems is discussed and a numerical application is presented. (author) 18 refs., 3 figs

  17. Experimental evaluation of optimal Vehicle Dynamic Control based on the State Dependent Riccati Equation technique

    NARCIS (Netherlands)

    Alirezaei, M.; Kanarachos, S.A.; Scheepers, B.T.M.; Maurice, J.P.

    2013-01-01

    The development and experimental evaluation of an optimal Vehicle Dynamic Control (VDC) strategy based on the State Dependent Riccati Equation (SDRE) control technique is presented. The proposed nonlinear controller is based on a nonlinear vehicle model with nonlinear tire characteristics. A novel
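
    The generic SDRE recipe, solving an LQR Riccati equation with the state-frozen A(x) at every step, can be sketched with SciPy. The toy scalar-input system below is illustrative, not the paper's vehicle model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# SDRE control in a nutshell: write the nonlinear dynamics in state-dependent
# linear form x' = A(x)x + Bu, then at every state solve the LQR Riccati
# equation with the frozen A(x) and apply u = -R^{-1} B^T P(x) x.
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def A_of_x(x):
    # x1' = x2, x2' = -x1^3 + u  ->  factor -x1^3 as (-x1^2) * x1
    return np.array([[0.0, 1.0], [-x[0]**2, 0.0]])

def sdre_control(x):
    P = solve_continuous_are(A_of_x(x), B, Q, R)
    return -(np.linalg.solve(R, B.T @ P) @ x)

# Simulate the closed loop with simple Euler steps.
x, dt = np.array([1.5, 0.0]), 0.01
for _ in range(1000):
    u = sdre_control(x)
    dx = np.array([x[1], -x[0]**3 + u.item()])
    x = x + dt * dx
print("state after 10 s:", x)
```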

  18. Spatially Explicit Estimation of Optimal Light Use Efficiency for Improved Satellite Data Driven Ecosystem Productivity Modeling

    Science.gov (United States)

    Madani, N.; Kimball, J. S.; Running, S. W.

    2014-12-01

    Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using Biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes indicated from a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76; Mean RMSE plant trait information can explain spatial heterogeneity in LUEopt, leading to improved GPP estimates from satellite based LUE models.

  19. Third molar development: evaluation of nine tooth development registration techniques for age estimations.

    Science.gov (United States)

    Thevissen, Patrick W; Fieuws, Steffen; Willems, Guy

    2013-03-01

    Multiple third molar development registration techniques exist. Therefore, the aim of this study was to detect which third molar development registration technique is most promising to use as a tool for subadult age estimation. On a collection of 1199 panoramic radiographs, the development of all present third molars was registered following nine different registration techniques [Gleiser, Hunt (GH); Haavikko (HV); Demirjian (DM); Raungpaka (RA); Gustafson, Koch (GK); Harris, Nortje (HN); Kullman (KU); Moorrees (MO); Cameriere (CA)]. Regression models with age as the response and the third molar registration as the predictor were developed for each registration technique separately. The MO technique disclosed the highest R(2) (F 51%, M 45%) and the lowest root-mean-squared error (F 3.42 years; M 3.67 years) values, but differences with other techniques were small in magnitude. The number of stages utilized in the explored staging techniques slightly influenced the age predictions. © 2013 American Academy of Forensic Sciences.

  20. A Preconditioning Technique for First-Order Primal-Dual Splitting Method in Convex Optimization

    Directory of Open Access Journals (Sweden)

    Meng Wen

    2017-01-01

    We introduce a preconditioning technique for the first-order primal-dual splitting method. The primal-dual splitting method offers a very general framework for solving a large class of optimization problems arising in image processing. The key idea of the preconditioning technique is that the constant iteration parameters are updated self-adaptively during the iteration process. We also give a simple and easy way to choose the diagonal preconditioners while maintaining the convergence of the iterative algorithm. The efficiency of the proposed method is demonstrated on an image denoising problem. Numerical results show that the preconditioned iterative algorithm performs better than the original one.
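
    One concrete instance of such diagonal preconditioning is the Pock-Chambolle rule applied to a primal-dual iteration for an l1-regularized least-squares problem; the sketch below uses that rule on a badly scaled synthetic operator (the paper's image-denoising setting is analogous).

```python
import numpy as np

# Diagonally preconditioned primal-dual splitting (PDHG) sketch for
# min_x 0.5*||Ax - b||^2 + lam*||x||_1. Per-coordinate step sizes
# tau_j = 1/sum_i|A_ij| and sigma_i = 1/sum_j|A_ij| replace hand-tuned
# scalar steps while keeping convergence (Pock-Chambolle rule, alpha = 1).
rng = np.random.default_rng(10)
m, nvar = 60, 100
A = rng.standard_normal((m, nvar)) * rng.uniform(0.1, 3.0, nvar)  # badly scaled
x_true = np.zeros(nvar)
x_true[rng.choice(nvar, 5, replace=False)] = 1.0
b = A @ x_true
lam = 0.1

sigma = 1.0 / np.abs(A).sum(axis=1)          # dual step sizes (per row)
tau = 1.0 / np.abs(A).sum(axis=0)            # primal step sizes (per column)

x = np.zeros(nvar); x_bar = x.copy(); y = np.zeros(m)
for _ in range(3000):
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)   # prox of f*
    x_new = x - tau * (A.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0)  # soft-threshold
    x_bar = 2 * x_new - x                     # over-relaxation step
    x = x_new
print("support recovered:", np.flatnonzero(np.abs(x) > 0.1))
```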

  1. Pre-optimization of radiotherapy treatment planning: an artificial neural network classification aided technique

    International Nuclear Information System (INIS)

    Hosseini-Ashrafi, M.E.; Bagherebadian, H.; Yahaqi, E.

    1999-01-01

    A method has been developed which, by using the geometric information from sample treatment cases, selects an initial treatment plan from a given data set as a step towards treatment plan optimization. The method uses an artificial neural network (ANN) classification technique to select the best matching plan from the 'optimized' ANN database. Separate back-propagation ANN classifiers were trained using 50, 60 and 77 examples for three groups of treatment case classes (up to 21 examples from each class were used). The performance of the classifiers in selecting the correct treatment class was tested using the leave-one-out method; the networks were optimized with respect to their architecture. For the three groups used in this study, successful classification fractions of 0.83, 0.98 and 0.93 were achieved by the optimized ANN classifiers. The automated response of the ANN may be used to arrive at a pre-plan in which many treatment parameters are already identified, so a significant reduction in the steps required to arrive at the optimum plan may be achieved. Treatment planning 'experience' and also results from lengthy calculations may be used for training the ANN. (author)

  2. Self-consistent technique for estimating the dynamic yield strength of a shock-loaded material

    International Nuclear Information System (INIS)

    Asay, J.R.; Lipkin, J.

    1978-01-01

    A technique is described for estimating the dynamic yield stress in a shocked material. This method employs reloading and unloading data from a shocked state, along with a general assumption of yield and hardening behavior, to estimate the yield stress in the precompressed state. No other data are necessary for this evaluation and, therefore, the method has general applicability at high shock pressures and in materials undergoing phase transitions. In some special cases, it is also possible to estimate the complete state of stress in a shocked state. Using this method, the dynamic yield strength of aluminum at 2.06 GPa has been estimated to be 0.26 GPa. This value agrees reasonably well with previous estimates.

  3. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.
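
    A heavily simplified, single-Gaussian reduction of this inverse style-control idea is sketched below: given a trained regression mu = H s + b for the mean acoustic feature vector, the maximum-likelihood style vector s is a weighted least-squares solution. The full MRHSMM accumulates such statistics over states, streams and durations; everything here is illustrative.

    ```python
    # Toy reduction of the ML style-vector estimation: observations o_t are
    # modeled as N(H s + b, Sigma); solve for s by weighted least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    d, k, T = 6, 2, 100                 # feature dim, style dim, frames
    H = rng.normal(size=(d, k))         # regression matrix (trained beforehand)
    b = rng.normal(size=d)              # bias vector
    Sigma_inv = np.eye(d)               # inverse covariance of the Gaussian

    s_true = np.array([0.8, -0.3])      # "intensity" of two styles
    O = (H @ s_true + b) + rng.normal(0, 0.3, size=(T, d))

    # ML estimate: s = (T * H' S^-1 H)^-1 H' S^-1 sum_t (o_t - b)
    A = T * H.T @ Sigma_inv @ H
    rhs = H.T @ Sigma_inv @ (O - b).sum(axis=0)
    s_hat = np.linalg.solve(A, rhs)
    print("estimated style vector:", s_hat)
    ```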

  4. Forensic age estimation based on development of third molars: a staging technique for magnetic resonance imaging.

    Science.gov (United States)

    De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W

    2017-12-01

    The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs. Therefore a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. To develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported in a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval of the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One-hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80. For intra-observer agreement kappa ranged from 0.80 to 0.89. However, two stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years respectively. The performance to
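
    The Bayesian point estimate and prediction interval can be illustrated on an age grid as below; the stage-given-age likelihood is a made-up toy, not the Fieuws et al. (2016) model.

    ```python
    # Illustrative posterior over age given one observed stage, on an age grid.
    import numpy as np

    ages = np.linspace(14, 26, 601)               # age grid (years)
    prior = np.ones_like(ages) / ages.size        # uniform prior on 14-26

    def stage_likelihood(stage, age, n_stages=8):
        # toy model: mean stage grows linearly with age, Gaussian spread
        mean = 1 + (n_stages - 1) * (age - 14) / 12
        return np.exp(-0.5 * ((stage - mean) / 1.2) ** 2)

    post = prior * stage_likelihood(5, ages)
    post /= post.sum()
    point = float((ages * post).sum())            # posterior-mean point estimate
    cdf = post.cumsum()
    lo, hi = ages[cdf.searchsorted(0.025)], ages[cdf.searchsorted(0.975)]
    print(f"age ~ {point:.1f} y, 95% prediction interval [{lo:.1f}, {hi:.1f}]")
    ```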

  5. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    Science.gov (United States)

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

    Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation against enhanced Kalman Filter techniques: performance and robustness towards model-structure uncertainties are assessed for the Takagi-Sugeno observer and for a Linear, an Extended and an Unscented Kalman Filter. The Takagi-Sugeno observer and the enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is a systematic comparison under different design assumptions and requirements, and a numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
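
    One common way to set up the Kalman-filter side of such a benchmark is to append the unknown effective wind speed to the state as a random-walk component, as in the toy below; the one-mass turbine model and all coefficients are invented for illustration.

    ```python
    # Toy augmented-state Kalman filter: estimate an unmeasured effective wind
    # speed v from noisy rotor-speed measurements of a crude linearized model.
    import numpy as np

    dt = 0.01
    a_w, b_v = -0.2, 0.15                     # invented linearized rotor dynamics
    A = np.array([[1 + a_w * dt, b_v * dt],   # state: [rotor speed w, wind v]
                  [0.0,          1.0]])       # wind modeled as a random walk
    H = np.array([[1.0, 0.0]])                # only rotor speed is measured
    Q = np.diag([1e-6, 1e-3]); R = np.array([[1e-4]])

    rng = np.random.default_rng(0)
    x_true = np.array([1.0, 8.0]); x = np.array([1.0, 6.0])
    P = np.eye(2)
    for _ in range(2000):
        x_true = A @ x_true + rng.multivariate_normal([0, 0], Q)
        z = H @ x_true + rng.normal(0, R[0, 0] ** 0.5, 1)
        x = A @ x                                 # predict
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R                       # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
    print("estimated wind speed:", x[1], " true:", x_true[1])
    ```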

  6. Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-07-01

    The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple 'truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm² of water) gives the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr⁻¹ per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr⁻¹) and southeast (-24.2 and -27.9 cm yr⁻¹), with small mass gains (+1.4 to +7.7 cm yr⁻¹) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
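
    The inversion step can be sketched in one dimension: each basin contributes a smoothed unit pattern, and the observed leaked field is fit by least squares to recover the basin amplitudes. Gaussian smoothing stands in for GRACE's spatial truncation; the setup is illustrative only.

    ```python
    # Schematic least-squares basin inversion: fit observed (leaked) mass d as
    # d = G m, where column k of G is the smoothed unit pattern of basin k.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    n, basins = 200, [(0, 50), (50, 120), (120, 200)]
    G = np.zeros((n, len(basins)))
    for k, (i0, i1) in enumerate(basins):
        pattern = np.zeros(n); pattern[i0:i1] = 1.0
        G[:, k] = gaussian_filter1d(pattern, sigma=15)   # leakage via smoothing

    m_true = np.array([3.0, -1.0, 2.0])
    rng = np.random.default_rng(0)
    d = G @ m_true + rng.normal(0, 0.05, n)              # "observed" leaked field

    m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)        # recover basin amplitudes
    print("recovered basin amplitudes:", m_hat.round(2))
    ```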

  7. Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.

    Science.gov (United States)

    Cho, Soobum; Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can degrade power efficiency. The well-known PAPR reduction technique of dummy sequence insertion (DSI) can be a realistic solution because of its structural simplicity. However, devoting many subcarriers to the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. First, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a 16:20 ratio of WHT to randomly generated sequences gives the maximum PAPR reduction. A near-optimal number of iterations is derived to avoid exhaustive iteration. It is also shown that the proposed technique causes no bit error rate (BER) degradation in the LTE downlink system.
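
    A rough sketch of the dummy-sequence-insertion idea follows: candidate dummy sequences (rows of a Walsh-Hadamard matrix here) are placed on the null subcarriers, and the candidate minimizing the PAPR of the time-domain symbol is kept. The subcarrier layout and sizes are illustrative, not the LTE grid.

    ```python
    # Illustrative DSI: try WHT rows as dummy sequences on null subcarriers and
    # keep the one with the lowest PAPR of the IFFT output.
    import numpy as np
    from scipy.linalg import hadamard

    N = 64
    data_idx = np.arange(4, 60)                 # "used" subcarriers
    null_idx = np.setdiff1d(np.arange(N), data_idx)   # 8 null subcarriers

    def papr_db(X):
        x = np.fft.ifft(X)
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    rng = np.random.default_rng(0)
    qpsk = (rng.choice([-1, 1], len(data_idx))
            + 1j * rng.choice([-1, 1], len(data_idx))) / np.sqrt(2)
    X = np.zeros(N, dtype=complex); X[data_idx] = qpsk
    print("no dummy:   %.2f dB" % papr_db(X))

    best_papr = np.inf
    for w in hadamard(len(null_idx)).astype(float):   # candidate dummy sequences
        Xd = X.copy(); Xd[null_idx] = w               # insert dummy sequence
        best_papr = min(best_papr, papr_db(Xd))
    print("best dummy: %.2f dB" % best_papr)
    ```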

  8. Provincial carbon intensity abatement potential estimation in China: A PSO–GA-optimized multi-factor environmental learning curve method

    International Nuclear Information System (INIS)

    Yu, Shiwei; Zhang, Junjie; Zheng, Shuhong; Sun, Han

    2015-01-01

    This study aims to estimate carbon intensity abatement potential in China at the regional level by proposing a particle swarm optimization–genetic algorithm (PSO–GA) multivariate environmental learning curve estimation method. The model uses two independent variables, namely, per capita gross domestic product (GDP) and the proportion of the tertiary industry in GDP, to construct carbon intensity learning curves (CILCs), i.e., CO₂ emissions per unit of GDP, of 30 provinces in China. Instead of the traditional ordinary least squares (OLS) method, a PSO–GA intelligent optimization algorithm is used to optimize the coefficients of a learning curve. The carbon intensity abatement potentials of the 30 Chinese provinces are estimated via PSO–GA under the business-as-usual scenario. The estimation reveals the following results. (1) For most provinces, the abatement potentials from improving a unit of the proportion of the tertiary industry in GDP are higher than the potentials from raising a unit of per capita GDP. (2) The average potential of the 30 provinces in 2020 will be 37.6% relative to the 2005 emissions level. The potentials of Jiangsu, Tianjin, Shandong, Beijing, and Heilongjiang are over 60%. Ningxia is the only province without intensity abatement potential. (3) The total carbon intensity in China weighted by the GDP shares of the 30 provinces will decline by 39.4% in 2020 compared with that in 2005. This intensity cannot achieve the 40%–45% carbon intensity reduction target set by the Chinese government. Additional mitigation policies should be developed to uncover the potentials of Ningxia and Inner Mongolia. In addition, the simulation accuracy of the CILCs optimized by PSO–GA is higher than that of the CILCs optimized by the traditional OLS method. - Highlights: • A PSO–GA-optimized multi-factor environmental learning curve method is proposed. • The carbon intensity abatement potentials of the 30 Chinese provinces are estimated by
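
    A bare-bones PSO fitting the coefficients of an assumed two-factor learning curve by minimizing squared error, in place of OLS, is sketched below; the functional form and data are stand-ins, and the paper's GA crossover/mutation steps are omitted.

    ```python
    # Minimal PSO sketch: fit CI = a * gdp_pc^b * tertiary_share^c (an assumed
    # form) to synthetic observations by minimizing the sum of squared errors.
    import numpy as np

    rng = np.random.default_rng(0)
    gdp = rng.uniform(1, 10, 30); ter = rng.uniform(0.2, 0.6, 30)
    ci_obs = 2.0 * gdp ** -0.4 * ter ** -0.3 * np.exp(rng.normal(0, 0.05, 30))

    def sse(theta):
        a, b, c = theta
        return np.sum((ci_obs - a * gdp ** b * ter ** c) ** 2)

    # particle swarm: positions x, velocities v, personal/global bests
    n, dim, w, c1, c2 = 40, 3, 0.7, 1.5, 1.5
    x = rng.uniform([-0.5, -1, -1], [3, 1, 1], (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pcost = np.array([sse(p) for p in x])
    g = pbest[pcost.argmin()]
    for _ in range(300):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        cost = np.array([sse(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()]
    print("fitted (a, b, c):", g.round(3))
    ```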

  9. Estimation of dynamic reactivity using an H∞ optimal filter with a nonlinear term

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Watanabe, Koiti

    1996-01-01

    A method of nonlinear filtering is applied to the problem of estimating the dynamic reactivity of a nonlinear reactor system. The nonlinear filtering algorithm developed is a simple modification of a linear H∞ optimal filter with a nonlinear feedback loop added. The linear filter is designed on the basis of a linearized dynamical system model that consists of linearized point reactor kinetic equations and a reactivity state equation driven by a fictitious signal. The latter is artificially introduced to deal with the reactivity as a state variable. The results of the computer simulation show that the nonlinear filtering algorithm can be applied to estimate the dynamic reactivity of the nonlinear reactor system, even under relatively large reactivity disturbances.

  10. Evaluation of small area crop estimation techniques using LANDSAT- and ground-derived data. [South Dakota]

    Science.gov (United States)

    Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)

    1982-01-01

    This report covers studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.
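
    The regression approach mentioned in objective (1) can be sketched as follows: crop area is estimated from ground-sampled segments, with satellite-classified pixel counts as the auxiliary variable known for every segment (all numbers below are synthetic).

    ```python
    # Sketch of the regression estimator for crop area: ground truth is known
    # for a sample of segments; classified pixel counts are known for all.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 400                                     # all segments in the county
    pixels = rng.uniform(10, 90, N)             # classified crop pixels/segment
    truth = 0.8 * pixels + rng.normal(0, 5, N)  # true crop hectares/segment

    sample = rng.choice(N, 30, replace=False)   # ground-visited segments
    y, x = truth[sample], pixels[sample]
    b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    y_reg = y.mean() + b * (pixels.mean() - x.mean())   # regression estimator
    print("direct expansion:", N * y.mean(), " regression:", N * y_reg)
    ```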

  11. Optimization, formulation, and characterization of multiflavonoids-loaded flavanosome by bulk or sequential technique.

    Science.gov (United States)

    Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida

    2016-01-01

    This study involves the adaptation of bulk and sequential techniques to load multiple flavonoids in a single phytosome, which can be termed a "flavonosome". Three widely established and therapeutically valuable flavonoids, quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract, commercially obtained, and incorporated in a single flavonosome (QKA-phosphatidylcholine) through four different methods of synthesis - bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method by screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug-carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity against a human hepatoma cell line (HepaRG). Furthermore, the entrapment and loading efficiency of flavonoids in the optimal flavonosome were identified. Among the four synthesis methods, the sequential loading technique proved the best for the synthesis of the QKA-phosphatidylcholine flavonosome, which showed an average diameter of 375.93±33.61 nm with a zeta potential of -39.07±3.55 mV; the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacities of Q, K, and A were 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depict the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome showed no indication of toxicity toward the human hepatoma cell line in the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay, wherein even at the higher concentration of 200 µg/mL the flavonosomes retained >85% cell viability. These results suggest that the sequential loading technique may be a promising

  12. Recursive estimation techniques for detection of small objects in infrared image data

    Science.gov (United States)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
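
    The recursion can be sketched as follows: an exponentially weighted estimate of the background autocorrelation matrix feeds a matched filter for colored noise, with the forgetting factor providing the finite memory mentioned above (1-D windows stand in for image neighborhoods; all details are illustrative).

    ```python
    # Sketch: weighted autocorrelation update + matched filter for colored noise.
    import numpy as np

    rng = np.random.default_rng(0)
    d, alpha = 8, 0.95                         # window size, forgetting factor
    s = np.ones(d) / np.sqrt(d)                # known point-target signature
    R = np.eye(d)                              # running autocorrelation estimate

    def detect(x, R, s):
        Ri = np.linalg.inv(R)
        return float(s @ Ri @ x) / np.sqrt(float(s @ Ri @ s))  # colored-noise MF

    for t in range(500):                       # background-only frames
        x = rng.normal(0, 1, d) * np.linspace(1, 2, d)   # nonstationary noise
        R = alpha * R + (1 - alpha) * np.outer(x, x)     # weighted update
    print("background stat:", detect(rng.normal(0, 1, d), R, s))
    print("target stat:    ", detect(rng.normal(0, 1, d) + 4 * s, R, s))
    ```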

  13. Nonlinear adaptive optimization of biomass productivity in continuous bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Sauvaire, P; Mellichamp, D A; Agrawal, P [California Univ., Santa Barbara, CA (United States). Dept. of Chemical and Nuclear Engineering

    1991-11-01

    A novel on-line adaptive optimization algorithm is developed and applied to continuous biological reactors. The algorithm makes use of a simple nonlinear estimation model that relates either the cell-mass productivity or the cell-mass concentration to the dilution rate. On-line estimation is used to recursively identify the parameters in the nonlinear process model and to periodically calculate and steer the bioreactor to the dilution rate that yields optimum cell-mass productivity. Thus, the algorithm does not require an accurate process model, locates the optimum dilution rate online, and maintains the bioreactor at this optimum condition at all times. The features of the proposed new algorithm are compared with those of other adaptive optimization techniques presented in the literature. A detailed simulation study using three different microbial system models was conducted to illustrate the performance of the optimization algorithms. (orig.).
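
    A toy version of such model-free steering is sketched below: recursively fit a simple quadratic model of productivity versus dilution rate from noisy measurements, then step the dilution rate toward the fitted optimum. The "plant" is invented and nothing about it is assumed by the estimator, matching the spirit of the abstract.

    ```python
    # Toy adaptive optimization: online quadratic fit of productivity vs. dilution
    # rate, stepping the operating point toward the fitted maximum.
    import numpy as np

    rng = np.random.default_rng(0)
    def plant(D):                               # hidden true productivity curve
        return max(0.0, D * (1.2 - 2.0 * D)) + rng.normal(0, 0.002)

    D, hist_D, hist_P = 0.1, [], []
    for step in range(40):
        hist_D.append(D); hist_P.append(plant(D))
        if len(hist_D) >= 3:                    # fit P ~ a + b*D + c*D^2 online
            A = np.vander(np.array(hist_D), 3, increasing=True)
            a, b, c = np.linalg.lstsq(A, np.array(hist_P), rcond=None)[0]
            if c < 0:                           # model has a maximum at -b/(2c)
                D_opt = -b / (2 * c)
                D += 0.3 * (np.clip(D_opt, 0.02, 0.55) - D)  # move toward optimum
            else:
                D += 0.01                       # probe until curvature appears
        else:
            D += 0.02
    print("converged dilution rate ~", round(D, 3), "(true optimum 0.3)")
    ```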

  14. The application of mean field theory to image motion estimation.

    Science.gov (United States)

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  15. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others. These constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  16. Land ECVs from QA4ECV using an optimal estimation framework

    Science.gov (United States)

    Muller, Jan-Peter; Kharbouche, Said; Lewis, Philip; Danne, Olaf; Blessing, Simon; Giering, Ralf; Gobron, Nadine; Lanconelli, Christian; Govaerts, Yves; Schulz, Joerg; Doutriaux-Boucher, Marie; Lattanzio, Alessio; Aoun, Youva

    2017-04-01

    In the ESA-DUE GlobAlbedo project (http://www.GlobAlbedo.org), a 15-year record of land surface albedo was generated from the European VEGETATION & MERIS sensors using optimal estimation. This was based on 3 broadbands (0.4-0.7, 0.7-3, 0.4-3µm) and fused data at level-2 after converting surface BRFs from spectral narrowbands to these 3 broadbands. A 10-year record of land surface albedo climatology was generated from Collection 5 of the MODIS BRDF product for these same broadbands. This was employed as an a priori estimate for an optimal-estimation-based retrieval of land surface albedo when there were insufficient samples from the European sensors. This so-called MODIS prior was derived at 1km from the 500m MOD43A1,2 BRDF inputs every 8 days using the QA bits and the method described in the GlobAlbedo ATBD, which is available from the website (http://www.globalbedo.org/docs/GlobAlbedo_Albedo_ATBD_V4.12.pdf). In the ESA-STSE WACMOS-ET project, FastOpt generated FAPAR & LAI based on this GlobAlbedo BRDF with associated per-pixel uncertainty using the TIP framework. In the successor EU-FP7-QA4ECV* project, we have developed a 33-year record (1981-2014) of Earth surface spectral and broadband albedo (i.e. including the ocean and sea-ice) using optimal estimation for the land and, where available, relevant sensors for "instantaneous" retrievals over the poles (Kharbouche & Muller, this conference). This requires the longest possible land surface spectral and broadband BRDF record, which can only be supplied by a 16-year record of MODIS Collection 6 BRDFs at 500m, produced on a daily basis. The CEMS Big Data computer at RAL was used to generate 7 spectral-band and 3 broadband BRDFs with and without snow and snow_only. We will discuss the progress made since the start of the QA4ECV project on the production of a new fused land surface BRDF/albedo spectral and broadband CDR product based on four European sensors: MERIS, (A)ATSR(2), VEGETATION, PROBA-V and two US sensors

  17. Optimization of colorimetric DET technique for the in situ, two-dimensional measurement of iron(II) distributions in sediment porewaters

    DEFF Research Database (Denmark)

    Bennett, William W.; Teasdale, Peter R.; Welsh, David T.

    2012-01-01

    The recently developed colorimetric diffusive equilibration in thin films (DET) technique for the in situ, high-resolution measurement of iron(II) in marine sediments is optimized to allow measurement of the higher iron concentrations typical of freshwater sediment porewaters. Computer imaging … the sensitivity of the assay as required; by processing the image with different color channel filters, the sensitivity of the assay can be optimized for lower concentrations (up to 100 μmol L⁻¹) or higher concentrations (up to 2000 μmol L⁻¹) of iron(II), depending on the specific site characteristics … (II) in sediment porewaters. The detection limit of the optimized technique was 4.1 ± 0.3 μmol L⁻¹ iron(II) and relative standard deviations were less than 6%.

  18. ESTIMATION OF INSULATOR CONTAMINATIONS BY MEANS OF REMOTE SENSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    G. Han

    2016-06-01

    The accurate estimation of deposits adhering to insulators is critical to prevent the pollution flashovers that cause huge costs worldwide. The traditional evaluation method for insulator contamination (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we propose a novel evaluation framework for IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), a digital elevation model (DEM), land use and land cover, and the normalized difference vegetation index, were obtained to estimate the severity of IC, along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize the input sets under the prerequisite that the resultant set is equivalent to the full sets in terms of the ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top three decisive factors for estimating insulator contamination. On that basis, different classification algorithms, namely Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood, were utilized to estimate severity levels of IC. Ten-fold cross-validation was carried out to evaluate the performance of the different methods. SVM yielded the best overall accuracy among the three algorithms, with an overall accuracy of more than 70%, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first attempt to introduce remote sensing and the relevant data analysis techniques into the estimation of electrical insulator contamination.
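
    The final classification step can be sketched with scikit-learn as below: an SVM predicting severity levels from features such as AOD, pollution-source strength and precipitation, scored by 10-fold cross-validation (features and labels are synthetic placeholders).

    ```python
    # Hedged sketch of the SVM + 10-fold cross-validation evaluation.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))                 # [AOD, source strength, precip]
    score = X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.5, 300)
    y = np.digitize(score, [-1, 0, 1])            # four severity levels of IC

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    acc = cross_val_score(clf, X, y, cv=10)
    print("10-fold overall accuracy: %.2f" % acc.mean())
    ```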

  19. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    Directory of Open Access Journals (Sweden)

    Yongjun Ahn

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
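
    A toy of the density-optimization logic (not the paper's cost function): take total cost per unit area as a station installation term linear in density plus an access term decaying as one over the square root of density, since average spacing shrinks with the square root of density; the optimum then has a closed form that can be checked numerically.

    ```python
    # Illustrative density optimization with an assumed cost model:
    # C(rho) = c_station * rho + c_access / sqrt(rho).
    import numpy as np

    c_station = 40.0      # annualized cost per station (made-up units)
    c_access = 120.0      # access-cost coefficient (made-up units)

    def total_cost(rho):
        return c_station * rho + c_access / np.sqrt(rho)

    rho_star = (c_access / (2 * c_station)) ** (2 / 3)   # dC/d(rho) = 0
    rho_grid = np.linspace(0.1, 5, 1000)
    print("analytic optimum:", round(rho_star, 3),
          " numeric check:", round(rho_grid[total_cost(rho_grid).argmin()], 3))
    ```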
