Tian, Jialin; Madaras, Eric I.
2009-01-01
The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach is an acoustic sensor network system that can detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system, and the acoustic source signals are assumed to be airborne and far-field; similar assumptions appear in sonar applications. In solids, there are specialized methods for locating events, used in geology and in acoustic emission testing, that involve sensor arrays and depend on a discernible phase front in the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment: there are significant baffling and structural impediments to the sound path, and the source could be in the near-field of a sensor in this particular setting.
Estimates of EPSP amplitude based on changes in motoneuron discharge rate and probability.
Powers, Randall K; Türker, K S
2010-10-01
When motor units are discharging tonically, transient excitatory synaptic inputs produce an increase in the probability of spike occurrence and also increase the instantaneous discharge rate. Several researchers have proposed that these induced changes in discharge rate and probability can be used to estimate the amplitude of the underlying excitatory post-synaptic potential (EPSP). We tested two different methods of estimating EPSP amplitude by comparing the amplitude of simulated EPSPs with their effects on the discharge of rat hypoglossal motoneurons recorded in an in vitro brainstem slice preparation. The first estimation method (simplified-trajectory method) is based on the assumptions that the membrane potential trajectory between spikes can be approximated by a 10 mV post-spike hyperpolarization followed by a linear rise to the next spike and that EPSPs sum linearly with this trajectory. We hypothesized that this estimation method would not be accurate due to interspike variations in membrane conductance and firing threshold that are not included in the model, and that an alternative method based on estimating the effective distance to threshold would provide more accurate estimates of EPSP amplitude. This second method (distance-to-threshold method) uses interspike interval statistics to estimate the effective distance to threshold throughout the interspike interval and incorporates this distance-to-threshold trajectory into a threshold-crossing model. We found that the first method systematically overestimated the amplitude of small (<5 mV) EPSPs and underestimated the amplitude of large (>5 mV) EPSPs. For large EPSPs, the degree of underestimation increased with increasing background discharge rate. Estimates based on the second method were more accurate for small EPSPs than those based on the first model, but estimation errors were still large for large EPSPs. These errors were likely due to two factors: (1) the distance to threshold can only be directly…
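The simplified-trajectory idea lends itself to a compact sketch. The numbers below (10 mV post-spike hyperpolarization, 100 ms baseline interval, a purely linear ramp, threshold at 0 mV, immediate crossing) are illustrative assumptions in the spirit of the model, not the paper's recorded values:

```python
# Toy version of the simplified-trajectory method: the interspike
# trajectory is a 10 mV post-spike hyperpolarization followed by a
# linear rise to threshold (taken as 0 mV), and EPSPs sum linearly.
AHP_MV = 10.0           # depth of post-spike hyperpolarization (mV)
T_MS = 100.0            # baseline interspike interval (ms), i.e. 10 Hz
SLOPE = AHP_MV / T_MS   # ramp slope of the trajectory (mV/ms)

def interval_with_epsp(epsp_mv, arrival_ms):
    """Interval length when an EPSP of epsp_mv arrives arrival_ms after
    a spike; the cell fires early iff ramp + EPSP reaches threshold."""
    v = -AHP_MV + SLOPE * arrival_ms   # trajectory value at EPSP onset
    return arrival_ms if v + epsp_mv >= 0.0 else T_MS

def epsp_estimate(observed_interval_ms):
    """Invert the model: interval shortening times ramp slope gives the
    smallest EPSP amplitude consistent with the early spike."""
    return SLOPE * (T_MS - observed_interval_ms)

# A 5 mV EPSP arriving 60 ms after a spike fires the cell at 60 ms, but
# the inverted estimate is only 4 mV: the method can only report the
# distance to threshold at arrival time.
early = interval_with_epsp(5.0, 60.0)
est = epsp_estimate(early)
```

Even this caricature reproduces the direction of the reported bias: any EPSP larger than the remaining distance to threshold is underestimated, and the underestimation worsens as the ramp steepens (i.e., at higher background discharge rates).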
Ward, B F L; Yost, S A
2013-01-01
We present the current status of the comparisons with the respective data of the predictions of our approach of exact amplitude-based resummation in quantum field theory in two areas of investigation: precision QCD calculations of all four of us as needed for LHC physics and the resummed quantum gravity realization by one of us (B.F.L.W.) of Einstein's theory of general relativity as formulated by Feynman. The agreement between the theoretical predictions and the data exhibited continues to be encouraging.
Landslide Monitoring in Three Gorges Area by Joint Use of Phase Based and Amplitude Based Methods
Shi, Xuguo; Zhang, Lu; Liao, Mingsheng; Balz, Timo
2015-05-01
Landslides are serious geohazards in the Three Gorges area of China, especially after the impoundment of the Three Gorges Reservoir, and monitoring them is urgent for early warning and disaster prevention. In this paper, phase-based methods such as traditional differential InSAR and the small baseline subset method were used to investigate slow-moving landslides, while point-like targets offset tracking (PTOT) was used to investigate fast-moving landslides. Furthermore, to describe landslide displacement, two TerraSAR-X datasets acquired from different descending orbits were combined to obtain the three-dimensional displacements of the Shuping landslide from PTOT measurements in the azimuth and range directions.
Directory of Open Access Journals (Sweden)
Tengteng Qu
2016-10-01
Early detection and early warning are of great importance in giant landslide monitoring because of the unexpectedness and concealed nature of large-scale landslides. In China, the western mountainous areas are prone to landslides and feature many giant complex landslides, especially following the Wenchuan Earthquake in 2008. This work concentrates on a new technique, known as the "hybrid-SAR technique", that combines both phase-based and amplitude-based methods to detect and monitor large-scale landslides in Li County, Sichuan Province, southwestern China. This work aims to develop a robust methodological approach to promptly identify diverse landslides with different deformation magnitudes, sliding modes and slope geometries, even when the available satellite data are limited. The phase-based and amplitude-based techniques are used to obtain the landslide displacements from six TerraSAR-X Stripmap descending scenes acquired from November 2014 to March 2015. Furthermore, the application circumstances and influence factors of hybrid-SAR are evaluated according to four aspects: (1) quality of terrain visibility to the radar sensor; (2) landslide deformation magnitude and sliding mode; (3) impact of dense vegetation cover; and (4) sliding direction sensitivity. The results achieved from hybrid-SAR are consistent with in situ measurements. This new hybrid-SAR technique for complex giant landslide research successfully identified representative movement areas, e.g., an extremely slow earthflow, a creeping region with a displacement rate of 1 cm per month, and a typical rotational slide with a displacement rate of 2–3 cm per month downwards and towards the riverbank. Hybrid-SAR allows for a comprehensive and preliminary identification of areas with significant movement and provides reliable data support for the forecasting and monitoring of landslides.
Directory of Open Access Journals (Sweden)
D. Sümeyra Demirkıran
2014-03-01
Age estimation plays an important role in both civil law and the regulation of criminal behavior. In forensic medicine, age estimation is performed at the request of individuals as well as of the courts. This study aims to compile the methods of age estimation and to make recommendations for solving the problems encountered. Radiological methods rely on the epiphyseal lines of the bones and views of the teeth. To estimate age from bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the "Adli Tıpta Yaş Tayini (ATYT)" books are used. According to the forensic age estimations described in the ATYT book, bone age is on average two years older than chronological age, especially in puberty. For age estimation from teeth, the Demirjian method is used; over time, different methods have been developed by modifying it, but no fully accurate method has been found. Histopathological studies have examined bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronological age. Current age estimation methods raise important ethical and legal issues, especially for teenagers. It is therefore necessary to prepare bone-age atlases appropriate to our society by collecting the findings of studies in Turkey. A further recommendation is that the courts pay particular attention to age-raising trials of teenage women, with special emphasis on birth and population records.
Generalized Agile Estimation Method
Directory of Open Access Journals (Sweden)
Shilpa Bahlerao
2011-01-01
Agile cost estimation remains an open research area due to the lack of algorithmic approaches for estimating cost, size and duration. The existing algorithmic approach, the Constructive Agile Estimation Algorithm (CAEA), is an iterative estimation method that incorporates various vital factors affecting the estimates of a project. This method has many advantages but also some limitations, which may be due to factors such as the number of vital factors and the uncertainty involved in agile projects. A generalized agile estimation, however, may generate realistic estimates and eliminate the need for experts. In this paper, we propose an iterative Generalized Estimation Method (GEM) and present an algorithm based on it for agile projects, with case studies. The GEM-based algorithm covers various project domain classes and vital factors with prioritization levels. Further, it incorporates an uncertainty factor to quantify project risk when estimating cost, size and duration. It also gives project managers the flexibility to decide on the number of vital factors, the uncertainty level and the project domains, thereby maintaining agility.
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices. Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by the leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available.
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows us to select an estimator from a collection of models. We then present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
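The URE recipe can be sketched in its simplest setting. This is not the authors' covariance construction but a projection-estimator analogue under Gaussian noise with known variance; the data and variance below are invented for illustration:

```python
# Unbiased risk estimation (URE) in miniature: observe x = mu + noise
# with known noise variance SIGMA2, and choose, among projection
# estimators that keep the first k coordinates, the k minimizing an
# unbiased estimate of the quadratic risk.
SIGMA2 = 0.01
x = [5.0, 4.0, 3.0, 0.1, -0.1, 0.1, -0.1, 0.1, -0.1, 0.1]
n = len(x)

def ure(k):
    """Unbiased estimate of E||mu_hat_k - mu||^2 for the estimator that
    keeps coordinates 1..k. The true risk is k*sigma^2 + sum_{i>k} mu_i^2,
    and sum_{i>k} (x_i^2 - sigma^2) estimates the tail term unbiasedly."""
    tail = sum(xi * xi - SIGMA2 for xi in x[k:])
    return k * SIGMA2 + tail

# The selected model keeps exactly the three coordinates that carry
# signal well above the noise level.
best_k = min(range(n + 1), key=ure)
```

The oracle-inequality flavor of the abstract shows up even here: minimizing `ure` over the collection picks the same k an oracle who knew mu would pick, up to noise-level terms.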
Causal Effect Estimation Methods
2014-01-01
The relationship between two popular modeling frameworks for causal inference from observational data, the causal graphical model and the potential outcome causal model, is discussed. It is shown how some popular causal effect estimators found in applications of the potential outcome causal model, such as the inverse probability of treatment weighted estimator and the doubly robust estimator, can be obtained by using the causal graphical model. We confine ourselves to the simple case of binary outcome and treatment variables.
Software Development Cost Estimation Methods
Directory of Open Access Journals (Sweden)
Bogdan Stepien
2003-01-01
Early estimation of project size and completion time is essential for successful project planning and tracking. Multiple methods have been proposed to estimate software size and cost parameters. The suitability of an estimation method depends on many factors, such as the software application domain, product complexity, availability of historical data and team expertise. The most common and widely used estimation techniques are described and analyzed. Current research trends in software cost estimation are also presented.
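One classic algorithmic model of the kind this survey covers is Boehm's basic COCOMO. The organic-mode constants below are the published ones, but treat the sketch as an illustration of the model family rather than a recommendation:

```python
# Basic COCOMO (Boehm, 1981), organic mode: effort in person-months
# from estimated size in KLOC, and schedule in calendar months from
# effort. Constants differ for semi-detached and embedded modes.
A, B = 2.4, 1.05        # organic-mode effort constants
C, D = 2.5, 0.38        # schedule constants

def effort_pm(kloc):
    """Nominal effort in person-months for a project of size kloc."""
    return A * kloc ** B

def schedule_months(kloc):
    """Nominal development time in calendar months."""
    return C * effort_pm(kloc) ** D

e = effort_pm(10.0)        # roughly 27 person-months for 10 KLOC
t = schedule_months(10.0)  # roughly 8.7 months
```

The superlinear exponent B > 1 encodes the diseconomy of scale that most later cost models (and the agile methods discussed elsewhere in this collection) also have to confront.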
Methods of statistical model estimation
Hilbe, Joseph
2013-01-01
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method.
Gohel, Bakul; Tiwary, U. S.; Lahiri, T.
Coronary artery disease and the resulting myocardial infarction are the leading cause of death and disability in the world. The ECG is widely used as a cheap diagnostic tool for coronary artery disease but has low sensitivity under the present criteria based on ST-segment, T-wave and Q-wave changes. To increase the sensitivity of the ECG, we introduce new relative-amplitude-based features of the characteristic 'R' and 'S' ECG peaks between two leads. These relative-amplitude features show remarkable capability in discriminating myocardial infarction from healthy patterns: a backpropagation neural network classifier yields 81.82% sensitivity and 81.82% specificity. Relative amplitude may also be an efficient way to minimize the effect of body composition on ECG amplitude-based features without using any information beyond the ECG itself.
Methods for estimating the semivariogram
DEFF Research Database (Denmark)
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions... is insensitive to the choice of estimation method, but also that the uncertainties of predictions were reduced when applying maximum likelihood...
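The least-squares route compared above can be sketched on a toy transect: a method-of-moments (Matheron) empirical semivariogram followed by a crude grid-search fit of a spherical model. The data values and search grids are invented for illustration:

```python
# Empirical semivariogram of a 1-D transect, then an ordinary
# least-squares fit of a spherical model by grid search.
z = [1.0, 2.0, 3.0, 4.0, 5.0]            # illustrative transect values

def gamma_hat(h):
    """Matheron estimator: half the mean squared increment at lag h."""
    diffs = [(z[i + h] - z[i]) ** 2 for i in range(len(z) - h)]
    return 0.5 * sum(diffs) / len(diffs)

lags = [1, 2, 3]
emp = [gamma_hat(h) for h in lags]        # empirical semivariogram

def spherical(h, rng, sill):
    """Spherical model: rises to the sill at range rng, flat beyond."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

def sse(rng, sill):
    """Least-squares objective against the empirical semivariogram."""
    return sum((spherical(h, rng, sill) - g) ** 2
               for h, g in zip(lags, emp))

best = min(((sse(r, s), r, s)
            for r in [2.0, 3.0, 4.0, 5.0]
            for s in [2.0, 3.0, 4.0, 5.0]),
           key=lambda t: t[0])
```

The maximum likelihood alternative favored by the study would instead maximize the Gaussian likelihood of z under the covariance implied by the candidate model, which uses all pairs jointly rather than binned averages.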
Exact Amplitude-Based Resummation QCD Predictions and LHC Data
Ward, B F L; Yost, S A
2014-01-01
We present the current status of the comparisons with the respective data of the predictions of our approach of exact amplitude-based resummation in quantum field theory as applied to precision QCD calculations as needed for LHC physics, using the MC Herwiri1.031. The agreement between the theoretical predictions and the data exhibited continues to be encouraging.
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume consolidates the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering.
Tunnel Cost-Estimating Methods.
1981-10-01
Technical Report GL-81-10, "Tunnel Cost-Estimating Methods", by R. D. Bennett, U.S. Army Engineer Waterways Experiment Station, Vicksburg (October 1981; unclassified). The scanned source is largely illegible; the recoverable program documentation describes a LINING routine that calculates the lining costs and the formwork cost for a tunnel or shaft segment.
SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE
Institute of Scientific and Technical Information of China (English)
Ming HAN; Yuanyao DING
2004-01-01
This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced through an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, the failure rate and other parameters of exponentially and Weibull distributed populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
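In the zero-failure case the expected (E-)Bayesian estimate has a well-known closed form in one textbook variant: with no failures in n tests and a Beta(1, b) prior on the failure probability, the Bayes posterior mean is 1/(n + 1 + b), and averaging over a uniform hyperprior b ~ U(0, c) gives the estimate below. The specific prior choices are one common special case, not necessarily the paper's exact construction:

```python
import math

def bayes_zero_failure(n, b):
    """Posterior mean of the failure probability p under a Beta(1, b)
    prior after observing 0 failures in n tests: (0 + 1)/(n + 1 + b)."""
    return 1.0 / (n + 1.0 + b)

def e_bayes_closed(n, c):
    """E-Bayesian estimate: average the Bayes estimate over b ~ U(0, c);
    the integral has the closed form ln((n + 1 + c)/(n + 1)) / c."""
    return math.log((n + 1.0 + c) / (n + 1.0)) / c

def e_bayes_numeric(n, c, steps=20000):
    """Same average by the trapezoid rule, as a cross-check."""
    h = c / steps
    total = 0.5 * (bayes_zero_failure(n, 0.0) + bayes_zero_failure(n, c))
    total += sum(bayes_zero_failure(n, i * h) for i in range(1, steps))
    return total * h / c

p_hat = e_bayes_closed(10, 5.0)   # estimate with n = 10 tests, c = 5
```

Averaging over the hyperparameter rather than fixing it is what keeps the zero-failure estimate away from the degenerate value 0 while remaining insensitive to any single prior choice.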
Methods of Estimating Strategic Intentions
1982-05-01
The report's taxonomy of methods, recovered from a degraded scan: 1. Organizing Data: [fragmentary] ... of events, coding categories. 2. Weighting Data: policy capturing, Bayesian methods, correlation and variance analysis. 3. Characterizing Data: memory aids, fuzzy sets, factor analysis. 4. Assessing Covariations: actuarial models, backcasting, bootstrapping. 5. Cause and Effect Assessment: causal search, causal analysis, search trees, stepping analysis, hypotheses, regression analysis. 6. Predictions: backcasting, bootstrapping, decision [truncated].
Age Estimation Methods in Forensic Odontology
Directory of Open Access Journals (Sweden)
Phuwadon Duangto
2016-12-01
Forensically, age estimation is a crucial step in biological identification. Currently, many methods of variable accuracy are available to predict the age of dead or living persons, such as physical examination, radiographs of the left hand, and dental assessment. Age estimation using radiographic tooth development has been found to be accurate because tooth development is mainly genetically influenced and less affected by nutritional and environmental factors. The Demirjian et al. method has long been the most commonly used radiological technique for dental age estimation in many populations. This method, based on tooth developmental changes, is easy to apply since the different stages of tooth development are clearly defined. The aim of this article is to elaborate on age estimation using tooth development, with a focus on the Demirjian et al. method.
Digital Forensics Analysis of Spectral Estimation Methods
Mataracioglu, Tolga
2011-01-01
Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. In today's world, it is widely used to secure information. In this paper, the traditional spectral estimation methods are introduced and the performance of each is examined by comparing all of them. From these performance analyses, brief pros and cons of the spectral estimation methods are given. We also give a steganography demo by hiding information in a sound signal and manage to pull the information (i.e., the true frequency of the information signal) out of the sound by means of the spectral estimation methods.
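The simplest of the traditional estimators, the periodogram, already suffices for the kind of demo the abstract describes: recovering the frequency of a tone buried in a record. The signal below is synthetic and the 50 Hz tone is an invented stand-in for the hidden information signal:

```python
import numpy as np

# Periodogram-based frequency estimation: recover a 50 Hz tone from a
# noisy 1-second record sampled at 1 kHz.
rng = np.random.default_rng(0)
fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(n)

spec = np.abs(np.fft.rfft(x)) ** 2       # periodogram (up to scaling)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
f_est = freqs[1:][np.argmax(spec[1:])]   # skip the DC bin
```

Parametric alternatives surveyed in such comparisons (Yule-Walker, Burg, MUSIC) trade the periodogram's simplicity for better resolution on short records, which is the usual axis of the pros-and-cons analysis.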
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation systems, as well as to the determination of several parameters of interest in quantum optics.
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...
Methods for estimating loads transported by rivers
Directory of Open Access Journals (Sweden)
T. S. Smart
1999-01-01
Ten methods for estimating the loads of constituents in a river were tested using data from the River Don in north-east Scotland. By treating loads derived from flow and concentration data collected every two days as a truth to be predicted, the ten methods were assessed for use when concentration data are collected fortnightly or monthly by sub-sampling from the original data. Estimates of the coefficients of variation, bias and mean squared errors of the methods were compared; no method consistently outperformed all others, and different methods were appropriate for different constituents. The widely used interpolation methods can be improved upon substantially by modelling the relationship of concentration with flow or seasonality, but only if these relationships are strong enough.
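The gain from modelling the concentration-flow relationship can be seen in a toy comparison between a naive interpolation-style estimate and a flow-weighted (ratio) estimate. The flows and the linear rating c = 0.1 q below are invented; real ratings are noisier and often nonlinear:

```python
# Toy load estimation: daily flows with one storm event, and an
# illustrative linear concentration-flow rating c = 0.1 * q.
q = [10.0, 12.0, 11.0, 80.0, 40.0, 15.0, 12.0, 10.0]   # flows
c = [0.1 * qi for qi in q]                              # concentrations
true_load = sum(ci * qi for ci, qi in zip(c, q))

def naive_load(sample_idx):
    """Interpolation-style estimate: mean sampled concentration times
    total flow; ignores the concentration-flow relationship."""
    c_bar = sum(c[i] for i in sample_idx) / len(sample_idx)
    return c_bar * sum(q)

def ratio_load(sample_idx):
    """Flow-weighted (ratio) estimate: total flow times the
    flow-weighted mean concentration of the sampled days."""
    num = sum(c[i] * q[i] for i in sample_idx)
    den = sum(q[i] for i in sample_idx)
    return sum(q) * num / den

all_days = range(len(q))
# With c proportional to q, the ratio estimate is exact on full data,
# while the naive estimate is biased low whenever flow varies.
```

The bias of the naive estimate is exactly the covariance between concentration and flow that the abstract says interpolation methods throw away; the ratio estimator keeps it.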
Statistical Method of Estimating Nigerian Hydrocarbon Reserves
Directory of Open Access Journals (Sweden)
Jeffrey O. Oseh
2015-01-01
Hydrocarbon reserves are basic to planning and investment decisions in the petroleum industry, so their proper estimation is of considerable importance in oil and gas production. The estimation of hydrocarbon reserves in the Niger Delta region of Nigeria has been very popular, and very successful, in the Nigerian oil and gas industry for the past 50 years. To fully estimate the hydrocarbon potential of the Nigerian Niger Delta region, a clear understanding of the reservoir geology and production history is needed. Reserves estimation of most fields is often performed through material balance and volumetric methods; alternatively, a simple estimation model with least squares regression may be appropriate. The model used here is based on extrapolating the additional reserves due to the exploratory drilling trend and the additional-reserves factor due to revision of existing fields. This estimation model, used alongside linear regression analysis, gives improved estimates for the fields considered and hence can be applied to other Nigerian fields with recent production history.
Parameter estimation methods for chaotic intercellular networks.
Mariño, Inés P; Ullner, Ekkehard; Zaikin, Alexey
2013-01-01
We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as accelerated random search method, and the other two techniques are based on approximate Bayesian computation. The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first method based on approximate Bayesian computation is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is lesser than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
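Of the three techniques, the accelerated random search step is the easiest to sketch. Below it recovers the parameter of a single logistic map from a short trajectory by minimizing a synchronization-style error; the real study uses coupled modified repressilators, so everything here (the map, the horizon, the search bounds) is a toy stand-in:

```python
import random

# Accelerated random search (ARS): propose within a shrinking radius
# around the current best point; reset the radius on improvement.
def trajectory(r, x0=0.2, steps=8):
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

data = trajectory(3.7)                 # "observed" series, true r = 3.7

def sync_error(r):
    """Mean squared mismatch between candidate and observed series
    started from the same initial condition (a synchronization error)."""
    sim = trajectory(r)
    return sum((a - b) ** 2 for a, b in zip(sim, data)) / len(data)

random.seed(1)
lo_b, hi_b, r0, r_min = 3.5, 3.9, 0.2, 1e-4
best_r, radius = 3.6, r0
best_e = sync_error(best_r)
for _ in range(3000):
    cand = min(hi_b, max(lo_b, best_r + random.uniform(-radius, radius)))
    e = sync_error(cand)
    if e < best_e:                     # improvement: accept, reset radius
        best_r, best_e, radius = cand, e, r0
    else:                              # failure: contract the search
        radius /= 2.0
        if radius < r_min:
            radius = r0
```

The short horizon matters: chaos amplifies parameter mismatch exponentially, so a long comparison window turns the error surface into a flat plateau with a needle-thin funnel at the true parameter, which is exactly why the paper pairs the search with a synchronization-based framework.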
A simple method to estimate interwell autocorrelation
Energy Technology Data Exchange (ETDEWEB)
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
Variational Bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling; consequently, the posterior standard deviations of the genetic and residual variances were lower for the variational Bayesian method. The computing time required was much shorter with the variational Bayesian method than with Gibbs sampling.
Contour Estimation by Array Processing Methods
Directory of Open Access Journals (Sweden)
Bourennane Salah
2006-01-01
This work is devoted to the estimation of rectilinear and distorted contours in images by high-resolution methods. In the case of rectilinear contours, it has been shown that this image processing problem can be transposed into an array processing problem. The existing straight-line characterization method called subspace-based line detection (SLIDE) leads to models in which the orientations and offsets of straight lines are the desired parameters. First, a high-resolution array processing method yields the orientations of the lines. Second, their offsets can be estimated either by the well-known extension of the Hough transform or by another method, the variable speed propagation scheme, which belongs to the array processing applications field; we associate it with the method called "modified forward-backward linear prediction" (MFBLP). The signal generation process devoted to straight-line retrieval is retained for the case of distorted contour estimation. This issue is handled for the first time thanks to an inverse problem formulation and a phase model determination. The proposed method is initialized by means of the SLIDE algorithm.
Lifetime estimation methods in power transformer insulation
Directory of Open Access Journals (Sweden)
Mohammad Ali Taghikhani
2012-10-01
Mineral oil in a power transformer plays an important role in cooling, and is involved in insulation aging and in chemical reactions such as oxidation. Increases in oil temperature cause a loss of quality, so the oil should be checked regularly. Studies have been carried out on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and the other characteristics of transformer oil. In this paper, the first method for estimating the life of power transformer insulation (oil) is based on the Arrhenius law, which can quantify the loss of oil quality and estimate the remaining life. The second method studied is the prediction of paper insulation life at a temperature of 160 °C.
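The Arrhenius-law route is standardized for transformer insulation in IEEE C57.91, whose aging-acceleration factor for thermally upgraded paper is reproduced below. The 180,000-hour normal-life figure is one commonly cited end-of-life value, and the per-unit bookkeeping is a sketch rather than the paper's exact procedure:

```python
import math

# Arrhenius-law aging of transformer insulation, in the IEEE C57.91
# form for thermally upgraded paper: relative aging rate versus
# hot-spot temperature, referenced to 110 degC.
def aging_acceleration(theta_hs_c):
    """F_AA = exp(15000/383 - 15000/(theta_hs + 273)); equals 1 at the
    110 degC reference hot-spot temperature."""
    return math.exp(15000.0 / 383.0 - 15000.0 / (theta_hs_c + 273.0))

def life_consumed_pu(temps_c, hours_each, normal_life_h=180000.0):
    """Per-unit insulation life consumed by operating hours_each hours
    at each listed hot-spot temperature."""
    return sum(aging_acceleration(t) * hours_each
               for t in temps_c) / normal_life_h
```

The exponential form is why modest overtemperatures dominate remaining-life estimates: a 10 degC rise above the reference roughly halves to thirds the insulation life (the factor at 120 degC is about 2.7).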
A comprehensive estimation method for enterprise capability
Directory of Open Access Journals (Sweden)
Tetiana Kuzhda
2015-11-01
In today's highly competitive business world, the need for efficient enterprise capability management is greater than ever. As more enterprises compete on a global scale, the effective use of enterprise capability becomes imperative for improving their business activities. The definition of the socio-economic capability of the enterprise is given and the main components of enterprise capability are pointed out. A comprehensive method to estimate enterprise capability that takes into account both social and economic components is offered, and a methodical approach to the integrated estimation of enterprise capability is developed. The novelty lies in including a summary measure of the social component of enterprise capability when defining the integrated index of enterprise capability. The practical significance of the approach is that it allows enterprise capability to be assessed comprehensively by combining the social and economic estimates into a single integrated indicator. This provides a formal basis for decision making and helps allocate enterprise resources reasonably. Practical implementation of this method will reflect the current condition and trends of the enterprise, and will help in making forecasts and plans for its development and for the efficient use of its capability.
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
1995-01-01
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
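The classical Hill statistic at the core of the bootstrap scheme is a one-liner. The geometric data below are chosen so the estimate can be checked by hand; the Monte Carlo layer that resamples and averages this statistic, and its adaptation to the correlation integral, are omitted:

```python
import math

def hill(data, k):
    """Classical Hill estimator of the tail index from the k largest
    order statistics: the mean of log(x_(n-i+1) / x_(n-k)); the tail
    exponent alpha is its reciprocal."""
    xs = sorted(data)
    threshold = xs[-k - 1]
    return sum(math.log(x / threshold) for x in xs[-k:]) / k

# Geometric sample x_j = 2^j: the top-5 Hill statistic is exactly
# mean(ln 2 * [1..5]) = 3 ln 2, so alpha_hat = 1 / (3 ln 2).
data = [2.0 ** j for j in range(1, 21)]
h = hill(data, 5)
alpha_hat = 1.0 / h
```

In the bootstrap version, this statistic is recomputed over many resampled interpoint-distance sets and the results are averaged, which is what tames the estimator's notorious sensitivity to the choice of k.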
Parameter estimation methods for chaotic intercellular networks.
Directory of Open Access Journals (Sweden)
Inés P Mariño
Full Text Available We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
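The ABC idea described above can be sketched minimally, under two loud assumptions: a toy one-parameter logistic map stands in for the repressilator network, and a quantile-based acceptance rule replaces the paper's MCMC/SMC schemes:

```python
import random

def simulate(theta, x0=0.5, n=12):
    """Toy one-parameter logistic map standing in for a chaotic cell model (illustrative)."""
    xs, x = [], x0
    for _ in range(n):
        x = theta * x * (1.0 - x)
        xs.append(x)
    return xs

def abc_estimate(observed, prior_draw, n_draws=5000, keep=50):
    """ABC with a quantile acceptance rule: keep the parameter draws whose simulated
    trajectories lie closest to the observed one, then average them."""
    scored = []
    for _ in range(n_draws):
        theta = prior_draw()
        err = sum(abs(a - b) for a, b in zip(simulate(theta), observed)) / len(observed)
        scored.append((err, theta))
    scored.sort()                       # smallest synchronization-style error first
    best = [t for _, t in scored[:keep]]
    return sum(best) / len(best)

random.seed(1)
obs = simulate(3.7)                     # "observed" trajectory from theta = 3.7
est = abc_estimate(obs, lambda: random.uniform(3.6, 3.9))
```

Averaging the accepted draws mirrors the paper's strategy of averaging over low-error realizations or over the final SMC population.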
Point estimation of root finding methods
2008-01-01
This book sets out to state computationally verifiable initial conditions for predicting the immediate appearance of guaranteed and fast convergence of iterative root-finding methods. Attention is paid to iterative methods for the simultaneous determination of polynomial zeros in the spirit of Smale's point estimation theory, introduced in 1986. Some basic concepts and Smale's theory for Newton's method, together with its modifications and higher-order methods, are presented in the first two chapters. The remaining chapters contain the author's recent results on initial conditions guaranteeing convergence of a wide class of iterative methods for solving algebraic equations. These conditions are of practical interest since they depend only on available data: information about the function whose zeros are sought and the initial approximations. The convergence approach presented can be applied in designing a package for the simultaneous approximation of polynomial zeros.
The estimation method of GPS instrumental biases
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
A model for estimating global positioning system (GPS) instrumental biases and methods for calculating the relative instrumental biases of satellites and receivers are presented. Calculated results for the GPS instrumental biases, the relative instrumental biases of satellites and receivers, and the total electron content (TEC) are also shown. Finally, the stability of the GPS instrumental biases, as well as that of the satellite and receiver instrumental biases, is evaluated, indicating that they are very stable over a period of two and a half months.
Lifetime estimation methods in power transformer insulation
Mohammad Ali Taghikhani
2012-01-01
Mineral oil in the power transformer plays an important role in cooling, insulation aging, and chemical reactions such as oxidation. An increase in oil temperature causes a loss of quality, so the oil should be checked regularly. Studies have been done on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and the other characteristics of power transformer oil. In this paper the first method to estimate the life of p...
Advancing methods for global crop area estimation
King, M. L.; Hansen, M.; Adusei, B.; Stehman, S. V.; Becker-Reshef, I.; Ernst, C.; Noel, J.
2012-12-01
Cropland area estimation is a challenge, made difficult by the variety of cropping systems, including crop types, management practices, and field sizes. A MODIS-derived indicator mapping product (1) developed from 16-day MODIS composites has been used to target crop type at national scales for the stratified sampling (2) of higher spatial resolution data in a standardized approach to estimating cultivated area. A global prototype is being developed using soybean, a global commodity crop with recent LCLUC dynamics and a relatively unambiguous spectral signature, for the United States, Argentina, Brazil, and China, which together represent nearly ninety percent of soybean production. Supervised classification of soy cultivated area is performed for 40 km2 sample blocks using time-series Landsat imagery. This method, given appropriate data for representative sampling with higher spatial resolution, represents an efficient and accurate approach for large-area crop type estimation. Results for the United States sample blocks have exhibited strong agreement with the National Agricultural Statistics Service's (NASS's) Cropland Data Layer (CDL). A confusion matrix showed 91.56% agreement and a kappa of 0.67 between the two products. Field measurements and RapidEye imagery have been collected for the USA, Brazil and Argentina to further assess product accuracies. The results of this research will demonstrate the value of MODIS crop type indicator products and Landsat sample data in estimating soybean cultivated area at national scales, enabling an internally consistent global assessment of annual soybean production.
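Agreement statistics such as the kappa reported above are mechanical to compute from a confusion matrix; a sketch of Cohen's kappa, with illustrative counts (not the study's actual cross-tabulation):

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix (rows: map A, columns: map B)."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(n)) / total
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / (total * total)
    return (observed - expected) / (1.0 - expected)

# hypothetical soy / non-soy cross-tabulation (illustrative counts only)
cm = [[70, 5],
      [4, 21]]
kappa = cohens_kappa(cm)  # chance-corrected agreement for 91% raw agreement
```

Kappa discounts the agreement expected by chance from the marginal totals, which is why it can be markedly lower than the raw percent agreement.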
Estimating Predictability Redundancy and Surrogate Data Method
Pecen, L
1995-01-01
A method for estimating the theoretical predictability of time series is presented, based on information-theoretic functionals (redundancies) and the surrogate data technique. The redundancy, designed for a chosen model and a prediction horizon, evaluates the amount of information, in bits, between a model input (e.g., lagged versions of the series) and a model output (i.e., the series lagged by the prediction horizon from the model input). This value, however, is influenced by the method and precision of redundancy estimation, and is therefore a) normalized by the maximum possible redundancy (given by the precision used), and b) compared to the redundancies obtained from two types of surrogate data in order to obtain a reliable classification of a series as either unpredictable or predictable. The type of predictability (linear or nonlinear) and its level can be further evaluated. The method is demonstrated using a numerically generated time series as well as high-frequency foreign exchange data and the theoretical ...
On methods of estimating cosmological bulk flows
Nusser, Adi
2015-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, $\bf B$, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of $\bf B$ as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer $\bf B$ for either of these definitions, which coincide only for a constant velocity field. We focus on the Wiener Filtering (WF, Hoffman et al. 2015) and the Constrained Minimum Variance (CMV, Feldman et al. 2010) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute $\bf B$ in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer $\bf B$ directly from the observed velocities for the second definition of $\bf B$. The WF ...
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
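Re-estimating a power law such as "Taylor's Rule" reduces to ordinary least squares on log-transformed data. A sketch under the assumed form rate = a * tonnage**b; the exponent 0.75, the constant 0.02, and the synthetic tonnages are illustrative, not the USGS fit:

```python
import math

def fit_power_law(tonnages, rates):
    """OLS on logs: fits rate = a * tonnage**b via log(rate) = log(a) + b*log(tonnage)."""
    xs = [math.log(t) for t in tonnages]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# synthetic deposits following rate = 0.02 * tonnage**0.75 exactly (both constants illustrative)
tons = [1e6, 5e6, 2e7, 1e8, 5e8]
rates = [0.02 * t ** 0.75 for t in tons]
a, b = fit_power_law(tons, rates)  # recovers the assumed exponent and constant
```

Fitting in log space makes the multiplicative scatter typical of mine-cost data roughly homoscedastic, which is why such regressions are conventionally done on logs.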
System and method for traffic signal timing estimation
Dumazert, Julien
2015-12-30
A method and system for estimating traffic signal timing. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
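One simple way to score candidate cycle lengths against green-start transition times is circular coherence: times that are periodic with the candidate cycle produce aligned phasors. This is a sketch of the idea only; the 90 s cycle, the noise model, and the coherence score are illustrative assumptions, not the patent's actual scoring function:

```python
import cmath
import random

def cycle_score(times, cycle):
    """Coherence of event times modulo a candidate cycle (1.0 means perfectly periodic)."""
    phasors = [cmath.exp(2j * cmath.pi * t / cycle) for t in times]
    return abs(sum(phasors)) / len(phasors)

def estimate_cycle(times, candidates):
    """Pick the candidate cycle length that maximizes the coherence score."""
    return max(candidates, key=lambda c: cycle_score(times, c))

# green-start transitions observed every 90 s, with a few seconds of timing noise
random.seed(2)
starts = [k * 90.0 + random.uniform(-3.0, 3.0) for k in range(40)]
best = estimate_cycle(starts, candidates=[c / 2 for c in range(100, 300)])  # 50.0..149.5 s
```

The longer the observation window, the more sharply the score peaks at the true cycle, since small cycle errors accumulate phase drift across the window.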
A Modified Extended Bayesian Method for Parameter Estimation
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the a priori estimation is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be automatically updated during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, this new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
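Viewed as Tikhonov regularization, one iteration of the scheme described above is a ridge step whose prior mean is the previous estimate. A minimal sketch, assuming a two-parameter linear model and λ = 1 (both illustrative, not from the paper):

```python
def ridge_step(X, y, theta_prev, lam):
    """One iteration of Tikhonov-regularized least squares with the prior mean set to the
    previous estimate: minimize ||y - X theta||^2 + lam * ||theta - theta_prev||^2."""
    p = len(theta_prev)
    # normal equations: (X^T X + lam*I) theta = X^T y + lam*theta_prev
    A = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) + lam * theta_prev[i] for i in range(p)]
    # 2x2 solve by Cramer's rule (p == 2 in this sketch)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

# data consistent with theta = (2, -1); the fixed point of the iteration is the LS solution
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [2.0, -1.0, 1.0, 3.0]
theta = [0.0, 0.0]
for _ in range(30):
    theta = ridge_step(X, y, theta, lam=1.0)
```

Because the prior is re-centered on the previous iterate, the fixed point of the iteration is the unregularized least-squares solution, while each individual step stays well conditioned.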
ICA Model Order Estimation Using Clustering Method
Directory of Open Access Journals (Sweden)
P. Sovka
2007-12-01
Full Text Available In this paper a novel approach to independent component analysis (ICA) model order estimation of movement electroencephalogram (EEG) signals is described. The application is targeted at brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). Selecting only movement-related ICs might increase the BCI EEG classification score. The real number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) to estimate the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring the higher-order signal statistics. In this work, we use another approach: selection of highly correlated ICs from several ICA runs. The ICA model order estimation is done at significance level α = 0.05, and the model order is more or less dependent on the ICA algorithm and its parameters.
Quantum Estimation Methods for Quantum Illumination.
Sanz, M; Las Heras, U; García-Ripoll, J J; Solano, E; Di Candia, R
2017-02-17
Quantum illumination consists in shining quantum light on a target region immersed in a bright thermal bath with the aim of detecting the presence of a possible low-reflective object. If the signal is entangled with the receiver, then a suitable choice of the measurement offers a gain with respect to the optimal classical protocol employing coherent states. Here, we tackle this detection problem by using quantum estimation techniques to measure the reflectivity parameter of the object, showing an enhancement in the signal-to-noise ratio up to 3 dB with respect to the classical case when implementing only local measurements. Our approach employs the quantum Fisher information to provide an upper bound for the error probability, supplies the concrete estimator saturating the bound, and extends the quantum illumination protocol to non-Gaussian states. As an example, we show how Schrödinger's cat states may be used for quantum illumination.
Methods for estimating production and utilization of paper birch saplings
US Fish and Wildlife Service, Department of the Interior — Development of technique to estimate browse production and utilization. Developed a set of methods for estimating annual production and utilization of paper birch...
Enhancing Use Case Points Estimation Method Using Soft Computing Techniques
Nassif, Ali Bou; Capretz, Luiz Fernando; Ho, Danny
2016-01-01
Software estimation is a crucial task in software engineering. Software estimation encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle when the details of software have not been revealed yet. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of the important metric inputs and consequently,...
Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao ... estimation. Moreover, simulations on real-life data indicate that the NLS and aNLS methods are applicable even when reverberation is present and the noise is not white Gaussian.
A fusion method for estimate of trajectory
Institute of Scientific and Technical Information of China (English)
吴翊; 朱炬波
1999-01-01
The multiple-station method is important in missile and space tracking systems. A fusion method is presented. Based on the theory of multiple-station tracking, and starting from an investigation of the location precision of a single station, a recognition model for occasional systematic error is constructed, and a principle for preventing contamination by occasional systematic error is presented. Theoretical analysis and simulation results show that the proposed method is correct.
Directory of Open Access Journals (Sweden)
Ashot Davtian
2011-05-01
Full Text Available Two methods for estimating the number per unit volume, NV, of spherical particles are discussed: the (physical) disector (Sterio, 1984) and Saltykov's estimator (Saltykov, 1950; Fullman, 1953). A modification of Saltykov's estimator is proposed which reduces the variance. Formulae for bias and variance are given for both the disector and the improved Saltykov estimator for the case of randomly positioned particles. They enable comparison of the two estimators with respect to their precision in terms of mean squared error.
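The physical disector reduces to a simple ratio estimator: particle tops counted per probe, divided by the probe volume. A sketch with illustrative counts and probe geometry (not values from the paper):

```python
def disector_nv(counts, frame_area, height):
    """Physical disector estimate: NV = mean(Q-) / (a * h), where Q- counts particles
    seen in the reference section but not in the look-up section."""
    q_mean = sum(counts) / len(counts)
    return q_mean / (frame_area * height)

# five hypothetical disector probes; frame area 100 um^2, section spacing 5 um
nv = disector_nv([3, 2, 4, 3, 3], frame_area=100.0, height=5.0)  # particles per um^3
```

Unlike Saltykov's unfolding of a profile-diameter histogram, the disector needs no shape or size assumptions, which is the usual motivation for comparing the two estimators' variances.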
Exact Amplitude-Based Resummation in Quantum Field Theory: Recent Results
Ward, B F L
2012-01-01
We present the current status of the application of our approach of exact amplitude-based resummation in quantum field theory to two areas of investigation: precision QCD calculations of all three of us as needed for LHC physics and the resummed quantum gravity realization by one of us (B.F.L.W.) of Feynman's formulation of Einstein's theory of general relativity. We discuss recent results as they relate to experimental observations. There is reason for optimism in the attendant comparison of theory and experiment.
PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...
African Journals Online (AJOL)
2014-12-31
Dec 31, 2014 ... Analysis revealed that the MLM was the most accurate model ... obtained using the empirical method, as the same formula is used ... and applied meteorology, American Meteorological Society, October 1986, vol. 25.
portfolio optimization based on nonparametric estimation methods
Directory of Open Access Journals (Sweden)
mahsa ghandehari
2017-03-01
Full Text Available One of the major issues investors face in capital markets is deciding on an appropriate stock exchange for investing and selecting an optimal portfolio. This process is done through risk and expected-return assessment. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. However, expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, introducing conditional value at risk (CVaR) as a risk measure in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected during the winter of 1392 from the top 50 companies on the Tehran Stock Exchange, with returns considered from April of 1388 to June of 1393 (Iranian calendar). The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster.
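Empirical CVaR needs no distributional assumption: it is just the mean loss over the worst tail of the historical outcomes. A sketch with hypothetical return data; the weights and numbers are illustrative, not the Tehran Stock Exchange data:

```python
def empirical_cvar(returns, alpha=0.95):
    """Nonparametric CVaR: mean loss over the worst (1 - alpha) fraction of outcomes."""
    losses = sorted((-r for r in returns), reverse=True)  # largest losses first
    k = max(1, int(round(len(losses) * (1.0 - alpha))))   # size of the tail
    return sum(losses[:k]) / k

def portfolio_returns(weights, asset_returns):
    """Per-period portfolio returns from fixed weights and per-asset returns."""
    return [sum(w * r for w, r in zip(weights, period)) for period in asset_returns]

# two hypothetical assets over ten months (numbers illustrative, not market data)
hist = [(0.02, 0.01), (-0.03, 0.00), (0.01, -0.02), (0.04, 0.03), (-0.05, -0.01),
        (0.00, 0.02), (0.03, -0.04), (-0.02, 0.01), (0.01, 0.01), (-0.06, 0.02)]
rp = portfolio_returns((0.6, 0.4), hist)
cvar_90 = empirical_cvar(rp, alpha=0.90)  # mean of the worst 10% of outcomes
```

A CVaR-based optimizer would search over the weight vector to minimize this quantity subject to an expected-return constraint; only the risk evaluation step is shown here.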
Advancing Methods for Estimating Cropland Area
King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.
2014-12-01
Measurement and monitoring of complex and dynamic agricultural land systems is essential given increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service's Cropland Data Layer (CDL) soy map shows strong agreement in the United States for 2011, 2012, and 2013. 5 m RapidEye imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1-to-1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype, and the approach has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.
Thermodynamic properties of organic compounds estimation methods, principles and practice
Janz, George J
1967-01-01
Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the
Analysis and estimation of risk management methods
Directory of Open Access Journals (Sweden)
Kankhva Vadim Sergeevich
2016-05-01
Full Text Available At present, risk management is an integral part of state policy in all countries with developed market economies. Companies dealing with consulting services and implementation of risk management systems have carved out a niche. Unfortunately, conscious preventive risk management in Russia is still far from being a standardized process in construction company activity, which often leads to scandals and disapproval in cases of unprofessional project implementation. The authors present the results of an investigation into the modern understanding of the existing classification of methodologies and offer their own concept of a classification matrix of risk management methods. The matrix is built on an analysis of each method in the context of incoming and outgoing transformed information, which may include different elements of the risk control stages. The offered approach thus allows analyzing the possibilities of each method.
Optical method of atomic ordering estimation
Energy Technology Data Exchange (ETDEWEB)
Prutskij, T. [Instituto de Ciencias, BUAP, Privada 17 Norte, No 3417, col. San Miguel Huyeotlipan, Puebla, Pue. (Mexico); Attolini, G. [IMEM/CNR, Parco Area delle Scienze 37/A - 43010, Parma (Italy); Lantratov, V.; Kalyuzhnyy, N. [Ioffe Physico-Technical Institute, 26 Polytekhnicheskaya, St Petersburg 194021, Russian Federation (Russian Federation)
2013-12-04
It is well known that in semiconductor III-V ternary alloys grown by metal-organic vapor-phase epitaxy (MOVPE), atomically ordered regions form spontaneously during epitaxial growth. This ordering leads to bandgap reduction and to valence band splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs in quaternary semiconductor alloys. While ordering in ternary alloys is widely studied, for quaternaries there have been only a few detailed experimental studies of it, probably because of the absence of appropriate detection methods. Here we propose an optical method to reveal atomic ordering in quaternary alloys by measuring the PL emission polarization.
Autoregressive Methods for Spectral Estimation from Interferograms.
1986-09-19
Forman/Steele/Vanasse [12] phase filter approach, which approximately removes the linear phase distortion introduced into the interferogram by retidation...band interferogram for the spectrum to be analyzed. The symmetrizing algorithm, based on the Forman/Steele/Vanasse method [12] computes a phase filter from
Novel method for quantitative estimation of biofilms
DEFF Research Database (Denmark)
Syal, Kirtimaan
2017-01-01
Biofilm protects bacteria from stress and hostile environments. The crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, the biofilm layer formed at the liquid-air interphase, known as pellicle, is extremely sensitive to its washing ...
System and method for correcting attitude estimation
Josselson, Robert H. (Inventor)
2010-01-01
A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
Control and estimation methods over communication networks
Mahmoud, Magdi S
2014-01-01
This book provides a rigorous framework in which to study problems in the analysis, stability and design of networked control systems. Four dominant sources of difficulty are considered: packet dropouts, communication bandwidth constraints, parametric uncertainty, and time delays. Past methods and results are reviewed from a contemporary perspective, present trends are examined, and future possibilities proposed. Emphasis is placed on robust and reliable design methods. New control strategies for improving the efficiency of sensor data processing and reducing associated time delay are presented. The coverage provided features: · an overall assessment of recent and current fault-tolerant control algorithms; · treatment of several issues arising at the junction of control and communications; · key concepts followed by their proofs and efficient computational methods for their implementation; and · simulation examples (including TrueTime simulations) to...
Bayesian methods to estimate urban growth potential
Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.
2017-01-01
Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.
METHOD ON ESTIMATION OF DRUG'S PENETRATED PARAMETERS
Institute of Scientific and Technical Information of China (English)
刘宇红; 曾衍钧; 许景锋; 张梅
2004-01-01
Transdermal drug delivery system (TDDS) is a new method for drug delivery. The analysis of plenty of experiments in vitro can lead to a suitable mathematical model for the description of the process of the drug's penetration through the skin, together with the important parameters that are related to the characters of the drugs.After the research work of the experiments data,a suitable nonlinear regression model was selected. Using this model, the most important parameter-penetrated coefficient of 20 drugs was computed.In the result one can find, this work supports the theory that the skin can be regarded as singular membrane.
Understanding Rasch measurement: estimation methods for Rasch measures.
Linacre, J M
1999-01-01
Rasch parameter estimation methods can be classified as non-iterative and iterative. Non-iterative methods include the normal approximation algorithm (PROX) for complete dichotomous data. Iterative methods fall into 3 types. Datum-by-datum methods include Gaussian least-squares, minimum chi-square, and the pairwise (PAIR) method. Marginal methods without distributional assumptions include conditional maximum-likelihood estimation (CMLE), joint maximum-likelihood estimation (JMLE) and log-linear approaches. Marginal methods with distributional assumptions include marginal maximum-likelihood estimation (MMLE) and the normal approximation algorithm (PROX) for missing data. Estimates from all methods are characterized by standard errors and quality-control fit statistics. Standard errors can be local (defined relative to the measure of a particular item) or general (defined relative to the abstract origin of the scale). They can also be ideal (as though the data fit the model) or inflated by the misfit to the model present in the data. Five computer programs, implementing different estimation methods, produce statistically equivalent estimates. Nevertheless, comparing estimates from different programs requires care.
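The PROX algorithm for complete dichotomous data can be sketched as alternating normal-approximation updates: raw-score logits for persons and items, each expanded for the spread of the other. The 2.9 divisor is the standard PROX variance adjustment; the toy response matrix is illustrative:

```python
import math
import statistics

def prox(data):
    """Normal-approximation (PROX) estimates for complete dichotomous Rasch data.
    data[n][i] is 1 if person n succeeded on item i, else 0. Raw scores must not
    be zero or perfect."""
    N, L = len(data), len(data[0])
    r = [sum(row) for row in data]                        # person raw scores
    s = [sum(row[i] for row in data) for i in range(L)]   # item raw scores
    b = [math.log(rn / (L - rn)) for rn in r]             # person logits
    d = [math.log((N - si) / si) for si in s]             # item logits
    d = [di - statistics.mean(d) for di in d]             # anchor items at mean zero
    for _ in range(5):
        # expansion factors: each set of logits is widened for the spread of the other
        xb = math.sqrt(1.0 + statistics.pvariance(d) / 2.9)
        xd = math.sqrt(1.0 + statistics.pvariance(b) / 2.9)
        b = [xb * math.log(rn / (L - rn)) for rn in r]
        d = [xd * math.log((N - si) / si) for si in s]
        d = [di - statistics.mean(d) for di in d]
    return b, d

# toy response matrix: 5 persons x 4 items (illustrative)
data = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 0],
        [1, 1, 0, 1]]
b, d = prox(data)
```

Higher raw scores map to higher person measures and lower item scores to higher difficulties, preserving the ordering that the iterative methods in the article also recover.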
Robust time and frequency domain estimation methods in adaptive control
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Methods of gas hydrate concentration estimation with field examples
Digital Repository Service at National Institute of Oceanography (India)
Kumar, D.; Dash, R.; Dewangan, P.
Different methods of gas hydrate concentration estimation that make use of data from measurements of seismic properties, electrical resistivity, chlorinity, porosity, density, and temperature are summarized in this paper. We demonstrate the methods...
A least squares estimation method for the linear learning model
B. Wierenga (Berend)
1978-01-01
The author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a
Carbon footprint: current methods of estimation.
Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker
2011-07-01
Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing serious global warming and its associated consequences. Following the rule that only what is measurable is manageable, the greenhouse gas intensiveness of different products, bodies, and processes is being measured worldwide and expressed as their carbon footprints. The methodologies for carbon footprint calculation are still evolving, and the carbon footprint is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated, and is being commercialized in, all areas of life and the economy, but there is little coherence among studies in how carbon footprints are defined and calculated. There are disagreements over the selection of gases and the order of emissions to be covered in footprint calculations. Greenhouse gas accounting standards are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Since carbon footprinting is intended to be a tool guiding the relevant emission cuts and verifications, its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
Joint 2-D DOA and Noncircularity Phase Estimation Method
Directory of Open Access Journals (Sweden)
Wang Ling
2012-03-01
Classical joint estimation methods require a large amount of calculation and multidimensional search. To avoid these shortcomings, a novel joint two-dimensional (2-D) direction of arrival (DOA) and noncircularity phase estimation method based on three orthogonal linear arrays is proposed. The 3-D parameter estimation problem can be transformed into three parallel 2-D parameter estimation problems according to the characteristics of the three orthogonal linear arrays. Furthermore, each 2-D parameter estimation problem can be reduced to 1-D estimation by simultaneously exploiting the rotational invariance property of the signal subspace and the orthogonality of the noise subspace in every subarray. Ultimately, the algorithm achieves joint estimation and pairing of the parameters with a single eigendecomposition of the extended covariance matrix. The proposed algorithm is applicable in low-SNR and small-snapshot scenarios and can estimate 2(M − 1) signals. Simulation results verify that the proposed algorithm is effective.
A Fast LMMSE Channel Estimation Method for OFDM Systems
Directory of Open Access Journals (Sweden)
Zhou Wen
2009-01-01
A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for orthogonal frequency division multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require prior statistical knowledge of the channel and avoids inverting a large matrix by using the fast Fourier transform (FFT). Therefore, the computational complexity is reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and conventional LMMSE estimation are derived. Numerical results show that the NMSE of the proposed method is very close to that of conventional LMMSE estimation, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
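The conventional LMMSE estimator that the abstract improves on can be sketched as follows. This is the textbook Wiener-filter form, H_lmmse = R_hh (R_hh + (beta/SNR) I)^(-1) H_ls, not the paper's FFT-based fast variant; the function name and parameters are our own.

```python
import numpy as np

def lmmse_estimate(h_ls, r_hh, snr, beta=1.0):
    """Conventional LMMSE channel estimate from a least-squares estimate.

    h_ls : least-squares channel estimate at the pilot tones, shape (N,)
    r_hh : channel autocorrelation matrix, shape (N, N)
    snr  : average signal-to-noise ratio (linear scale)
    beta : constellation-dependent constant
    """
    n = r_hh.shape[0]
    # Wiener smoothing: pull the noisy LS estimate toward the channel statistics.
    w = r_hh @ np.linalg.inv(r_hh + (beta / snr) * np.eye(n))
    return w @ h_ls
```

The matrix inverse here is exactly the large-dimension operation the proposed method replaces with FFT processing.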
A novel TOA estimation method with effective NLOS error reduction
Institute of Scientific and Technical Information of China (English)
ZHANG Yi-heng; CUI Qi-mei; LI Yu-xiang; ZHANG Ping
2008-01-01
It is well known that non-line-of-sight (NLOS) error has been the major factor impeding improvements in accuracy for time of arrival (TOA) estimation and wireless positioning. This article proposes a novel TOA estimation method that reduces the NLOS error by 60% compared with the traditional timing and synchronization method. By constructing orthogonal training sequences, the method converts traditional TOA estimation into detection of the first arrival path (FAP) in the NLOS multipath environment, and then estimates the TOA using round-trip transmission (RTT) technology. Both theoretical analysis and numerical simulations show that the proposed method achieves better performance than traditional methods.
Estimating Tree Height-Diameter Models with the Bayesian Method
Directory of Open Access Journals (Sweden)
Xiongqing Zhang
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. Common methods for estimating height-diameter models take the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares (NLS) and maximum likelihood (ML) methods. The Bayesian method has the distinctive advantage over classical methods that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both approaches identified the Weibull model as the "best" model on data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands on predicted values than those of the classical method, and the credible bands on parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori (EAP) estimation methods as a function of the number of aptitude test items. The variance reflects the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Research on the estimation method for Earth rotation parameters
Yao, Yibin
2008-12-01
In this paper, methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. To estimate ERP, two different approaches are considered: one is the parameter transformation method, and the other is direct adjustment with restrictive conditions. The daily IGS SINEX files produced by GPS tracking stations can be used to estimate ERP, and the parameter transformation method simplifies the process. The results indicate that a systematic error will be present in ERP estimated from GPS observations alone. Why this distinct systematic error exists in the ERP derived from daily GPS SINEX files, whether it affects the estimation of other parameters, and how large its influence is, all require further study.
An evaluation of methods for estimating decadal stream loads
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between
Evaluating maximum likelihood estimation methods to determine the hurst coefficients
Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.
1999-12-01
A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased estimates of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
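S-MLE itself is not reproduced here; as a simpler hedged illustration of estimating H, the aggregated-variance method below exploits the scaling Var(block mean at size m) ∝ m^(2H−2) for fGn-like series. The function name and default block sizes are our own choices.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst coefficient H.

    For an fGn-like series, the variance of block means scales as
    m**(2H - 2); H is recovered from the slope of log-variance vs log-m.
    """
    x = np.asarray(x, dtype=float)
    logs_m, logs_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1.0 + slope / 2.0
```

For white noise the block-mean variance decays as 1/m (slope −1), so the estimate is near H = 0.5, the boundary below which no long memory is present.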
Statistical methods for cosmological parameter selection and estimation
Liddle, Andrew R
2009-01-01
The estimation of cosmological parameters from precision observables is an important industry with crucial ramifications for particle physics. This article discusses the statistical methods presently used in cosmological data analysis, highlighting the main assumptions and uncertainties. The topics covered are parameter estimation, model selection, multi-model inference, and experimental design, all primarily from a Bayesian perspective.
Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence
Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.
2008-01-01
This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…
WAVELET BASED SPECTRAL CORRELATION METHOD FOR DPSK CHIP RATE ESTIMATION
Institute of Scientific and Technical Information of China (English)
Li Yingxiang; Xiao Xianci; Tai Hengming
2004-01-01
A wavelet-based spectral correlation algorithm to detect and estimate BPSK signal chip rate is proposed. Simulation results show that the proposed method can correctly estimate the BPSK signal chip rate, which may be corrupted by the quadratic characteristics of the spectral correlation function, in a low SNR environment.
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Performance of sampling methods to estimate log characteristics for wildlife.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton
2004-01-01
Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...
A Novel Monopulse Angle Estimation Method for Wideband LFM Radars
Directory of Open Access Journals (Sweden)
Yi-Xiong Zhang
2016-06-01
Traditional monopulse angle estimation is mainly based on phase-comparison and amplitude-comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has received little attention in previous work. As noise in wideband radars has larger bandwidth than in narrowband radars, the challenge lies in accumulating energy from the high resolution range profile (HRRP) of the monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the echo signals received from different scatterers of a target, we propose a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the angle estimation problem is converted into estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate performance similar to the traditional amplitude-comparison method, indicating that the proposed method can be adopted for angle estimation. With the proposed method, future radars may need only wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle does not become ambiguous at arbitrary angles, which significantly extends the estimable angle range of wideband radars.
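The core step described above, estimating the dominant frequency of a cross-correlation function, can be sketched generically as follows. This is a bare FFT peak search, not the paper's full monopulse algorithm, and the names are ours.

```python
import numpy as np

def ccf_peak_frequency(x, y, fs):
    """Estimate the dominant frequency of the cross-correlation of x and y.

    x, y : real-valued sampled signals of equal length
    fs   : sampling frequency in Hz
    Returns the frequency (Hz) of the largest non-DC spectral peak of the CCF.
    """
    ccf = np.correlate(x, y, mode="full")     # full cross-correlation function
    spec = np.abs(np.fft.rfft(ccf))           # one-sided magnitude spectrum
    spec[0] = 0.0                             # ignore the DC bin
    k = int(np.argmax(spec))
    return k * fs / len(ccf)
```

Correlating before the FFT accumulates energy coherently across the record, which is what gives the approach its low-SNR advantage.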
Estimation of pump operational state with model-based methods
Energy Technology Data Exchange (ETDEWEB)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina [Institute of Energy Technology, Lappeenranta University of Technology, P.O. Box 20, FI-53851 Lappeenranta (Finland); Kestilae, Juha [ABB Drives, P.O. Box 184, FI-00381 Helsinki (Finland)
2010-06-15
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently. (author)
Grade-Average Method: A Statistical Approach for Estimating ...
African Journals Online (AJOL)
Grade-Average Method: A Statistical Approach for Estimating Missing Value for Continuous Assessment Marks. Journal of the Nigerian Association of Mathematical Physics.
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models is available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
A new FOA estimation method in SAR/GALILEO system
Liu, Gang; He, Bing; Li, Jilin
2007-11-01
The European Galileo plan will include a Search and Rescue (SAR) transponder that will become part of the future MEOSAR (medium Earth orbit Search and Rescue) system. The new SAR system can improve localization accuracy by measuring the frequency of arrival (FOA) and time of arrival (TOA) of beacons; FOA estimation is one of the most important parts. In this paper, we aim to find a good FOA algorithm with minimal estimation error, which must be less than 0.1 Hz. We propose a new method, the Kay algorithm, for the SAR/GALILEO system, selected by comparing several frequency estimation methods, including those currently used in the COSPAS-SARSAT system, and by analyzing the distress beacon in terms of signal structure and spectrum characteristics. Simulation proves that the Kay method is the better choice for FOA estimation.
A bootstrap method for estimating uncertainty of water quality trends
Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura
2015-01-01
Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R package.
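The WBT resamples within the full WRTDS model; as a minimal hedged stand-in for the bootstrap idea it builds on, the sketch below forms a percentile-bootstrap confidence interval for a plain linear trend slope. All names and parameters here are our own, not those of EGRETci.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_trend_ci(years, conc, n_boot=2000, alpha=0.1):
    """Percentile-bootstrap CI for a linear trend slope.

    Resamples (year, concentration) pairs with replacement, refits the
    slope each time, and returns the (alpha/2, 1 - alpha/2) quantiles.
    """
    years = np.asarray(years, dtype=float)
    conc = np.asarray(conc, dtype=float)
    n = len(years)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)           # resample pairs with replacement
        slopes[b] = np.polyfit(years[idx], conc[idx], 1)[0]
    lo, hi = np.quantile(slopes, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

A trend is then judged uncertain when the interval straddles zero, which is the spirit of the WBT's trend likelihood statements.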
Methods of multicriterion estimations in system total quality management
Directory of Open Access Journals (Sweden)
Nikolay V. Diligenskiy
2011-05-01
In this article, the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a system of total quality management are considered.
Method of moments estimation of GO-GARCH models
Boswijk, H.P.; van der Weide, R.
2009-01-01
We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to
A LEVEL-VALUE ESTIMATION METHOD FOR SOLVING GLOBAL OPTIMIZATION
Institute of Scientific and Technical Information of China (English)
WU Dong-hua; YU Wu-yang; TIAN Wei-wen; ZHANG Lian-sheng
2006-01-01
A level-value estimation method is illustrated for solving the constrained global optimization problem. The equivalence between the root of a modified variance equation and the optimal value of the original optimization problem is shown. An alternative algorithm based on Newton's method is presented, and the convergence of its implementable approach is proved. Preliminary numerical results indicate that the method is effective.
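The alternative algorithm above applies Newton's method to the root of the modified variance equation; the generic iteration it builds on can be sketched as follows (names and tolerances are ours, and the variance equation itself is not reproduced).

```python
def newton_root(f, df, x0, tol=1e-12, max_iter=100):
    """Classic Newton iteration: x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x
```

In the level-value setting, f would be the modified variance equation in the level value, and its root is the optimal objective value.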
Methods for Estimating the Ultimate Bearing Capacity of Layered Foundations
Institute of Scientific and Technical Information of China (English)
袁凡凡; 闫澍旺
2003-01-01
The Meyerhof and Hanna (M-H) method for estimating the ultimate bearing capacity of layered foundations was improved. The experimental results of load tests in Tianjin New Harbor were compared with predictions from the method recommended by the code for the foundations of harbor engineering, i.e., Hansen's method, and from the improved M-H method. The comparisons imply that the code method and the improved M-H method give better predictions.
Using the Mercy Method for Weight Estimation in Indian Children
Directory of Open Access Journals (Sweden)
Gitanjali Batmanabane MD, PhD
2015-01-01
This study was designed to compare the performance of a new weight estimation strategy (the Mercy Method) with 12 existing weight estimation methods (APLS, Best Guess, Broselow, Leffler, Luscombe-Owens, Nelson, Shann, Theron, Traub-Johnson, Traub-Kichen) in children from India. Otherwise healthy children, 2 months to 16 years, were enrolled, and weight, height, humeral length (HL), and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights, and the slope, intercept, and Pearson correlation coefficient were estimated. Agreement between estimated and actual weight was determined using Bland–Altman plots with log transformation. The predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), and root mean square error (RMSE). Three hundred seventy-five children (7.5 ± 4.3 years, 22.1 ± 12.3 kg, 116.2 ± 26.3 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight compared with the other methods (r² = .967 vs .517-.844). The MM also demonstrated the lowest ME, MPE, and RMSE. Finally, the MM estimated weight within 20% of actual for nearly all children (96%), whereas these values ranged from 14% to 63% for the other methods. The MM performed extremely well in Indian children, with performance characteristics comparable to those observed for US children, in whom the method was developed. It appears that the MM can be used in Indian children without modification, extending the utility of this weight estimation strategy beyond Western populations.
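The predictive-performance metrics named above (ME, MPE, RMSE) are straightforward to compute; a minimal sketch, with our own function name:

```python
import math

def weight_error_metrics(actual, predicted):
    """Mean error, mean percentage error (%), and RMSE of predicted weights."""
    errs = [p - a for a, p in zip(actual, predicted)]
    me = sum(errs) / len(errs)
    mpe = 100.0 * sum(e / a for e, a in zip(errs, actual)) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return me, mpe, rmse
```

ME captures systematic bias (over- or under-estimation), MPE scales that bias to body size, and RMSE penalizes large individual errors.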
Recent Developments in the Methods of Estimating Shooting Distance
Directory of Open Access Journals (Sweden)
Arie Zeichner
2002-01-01
A review of developments during the past 10 years in methods of estimating shooting distance is provided. The review discusses the examination of clothing targets, cadavers, and exhibits that cannot be processed in the laboratory. The methods include visual/microscopic examinations, color tests, and instrumental analysis of the gunshot residue deposits around bullet entrance holes. The review does not cover shooting distance estimation for shotguns that fired pellet loads.
Recursive algorithm for the two-stage EFOP estimation method
Institute of Scientific and Technical Information of China (English)
LUO GuiMing; HUANG Jian
2008-01-01
A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method is proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain and frequency-domain estimation. It has improved anti-disturbance performance and can precisely identify models from fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is iterative and requires too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Simulation examples are included to demonstrate the validity of the new method.
Economic Consequences of DifferentLean Percentage Estimation Methods
Directory of Open Access Journals (Sweden)
Zdravko Tolušić
2003-06-01
The economic effects of the choice between two different lean percentage estimation methods and of the terminal sire breed were investigated on 53 pig carcasses divided into two groups. The first group comprised progeny of Pietrain terminal sires (n=25) and the second progeny of Large White terminal sires. The breed of terminal sire had no influence on cold carcass weight or on fat thickness measured for the TP method of lean percentage estimation. Using Pietrain as the terminal sire influenced MLD thickness measured for the TP and INS methods, which was significantly higher, while fat thickness measured by the instrumental method was significantly lower (p<0.01). Carcasses of the same group had higher lean percentages estimated by the TP and INS methods (p<0.05 and p<0.01, respectively). Different methods of lean percentage estimation also resulted in different classification of the carcasses into SEUROP classes. The choice of lean percentage estimation method had no significant effect on the price of carcasses from the second group (Large White terminal sire), while for carcasses from the first group (Pietrain terminal sire), the choice of method determined the price of the carcasses and, by this, the economic surplus (or loss) of the producers. It is concluded that both methods are equally applicable for Large White crossbreeds, while caution should be taken with carcasses from Pietrain terminal sires because such carcasses reached higher prices when estimated by the instrumental method.
Multiadaptive Galerkin Methods for ODEs III: A Priori Error Estimates
Logg, Anders
2012-01-01
The multiadaptive continuous/discontinuous Galerkin methods mcG(q) and mdG(q) for the numerical solution of initial value problems for ordinary differential equations are based on piecewise polynomial approximation of degree q on partitions in time with time steps which may vary for different components of the computed solution. In this paper, we prove general order a priori error estimates for the mcG(q) and mdG(q) methods. To prove the error estimates, we represent the error in terms of a discrete dual solution and the residual of an interpolant of the exact solution. The estimates then follow from interpolation estimates, together with stability estimates for the discrete dual solution.
RELATIVE CAMERA POSE ESTIMATION METHOD USING OPTIMIZATION ON THE MANIFOLD
Directory of Open Access Journals (Sweden)
C. Cheng
2017-05-01
To solve the problem of relative camera pose estimation, a method using optimization on the manifold is proposed. First, the general state estimation model using optimization is derived, from the maximum-a-posteriori (MAP) model to the nonlinear least squares (NLS) model. Then the camera pose estimation model is cast in the general state estimation model, with the rigid body transformation parameterized by a Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail, and thus the optimization model of the rigid body transformation is established. Experimental results show that, compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler angle parameterization of rotation. Thus the proposed method can estimate relative camera pose with high accuracy and robustness.
A Channelization-Based DOA Estimation Method for Wideband Signals.
Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-07-04
In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
Simultaneous estimation of esomeprazole and domperidone by UV spectrophotometric method
Directory of Open Access Journals (Sweden)
Prabu S
2008-01-01
A novel, simple, sensitive and rapid spectrophotometric method has been developed for the simultaneous estimation of esomeprazole and domperidone. The method involves solving simultaneous equations based on the measurement of absorbance at two wavelengths, 301 nm and 284 nm, the λmax of esomeprazole and domperidone, respectively. Beer's law was obeyed in the concentration ranges of 5-20 µg/ml and 8-30 µg/ml for esomeprazole and domperidone, respectively. The method was found to be precise, accurate, and specific. The proposed method was successfully applied to the estimation of esomeprazole and domperidone in a combined solid dosage form.
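The simultaneous-equation (Vierordt) calculation described above reduces to solving a 2×2 linear system for the two concentrations. The sketch below is a minimal illustration; the absorptivity and absorbance values are invented for the example, not taken from this work:

```python
def simultaneous_estimation(A1, A2, ax1, bx1, ax2, bx2):
    """Solve the Beer's-law system
         A1 = ax1*Cx + bx1*Cy   (absorbance at wavelength 1)
         A2 = ax2*Cx + bx2*Cy   (absorbance at wavelength 2)
    for the two concentrations Cx and Cy by Cramer's rule."""
    det = ax1 * bx2 - ax2 * bx1
    if abs(det) < 1e-12:
        raise ValueError("absorptivity matrix is singular")
    Cx = (A1 * bx2 - A2 * bx1) / det
    Cy = (ax1 * A2 - ax2 * A1) / det
    return Cx, Cy

# Example with made-up absorptivities (l/g/cm) and absorbances:
Cx, Cy = simultaneous_estimation(0.52, 0.43, 0.045, 0.012, 0.015, 0.038)
```

In practice the four absorptivities would be measured from pure standards of each drug at the two wavelengths.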
A Computationally Efficient Method for Polyphonic Pitch Estimation
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
Na, Seong-Won; Kallivokas, Loukas F.
2008-03-01
In this article we discuss a formal framework for casting the inverse problem of detecting the location and shape of an insonified scatterer embedded within a two-dimensional homogeneous acoustic host in terms of a partial-differential-equation-constrained optimization approach. We seek to satisfy the ensuing Karush-Kuhn-Tucker first-order optimality conditions using boundary integral equations. The treatment of evolving boundary shapes, which arise naturally during the search for the true shape, relies on the use of total derivatives, borrowing from recent work by Bonnet and Guzina [1-4] in elastodynamics. We consider incomplete information collected at stations sparsely spaced in the assumed obstacle's backscatter region. To improve the ability of the optimizer to arrive at the global optimum we: (a) favor an amplitude-based misfit functional; and (b) iterate over both the frequency and wave-direction spaces through a sequence of problems. We report numerical results for sound-hard objects with shapes ranging from circles to penny- and kite-shaped, including obstacles with arbitrarily shaped non-convex boundaries.
A Group Contribution Method for Estimating Cetane and Octane Numbers
Energy Technology Data Exchange (ETDEWEB)
Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group
2016-07-28
Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
A new method for parameter estimation in nonlinear dynamical equations
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic Lorenz system (Lorenz 1963). The results indicate that the new method provides fast and effective parameter estimation regardless of whether some or all parameters of the Lorenz equations are unknown. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has also been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
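A minimal evolutionary-style parameter search of the kind the abstract describes can be sketched as follows. This is a toy stand-in for the authors' EM scheme: forward-Euler integration, truncation selection and annealed Gaussian mutation are simplifying assumptions made here, and the population and generation sizes are arbitrary:

```python
import random

def lorenz_step(state, p, dt=0.01):
    # One forward-Euler step of the Lorenz equations, p = (sigma, rho, beta).
    x, y, z = state
    s, r, b = p
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def simulate(p, n=300, state=(1.0, 1.0, 1.0)):
    traj = [state]
    for _ in range(n):
        state = lorenz_step(state, p)
        traj.append(state)
    return traj

def cost(p, traj):
    # One-step prediction error of candidate parameters on the data.
    return sum(sum((a - b) ** 2 for a, b in zip(lorenz_step(s0, p), s1))
               for s0, s1 in zip(traj, traj[1:]))

def evolve(traj, pop_size=30, gens=100, bounds=((0, 20), (0, 50), (0, 10))):
    # Toy evolutionary search: truncation selection plus Gaussian mutation
    # with a decaying step size.
    rng = random.Random(0)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    sigma = 2.0
    for _ in range(gens):
        pop.sort(key=lambda p: cost(p, traj))
        parents = pop[:pop_size // 3]                      # selection
        children = [tuple(min(hi, max(lo, g + rng.gauss(0, sigma)))
                          for g, (lo, hi) in zip(p, bounds))
                    for p in parents for _ in range(2)]    # mutation
        pop = parents + children
        sigma *= 0.95                                      # anneal step size
    return min(pop, key=lambda p: cost(p, traj))

true_p = (10.0, 28.0, 8.0 / 3.0)
data = simulate(true_p)
est = evolve(data)
```

With noise-free data the search settles near the true values (10, 28, 8/3), since the one-step prediction error is quadratic in the parameters.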
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) for the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Adding a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
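The effect reported above, Kalman filtering yielding a smoother estimate than the raw marker-based signal, can be reproduced with a minimal scalar filter. The pendulum-like signal, the noise level and the filter tuning below are all invented for the sketch and are unrelated to the authors' data:

```python
import math
import random

def kalman_smooth(measurements, q=0.01, r=0.04):
    """Scalar random-walk Kalman filter: q is the process-noise variance
    (how fast the true angle may drift), r the measurement-noise variance."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p += q                 # predict: uncertainty grows by q
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the innovation
        p *= 1.0 - k
        out.append(x)
    return out

# Synthetic stand-in for a marker-based angle signal: a slow sine plus
# Gaussian "soft tissue" noise.
rng = random.Random(1)
true = [math.sin(0.03 * t) for t in range(300)]
noisy = [v + rng.gauss(0.0, 0.2) for v in true]
smooth = kalman_smooth(noisy)
```

The steady-state gain trades smoothness against lag much like a low-pass cutoff would, but the filter adapts during the initial transient because the error covariance p starts large.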
An improved method for estimating the frequency correlation function
Chelli, Ali
2012-04-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method that improves the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and of the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. Accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF, and exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
MOMENT-METHOD ESTIMATION BASED ON CENSORED SAMPLE
Institute of Scientific and Technical Information of China (English)
NI Zhongxin; FEI Heliang
2005-01-01
In reliability theory and survival analysis, the problem of point estimation based on censored samples has been discussed in many papers. However, most of them focus on the MLE, BLUE, etc.; little work has been done on moment-method estimation in the censoring case. To make the method of moment estimation systematic and unified, in this paper the moment-method estimators (MEs) and modified moment-method estimators (MMEs) of the parameters based on type I and type II censored samples are put forward, involving the mean residual lifetime. The strong consistency and other properties are proved. Notably, in the exponential distribution the proposed moment-method estimators are exactly the MLEs. By a simulation study, from the viewpoint of bias and mean squared error, we show that the MEs and MMEs are better than the MLEs and the "pseudo complete sample" technique introduced in Whitten et al. (1988). The superiority of the MEs is especially conspicuous when the sample is heavily censored.
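The coincidence noted above, that for the exponential distribution the moment-method estimator equals the MLE, is easy to see in the type II censoring case: both reduce to the total time on test divided by the number of observed failures. A small sketch with made-up data:

```python
def exp_mean_life_type2(observed_times, n):
    """Mean-life estimate for the exponential distribution under type II
    censoring: the r smallest failure times out of n items are observed;
    the remaining n - r items survive past the largest observed time.
    The total time on test divided by r is both the moment-style and the
    maximum likelihood estimate."""
    ts = sorted(observed_times)
    r = len(ts)
    total_time_on_test = sum(ts) + (n - r) * ts[-1]
    return total_time_on_test / r

# Made-up data: first 5 failure times (arbitrary units) of 10 units on test.
theta_hat = exp_mean_life_type2([0.3, 0.7, 1.1, 1.6, 2.2], n=10)
```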
Adaptive Methods for Permeability Estimation and Smart Well Management
Energy Technology Data Exchange (ETDEWEB)
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered; more specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts. Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others; these constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning the calculation of derivatives are also discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
Training Methods for Image Noise Level Estimation on Wavelet Components
Directory of Open Access Journals (Sweden)
A. De Stefano
2004-12-01
The estimation of the standard deviation of the noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed the superiority of the training-based methods for the images and the range of noise levels considered.
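For reference, the MAD-style baseline that the three proposed methods are compared against can be sketched as below: take the finest-scale diagonal Haar wavelet coefficients and divide the median of their absolute values by 0.6745 (the MAD-to-sigma factor for Gaussian noise). The image size and noise level are arbitrary test values chosen here, not the paper's:

```python
import random
import statistics

def noise_sigma_mad(image):
    """Estimate the noise standard deviation from the finest diagonal
    (HH) Haar wavelet coefficients via sigma = MAD / 0.6745."""
    details = []
    for i in range(0, len(image) - 1, 2):
        for j in range(0, len(image[0]) - 1, 2):
            a, b = image[i][j], image[i][j + 1]
            c, d = image[i + 1][j], image[i + 1][j + 1]
            details.append((a - b - c + d) / 2.0)  # orthonormal Haar HH
    mad = statistics.median(abs(v) for v in details)
    return mad / 0.6745

# Flat test image plus Gaussian noise of known sigma = 5.
rng = random.Random(0)
img = [[100.0 + rng.gauss(0.0, 5.0) for _ in range(64)] for _ in range(64)]
sigma_hat = noise_sigma_mad(img)
```

On this flat image the estimate lands close to the true sigma; on images with fine texture the same estimator overestimates, which is the model dependence the training-based alternatives address.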
The estimation of the measurement results with using statistical methods
Velychko, O.; Gordiyenko, T.
2015-02-01
A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes for the analysis of technical measurement results. This paper analyzes the recommendations of international standards and guides on statistical methods for the estimation of measurement results and their application in laboratories. For this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results are constructed.
Aircraft Combat Survivability Estimation and Synthetic Tradeoff Methods
Institute of Scientific and Technical Information of China (English)
LI Shu-lin; LI Shou-an; LI Wei-ji; LI Dong-xia; FENG Feng
2005-01-01
A new concept is proposed: susceptibility, vulnerability, reliability, maintainability and supportability should be the essential factors of aircraft combat survivability. A weight coefficient method and a synthetic method are proposed to estimate aircraft combat survivability based on these essential factors. Considering that enhancing aircraft combat survivability incurs cost, a synthetic tradeoff model between aircraft combat survivability and life cycle cost is built. The aircraft combat survivability estimation methods and the synthetic tradeoff with the life cycle cost model will be helpful for aircraft combat survivability design and enhancement.
SPECTROPHOTOMETRIC METHOD FOR ESTIMATION OF RABEPRAZOLE SODIUM IN TABLETS
Directory of Open Access Journals (Sweden)
Gulshan Pandey
2013-03-01
A simple, rapid, accurate, economical and reproducible spectrophotometric method for the estimation of rabeprazole sodium (RAB) has been developed. The method employs estimation by the straight-line equation obtained from the calibration curve of rabeprazole sodium. Analysis was performed at 284.0 nm, the absorbance maximum of the drug in 20% v/v aqueous methanol as solvent. The method obeys Beer's law between 4.08 and 24.5 µg/ml. Results of the analysis were validated statistically according to the ICH guidelines (1996).
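The straight-line-equation calculation the abstract refers to is ordinary least-squares calibration followed by inversion of Beer's law. The sketch below uses invented calibration points, not the paper's data:

```python
def fit_calibration(concs, absorbances):
    """Least-squares line A = m*C + c through the calibration points."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(absorbances) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, absorbances))
    m = sxy / sxx
    c = my - m * mx
    return m, c

def concentration(absorbance, m, c):
    # Invert the calibration line to read a concentration off an absorbance.
    return (absorbance - c) / m

# Hypothetical calibration: concentration (ug/ml) vs absorbance at 284 nm.
m, c = fit_calibration([4.0, 8.0, 12.0, 16.0, 20.0],
                       [0.12, 0.24, 0.36, 0.48, 0.60])
sample_conc = concentration(0.30, m, c)
```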
A correlation method of detecting and estimating interactions of QTLs
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
More and more studies demonstrate that the interactions among quantitative trait loci (QTLs) are far more numerous than those detected by single markers. A correlation method is proposed for estimating the interactions of multiple QTLs detected by multiple markers in several mapping populations. The genetic implications of this method and its usage are discussed.
Estimation of arsenic in nail using silver diethyldithiocarbamate method
Directory of Open Access Journals (Sweden)
Habiba Akhter Bhuiyan
2015-08-01
The spectrophotometric method for arsenic estimation in nails has four steps: (a) washing of the nails, (b) digestion of the nails, (c) arsenic generation, and finally (d) reading the absorbance using a spectrophotometer. Although the method is the cheapest one, widely used and effective, it is time-consuming and laborious, and caution is needed while using the four acids.
A SPECTROPHOTOMETRIC METHOD TO ESTIMATE PIPERINE IN PIPER SPECIES
1998-01-01
A simple, rapid and economical procedure for the estimation of piperine by UV spectrophotometry in different Piper species was developed and is described. The method is based on the identification of piperine by TLC and on its ultraviolet absorbance maximum in alcohol at 328 nm.
New Completeness Methods for Estimating Exoplanet Discoveries by Direct Detection
Brown, Robert A
2010-01-01
We report new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization), and an estimate of the occurrence rate of planets (eta). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide the tools to define a mission for a particular science goal, for example defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approa...
Variance estimation in neutron coincidence counting using the bootstrap method
Energy Technology Data Exchange (ETDEWEB)
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The outline of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
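The resampling scheme described above is the standard nonparametric bootstrap, whose core fits in a few lines. The counts below are invented stand-ins for measured data, not results from the paper:

```python
import random
import statistics

def bootstrap_variance(sample, n_boot=2000, stat=statistics.mean, seed=0):
    """Bootstrap variance of a statistic: draw n_boot pseudo-samples with
    replacement, recompute the statistic on each, and return the variance
    of the resulting bootstrap distribution."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        reps.append(stat(resample))
    return statistics.variance(reps)

counts = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]   # made-up count data
var_of_mean = bootstrap_variance(counts)
```

For the sample mean this tracks the textbook value s²/n; agreement of this kind between measured and bootstrap variances is what the authors verify experimentally for the NMC statistics.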
Evaluation of non cyanide methods for hemoglobin estimation
Directory of Open Access Journals (Sweden)
Vinaya B Shah
2011-01-01
Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the disposal of large volumes of reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting-diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation of these two methods with HiCN was excellent (r=0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of the data, and cost less than those of the HiCN method. Conclusions: Non-cyanide methods of Hb estimation offer the possibility of safe, quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated in hemoglobinometers
Apparatus and method for velocity estimation in synthetic aperture imaging
DEFF Research Database (Denmark)
2003-01-01
The invention relates to an apparatus for flow estimation using synthetic aperture imaging. The method uses a Synthetic Transmit Aperture, but unlike previous approaches a new frame is created after every pulse emission. In receive mode, parallel beamforming is implemented. The beamformed RF data ... The update signals are used in the velocity estimation processor (8) to correlate the individual measurements to obtain the displacement between high-resolution images and thereby determine the velocity.
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Applicability of available methods for incidence estimation among blood donors
Institute of Scientific and Technical Information of China (English)
Shtmian Zou; Edward P.Notari IV; Roger Y.Dodd
2010-01-01
Incidence rates of major transfusion-transmissible viral infections have been estimated through widely used seroconversion approaches and recently developed methods. A quality database of blood donors and donations, with the capacity to track the donation history of each donor, is the basis for incidence estimation and many other epidemiological studies. Depending on the available data, different methods have been used to determine incidence rates based on conversion from uninfected to infected status among repeat donors.
Method and system for estimating herbage uptake of an animal
DEFF Research Database (Denmark)
2011-01-01
The invention relates to a method and a system for estimating the feeding value or the amount of herbage consumed by grazing animals. The estimated herbage uptake is based on measured, and possibly estimated, data which are supplied as input data to a mathematical model. Measured input data may ... by the model and possibly provided as output data. Measurements may be obtained by a sensor module carried by the animal, and the measurements may be wirelessly transmitted from the sensor module to a receiver, possibly via relay transceivers.
Plant-available soil water capacity: estimation methods and implications
Directory of Open Access Journals (Sweden)
Bruno Montoani Silva
2014-04-01
The plant-available water capacity of the soil is defined as the water content between field capacity and the wilting point, and has wide practical application in land-use planning. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated by the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than that given by the other methods, and that even at the inflection point the estimates differed according to the model used. With the WP4-T psychrometer, a significantly lower water content was found for the estimate of the permanent wilting point. We conclude that the estimation of the available water capacity is markedly influenced by the estimation method, which has to be taken into consideration because of the practical importance of this parameter.
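Numerically, the definition in the opening sentence is just a difference of two water contents integrated over a layer depth. A tiny sketch with invented values:

```python
def available_water_mm(theta_fc, theta_wp, depth_mm):
    """Plant-available water (mm) stored in a soil layer: volumetric water
    content at field capacity minus that at the wilting point, times the
    layer thickness."""
    return (theta_fc - theta_wp) * depth_mm

# Hypothetical Oxisol-like layer: theta_fc = 0.32, theta_wp = 0.18, 400 mm.
aw = available_water_mm(0.32, 0.18, 400.0)
```

The paper's point is that theta_fc and theta_wp, and hence this number, can shift substantially depending on which estimation method produced them.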
Subset Simulation Method for Rare Event Estimation: An Introduction
2015-01-01
This paper provides a detailed introductory description of Subset Simulation, an advanced stochastic simulation method for estimation of small probabilities of rare failure events. A simple and intuitive derivation of the method is given along with the discussion on its implementation. The method is illustrated with several easy-to-understand examples. For demonstration purposes, the MATLAB code for the considered examples is provided. The reader is assumed to be familiar only with elementary...
Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling
Institute of Scientific and Technical Information of China (English)
ZHOU Dan; LI Chengrong; WANG Zhongdong
2013-01-01
The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of the derived Weibull lifetime model. Six popular parameter estimation methods (the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10 000 sets of lifetime data, each with a sampling size of 40 to 1 000 and a censoring rate of 90%, were obtained by Monte Carlo simulation for each scenario. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence band, were obtained. The cross-comparison of these results reveals that, among the six methods, the maximum likelihood method is the best, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values as well as the shortest 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
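As a sketch of one of the six compared estimators, the median rank regression variant regressing Y on X fits a straight line to the Weibull probability plot, using Benard's median rank approximation for the plotting positions. The failure times are invented example data:

```python
import math

def weibull_mrr(failure_times):
    """Median rank regression (Y on X) for the two-parameter Weibull:
    regress ln(-ln(1 - F_i)) on ln(t_i), where F_i = (i - 0.3)/(n + 0.4)
    is the Benard median rank of the i-th ordered failure; the slope is
    the shape parameter and the intercept fixes the scale."""
    ts = sorted(failure_times)
    n = len(ts)
    xs = [math.log(t) for t in ts]
    ys = [math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    shape = slope
    scale = math.exp(-intercept / slope)
    return shape, scale

# Made-up complete (uncensored) sample of 10 failure times:
shape, scale = weibull_mrr([35, 38, 42, 56, 58, 61, 76, 88, 92, 104])
```

The paper's conclusion is that regression estimates of this kind are less accurate than maximum likelihood, especially for heavily censored samples.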
A Maximum-Entropy Method for Estimating the Spectrum
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form S̃(ω) = (a/8) H̄² (2π)^(d+1) ω^(-(d+2)) exp[-b(2π/ω)^n], by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions, and is superior to the periodogram method, which is not suitable for processing comparatively short or strongly unsteady signals because of its severe boundary effect and some inherent defects of the FFT. The newly derived method for spectral estimation works fairly well even when the sample data sets are very short and unsteady, and the reliability and efficiency of this spectral estimator have been preliminarily demonstrated.
Fast LCMV-based Methods for Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll
2013-01-01
Recently, optimal linearly constrained minimum variance (LCMV) filtering methods have been applied to fundamental frequency estimation. Such estimators often yield preferable performance but suffer from being computationally cumbersome as the resulting cost functions are multimodal with narrow...... as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity...... be efficiently updated when new observations become available. The resulting time-recursive updating can reduce the computational complexity even further. The experimental results show that the performances of the proposed methods are comparable or better than that of other competing methods in terms of spectral...
Accurate photometric redshift probability density estimation - method comparison and application
Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-01-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs compared with non-ordinal classification architectures. We also propose a new single-value point estimate of the galaxy redshift that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNz). In our use case, this improvemen...
An Estimation Method for the Number of Carrier Frequencies
Directory of Open Access Journals (Sweden)
Xiong Peng
2015-01-01
This paper proposes a method that uses AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, radar pulse signal forms are complex and changeable, and the single pulse with multiple carrier frequencies is the most typical case, e.g., the frequency shift keying (FSK) signal, the FSK with linear frequency modulation (FSK-LFM) hybrid signal, and the FSK with binary phase shift keying (FSK-BPSK) hybrid signal. For this kind of multi-carrier single pulse, the paper fits an AR model to the complex signal and then computes the power spectrum with the Burg algorithm. Experimental results show that the method can determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
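A minimal sketch of the idea with a hand-rolled Burg recursion (NumPy has no built-in one); the real-valued test pulse, its two carrier frequencies, and the model order are arbitrary choices for illustration, not the paper's setup:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a (a[0] = 1) and residual power e."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)
    f, b = x[1:].copy(), x[:-1].copy()      # forward / backward prediction errors
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coeff.
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = f + k * b, b + k * f         # old f and b are used on the right
        e *= 1.0 - k * k
        f, b = f[1:], b[:-1]                # realign the error sequences
    return a, e

# Synthetic two-carrier pulse; normalized frequencies 0.1 and 0.3 are assumptions
rng = np.random.default_rng(0)
t = np.arange(400)
x = np.cos(2*np.pi*0.1*t) + np.cos(2*np.pi*0.3*t) + 0.05*rng.standard_normal(400)

a, e = burg_ar(x, order=8)
freqs = np.linspace(0.0, 0.5, 512)
A = np.exp(-2j*np.pi*np.outer(freqs, np.arange(len(a)))) @ a
psd = e / np.abs(A)**2

# Carrier count = number of strong local maxima in the AR spectrum
peaks = [i for i in range(1, len(psd) - 1)
         if psd[i-1] < psd[i] > psd[i+1] and psd[i] > psd.max() / 100]
print(len(peaks), [round(freqs[i], 2) for i in peaks])
```

The sharp AR poles make the two carriers stand out far above the noise floor, which is why a simple peak count works even at modest SNR.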
The Lyapunov dimension and its estimation via the Leonov method
Energy Technology Data Exchange (ETDEWEB)
Kuznetsov, N.V., E-mail: nkuznetsov239@gmail.com
2016-06-03
Highlights: • A survey of the effective analytical approach to Lyapunov dimension estimation proposed by Leonov is presented. • The invariance of the Lyapunov dimension under diffeomorphisms and its connection with the Leonov method are demonstrated. • For discrete-time dynamical systems, an analog of the Leonov method is suggested. - Abstract: Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. Its advantage is that it allows one to estimate the Lyapunov dimension of an invariant set without localizing the set in phase space and, in many cases, to obtain an exact Lyapunov dimension formula. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed, and an analog of the method is suggested for discrete-time dynamical systems. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
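The Kaplan-Yorke concept referenced above can be computed directly from a spectrum of Lyapunov exponents; the Lorenz-system exponents below are approximate standard literature values, used only for illustration:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke (Lyapunov) dimension: j + (sum of the j largest
    exponents) / |lambda_{j+1}|, where j is the largest index for which
    the partial sum of ordered exponents is still non-negative."""
    lams = sorted(exponents, reverse=True)
    partial = 0.0
    for j, lam in enumerate(lams):
        if partial + lam < 0.0:
            return j + partial / abs(lam)
        partial += lam
    return float(len(lams))  # the partial sum never goes negative

# Classical Lorenz attractor exponents (approximate literature values)
d = kaplan_yorke_dimension([0.906, 0.0, -14.572])
print(d)  # ~2.062
```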
Simplified triangle method for estimating evaporative fraction over soybean crops
Silva-Fuzzo, Daniela Fernanda; Rocha, Jansle Vieira
2016-10-01
Accurate estimates are emerging with technological advances in remote sensing, and the triangle method has proven to be a useful tool for estimating the evaporative fraction (EF). The purpose of this study was to estimate the EF using the triangle method at the regional level. We used data from the Moderate Resolution Imaging Spectroradiometer (MODIS) orbital sensor, namely surface temperature and vegetation index products, for a 10-year period (2002/2003 to 2011/2012) of cropping seasons in the state of Paraná, Brazil. The triangle method showed considerable skill for the EF, and validation of the estimates against observed climatological water balance data gave values >0.8 for Willmott's modified index of agreement "d" and R² values between 0.6 and 0.7 for some counties. Errors were low for all years analyzed, and the estimated data are very close to the observed data. Based on this statistical validation, we can say that the triangle method is a consistent tool; it is useful because it requires only remote sensing images as input, and it can support large-scale agroclimatic monitoring, especially in countries of great territorial extent, such as Brazil, whose network of ground-based meteorological stations is too sparse.
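The interpolation step at the heart of the triangle method can be sketched as follows; the warm-edge coefficients and cold-edge temperature below are invented for illustration and are normally fitted to the LST-NDVI scatter of the scene:

```python
import numpy as np

# Hypothetical dry ("warm") edge T_max(NDVI) = a_w + b_w*NDVI and a flat
# wet ("cold") edge T_min; in practice both are fitted per scene.
a_w, b_w, t_cold = 320.0, -25.0, 290.0   # Kelvin

def evaporative_fraction(lst, ndvi):
    """Triangle-method EF: relative position of a pixel between the dry
    (EF = 0) and wet (EF = 1) edges of the LST-NDVI feature space."""
    t_warm = a_w + b_w * ndvi
    ef = (t_warm - lst) / (t_warm - t_cold)
    return np.clip(ef, 0.0, 1.0)

lst = np.array([310.0, 300.0, 292.0])
ndvi = np.array([0.2, 0.5, 0.8])
print(evaporative_fraction(lst, ndvi))
```

Because only the two satellite products (LST and a vegetation index) enter the computation, no ground-station data are needed, which is the method's main appeal.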
Estimating seismic demand parameters using the endurance time method
Institute of Scientific and Technical Information of China (English)
Ramin MADARSHAHIAN; Homayoon ESTEKANCHI; Akbar MAHVASHMOHAMMADI
2011-01-01
The endurance time (ET) method is a time-history-based dynamic analysis in which structures are subjected to gradually intensifying excitations and their performances are judged based on their responses at various excitation levels. Using this method, the computational effort required for estimating probable seismic demand parameters can be reduced by an order of magnitude. Calculation of the maximum displacement or target displacement is a basic requirement for performance-based structural design. The purpose of this paper is to compare the results of the nonlinear ET method with the nonlinear static pushover (NSP) method of FEMA 356 by evaluating performances and target displacements of steel frames. This study will lead to a deeper insight into the capabilities and limitations of the ET method. The results are further compared with those of the standard nonlinear response history analysis. We conclude that results from the ET analysis are in proper agreement with those from standard procedures.
A review of action estimation methods for galactic dynamics
Sanders, Jason L.; Binney, James
2016-04-01
We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent (iterative) methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (one-way coupled to PIHM) and a fixed seasonal LAI method. From these two approaches simulation scenarios were developed: we combined the estimated spatial forest age maps and the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant-physiology basis. The implication of this research is that the overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment model.
GEOMETRIC METHOD OF SEQUENTIAL ESTIMATION RELATED TO MULTINOMIAL DISTRIBUTION MODELS
Institute of Scientific and Technical Information of China (English)
WEI Bocheng; LI Shouye
1995-01-01
In the 1980s, differential geometric methods were successfully used to study curved exponential families and normal nonlinear regression models. This paper presents a new geometric structure for studying multinomial distribution models that contain a set of nonlinear parameters. Based on this geometric structure, the authors study several asymptotic properties of sequential estimation. The bias, the variance and the information loss of the sequential estimates are given from a geometric viewpoint, and a limit theorem connecting the observed and expected Fisher information is obtained in terms of curvature measures. The results show that the sequential estimation procedure has some better properties which are generally impossible for nonsequential estimation procedures.
The deposit size frequency method for estimating undiscovered uranium deposits
McCammon, R.B.; Finch, W.I.
1993-01-01
The deposit size frequency (DSF) method has been developed as a generalization of the method that was used in the National Uranium Resource Evaluation (NURE) program to estimate the uranium endowment of the United States. The DSF method overcomes difficulties encountered during the NURE program when geologists were asked to provide subjective estimates of (1) the endowed fraction of an area judged favorable (factor F) for the occurrence of undiscovered uranium deposits and (2) the tons of endowed rock per unit area (factor T) within the endowed fraction of the favorable area. Because the magnitudes of factors F and T were unfamiliar to nearly all of the geologists, most geologists responded by estimating the number of undiscovered deposits likely to occur within the favorable area and the average size of these deposits. The DSF method combines factors F and T into a single factor (F·T) that represents the tons of endowed rock per unit area of the undiscovered deposits within the favorable area. Factor F·T, provided by the geologist, is the estimated number of undiscovered deposits per unit area in each of a number of specified deposit-size classes. The number of deposit-size classes and the size interval of each class are based on the data collected from the deposits in known (control) areas. The DSF method affords greater latitude in making subjective estimates than the NURE method and emphasizes more of the everyday experience of exploration geologists. Using the DSF method, new assessments have been made for the "young, organic-rich" surficial uranium deposits in Washington and Idaho and for the solution-collapse breccia pipe uranium deposits in the Grand Canyon region in Arizona and adjacent Utah. © 1993 Oxford University Press.
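The endowment arithmetic behind the combined factor F·T can be sketched as follows; the size classes, per-area counts and favorable area are invented numbers for illustration only:

```python
# Hypothetical deposit-size classes:
# (mean tons of endowed rock per deposit,
#  estimated number of undiscovered deposits per km^2 in the favorable area)
size_classes = [
    (100.0,   0.02),    # small deposits
    (1000.0,  0.005),   # medium deposits
    (10000.0, 0.0002),  # large deposits
]
favorable_area_km2 = 500.0

# F*T: tons of endowed rock per unit area, summed over all size classes
ft = sum(tons * n_per_km2 for tons, n_per_km2 in size_classes)

# Total endowment of the favorable area
endowment_tons = favorable_area_km2 * ft
print(ft, endowment_tons)
```

Asking the geologist for per-class deposit counts and sizes, rather than for F and T directly, is the point of the method: the inputs match everyday exploration experience.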
Global parameter estimation methods for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Poovathingal Suresh
2010-08-01
Background: The importance of stochasticity in cellular processes with low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications such as analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited, and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single-molecule levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single-cell and cell-population experimental data. Results: Three parameter estimation methods are proposed based on maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low-replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions: The parameter
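A toy version of the cumulative-density-function distance idea, with a simple Poisson observation model standing in for a chemical master equation and plain sampling standing in for an SSA ensemble; all sizes and the candidate grid are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.poisson(4.0, size=2000)      # "experimental" copy-number data

grid = np.arange(0, 16)                 # shared support for the empirical CDFs
def ecdf(samples):
    return np.array([(samples <= g).mean() for g in grid])

f_data = ecdf(data)

# Score each candidate rate by the distance between the data CDF and a CDF
# built from a finite number of Monte Carlo realizations of the model.
candidates = np.arange(2.0, 6.5, 0.5)
scores = []
for lam in candidates:
    sim = rng.poisson(lam, size=2000)   # stand-in for stochastic simulation runs
    scores.append(np.abs(ecdf(sim) - f_data).sum())

best = candidates[int(np.argmin(scores))]
print(best)
```

Because both CDFs are built from finite samples, the distance never reaches zero even at the true parameter; that finite-sampling effect is exactly what the paper's binning considerations address.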
Paradigms and commonalities in atmospheric source term estimation methods
Bieringer, Paul E.; Young, George S.; Rodriguez, Luna M.; Annunzio, Andrew J.; Vandenberghe, Francois; Haupt, Sue Ellen
2017-05-01
Modeling the downwind hazard area resulting from the unknown release of an atmospheric contaminant requires estimation of the source characteristics of a localized source from concentration or dosage observations and use of this information to model the subsequent transport and dispersion of the contaminant. This source term estimation problem is mathematically challenging because airborne material concentration observations and wind data are typically sparse and the turbulent wind field chaotic. Methods for addressing this problem fall into three general categories: forward modeling, inverse modeling, and nonlinear optimization. Because numerous methods have been developed on various foundations, they often have a disparate nomenclature. This situation poses challenges to those facing a new source term estimation problem, particularly when selecting the best method for the problem at hand. There is, however, much commonality between many of these methods, especially within each category. Here we seek to address the difficulties encountered when selecting an STE method by providing a synthesis of the various methods that highlights commonalities, potential opportunities for component exchange, and lessons learned that can be applied across methods.
Comparing Implementations of Estimation Methods for Spatial Econometrics
Directory of Open Access Journals (Sweden)
Roger Bivand
2015-02-01
Recent advances in the implementation of spatial econometrics model estimation techniques have made it desirable to compare results, which should correspond across software implementations for the same data. These model estimation techniques are associated with methods for estimating impacts (emanating effects), which are also presented and compared. This review constitutes an up-to-date comparison of the generalized method of moments and maximum likelihood implementations now available. The comparison uses the cross-sectional US county data set provided by Drukker, Prucha, and Raciborski (2013d). The comparisons are cast in the context of alternatives using the MATLAB Spatial Econometrics toolbox, Stata's user-written sppack commands, Python with PySAL, and R packages including spdep, sphet and McSpatial.
Comparison of the basic methods used in estimating hydrocarbon resources
Energy Technology Data Exchange (ETDEWEB)
Miller, B.M.
1975-04-01
The most critical basis for the formulation of a national energy policy is an understanding of the extent of our energy resources. To plan for the rational exploration and development of these resources, an estimate of the amounts of hydrocarbon resources that remain available for recovery must be made. The methods of Hubbert, Zapp, Hendricks, Moore, and Pelto are selected for review, with the basic assumptions behind each technique briefly characterized for comparative purposes. The advantages and disadvantages of each method are analyzed and compared. Two current systems being investigated in the Survey's Resource Appraisal Group are the Accelerated National Oil and Gas Resource Evaluation II (ANOGRE II) and the Hydrocarbon Province Analog System. The concepts and approaches employed in estimating the future availability of hydrocarbon resources have led to considerable misunderstanding and highly divergent results. The objective of this investigation is to formulate a realistic procedure of evaluating and estimating our national and worldwide hydrocarbon resources.
A method of complex background estimation in astronomical images
Popowicz, Adam
2016-01-01
In this paper, we present a novel approach to the estimation of strongly varying backgrounds in astronomical images by means of small objects removal and subsequent missing pixels interpolation. The method is based on the analysis of a pixel local neighborhood and utilizes the morphological distance transform. In contrast to popular background estimation techniques, our algorithm allows for accurate extraction of complex structures, like galaxies or nebulae. Moreover, it does not require multiple tuning parameters, since it relies on physical properties of CCD image sensors - the gain and the read-out noise characteristics. The comparison with other widely used background estimators revealed higher accuracy of the proposed technique. The superiority of the novel method is especially significant for the most challenging fluctuating backgrounds. The size of filtered out objects is tunable, therefore the algorithm may eliminate a wide range of foreground structures, including the dark current impulses, cosmic ra...
Improved Phasor Estimation Method for Dynamic Voltage Restorer Applications
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Farhangi, Shahrokh; Iman-Eini, Hossein;
2015-01-01
The dynamic voltage restorer (DVR) is a series compensator for distribution system applications, which protects sensitive loads against voltage sags by fast voltage injection. The DVR must estimate the magnitude and phase of the measured voltages to achieve the desired performance. This paper...... proposes a phasor parameter estimation algorithm based on a recursive variable and fixed data window least error squares (LES) method for the DVR control system. The proposed algorithm, in addition to decreasing the computational burden, improves the frequency response of the control scheme based...... on the fixed data window LES method. The DVR control system based on the proposed algorithm provides a better compromise between the estimation speed and accuracy of the voltage and current signals and can be implemented using a simple and low-cost processor. The results of the studies indicate...
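A minimal least-error-squares phasor fit over a fixed data window illustrates the estimation step; the 50 Hz system, sampling rate and window length are assumed for the sketch, and the paper's recursive variable-window refinement is not reproduced:

```python
import numpy as np

f0, fs = 50.0, 3200.0                    # nominal frequency and sampling rate (assumed)
n = np.arange(64)                        # one 20 ms cycle = 64 samples at 3.2 kHz
t = n / fs
v = 230.0 * np.cos(2*np.pi*f0*t + 0.3)   # measured voltage window (synthetic)

# Model v(t) ~ A cos(wt) + B sin(wt) and solve for A, B by least squares
w = 2*np.pi*f0
H = np.column_stack([np.cos(w*t), np.sin(w*t)])
A, B = np.linalg.lstsq(H, v, rcond=None)[0]

# v = M cos(wt + phi)  =>  A = M cos(phi), B = -M sin(phi)
magnitude = np.hypot(A, B)
phase = np.arctan2(-B, A)
print(magnitude, phase)
```

In a DVR the fit is re-solved as each new sample arrives, which is where the recursive formulation pays off: the window length trades estimation speed against accuracy during sags.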
A Novel Uncoded SER/BER Estimation Method
Directory of Open Access Journals (Sweden)
Mahesh Patel
2015-06-01
Due to rapidly increasing data speed requirements, it has become essential to use the available frequency spectrum intelligently. In wireless communication systems, channel quality parameters are often used to enable resource allocation techniques that improve system capacity and user quality. The uncoded bit or symbol error rate (SER) is specified as an important parameter in the second and third generation partnership project (3GPP). Nonetheless, techniques to estimate the uncoded SER are rarely published. This paper introduces a novel uncoded bit error rate (BER) estimation method using the accurate-bits sequence of the new channel codes over the AWGN channel. Here, we use the new channel codes as a forward error correction coding scheme for our communication system. The paper also presents simulation results to demonstrate and compare the estimation accuracy of the proposed method over the AWGN channel.
Methods for estimating uncertainty in factor analytic solutions
Directory of Open Access Journals (Sweden)
P. Paatero
2013-08-01
EPA PMF version 5.0 and the underlying multilinear engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
Power Network Parameter Estimation Method Based on Data Mining Technology
Institute of Scientific and Technical Information of China (English)
ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian
2008-01-01
The parameter values, which actually change with circumstances, weather, load level, etc., have a great effect on the result of state estimation. A new parameter estimation method based on data mining technology is proposed. A clustering method is used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing techniques are applied to treat isolated points, missing data and noisy data in the samples for the classified groups. The measurement data belonging to each class are introduced into a linear regression equation to obtain the regression coefficients and actual parameters by the least squares method. A practical system demonstrates the correctness, reliability and strong practicability of the proposed method.
Method for estimating spin-spin interactions from magnetization curves
Tamura, Ryo; Hukushima, Koji
2017-02-01
We develop a method to estimate the spin-spin interactions in the Hamiltonian from the observed magnetization curve by machine learning based on Bayesian inference. In our method, plausible spin-spin interactions are determined by maximizing the posterior distribution, which is the conditional probability of the spin-spin interactions in the Hamiltonian for a given magnetization curve with observation noise. The conditional probability is obtained with the Markov chain Monte Carlo simulations combined with an exchange Monte Carlo method. The efficiency of our method is tested using synthetic magnetization curve data, and the results show that spin-spin interactions are estimated with a high accuracy. In particular, the relevant terms of the spin-spin interactions are successfully selected from the redundant interaction candidates by the l1 regularization in the prior distribution.
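The ℓ1-based selection of relevant couplings can be illustrated with a generic sparse linear model solved by iterative soft thresholding; the synthetic design below merely stands in for magnetization data, and none of the paper's exchange Monte Carlo machinery is reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_pairs = 200, 10
X = rng.standard_normal((n_obs, n_pairs))   # stand-in "features" per spin pair
j_true = np.zeros(n_pairs)
j_true[[1, 6]] = [1.5, -2.0]                # only two couplings are nonzero
y = X @ j_true + 0.05 * rng.standard_normal(n_obs)

# ISTA: gradient step on the squared error, then soft threshold (the l1 prior)
lam = 5.0
step = 1.0 / np.linalg.norm(X, 2)**2        # 1 / Lipschitz constant of the gradient
j = np.zeros(n_pairs)
for _ in range(500):
    j = j - step * X.T @ (X @ j - y)
    j = np.sign(j) * np.maximum(np.abs(j) - step * lam, 0.0)

print(np.flatnonzero(np.abs(j) > 0.1))
```

The soft-threshold step drives irrelevant couplings exactly to zero, which is the selection effect the abstract attributes to the ℓ1 regularization in the prior.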
Estimation of water percolation by different methods using TDR
Directory of Open Access Journals (Sweden)
Alisson Jadavi Pereira da Silva
2014-02-01
Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate the percolation estimated by time domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods, and the change in water storage in the soil profile at 16 moisture measurement points at different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was installed in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated based on continuous monitoring with TDR, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation in short time intervals without predetermining soil hydraulic properties such as water retention and hydraulic conductivity. The estimates obtained with the Darcy-Buckingham equation using the K(θ) function predicted by the method of Hillel et al. (1972) were compatible with the percolation obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
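The storage-change calculation underlying the TDR approach can be sketched as follows; the probe depths, moisture readings and time step are invented values, not data from the study:

```python
import numpy as np

depths = np.array([0.0, 0.2, 0.4, 0.6])        # TDR probe depths [m] (assumed layout)
theta_t1 = np.array([0.32, 0.31, 0.30, 0.30])  # volumetric water content, reading 1
theta_t2 = np.array([0.30, 0.29, 0.28, 0.28])  # reading 2, taken dt_h hours later
dt_h = 2.0

def storage(theta, z):
    """Profile water storage [m]: trapezoidal integration of theta over depth."""
    return np.sum((theta[:-1] + theta[1:]) / 2.0 * np.diff(z))

# With the surface sealed (no evaporation, no input), water lost from storage
# can only have left through the bottom of the profile, i.e. by percolation.
q_mm_per_h = (storage(theta_t1, depths) - storage(theta_t2, depths)) / dt_h * 1000.0
print(q_mm_per_h)
```

This is why the approach needs no predetermined retention or conductivity functions: the flux follows from mass balance on the monitored moisture profile alone.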
Accurate position estimation methods based on electrical impedance tomography measurements
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
Comparison of density estimation methods for astronomical datasets
Ferdosi, B.J.; Buddelmeijer, H.; Trager, S.C.; Wilkinson, M.H.F.; Roerdink, J.B.T.M.
2011-01-01
Context. Galaxies are strongly influenced by their environment. Quantifying the galaxy density is a difficult but critical step in studying the properties of galaxies. Aims. We aim to determine differences in density estimation methods and their applicability in astronomical problems. We study the p
Comparing different methods for estimating radiation dose to the conceptus
Energy Technology Data Exchange (ETDEWEB)
Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)
2017-02-15
To compare different methods available in the literature for estimating radiation dose to the conceptus (D_conceptus) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D_conceptus was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D_uterus, was calculated as an alternative for D_conceptus, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D_uterus and D_conceptus was studied. Dose to the conceptus and percent error with respect to D_conceptus was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5%, but applicability was limited. Estimating an accurate D_conceptus requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)
Methods for Estimating Capacities of Gaussian Quantum Channels
Pilyavets, Oleg V; Mancini, Stefano
2009-01-01
We present a perturbative approach to the problem of estimating capacities of Gaussian quantum channels. It relies on the expansion of the von Neumann entropy of Gaussian states as a function of the symplectic eigenvalues of the quadrature covariance matrices. We apply this method to the classical capacity of a lossy bosonic channel in both the memory and memoryless cases.
A fast alternating projection method for complex frequency estimation
Andersson, Fredrik; Ivert, Per-Anders
2011-01-01
The problem of approximating a sampled function by a sum of a fixed number of complex exponentials is considered. We use alternating projections between fixed-rank matrices and Hankel matrices to obtain such an approximation. Convergence, convergence rates and error estimates for this technique are proven, and fast algorithms are developed. We compare the numerical results obtained with those of the MUSIC and ESPRIT methods.
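The alternating projection idea described in the abstract can be sketched as a Cadzow-style iteration: project the data's Hankel matrix onto rank-k matrices (truncated SVD), then back onto Hankel matrices (anti-diagonal averaging), and repeat. This is a generic illustration of that projection pair, not the paper's accelerated algorithm; the window split m = N//2 + 1 is an arbitrary choice.

```python
import numpy as np

def nearest_hankel(A):
    """Project A onto Hankel matrices by averaging each anti-diagonal."""
    m, n = A.shape
    h = np.zeros(m + n - 1, dtype=A.dtype)
    cnt = np.zeros(m + n - 1)
    for i in range(m):
        h[i:i + n] += A[i]
        cnt[i:i + n] += 1
    h /= cnt
    return np.array([h[i:i + n] for i in range(m)]), h

def nearest_rank_k(A, k):
    """Project A onto matrices of rank <= k via a truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def fit_exponential_sum(f, k, iters=100):
    """Approximate samples f by a sequence generated by k complex exponentials."""
    N = len(f)
    m = N // 2 + 1                       # rows; columns n = N - m + 1, so m + n - 1 = N
    A = np.array([f[i:i + N - m + 1] for i in range(m)])
    for _ in range(iters):
        A, _ = nearest_hankel(nearest_rank_k(A, k))
    return nearest_hankel(A)[1]          # length-N approximant
```

A noiseless sum of k exponentials yields an exactly rank-k Hankel matrix, so the iteration is a fixed point there; with noisy data it converges to a nearby low-rank Hankel matrix.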
Hydrological drought. Processes and estimation methods for streamflow and groundwater
Tallaksen, L.; Lanen, van H.A.J.
2004-01-01
Hydrological drought is a textbook for university students, practising hydrologists and researchers. The main scope of this book is to provide the reader with a comprehensive review of processes and estimation methods for streamflow and groundwater drought. It includes a qualitative conceptual
A study of methods to estimate debris flow velocity
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Debris flow velocities are commonly back-calculated from superelevation events, which requires subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among the investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
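The superelevation back-calculation the abstract refers to is conventionally based on the forced-vortex equation, v = sqrt(g * Rc * dh / (k * b)). A minimal sketch, assuming SI units; the correction factor k is left as a user-supplied parameter because published values differ by author:

```python
import math

def superelevation_velocity(radius_m, superelevation_m, width_m, g=9.81, k=1.0):
    """Forced-vortex velocity estimate from a superelevation event.

    radius_m         -- radius of curvature of the channel bend, Rc
    superelevation_m -- flow-surface elevation difference across the bend, dh
    width_m          -- flow width, b
    k                -- empirical correction factor (1.0 here; varies by author)
    """
    return math.sqrt(g * radius_m * superelevation_m / (k * width_m))
```

For example, a 50 m bend radius, 1 m of superelevation and a 10 m flow width give a velocity of about 7 m/s, illustrating why the subjective radius-of-curvature estimate dominates the result.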
Lidar method to estimate emission rates from extended sources
Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...
Development of a method to estimate coal pillar loading
CSIR Research Space (South Africa)
Roberts, DP
2002-09-01
Full Text Available Final Report: Development of a method to estimate coal pillar loading. DP Roberts, JN van der Merwe, I Canbulat, EJ Sellers and S Coetzer. Research Agency: CSIR Miningtek. Project No: COL 709. Date: September 2002. Report No: 2001-0651. Executive...
Joint Pitch and DOA Estimation Using the ESPRIT method
DEFF Research Database (Denmark)
Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom
2015-01-01
In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and then the ESPRIT method based on subspace techniques that exploits...... method is illustrated on a synthetic signal as well as real-life recorded data....
Marasek, K; Nowicki, A
1994-01-01
The performance of three spectral techniques (FFT, AR Burg and ARMA) for maximum frequency estimation of the Doppler spectra is described. Different definitions of fmax were used: frequency at which spectral power decreases down to 0.1 of its maximum value, modified threshold crossing method (MTCM) and novel geometrical method. "Goodness" and efficiency of estimators were determined by calculating the bias and the standard deviation of the estimated maximum frequency of the simulated Doppler spectra with known statistics. The power of analysed signals was assumed to have the exponential distribution function. The SNR ratios were changed over the range from 0 to 20 dB. Different spectrum envelopes were generated. A Gaussian envelope approximated narrow band spectral processes (P. W. Doppler) and rectangular spectra were used to simulate a parabolic flow insonified with C. W. Doppler. The simulated signals were generated out of 3072-point records with sampling frequency of 20 kHz. The AR and ARMA models order selections were done independently according to Akaike Information Criterion (AIC) and Singular Value Decomposition (SVD). It was found that the ARMA model, computed according to SVD criterion, had the best overall performance and produced results with the smallest bias and standard deviation. In general AR(SVD) was better than AR(AIC). The geometrical method of fmax estimation was found to be more accurate than other tested methods, especially for narrow band signals.
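The simplest of the fmax definitions mentioned above, the frequency at which spectral power falls to 0.1 of its maximum, can be sketched as follows (an illustrative implementation of that threshold definition, not the authors' code or their geometrical method):

```python
import numpy as np

def fmax_threshold(psd, freqs, fraction=0.1):
    """Estimate the maximum Doppler frequency as the highest frequency
    whose spectral power still exceeds `fraction` of the peak power."""
    above = np.nonzero(psd >= fraction * psd.max())[0]
    return freqs[above[-1]]
```

On a smooth spectrum this picks the upper edge of the band; on noisy spectra the threshold crossing is biased by noise peaks, which is what motivates the modified threshold crossing and geometrical variants studied in the paper.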
Nonparametric methods for drought severity estimation at ungauged sites
Sadri, S.; Burn, D. H.
2012-12-01
The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
Adaptive Spectral Estimation Methods in Color Flow Imaging.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Avdal, Jorgen; Lovstakken, Lasse
2016-11-01
Clutter rejection for color flow imaging (CFI) remains a challenge due to either a limited amount of temporal samples available or nonstationary tissue clutter. This is particularly the case for interleaved CFI and B-mode acquisitions. Low velocity blood signal is attenuated along with the clutter due to the long transition band of the available clutter filters, causing regions of biased mean velocity estimates or signal dropouts. This paper investigates how adaptive spectral estimation methods, Capon and blood iterative adaptive approach (BIAA), can be used to estimate the mean velocity in CFI without prior clutter filtering. The approach is based on confining the clutter signal in a narrow spectral region around the zero Doppler frequency while keeping the spectral side lobes below the blood signal level, allowing for the clutter signal to be removed by thresholding in the frequency domain. The proposed methods are evaluated using computer simulations, flow phantom experiments, and in vivo recordings from the common carotid and jugular vein of healthy volunteers. Capon and BIAA methods could estimate low blood velocities, which are normally attenuated by polynomial regression filters, and may potentially give better estimation of mean velocities for CFI at a higher computational cost. The Capon method decreased the bias by 81% in the transition band of the used polynomial regression filter for small packet size ( N=8 ) and low SNR (5 dB). Flow phantom and in vivo results demonstrate that the Capon method can provide color flow images and flow profiles with lower variance and bias especially in the regions close to the artery walls.
Three Different Methods of Estimating LAI in a Small Watershed
Speckman, H. N.; Ewers, B. E.; Beverly, D.
2015-12-01
Leaf area index (LAI) is a critical input to models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI; most are labor intensive, and they often fail to converge on similar estimates. Recent large-scale bark-beetle-induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates because live and dead branches cannot be differentiated in dense canopy. Our study aims to quantify LAI using three different methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle-infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI from in-situ hemispherical canopy photographs analyzed with Hemisfer software. The second technique used the Kaufmann 1982 allometrics with forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark-beetle-driven mortality. The final technique used airborne light detection and ranging (LIDAR) first returns to estimate canopy heights and crown area; LIDAR final returns provided topographic information and were ground-truthed during the forest inventories. Once the data were collected, a fractal analysis was conducted comparing the three methods. Species composition was driven by slope position and elevation. Ultimately, the three techniques provided very different estimations of LAI, but each had its advantage: estimates from hemispherical photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR were able to quickly and
Methods for Measuring and Estimating Methane Emission from Ruminants
Directory of Open Access Journals (Sweden)
Jørgen Madsen
2012-04-01
Full Text Available This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of the different methods used to quantify the enteric methane emission from ruminants. The best-known methods, chambers/respiration chambers, the SF6 technique and the in vitro gas production technique, as well as the newer CO2 methods, are described. Model estimations, which are used to calculate national budgets and single-cow enteric emission from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, the combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim, equipment, knowledge, time and money available, but the interpretation of results obtained with a given method can be improved if knowledge about the disadvantages and advantages is used in the planning of experiments.
Moment-Based Method to Estimate Image Affine Transform
Institute of Scientific and Technical Information of China (English)
FENG Guo-rui; JIANG Ling-ge
2005-01-01
The estimation of an affine transform is a crucial problem in the image recognition field. This paper resorts to invariant properties under translation, rotation and scaling, and proposes a simple method to estimate the affine transform kernel of a two-dimensional gray image. Maps applied to the original produce correlative points that accurately reflect the affine transform feature of the image. Furthermore, the unknown variables in the kernel of the transform are calculated. The whole scheme refers only to first-order moments; therefore, it has very good stability.
New Power Estimation Methods for Highly Overloaded Synchronous CDMA Systems
Nashtaali, Damoun; Pad, Pedram; Moghadasi, Seyed Reza; Marvasti, Farokh
2011-01-01
In CDMA systems, the received user powers vary as users move. Thus, CDMA receivers consist of two stages: the first stage is the power estimator and the second is a multi-user detector (MUD). Conventional methods for estimating the user powers are suitable for under- or fully-loaded cases (when the number of users is less than or equal to the spreading gain). These methods fail to work for overloaded CDMA systems because of high interference among the users. Since bandwidth is becoming more and more valuable, it is worth considering overloaded CDMA systems. In this paper, an optimum user power estimation for overloaded CDMA systems with Gaussian inputs is proposed. We also introduce a suboptimum method with lower complexity whose performance is very close to the optimum one. We show that the proposed methods work for highly overloaded systems (up to m(m + 1)/2 users for a system with only m chips). The performance of the proposed methods is demonstrated by simulations. In ...
Parameter estimation method for blurred cell images from fluorescence microscope
He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin
2016-10-01
Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, resulting in low signal-to-noise ratio (SNR) and poor image quality, which affects the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images, based on the power-law properties of the power spectrum of cell images, is proposed. The circular Radon transform (CRT) is used to identify the zero mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm; the parameters are then optimized through the gradient descent method. Synthetic experiments confirmed that the proposed method effectively increases the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results on actual microscopic cell images verified the superiority of the proposed parameter estimation method over other methods in terms of qualitative visual sense as well as quantitative gradient and PSNR.
Seasonal adjustment methods and real time trend-cycle estimation
Bee Dagum, Estela
2016-01-01
This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...
Method to Estimate the Dissolved Air Content in Hydraulic Fluid
Hauser, Daniel M.
2011-01-01
In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could theoretically be carried out at higher pressures and elevated
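The calculation chain described above can be sketched as follows. The solubility coefficients used here are placeholders for illustration only; real Henry's-law/Ostwald coefficients for a given hydraulic fluid must come from a lubricant gas-solubility standard, as the abstract notes.

```python
def dissolved_air_fraction(p_o2_atm, k_o2, k_n2, o2_air_frac=0.209):
    """Estimate the total dissolved-air volume fraction from a measured
    O2 partial pressure, assuming Henry's law and that the dissolved
    N2:O2 ratio equals the equilibrium ratio at atmospheric conditions.

    p_o2_atm -- measured O2 partial pressure in the fluid, atm
    k_o2, k_n2 -- Ostwald-style solubility coefficients (placeholders here)
    """
    v_o2 = k_o2 * p_o2_atm                 # dissolved O2, vol gas / vol fluid
    n2_air_frac = 1.0 - o2_air_frac        # treat the non-O2 balance of air as N2
    # equilibrium dissolved N2:O2 volume ratio at 1 atm of air
    ratio = (k_n2 * n2_air_frac) / (k_o2 * o2_air_frac)
    return v_o2 * (1.0 + ratio)            # total dissolved air, vol/vol
```

With illustrative coefficients k_o2 = 0.15 and k_n2 = 0.12 and a fluid equilibrated with air (p_O2 ≈ 0.209 atm), this yields a dissolved-air fraction of roughly 13% by volume, the order of magnitude typically quoted for mineral-oil hydraulic fluids.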
Dental age estimation using Willems method: A digital orthopantomographic study
Directory of Open Access Journals (Sweden)
Rezwana Begum Mohammed
2014-01-01
Full Text Available In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who fit the study criteria were obtained. Development of the mandibular teeth (from the central incisor to the second molar) of the left quadrant was assessed and DA was estimated using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years, which is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular tooth development can be used to estimate mean DA using the Willems method, as well as an estimated age range for an individual of unknown CA.
Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2009-10-01
Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
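For orientation, the OLS-to-ORR step that the abstract builds on can be sketched as follows. The URR and MUR estimators themselves are defined in the paper and are not reproduced here; this only illustrates how ridge shrinkage modifies the OLS normal equations.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: solve the normal equations X'X b = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ordinary ridge regression: add k*I to X'X to shrink the estimate
    and stabilize it under multicollinearity."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)
```

With k = 0 the ridge estimator reduces to OLS; for k > 0 the coefficient norm shrinks, trading a little bias for a (potentially large) variance reduction when the columns of X are nearly collinear.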
ACCELERATED METHODS FOR ESTIMATING THE DURABILITY OF PLAIN BEARINGS
Directory of Open Access Journals (Sweden)
Myron Czerniec
2014-09-01
Full Text Available The paper presents methods for determining the durability of slide bearings. The developed methods speed up the calculation by a factor of up to 100,000 compared to the accurate solution obtained with the generalized cumulative model of wear. The paper determines the accuracy of the results for estimating the durability of bearings depending on the size of the blocks of constant conditions of contact interaction between a shaft with small out-of-roundness and a bush with a circular contour. The paper also gives an approximate dependence for determining accurate durability using either a more accurate or an additional method.
A method for density estimation based on expectation identities
Peralta, Joaquín; Loyola, Claudia; Loguercio, Humberto; Davis, Sergio
2017-06-01
We present a simple and direct method for non-parametric estimation of a one-dimensional probability density, based on the application of the recent conjugate variables theorem. The method expands the logarithm of the probability density ln P(x|I) in terms of a complete basis and numerically solves for the coefficients of the expansion using a linear system of equations. No Monte Carlo sampling is needed. We present preliminary results that show the practical usefulness of the method for modeling statistical data.
Methods of Mmax Estimation East of the Rocky Mountains
Wheeler, Russell L.
2009-01-01
Several methods have been used to estimate the magnitude of the largest possible earthquake (Mmax) in parts of the Central and Eastern United States and adjacent Canada (CEUSAC). Each method has pros and cons. The largest observed earthquake in a specified area provides an unarguable lower bound on Mmax in the area. Beyond that, all methods are undermined by the enigmatic nature of geologic controls on the propagation of large CEUSAC ruptures. Short historical-seismicity records decrease the defensibility of several methods that are based on characteristics of small areas in most of CEUSAC. Methods that use global tectonic analogs of CEUSAC encounter uncertainties in understanding what 'analog' means. Five of the methods produce results that are inconsistent with paleoseismic findings from CEUSAC seismic zones or individual active faults.
An Adaptive Background Subtraction Method Based on Kernel Density Estimation
Directory of Open Access Journals (Sweden)
Mignon Park
2012-09-01
Full Text Available In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize it and update it subsequently at every frame by controlling the learning rate according to the situations. We apply an adaptive threshold method based on image changes to effectively subtract the dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods so as to verify its performance.
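The per-pixel KDE idea can be sketched with a minimal Parzen-window illustration. The paper's first-frame initialization, situation-dependent learning rate and adaptive threshold are omitted; the bandwidth and threshold values below are placeholders.

```python
import math

def kde_prob(x, samples, sigma=2.0):
    """Parzen (Gaussian kernel) estimate of p(x) from a pixel's
    past intensity samples."""
    c = 1.0 / (len(samples) * sigma * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - s) / sigma) ** 2) for s in samples)

def is_foreground(x, samples, sigma=2.0, thresh=1e-3):
    """Label a pixel foreground when its current intensity is unlikely
    under the background density estimated from past samples."""
    return kde_prob(x, samples, sigma) < thresh
```

A pixel whose history hovers around one intensity yields a sharply peaked density, so a passing object (intensity far from the peak) falls below the threshold and is flagged as foreground.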
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Directory of Open Access Journals (Sweden)
Julius Hannink
2017-08-01
Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation-process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
Vegetation index methods for estimating evapotranspiration by remote sensing
Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.
2010-01-01
Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.
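A common way such VI methods are operationalized is a crop-coefficient-style scaling of reference ET by a linear function of a vegetation index. A minimal sketch with illustrative coefficients; the slope a and offset b are placeholders that must be fit per biome against ground ETa, as the review stresses.

```python
def eta_from_ndvi(et0_mm_day, ndvi, a=1.25, b=-0.2):
    """Estimate actual ET as ETa = ET0 * Kc, with the crop/vegetation
    coefficient Kc modeled as a linear function of NDVI.
    a and b are illustrative values, not calibrated coefficients."""
    kc = max(0.0, a * ndvi + b)   # clamp: bare soil should not go negative
    return et0_mm_day * kc
```

For example, a day with 6 mm of reference ET over a dense canopy (NDVI 0.8) maps to about 4.8 mm of actual ET under these placeholder coefficients, while sparse vegetation maps toward zero.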
A Subspace Method for Dynamical Estimation of Evoked Potentials
Directory of Open Access Journals (Sweden)
Stefanos D. Georgiadis
2007-01-01
Full Text Available It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial estimation of EP characteristics. Prior information about the phase-locked properties of the EPs is assessed by means of the estimated signal subspace and eigenvalue decomposition. Then, for situations in which dynamic fluctuations from stimulus to stimulus can be expected, prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation methods (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some components of the EPs, and that the Kalman smoother algorithm is to be preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and EP estimates by means of independent component analysis applied as a preprocessing step on the multichannel measurements.
Fault Tolerant Matrix Pencil Method for Direction of Arrival Estimation
Yerriswamy, T; 10.5121/sipij.2011.2306
2011-01-01
Continuing to estimate the direction-of-arrival (DOA) of the signals impinging on an antenna array, even when a few elements of the underlying uniform linear antenna array (ULA) fail to work, is of practical interest in RADAR, SONAR and wireless radio communication systems. This paper proposes a new technique to estimate the DOAs when a few elements are malfunctioning. The technique combines a Singular Value Thresholding (SVT) based Matrix Completion (MC) procedure with the Direct Data Domain (D^3) based Matrix Pencil (MP) method. When element failure is observed, MC is first performed to recover the missing data from the failed elements, and then the MP method is used to estimate the DOAs. We also propose a very simple technique to detect the location of the failed elements, which is required to perform the MC procedure. We provide simulation studies to demonstrate the performance and usefulness of the proposed technique. The results indicate a better performance of the proposed DOA estimation scheme under...
Richardson, John G.
2009-11-17
An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.
Estimation of Otoacoustic Emission Signals by Using the Synchronous Averaging Method
Directory of Open Access Journals (Sweden)
Linas Sankauskas
2011-08-01
Full Text Available The study presents the results of an investigation of the synchronous averaging method and its application to the estimation of impulse-evoked otoacoustic emission (IEOAE) signals. The method was analyzed using synthetic and real signals. Synthetic signals were modeled as mixtures of a deterministic component with noise realizations. Two types of noise were used: normal (Gaussian) and dominated by transient impulses (Laplacian). The signal-to-noise ratio was used as the signal quality measure after processing. In order to account for the varying amplitude of the deterministic component across realizations, a weighted averaging method was investigated. Results show that the performance of the synchronous averaging method is very similar for both types of noise, Gaussian and Laplacian. Weighted averaging helps to cope with a varying deterministic component or noise level in the case of nonhomogeneous ensembles, as is the case for IEOAE signals. Article in Lithuanian.
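A minimal sketch of the weighted-averaging idea follows. It assumes, as one common choice, weights inversely proportional to each realization's estimated noise power; the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_average(realizations):
    """Weighted synchronous averaging of repeated response realizations.

    Each realization is weighted inversely to its estimated noise power,
    so noisy sweeps contribute less to the ensemble estimate.
    """
    x = np.asarray(realizations, dtype=float)   # shape: (n_sweeps, n_samples)
    mean = x.mean(axis=0)                        # plain synchronous average
    noise = x - mean                             # residual of each sweep
    noise_power = (noise ** 2).mean(axis=1)      # per-sweep noise estimate
    w = 1.0 / np.maximum(noise_power, 1e-12)
    w /= w.sum()
    return (w[:, None] * x).sum(axis=0)

# Example: a deterministic component buried in unequal noise levels.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 5 * t)
sweeps = [signal + rng.normal(0, sigma, t.size)
          for sigma in (0.1,) * 8 + (2.0,) * 2]
est = weighted_average(sweeps)
```

On such a nonhomogeneous ensemble the weighted estimate suppresses the two noisy sweeps and beats the plain average.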
Simple Method for Soil Moisture Estimation from Sentinel-1 Data
Gilewski, Paweł Grzegorz; Kedzior, Mateusz Andrzej; Zawadzki, Jaroslaw
2016-08-01
In this paper, the authors calculated high-resolution volumetric soil moisture (SM) from Sentinel-1 data for the Kampinos National Park in Poland and verified the obtained results. To do so, linear regression coefficients (LRC) between in-situ SM measurements and Sentinel-1 radar backscatter values were calculated. Next, the LRC were applied to obtain SM estimates from Sentinel-1 data. The Sentinel-1 SM was verified against in-situ measurements and low-resolution SMOS SM estimates using Pearson's linear correlation coefficient. The simple SM retrieval method from radar data used in this study gives better results for meadows and when Sentinel-1 data in VH polarisation are used. Further research should be conducted to prove the usefulness of the proposed method.
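The three-step procedure (fit the LRC, apply them, verify with Pearson's r) can be sketched as below. The backscatter and in-situ values are purely illustrative, not the Kampinos data.

```python
import numpy as np

# Hypothetical paired observations: Sentinel-1 VH backscatter sigma0 (dB)
# and in-situ volumetric soil moisture (m^3/m^3); values are illustrative.
sigma0_db = np.array([-22.1, -21.3, -20.8, -19.9, -19.2, -18.5, -17.9])
sm_insitu = np.array([0.08, 0.11, 0.13, 0.17, 0.20, 0.24, 0.27])

# Step 1: fit linear regression coefficients (LRC): sm = a * sigma0 + b.
a, b = np.polyfit(sigma0_db, sm_insitu, deg=1)

# Step 2: apply the LRC to backscatter values to estimate SM.
def estimate_sm(sigma0):
    return a * np.asarray(sigma0) + b

# Step 3: verify against in-situ data with Pearson's correlation coefficient.
r = np.corrcoef(estimate_sm(sigma0_db), sm_insitu)[0, 1]
```

Wetter soil scatters more strongly, so the fitted slope is positive and the correlation against in-situ values is high for this toy data.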
Adaptive error covariances estimation methods for ensemble Kalman filters
Energy Technology Data Exchange (ETDEWEB)
Zhen, Yicun, E-mail: zhen@math.psu.edu [Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 (United States); Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, The Pennsylvania State University, University Park, PA 16802 (United States)
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computation of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is indeed comparable to a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows information from products of innovation processes of more than one lag to be used. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the L-96 example.
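A minimal scalar illustration of the underlying idea, recovering noise covariances from products of innovations up to lag one, is sketched below. This is not the authors' algorithm (which handles nonlinear ensemble filters and more lags); it only demonstrates the moment-matching principle on a random-walk model with a deliberately mis-specified filter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar random walk: x_k = x_{k-1} + w_k (variance Q), y_k = x_k + v_k
# (variance R). True values are used only to simulate the data.
Q_true, R_true, n = 0.04, 1.0, 200_000
x = np.cumsum(rng.normal(0.0, np.sqrt(Q_true), n))
y = x + rng.normal(0.0, np.sqrt(R_true), n)

# Run a (suboptimal) Kalman filter with the correct Q but a wrong R guess,
# collecting the innovation sequence. F = H = 1 here.
R_guess = 0.3
xf, P, K = 0.0, 1.0, 0.0
innov = np.empty(n)
for k, yk in enumerate(y):
    P += Q_true                      # time update
    innov[k] = yk - xf               # innovation
    K = P / (P + R_guess)            # gain (converges to a fixed value)
    xf += K * innov[k]
    P *= (1.0 - K)

# For the steady-state constant-gain filter the innovation moments satisfy
#   C0 = Sigma + R,   C1 = (1 - K) * Sigma - K * R,
# where Sigma is the prior error variance of the suboptimal filter.
# Moment-matching these two equations recovers R and then Q.
C0 = np.mean(innov[1000:] ** 2)                  # skip the filter transient
C1 = np.mean(innov[1000:-1] * innov[1001:])
R_hat = (1.0 - K) * C0 - C1
Sigma = C0 - R_hat
Q_hat = Sigma * (1.0 - (1.0 - K) ** 2) - K ** 2 * R_hat
```

Even though the filter was run with the wrong R, the lag-0 and lag-1 innovation products identify the true covariances.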
An extension of relational methods in mortality estimations
Directory of Open Access Journals (Sweden)
2001-06-01
Full Text Available Actuaries and demographers have a long tradition of utilising collateral data to improve mortality estimates. Three main approaches have been used to accomplish the improvement: mortality laws, model life tables, and relational methods. The present paper introduces a regression model that incorporates all of the beneficial principles from each of these approaches. The model is demonstrated on mortality data pertaining to various groups of life-insured people in Sweden.
Interpretation of the method of images in estimating superconducting levitation
Energy Technology Data Exchange (ETDEWEB)
Perez-Diaz, Jose Luis [Departamento de Ingenieria Mecanica, Universidad Carlos III de Madrid, Butarque 15, E28911 Leganes (Spain)], E-mail: jlperez@ing.uc3m.es; Garcia-Prada, Juan Carlos [Departamento de Ingenieria Mecanica, Universidad Carlos III de Madrid, Butarque 15, E28911 Leganes (Spain)
2007-12-01
Among the different papers devoted to superconducting levitation of a permanent magnet over a superconductor using the method of images, there is a discrepancy of a factor of two in the estimated lift force. This is not a minor matter but an interesting fundamental question that contributes to understanding the physical phenomenon of 'imaging' on a superconductor surface. We resolve it, clarify the underlying physical behavior, and suggest the reinterpretation of some previous experiments.
Non-destructive methods to estimate physical aging of plywood
Bobadilla Maldonado, Ignacio; Santirso, María Cristina; Herrero Giner, Daniel; Esteban Herrero, Miguel; Iñiguez Gonzalez, Guillermo
2011-01-01
This paper studies the relationship between aging, physical changes and the results of non-destructive testing of plywood. 176 pieces of plywood were tested to analyze their actual and estimated density using non-destructive methods (screw withdrawal force and ultrasound wave velocity) during a laboratory aging test. From the results of statistical analysis it can be concluded that there is a strong relationship between the non-destructive measurements carried out and the decline in the phys...
Quantum mechanical method for estimating ionicity of spinel ferrites
Energy Technology Data Exchange (ETDEWEB)
Ji, D.H. [Hebei Advanced Thin Films Laboratory, Department of Physics, Hebei Normal University, Shijiazhuang City 050024 (China); Tang, G.D., E-mail: tanggd@mail.hebtu.edu.cn [Hebei Advanced Thin Films Laboratory, Department of Physics, Hebei Normal University, Shijiazhuang City 050024 (China); Li, Z.Z.; Hou, X.; Han, Q.J.; Qi, W.H.; Liu, S.R.; Bian, R.R. [Hebei Advanced Thin Films Laboratory, Department of Physics, Hebei Normal University, Shijiazhuang City 050024 (China)
2013-01-15
The ionicity (0.879) of the cubic spinel ferrite Fe3O4 has been determined using both experimental magnetization and density-of-states calculations from density functional theory. Furthermore, a quantum mechanical method for estimating the ionicity of spinel ferrites is proposed by comparison with Phillips' ionicity. On this basis, the ionicities of the spinel ferrites MFe2O4 (M = Mn, Fe, Co, Ni, Cu) are calculated. As an application, the ion distributions at the (A) and [B] sites of (A)[B]2O4 spinel ferrites MFe2O4 (M = Fe, Co, Ni, Cu) are calculated using the current ionicity values. Highlights: • The ionicity of Fe3O4 was determined as 0.879 by density functional theory. • The ionicities of spinel ferrites were estimated by a quantum mechanical method. • The quantum mechanical method for estimating ionicity is suitable for II-VI compounds. • The ion distributions of MFe2O4 are calculated using the current ionicity values.
A new method to estimate genetic gain in annual crops
Directory of Open Access Journals (Sweden)
Flávio Breseghello
1998-12-01
Full Text Available The genetic gain obtained by breeding programs to improve quantitative traits may be estimated using data from regional trials. A new statistical method for this estimate is proposed, which includes four steps: (a) a joint analysis of the regional trial data using a generalized linear model to obtain adjusted genotype means and the covariance matrix of these means for the whole studied period; (b) calculation of the arithmetic mean of the adjusted genotype means, exclusively for the group of genotypes evaluated each year; (c) direct comparison of years based on the arithmetic means calculated; and (d) estimation of the mean genetic gain by regression. Using the generalized least squares method, a weighted estimate of the mean genetic gain during the period is calculated. This method permits a better cancellation of genotype x year and genotype x trial/year interactions, thus resulting in more precise estimates. It can be applied to unbalanced data, allowing the estimation of genetic gain in series of multilocational trials.
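The final regression step (d) amounts to generalized least squares on the yearly means, weighted by their covariance matrix. A sketch with hypothetical numbers (the yields and covariances below are invented for illustration):

```python
import numpy as np

# Illustrative yearly means of adjusted genotype means (hypothetical data).
years = np.array([1990, 1991, 1992, 1993, 1994, 1995], dtype=float)
ybar = np.array([3.10, 3.18, 3.22, 3.35, 3.41, 3.52])      # t/ha
V = np.diag([0.004, 0.003, 0.005, 0.004, 0.003, 0.004])    # cov. of the means

# Generalized least squares for ybar = b0 + gain * year, with Cov(error) = V:
#   beta = (X' V^-1 X)^-1 X' V^-1 ybar
X = np.column_stack([np.ones_like(years), years])
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ ybar)
gain_per_year = beta[1]     # weighted estimate of mean genetic gain per year
```

With a full (non-diagonal) covariance matrix from the joint analysis, the same two lines of algebra apply unchanged.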
New method of estimation of cosmic ray nucleus energy
Korotkova, N A; Postnikov, E B; Roganova, T M; Sveshnikova, L G; Turundaevskij, A N
2002-01-01
A new approach to the estimation of primary cosmic ray nucleus energy is presented. It is based on measurement of the spatial density of secondary particles originated in nuclear interactions in the target and enhanced by a thin converter layer. The proposed method allows the creation of a relatively lightweight apparatus of large area and large geometrical factor, and can be applied in satellite and balloon experiments for all nuclei in the wide energy range of 10^11-10^16 eV/particle. The physical basis of the method, a full Monte Carlo simulation, and the field of application are presented.
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...
High-dimensional Sparse Inverse Covariance Estimation using Greedy Methods
Johnson, Christopher C; Ravikumar, Pradeep
2011-01-01
In this paper we consider the task of estimating the non-zero pattern of the sparse inverse covariance matrix of a zero-mean Gaussian random vector from a set of iid samples. Note that this is also equivalent to recovering the underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We present two novel greedy approaches to solving this problem. The first estimates the non-zero covariates of the overall inverse covariance matrix using a series of global forward and backward greedy steps. The second estimates the neighborhood of each node in the graph separately, again using greedy forward and backward steps, and combines the intermediate neighborhoods to form an overall estimate. The principal contribution of this paper is a rigorous analysis of the sparsistency, or consistency in recovering the sparsity pattern of the inverse covariance matrix. Surprisingly, we show that both the local and global greedy methods learn the full structure of the model with high probability given just $O(d\\log...
METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS
Directory of Open Access Journals (Sweden)
V. Panteleev Andrei
2017-01-01
Full Text Available The article considers the use of metaheuristic methods for constrained global optimization: "Big Bang - Big Crunch", "Fireworks Algorithm" and "Grenade Explosion Method", in the estimation of parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. The parameter values are obtained by minimizing a criterion describing the total squared error between the state vector coordinates and the precisely observed values at different points in time. A parallelepiped-type restriction is imposed on the parameter values. The metaheuristic constrained global optimization methods used for solving such problems do not guarantee the optimal result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes changes in the parameters of a chemical reaction, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, together with some recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The estimated parameters differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical optimization methods of zero, first and second order and
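A minimal sketch of one of the named metaheuristics, Big Bang - Big Crunch, applied to a toy parameter estimation problem. The population size, shrinkage schedule and fitness weighting below are common textbook choices, not necessarily those of the article, and the model is a hypothetical one-parameter decay.

```python
import numpy as np

rng = np.random.default_rng(0)

def big_bang_big_crunch(loss, bounds, pop=50, iters=100):
    """Minimal Big Bang - Big Crunch global search (a sketch).

    bounds: (low, high) arrays defining the parallelepiped constraint.
    """
    low, high = map(np.asarray, bounds)
    center = rng.uniform(low, high)
    for k in range(1, iters + 1):
        # Big Bang: scatter candidates around the current center of mass,
        # with a search radius that shrinks as 1/k.
        pts = center + rng.normal(0, 1, (pop, low.size)) * (high - low) / k
        pts = np.clip(pts, low, high)
        # Big Crunch: contract to a fitness-weighted center of mass
        # (smaller loss => larger weight).
        f = np.array([loss(p) for p in pts])
        w = 1.0 / (f + 1e-12)
        center = (w[:, None] * pts).sum(axis=0) / w.sum()
    return center

# Toy problem: recover the rate constant of x' = -a * x from observations
# of the analytic solution exp(-a * t), with true a = 1.7.
a_true = 1.7
t = np.linspace(0, 2, 30)
obs = np.exp(-a_true * t)
loss = lambda p: np.sum((np.exp(-p[0] * t) - obs) ** 2)
a_hat = big_bang_big_crunch(loss, (np.array([0.0]), np.array([5.0])))[0]
```

The squared-error criterion mirrors the article's objective; only the dynamic model is simplified to keep the sketch short.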
Laser heating method for estimation of carbon nanotube purity
Terekhov, S. V.; Obraztsova, E. D.; Lobach, A. S.; Konov, V. I.
A new method of carbon nanotube purity estimation has been developed on the basis of Raman spectroscopy. The spectra of carbon soot containing different amounts of nanotubes were registered under heating from a probing laser beam with step-by-step increased power density. The material temperature in the laser spot was estimated from the position of the tangential Raman mode, which demonstrates a linear thermal shift (-0.012 cm^-1/K) from its room-temperature position of 1592 cm^-1. The rate of the material temperature rise versus the laser power density (determining the slope of the corresponding graph) appeared to correlate strongly with the nanotube content in the soot. The influence of the experimental conditions on the slope value was excluded via a simultaneous measurement of a reference sample with a high nanotube content (95 vol.%). After calibration (done by comparing the Raman and the transmission electron microscopy data for the nanotube percentage in the same samples), the Raman-based method is able to provide a quantitative purity estimation for any nanotube-containing material.
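The temperature conversion described above is a simple linear relation. A sketch, assuming room temperature is 295 K (a value not stated in the abstract):

```python
def raman_temperature(peak_cm1, t_room=295.0):
    """Estimate laser-spot temperature (K) from the tangential-mode position.

    Uses the linear thermal shift quoted above: -0.012 cm^-1/K from
    1592 cm^-1 at room temperature (t_room is an assumed value).
    """
    return t_room + (1592.0 - peak_cm1) / 0.012

t_est = raman_temperature(1586.0)   # a mode downshifted by 6 cm^-1
```

A 6 cm^-1 downshift thus corresponds to a 500 K rise above room temperature.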
A projection and density estimation method for knowledge discovery.
Stanski, Adam; Hellwich, Olaf
2012-01-01
A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated at two very different real world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
Methods for cost estimation in software project management
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed with which the processes used in the software development field have changed makes the task of forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but there is a group of scientists for whom it can be solved using already-known mathematical methods (e.g. multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic proposal for a mathematical model based on genetic programming is made, including the description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking as its basis the software product life cycle and the current challenges and innovations in the software development area. Based on the authors' experience and the analysis of existing models and product lifecycles, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
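A toy illustration of the evolutionary approach: fitting the parameters of a COCOMO-style effort model, effort = a * size^b, with a plain genetic algorithm. The (size, effort) pairs and the GA settings are invented for illustration, not taken from the PROMISE/COCOMO 81 data.

```python
import random

random.seed(42)

# Hypothetical (size in KLOC, effort in person-months) pairs.
DATA = [(10, 28), (25, 73), (50, 152), (100, 315), (200, 652)]

def fitness(ind):
    """Mean squared error of effort = a * size ** b over the dataset
    (lower is better)."""
    a, b = ind
    return sum((a * s ** b - e) ** 2 for s, e in DATA) / len(DATA)

def mutate(ind, scale=0.1):
    # Gaussian perturbation, floored to keep both genes positive.
    return [max(0.01, g + random.gauss(0, scale)) for g in ind]

def crossover(p1, p2):
    # Uniform crossover: each gene comes from either parent.
    return [random.choice(pair) for pair in zip(p1, p2)]

# Generational GA with elitist truncation selection, for brevity.
pop = [[random.uniform(0.5, 5.0), random.uniform(0.8, 1.3)] for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:20]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]
a_best, b_best = min(pop, key=fitness)
```

The paper evolves genetic-programming trees rather than a fixed two-gene chromosome; this sketch only shows the fitness-function/selection/crossover/mutation loop common to both.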
Concordance analysis between estimation methods of milk fatty acid content.
Rodriguez, Mary Ana Petersen; Petrini, Juliana; Ferreira, Evandro Maia; Mourão, Luciana Regina Mangeti Barreto; Salvian, Mayara; Cassoli, Laerte Dagher; Pires, Alexandre Vaz; Machado, Paulo Fernando; Mourão, Gerson Barreto
2014-08-01
Considering the influence of milk fatty acids on human health, the aim of this study was to compare gas chromatography (GC) and Fourier transform infrared (FTIR) spectroscopy for the determination of these compounds. Fatty acid contents (g/100 g of fat) were obtained by both methods and compared through Pearson's correlation, linear Bayesian regression, and the Bland-Altman method. Despite the high correlations between the measurements (r = 0.60-0.92), the regression coefficient values indicated higher measures for palmitic acid, oleic acid, unsaturated and monounsaturated fatty acids, and lower values for stearic acid, saturated and polyunsaturated fatty acids, estimated by GC in comparison to FTIR results. This inequality was confirmed in the Bland-Altman test, with an average bias varying from -8.65 to 6.91 g/100 g of fat. However, the inclusion of 94% of the samples within the concordance limits suggested that the variability of the differences between the methods was constant throughout the range of measurement. Therefore, despite the inequality between the estimates, the methods displayed the same pattern of milk fat composition, allowing similar conclusions about the milk samples under evaluation.
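The Bland-Altman computation used for such concordance analyses can be sketched as follows; the paired values are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical paired fatty-acid measurements (g/100 g fat) by GC and FTIR.
gc   = np.array([28.4, 30.1, 26.7, 31.5, 29.2, 27.8, 30.8, 28.9])
ftir = np.array([27.1, 29.0, 25.2, 30.3, 27.6, 26.9, 29.5, 27.4])

# Bland-Altman: analyze the differences between the two methods.
diff = gc - ftir
bias = diff.mean()                       # average bias between methods
half = 1.96 * diff.std(ddof=1)           # half-width of limits of agreement
lower, upper = bias - half, bias + half

# Pearson correlation, as in the study, measures association, not agreement:
# two methods can correlate strongly yet disagree by a constant bias.
r = np.corrcoef(gc, ftir)[0, 1]
inside = np.mean((diff >= lower) & (diff <= upper))  # fraction within limits
```

A high fraction of differences inside the limits, as reported in the abstract (94%), indicates that the disagreement between methods is stable across the measurement range.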
Molecular-clock methods for estimating evolutionary rates and timescales.
Ho, Simon Y W; Duchêne, Sebastián
2014-12-01
The molecular clock presents a means of estimating evolutionary rates and timescales using genetic data. These estimates can lead to important insights into evolutionary processes and mechanisms, as well as providing a framework for further biological analyses. To deal with rate variation among genes and among lineages, a diverse range of molecular-clock methods have been developed. These methods have been implemented in various software packages and differ in their statistical properties, ability to handle different models of rate variation, capacity to incorporate various forms of calibrating information and tractability for analysing large data sets. Choosing a suitable molecular-clock model can be a challenging exercise, but a number of model-selection techniques are available. In this review, we describe the different forms of evolutionary rate heterogeneity and explain how they can be accommodated in molecular-clock analyses. We provide an outline of the various clock methods and models that are available, including the strict clock, local clocks, discrete clocks and relaxed clocks. Techniques for calibration and clock-model selection are also described, along with methods for handling multilocus data sets. We conclude our review with some comments about the future of molecular clocks.
A simple method to estimate fractal dimension of mountain surfaces
Kolwankar, Kiran M
2014-01-01
Fractal surfaces are ubiquitous in nature as well as in the sciences. Examples range from cloud boundaries to corroded surfaces. The fractal dimension gives a measure of the irregularity of the object under study. We present a simple method to estimate the fractal dimension of a mountain surface. We propose to use easily available satellite images of lakes for this purpose. The fractal dimension of the boundary of a lake, which can be extracted using image analysis software, can be determined easily, giving an estimate of the fractal dimension of the mountain surface and hence a quantitative characterization of the irregularity of the topography of the mountain surface. This value will be useful in validating models of mountain formation.
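Once the lake boundary has been extracted as a set of points, its fractal dimension can be estimated by box counting, one standard choice (the abstract does not specify which estimator the authors use). A sketch, sanity-checked on a straight segment whose dimension should be close to 1:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension of a planar point set (e.g. a
    digitized lake boundary) by counting occupied boxes at several scales."""
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        # Each point is assigned to the grid box of side s that contains it.
        boxes = {tuple(b) for b in np.floor(points / s).astype(int)}
        counts.append(len(boxes))
    # Slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: densely sampled points along a straight segment.
t = np.linspace(0, 1, 20000)
line = np.column_stack([t, 0.5 * t])
d = box_counting_dimension(line, sizes=[0.1, 0.05, 0.025, 0.0125])
```

For a real lake outline the same call applies, with box sizes chosen between the pixel scale and the lake diameter.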
Estimating return on investment in translational research: methods and protocols.
Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind
2013-12-01
Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.
Exact Group Sequential Methods for Estimating a Binomial Proportion
Directory of Open Access Journals (Sweden)
Zhengjia Chen
2013-01-01
Full Text Available We first review existing sequential methods for estimating a binomial proportion. Afterward, we propose a new family of group sequential sampling schemes for estimating a binomial proportion with prescribed margin of error and confidence level. In particular, we establish the uniform controllability of coverage probability and the asymptotic optimality for such a family of sampling schemes. Our theoretical results establish the possibility that the parameters of this family of sampling schemes can be determined so that the prescribed level of confidence is guaranteed with little waste of samples. Analytic bounds for the cumulative distribution functions and expectations of sample numbers are derived. Moreover, we discuss the inherent connection of various sampling schemes. Numerical issues are addressed for improving the accuracy and efficiency of computation. Computational experiments are conducted for comparing sampling schemes. Illustrative examples are given for applications in clinical trials.
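For reference, the fixed-sample-size benchmark that such group sequential schemes try to improve on can be computed from the worst-case normal approximation (a standard formula, not taken from the paper):

```python
from statistics import NormalDist
import math

def fixed_sample_size(margin, confidence):
    """Smallest n such that a normal-approximation confidence interval for a
    binomial proportion has half-width <= margin in the worst case p = 1/2:
    n >= (z / (2 * margin))^2."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)   # two-sided quantile
    return math.ceil((z / (2.0 * margin)) ** 2)

n = fixed_sample_size(margin=0.05, confidence=0.95)    # classic 5% / 95% case
```

Sequential schemes with early stopping can often certify the same margin and confidence with fewer samples on average, which is the "little waste of samples" the abstract refers to.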
Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.
Directory of Open Access Journals (Sweden)
Julia Birtel
Full Text Available Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering, and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of the 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions, and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques.
A review of the methods for neuronal response latency estimation
DEFF Research Database (Denmark)
Levakova, Marie; Tamborrino, Massimiliano; Ditlevsen, Susanne
2015-01-01
Neuronal response latency is usually vaguely defined as the delay between the stimulus onset and the beginning of the response. It contains important information for the understanding of the temporal code. For this reason, the detection of the response latency has been extensively studied in the last twenty years, yielding different estimation methods. They can be divided into two classes, one of them including methods based on detecting an intensity change in the firing rate profile after the stimulus onset, and the other containing methods based on detection of spikes evoked by the stimulation using interspike intervals and spike times. The aim of this paper is to present a review of the main techniques proposed in both classes, highlighting their advantages and shortcomings.
Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures
DEFF Research Database (Denmark)
Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard
2013-01-01
The literature on modal testing of timber structures is rather limited, and the applicability and robustness of different curve fitting methods for modal analysis of such structures, across nominally identical test subjects, is not described in detail. The aim of this paper is to investigate the robustness of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex. The ability to handle closely spaced modes and broad frequency ranges is investigated for a numerical model of a lightweight junction under different signal-to-noise ratios. The selection of both excitation points and response points is discussed. It is found that both the Rational Fraction Polynomial-Z method and the Polyreference Time method are fairly robust and well suited for the structure being analyzed.
Estimation of Lamotrigine by RP-HPLC Method
Directory of Open Access Journals (Sweden)
D. Anantha Kumar
2010-01-01
A rapid and reproducible reverse-phase high-performance liquid chromatographic method has been developed for the estimation of lamotrigine in its pure form as well as in pharmaceutical dosage forms. Chromatography was carried out on a Luna C18 column using a mixture of potassium dihydrogen phosphate buffer (pH 7.3) and methanol in a ratio of 60:40 v/v as the mobile phase at a flow rate of 1.0 mL/min. Detection was done at 305 nm. The retention time of the drug was 6.1 min. The method produced linear responses in the concentration range of 10 to 70 μg/mL of lamotrigine. The method was found to be reproducible for analysis of the drug in tablet dosage forms.
A method for estimating optical properties of dusty cloud
Institute of Scientific and Technical Information of China (English)
Tianhe Wang; Jianping Huang
2009-01-01
Based on the scattering properties of nonspherical dust aerosol, a new method is developed for retrieving dust aerosol optical depths of dusty clouds. The dusty clouds are defined as a hybrid system of dust plume and cloud. The new method is based on transmittance measurements from a surface-based multi-filter rotating shadowband radiometer (MFRSR) and cloud parameters from lidar measurements. It uses the difference in absorption between dust aerosols and water droplets to distinguish and estimate the optical properties of dusts and clouds, respectively. This new retrieval method is not sensitive to the retrieval error of cloud properties, and the maximum absolute deviations of dust aerosol and total optical depths for the thin dusty cloud retrieval algorithm are only 0.056 and 0.1, respectively, for given possible uncertainties. The retrieval error for thick dusty cloud mainly depends on lidar-based total dusty cloud properties.
Crossover Method for Interactive Genetic Algorithms to Estimate Multimodal Preferences
Directory of Open Access Journals (Sweden)
Misato Tanaka
2013-01-01
We apply an interactive genetic algorithm (iGA) to generate product recommendations. iGAs search for a single optimum point based on a user's Kansei through interaction between the user and the machine. However, especially in the domain of product recommendations, there may be numerous optimum points. Therefore, the purpose of this study is to develop a new iGA crossover method that concurrently searches for multiple optimum points for multiple user preferences. The proposed method estimates the locations of the optimum areas by a clustering method and then searches for the maximum values of those areas by a probabilistic model. To confirm the effectiveness of this method, two experiments were performed. In the first experiment, a pseudo-user operated an experimental system that implemented the proposed and conventional methods, and the solutions obtained were evaluated using a set of pseudo multiple preferences. With this experiment, we proved that when there are multiple preferences, the proposed method searches faster and more diversely than the conventional one. The second experiment was a subjective experiment. It showed that the proposed method was able to search concurrently for more preferences when subjects had multiple preferences.
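The clustering step of the proposed crossover can be illustrated with a minimal sketch. The paper does not specify which clustering algorithm it uses, so the snippet below substitutes a tiny hand-rolled k-means to locate two candidate optimum areas from individuals a hypothetical user rated highly; the 2-D "Kansei" feature space and the two preference modes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def kmeans(points, k, iters=50):
    """Tiny k-means, standing in for the (unspecified) clustering step
    that locates candidate optimum areas from highly rated individuals."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute means.
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

# Highly rated individuals drawn from two distinct preference modes
# (hypothetical feature space; real iGA data would come from user ratings).
prefs = np.vstack([rng.normal([0.0, 0.0], 0.3, (40, 2)),
                   rng.normal([5.0, 5.0], 0.3, (40, 2))])
centers, labels = kmeans(prefs, 2)
```

The recovered centers approximate the two preference modes, after which a probabilistic model can search within each area, as the abstract describes.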
Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2
Energy Technology Data Exchange (ETDEWEB)
Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.
1994-07-01
of complex issues that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC) on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The report provides descriptions of the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues discussed generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each focusing on a different subject area.
Networked Estimation with an Area-Triggered Transmission Method
Directory of Open Access Journals (Sweden)
Young Soo Suh
2008-02-01
This paper is concerned with the networked estimation problem in which sensor data are transmitted over the network. In the event-driven sampling scheme known as level-crossing or send-on-delta, sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-driven sampling generally requires fewer transmissions than time-driven sampling. However, the transmission rate of the send-on-delta method becomes large when the sensor noise is large, since sensor data variation grows with the sensor noise. Motivated by this issue, we propose another event-driven sampling method, called area-triggered, in which sensor data are sent only when the integral of differences between the current sensor value and the last transmitted one is greater than a given threshold. Through theoretical analysis and simulation results, we show that in certain cases the proposed method not only reduces the data transmission rate but also improves estimation performance in comparison with the conventional event-driven method.
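The two sampling rules compare directly in a few lines. The sketch below uses a hypothetical noisy signal and arbitrary thresholds; it shows the key difference the abstract describes: send-on-delta triggers on the instantaneous deviation, while area-triggered integrates the deviation, so zero-mean noise tends to cancel before the threshold is reached.

```python
import numpy as np

def send_on_delta(samples, delta):
    """Transmit when |current - last transmitted| exceeds delta."""
    sent, last = [(0, samples[0])], samples[0]
    for k, x in enumerate(samples[1:], start=1):
        if abs(x - last) > delta:
            sent.append((k, x))
            last = x
    return sent

def area_triggered(samples, dt, area_threshold):
    """Transmit when the integrated deviation since the last
    transmission exceeds area_threshold."""
    sent, last, area = [(0, samples[0])], samples[0], 0.0
    for k, x in enumerate(samples[1:], start=1):
        area += abs(x - last) * dt
        if area > area_threshold:
            sent.append((k, x))
            last, area = x, 0.0
    return sent

# A noisy constant signal: integration averages out the noise, so the
# area-triggered scheme transmits far less often than send-on-delta.
rng = np.random.default_rng(0)
signal = 1.0 + 0.3 * rng.standard_normal(500)
n_sod = len(send_on_delta(signal, 0.25))
n_at = len(area_triggered(signal, dt=0.01, area_threshold=0.125))
```

With a large sensor-noise amplitude relative to the delta threshold, `n_at` comes out well below `n_sod`, matching the motivation for the area-triggered rule.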
Probabilistic seismic hazard assessment of Italy using kernel estimation methods
Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.
2013-07-01
A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), which is based on a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity, not requiring the definition of seismogenic zoning. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell's method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10 % probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.
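The zone-free idea can be illustrated with a toy kernel intensity estimate. The snippet below smooths a hypothetical epicentre catalogue with a plain isotropic Gaussian kernel; Woo's actual kernel is magnitude-dependent with a fractal scaling law, so this is only a structural sketch of how seismicity rate is estimated without seismogenic zoning.

```python
import numpy as np

def kernel_intensity(site, epicenters, bandwidth):
    """Smoothed seismicity rate at `site`: a sum of isotropic Gaussian
    kernels centred on past epicentres (a simplified stand-in for Woo's
    magnitude-dependent fractal kernel)."""
    d2 = np.sum((epicenters - site) ** 2, axis=1)   # squared distances (km^2)
    h2 = bandwidth ** 2
    return float(np.sum(np.exp(-d2 / (2 * h2)) / (2 * np.pi * h2)))

# Hypothetical catalogue clustered around (0, 0), coordinates in km.
rng = np.random.default_rng(1)
cat = rng.normal(0.0, 20.0, size=(200, 2))

near = kernel_intensity(np.array([0.0, 0.0]), cat, bandwidth=15.0)
far = kernel_intensity(np.array([150.0, 150.0]), cat, bandwidth=15.0)
```

A site inside the cluster sees a much higher smoothed rate than a distant one; in a full PSHA this intensity would feed the hazard integral in place of zone-based source rates.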
CME Velocity and Acceleration Error Estimates Using the Bootstrap Method
Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji
2017-08-01
The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed in many research studies. The basic attributes of CMEs (e.g., velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify, and in many studies the impact of such errors is overlooked. Here we present a new way to estimate measurement errors in the basic attributes of CMEs. The approach is computer-intensive because it requires repeating the original data analysis procedure many times on replicate datasets; this is commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are for the most part small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs they are larger than the acceleration itself.
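The bootstrap idea is concrete enough to sketch: resample the height-time points of one event with replacement, refit the speed each time, and read the error off the spread of the refits. The CME kinematics and measurement scatter below are invented for illustration, not taken from the SOHO/LASCO catalog.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical height-time measurements of one CME: linear motion at
# 450 km/s plus manual-measurement scatter (heights in km, times in s).
t = np.linspace(0.0, 3600.0, 12)
h = 2.0e5 + 450.0 * t + rng.normal(0.0, 1.5e4, t.size)

def fit_speed(time, height):
    """Plane-of-sky speed as the slope of a linear height-time fit."""
    return np.polyfit(time, height, 1)[0]

v0 = fit_speed(t, h)

# Bootstrap: refit on datasets resampled (with replacement) from the
# original height-time pairs; the standard deviation of the refitted
# slopes estimates the velocity error.
boot = []
for _ in range(2000):
    idx = rng.integers(0, t.size, t.size)
    boot.append(fit_speed(t[idx], h[idx]))
v_err = float(np.std(boot))
```

As the abstract notes, the resulting error shrinks as the number of height-time points per event grows; an acceleration error would follow the same recipe with a quadratic fit.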
Singularity of Some Software Reliability Models and Parameter Estimation Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
According to the principle that "failure data is the basis of software reliability analysis," we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning from the fitting results of the failure data of a software project, the SRES can recommend to users the most suitable model as a software reliability measurement model. We believe that the SRES can overcome the inconsistency found in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.
Comparative study on parameter estimation methods for attenuation relationships
Sedaghati, Farhad; Pezeshk, Shahram
2016-12-01
In this paper, the performance, advantages, and disadvantages of various regression methods for deriving the coefficients of an attenuation relationship are investigated. A database containing 350 records from 85 earthquakes with moment magnitudes of 5-7.6 and Joyner-Boore distances up to 100 km in Europe and the Middle East is considered. The functional form proposed by Ambraseys et al (2005 Bull. Earthq. Eng. 3 1-53) is selected to compare the chosen regression methods. Statistical tests reveal that although the estimated parameters differ for each method, the overall results are very similar. In essence, the weighted least squares method and one-stage maximum likelihood perform better than the other regression methods considered. Moreover, using a blind weighting matrix or a weighting matrix related to the number of records does not improve the performance of the results. Further, to obtain the true standard deviation, pure error analysis is necessary. Assuming that correlation exists between different records of a specific earthquake, one-stage maximum likelihood with the true variance acquired by pure error analysis is the preferred method to compute the coefficients of a ground motion prediction equation.
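As a baseline against which such regression schemes are compared, an attenuation relation can be fitted by ordinary least squares in a few lines. The functional form and coefficients below are simplified placeholders (not Ambraseys et al.'s form) and the records are synthetic; weighted least squares and one-stage maximum likelihood differ mainly in adding a covariance structure that ties together records from the same event.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "records": magnitudes, Joyner-Boore distances (km), and
# motions generated from an assumed simplified attenuation relation
# log10(Y) = c1 + c2*M + c3*log10(sqrt(R^2 + h^2)), h fixed at 10 km.
n = 350
M = rng.uniform(5.0, 7.6, n)
R = rng.uniform(1.0, 100.0, n)
c_true = np.array([-1.0, 0.25, -1.2])
X = np.column_stack([np.ones(n), M, np.log10(np.hypot(R, 10.0))])
logY = X @ c_true + rng.normal(0.0, 0.3, n)

# One-stage ordinary least squares; the methods compared in the paper
# add weighting / per-event correlation on top of this skeleton.
c_hat, *_ = np.linalg.lstsq(X, logY, rcond=None)
sigma = float(np.std(logY - X @ c_hat))   # apparent standard deviation
```

With enough records the recovered coefficients and residual standard deviation track the generating values, which is why the paper's comparison focuses on subtler points: weighting, per-event correlation, and the "true" variance from pure error analysis.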
A method for sex estimation using the proximal femur.
Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia
2016-09-01
The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis.
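The logistic-regression part of the approach is easy to sketch. The FNW/FNAL means and spreads below are invented stand-ins (not values from the Luís Lopes collection), and plain gradient descent replaces whatever solver the authors used; the point is only the shape of the classifier.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training sample: femoral neck width (FNW) and femoral
# neck axis length (FNAL) in mm, with made-up sex-specific means.
f = rng.normal([31.0, 88.0], [2.5, 4.5], size=(114, 2))   # females -> 0
m = rng.normal([35.0, 96.0], [2.5, 4.5], size=(138, 2))   # males   -> 1
X = np.vstack([f, m])
y = np.r_[np.zeros(len(f)), np.ones(len(m))]

# Logistic regression on standardized features, fitted by gradient descent.
Xb = np.column_stack([np.ones(len(X)), (X - X.mean(0)) / X.std(0)])
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

# Training accuracy: predicted male when P(male) > 0.5.
acc = float(np.mean(((1.0 / (1.0 + np.exp(-Xb @ w))) > 0.5) == y))
```

With dimorphism on this order the model lands in the same broad accuracy band the study reports (roughly 80-86%), though real performance depends entirely on the measured collections.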
Directory of Open Access Journals (Sweden)
D.O. Smallwood
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be obtained equivalently using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
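The cross-spectral density formulation can be sketched numerically. The example below builds the full CSD matrix of two inputs and one output from segment-averaged FFTs and extracts the multiple coherence from the inverse CSD matrix, an identity equivalent to the Cholesky-based route the abstract describes; the signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two inputs and one output, y = x1 + 0.5*x2 + noise, generated in
# segments so the cross-spectral density (CSD) matrix can be averaged.
nseg, nfft = 64, 256
x1 = rng.standard_normal((nseg, nfft))
x2 = rng.standard_normal((nseg, nfft))
y = x1 + 0.5 * x2 + 0.2 * rng.standard_normal((nseg, nfft))

# Segment FFTs, shape (segment, channel, frequency), channels (x1, x2, y).
X = np.fft.rfft(np.stack([x1, x2, y], axis=1), axis=-1)
G = np.einsum('sif,sjf->ijf', X, X.conj()) / nseg   # averaged CSD matrix

k = 10                       # pick one frequency bin
Gk = G[:, :, k]
# Multiple coherence of y on (x1, x2) from the inverse CSD matrix:
# gamma^2 = 1 - 1 / (G_yy * [G^-1]_yy).
gamma2 = 1.0 - 1.0 / (Gk[2, 2].real * np.linalg.inv(Gk)[2, 2].real)
```

Because the output is almost entirely driven by the two inputs, the multiple coherence comes out close to 1; the paper's SVD form works on the same matrix `Gk` and additionally suggests an input ordering.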
Stability Estimates for h-p Spectral Element Methods for Elliptic Problems
Indian Academy of Sciences (India)
Pravir Dutt; Satyendra Tomar; B V Rathish Kumar
2002-11-01
In a series of papers of which this is the first we study how to solve elliptic problems on polygonal domains using spectral methods on parallel computers. To overcome the singularities that arise in a neighborhood of the corners we use a geometrical mesh. With this mesh we seek a solution which minimizes a weighted squared norm of the residuals in the partial differential equation and a fractional Sobolev norm of the residuals in the boundary conditions, and enforce continuity by adding a term which measures the jump in the function and its derivatives at inter-element boundaries, in an appropriate fractional Sobolev norm, to the functional being minimized. Since the second derivatives of the actual solution are not square integrable in a neighborhood of the corners we have to multiply the residuals in the partial differential equation by an appropriate power of $r_k$, where $r_k$ measures the distance between the point and the vertex $A_k$ in a sectoral neighborhood of each of these vertices. In each of these sectoral neighborhoods we use a local coordinate system $(\tau_k, \theta_k)$, where $\tau_k = \ln r_k$ and $(r_k, \theta_k)$ are polar coordinates with origin at $A_k$, as first proposed by Kondratiev. We then derive differentiability estimates with respect to these new variables and a stability estimate for the functional we minimize. In [6] we will show that we can use the stability estimate to obtain parallel preconditioners and error estimates for the solution of the minimization problem which are nearly optimal, as the condition number of the preconditioned system is polylogarithmic in $N$, the number of processors and the number of degrees of freedom in each variable on each element. Moreover, if the data is analytic then the error is exponentially small in $N$.
A method for obtaining time-periodic Lp estimates
Kyed, Mads; Sauer, Jonas
2017-01-01
We introduce a method for showing a priori Lp estimates for time-periodic, linear partial differential equations set in a variety of domains such as the whole space, the half space, and bounded domains. The method is generic and can be applied to a wide range of problems; we demonstrate it on the heat equation. The main idea is to replace the time axis with a torus in order to reformulate the problem on a locally compact abelian group and to employ Fourier analysis on this group. As a by-product, maximal Lp regularity for the corresponding initial-value problem follows without the notion of R-boundedness. Moreover, we introduce the concept of a time-periodic fundamental solution.
Current methods for estimating the rate of photorespiration in leaves.
Busch, F A
2013-07-01
Photorespiration is a process that competes with photosynthesis, in which Rubisco oxygenates, instead of carboxylates, its substrate ribulose 1,5-bisphosphate. The photorespiratory metabolism associated with the recovery of 3-phosphoglycerate is energetically costly and results in the release of previously fixed CO2. The ability to quantify photorespiration is gaining importance as a tool to help improve plant productivity in order to meet the increasing global food demand. In recent years, substantial progress has been made in the methods used to measure photorespiration. Current techniques are able to measure multiple aspects of photorespiration at different points along the photorespiratory C2 cycle. Six different methods used to estimate photorespiration are reviewed, and their advantages and disadvantages discussed.
A new assimilation method with physical mechanism to estimate evapotranspiration
Ye, Wen; Xu, Xinyi
2016-04-01
The accurate estimation of regional evapotranspiration has been a research hotspot in the field of hydrology and water resources, both domestically and abroad. A new assimilation method with a physical mechanism, which is easier to apply, is proposed to estimate evapotranspiration. Based on the evapotranspiration (ET) calculation method with soil moisture recurrence relations in the Distributed Time Variant Gain Model (DTVGM) and the Ensemble Kalman Filter (EnKF), an assimilation system was constructed for recursive calculation of evapotranspiration, combined with "observation values" retrieved through the Two-Layer Remote Sensing Model. By updating the filter in the model with assimilated evapotranspiration, synchronous correction of the model estimate was achieved and more accurate continuous time series of evapotranspiration were obtained. Through verification against observations at Xiaotangshan Observatory and hydrological stations in the basin, the correlation coefficient between remote-sensing-retrieved evapotranspiration and actual evapotranspiration reaches 0.97, and the NS efficiency coefficient of the DTVGM model was 0.80. Using typical daily evapotranspiration from remote sensing and data from the DTVGM model, we assimilated the hydrological simulation processes of the DTVGM in the Shahe Basin in Beijing to obtain continuous evapotranspiration time series. The results showed that the average relative error between the remote sensing values and DTVGM simulations is about 12.3%, while that between remote sensing retrievals and assimilation values is 4.5%, which proves that the assimilation results of the Ensemble Kalman Filter are closer to the "real" data and better than the evapotranspiration simulated by DTVGM without any improvement. Keywords: evapotranspiration assimilation; Ensemble Kalman Filter; distributed hydrological model; Two-Layer Remote Sensing Model
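The analysis step that merges a model forecast with a remote-sensing ET retrieval can be sketched with a minimal scalar ensemble Kalman filter in perturbed-observation form. All numbers below are illustrative, not values from the study, and the real system wraps this update around the DTVGM soil-moisture recurrence.

```python
import numpy as np

rng = np.random.default_rng(6)

def enkf_update(ensemble, obs, obs_err):
    """One EnKF analysis step for a scalar state observed directly
    (H = 1): nudge each member toward a perturbed observation."""
    pe = np.var(ensemble, ddof=1)        # forecast (ensemble) variance
    gain = pe / (pe + obs_err ** 2)      # scalar Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err, ensemble.size)
    return ensemble + gain * (perturbed - ensemble)

# Model forecast of daily ET (mm) spread around 3.0; a remote-sensing
# "observation" of 4.0 with 0.3 mm error pulls the ensemble upward and
# tightens its spread.
forecast = rng.normal(3.0, 0.6, 50)
analysis = enkf_update(forecast, 4.0, 0.3)
```

The analysis mean sits between forecast and observation, weighted by their variances, and the analysis spread is reduced, which is the mechanism by which the assimilated ET series tracks the retrievals more closely than the free-running model.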
Brocca, Luca; Pellarin, Thierry; Crow, Wade T.; Ciabatta, Luca; Massari, Christian; Ryu, Dongryeol; Su, Chun-Hsu; Rüdiger, Christoph; Kerr, Yann
2016-10-01
Remote sensing of soil moisture has reached a level of maturity and accuracy for which the retrieved products can be used to improve hydrological and meteorological applications. In this study, the soil moisture product from the Soil Moisture and Ocean Salinity (SMOS) satellite is used for improving satellite rainfall estimates obtained from the Tropical Rainfall Measuring Mission multisatellite precipitation analysis product (TMPA) using three different "bottom up" techniques: SM2RAIN, Soil Moisture Analysis Rainfall Tool, and Antecedent Precipitation Index Modification. The implementation of these techniques aims at improving the well-known "top down" rainfall estimate derived from TMPA products (version 7) available in near real time. Ground observations provided by the Australian Water Availability Project are considered as a separate validation data set. The three algorithms are calibrated against the gauge-corrected TMPA reanalysis product, 3B42, and used for adjusting the TMPA real-time product, 3B42RT, using SMOS soil moisture data. The study area covers the entire Australian continent, and the analysis period ranges from January 2010 to November 2013. Results show that all the SMOS-based rainfall products improve the performance of 3B42RT, even at daily time scale (differently from previous investigations). The major improvements are obtained in terms of estimation of accumulated rainfall with a reduction of the root-mean-square error of more than 25%. Also, in terms of temporal dynamic (correlation) and rainfall detection (categorical scores) the SMOS-based products provide slightly better results with respect to 3B42RT, even though the relative performance between the methods is not always the same. The strengths and weaknesses of each algorithm and the spatial variability of their performances are identified in order to indicate the ways forward for this promising research activity. Results show that the integration of bottom up and top down approaches
Estimation of citicoline sodium in tablets by difference spectrophotometric method
Directory of Open Access Journals (Sweden)
Sagar Suman Panda
2013-01-01
Aim: The present work deals with the development and validation of a novel, precise, and accurate spectrophotometric method for the estimation of citicoline sodium (CTS) in tablets. This spectrophotometric method is based on the principle that CTS shows two different forms, differing in absorption spectra, in basic and acidic medium. Materials and Methods: The work was carried out on a Shimadzu 1800 double-beam UV-visible spectrophotometer. Difference spectra were generated using 10 mm quartz cells over the range of 200-400 nm. The solvents used were 0.1 M NaOH and 0.1 M HCl. Results: The maximum and minimum in the difference spectra of CTS were found at 239 nm and 283 nm, respectively. The amplitude was calculated from the maximum and minimum of the spectrum. The drug follows linearity in the range of 1-50 μg/mL (R² = 0.999). The average % recovery from the tablet formulation was found to be 98.47%. The method was validated as per the International Conference on Harmonisation guideline ICH Q2(R1), "Validation of Analytical Procedures: Text and Methodology." Conclusion: This method is simple and inexpensive; hence it can be applied for determination of the drug in pharmaceutical dosage forms.
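Quantitation by the difference-spectrum amplitude reduces to a linear calibration. The amplitudes below are fabricated to mimic the reported linearity; they are not measured values, and the slope/intercept are arbitrary.

```python
import numpy as np

# Hypothetical difference-spectrum amplitudes (A_239nm - A_283nm) for a
# calibration series of citicoline sodium standards, 1-50 ug/mL.
conc = np.array([1, 5, 10, 20, 30, 40, 50], dtype=float)   # ug/mL
amplitude = 0.018 * conc + 0.002                            # Beer-Lambert-like line

# Fit the calibration line and check linearity.
slope, intercept = np.polyfit(conc, amplitude, 1)
r2 = float(np.corrcoef(conc, amplitude)[0, 1] ** 2)

def estimate_conc(sample_amplitude):
    """Read an unknown's concentration off the calibration line."""
    return (sample_amplitude - intercept) / slope

# An unknown whose amplitude corresponds to 25 ug/mL.
c = estimate_conc(0.018 * 25.0 + 0.002)
```

In practice the recovery figures quoted in the abstract come from running spiked tablet extracts through exactly this inverse-calibration step.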
Application of Common Mid-Point Method to Estimate Asphalt
Zhao, Shan; Al-Qadi, Imad
2015-04-01
3-D radar is a multi-array stepped-frequency ground-penetrating radar (GPR) that can measure at a very close sampling interval in both in-line and cross-line directions. Constructing asphalt layers in accordance with specified thicknesses is crucial for pavement structural capacity and pavement performance. The common mid-point (CMP) method is a multi-offset measurement method that can improve the accuracy of asphalt layer thickness estimation. In this study, the viability of using 3-D radar to predict asphalt concrete pavement thickness with an extended CMP method was investigated. GPR signals were collected on asphalt pavements with various thicknesses. The time-domain resolution of the 3-D radar was improved by applying a zero-padding technique in the frequency domain. The performance of the 3-D radar was then compared to that of an air-coupled horn antenna. The study concluded that 3-D radar can accurately predict asphalt layer thickness using the CMP method when the layer thickness is larger than 0.13 m. The lack of time-domain resolution of 3-D radar can be remedied by frequency-domain zero-padding. Keywords: asphalt pavement thickness; 3-D radar; stepped-frequency; common mid-point method; zero padding.
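The single-layer CMP geometry gives a simple route to thickness: the two-way travel time satisfies t(x)² = t0² + x²/v², with t0 = 2d/v, so a straight-line fit of t² against x² yields both the wave speed and the thickness. The numbers below are illustrative (a 0.15 m layer at a typical asphalt GPR velocity), not data from the study.

```python
import numpy as np

# Two-way travel time at antenna offset x over a single layer of
# thickness d and wave speed v: t(x) = 2*sqrt(d^2 + (x/2)^2) / v.
d_true, v_true = 0.15, 0.12                 # m, m/ns
x = np.array([0.0, 0.2, 0.4, 0.6])          # offsets (m)
t = 2.0 * np.hypot(d_true, x / 2.0) / v_true  # ns

# Linear regression of t^2 on x^2: slope = 1/v^2, intercept = t0^2.
slope, t0_sq = np.polyfit(x ** 2, t ** 2, 1)
v_est = 1.0 / np.sqrt(slope)
d_est = v_est * np.sqrt(t0_sq) / 2.0
```

Multi-offset data thus decouple velocity from thickness, which is why CMP beats single-offset GPR, where an assumed dielectric constant (and hence velocity) dominates the thickness error.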
Study on color difference estimation method of medicine biochemical analysis
Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun
2006-01-01
Biochemical analysis is an important inspection and diagnosis method in clinical medicine, and the biochemical analysis of urine is one important item. A urine test strip shows a corresponding color for each detection target and illness degree. The color difference between the standard threshold and the color of the urine test strip can be used to judge the illness degree, enabling further analysis and diagnosis. Color is a three-dimensional physical variable related to perception, while reflectance is one-dimensional; therefore, estimating color difference in urine tests can achieve better precision and convenience than the conventional one-dimensional reflectance test, and can support an accurate diagnosis. A digital camera can easily capture an image of the urine test strip and thus carry out the urine biochemical analysis conveniently. In the experiment, color images of urine test strips were taken by a popular color digital camera and saved on a computer running simple color space conversion (RGB → XYZ → L*a*b*) and calculation software. Test samples are graded according to intelligent detection of quantitative color. The images taken each time were saved on the computer, so the whole illness process can be monitored. This method can also be used in other medical biochemical analyses involving color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
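The RGB → XYZ → L*a*b* chain and the resulting colour difference can be sketched with the standard sRGB/D65 formulas. The paper does not state which RGB primaries its conversion assumed, so sRGB is an assumption here, and the patch colours compared at the end are hypothetical.

```python
import numpy as np

def srgb_to_lab(rgb):
    """sRGB (components in 0-1) -> CIE L*a*b* under a D65 white point,
    i.e. the RGB -> XYZ -> L*a*b* chain described in the paper."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo sRGB gamma, then apply the sRGB-to-XYZ matrix (D65).
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])     # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,            # L*
                     500 * (f[0] - f[1]),        # a*
                     200 * (f[1] - f[2])])       # b*

def delta_e(lab1, lab2):
    """CIE76 colour difference between two L*a*b* triples."""
    return float(np.linalg.norm(lab1 - lab2))

# Compare a hypothetical test-strip patch against a reference colour.
dE = delta_e(srgb_to_lab([0.80, 0.60, 0.20]),
             srgb_to_lab([0.80, 0.55, 0.20]))
```

Grading a strip then amounts to finding which reference colour gives the smallest ΔE, which is the quantitative-colour detection step the abstract describes.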
Dental age estimation in Brazilian HIV children using Willems' method.
de Souza, Rafael Boschetti; da Silva Assunção, Luciana Reichert; Franco, Ademir; Zaroni, Fábio Marzullo; Holderbaum, Rejane Maria; Fernandes, Ângela
2015-12-01
The notification of Human Immunodeficiency Virus (HIV) infection in Brazilian children was first reported in 1984. Since then, more than 21 thousand children have become infected. Approximately 99.6% of the children aged less than 13 years are vertically infected. In this context, many of these children are abandoned after birth or lose their relatives soon thereafter, growing up with uncertain identification. The present study aims to estimate the dental age of Brazilian HIV patients compared with healthy patients paired by age and gender. The sample consisted of 160 panoramic radiographs of male (n: 80) and female (n: 80) patients aged between 4 and 15 years (mean age: 8.88 years), divided into HIV (n: 80) and control (n: 80) groups. The sample was analyzed by three trained examiners using Willems' method (2001). The Intraclass Correlation Coefficient (ICC) was applied to test intra- and inter-examiner agreement, and Student's paired t-test was used to determine the age association between HIV and control groups. Intra-examiner (ICC: 0.993 to 0.997) and inter-examiner (ICC: 0.991 to 0.995) agreement tests indicated high reproducibility of the method between the examiners. Willems' method proved suitable for age estimation of both HIV-infected and healthy children with unknown age.
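Mechanically, Willems' method sums sex-specific age scores (in years) assigned to the Demirjian developmental stage (A-H) of the seven left mandibular teeth. The score table below is ILLUSTRATIVE ONLY; the published Willems (2001) tables should be used in any real application.

```python
# Hypothetical (NOT the published) stage -> age-score table for males,
# keeping only the stages used in the example below.
STAGE_SCORES_MALE = {
    'central_incisor': {'G': 0.6, 'H': 0.9},
    'lateral_incisor': {'F': 0.7, 'G': 0.9, 'H': 1.1},
    'canine':          {'E': 0.8, 'F': 1.0, 'G': 1.3},
    'first_premolar':  {'E': 0.9, 'F': 1.2, 'G': 1.5},
    'second_premolar': {'D': 0.8, 'E': 1.1, 'F': 1.4},
    'first_molar':     {'G': 1.2, 'H': 1.6},
    'second_molar':    {'C': 0.7, 'D': 1.0, 'E': 1.3},
}

def willems_age(stages, table):
    """Dental age = sum of the per-tooth age scores for the observed
    Demirjian stages of the seven left mandibular teeth."""
    return sum(table[tooth][stage] for tooth, stage in stages.items())

# Stages scored by an examiner on one (hypothetical) radiograph.
observed = {'central_incisor': 'H', 'lateral_incisor': 'G',
            'canine': 'F', 'first_premolar': 'F',
            'second_premolar': 'E', 'first_molar': 'G',
            'second_molar': 'D'}
age = willems_age(observed, STAGE_SCORES_MALE)
```

Because the method is purely table-driven, the study's reproducibility question reduces to whether examiners agree on the stage assignments, which is what the ICC values measure.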
A Method to Estimate Shear Quality Factor of Hard Rocks
Wang, Xin; Cai, Ming
2017-07-01
Attenuation has a large influence on ground motion intensity. Quality factors measure wave attenuation in a medium and are often difficult to estimate due to factors such as complex geology and the underground mining environment. This study investigates the effect of attenuation on seismic wave propagation and ground motion using an advanced numerical tool, SPECFEM2D. A method using numerical modeling and site-specific scaling laws is proposed to estimate the shear quality factor of hard rocks in underground mines. In the numerical modeling, the seismic source is represented by a moment tensor model and the medium is assumed isotropic and homogeneous. Peak particle velocities along the strongest wave motion direction are compared with those from a design scaling law. Based on the field data that were used to derive a semi-empirical design scaling law, it is demonstrated that a shear quality factor of 60 seems to be representative of the hard rocks in deep mines when accounting for the attenuation of seismic wave propagation. Using the proposed method, reasonable shear quality factors of hard rocks can be obtained, which in turn will assist accurate ground motion determination for mine design.
Probabilistic seismic loss estimation via endurance time method
Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.
2017-01-01
Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA driven response predictions of 34 code conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of damage and loss prediction functions provided by ATC 58.
A QUALITATIVE METHOD TO ESTIMATE HSI DISPLAY COMPLEXITY
Directory of Open Access Journals (Sweden)
JACQUES HUGO
2013-04-01
There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. The paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.
Error-space estimate method for generalized synergic target tracking
Institute of Scientific and Technical Information of China (English)
Ming CEN; Chengyu FU; Ke CHEN; Xingfa LIU
2009-01-01
To improve the tracking accuracy and stability of an optic-electronic target tracking system, the concept of a generalized synergic target and an algorithm named the error-space estimate method are presented. In this algorithm, the motion of the target is described by guide data and guide errors, and the maneuver of the target is separated into guide data and guide errors to reduce the maneuver level. State estimation is then implemented in target state-space and error-space respectively, and the prediction of target position is acquired by synthesizing the filtering data from target state-space, according to a kinematic model, with the prediction data from error-space, according to a guide error model. Differing from the typical multi-model method, the kinematic and guide error models work concurrently rather than switching between models. Experiment results show that the performance of the algorithm is better than the Kalman filter and the strong tracking filter at the same maneuver level.
Public-Private Investment Partnerships: Efficiency Estimation Methods
Directory of Open Access Journals (Sweden)
Aleksandr Valeryevich Trynov
2016-06-01
The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). This article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to the more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based upon a synthesis of approaches to evaluating projects implemented in the private and public sectors and, in contrast to existing methods, takes into account the indirect (multiplicative) effect arising during the implementation of a project. To estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed, based on data for the Sverdlovsk region for 2013. The genesis of balance models of economic systems is presented, and the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to now is reviewed. It is shown that SAM is widely used around the world for a wide range of applications, primarily to assess the impact of various exogenous factors on a regional economy. In order to refine the estimates of multiplicative effects, the "industry" account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step makes it possible to consider the particular characteristics of the industry of the investment project being estimated. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
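The multiplicative (indirect) effect described above can be sketched with a toy Leontief-style balance model, the family from which SAM multiplier analysis descends. The 3-sector coefficient matrix and demand shock below are hypothetical, not the Sverdlovsk SAM.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A: A[i, j] is the
# input from sector i needed per unit of sector j's output.
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.25],
              [0.05, 0.15, 0.10]])

# Leontief inverse: total (direct + indirect) output per unit of final
# demand, x = (I - A)^-1 f.
L = np.linalg.inv(np.eye(3) - A)

# Exogenous demand shock: a PPP project buys 100 units from sector 1 only.
f = np.array([100.0, 0.0, 0.0])
x = L @ f

direct = f.sum()
total = x.sum()
print(total / direct)   # output multiplier, greater than 1
```

The ratio of total to direct output is the multiplier that makes a capital-intensive project look better once indirect effects are counted.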
Estimated Accuracy of Three Common Trajectory Statistical Methods
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similarly close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70–0.75. The boundaries of the interval with the most probable correlation values are 0.6–0.9 for a decay time of 240 h
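The accuracy measure used above, Spearman's rank correlation between a known virtual source field and its reconstruction, can be sketched directly (synthetic data, numpy only, no ties assumed):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation between two fields (no ties assumed)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))

rng = np.random.default_rng(0)
true_src = rng.random(200)                  # known virtual source field
recon = true_src + 0.3 * rng.random(200)    # imperfect reconstruction
print(spearman(true_src, recon))            # high but below 1
```

A perfect reconstruction gives a coefficient of 1; the abstract's 0.70-0.75 figures are averages of exactly this kind of statistic over an ensemble of source fields.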
Estimation of Anthocyanin Content of Berries by NIR Method
Zsivanovits, G.; Ludneva, D.; Iliev, A.
2010-01-01
Anthocyanin contents of fruits were estimated by a VIS spectrophotometer and compared with spectra measured by a NIR spectrophotometer (600-1100 nm, step 10 nm). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol using NIR is easier, faster and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, realized on the basis of a monochromator and peripherals from Bentham Instruments Ltd (GB), and a photometric camera created at the Canning Research Institute were used. An important feature of this camera is the possibility it offers for simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed by CAMO Unscrambler 9.1 software with the PCA, PLS and PCR methods. Based on the analyzed spectra, quality- and quantity-sensitive calibrations were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.
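One of the chemometric calibrations named above, principal component regression (PCR), can be sketched as follows. The "spectra" here are simulated from three latent components, not Unscrambler output; 51 channels correspond to 600-1100 nm at a 10 nm step.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the first k PCs of X."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                          # PC scores
    gamma = np.linalg.lstsq(scores, yc, rcond=None)[0]
    beta = Vt[:k].T @ gamma                          # back to channel space
    return beta, Xm, ym

def pcr_predict(X, beta, Xm, ym):
    return (X - Xm) @ beta + ym

# Hypothetical data: 40 "spectra" at 51 wavelengths built from 3 latent
# components; y plays the role of anthocyanin content.
rng = np.random.default_rng(1)
T = rng.random((40, 3))
P = rng.random((3, 51))
X = T @ P + 0.01 * rng.standard_normal((40, 51))
y = T @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(40)

beta, Xm, ym = pcr_fit(X, y, k=3)
r = np.corrcoef(pcr_predict(X, beta, Xm, ym), y)[0, 1]
print(r)   # close to 1 when k matches the latent rank
```

Keeping only the leading components regularizes the regression, which is why PCR (and PLS) are the standard tools for this kind of spectral calibration.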
Computational methods estimating uncertainties for profile reconstruction in scatterometry
Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.
2008-04-01
The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
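The Gauss-Newton iteration described above can be sketched as follows. The two-parameter forward model below is purely illustrative and stands in for the finite-element grating solver; the structure of the iteration (linearize, solve the least-squares step, update) is the same.

```python
import numpy as np

def gauss_newton(f, jac, p0, y, n_iter=20):
    """Minimize ||f(p) - y||^2 by Gauss-Newton iterations."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = f(p) - y                                  # residual
        J = jac(p)                                    # local linearization
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]  # least-squares step
    return p

# Toy forward model standing in for the grating -> efficiencies map:
# three "efficiencies" depending non-linearly on (height h, width w).
def f(p):
    h, w = p
    return np.array([np.exp(-h) * w, h * w**2, h + w])

def jac(p):
    h, w = p
    return np.array([[-np.exp(-h) * w, np.exp(-h)],
                     [w**2,            2 * h * w],
                     [1.0,             1.0]])

p_true = np.array([1.5, 0.8])
y = f(p_true)                        # simulated "measured" efficiencies
p_hat = gauss_newton(f, jac, [1.0, 1.0], y)
print(p_hat)                         # recovers approximately [1.5, 0.8]
```

The conditioning of `J` near the solution is exactly what propagates measurement uncertainty into parameter uncertainty, which is why the abstract compares Monte Carlo estimates against the approximate covariance matrix.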
Estimating Earth's modal Q with epicentral stacking method
Chen, X.; Park, J. J.
2014-12-01
The attenuation rates of Earth's normal modes are the most important constraints on the anelastic state of Earth's deep interior. Yet current measurements of Earth's attenuation rates suffer from three sources of bias: the mode coupling effect, the beating effect, and the background noise, which together lead to significant uncertainties in the attenuation rates. In this research, we present a new technique to estimate the attenuation rates of Earth's normal modes: the epicentral stacking method. Rather than using the conventional geographical coordinate system, we instead deal with Earth's normal modes in the epicentral coordinate system, in which only 5 singlets rather than 2l+1 are excited. By stacking records from the same events at a series of time lags, we are able to recover the time-varying amplitudes of the 5 excited singlets, and thus measure their attenuation rates. The advantage of our method is that it enhances the SNR through stacking and minimizes the background noise effect, yet it avoids the beating effect problem commonly associated with the conventional multiplet stacking method by singling out the singlets. The attenuation rates measured with our epicentral stacking method seem to be reliable in that: a) the measured attenuation rates are generally consistent among the 10 large events we used, except for a few events with unexplained larger attenuation rates; b) the plot of the log of singlet amplitudes against time lag is very close to a straight line, suggesting an accurate estimation of the attenuation rate. The Q measurements from our method are consistently lower than previous modal Q measurements, but closer to the PREM model. For example, for mode 0S25, whose Coriolis force coupling is negligible, our measured Q is between 190 and 210 depending on the event, while the PREM modal Q of 0S25 is 205, and previous modal Q measurements are as high as 242. The difference between our results and previous measurements might be due to the lower
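The slope-fitting step implied above can be sketched directly: for an exponentially decaying singlet amplitude A(t) = A0 exp(-omega t / 2Q), the log-amplitude versus time lag is a straight line whose slope gives Q as Q = -omega / (2 * slope). Frequency and Q below are illustrative values in the low-frequency normal-mode band, not measured data.

```python
import numpy as np

# Hypothetical singlet: frequency and Q chosen for illustration only.
f_hz = 4.0e-3                        # ~mHz band of low-order modes
Q_true = 205.0
omega = 2.0 * np.pi * f_hz

t = np.arange(0.0, 200.0) * 3600.0   # time lags in seconds (hourly stacks)
amp = 3.0 * np.exp(-omega * t / (2.0 * Q_true))   # stacked amplitudes

# "The plot of the log of singlet amplitudes against time lag": fit it.
slope, intercept = np.polyfit(t, np.log(amp), 1)
Q_est = -omega / (2.0 * slope)
print(Q_est)                         # recovers ~205
```

In practice the amplitudes carry noise, so the straightness of this line (point b in the abstract) is itself a diagnostic of measurement quality.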
Institute of Scientific and Technical Information of China (English)
Liu Xiqiang; Zhou Huilan; Li Hong; Gai Dianguang
2000-01-01
Based on the propagation characteristics of shear waves in anisotropic layers, the correlation among several splitting shear-wave identification methods has been studied. This paper puts forward a method for estimating splitting shear-wave phases and their reliability, using the assumption that the variance of noise and useful signal data obey a normal distribution. To check the validity of the new method, the identification results and error estimation corresponding to a 95% confidence level, obtained by analyzing simulated signals, are given.
An extended stochastic method for seismic hazard estimation
Directory of Open Access Journals (Sweden)
A. K. Abd el-aal
2015-12-01
In this contribution, we developed an extended stochastic technique for seismic hazard assessment purposes. This technique builds on the stochastic technique of Boore (2003, "Simulation of ground motion using the stochastic method", Appl. Geophy. 160:635–676). The essential aim of the extended stochastic technique is to obtain and simulate ground motion in order to minimize future earthquake consequences. The first step of this technique is defining the seismic sources which most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources. This is followed by estimating the ground motion using an empirical attenuation relationship. Finally, the site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this developed technique at Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta cities to predict the ground motion. Also, it was applied at Cairo, Zagazig and Damietta cities to estimate the maximum peak ground acceleration at actual soil conditions. In addition, 0.5, 1, 5, 10 and 20 % damping median response spectra are estimated using the extended stochastic simulation technique. The highest calculated acceleration values at bedrock conditions are found at Suez city, with a value of 44 cm s−2. These acceleration values decrease towards the north of the study area, reaching 14.1 cm s−2 at Damietta city. This is in agreement with, and comparable to, the results of previous studies of seismic hazards in northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
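The attenuation-plus-site-amplification step can be sketched generically. The functional form below is a common template for empirical attenuation relationships, but its coefficients and the site factor are placeholders, not the relationship used in the study.

```python
import numpy as np

def pga_bedrock(M, R_km, a=-3.5, b=1.0, c=1.2, d=10.0):
    """Generic empirical attenuation form ln(PGA) = a + b*M - c*ln(R + d).
    Coefficients are illustrative placeholders, not a published law."""
    return np.exp(a + b * M - c * np.log(R_km + d))

def pga_surface(M, R_km, site_amp=1.8):
    """Apply a (hypothetical) site amplification factor for soil sites."""
    return site_amp * pga_bedrock(M, R_km)

# Maximum expected magnitude assigned to a nearby source, and a site
# 30 km away (both values invented):
M_max, R = 6.5, 30.0
print(pga_bedrock(M_max, R), pga_surface(M_max, R))
```

The workflow mirrors the abstract: define the source, assign its maximum magnitude, attenuate to the site, then amplify for actual soil conditions.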
A TRMM Rainfall Estimation Method Applicable to Land Areas
Prabhakara, C.; Iacovazzi, R.; Weinman, J.; Dalu, G.
1999-01-01
Methods developed to estimate rain rate on a footprint scale over land with the satellite-borne multispectral dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer have met with limited success. Variability of surface emissivity on land and beam filling are commonly cited as the weaknesses of these methods. On the contrary, we contend a more significant reason for this lack of success is that the information content of the spectral and polarization measurements of the SSM/I is limited because of significant redundancy. As a result, the complex nature and vertical distribution of frozen and melting ice particles of different densities, sizes, and shapes cannot be resolved satisfactorily. Extinction in the microwave region due to these complex particles can mask the extinction due to rain drops. For these reasons, theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. To illustrate the weakness of these models, consider as an example the brightness temperature measurement made by the radiometer in the 85 GHz channel (T85). Models indicate that T85 should be inversely related to the rain rate, because of scattering. However, rain rates derived from 15-minute rain gauges on land indicate that this is not true in a majority of footprints. This is also supported by ship-borne radar observations of rain over the ocean in the Tropical Oceans and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE) region. We do not follow the above path of rain retrieval on a footprint scale. Instead, we depend on the limited ability of the microwave radiometer to detect the presence of rain. This capability is useful to determine the rain area in a mesoscale region. We find in a given rain event that this rain area is closely related to the mesoscale-average rain rate
METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE
Directory of Open Access Journals (Sweden)
Татьяна Александровна Коркина
2013-08-01
Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people who carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to estimating the professionalism of an employee, both from the position of the external manifestations of the reliability and effectiveness of the work and from the position of the personal characteristics of the employee that determine the results of his work. This approach includes the assessment of the level of qualification and motivation of the employee for each key job function, as well as the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to the estimation of professionalism makes it possible to identify «bottlenecks» in the structure of the employee's labour functions and to define directions for developing the professional qualities of the worker to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11
[Methods for the estimation of the renal function].
Fontseré Baldellou, Néstor; Bonal I Bastons, Jordi; Romero González, Ramón
2007-10-13
Chronic kidney disease is one of the pathologies with the greatest incidence and prevalence in present health systems. The ambulatory application of different methods that allow suitable detection, monitoring and stratification of renal function is of crucial importance. Given the imprecision of serum creatinine as a marker, a set of predictive equations for the estimation of the glomerular filtration rate has been developed. Nevertheless, it is essential for the physician to know their limitations: in situations of normal renal function and hyperfiltration, in certain associated pathologies, and in extreme situations of nutritional status and age. In these cases, the application of isotopic techniques for the calculation of renal function is more advisable.
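As one concrete example of a creatinine-based predictive equation (the abstract does not specify which equations it reviews), the classic Cockcroft-Gault formula for creatinine clearance can be coded directly:

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate in ml/min:
    CrCl = (140 - age) * weight / (72 * serum creatinine),
    multiplied by 0.85 for women."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative patient: 60 years, 70 kg, serum creatinine 1.0 mg/dl.
print(cockcroft_gault(60, 70, 1.0))   # ~77.8 ml/min
```

It is exactly in the edge cases named above (extreme age, nutritional status, hyperfiltration) that such equations drift from measured GFR.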
Performance of Deconvolution Methods in Estimating CBOC-Modulated Signals
Directory of Open Access Journals (Sweden)
Danai Skournetou
2011-01-01
Multipath propagation is one of the most difficult error sources to compensate in global navigation satellite systems due to its environment-specific nature. In order to gain a better understanding of its impact on the received signal, the establishment of a theoretical performance limit can be of great assistance. In this paper, we derive the Cramer-Rao lower bounds (CRLBs), where in one case the unknown parameter vector corresponds to any of the three multipath signal parameters of carrier phase, code delay, and amplitude, and in the second case all possible combinations of joint parameter estimation are considered. Furthermore, we study how various channel parameters affect the computed CRLBs, and we use these bounds to compare the performance of three deconvolution methods: least squares, minimum mean square error, and projection onto convex space. In all our simulations, we employ CBOC modulation, which is the one selected for future Galileo E1 signals.
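A minimal numerical illustration of a CRLB, far simpler than the CBOC multipath case: for a known signal shape s with unknown amplitude a in white Gaussian noise, the bound is sigma^2 / ||s||^2, and the matched-filter (least-squares) estimator attains it.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, a_true = 64, 0.5, 1.0
s = np.cos(2 * np.pi * 0.1 * np.arange(N))   # known signal shape

# CRLB for the amplitude parameter alone.
crlb = sigma**2 / np.dot(s, s)

# ML / least-squares amplitude estimate is the matched-filter projection.
est = []
for _ in range(5000):
    y = a_true * s + sigma * rng.standard_normal(N)
    est.append(np.dot(s, y) / np.dot(s, s))

print(np.var(est), crlb)   # empirical variance matches the bound
```

Joint estimation of phase, delay, and amplitude couples the parameters through the Fisher information matrix, which is what raises the bounds in the paper's multiparameter cases.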
Colorimetry method for estimation of glycine, alanine and isoleucine
Directory of Open Access Journals (Sweden)
Shah S
2007-01-01
A simple and sensitive colorimetric method has been developed for estimation of the amino acids glycine, alanine and isoleucine. Amino acids were derivatized with dichlone in the presence of sodium bicarbonate. The derivatized amino acids showed maximum absorbance at 470 nm. The method was validated in terms of linearity (5-25 µg/ml for glycine, alanine and isoleucine), precision (intra-day variation 0.13-0.78, 0.22-1.29 and 0.58-2.52%, and inter-day variation 0.52-2.49, 0.43-3.12 and 0.58-4.48% for glycine, alanine and isoleucine, respectively), accuracy (91.43-98.86, 96.26-105.99 and 95.73-104.82% for glycine, alanine and isoleucine, respectively), limit of detection (0.6, 1 and 1 µg/ml for glycine, alanine and isoleucine, respectively) and limit of quantification (5 µg/ml for glycine, alanine and isoleucine). The method was found to be simple and sensitive.
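The typical workflow behind such a validated assay can be sketched with invented readings: fit a linear calibration of absorbance at 470 nm against concentration, invert it for unknowns, and derive a detection limit from blank noise (the 3.3 x sd/slope convention). None of the numbers below are from the paper.

```python
import numpy as np

# Hypothetical calibration data over the stated linear range (5-25 ug/ml).
conc = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # ug/ml
absb = np.array([0.12, 0.23, 0.35, 0.46, 0.58])  # made-up A470 readings

slope, intercept = np.polyfit(conc, absb, 1)     # Beer-Lambert-style line

def estimate_conc(a470):
    """Invert the calibration line for an unknown sample."""
    return (a470 - intercept) / slope

# Limit of detection from blank noise: LOD = 3.3 * sd_blank / slope.
sd_blank = 0.002                                  # assumed blank std dev
lod = 3.3 * sd_blank / slope

print(estimate_conc(0.30), lod)
```

The paper's reported LOD/LOQ and precision figures come from exactly this kind of regression plus repeated-measurement statistics.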
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth-degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
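One simple way to realize the moment-matching idea (a sketch, assuming the polynomial PDF is supported on a known interval [a, b]; the paper's algorithm may differ in details): requiring that p(x) = sum_j c_j x^j reproduce the first N moments yields a linear system with a Hilbert-type matrix of interval moments.

```python
import numpy as np

def polynomial_pdf(moments, a, b):
    """Coefficients c_j of p(x) = sum_j c_j x^j on [a, b] whose first
    len(moments) raw moments (including m_0 = 1) match the given ones.
    Solves H c = m with H[k][j] = integral_a^b x^(k+j) dx."""
    n = len(moments)
    H = np.array([[(b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
                   for j in range(n)] for k in range(n)])
    return np.linalg.solve(H, np.asarray(moments, dtype=float))

# Sanity check against a known density: the uniform distribution on
# [0, 1] has raw moments m_k = 1 / (k + 1).
c = polynomial_pdf([1.0, 0.5, 1.0 / 3.0], 0.0, 1.0)
print(c)   # recovers p(x) = 1, i.e. coefficients ~ [1, 0, 0]
```

For higher degrees this matrix becomes ill-conditioned, one reason the authors emphasize an algorithmic setup that ensures rigor in use.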
Directory of Open Access Journals (Sweden)
Leontine Alkema
BACKGROUND: In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. METHODS: We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. FINDINGS: Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. CONCLUSIONS: The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues.
Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen
2014-01-01
Background In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954
A new rapid method for rockfall energies and distances estimation
Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric
2016-04-01
Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope and they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To take into account the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed us to find empirical laws that relate impact energies
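A toy Monte Carlo in the spirit of the sensitivity analysis above: vary cliff height and detachment speed and collect the kinetic energy of the first impact at the base, here under a free-fall assumption (a gross simplification of the 2-D trajectory model; all parameter ranges are invented).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
m = 500.0                              # representative block mass, kg
g = 9.81

# Hypothetical input ranges, mimicking the systematic variation of
# geometry and initial conditions in the sensitivity analysis.
H = rng.uniform(20.0, 40.0, n)         # cliff height, m
v0 = rng.uniform(0.0, 2.0, n)          # detachment speed, m/s

# Energy balance for a direct fall to the base (no intermediate impacts):
# 1/2 m v^2 = 1/2 m v0^2 + m g H.
v_impact = np.sqrt(v0**2 + 2.0 * g * H)
E = 0.5 * m * v_impact**2              # first-impact kinetic energy, J

print(np.percentile(E, 95) / 1e3)      # kJ, a design-level estimate
```

The paper's empirical laws are, in effect, fitted summaries of ensembles like this one, computed with a far richer trajectory model.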
Inverse method for estimating respiration rates from decay time series
Directory of Open Access Journals (Sweden)
D. C. Forney
2012-03-01
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
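The regularized inversion can be sketched with a Tikhonov-penalized least-squares fit of a discretized rate distribution to a noisy decay series (synthetic two-pool data below; the paper's regularization and discretization details may differ, and its reference codes are in Matlab rather than Python).

```python
import numpy as np

# Discretize a continuum of decay rates (log-spaced) and build the
# forward map from a rate distribution p to a mass-remaining series.
t = np.linspace(0.0, 10.0, 60)                 # observation times
k = np.logspace(-2, 1, 40)                     # candidate decay rates
A = np.exp(-np.outer(t, k))                    # A[i, j] = exp(-k_j * t_i)

# Synthetic "data": two-pool decay plus measurement noise.
g_true = 0.6 * np.exp(-0.1 * t) + 0.4 * np.exp(-2.0 * t)
rng = np.random.default_rng(4)
g = g_true + 0.005 * rng.standard_normal(t.size)

def tikhonov(A, g, lam):
    """Regularized inversion: argmin ||A p - g||^2 + lam^2 ||p||^2,
    solved as one augmented least-squares problem."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    g_aug = np.concatenate([g, np.zeros(n)])
    return np.linalg.lstsq(A_aug, g_aug, rcond=None)[0]

p = tikhonov(A, g, lam=0.1)
print(np.linalg.norm(A @ p - g))               # fits the data, not the noise
```

An unregularized `lam=0` solve of this same system exhibits exactly the pathology the abstract describes: a few spiky "pools" that chase the noise.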
Inverse method for estimating respiration rates from decay time series
Directory of Open Access Journals (Sweden)
D. C. Forney
2012-09-01
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore...
Evaluating simplified methods for liquefaction assessment for loss estimation
Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia
2017-06-01
Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at
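The binary classification scoring used above can be sketched directly: forecast liquefaction wherever LPI exceeds a threshold, then score hit rates at liquefied and non-liquefied sites separately. The LPI values and observation flags below are invented.

```python
import numpy as np

def forecast_scores(lpi, observed, threshold):
    """Fraction of liquefied / non-liquefied sites forecast correctly when
    liquefaction is predicted wherever LPI exceeds the threshold."""
    pred = lpi > threshold
    tpr = np.mean(pred[observed])       # hit rate at liquefied sites
    tnr = np.mean(~pred[~observed])     # correct rejections elsewhere
    return float(tpr), float(tnr)

# Hypothetical site data: LPI values and observed liquefaction flags.
lpi = np.array([2.0, 9.5, 7.5, 1.0, 12.0, 4.0, 8.0, 0.5])
obs = np.array([False, True, True, False, True, False, True, False])
print(forecast_scores(lpi, obs, threshold=7))   # -> (1.0, 1.0) here
```

Sweeping the threshold trades these two rates against each other, which is how the study arrives at its thresholds of 7 and 4 for the two LPI-based models.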
Pipeline heating method based on optimal control and state estimation
Energy Technology Data Exchange (ETDEWEB)
Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu
2010-07-01
In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in a pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but thermal insulation and chemical injection are currently the standard solutions. An alternative flow-assurance method is to heat the pipeline. This concept, known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example illustrates how Bayesian filters can be used to reconstruct the temperature field from temperature measurements assumed available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermophysical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
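The state estimation idea above can be illustrated, in heavily simplified scalar form, by a Kalman filter reconstructing a cooling temperature from noisy external-surface measurements. The paper uses a full Bayesian filter on a 2-D cross-section; the dynamics, noise levels and ambient temperature here are all invented for the sketch.

```python
import random

# Minimal sketch (not the paper's model): a scalar Kalman filter tracking a
# slowly cooling pipeline temperature from noisy surface measurements.

def kalman_1d(measurements, x0, p0, a=0.99, q=0.01, r=4.0):
    """a: decay factor (cooling toward a zero ambient), q: process noise
    variance, r: measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: temperature decays toward ambient
        x, p = a * x, a * a * p + q
        # Update with the external-surface measurement
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
truth = [40.0 * 0.99 ** t for t in range(50)]          # simulated cooling
meas = [T + random.gauss(0, 2.0) for T in truth]       # noisy sensor
est = kalman_1d(meas, x0=meas[0], p0=10.0)
```

In the paper, the reconstructed temperature field then feeds the optimal controller that decides how hard to drive the heating cables.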
A method for estimating vegetation change over time and space
Institute of Scientific and Technical Information of China (English)
LI Maihe; Norbert Kraeuchi
2003-01-01
Plant diversity is used as an indicator of the well-being of vegetation and ecological systems. Human activities and global change drive changes in the composition and competition of vegetation species through plant invasions and the replacement of existing species on a given scale. However, species diversity indices do not consider the effects of invasions on the diversity value and on the functions of ecosystems. On the other hand, the existing diversity-index methods cannot be used directly for cross-scale evaluation of vegetation data. Therefore, we propose a 3-dimensional model derived from the logistic equation for estimating vegetation change, using native and non-native plant diversity. The two variables, based on the current and the theoretical maximum diversity of native plants on a given scale, and the result of the model are relative values without units, and are therefore scale-independent. Hence, this method can be used directly for cross-scale evaluations of vegetation data, and indirectly for estimating ecosystem or environmental change.
A Method for Estimation of Death Tolls in Disastrous Earthquake
Pai, C.; Tien, Y.; Teng, T.
2004-12-01
Fatality tolls are among the most important items of damage and loss caused by a disastrous earthquake. If we can precisely estimate the potential tolls and the distribution of fatalities in individual districts as soon as an earthquake occurs, it not only makes emergency programs and disaster management more effective but also supplies critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, types and usage habits of buildings, distribution of population and socio-economic conditions, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is currently the greatest in the world. Moreover, complete seismic data are easily obtained from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake. It therefore becomes possible to estimate death tolls caused by an earthquake in Taiwan based on this preliminary information. First, we take the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give a PGA Index for each seismic station, according to the mainshock data of the Chi-Chi earthquake. To supply the distribution of iso-seismic intensity contours in any district, and to resolve the problem of districts containing no seismic station, we apply the Kriging interpolation method and GIS software to the PGA Index and geographical coordinates of the individual stations. The population density depends on
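The PGA Index described in this record is just the arithmetic mean of the three peak-ground-acceleration components recorded at a station; a trivial sketch (the component values below are invented, and the Kriging interpolation step is not reproduced):

```python
def pga_index(pga_ns, pga_ew, pga_ud):
    """Arithmetic mean of the three PGA components (e.g., in gal)."""
    return (pga_ns + pga_ew + pga_ud) / 3.0

# Invented station reading: NS, EW and vertical PGA components
idx = pga_index(300.0, 250.0, 200.0)  # 250.0
```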
Estimation methods of eco-environmental water requirements: Case study
Institute of Scientific and Technical Information of China (English)
YANG Zhifeng; CUI Baoshan; LIU Jingling
2005-01-01
Supplying water to the ecological environment with certain quantity and quality is significant for the protection of diversity and the realization of sustainable development. The conception and connotation of eco-environmental water requirements, including the definition of the conception and the composition and characteristics of eco-environmental water requirements, are evaluated in this paper. The classification and estimation methods of eco-environmental water requirements are then proposed. On the basis of the study on the Huang-Huai-Hai Area, the present water use and the minimum and suitable water requirements are estimated, and the corresponding water shortage is also calculated. According to the interrelated programs, the eco-environmental water requirements in the coming years (2010, 2030, 2050) are estimated. The results indicate that the minimum and suitable eco-environmental water requirements fluctuate with differences in function setting and the referential standard of water resources, and so does the water shortage. Moreover, the study indicates that the minimum eco-environmental water requirement of the study area ranges from 2.84×10^10 m^3 to 1.02×10^11 m^3, the suitable water requirement ranges from 6.45×10^10 m^3 to 1.78×10^11 m^3, and the water shortage ranges from 9.1×10^9 m^3 to 2.16×10^10 m^3 under the minimum water requirement and from 3.07×10^10 m^3 to 7.53×10^10 m^3 under the suitable water requirement. According to the different values of the water shortage, water priority can be allocated. The ranges of the eco-environmental water requirements in the three coming years (2010, 2030, 2050) are 4.49×10^10-1.73×10^11 m^3, 5.99×10^10-2.09×10^11 m^3, and 7.44×10^10-2.52×10^11 m^3, respectively.
Infrared thermography method for fast estimation of phase diagrams
Energy Technology Data Exchange (ETDEWEB)
Palomo Del Barrio, Elena [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Cadoret, Régis [Centre National de la Recherche Scientifique, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Daranlot, Julien [Solvay, Laboratoire du Futur, 178 Av du Dr Schweitzer, 33608 Pessac (France); Achchaq, Fouzia, E-mail: fouzia.achchaq@u-bordeaux.fr [Université de Bordeaux, Institut de Mécanique et d’Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France)
2016-02-10
Highlights: • Infrared thermography is proposed to determine phase diagrams in record time. • Phase boundaries are detected by means of emissivity changes during heating. • Transition lines are identified by using Singular Value Decomposition techniques. • Different binary systems have been used for validation purposes. - Abstract: Phase change materials (PCMs) are widely used today in thermal energy storage applications. Pure PCMs are rarely used because their melting points are rarely well-adapted; mixtures are preferred instead. The search for suitable mixtures, preferably eutectics, is often a tedious and time-consuming task which requires the determination of phase diagrams. In order to accelerate this screening step, a new method for estimating phase diagrams in record time (1–3 h) has been established and validated. A sample composed of small droplets of mixtures with different compositions (as many as necessary to have good coverage of the phase diagram) deposited on a flat substrate is first prepared and cooled down to ambient temperature so that all droplets crystallize. The plate is then heated at a constant heating rate up to a temperature sufficiently high to melt all the small crystals. The heating process is imaged using an infrared camera. An appropriate method based on the singular value decomposition technique has been developed to analyze the recorded images and to determine the transition lines of the phase diagram. The method has been applied to determine several simple eutectic phase diagrams, and the results have been validated by comparison with phase diagrams obtained by Differential Scanning Calorimetry measurements and by thermodynamic modelling.
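As a toy illustration of the SVD step (not the paper's actual detection procedure), the sketch below builds synthetic per-pixel heating signals, each a smooth ramp plus a step at melting, and checks that a couple of singular modes capture the shared temporal patterns from which transition lines would be identified. All data here are invented.

```python
import numpy as np

# Each row of A is one pixel's emissivity signal during heating: a ramp with
# the temperature sweep plus a step at the (shared) melting point, plus noise.
t = np.linspace(0, 1, 200)
melt_at = 0.6
ramp = t
step = (t > melt_at).astype(float)

rng = np.random.default_rng(2)
A = (rng.uniform(0.5, 1.5, (50, 1)) * ramp        # 50 pixels, random mixtures
     + rng.uniform(0.5, 1.5, (50, 1)) * step
     + 0.01 * rng.standard_normal((50, 200)))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
energy = s[:2] ** 2 / np.sum(s ** 2)   # first two modes carry ~all variance
# Vt[0] and Vt[1] are the common temporal patterns (ramp/step combinations);
# the step location in them marks the transition temperature.
```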
Directory of Open Access Journals (Sweden)
Neeraj Tiwari
2014-06-01
Full Text Available Under inclusion probability proportional to size (IPPS) sampling, the exact second-order inclusion probabilities are often very difficult to obtain, and hence the variance of the Horvitz-Thompson estimator and the Sen-Yates-Grundy estimate of that variance are difficult to compute. Researchers have therefore developed alternative variance estimators based on approximations of the second-order inclusion probabilities in terms of the first-order inclusion probabilities. We numerically compare the performance of the various alternative approximate variance estimators using the split method of sample selection
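The quantities named in this record can be written down directly. Below is a sketch of the Horvitz-Thompson total and the Sen-Yates-Grundy variance estimate for a toy two-unit sample with known (invented) first- and second-order inclusion probabilities; the approximate estimators compared in the paper replace `pi_joint` with functions of the first-order probabilities only.

```python
def horvitz_thompson(y, pi):
    """HT estimator of the population total: sum of y_i / pi_i."""
    return sum(yi / pi_i for yi, pi_i in zip(y, pi))

def syg_variance(y, pi, pi_joint):
    """Sen-Yates-Grundy variance estimate; pi_joint[i][j] is the
    second-order inclusion probability of units i and j."""
    n = len(y)
    v = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            w = (pi[i] * pi[j] - pi_joint[i][j]) / pi_joint[i][j]
            v += w * (y[i] / pi[i] - y[j] / pi[j]) ** 2
    return v

# Invented two-unit sample
y = [10.0, 20.0]
pi = [0.5, 0.5]
pi_joint = [[0.5, 0.2], [0.2, 0.5]]
total = horvitz_thompson(y, pi)      # 10/0.5 + 20/0.5 = 60.0
var = syg_variance(y, pi, pi_joint)  # ((0.25-0.2)/0.2) * (20-40)^2 = 100.0
```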
A non-destructive method for estimating onion leaf area
Directory of Open Access Journals (Sweden)
Córcoles J.I.
2015-06-01
Full Text Available Leaf area is one of the most important parameters for characterizing crop growth and development, and its measurement is useful for examining the effects of agronomic management on crop production. It is related to interception of radiation, photosynthesis, biomass accumulation, transpiration and gas exchange in crop canopies. Several direct and indirect methods have been developed for determining leaf area. The aim of this study is to develop an indirect method, based on the use of a mathematical model, to compute leaf area in an onion crop using non-destructive measurements, with the condition that the model must be practical and useful as a Decision Support System tool to improve crop management. A field experiment was conducted in a 4.75 ha commercial onion plot irrigated with a centre pivot system in Aguas Nuevas (Albacete, Spain) during the 2010 irrigation season. To determine onion crop leaf area in the laboratory, the crop was sampled on four occasions between 15 June and 15 September. At each sampling event, eight experimental plots of 1 m2 were used, and the leaf area of individual leaves was computed using two indirect methods: one based on an automated infrared imaging system (LI-COR-3100C) and the other using a digital scanner (EPSON GT-8000), obtaining several images that were processed using ImageJ v1.43 software. A total of 1146 leaves were used. Before measuring the leaf area, 25 parameters related to leaf length and width were determined for each leaf. The combined application of principal components analysis and cluster analysis for grouping leaf parameters was used to reduce the number of variables from 25 to 12. The parameter derived from the product of the total leaf length (L) and the leaf diameter at a distance of 25% of the total leaf length (A25) gave the best results for estimating leaf area using a simple linear regression model. The model obtained was useful for computing leaf area using a non
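The final estimation step described above is a simple linear regression of leaf area on the product L × A25. A sketch with ordinary least squares on invented leaf data (the study's fitted coefficients are not reproduced here):

```python
def fit_simple_regression(x, y):
    """Ordinary least squares for y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Invented sample: x = L * A25 (cm^2), y = measured leaf area (cm^2)
lengths_a25 = [(40, 1.0), (50, 1.2), (60, 1.3), (30, 0.8)]
x = [l * a for l, a in lengths_a25]
y = [28.0, 42.0, 55.0, 17.0]
b0, b1 = fit_simple_regression(x, y)
pred = b0 + b1 * (45 * 1.1)  # predicted area for a new leaf
```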
An ultrasonic guided wave method to estimate applied biaxial loads
Shi, Fan; Michaels, Jennifer E.; Lee, Sang Jun
2012-05-01
Guided waves propagating in a homogeneous plate are known to be sensitive to both temperature changes and applied stress variations. Here we consider the inverse problem of recovering homogeneous biaxial stresses from measured changes in phase velocity at multiple propagation directions using a single mode at a specific frequency. Although there is no closed form solution relating phase velocity changes to applied stresses, prior results indicate that phase velocity changes can be closely approximated by a sinusoidal function with respect to angle of propagation. Here it is shown that all sinusoidal coefficients can be estimated from a single uniaxial loading experiment. The general biaxial inverse problem can thus be solved by fitting an appropriate sinusoid to measured phase velocity changes versus propagation angle, and relating the coefficients to the unknown stresses. The phase velocity data are obtained from direct arrivals between guided wave transducers whose direct paths of propagation are oriented at different angles. This method is applied and verified using sparse array data recorded during a fatigue test. The additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of transducer pairs. Results show that applied stresses can be successfully recovered from the measured changes in guided wave signals.
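The fitting step described above can be sketched as a linear least-squares fit of dv(θ) = c0 + c1·cos 2θ + c2·sin 2θ to phase-velocity changes measured at several propagation angles. The sinusoidal form follows the abstract; the mapping from the fitted coefficients to the two unknown stresses (calibrated from a uniaxial test in the paper) is omitted, and the measurements below are synthetic.

```python
import numpy as np

# Propagation angles of the transducer pairs (invented layout)
angles = np.radians([0, 30, 60, 90, 120, 150])

# Synthetic noise-free phase-velocity changes from known coefficients
true_c = (0.5, -1.2, 0.3)
dv = true_c[0] + true_c[1] * np.cos(2 * angles) + true_c[2] * np.sin(2 * angles)

# Linear least squares: dv = c0 + c1*cos(2θ) + c2*sin(2θ)
A = np.column_stack([np.ones_like(angles),
                     np.cos(2 * angles),
                     np.sin(2 * angles)])
coeffs, *_ = np.linalg.lstsq(A, dv, rcond=None)
```

With noisy data the same `lstsq` call returns the best-fit sinusoid, from which the biaxial stresses are recovered via the uniaxial calibration.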
Filtering Method for Location Estimation of an Underwater Robot
Directory of Open Access Journals (Sweden)
Nak Yong Ko
2014-05-01
Full Text Available This paper describes an application of the extended Kalman filter (EKF) for localization of an underwater robot. For the application, a linearized model of robot motion and sensor measurement is derived. Like the usual EKF, the method is a recursion of two main steps: the time update (or prediction) and the measurement update. The measurement update uses exteroceptive sensors: four acoustic beacons and a pressure sensor. The four beacons provide four ranges from the beacons to the robot, and the pressure sensor provides the depth of the robot. One of the major contributions of the paper is the suggestion of two measurement update approaches. The first approach corrects the predicted state using the measurement data individually; the second corrects the predicted state using the measurement data collectively. Simulation analysis shows that the EKF outperforms least squares and odometry-based dead-reckoning in the precision and robustness of the estimation. Also, the EKF with collective measurement update yields better accuracy than the EKF with individual measurement update.
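The least-squares baseline that the EKF is compared against can be sketched as Gauss-Newton trilateration from the four beacon ranges plus the depth reading. The beacon layout and noise-free measurements below are invented; the paper's EKF adds a motion model and recursive prediction/update on top of this.

```python
import numpy as np

# Invented beacon layout (surface plane) and true robot position
beacons = np.array([[0.0, 0.0, 0.0],
                    [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0],
                    [10.0, 10.0, 0.0]])
true_pos = np.array([3.0, 4.0, -5.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)  # four beacon ranges
depth = -true_pos[2]                                 # pressure-sensor depth

x = np.array([5.0, 5.0, -1.0])        # initial guess
for _ in range(20):
    diffs = x - beacons
    pred = np.linalg.norm(diffs, axis=1)
    # Residuals: four range equations plus one depth equation
    r = np.concatenate([pred - ranges, [-x[2] - depth]])
    J = np.vstack([diffs / pred[:, None],      # d(range)/d(position)
                   [[0.0, 0.0, -1.0]]])        # d(depth)/d(position)
    x = x - np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton step
# x now approximates true_pos
```

Because all beacons lie in one plane, ranges alone leave a mirror ambiguity in z; the depth equation resolves it, which is why the pressure sensor matters.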
Application of age estimation methods based on teeth eruption: how easy is Olze method to use?
De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C
2014-09-01
The development of new methods for age estimation has become an urgent issue because of increasing immigration and the need to accurately estimate the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method to daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43 and 57 % for the right lower third molar (M48) and between 23 and 49 % for the left lower third molar (M38). The chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply when used by not adequately trained personnel, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.
Estimation under Multicollinearity: A Comparative Approach Using Monte Carlo Methods
Directory of Open Access Journals (Sweden)
D. A. Agunbiade
2010-01-01
Full Text Available Problem statement: A comparative investigation was done experimentally for 6 different estimation techniques of a just-identified simultaneous three-equation econometric model with three multicollinear exogenous variables. Approach: The aim is to explore in depth the effects of multicollinearity and examine the sensitivity of findings to increasing sample sizes and increasing numbers of replications using the mean and total absolute bias statistics. Results: Findings revealed that the estimates are virtually identical for three estimators: LIML, 2SLS and ILS, while the performances of the other categories are not uniformly affected by the three levels of multicollinearity considered. It was also observed that while the frequency distribution of average parameter estimates was rather symmetric under OLS, the other estimators were either negatively or positively skewed with no clear pattern. Conclusion: The study established that L2ILS estimators are best for estimating parameters of data plagued by the lower open interval (negative) level of multicollinearity, while FIML and OLS respectively rank highest for estimating parameters of data characterized by the closed interval and upper category levels of multicollinearity.
Estimation methods for bioaccumulation in risk assessment of organic chemicals.
Jager, D.T.; Hamers, T.
1997-01-01
The methodology for estimating bioaccumulation of organic chemicals is evaluated. This study is limited to three types of organisms: fish, earthworms and plants (leaf crops, root crops and grass). We propose a simple mechanistic model for estimating BCFs which performs well against measured data. To
Stability over Time of Different Methods of Estimating School Performance
Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu
2014-01-01
This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…
Accuracy of a new bedside method for estimation of circulating blood volume
DEFF Research Database (Denmark)
Christensen, P; Waever Rasmussen, J; Winther Henneberg, S
1993-01-01
To evaluate the accuracy of a modification of the carbon monoxide method of estimating the circulating blood volume.
Comparison of Accelerometry Methods for Estimating Physical Activity.
Kerr, Jacqueline; Marinac, Catherine R; Ellis, Katherine; Godbole, Suneeta; Hipp, Aaron; Glanz, Karen; Mitchell, Jonathan; Laden, Francine; James, Peter; Berrigan, David
2017-03-01
This study aimed to compare physical activity estimates across different accelerometer wear locations, wear time protocols, and data processing techniques. A convenience sample of middle-aged to older women wore a GT3X+ accelerometer at the wrist and hip for 7 d. Physical activity estimates were calculated using three data processing techniques: single-axis cut points, raw vector magnitude thresholds, and machine learning algorithms applied to the raw data from the three axes. Daily estimates were compared for the 321 women using generalized estimating equations. A total of 1420 d were analyzed. Compliance rates for the hip versus wrist location only varied by 2.7%. All differences between techniques, wear locations, and wear time protocols were statistically different (P machine-learned algorithm found 74% of participants with 150 min of walking/running per week. The wrist algorithms found 59% and 60% of participants with 150 min of physical activity per week using the raw vector magnitude and machine-learned techniques, respectively. When the wrist device was worn overnight, up to 4% more participants met guidelines. Estimates varied by 52% across techniques and by as much as 41% across wear locations. Findings suggest that researchers should be cautious when comparing physical activity estimates from different studies. Efforts to standardize accelerometry-based estimates of physical activity are needed. A first step might be to report on multiple procedures until a consensus is achieved.
Variational methods to estimate terrestrial ecosystem model parameters
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
A New Pitch Estimation Method Based on AMDF
Directory of Open Access Journals (Sweden)
Huan Zhao
2013-10-01
Full Text Available In this paper, a new modified average magnitude difference function (MAMDF) is proposed that is robust for pitch estimation of noise-corrupted speech. Traditional pitch estimation techniques can easily detect an erroneous pitch period, and their performance degrades in the presence of background noise. In calculations on speech samples, the MAMDF presented in this paper strengthens the characteristic of the pitch period and reduces the influence of background noise. Therefore, MAMDF can not only reduce the errors brought about by the decreasing trend of the difference function but also overcome errors caused by severe variation between neighboring samples. Experiments on the CSTR database show that MAMDF is greatly superior to AMDF and CAMDF in both clean and noisy speech environments, exhibiting prominent precision and robustness in pitch estimation.
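For reference, plain AMDF pitch detection looks like the sketch below; the paper's MAMDF modifies this function to flatten its falling trend and suppress noise, and those modifications are not reproduced here. The test frame is a clean synthetic tone.

```python
import math

def amdf(frame, min_lag, max_lag):
    """Return the lag in [min_lag, max_lag] minimizing the average
    magnitude difference function; that lag is the pitch-period estimate."""
    n = len(frame)
    best_lag, best_val = min_lag, float("inf")
    for lag in range(min_lag, max_lag + 1):
        d = sum(abs(frame[i] - frame[i + lag]) for i in range(n - lag))
        d /= (n - lag)                     # normalize by overlap length
        if d < best_val:
            best_lag, best_val = lag, d
    return best_lag

fs = 8000
f0 = 200.0                                  # true pitch: period = 40 samples
frame = [math.sin(2 * math.pi * f0 * t / fs) for t in range(400)]
lag = amdf(frame, min_lag=20, max_lag=200)  # 40
```

On clean periodic input the AMDF is exactly zero at the true period; with noise or the function's overall falling trend, spurious minima appear, which is the failure mode MAMDF targets.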
Estimating Incision Depth in Martian Valleys: Comparing Two Methods
Luo, W.; Howard, A. D.; Trischan, J.
2011-03-01
Regional variation of valley network (VN) depths may be informative of past climatic variation across Mars. Both black top hat transformation and search radius approach provide reasonable estimates of VN depths, but require careful interpretation.
Application of Multiple Imputation Method for Missing Data Estimation
Ser,Gazel
2011-01-01
The existence of missing observations in data collected in different fields of study causes researchers to make incorrect decisions at the analysis stage and in generalizations of the results. Problems and solutions likely to be encountered at the estimation stage of missing observations are emphasized in this study. In estimating the missing observations, they were assumed to be missing at random, and the Markov Chain Monte Carlo technique and mul...
Forms and Estimation Methods of Panel Recursive Dynamic Systems
Ghassan, Hassan B.
2000-01-01
The purpose of this paper is to study a model belonging to the family of structural equation models with data varying both across individuals (sectors) and over time. A complete theoretical analysis is developed in this work for the case of a dynamic recursive structure. Maximum likelihood estimation and SUR-GLS ("Seemingly Unrelated Regressions-Generalized Least Squares") estimators (iterated or not, with proper instruments and with Taylor's transformation) are carefully used. These last converge...
Empirical Analysis of Value-at-Risk Estimation Methods Using Extreme Value Theory
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
This paper investigates methods of value-at-risk (VaR) estimation using extreme value theory (EVT). It compares two different estimation methods, the "two-step subsample bootstrap" based on moment estimation and maximum likelihood estimation (MLE), according to their theoretical bases and computation procedures. The estimation results are then analyzed together with those of the normal method and the empirical method. Empirical research on foreign exchange data shows that the EVT methods have good characteristics for estimating VaR under extreme conditions, and the "two-step subsample bootstrap" method is preferable to MLE.
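As a hedged illustration of moment-based EVT estimation (not the paper's two-step subsample bootstrap), the sketch below fits a generalized Pareto distribution to excesses over a threshold by the method of moments and reads off a VaR; the loss sample is simulated, and the threshold choice is arbitrary.

```python
import random

def gpd_var(losses, u, p):
    """VaR at level p from a GPD fit to excesses over threshold u.
    Shape and scale come from the GPD moment relations:
    mean = sigma/(1-xi), var = sigma^2 / ((1-xi)^2 (1-2 xi))."""
    exc = [x - u for x in losses if x > u]
    n, nu = len(losses), len(exc)
    m = sum(exc) / nu
    v = sum((e - m) ** 2 for e in exc) / (nu - 1)
    xi = 0.5 * (1.0 - m * m / v)          # shape (moment estimator)
    sigma = m * (1.0 - xi)                # scale (moment estimator)
    # Invert the GPD tail approximation P(X > x) = (nu/n)(1 + xi(x-u)/sigma)^(-1/xi)
    return u + (sigma / xi) * ((n / nu * (1.0 - p)) ** (-xi) - 1.0)

random.seed(1)
losses = [random.expovariate(1.0) for _ in range(5000)]  # simulated losses
var99 = gpd_var(losses, u=2.0, p=0.99)   # near ln(100) ~ 4.6 for exponentials
```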
Improving methods estimation of the investment climate of the country
Directory of Open Access Journals (Sweden)
E. V. Ryabinin
2016-01-01
the most objective assessment of the investment climate in the country in order to build their strategies of market functioning. The article describes two methods of obtaining an estimate of the investment climate: a fundamental method and an expert method. Studies have shown that the fundamental method provides the most accurate and objective assessment, but not all investment potential factors can be subjected to mathematical evaluation. Expert opinion, in practice, is prone to subjectivity, so its use requires special care. Modern economic practice has shown that the elements of the investment climate directly affect the investment decisions of companies. Improving the investment climate assessment methodology allows building the most suitable forms of cooperation between investors and the host country. In today's political tensions, this path requires clear cooperation of subjects at both the domestic and international levels. These measures will, however, help avoid the destabilization of Russia's relations with foreign investors.
Iterative methods for distributed parameter estimation in parabolic PDE
Energy Technology Data Exchange (ETDEWEB)
Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Fast, moment-based estimation methods for delay network tomography
Energy Technology Data Exchange (ETDEWEB)
Lawrence, Earl Christophre [Los Alamos National Laboratory; Michailidis, George [U OF MICHIGAN; Nair, Vijayan N [U OF MICHIGAN
2008-01-01
Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.
Phreatophyte water use estimated by eddy-correlation methods.
Weaver, H.L.; Weeks, E.P.; Campbell, G.S.; Stannard, D.I.; Tanner, B.D.
1986-01-01
Water use was estimated for three phreatophyte communities: a saltcedar community and an alkali-sacaton grass community in New Mexico, and a greasewood-rabbitbrush-saltgrass community in Colorado. These water-use estimates were calculated from eddy-correlation measurements using three different analyses, since the direct eddy-correlation measurements did not satisfy a surface energy balance. The analysis that appears most accurate indicated that the saltcedar community used from 58 to 87 cm (23 to 34 in.) of water each year. The other two communities used about two-thirds of this quantity.
Finite-Sample Bias Propagation in Autoregressive Estimation With the Yule–Walker Method
Broersen, P.M.T.
2009-01-01
The Yule-Walker (YW) method for autoregressive (AR) estimation uses lagged-product (LP) autocorrelation estimates to compute an AR parametric spectral model. The LP estimates only have a small triangular bias in the estimated autocorrelation function and are asymptotically unbiased. However, using t
Methods for design flood estimation in South Africa
African Journals Online (AJOL)
2012-07-04
Jul 4, 2012. Keywords: design flood estimation, South Africa, research needs.
Assessment of in silico methods to estimate aquatic species sensitivity
Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...
A Practical Method of Policy Analysis by Estimating Effect Size
Phelps, James L.
2011-01-01
The previous articles on class size and other productivity research paint a complex and confusing picture of the relationship between policy variables and student achievement. Missing is a conceptual scheme capable of combining the seemingly unrelated research and dissimilar estimates of effect size into a unified structure for policy analysis and…
Methods of RVD object pose estimation and experiments
Shang, Yang; He, Yan; Wang, Weihua; Yu, Qifeng
2007-11-01
Methods of measuring an RVD (rendezvous and docking) cooperative object's pose from monocular and binocular images, respectively, are presented. The methods solve for initial values first and then optimize the object pose parameters by bundle adjustment. In the disturbance-rejecting binocular method, chosen measurement-system parameters of one camera's exterior parameters are modified simultaneously. The methods need three or more cooperative target points to measure the object's pose accurately. Experimental data show that the methods converge quickly and stably, provide accurate results, and do not need accurate initial values. Even when the chosen measurement-system parameters are subjected to some amount of disturbance, the binocular method still provides fairly accurate results.
Stability estimates for hybrid coupled domain decomposition methods
Steinbach, Olaf
2003-01-01
Domain decomposition methods are a well established tool for an efficient numerical solution of partial differential equations, in particular for the coupling of different model equations and of different discretization methods. Based on the approximate solution of local boundary value problems either by finite or boundary element methods, the global problem is reduced to an operator equation on the skeleton of the domain decomposition. Different variational formulations then lead to hybrid domain decomposition methods.
NONLINEAR ESTIMATION METHODS FOR AUTONOMOUS TRACKED VEHICLE WITH SLIP
Institute of Scientific and Technical Information of China (English)
ZHOU Bo; HAN Jianda
2007-01-01
In order to achieve precise, robust autonomous guidance and control of a tracked vehicle, a kinematic model with longitudinal and lateral slip is established. Four different nonlinear filters are used to jointly estimate both the state vector and the time-varying parameter vector of the model. The first filter is the well-known extended Kalman filter. The second is an unscented version of the Kalman filter. The third is a particle filter that uses the unscented Kalman filter to generate the importance proposal distribution. The last is a novel guaranteed filter that uses a linear set-membership estimator and can give an ellipsoid set in which the true state lies. The four approaches have different complexities, behaviors and advantages, which are surveyed and compared.
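As a minimal illustration of the joint state-and-parameter estimation idea, the sketch below runs an extended Kalman filter on a one-dimensional surrogate model in which position and a longitudinal slip coefficient are estimated together. The motion model, noise levels and true slip value are invented for illustration and are not taken from the paper.

```python
import numpy as np

# 1-D surrogate model (assumed): x_{k+1} = x_k + v*(1 - s)*dt, where s is a
# slowly varying longitudinal slip. The EKF state is [position, slip].
dt, v = 0.1, 1.0           # time step [s], commanded speed [m/s]
Q = np.diag([1e-4, 1e-5])  # process noise (position, slip) - assumed
R = np.array([[1e-2]])     # measurement noise (position only) - assumed

def f(state):              # motion model with slip
    x, s = state
    return np.array([x + v * (1.0 - s) * dt, s])

def F(state):              # Jacobian of f w.r.t. the state
    return np.array([[1.0, -v * dt],
                     [0.0, 1.0]])

H = np.array([[1.0, 0.0]])  # we measure position only

def ekf_step(mean, cov, z):
    # predict
    mean_p = f(mean)
    cov_p = F(mean) @ cov @ F(mean).T + Q
    # update
    S = H @ cov_p @ H.T + R
    K = cov_p @ H.T @ np.linalg.inv(S)
    mean_n = mean_p + (K @ (z - H @ mean_p)).ravel()
    cov_n = (np.eye(2) - K @ H) @ cov_p
    return mean_n, cov_n

# Simulate a vehicle with true slip 0.2 and noisy position measurements.
rng = np.random.default_rng(0)
true_x, true_s = 0.0, 0.2
mean, cov = np.array([0.0, 0.0]), np.eye(2)
for _ in range(500):
    true_x += v * (1.0 - true_s) * dt
    z = np.array([true_x + rng.normal(0, 0.1)])
    mean, cov = ekf_step(mean, cov, z)

print(mean[1])             # estimated slip, should approach 0.2
```

The unscented and particle filters compared in the paper would replace only the predict/update step; the augmented-state structure stays the same.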
An econometrics method for estimating gold coin futures prices
Directory of Open Access Journals (Sweden)
Fatemeh Pousti
2011-10-01
Full Text Available In this paper, we present some regression functions to estimate the gold coin futures price based on the gold coin price, the futures exchange price, the price of gold traded globally and a time trend. The proposed model is used for price estimation of a special gold coin traded in Iran. The model is applied to historical data on gold futures prices and the results are discussed. The preliminary results indicate that an increase in the gold coin price could increase the gold coin futures price, an increase in the foreign exchange price has a negative impact on gold coin futures, and the time trend has a positive impact on gold coin futures.
SPECIFIC MECHANISMS AND METHODS FOR ESTIMATING TAX FRAUD
Directory of Open Access Journals (Sweden)
Brindusa Tudose
2017-01-01
Full Text Available In the last decades, tax fraud has grown, being catalogued as a serious impediment to economic development. The paper aims to make contributions on two levels: (a) the theoretical level, by synthesizing methodologies for estimating tax fraud, and (b) the empirical level, by analyzing fraud mechanisms and the dynamics of this phenomenon using properly established methodologies. To achieve this objective, we have appealed to qualitative and quantitative analysis. Whatever the context that generates tax fraud mechanisms, the ultimate goal of fraudsters is the same: total or partial avoidance of taxation, or obtaining public funds unduly. The increasing complexity of business (regarded as a tax base) and the failure to promptly adapt legal regulations to new contexts have allowed the diversification and “improvement” of fraud mechanisms, creating additional risks for the accuracy of tax fraud estimates.
A New Method for Estimation of Velocity Vectors
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Munk, Peter
1998-01-01
, and the standard deviation in the lateral and longitudinal direction. The average performance of the estimates for all angles is: mean velocity 0.99 m/s, longitudinal S.D. 0.015 m/s, and lateral S.D. 0.196 m/s. For flow parallel to the transducer the results are: mean velocity 0.95 m/s, angle 0.10, longitudinal S...
A practical method for suction estimation in unsaturated soil testing
Amaral, M.F.; Viana da Fonseca, A.; Romero Morales, Enrique Edgar; Arroyo Alvarez de Toledo, Marcos
2013-01-01
This research presents an alternative methodology to estimate suction in triaxial tests carried out under constant water content. A preliminary determination of the retention curve is proposed using two complementary techniques, namely psychrometer measurements and mercury intrusion porosimetry results. Starting with the definition of a set of retention curves at different void ratios, an attempt is made for establishing a correspondence of the measured retention curves with the results of a ...
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Directory of Open Access Journals (Sweden)
Wang Yanqing
2014-04-01
Full Text Available During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.
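The bootstrap ingredient of GBM can be illustrated with a plain percentile bootstrap on a small sample. The vibration levels and confidence level below are made up, and the paper's full method additionally passes each resample through a GM(1,1) gray model, which is omitted here.

```python
import random
import statistics

# Resample a small vibration sample with replacement and form a percentile
# interval for the mean level. Sample values (e.g. RMS levels in g) are
# invented for illustration.
random.seed(42)
sample = [2.1, 2.4, 1.9, 2.6, 2.2]
B = 2000                                  # number of bootstrap resamples

means = []
for _ in range(B):
    resample = [random.choice(sample) for _ in sample]
    means.append(statistics.mean(resample))
means.sort()

lo = means[int(0.025 * B)]                # 95 % percentile interval
hi = means[int(0.975 * B)]
print(lo <= statistics.mean(sample) <= hi)  # interval covers the sample mean
```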
Novel Approach to Estimate Missing Data Using Spatio-Temporal Estimation Method
Directory of Open Access Journals (Sweden)
Aniruddha D. Shelotkar
2016-04-01
Full Text Available With the advancement of wireless technology and the processing power of mobile devices, every handheld device supports numerous video streaming applications. Generally, the user datagram protocol (UDP) is used in video transmission, which does not provide assured quality of service (QoS). Therefore, there is a need for video post-processing modules for error concealment. In this paper we propose one such algorithm to recover multiple lost blocks of data in video. The proposed algorithm is based on a combination of the wavelet transform and spatio-temporal data estimation. We decompose the frame with lost blocks using the wavelet transform into low and high frequency bands. The approximate information (low frequency) of a missing block is then estimated using spatial smoothing, and the details (high frequency) are added using bidirectional (temporal) prediction of high-frequency wavelet coefficients. Finally, the inverse wavelet transform is applied to the modified wavelet coefficients to recover the frame. In the proposed algorithm, we carry out automatic estimation of missing blocks in a spatio-temporal manner. Experiments are carried out with different YUV and compressed-domain streams. The experimental results show enhancement in PSNR as well as visual quality, cross-verified by video quality metrics (VQM).
Apparatus and method for velocity estimation in synthetic aperture imaging
DEFF Research Database (Denmark)
2003-01-01
by the object (4) and received by the elements of the transducer array (5). All of these signals are then combined in the beam processor (6) to focus all of the beams in the image in both transmit and receive mode and the simultaneously focused signals are used for updating the image in the processor (7......). The update signals are used in the velocity estimation processor (8) to correlate the individual measurements to obtain the displacement between high-resolution images and thereby determine the velocity....
Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki
Patients who receive care from a doctor often have to wait a long time at many hospitals. According to patient questionnaires, long waiting time is the worst factor in patients' dissatisfaction with hospital service. The present paper describes a method for estimating the waiting time for each patient without an electronic medical chart system. The method applies a portable RFID system to data acquisition and robust estimation of the probability distribution of the consultation and test time with the doctor, for highly accurate waiting time estimation. We carried out data acquisition at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used as a data acquisition system in various fields such as marketing services, entertainment, or human behavior measurement.
A building cost estimation method for inland ships
Hekkenberg, R.G.
2014-01-01
There is very little publicly available data about the building cost of inland ships, especially for ships that have dimensions that differ significantly from those of common ships. Also, no methods to determine the building cost of inland ships are described in literature. In this paper, a method t
Hida, Hajime; Tomigashi, Yoshio; Ueyama, Kenji; Inoue, Yukinori; Morimoto, Shigeo
This paper proposes a new torque estimation method that takes into account the spatial harmonics of permanent magnet synchronous motors and that is capable of real-time estimation. First, the torque estimation equation of the proposed method is derived. In the method, the torque ripple of a motor can be estimated from the average of the torque calculated by the conventional method (the cross product of the flux linkage and the motor current) and the torque calculated from the electric input power to the motor. Next, the effectiveness of the proposed method is verified by simulations in which two kinds of motors with different torque ripple components are considered. The simulation results show that the proposed method estimates the torque ripple more accurately than the conventional method. Further, the effectiveness of the proposed method is verified experimentally. It is shown that the torque ripple is decreased by applying the proposed method to torque control.
An econometrics method to estimate demand of sugar
Directory of Open Access Journals (Sweden)
Negar Seyed Soleimany
2012-01-01
Full Text Available Sugar is one of the strategic goods in the basket of households in each country, and it plays an important role in supplying the required energy. On the other hand, it is one of the goods for which the Iranian government is about to change its subsidy strategies. To design useful sugar subsidy strategies, it is necessary to know sugar's position in the household basket and to be familiar with households' sugar demand or consumption behavior. This research estimates sugar demand for Iranian households using time series data for 1984-2008 taken from the central bank of Iran. In this paper, the independent and dependent variables of the household sugar demand model are first chosen based on the literature review and the theory of demand. Then, sugar demand is estimated by the OLS technique and linear regression. Preliminary statistical measures such as the Durbin-Watson statistic, the F statistic and R2 indicate that the regression is admissible. The results seem plausible and consistent with theory, showing that sugar demand in Iranian households is associated with household expenditure, relative sugar price and family size, and indicating that sugar demand was affected during the war period. The results also show that the income elasticity is 0.8 and the price elasticity is -0.2, which means sugar is an essential good for Iranian households and its demand is price-inelastic.
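The estimation step described above can be sketched as an ordinary least squares fit with elasticities evaluated at the sample means. The synthetic data and coefficients below are assumptions for illustration, not the paper's Iranian time series.

```python
import numpy as np

# Fit a linear demand equation by OLS and read point elasticities at the
# sample means. The data-generating process is invented: demand rises with
# expenditure and falls with relative price.
rng = np.random.default_rng(1)
n = 25
expenditure = rng.uniform(50, 150, n)        # household expenditure
rel_price = rng.uniform(0.5, 2.0, n)         # relative sugar price
demand = 5 + 0.08 * expenditure - 1.5 * rel_price + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), expenditure, rel_price])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)

# point elasticities evaluated at the sample means
e_income = beta[1] * expenditure.mean() / demand.mean()
e_price = beta[2] * rel_price.mean() / demand.mean()
print(e_income > 0 and e_price < 0)  # signs consistent with demand theory
```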
Palatine tonsil volume estimation using different methods after tonsillectomy.
Sağıroğlu, Ayşe; Acer, Niyazi; Okuducu, Hacı; Ertekin, Tolga; Erkan, Mustafa; Durmaz, Esra; Aydın, Mesut; Yılmaz, Seher; Zararsız, Gökmen
2016-06-15
This study was carried out to measure the volume of the palatine tonsil in otorhinolaryngology outpatients with complaints of adenotonsillar hypertrophy and chronic tonsillitis who had undergone tonsillectomy. To date, no study in the literature has investigated palatine tonsil volume using different methods and compared the results with subjective tonsil size. For this purpose, we used three different methods to measure palatine tonsil volume, and the correlation of each parameter with tonsil size was assessed. After tonsillectomy, palatine tonsil volume was measured by the Archimedes, Cavalieri and Ellipsoid methods. Mean right and left palatine tonsil volumes were calculated as 2.63 ± 1.34 cm³ and 2.72 ± 1.51 cm³ by the Archimedes method, 3.51 ± 1.48 cm³ and 3.37 ± 1.36 cm³ by the Cavalieri method, and 2.22 ± 1.22 cm³ and 2.29 ± 1.42 cm³ by the Ellipsoid method, respectively. Excellent agreement was found among the three volumetric measurement techniques according to Bland-Altman plots. In addition, tonsil grade was correlated significantly with tonsil volume.
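The Ellipsoid method referred to above is, in the standard formulation, the ellipsoid volume formula applied to the three measured diameters. The dimensions in the sketch below are illustrative, not values from the study.

```python
import math

# Standard ellipsoid approximation: V = (pi/6) * length * width * height,
# where the three values are orthogonal diameters (not semi-axes).
def ellipsoid_volume(length, width, height):
    """Volume of an ellipsoid from its three diameters, same units cubed."""
    return math.pi / 6.0 * length * width * height

v = ellipsoid_volume(3.0, 1.5, 1.0)   # hypothetical tonsil dimensions in cm
print(round(v, 2))                    # → 2.36 cm^3
```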
Comparison of estimation methods for fitting Weibull distribution to ...
African Journals Online (AJOL)
Tersor
JOURNAL OF RESEARCH IN FORESTRY, WILDLIFE AND ENVIRONMENT, Volume 7, No. 2, September 2015. ... The method was more accurate in fitting the Weibull distribution to the natural stand ... appropriate for mixed age groups.
Multiuser detection and channel estimation: Exact and approximate methods
DEFF Research Database (Denmark)
Fabricius, Thomas
2003-01-01
. We also derive optimal detectors when nuisance parameters such as the channel and noise level are unknown, and show how well the proposed methods fit into this framework via the Generalised Expectation Maximisation algorithm. Our numerical evaluation shows that naive mean field annealing and adaptive … order Plefka expansion, adaptive TAP, and large system limit self-averaging behaviours, and a method based on Kikuchi and Bethe free energy approximations, which we denote the Generalised Graph Expansion. Since all these methods are improvements of the naive mean field approach, we make a thorough … analysis of the convexity and bifurcations of the naive mean field free energy and optima. This proves that we can avoid local minima by tracking a global convex solution into the non-convex region, effectively avoiding error propagation. This method is in statistical physics denoted mean field annealing …
Semi-quantitative method to estimate levels of Campylobacter
Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor and material intensive requiring either serial dilutions or MPN ...
An estimation method of the fault wind turbine power generation loss based on correlation analysis
Zhang, Tao; Zhu, Shourang; Wang, Wei
2017-01-01
A method for estimating the power generation loss of a faulty wind turbine is proposed in this paper. In this method, the wind speed is estimated, and the estimated power generation loss is obtained by combining the estimated wind speed with the actual output power characteristic curve of the wind turbine. In the wind speed estimation, correlation analysis is used: wind speed data from periods of normal operation of the faulty wind turbine are selected, and regression analysis is applied to obtain the estimated wind speed. Based on this estimation method, the paper presents an implementation in the wind turbine monitoring system and verifies the effectiveness of the proposed method.
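A minimal sketch of the two-step procedure, with a hypothetical power curve and invented speed data (the paper's actual regression setup may differ):

```python
import numpy as np

# Step 1: regress the faulty turbine's wind speed on a highly correlated
# neighbour, using pairs recorded during normal operation (values invented).
neighbour = np.array([5.1, 6.3, 7.8, 8.9, 10.2])   # neighbour speed (m/s)
own = np.array([4.9, 6.1, 7.5, 8.8, 10.0])         # faulty turbine speed
a, b = np.polyfit(neighbour, own, 1)               # own ≈ a*neighbour + b

# Hypothetical power curve: wind speed (m/s) -> output power (kW),
# interpolated piecewise linearly between the tabulated points.
curve_v = np.array([3.0, 6.0, 9.0, 12.0, 25.0])
curve_p = np.array([0.0, 300.0, 1200.0, 2000.0, 2000.0])

# Step 2: during the fault only the neighbour's speeds are observed;
# push the regressed speeds through the power curve and sum the energy.
fault_neighbour = np.array([7.0, 8.0, 9.0])        # one sample per 10-min bin
est_speed = a * fault_neighbour + b
est_power = np.interp(est_speed, curve_v, curve_p)  # kW per bin
lost_kwh = est_power.sum() * (10.0 / 60.0)          # 10-minute bins -> kWh
print(lost_kwh > 0)
```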
Energy Technology Data Exchange (ETDEWEB)
Sakurai, Kiyoshi; Arakawa, Takuya; Yamamoto, Toshihiro; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]
1996-08-01
Estimation accuracy for subcriticality in the "Indirect Estimation Method for Calculation Error" is expressed in the form ρ_m - ρ_c = K(γ_zc^2 - γ_zm^2). This expression means that the estimation accuracy for subcriticality is proportional to (γ_zc^2 - γ_zm^2), the estimation accuracy of the axial buckling. The proportionality constant K is calculated, but the influence of the uncertainty in K on the estimation accuracy for subcriticality is smaller than in the case of comparing ρ_m = -K(γ_zm^2 + B_z^2) with the calculated ρ_c. When the values of K were calculated, the estimation accuracy remained sufficient. If γ_zc^2 equals γ_zm^2, then ρ_c equals ρ_m. The reliability of this method is shown on the basis of results calculated using MCNP 4A for four subcritical cores of the TCA. (author)
Two Dynamic Discrete Choice Estimation Problems and Simulation Method Solutions
Steven Stern
1994-01-01
This paper considers two problems that frequently arise in dynamic discrete choice problems but have not received much attention with regard to simulation methods. The first problem is how to simulate unbiased simulators of probabilities conditional on past history. The second is simulating a discrete transition probability model when the underlying dependent variable is really continuous. Both methods work well relative to reasonable alternatives in the application discussed. However, in bot...
Methods for measuring and estimating methane emission from ruminants
2012-01-01
Simple Summary Knowledge about methods used in the quantification of greenhouse gases is currently needed due to international commitments to reduce emissions. In the agricultural sector, one important task is to reduce enteric methane emissions from ruminants. Different methods for quantifying these emissions are presently being used and others are under development, all with different conditions for application. For scientists and other persons working on the topic it is very important to ...
Method of estimating pulse response using an impedance spectrum
Energy Technology Data Exchange (ETDEWEB)
Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G
2014-10-21
Electrochemical Impedance Spectrum data are used to predict pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave with a period greater than or equal to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series are selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time domain estimate of the voltage using the Fourier series analysis.
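The harmonic-multiplication procedure can be sketched as follows, using a made-up series R-C impedance model in place of a measured spectrum so that the result can be sanity-checked:

```python
import numpy as np

# Represent a current pulse train by its Fourier series, multiply each
# harmonic's coefficient by the impedance at that frequency, and sum the
# series to get the time-domain voltage response. The R-C values are
# illustrative assumptions, not from the patent.
R, C = 0.05, 100.0                    # ohms, farads (assumed cell model)
T = 10.0                              # pulse period [s]
N = 512                               # samples per period
t = np.arange(N) * T / N

i_pulse = np.where(t < T / 2, 1.0, 0.0)   # 1 A square current pulse

# Fourier coefficients of the pulse via the FFT
I = np.fft.rfft(i_pulse) / N
freqs = np.fft.rfftfreq(N, d=T / N)

# impedance of the series R-C model at each harmonic (DC term taken as R)
w = 2 * np.pi * freqs
Z = R + np.where(w > 0, 1.0 / (1j * C * np.maximum(w, 1e-12)), 0.0)

# voltage response coefficients, then back to the time domain
V = I * Z
v_t = np.fft.irfft(V * N, n=N)

# during the pulse the resistive step alone contributes I*R = 50 mV;
# the capacitive term adds ripple on top of it
print(v_t.max() > R)
```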
Switching Equalization Algorithm Based on a New SNR Estimation Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
It is well-known that turbo equalization with the max-log-map (MLM) rather than the log-map (LM) algorithm is insensitive to signal to noise ratio (SNR) mismatch. As our first contribution, an improved MLM algorithm called scaled max-log-map (SMLM) algorithm is presented. Simulation results show that the SMLM scheme can dramatically outperform the MLM without sacrificing the robustness against SNR mismatch. Unfortunately, its performance is still inferior to that of the LM algorithm with exact SNR knowledge over the class of high-loss channels. As our second contribution, a switching turbo equalization scheme, which switches between the SMLM and LM schemes, is proposed to practically close the performance gap. It is based on a novel way to estimate the SNR from the reliability values of the extrinsic information of the SMLM algorithm.
Walker, Neff; Hill, Kenneth; Zhao, Fengmin
2012-01-01
In most low- and middle-income countries, child mortality is estimated from data provided by mothers concerning the survival of their children using methods that assume no correlation between the mortality risks of the mothers and those of their children. This assumption is not valid for populations with generalized HIV epidemics, however, and in this review, we show how the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) uses a cohort component projection model to correct for AIDS-related biases in the data used to estimate trends in under-five mortality. In this model, births in a given year are identified as occurring to HIV-positive or HIV-negative mothers, the lives of the infants and mothers are projected forward using survivorship probabilities to estimate survivors at the time of a given survey, and the extent to which excess mortality of children goes unreported because of the deaths of HIV-infected mothers prior to the survey is calculated. Estimates from the survey for past periods can then be adjusted for the estimated bias. The extent of the AIDS-related bias depends crucially on the dynamics of the HIV epidemic, on the length of time before the survey that the estimates are made for, and on the underlying non-AIDS child mortality. This simple methodology (which does not take into account the use of effective antiretroviral interventions) gives results qualitatively similar to those of other studies.
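The bias mechanism described above can be illustrated with a toy one-cohort projection; every rate below is invented for illustration, and the real UN IGME model projects full survivorship schedules:

```python
# Project one birth cohort forward, splitting births by the mother's HIV
# status, and compare the true child mortality with what a survey of
# surviving mothers would report.
births = 1000
hiv_prev = 0.20            # fraction of births to HIV+ mothers (assumed)
q5_neg = 0.10              # under-five death prob., children of HIV- mothers
q5_pos = 0.30              # higher risk for children of HIV+ mothers
mother_died = 0.40         # HIV+ mothers dead before the survey (assumed)

b_pos = births * hiv_prev
b_neg = births - b_pos

true_deaths = b_neg * q5_neg + b_pos * q5_pos
true_q5 = true_deaths / births

# a survey only hears about children whose mothers survived to report them
reported_births = b_neg + b_pos * (1 - mother_died)
reported_deaths = b_neg * q5_neg + b_pos * (1 - mother_died) * q5_pos
survey_q5 = reported_deaths / reported_births

print(survey_q5 < true_q5)   # the survey underestimates mortality
```

The adjustment in the review works in the opposite direction: given the projected bias, reported estimates for past periods are scaled back up.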
New Estimates for the Rate of Convergence of the Method of Subspace Corrections
Institute of Scientific and Technical Information of China (English)
Durkbin Cho; Jinchao Xu; Ludmil Zikatanov
2008-01-01
We discuss estimates for the rate of convergence of the method of successive subspace corrections in terms of condition number estimate for the method of parallel subspace corrections. We provide upper bounds and in a special case, a lower bound for preconditioners defined via the method of successive subspace corrections.
Methods to estimate railway capacity and passenger delays
DEFF Research Database (Denmark)
Landex, Alex
of defining railway capacity, which depends on the infrastructure, the rolling stock and the actual timetable. In 2004, the International Union of Railways (UIC) published a leaflet giving a method to measure the capacity consumption of line sections based on the actual infrastructure and timetable (and … to the analytical way of determining the capacity consumption, capacity consumption can be measured by compressing the timetable graphs as much as possible for the line section and then using the compression ratio as a measurement of the capacity consumption. CHAPTER 3 shows how the UIC 406 method can be expounded … method describes how the capacity is utilized based on four topics (Number of trains, Average speed, Heterogeneity, and Stability), the so-called "balance of capacity". The four topics are normally correlated, but analytical measurements dealing with each topic individually are developed in CHAPTER 4 …
Kernel methods and minimum contrast estimators for empirical deconvolution
Delaigle, Aurore
2010-01-01
We survey classical kernel methods for providing nonparametric solutions to problems involving measurement error. In particular we outline kernel-based methodology in this setting, and discuss its basic properties. Then we point to close connections that exist between kernel methods and much newer approaches based on minimum contrast techniques. The connections are through use of the sinc kernel for kernel-based inference. This "infinite order" kernel is not often used explicitly for kernel-based deconvolution, although it has received attention in more conventional problems where measurement error is not an issue. We show that in a comparison between kernel methods for density deconvolution, and their counterparts based on minimum contrast, the two approaches give identical results on a grid which becomes increasingly fine as the bandwidth decreases. In consequence, the main numerical differences between these two techniques are arguably the result of different approaches to choosing smoothing parameters.
A general method of estimating stellar astrophysical parameters from photometry
Belikov, A N
2008-01-01
Applying photometric catalogs to the study of the population of the Galaxy is hampered by the impossibility of mapping photometric colors directly into astrophysical parameters. Most all-sky catalogs, like ASCC or 2MASS, are based upon broad-band photometric systems, and the use of broad photometric bands complicates the determination of the astrophysical parameters of individual stars. This paper presents an algorithm for determining stellar astrophysical parameters (effective temperature, gravity and metallicity) from broad-band photometry even in the presence of interstellar reddening; the method suits combinations of narrow bands as well. We applied the method of interval-cluster analysis to finding stellar astrophysical parameters based on the newest Kurucz models calibrated using a compiled catalog of stellar parameters. Our new method of determining astrophysical parameters allows all possible solutions to be located in the effective temperature-gravity-metallicity space for the star and se...
A hybrid training method for neural energy estimation in calorimetry
Da Silva, P V M; Seixas, J
2001-01-01
A neural mapping is developed to improve the overall performance of Tilecal, the hadronic calorimeter of the ATLAS detector. Feeding the input nodes of a multilayer feedforward neural network with the energy values sampled by the calorimeter cells in beam tests, it is shown that the original energy scale of pion beams is reconstructed over a wide energy range and linearity is significantly improved. As with classical methods, a compromise between nonlinearity correction and the optimization of the energy resolution of the detector has to be reached. A hybrid training method for the neural mapping is proposed to achieve this design goal. Using the backpropagation algorithm, the method intercalates an epoch of training steps in which the neural mapping mainly focuses on linearity correction with another block of training steps in which the original energy resolution obtained by linearly combining the calorimeter cells becomes the main target. (6 refs).
Estimation of mechanical properties of nanomaterials using artificial intelligence methods
Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.
2014-09-01
Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we have introduced the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that MGGP method can be deployed as a powerful method for predicting the compressive strength of the carbon nanotubes.
A New Method For Cosmological Parameter Estimation From SNIa Data
March, Marisa; Trotta, R.; Berkes, P.; Starkman, G. D.; Vaudrevange, P. M.
2011-01-01
We present a new methodology to extract constraints on cosmological parameters from SNIa data obtained with the SALT2 lightcurve fitter. The power of our Bayesian method lies in its full exploitation of relevant prior information, which is ignored by the usual chi-square approach. Using realistic simulated data sets we demonstrate that our method outperforms the usual chi-square approach 2/3 of the time while achieving better long-term coverage properties. A further benefit of our methodology is its ability to produce a posterior probability distribution for the intrinsic dispersion of SNe. This feature can also be used to detect hidden systematics in the data.
Directory of Open Access Journals (Sweden)
Xinyi Yang
2014-01-01
Full Text Available Accurate gas turbine engine health status estimation is very important for engine applications and aircraft flight safety. Because there are many parameters to be estimated, engine health status estimation is a very difficult optimization problem. Traditional gas path analysis (GPA) methods are based on a linearized thermodynamic engine performance model, and the estimation accuracy is not satisfactory when the nonlinearity of the engine model is significant. To solve this problem, a novel gas turbine engine health status estimation method has been developed. The method estimates degraded engine component parameters using the quantum-behaved particle swarm optimization (QPSO) algorithm, and the engine health indices are calculated from these estimated component parameters. The new method was applied to turbofan engine health status estimation and compared with three other representative methods. Results show that although the developed method is slower in computation than GPA methods, it estimates engine health status with the highest accuracy in all test cases and is a very suitable tool for off-line engine health status estimation.
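The parameter-estimation loop described above can be illustrated with a plain particle swarm optimizer standing in for QPSO; the two-parameter "engine model" and its bounds below are hypothetical stand-ins, not the paper's thermodynamic model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_estimate(residual, lo, hi, n_particles=30, n_iter=200):
    """Plain particle swarm optimizer (a stand-in for QPSO) searching
    for the parameter vector that minimizes a residual function."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([residual(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + attraction to personal best and global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([residual(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Toy 'engine model': measured outputs depend on two health parameters
# (hypothetical flow and efficiency scalars), to be recovered from data.
true_theta = np.array([0.95, 1.02])
def model(theta):
    return np.array([theta[0] * 2.0, theta[1] ** 2, theta[0] * theta[1]])
meas = model(true_theta)
est = pso_estimate(lambda th: np.sum((model(th) - meas) ** 2),
                   lo=[0.8, 0.8], hi=[1.2, 1.2])
```

The estimated parameters would then be mapped to health indices; the swarm size, coefficients and bounds here are illustrative choices.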
Evaluation of current methods to estimate pulp yield of hemp
Meijer, de E.P.M.; Werf, van der H.M.G.
1995-01-01
Large-scale evaluation of hemp stems from field trials requires a rapid method for the characterization of stem quality. The large differences between bark and woody core in anatomical and chemical properties make a quantification of these two fractions of primary importance for quality assessment.
Estimation methods for bioaccumulation in risk assessment of organic chemicals
Jager DT; Hamers T; ECO
1997-01-01
Methods for estimating the bioaccumulation of organic chemicals are evaluated. This study is limited to three types of organisms: fish, worms and plants (leaf crops, root crops and grass). We propose a simple mechanistic model that performs well against measured values. O
Effective Laboratory Method of Chromite Content Estimation in Reclaimed Sands
Directory of Open Access Journals (Sweden)
Ignaszak Z.
2016-09-01
The paper presents an original method of measuring the actual chromite content in a foundry's circulating moulding sand, the material used for mould production. The case concerns a foundry that mostly produces heavy castings and uses both quartz sand and chromite sand to build chemically hardened moulds. After dry reclamation of the used moulding sand, the two types of sand are mixed in varying ratios, so layers of varying chromite content are observed in the reclaimed-sand silos. Chromite is recovered from the circulating moulding sand by installations equipped with separate elements that generate a locally strong magnetic field. Knowing the current ratio of chromite to quartz sand allows the installation settings to be optimized and the separation efficiency to be controlled. The laborious and time-consuming method of determining chromite content with bromoform requires special operating procedures and precautions because the liquid is toxic. A new, uncomplicated gravimetric laboratory method using powerful permanent (neodymium) magnets was developed and tested. The method is used in the production conditions of the foundry for routine inspection of the chromite content of used sand in the reclamation plant.
An evaluation of estimation methods for determining addition in presbyopes
Directory of Open Access Journals (Sweden)
Leonardo Catunda Bittencourt
2013-08-01
PURPOSE: The optical correction of presbyopia must be handled individually. Our aim was to compare tentative near additions obtained with different methods against the final addition prescribed to presbyopic patients. METHODS: Eighty healthy subjects with a mean age of 49.7 years (range 40 to 60 years) were studied. Tentative near additions were determined using four different techniques: one-half the amplitude of accommodation with minus lenses (AAL); one-third of the accommodative demand with positive lenses (ADL); balanced range of accommodation with minus and positive lenses (BRA); and the crossed-cylinder test with initial myopisation (CCT). The power of the addition was then refined to arrive at the final addition. RESULTS: The mean tentative near additions were lower than the final addition for the ADL and BRA methods. The mean differences between tentative and final additions were low for all the tests examined (less than 0.25 D). The intervals between the 95% limits of agreement differed substantially and were always wider than ±0.50 D. CONCLUSION: All the methods displayed similar behavior and provided a tentative addition close to the final addition. The coefficients of agreement (COA) detected suggest that every tentative addition should be adjusted according to the particular needs of the patient.
Antioxidant Capacity of Cultured Mammalian Cells Estimated by ESR Method
Directory of Open Access Journals (Sweden)
Tamar Kartvelishvili
2004-01-01
In the present study, the antioxidant capacity against hydrogen peroxide (H2O2), one of the stress-inducing agents, was investigated in two distinct cell lines: L-41 (human epithelial-like cells) and HLF (human diploid lung fibroblasts), which differ in tissue origin, life span in culture, proliferative activity, and special enzyme system activity. The cell antioxidant capacity against H2O2 was estimated by the electron spin resonance (ESR) spin-trapping technique in the Fenton reaction system, in which Fe2+ ions react with H2O2 to generate hydroxyl radicals. The effects of catalase inhibitors, such as sodium azide and 3-amino-1,2,4-triazole, on the antioxidant capacity of cells were tested. Based on our observations, it can be concluded that the defensive capacity of cells against H2O2 depends on the ratio between catalase/GPx/SOD and H2O2, especially in high-stress situations, and that the intracellular balance of these enzymes is more important than the influence of any single component.
Estimation of missing rainfall data using spatial interpolation and imputation methods
Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Azman, Muhammad Az-zuhri
2015-02-01
This study aims to estimate missing rainfall data, dividing the analysis into three missing-data percentages, namely 5%, 10% and 20%, to represent various cases of missing data. In practice, spatial interpolation methods are the first choice for estimating missing data. These methods include the normal ratio (NR), arithmetic average (AA), coefficient of correlation (CC) and inverse distance (ID) weighting methods. The methods consider the distance between the target and the neighbouring stations as well as the correlations between them. An alternative approach to the missing-data problem is imputation, the process of replacing missing data with substituted values. A once-common approach is single imputation, which allows parameter estimation. However, single imputation ignores variability, which leads to the underestimation of standard errors and confidence intervals. To overcome this underestimation problem, multiple imputation is used, where each missing value is estimated with a distribution of imputations that reflects the uncertainty about the missing data. In this study, a comparison of spatial interpolation methods and multiple imputation for estimating missing rainfall data is presented. The performance of the estimation methods is assessed using the similarity index (S-index), mean absolute error (MAE) and coefficient of correlation (R).
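As a sketch of the spatial interpolation step, the inverse distance (ID) weighting estimate can be written directly from its definition; the station coordinates and rainfall values below are hypothetical:

```python
import numpy as np

def idw_estimate(target_xy, stations_xy, station_values, power=2.0):
    """Inverse-distance-weighted estimate of a missing rainfall value at
    `target_xy` from neighbouring station values: closer stations get
    larger weights, w_i = 1 / d_i**power."""
    d = np.linalg.norm(np.asarray(stations_xy, float)
                       - np.asarray(target_xy, float), axis=1)
    w = 1.0 / d ** power
    return float(np.sum(w * np.asarray(station_values, float)) / np.sum(w))

# Three neighbouring stations (hypothetical coordinates in km, rainfall in mm)
est = idw_estimate((0.0, 0.0),
                   [(1.0, 0.0), (0.0, 2.0), (3.0, 4.0)],
                   [10.0, 14.0, 30.0])
```

The NR, AA and CC methods differ only in how the weights are formed (climatological ratios, equal weights, or correlations instead of inverse distances).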
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity. We show that alternative methods lead to valid estimates.
ESTIMATION ACCURACY OF NONLINEAR COEFFICIENTS OF SQUEEZE-FILM DAMPER USING STATE VARIABLE FILTER METHOD
Institute of Scientific and Technical Information of China (English)
1998-01-01
The estimation model for a nonlinear system of squeeze-film damper (SFD) is described. The method of state variable filter (SVF) is used to estimate the coefficients of the SFD. The factors which are critical to the estimation accuracy are discussed.
Study on Top-Down Estimation Method of Software Project Planning
Institute of Scientific and Technical Information of China (English)
ZHANG Jun-guang; LÜ Ting-jie; ZHAO Yu-mei
2006-01-01
This paper studies a new software project planning method using actual project data in order to make software project plans more effective. From the perspective of system theory, the new method regards a software project plan as an associative unit for study. During top-down estimation of a software project, the Program Evaluation and Review Technique (PERT) and the analogy method are combined to estimate its size; effort estimates and specific schedules are then obtained according to the distribution of effort across phases. This allows a set of practical and feasible planning methods to be constructed. Actual data indicate that this set of methods can lead to effective software project planning.
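The PERT part of the combined estimate follows the classic three-point formula; the figures below are hypothetical:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic three-point PERT estimate: expected value and standard
    deviation of a task (or size) estimate,
    E = (O + 4M + P) / 6,  sigma = (P - O) / 6."""
    expected = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
    stdev = (pessimistic - optimistic) / 6.0
    return expected, stdev

# Hypothetical person-day figures for one project phase
e, s = pert_estimate(10, 14, 24)
```

In the paper's scheme such phase estimates are combined with analogy-based size estimates before the schedule is derived.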
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
Linnet, K
1990-12-01
The linear relationship between the measurements of two methods is estimated on the basis of a weighted errors-in-variables regression model that takes into account a proportional relationship between the standard deviations of the error distributions and the true variable levels. Weights are estimated by an iterative procedure. As shown by simulations, the regression procedure yields practically unbiased slope estimates in realistic situations. Standard errors of the slope and location difference estimates are derived by the jackknife principle. For illustration, the linear relationship is estimated between the measurements of two albumin methods with proportional errors.
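A minimal sketch of the errors-in-variables idea is unweighted Deming regression, which corresponds to the case of constant rather than proportional error standard deviations (the weighting and jackknife steps of the paper's procedure are omitted):

```python
import numpy as np

def deming_slope(x, y, lam=1.0):
    """Unweighted Deming (errors-in-variables) regression: both x and y
    carry measurement error. `lam` is the ratio of y- to x-error variances.
    A simplified stand-in for the weighted proportional-error procedure."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    d = syy - lam * sxx
    slope = (d + np.sqrt(d * d + 4.0 * lam * sxy * sxy)) / (2.0 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical paired measurements of the same samples by two methods
slope, intercept = deming_slope([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
```

Unlike ordinary least squares, this estimator stays unbiased when the x-method is itself noisy, which is the method-comparison setting of the abstract.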
OPTIMAL ERROR ESTIMATES OF THE PARTITION OF UNITY METHOD WITH LOCAL POLYNOMIAL APPROXIMATION SPACES
Institute of Scientific and Technical Information of China (English)
Yun-qing Huang; Wei Li; Fang Su
2006-01-01
In this paper, we provide a theoretical analysis of the partition of unity finite element method (PUFEM), which belongs to the family of meshfree methods. The usual error analysis only shows the order of the error estimate to be the same as that of the local approximations [12]. Using standard linear finite element basis functions as the partition of unity and polynomials as the local approximation space, in the 1-d case we derive optimal order error estimates for PUFEM interpolants. Our analysis shows that the error estimate is of one order higher than the local approximations. The interpolation error estimates yield optimal error estimates for PUFEM solutions of elliptic boundary value problems.
Joint Parametric Fault Diagnosis and State Estimation Using KF-ML Method
DEFF Research Database (Denmark)
Sun, Zhen; Yang, Zhenyu
2014-01-01
) technique to identify the fault parameter and employs the result to make fault decision based on the predefined threshold. Then this estimated fault parameter value is substituted into parameterized state estimation of KF to obtain the state estimation. Finally, a robot case study with two different fault...... scenarios shows this method can lead to a good performance in terms of fast and accurate fault detection and state estimation....
2014-02-01
Employs empiricism • Employs experience/competency • Provides probabilistic estimating results. The first three items of this list deal with the...this list address particular inputs to the margin estimating technique. The first is that methods to estimate margin should employ empiricism. This can...experience becomes more critical to the success of the estimation if the methodology employed utilizes a smaller amount of empiricism. [1] The
DRUG ADDICTION SOCIAL COST IN RUSSIA REGIONS: METHODICAL APPROACH AND ESTIMATION RESULTS
Directory of Open Access Journals (Sweden)
A.V. Kalina
2007-06-01
A methodical approach to estimating the social cost of drug addiction in Russian regions is suggested in the article. It is presented as a cost estimation of the socioeconomic consequences of the spread of drug addiction in a territory. The main approaches to estimating the latent characteristics of the drug addiction situation are shown. The results of the social cost estimation for drug addiction and its separate components are given for federal regions and subjects of the Russian Federation for the period 2001−2005.
Method for estimating the lattice thermal conductivity of metallic alloys
Energy Technology Data Exchange (ETDEWEB)
Yarbrough, D.W.; Williams, R.K.
1978-08-01
A method is described for calculating the lattice thermal conductivity of alloys as a function of temperature and composition for temperatures above θD/2, using readily available information about the atomic species present in the alloy. The calculation takes into account phonon interactions with point defects, electrons and other phonons. Comparisons between experimental thermal conductivities (resistivities) and calculated values are discussed for binary alloys of semiconductors, alkali halides and metals. A discussion of the theoretical background is followed by sufficient numerical work to facilitate the calculation of lattice thermal conductivity of an alloy for which no conductivity data exist.
A new source number estimation method based on the beam eigenvalue
Institute of Scientific and Technical Information of China (English)
JIANG Lei; CAI Ping; YANG Juan; WANG Yi-ling; XU Dan
2007-01-01
Most source number estimation methods are based on the eigenvalues obtained by decomposing the covariance matrix, as in the MUSIC algorithm. To develop a source number estimation method that works at lower signal-to-noise ratios and is suitable for both correlated and uncorrelated impinging signals, a new method called the beam eigenvalue method (BEM) is proposed in this paper. By analyzing the spatial power spectrum and the correlation of the line array, the covariance matrix is constructed in a new way, which is determined by the line array shape when the signal frequency is given. Both the theoretical analysis and the simulation results show that the BEM method can estimate the source number for correlated signals and is more effective at lower signal-to-noise ratios than the usual source number estimation methods.
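A simplified eigenvalue-threshold detector (not the BEM construction itself) illustrates how the source number falls out of the covariance eigenvalues; the array geometry, source angles and noise level below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_snap = 8, 2000

# Two uncorrelated narrowband sources on a uniform line array
# (half-wavelength spacing, hypothetical angles in radians).
def steering(theta):
    return np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))

A = np.column_stack([steering(0.3), steering(-0.5)])
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
X = A @ S + noise

# Sample covariance matrix; source eigenvalues sit well above the noise
# floor, so counting eigenvalues above a threshold estimates the source
# number (MUSIC-style reasoning, not the paper's beam-domain construction).
R = X @ X.conj().T / n_snap
eig = np.sort(np.linalg.eigvalsh(R))[::-1]
noise_floor = np.median(eig)              # robust noise-level proxy
n_sources = int(np.sum(eig > 10.0 * noise_floor))
```

At low SNR or with correlated sources the gap between signal and noise eigenvalues shrinks, which is the failure mode the BEM construction is designed to mitigate.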
New method to estimate stability of chelate complexes
Grigoriev, F V; Romanov, A N; Kondakova, O A; Sulimov, V B
2009-01-01
A new method allowing calculation of the stability of chelate complexes with the Mg2+ ion in water has been developed. The method is based on a two-stage scheme for the complex formation. The first stage is the transfer of the ligand from an arbitrary point of the solution to the second solvation shell of the Mg2+ ion; at this stage the ligand is considered as a charged or neutral rigid body. The second stage takes into account the disruption of coordinate bonds between Mg2+ and water molecules of the first solvation shell and the formation of bonds between the ligand and the Mg2+ ion. This effect is treated using quantum chemical modeling. It has been revealed that the main contribution to the free energy of complex formation comes from the disruption/formation of the coordinate bonds between Mg2+, water molecules and the ligand. Another important contribution to the complex formation energy is the change of electrostatic interactions in the water solvent upon ligand binding with the Mg2+ ion. For all complexes under...
Institute of Scientific and Technical Information of China (English)
Shuo Zhang; Ming Wang
2008-01-01
In this paper, we consider the nonconforming finite element approximations of fourth order elliptic perturbation problems in two dimensions. We present an a posteriori error estimator under certain conditions, and give an h-version adaptive algorithm based on the error estimation. The local behavior of the estimator is analyzed as well. This estimator works for several nonconforming methods, such as the modified Morley method and the modified Zienkiewicz method, and under some assumptions it is an optimal one. Numerical examples are reported, with a linear stationary Cahn-Hilliard-type equation as a model problem.
Wu, Zhihong; Lu, Ke; Zhu, Yuan
2015-01-01
The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy highly depends on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.
Method and system for non-linear motion estimation
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, determining a position of the fourth pixel in a fourth image based upon the third motion vector.
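The non-linear model can be sketched as a quadratic fit through three tracked pixel positions; this illustrates the idea in the abstract, not the patented procedure:

```python
import numpy as np

def extrapolate_position(p1, p2, p3):
    """Fit a quadratic (non-linear) motion model p(t) = a*t^2 + b*t + c
    through three tracked pixel positions at frames t = 0, 1, 2 and
    extrapolate the position at t = 3. A sketch of the abstract's idea."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    c = p1                           # p(0) = c
    a = (p3 - 2.0 * p2 + p1) / 2.0   # second difference / 2
    b = p2 - p1 - a                  # from p(1) = a + b + c
    t = 3.0
    return a * t * t + b * t + c

# Hypothetical (x, y) pixel positions in three consecutive frames
p4 = extrapolate_position((0, 0), (1, 2), (4, 3))
```

A linear model would simply continue the last displacement; the quadratic term lets the predicted motion vector bend with accelerating objects.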
Statistical Methods for Estimating the Cumulative Risk of Screening Mammography Outcomes
Hubbard, R.A.; Ripping, T.M.; Chubak, J.; Broeders, M.J.; Miglioretti, D.L.
2016-01-01
BACKGROUND: This study illustrates alternative statistical methods for estimating cumulative risk of screening mammography outcomes in longitudinal studies. METHODS: Data from the US Breast Cancer Surveillance Consortium (BCSC) and the Nijmegen Breast Cancer Screening Program in the Netherlands were
A Refined Method for Estimating the Annual Extreme Wave Heights at A Project Site
Institute of Scientific and Technical Information of China (English)
徐德伦; 范海梅; 张军
2003-01-01
This paper presents a refined method for estimating the annual extreme wave heights at a coastal or offshore project site on the basis of data acquired at some nearby routine hydrographic stations. The method is based on the orthogonality principle in linear mean square estimation of stochastic processes. The error of the method is analyzed and compared with that of the conventional method. It is found that the method can effectively reduce the error so long as some feasible measures are adopted. A simulated test of the method has been conducted in a large-scale wind-wave flume, and the test results are in good agreement with those given by theoretical error analysis. A scheme to implement the method is proposed on the basis of the error analysis; the scheme is designed to reduce the estimation error as far as possible. This method is also suitable for utilizing satellite wave data in the estimation.
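The orthogonality principle behind the method can be sketched as a least-squares calibration of linear weights: the optimal weights make the estimation error orthogonal to the observations, i.e. they solve the normal equations. The two-station record below is synthetic, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic record: wave heights at two routine stations and at the
# project site (hypothetical linear relation plus noise).
n = 500
stations = rng.lognormal(0.5, 0.3, (n, 2))
site = 0.6 * stations[:, 0] + 0.3 * stations[:, 1] \
       + 0.05 * rng.standard_normal(n)

# Orthogonality principle: E[(site - w.x) x] = 0  =>  normal equations.
X = np.column_stack([stations, np.ones(n)])
w = np.linalg.solve(X.T @ X, X.T @ site)

def predict(h1, h2):
    """Linear mean square estimate of the site wave height from the
    two station readings, using the calibrated weights."""
    return w[0] * h1 + w[1] * h2 + w[2]
```

Once calibrated, the same weights are applied to the stations' long-term records to build the annual extreme series at the project site.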
Joint DOA and Fundamental Frequency Estimation Methods based on 2-D Filtering
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2010-01-01
It is well-known that filtering methods can be used for processing of signals in both time and space. This comprises, for example, fundamental frequency estimation and direction-of-arrival (DOA) estimation. In this paper, we propose two novel 2-D filtering methods for joint estimation...... of the fundamental frequency and the DOA of spatio-temporarily sampled periodic signals. The first and simplest method is based on the 2-D periodogram, whereas the second method is a generalization of the 2-D Capon method. In the experimental part, both qualitative and quantitative measurements show that the proposed...... methods are well-suited for solving the joint estimation problem. Furthermore, it is shown that the methods are able to resolve signals separated sufficiently in only one dimension. In the case of closely spaced sources, however, the 2-D Capon-based method shows the best performance....
Real Order and Logarithmic Moment Estimation Method of P-norm Distribution
Directory of Open Access Journals (Sweden)
PAN Xiong
2016-03-01
The estimation methods for the P-norm distribution are improved in this paper from the perspective of parameter estimation precision and algorithm complexity. Real order and logarithmic moment estimation are introduced, and a real order moment estimation method for the P-norm distribution is established based on the actual error distribution. First, the relation between the shape parameter p and the real order value r is derived using real order moment estimation, and corresponding suggestions are provided for the selection of the shape parameter. Then, nonlinear estimation formulas for the shape parameter, expectation and mean square error are derived via logarithmic moment estimation; the function truncation error in the calculation of the parameter estimates is eliminated, and the solving method for the corresponding parameters and the calculation process are given, improving the theory. Finally, some examples are presented to analyze the stability and precision of three methods: real order moment, logarithmic moment and maximum likelihood estimation. The results show that the stability, precision and convergence speed of the method in this paper are better than those of maximum likelihood estimation, generalizing the existing error theory.
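A moment-ratio sketch of the shape-parameter estimation (one member of the moment-based family discussed above, not the paper's exact formulas): for a zero-mean generalized Gaussian (P-norm) distribution, E[x²]/(E|x|)² = Γ(1/p)Γ(3/p)/Γ(2/p)², which can be inverted numerically:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma
from scipy.stats import gennorm

def ggd_shape_moment(x):
    """Moment-ratio estimate of the shape parameter p of a zero-mean
    generalized Gaussian (P-norm) distribution. Solves
    G(1/p)G(3/p)/G(2/p)^2 = E[x^2]/(E|x|)^2 for p by bisection."""
    x = np.asarray(x, float)
    ratio = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
    m = lambda p: gamma(1.0 / p) * gamma(3.0 / p) / gamma(2.0 / p) ** 2 - ratio
    return brentq(m, 0.3, 20.0)

# Gaussian data correspond to shape parameter p = 2
samples = gennorm.rvs(beta=2.0, size=20000, random_state=0)
p_hat = ggd_shape_moment(samples)
```

The moment ratio is a smooth, monotone function of p, which is why moment-type estimators converge faster and more stably than maximum likelihood for this family, as the abstract reports.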
DEFF Research Database (Denmark)
Wang, Z.; Lu, K.; Ye, Y.
2011-01-01
According to the saliency of the permanent magnet synchronous motor (PMSM), information on the rotor position is implied in the behavior of the stator inductances due to the magnetic saturation effect. Research has focused on the initial rotor position estimation of the PMSM by injecting modulated pulse voltage vec..... The experimental results show that the proposed method estimates the initial rotor position reliably and efficiently. The method is also simple and can achieve satisfactory estimation accuracy....
Global mean estimation using a self-organizing dual-zoning method for preferential sampling.
Pan, Yuchun; Ren, Xuhong; Gao, Bingbo; Liu, Yu; Gao, YunBing; Hao, Xingyao; Chen, Ziyue
2015-03-01
Giving an appropriate weight to each sampling point is essential to global mean estimation. The objective of this paper was to develop a global mean estimation method for preferential samples. The procedure for this estimation method is first to zone the study area based on the self-organizing dual-zoning method and then to estimate the mean according to the stratified sampling method. In this method, the spreading of points in both feature and geographical space is considered. The method is tested in a case study on Mn concentrations in Jilin Province of China. Six sample patterns are selected to estimate the global mean and are compared with the global mean calculated by the direct arithmetic mean method, the polygon method, and the cell method. The results show that the proposed method produces more accurate and stable mean estimates under different feature deviation index (FDI) values and sample sizes. The relative errors of the global mean calculated by the proposed method are from 0.14 to 1.47 %, while those of the direct arithmetic mean method are the largest (4.83-8.84 %). At the same time, the mean results calculated by the other three methods are sensitive to the FDI values and sample sizes.
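The stratified-sampling step can be sketched as an area-weighted combination of zone means (the self-organizing dual-zoning itself is omitted; the zones, areas and Mn values below are hypothetical):

```python
import numpy as np

def stratified_mean(values_by_zone, area_by_zone):
    """Area-weighted stratified estimate of the global mean: each zone's
    sample mean is weighted by the zone's share of the study area, which
    corrects for preferential (uneven) sampling density."""
    areas = np.asarray(area_by_zone, float)
    zone_means = np.array([np.mean(v) for v in values_by_zone])
    return float(np.sum(zone_means * areas) / np.sum(areas))

# Hypothetical Mn concentrations: zone 2 is oversampled relative to its area,
# so a direct arithmetic mean would be biased toward its high values.
zones = [[520.0, 530.0, 510.0], [800.0, 820.0]]
m = stratified_mean(zones, area_by_zone=[3.0, 1.0])
```

A direct arithmetic mean of all five values here would be 636, pulled upward by the oversampled zone, which is exactly the bias the stratified estimate removes.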
A method for segmentation and motion estimation of multiple independently moving objects
Willemink, G.H.; Heijden, van der F.
2006-01-01
The aim of this work is to design a robust method for online estimation of object motion and structure in a dynamic scene. A method for segmentation and estimation of object motion and structure from 3-d points in a dynamic scene, observed by a sequence of stereo images, is proposed. The proposed me
Junction temperature estimation method for a 600 V, 30A IGBT module during converter operation
DEFF Research Database (Denmark)
Choi, U. M.; Blaabjerg, F.; Iannuzzo, F.
2015-01-01
This paper proposes an accurate method to estimate the junction temperature using the on-state collector-emitter voltage at high current. By means of the proposed method, the estimation error which comes from the different temperatures of the interconnection materials in the module is compensated...
Kevin S. Laves; Susan C. Loeb
2005-01-01
It is commonly assumed that population estimates derived from trapping small mammals are accurate and unbiased, or that estimates derived from different capture methods are comparable. We captured southern flying squirrels (Glaucomys volans) using two methods to study their effect on red-cockaded woodpecker (Picoides borealis) reproductive success. Southern flying...
A novel method to estimate model uncertainty using machine learning techniques
Solomatine, D.P.; Lal Shrestha, D.
2009-01-01
A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
DEFF Research Database (Denmark)
Müller, Pavel; Hiller, Jochen; Dai, Y.
2014-01-01
This paper presents the application of the substitution method for the estimation of measurement uncertainties using calibrated workpieces in X-ray computed tomography (CT) metrology. We have shown that this well-accepted method for uncertainty estimation using tactile coordinate measuring...
Some error estimates for the lumped mass finite element method for a parabolic problem
Chatzipantelidis, P.
2012-01-01
We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.
A method to estimate weight and dimensions of large and small gas turbine engines
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate the weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis are discussed.
Multivariate drought frequency estimation using copula method in Southwest China
Hao, Cui; Zhang, Jiahua; Yao, Fengmei
2015-12-01
Drought over Southwest China occurs frequently and has an obvious seasonal characteristic. Proper management of regional droughts requires knowledge of the expected frequency or probability of specific climate information. This study utilized k-means classification and copulas to demonstrate the regional drought occurrence probability and return period based on trivariate drought properties, i.e., drought duration, severity, and peak. A drought event in this study was defined when 3-month Standardized Precipitation Evapotranspiration Index (SPEI) was less than -0.99 according to the regional climate characteristic. Then, the next step was to classify the region into six clusters by k-means method based on annual and seasonal precipitation and temperature and to establish marginal probabilistic distributions for each drought property in each sub-region. Several copula types were selected to test the best fit distribution, and Student t copula was recognized as the best one to integrate drought duration, severity, and peak. The results indicated that a proper classification was important for a regional drought frequency analysis, and copulas were useful tools in exploring the associations of the correlated drought variables and analyzing drought frequency. Student t copula was a robust and proper function for drought joint probability and return period analysis, which is important for analyzing and predicting the regional drought risks.
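The joint return period of the "AND" event can be sketched with an empirical joint exceedance probability, T = μ / P(D ≥ d, S ≥ s), where μ is the mean interarrival time between droughts; in the paper a fitted Student t copula would replace the empirical probability, and the drought record below is hypothetical:

```python
import numpy as np

# Hypothetical drought events over a 60-year record:
# (duration in months, severity from the 3-month SPEI)
events = np.array([(3, 2.1), (5, 4.0), (2, 1.2), (8, 6.5), (4, 3.3),
                   (6, 5.1), (3, 2.8), (7, 5.9), (2, 1.5), (5, 4.4)])
record_years = 60.0
mu = record_years / len(events)   # mean interarrival time (years/event)

def and_return_period(d, s):
    """'AND' joint return period: average years between droughts whose
    duration AND severity both meet or exceed the given thresholds.
    Here the joint exceedance probability is empirical; a fitted copula
    would replace it for extrapolation beyond the observed record."""
    p = np.mean((events[:, 0] >= d) & (events[:, 1] >= s))
    return mu / p

T = and_return_period(5, 4.0)
```

The copula's role is to extend this joint exceedance probability smoothly to thresholds rarer than anything in the record, where the empirical estimate breaks down.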
Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods
Morimoto, Emi; Namerikawa, Susumu
The most notable recent trend in bidding and pricing behavior is the increasing number of bids placed just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bid price and the execution price; in Japanese public works bidding, it is therefore effectively the difference between the low-price investigation threshold and the execution price. In practice, bidders' strategies and behavior have been governed by public engineers' budget estimates, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the general methods for public works, so two standard estimation methods now coexist in Japan. In this study, we statistically analyzed bid information on civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis shows that bidding and pricing behavior is related to the estimation method used for public works bids in Japan: the two standard estimation methods produce different numbers of bidders (bid/no-bid strategy) and different distributions of bid prices (markup strategy). Comparison of the distributions of bid prices showed that, for large-sized public works, the percentage of bids concentrated at the low-price investigation threshold tends to be higher under the unit-price-type estimation method than under the accumulated estimation method. In addition, the number of bidders for works estimated by unit price tends to increase significantly, suggesting that unit-price estimation has been one of the factors in construction companies' decisions to participate in biddings.
A Full Performance Analysis of Channel Estimation Methods for Time Varying OFDM Systems
Aida, Zaier; 10.5121/ijmnct.2011.1201
2012-01-01
In this paper, we have evaluated various methods for estimating time- and frequency-selective fading channels in OFDM systems, and improved some of them under time-varying conditions. These techniques are studied through different algorithms and for different modulation schemes (16-QAM, BPSK, QPSK, ...). Channel estimation gathers different schemes and algorithms: some are dedicated to slowly time-varying channels (block-type pilot arrangement, Bayesian Cramer-Rao bound, Kalman estimator, subspace estimator, ...), whereas others address highly time-varying channels (comb-type pilot insertion, ...). Other methods are suitable only for stationary channels, such as blind or semi-blind estimators. To this end, various algorithms are used within these schemes, such as least squares (LS), least mean squares (LMS), minimum mean-square error (MMSE), linear minimum mean-square error (LMMSE), and maximum likelihood (ML), to refine the estimators presented.
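Of the algorithms listed, the least-squares estimator over comb-type pilots is the simplest: divide the received pilot symbols by the known transmitted pilots, then interpolate across the data subcarriers. A hedged numpy sketch (the pilot spacing and two-tap channel are illustrative, not taken from the paper):

```python
import numpy as np

def ls_channel_estimate(Y, X_pilot, pilot_idx, n_sc):
    """Comb-type LS channel estimation: H_ls = Y_p / X_p at the pilot
    subcarriers, then linear interpolation (separately on the real and
    imaginary parts) across all n_sc subcarriers."""
    H_p = Y[pilot_idx] / X_pilot                      # LS estimate at pilots
    k = np.arange(n_sc)
    return (np.interp(k, pilot_idx, H_p.real)
            + 1j * np.interp(k, pilot_idx, H_p.imag))

# toy check: smooth two-tap channel, noiseless received pilots
n_sc = 64
h = np.array([1.0, 0.4j])                             # illustrative taps
H_true = np.fft.fft(h, n_sc)
pilot_idx = np.arange(0, n_sc, 4)
X_pilot = np.ones(len(pilot_idx), dtype=complex)      # BPSK pilots
Y = np.zeros(n_sc, dtype=complex)
Y[pilot_idx] = H_true[pilot_idx] * X_pilot
H_est = ls_channel_estimate(Y, X_pilot, pilot_idx, n_sc)
print(np.max(np.abs(H_est[pilot_idx] - H_true[pilot_idx])))  # exact at pilots
```

In the noiseless case the estimate is exact at the pilot positions; the MMSE/LMMSE variants in the abstract improve on this by exploiting channel and noise statistics.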
An improved method for nonlinear parameter estimation: a case study of the Rössler model
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of the dynamical equations to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
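The component-by-component idea can be illustrated without an evolutionary algorithm: when every component's time series is known, a parameter that enters one component linearly can be recovered from that single equation alone. For the Rössler system's second component, dy/dt = x + a·y, a least-squares fit against finite-difference derivatives recovers a. A sketch under the standard a = b = 0.2, c = 5.7 parameters (plain regression stands in here for the paper's EA):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, u, a=0.2, b=0.2, c=5.7):
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

# simulate a "known" trajectory of all three components
dt = 0.01
sol = solve_ivp(rossler, (0, 100), [1.0, 1.0, 1.0],
                t_eval=np.arange(0, 100, dt), rtol=1e-9, atol=1e-9)
x, y, z = sol.y

# central-difference derivative of y, then least squares on dy/dt = x + a*y
dy = (y[2:] - y[:-2]) / (2 * dt)
resid = dy - x[1:-1]
a_hat = np.dot(y[1:-1], resid) / np.dot(y[1:-1], y[1:-1])
print(a_hat)   # close to the true value 0.2
```

Estimating each component's parameters separately like this is what keeps the search space small at every stage.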
Brassey, Charlotte A; Maidment, Susannah C R; Barrett, Paul M
2015-03-01
Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082-2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses.
Hidayati, D. S.; Suryonegoro, H.; Makes, B. N.
2017-08-01
Age estimation is important for individual identification. Root development of the third molars occurs at ages 15-25 years. This study was conducted to determine the accuracy of age estimation using the Thevissen method in Indonesia. The Thevissen method was applied to 100 panoramic radiographs of both male and female subjects. Reliability was tested with the Dahlberg formula and Cohen's Kappa test, and significance was tested with the paired t-test and the Wilcoxon test. The deviation of the estimated age was then calculated: ±3.050 years for male subjects and ±2.067 years for female subjects, i.e., smaller for female than for male subjects. Age estimation with the Thevissen method is preferred for ages 15-22 years.
Bearing Capacity Estimation of Bridge Piles Using the Impulse Transient Response Method
Directory of Open Access Journals (Sweden)
Meng Ma
2016-01-01
A bearing capacity estimation method for bridge piles was developed. In this method, the pulse echo test was used to select the intact piles, and the dynamic stiffness was obtained from the impulse transient response test. A total of 680 bridge piles were tested and their capacities estimated. Finally, core drilling analysis was used to check the reliability of the method. The results show that, for intact piles, an obvious positive correlation exists between the dynamic stiffness and the bearing capacity of the piles. The core drilling analysis proved that the estimation method is reliable.
Evaluation of estimation methods for meiofaunal biomass from a meiofaunal survey in Bohai Bay
Institute of Scientific and Technical Information of China (English)
张青田; 王新华; 胡桂坤
2010-01-01
Studies in the coastal area of Bohai Bay, China, from July 2006 to October 2007 suggest that the method of meiofaunal biomass estimation affected the meiofaunal analysis. Conventional estimation methods that use a single mean individual weight value for nematodes to calculate total biomass may bias the results. A modified estimation method, named the Subsection Count Method (SCM), was also used to calculate meiofaunal biomass. This entails only a slight increase in workload but generates results of g...
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable for the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Accuracy of Nearly Extinct Cohort Methods for Estimating Very Elderly Subnational Populations
Directory of Open Access Journals (Sweden)
Wilma Terblanche
2015-01-01
Increasing very elderly populations (ages 85+) have potentially major implications for the cost of income support, aged care, and healthcare. The availability of accurate estimates for this population age group, not only at a national level but also at a state or regional scale, is vital for policy development, budgeting, and planning for services. At the highest ages, census-based population estimates are well known to be problematic, and previous studies have demonstrated that more accurate estimates can be obtained indirectly from death data. This paper assesses indirect estimation methods for estimating state-level very elderly populations from death counts. A method for incorporating internal migration is also proposed. The results confirm that the accuracy of official estimates deteriorates rapidly with increasing age from 95, that the survivor ratio method can be successfully applied at the subnational level, and that internal migration is minor. It is shown that the simpler alternative of applying the survivor ratio method at a national level and apportioning the estimates between the states produces very accurate estimates for most states and years; this is the recommended method. While the methods are applied at a state level in Australia, the principles are generic and applicable to other subnational geographies.
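The extinct-cohort logic behind these death-based methods is simple: once a cohort is extinct, its population at any past date equals the sum of its subsequent deaths. A minimal sketch with a synthetic deaths array (ages by years; the numbers are illustrative):

```python
import numpy as np

def extinct_cohort_population(deaths, age0, year0):
    """Population at age index `age0` at the start of year index `year0`,
    reconstructed as the cohort's cumulative future deaths:
    P(x, t) = sum_k D(x+k, t+k). `deaths[i, j]` holds deaths at age
    (85 + i) during year j; the cohort must be extinct within the array."""
    n_ages, n_years = deaths.shape
    steps = min(n_ages - age0, n_years - year0)
    return sum(deaths[age0 + k, year0 + k] for k in range(steps))

# synthetic check: a cohort of 100 people aged 85 in year 0 that loses
# 40, 30, 20 and 10 members over four years (extinct thereafter)
D = np.zeros((6, 6))
for k, d in enumerate([40, 30, 20, 10]):
    D[k, k] = d
print(extinct_cohort_population(D, 0, 0))   # -> 100.0
```

The survivor ratio method assessed in the paper extends this idea to cohorts that are not yet extinct by scaling from older, extinct cohorts.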
Method for Estimating Low-Frequency Return Current of DC Electric Railcar
Hatsukade, Satoru
Estimation of the harmonic current of railcars is necessary for achieving compatibility between train signaling systems and railcar equipment. However, although several theoretical analysis methods exist for estimating the harmonic current of railcars using switching functions, there is no theoretical analysis method for estimating a low-frequency current at frequencies below the power converter's carrier frequency. This paper describes a method for estimating the spectrum (frequency and amplitude) of the low-frequency return current of DC electric railcars. First, relationships between the return current and characteristics of the DC electric railcars, such as mass and acceleration, are determined. Then, mathematical (not numerical) calculation results for the low-frequency current are obtained from the time-current curve of a DC electric railcar by using Fourier series expansions. Finally, the measurement results clearly show the effectiveness of the estimation method developed in this study.
A NOVEL METHOD FOR ESTIMATING SOIL PRECOMPRESSION STRESS FROM UNIAXIAL CONFINED COMPRESSION TESTS
DEFF Research Database (Denmark)
Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo
2017-01-01
The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shaped stress-strain curve, which is not always fulfilled. A new simple numerical method was developed to estimate precompression stress from stress-strain curves, based solely on the sharp bend on the stress-strain curve partitioning the curve into an elastic and a plastic section. Our study had three objectives: (i) assessing the utility of the numerical method by comparison with the Gompertz method; (ii) comparing the estimated precompression stress to the maximum preload of test samples; (iii) determining the influence that soil type, bulk density and soil water potential have on the estimated precompression stress...
A sampling method for estimating the accuracy of predicted breeding values in genetic evaluation
Directory of Open Access Journals (Sweden)
Laloë Denis
2001-09-01
A sampling-based method for estimating the accuracy of estimated breeding values using an animal model is presented. Empirical variances of true and estimated breeding values were estimated from a simulated n-sample. The method was validated using a small data set from the Parthenaise breed, with the estimated coefficient of determination converging to the true values. It was applied to the French Salers data file used for the 2000 on-farm evaluation (IBOVAL) of muscle development score. A drawback of the method is its computational demand; consequently, convergence cannot be achieved in a reasonable time for very large data files. Two advantages of the method are that (a) it is applicable to any model (animal, sire, multivariate, maternal effects...) and (b) it supplies off-diagonal coefficients of the inverse of the mixed model equations and can therefore be the basis of connectedness studies.
Motion estimation using low-band-shift method for wavelet-based moving-picture coding.
Park, H W; Kim, H S
2000-01-01
The discrete wavelet transform (DWT) has the advantages of multiresolution analysis and subband decomposition, and has been successfully used in image processing. However, the shift-variant property is intrinsic to the decimation process of the wavelet transform, and it makes wavelet-domain motion estimation and compensation inefficient. To overcome the shift-variant property, a low-band-shift method is proposed and a motion estimation and compensation method in the wavelet domain is presented. The proposed method has performance superior to conventional motion estimation methods in terms of the mean absolute difference (MAD) as well as subjective quality. The proposed method can serve as a reference method for motion estimation in the wavelet domain, just as full-search block matching does in the spatial domain.
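The full-search block matching mentioned as the spatial-domain reference minimizes the mean absolute difference over all candidate displacements in a search window. A minimal spatial-domain sketch (block size and search range are illustrative):

```python
import numpy as np

def full_search_mad(ref, cur, top, left, bsize=8, srange=4):
    """Full-search block matching: find the displacement (dy, dx) of the
    block at (top, left) in `cur` that minimizes the mean absolute
    difference (MAD) against the reference frame `ref`."""
    block = cur[top:top + bsize, left:left + bsize].astype(float)
    best, best_mad = (0, 0), np.inf
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue                      # candidate outside the frame
            cand = ref[y:y + bsize, x:x + bsize]
            mad = np.mean(np.abs(block - cand))
            if mad < best_mad:
                best_mad, best = mad, (dy, dx)
    return best, best_mad

# toy check: the current frame is the reference shifted down 2, right 3
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
mv, mad = full_search_mad(ref, cur, 12, 12)
print(mv, mad)   # -> (-2, -3) 0.0
```

The low-band-shift method of the paper aims to make an equivalent search feasible directly on wavelet coefficients despite the DWT's shift variance.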
Zemek, Radim; Hara, Shinsuke; Yanagihara, Kentaro; Kitayama, Ken-Ichi
In a centralized localization scenario, the limited throughput of the central node constrains the number of target node locations that can be estimated simultaneously. To overcome this limitation, we propose a method that effectively decreases the traffic load associated with target node localization, and therefore increases the number of target node locations that can be estimated simultaneously in a localization system based on the received signal strength indicator (RSSI) and maximum likelihood estimation. Our proposed method utilizes a threshold that limits the amount of RSSI data forwarded to the central node. As the threshold is crucial to the method, we further propose a method to determine its value theoretically. We experimentally verified the proposed method in various environments, and the experimental results revealed that it can reduce the load by 32-64% without significantly affecting the estimation accuracy.
A clinical evaluation of the ability of the Dentobuff method to estimate buffer capacity of saliva.
Wikner, S; Nedlich, U
1985-01-01
The ability of a colourimetric method (Dentobuff) to estimate the buffer capacity of saliva was compared with an electrometric method in 220 adults. The methods correlated well, but Dentobuff frequently underestimated high buffer values, which was considered to be of minor practical importance. Dentobuff identified groups with low, intermediate and high buffer capacity as well as the electrometric method did.
Using Resampling To Estimate the Precision of an Empirical Standard-Setting Method.
Muijtjens, Arno M. M.; Kramer, Anneke W. M.; Kaufman, David M.; Van der Vleuten, Cees P. M.
2003-01-01
Developed a method to estimate the cutscore precisions for empirical standard-setting methods by using resampling. Illustrated the method with two actual datasets consisting of 86 Dutch medical residents and 155 Canadian medical students taking objective structured clinical examinations. Results show the applicability of the method. (SLD)
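The resampling idea generalizes to any empirical standard-setting statistic: resample examinees with replacement, recompute the cutscore each time, and take the standard deviation of the replicates as the cutscore's standard error. A generic sketch (the cutscore rule here, a simple mean of examinee scores, is a stand-in for the empirical standard-setting methods of the paper; the data are simulated):

```python
import numpy as np

def bootstrap_cutscore_se(scores, cutscore_fn, n_boot=2000, seed=1):
    """Bootstrap standard error of an empirical cutscore: resample the
    examinee scores with replacement and recompute the cutscore for
    each replicate."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    reps = [cutscore_fn(rng.choice(scores, size=len(scores), replace=True))
            for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))

# illustrative data: 155 examinee scores; cutscore = their mean
rng = np.random.default_rng(0)
scores = rng.normal(60.0, 10.0, 155)
se = bootstrap_cutscore_se(scores, np.mean)
print(se)   # close to the analytic SE of the mean, 10 / sqrt(155) ~ 0.80
```

For a mean-based cutscore the bootstrap SE should track the analytic value, which is what makes the resampling estimate trustworthy for cutscore rules with no closed-form SE.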
Directory of Open Access Journals (Sweden)
Dengxiao Lang
2017-09-01
Potential evapotranspiration (PET) is crucial for water resources assessment. In this regard, the FAO (Food and Agriculture Organization) Penman–Monteith method (PM) is commonly recognized as the standard method for PET estimation. However, due to its requirement for detailed meteorological data, the application of PM is often constrained in many regions. Under such circumstances, an alternative method with similar efficiency to that of PM needs to be identified. In this study, three radiation-based methods, Makkink (Mak), Abtew (Abt), and Priestley–Taylor (PT), and five temperature-based methods, Hargreaves–Samani (HS), Thornthwaite (Tho), Hamon (Ham), Linacre (Lin), and Blaney–Criddle (BC), were compared with PM at yearly and seasonal scale, using long-term (50 years) data from 90 meteorology stations in southwest China. Indicators, viz. Nash–Sutcliffe efficiency (NSE), relative error (Re), normalized root mean squared error (NRMSE), and coefficient of determination (R2), were used to evaluate the performance of PET estimations by the above-mentioned eight methods. The results showed that the performance of the methods in PET estimation varied among regions; HS, PT, and Abt overestimated PET, while the others underestimated it. In the Sichuan basin, Mak, Abt and HS yielded estimations similar to those of PM, while, in the Yun-Gui plateau, Abt, Mak, HS, and PT showed better performances. Mak performed the best in the east Tibetan Plateau at yearly and seasonal scale, while HS showed a good performance in summer and autumn. In the arid river valley, HS, Mak, and Abt performed better than the others. On the other hand, Tho, Ham, Lin, and BC could not be used to estimate PET in some regions. In general, radiation-based methods for PET estimation performed better than temperature-based methods among the selected methods in the study area. Among the radiation-based methods, Mak performed the best, while HS showed the best performance among the temperature-based methods.
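Among the temperature-based methods listed, Hargreaves–Samani needs only extraterrestrial radiation and daily temperatures, which is why it is a common fallback when PM's data demands cannot be met. A sketch of the standard HS formula (the input values are illustrative):

```python
def hargreaves_samani(ra, t_mean, t_max, t_min):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    ra     : extraterrestrial radiation, expressed in mm/day of
             equivalent evaporation
    t_mean : mean daily air temperature (deg C)
    t_max, t_min : daily temperature extremes (deg C)
    """
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# illustrative mid-latitude summer day
print(round(hargreaves_samani(ra=15.0, t_mean=22.0, t_max=28.0, t_min=16.0), 2))
# -> 4.76 (mm/day)
```

The diurnal temperature range acts as a proxy for solar radiation, which is also the source of the method's regional biases noted in the abstract.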
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
Parameters Estimation for the Spherical Model of the Human Knee Joint Using Vector Method
Ciszkiewicz, A.; Knapczyk, J.
2014-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.
Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method
Ciszkiewicz, A.; Knapczyk, J.
2015-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.
Parameters estimation using the first passage times method in a jump-diffusion model
Khaldi, K.; Meddahi, S.
2016-06-01
This paper makes two contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (GPT method), for estimating the parameters of a stochastic jump-diffusion process; (2) using a time series of gold share prices, it compares the empirical estimation and forecasting results obtained with the GPT method against those obtained with the method of moments and the FPT method applied to the Merton jump-diffusion (MJD) model.
A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers
Energy Technology Data Exchange (ETDEWEB)
Melboe, Hallgeir
2001-10-01
This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which, due to the finite number of iterations, introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
A method for estimating and removing streaking artifacts in quantitative susceptibility mapping.
Li, Wei; Wang, Nian; Yu, Fang; Han, Hui; Cao, Wei; Romero, Rebecca; Tantiwongkosi, Bundhit; Duong, Timothy Q; Liu, Chunlei
2015-03-01
Quantitative susceptibility mapping (QSM) is a novel MRI method for quantifying tissue magnetic properties. In the brain, it reflects the molecular composition and microstructure of the local tissue. However, susceptibility maps reconstructed from single-orientation data still suffer from streaking artifacts which obscure structural details and small lesions. We propose and have developed a general method for estimating streaking artifacts and subtracting them from susceptibility maps. Specifically, this method uses a sparse linear equation and least-squares (LSQR) algorithm to derive an initial estimate of magnetic susceptibility, a fast quantitative susceptibility mapping method to estimate the susceptibility boundaries, and an iterative approach to estimate the susceptibility artifact from ill-conditioned k-space regions only. With a fixed set of parameters for the initial susceptibility estimation and subsequent streaking artifact estimation and removal, the method provides an unbiased estimate of tissue susceptibility with negligible streaking artifacts, as compared to multi-orientation QSM reconstruction. This method allows for improved delineation of white matter lesions in patients with multiple sclerosis and of small structures of the human brain with excellent anatomical detail. The proposed methodology can be extended to other existing QSM algorithms.
Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism
Katuwal, Gajendra J.; Baum, Stefi A.; Cahill, Nathan D.; Dougherty, Chase C.; Evans, Eli; Evans, David W.; Moore, Gregory J.; Michael, Andrew M.
2016-01-01
Previous studies applying automatic preprocessing methods to structural magnetic resonance imaging (sMRI) report inconsistent neuroanatomical abnormalities in Autism Spectrum Disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). T1-weighted sMRIs of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset were processed using three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except TIV_SPM vs. TIV_FS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences was opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than the ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, sub-cortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the
Modified periodogram method for estimating the Hurst exponent of fractional Gaussian noise.
Liu, Yingjun; Liu, Yong; Wang, Kun; Jiang, Tianzi; Yang, Lihua
2009-12-01
Fractional Gaussian noise (fGn) is an important and widely used self-similar process, which is mainly parametrized by its Hurst exponent (H) . Many researchers have proposed methods for estimating the Hurst exponent of fGn. In this paper we put forward a modified periodogram method for estimating the Hurst exponent based on a refined approximation of the spectral density function. Generalizing the spectral exponent from a linear function to a piecewise polynomial, we obtained a closer approximation of the fGn's spectral density function. This procedure is significant because it reduced the bias in the estimation of H . Furthermore, the averaging technique that we used markedly reduced the variance of estimates. We also considered the asymptotical unbiasedness of the method and derived the upper bound of its variance and confidence interval. Monte Carlo simulations showed that the proposed estimator was superior to a wavelet maximum likelihood estimator in terms of mean-squared error and was comparable to Whittle's estimator. In addition, a real data set of Nile river minima was employed to evaluate the efficiency of our proposed method. These tests confirmed that our proposed method was computationally simpler and faster than Whittle's estimator.
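The baseline the authors refine is log-periodogram regression: for fGn the spectral density behaves like f^(1-2H) near zero frequency, so a straight-line fit of log-periodogram against log-frequency over the lowest frequencies gives H = (1 - slope) / 2. A sketch of that unmodified baseline (the paper's piecewise-polynomial refinement and averaging are not reproduced here):

```python
import numpy as np

def hurst_periodogram(x, frac=0.25):
    """Estimate H by regressing log I(f) on log f over the lowest `frac`
    of the positive Fourier frequencies; for fGn the spectral density
    scales as f**(1 - 2H), so H = (1 - slope) / 2."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(len(x))[1:]                # drop f = 0
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2  # raw periodogram
    m = max(4, int(frac * len(freqs)))
    slope, _ = np.polyfit(np.log(freqs[:m]), np.log(power[:m]), 1)
    return (1.0 - slope) / 2.0

# white noise is fGn with H = 0.5, so the estimate should be near 0.5
rng = np.random.default_rng(42)
h = hurst_periodogram(rng.standard_normal(8192))
print(h)
```

The bias this naive estimator incurs for H far from 0.5 is exactly what the refined spectral-density approximation in the paper targets.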
Blast Load Input Estimation of the Medium Girder Bridgeusing Inverse Method
Directory of Open Access Journals (Sweden)
Ming-Hui Lee
2008-01-01
An innovative adaptive weighted input estimation inverse methodology for estimating the unknown time-varying blast loads on a truss structure system is presented. The method is based on the Kalman filter and the recursive least square estimator (RLSE). The filter models the system dynamics in a linear set of state equations; the state equations of the truss structure are constructed using the finite element method. The input blast loads of the truss structure system are inversely estimated from the system responses measured at two distinct nodes. This work presents an efficient weighting factor applied in the RLSE, which is capable of providing reasonable estimation results. The results obtained from the simulations show that the method is effective in estimating input blast loads and has good stability and precision. Defence Science Journal, 2008, 58(1), pp. 46-56, DOI: http://dx.doi.org/10.14429/dsj.58.1622
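The recursive least square estimator at the heart of such schemes can be sketched in its generic form: a recursive update whose forgetting factor plays the role of the adaptive weighting, down-weighting old measurements so that a time-varying quantity can be tracked. This is a minimal stand-alone sketch (a static parameter vector, no Kalman filter or finite element model):

```python
import numpy as np

def rls(phis, ys, lam=0.95, delta=100.0):
    """Recursive least squares with forgetting factor `lam`.

    Estimates theta in y_k = phi_k' theta + noise; lam < 1 discounts
    old data geometrically, which is the role of the weighting factor
    in an adaptive input estimator.
    """
    n = phis.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)            # large initial covariance
    for phi, y in zip(phis, ys):
        K = P @ phi / (lam + phi @ P @ phi)      # gain
        theta = theta + K * (y - phi @ theta)    # innovation update
        P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta

rng = np.random.default_rng(1)
phis = rng.standard_normal((500, 2))
true_theta = np.array([2.0, -1.0])
ys = phis @ true_theta + 0.01 * rng.standard_normal(500)
print(np.round(rls(phis, ys), 2))
```

In the paper the regressors come from the measured structural responses rather than random data, and the RLSE runs on the Kalman filter's innovation sequence.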
Computationally efficient DOD and DOA estimation for bistatic MIMO radar with propagator method
Zhang, Xiaofei; Wu, Hailang; Li, Jianfeng; Xu, Dazhuan
2012-09-01
In this article, we consider a computationally efficient direction of departure (DOD) and direction of arrival (DOA) estimation problem for a bistatic multiple-input multiple-output (MIMO) radar. The computational load of the propagator method (PM) is significantly smaller because the PM requires neither eigenvalue decomposition of the cross-correlation matrix nor singular value decomposition of the received data. An improved PM algorithm is proposed to obtain automatically paired transmit and receive angle estimates in the MIMO radar. The proposed algorithm has angle estimation performance very close to that of the conventional PM, which has a much higher complexity than our algorithm. For high signal-to-noise ratio, the proposed algorithm has angle estimation performance very close to that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. The variance of the estimation error and the Cramér-Rao bound of angle estimation are derived. Simulation results verify the usefulness of our algorithm.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
Spectral Estimation Methods Comparison and Performance Analysis on a Steganalysis Application
Mataracioglu, Tolga
2011-01-01
Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. In today's world, it is widely used to secure information. In this paper, the traditional spectral estimation methods are introduced, and the performance of each method is examined by comparing all of the spectral estimation methods. Finally, from those performance analyses, a brief summary of the pros and cons of the spectral estimation methods is given. We also give a steganography demo by hiding information in a sound signal and manage to extract the information (i.e., the true frequency of the information signal) from the sound by means of the spectral estimation methods.
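As a toy version of such a demo, a hidden tone's frequency can be pulled out of a sound signal with the simplest spectral estimator, the periodogram. The sample rate, tone frequencies, and amplitudes below are invented for illustration (and chosen to fall exactly on FFT bins):

```python
import numpy as np

fs = 8192.0                          # sample rate, Hz (illustrative)
t = np.arange(2048) / fs
# Toy "stego" sound: a 440 Hz cover tone plus a weak 1200 Hz
# information tone
sound = np.sin(2 * np.pi * 440.0 * t) + 0.2 * np.sin(2 * np.pi * 1200.0 * t)

# Periodogram: the two strongest bins reveal both frequencies
spec = np.abs(np.fft.rfft(sound)) ** 2
freqs = np.fft.rfftfreq(len(sound), d=1.0 / fs)
recovered = sorted(float(f) for f in freqs[np.argsort(spec)[-2:]])
print(recovered)   # the hidden tone appears alongside the cover tone
```

Real recordings need windowing or averaged (Welch-type) estimates to control leakage, which is where the compared spectral estimation methods differ.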
[Estimation with the capture-recapture method of the number of economic immigrants in Mallorca].
Ramos Monserrat, M; March Cerdá, J C
2002-05-15
To estimate the number of irregular economic immigrants in Mallorca, we used the capture-recapture method, an indirect method based on contrasting data from two or more sources. Data were obtained from the Delegación de Gobierno (police and immigration authority), Comisiones Obreras (labor union), and institutions that provide health-related services to immigrants. Individuals were identified by birth date and country of origin. The total number of economic immigrants estimated with this method was 39,392. According to the Delegación de Gobierno data, the number of regular immigrants on the date of our inquiry was 9,000. With the capture-recapture method, the number of irregular immigrants in Mallorca was therefore estimated at 30,000. The capture-recapture method can be useful for estimating the population of irregular immigrants in a given area at a given time, if sufficiently precise information on the identity of each individual can be obtained.
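The two-source version of the capture-recapture estimate is simple enough to sketch directly; the counts below are hypothetical, not the study's actual source data:

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimator.

    n1: individuals identified in the first source,
    n2: individuals identified in the second source,
    m:  individuals found in both (matched here by birth date and
        country of origin).
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts for illustration: two lists of 9000 and 4000
# immigrants sharing 913 matched individuals
print(round(lincoln_petersen(9000, 4000, 913)))
```

With three or more sources, as in this study, log-linear models are typically fitted instead of this two-sample formula, since they can account for dependence between sources.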
An Adaptive Finite Element Method Based on Optimal Error Estimates for Linear Elliptic Problems
Institute of Scientific and Technical Information of China (English)
汤雁
2004-01-01
This paper is the third in a series on adaptive finite element methods based on optimal error estimates for linear elliptic problems on concave corner domains. In the preceding two papers (part 1: Adaptive finite element method based on optimal error estimate for linear elliptic problems on concave corner domain; part 2: Adaptive finite element method based on optimal error estimate for linear elliptic problems on nonconvex polygonal domains), we presented adaptive finite element methods based on the energy norm and the maximum norm. In this paper, an important result is presented and analyzed: the algorithms for error control in the energy norm and maximum norm in parts 1 and 2 of this series are based on this result.
Khorate, Manisha M; Dinkar, A D; Ahmed, Junaid
2014-01-01
Changes related to chronological age are seen in both hard and soft tissue. A number of methods for age estimation have been proposed, which can be classified into four categories: clinical, radiological, histological and chemical analysis. In forensic odontology, age estimation based on tooth development is a universally accepted method. The panoramic radiographs of 500 healthy Goan Indian children (250 boys and 250 girls) aged between 4 and 22.1 years were selected. The modified Demirjian's method (1973/2004), the Acharya AB formula (2011), the Dr Ajit D. Dinkar (1984) regression equation, and the Foti and coworkers (2003) formula (clinical and radiological) were applied for estimation of age. The results of our study show that the Dr Ajit D. Dinkar method is the most accurate, followed by the Acharya Indian-specific formula. Furthermore, by applying all these methods to one regional population, we have attempted to identify the dental age estimation methodology best suited for the Goan Indian population.
A reduced-order method for estimating the stability region of power systems with saturated controls
Institute of Scientific and Technical Information of China (English)
GAN; DeQiang; XIN; HuanHai; QIU; JiaJu; HAN; ZhenXiang
2007-01-01
In a modern power system, there are often large differences in the decay speeds of transients. This can lead to numerical problems, such as a heavy simulation burden and singularity, when traditional methods are used to estimate the stability region of such a dynamic system with saturation nonlinearities. To overcome these problems, a reduced-order method based on singular perturbation theory is suggested to estimate the stability region of a singular system with saturation nonlinearities. In the reduced-order method, a low-order linear dynamic system with saturation nonlinearities is constructed to estimate the stability region of the original high-order system, so that the singularity is eliminated and the estimation process is simplified. In addition, the analytical foundation of the reduction method is proven, and the method is validated using a test power system with 3 buses and 5 machines.
A weight modification sequential method for VSC-MTDC power system state estimation
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weight of the state quantity to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on a modified IEEE 14-bus system.
Novel Channel Estimation Method Based on Decision-Directed in OFDM
Institute of Scientific and Technical Information of China (English)
BU Xiang-yuan; ZHANG Jian-kang; YANG Jing
2009-01-01
Based on an analysis of decision-directed (DD) channel estimation using training symbols, a novel DD channel estimation method is proposed for orthogonal frequency division multiplexing (OFDM) systems. The proposed algorithm takes the impact of decision errors into account and calculates their impact on the channel state information of the next symbol duration. Analysis shows that error propagation can be effectively restrained and that channel variation is tracked well. Simulation results demonstrate that both the symbol error rate (SER) and the normalized mean square error (NMSE) performance of the proposed method are better than those of the traditional DD (DD+IS) and maximum likelihood estimate (DD+MLE) methods.
Two methods for estimating aeroelastic damping of operational wind turbine modes from experiments
DEFF Research Database (Denmark)
Hansen, Morten Hartvig; Thomsen, Kenneth; Fuglsang, Peter;
2006-01-01
The theory and results of two experimental methods for estimating the modal damping of a wind turbine during operation are presented. Estimations of the aeroelastic damping of the operational turbine modes (including the effects of the aerodynamic forces) give a quantitative view of the stability characteristics of the turbine. In the first method the estimation of modal damping is based on the assumption that a turbine mode can be excited by a harmonic force at its natural frequency, whereby the decaying response after the end of excitation gives an estimate of the damping. Simulations and experiments... The second method is based on stochastic subspace identification, where a linear model of the turbine is estimated alone from measured response signals by assuming that the ambient excitation from turbulence is random in time and space. Although the assumption is not satisfied, this operational modal analysis method can handle...
Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays.
Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin
2016-02-23
In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches.
Testing a statistical method of global mean paleotemperature estimation in a long climate simulation
Energy Technology Data Exchange (ETDEWEB)
Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik
2001-07-01
Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has recently been estimated. In this work, this method of reconstruction is tested using data from a very long simulation with a climate model. This testing allows estimation of the errors of the reconstructions as a function of the number of proxy records and of the time scales at which the estimations are likely to be reliable. (orig.)
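The calibration-and-reconstruction scheme, and the way a long simulation lets its errors be measured against known truth, can be sketched with synthetic data (all signals below are invented pseudo-proxies, not actual model output):

```python
import numpy as np

rng = np.random.default_rng(2)
# Pseudo-proxy test: inside a long "simulation" the true temperature
# is known, so reconstruction errors can be measured directly.
years = 600
truth = np.sin(2 * np.pi * np.arange(years) / 100.0) \
        + 0.1 * rng.standard_normal(years)              # "model" temperature
proxy = 2.0 * truth + 0.3 * rng.standard_normal(years)  # noisy proxy record

# Calibrate the statistical model on the last 120 "instrumental" years only
a, b = np.polyfit(proxy[-120:], truth[-120:], 1)
recon = a * proxy + b                                   # reconstruct all 600 years

rmse = np.sqrt(np.mean((recon - truth) ** 2))
print(round(float(rmse), 3))
```

Comparing `recon` with `truth` outside the calibration window is exactly the test the abstract describes; with real proxies the truth is unknown, which is why the simulation is needed.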
Novel method of ordinal bearing estimation for more sources based on oblique projector
Institute of Scientific and Technical Information of China (English)
Sun Wei; Bai Jianlin; Wang Kai
2009-01-01
A novel direction of arrival (DOA) estimation method is proposed for the case in which uncorrelated, correlated, and coherent sources coexist in a colored noise field. The uncorrelated and correlated sources are first estimated using a conventional spatial spectrum estimation method; then the noise and the uncorrelated sources, which have a Toeplitz structure, are eliminated by differencing; finally, by exploiting the properties of oblique projection, the contributions of the correlated sources are eliminated from the covariance matrix so that only the coherent sources remain. The coherent sources can then be estimated by the technique of modified spatial smoothing. The number of sources resolved by this approach can exceed the number of array elements without repeatedly estimating the correlated sources. Simulation results demonstrate the effectiveness and efficiency of the proposed method.
Development of a new simple estimating method for protein, fat, and carbohydrate in cooked foods.
Kumae, T
2000-01-01
Evaluations of daily nutrient intakes with practical accuracy contribute not only to public but also to personal health. To obtain accurate estimations of nutrient intake, chemical analyses of a duplicate sample of all foods consumed are recommended. However, these analytical methods are expensive, time consuming, and not practically applicable for field surveys dealing with numerous food samples. To solve this problem, a new rapid and simple method of estimating nutrients is developed here. Elemental compositions of cooked foods were examined using a high-speed, high-performance carbon, hydrogen, and nitrogen autoanalyzer, and the results showed good reproducibility. A significant correlation between Kjeldahl's method and the autoanalyzer method was observed in the nitrogen measurement (n=20; r=0.999; p<0.001), with very good agreement between the methods. Therefore, the nitrogen amount obtained by the autoanalyzer was used for the estimation of the protein proportion in the cooked foods. The fat and carbohydrate proportions estimated by the new method correlated with the values obtained by the chemical method (p<0.001 each), and there was also good agreement of fat and carbohydrate proportions between the chemical and the new estimation methods. According to these results, the new, rapid and simple estimation method established in this study should be applicable to nutritional research.
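The protein step uses the conventional nitrogen-to-protein conversion; a minimal sketch with the generic Jones factor of 6.25 (food-specific factors differ, and the paper's fat and carbohydrate equations are not reproduced here):

```python
# Conventional nitrogen-to-protein conversion. The generic Jones
# factor is 6.25 g protein per g nitrogen; food-specific factors
# range roughly from 5.18 (nuts) to 6.38 (dairy).
def protein_from_nitrogen(nitrogen_g, factor=6.25):
    return nitrogen_g * factor

print(protein_from_nitrogen(1.6))   # 1.6 g N -> 10.0 g protein
```

In the paper, the nitrogen input comes from the CHN autoanalyzer rather than a Kjeldahl digestion, which is what makes the method rapid.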
Zhao, Fangfang; Zhang, Lu; Xu, Zongxue; Scott, David F.
2010-03-01
Changes in vegetation cover can significantly affect streamflow. Two common methods for estimating vegetation effects on streamflow are the paired catchment method and the time trend analysis technique. In this study, the performance of these methods is evaluated using data from paired catchments in Australia, New Zealand, and South Africa. Results show that these methods generally yield consistent estimates of the vegetation effect, and most of the observed streamflow changes are attributable to vegetation change. These estimates are realistic and are supported by the vegetation history. The accuracy of the estimates, however, largely depends on the length of calibration periods or pretreatment periods. For catchments with short or no pretreatment periods, we find that statistically identified prechange periods can be used as calibration periods. Because streamflow also responds to climate variability, in assessing streamflow changes it is necessary to consider the effect of climate in addition to the effect of vegetation. Here, the climate effect on streamflow was estimated using a sensitivity-based method that calculates changes in rainfall and potential evaporation. A unifying conceptual framework, based on the assumption that climate and vegetation are the only drivers for streamflow changes, enables comparison of all three methods. It is shown that these methods provide consistent estimates of vegetation and climate effects on streamflow for the catchments considered. An advantage of the time trend analysis and sensitivity-based methods is that they are applicable to nonpaired catchments, making them potentially useful in large catchments undergoing vegetation change.
A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields
Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.
2014-12-01
Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law that links reflectivity to rainfall intensity have to date precluded the widespread use of radars for quantitative rainfall estimates in hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity; importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the mean square error (MSE) of the estimated rainfall by 16% compared to a traditionally fitted Z-R relation, and rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula-based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates, particularly in areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with the spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method; the weight for each time period at each location is calculated based on the expected estimation error of each method.
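The merging idea of the second contribution can be sketched as inverse-error weighting: each pixel leans on whichever field has the smaller expected squared error there. The weights and values below are hypothetical, and the paper's scheme also varies in time:

```python
import numpy as np

def merge_fields(radar, gauge, mse_radar, mse_gauge):
    """Inverse-MSE weighting of two rainfall fields, a hypothetical
    stand-in for the paper's spatially and temporally varying scheme."""
    w = (1.0 / mse_radar) / (1.0 / mse_radar + 1.0 / mse_gauge)
    return w * radar + (1.0 - w) * gauge

radar = np.array([4.0, 6.0])    # radar rainfall, mm/h
gauge = np.array([5.0, 5.0])    # interpolated gauge rainfall, mm/h
# Pixel 0 sits in a dense-gauge area (gauge trusted), pixel 1 does not
merged = merge_fields(radar, gauge,
                      mse_radar=np.array([4.0, 1.0]),
                      mse_gauge=np.array([1.0, 4.0]))
print(merged)
```

Inverse-variance weighting is the minimum-variance combination when the two errors are unbiased and independent, which motivates weighting by expected estimation error.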
Comparison of methods for estimating the attributable risk in the context of survival analysis
Directory of Open Access Journals (Sweden)
Malamine Gassama
2017-01-01
Background: The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Methods: Using simulations, we compared four methods for estimating the AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on the estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Results: Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than the other approaches. All methods showed satisfactory coverage except for the nonparametric methods at the end of follow-up, especially for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas the semiparametric and parametric approaches, which both rely on the proportional hazards assumption, performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. Conclusion: In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of
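The simpler fifth method lends itself to a one-line sketch: a Levin-type formula combining the baseline exposure prevalence with the Cox hazard ratio (the numbers below are illustrative, not the E3N estimates):

```python
def attributable_risk(prevalence, hazard_ratio):
    """Levin-type attributable risk from baseline exposure prevalence
    and a Cox hazard ratio: AR = p(HR - 1) / (1 + p(HR - 1)).

    A sketch of the 'simpler' method in the comparison; the survival-
    function-based definitions additionally depend on follow-up time.
    """
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)

# e.g. 30% exposed with HR = 1.5 -> about 13% of cases attributable
print(round(attributable_risk(0.3, 1.5), 3))
```

Note this formula is time-invariant, whereas the survival-based definitions estimated in the paper give AR as a function of follow-up time.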
Energy Technology Data Exchange (ETDEWEB)
Ball, J.R.; Cohen, S.; Ziegler, E.Z.
1984-10-01
This document provides overall guidance to assist the NRC in preparing the types of cost estimates required by the Regulatory Analysis Guidelines and to assist in the assignment of priorities in resolving generic safety issues. The Handbook presents an overall cost model that allows the cost analyst to develop a chronological series of activities needed to implement a specific regulatory requirement throughout all applicable commercial LWR power plants and to identify the significant cost elements for each activity. References to available cost data are provided along with rules of thumb and cost factors to assist in evaluating each cost element. A suitable code-of-accounts data base is presented to assist in organizing and aggregating costs. Rudimentary cost analysis methods are described to allow the analyst to produce a constant-dollar, lifetime cost for the requirement. A step-by-step example cost estimate is included to demonstrate the overall use of the Handbook.
Gabre, P; Martinsson, T; Gahnberg, L
1999-08-01
The aim of the present study was to evaluate whether estimation of lactobacilli was possible with simplified saliva sampling methods. Dentocult LB (Orion Diagnostica AB, Trosa, Sweden) was used to estimate the number of lactobacilli in saliva sampled by 3 different methods from 96 individuals: (i) collecting and pouring stimulated saliva over a Dentocult dip-slide; (ii) direct licking of the Dentocult LB dip-slide; (iii) contaminating a wooden spatula with saliva and pressing it against the Dentocult dip-slide. The first method was in accordance with the manufacturer's instructions and selected as the 'gold standard'; the other 2 methods were compared with this result. The 2 simplified methods for estimating levels of lactobacilli in saliva showed good reliability and specificity. Sensitivity, defined as the ability to detect individuals with a high number of lactobacilli in saliva, was sufficient for the licking method (85%), but significantly reduced for the wooden spatula method (52%).
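The reported sensitivity and specificity follow from a simple two-by-two comparison against the gold standard; the counts below are hypothetical, chosen only to reproduce the licking method's 85% sensitivity:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of true high-lactobacilli individuals the
    simplified method detects; specificity: fraction of true negatives
    it correctly rules out (both judged against the gold standard)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a 96-person sample (not the study's table)
sens, spec = sensitivity_specificity(tp=17, fn=3, tn=68, fp=8)
print(round(sens, 2), round(spec, 2))
```

Sensitivity is the clinically important figure here, since the methods are meant to flag individuals at high caries risk.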
Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques
Energy Technology Data Exchange (ETDEWEB)
Melius, J.; Margolis, R.; Ong, S.
2013-12-01
A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.
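A GIS-style suitability screen of the kind reviewed here can be sketched as simple per-cell rules on slope and aspect; the thresholds below are illustrative assumptions, not NREL's calibrated criteria:

```python
import numpy as np

def suitable_fraction(slope_deg, aspect_deg, max_slope=37.0):
    """Fraction of rooftop cells passing simple slope/aspect rules:
    flat roofs (< 10 deg) pass regardless of aspect; pitched roofs
    must face roughly east-through-west (90-270 deg clockwise from
    north, northern hemisphere). Thresholds are illustrative only."""
    flat = slope_deg < 10.0
    oriented = (aspect_deg >= 90.0) & (aspect_deg <= 270.0)
    return float(((slope_deg <= max_slope) & (flat | oriented)).mean())

slope = np.array([5.0, 20.0, 45.0, 25.0])     # degrees from horizontal
aspect = np.array([0.0, 180.0, 180.0, 10.0])  # degrees clockwise from north
print(suitable_fraction(slope, aspect))
```

In the LiDAR workflow, slope and aspect rasters are derived from the elevation model before a screen like this (plus shading analysis) is applied.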
Assessing methods to estimate emissions of non-methane organic compounds from landfills
DEFF Research Database (Denmark)
Saquing, Jovita M.; Chanton, Jeffrey P.; Yazdani, Ramin
2014-01-01
in estimating speciated NMOC flux from landfills; (2) determine for what types of landfills the ratio method may be in error and why, using recent field data to quantify the spatial variation of (CNMOCs/CCH4) in landfills; and (3) formulate alternative models for estimating NMOC emissions from landfills...
Multi-agent coordination strategy estimation method based on control domain
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
To estimate group competition and multi-agent coordination strategies, this paper introduces a notion based on multi-agent groups. Using the control domain, it analyzes the multi-agent strategy during competition at the macroscopic level. The method has been applied to robot soccer, and the results show that it does not depend on the competition outcome: it can objectively and quantitatively estimate the coordination strategy.
A modified delay-time method for statics estimation with the virtual refraction
Mikesell, T.D.; Van Wijk, K.; Ruigrok, E.N.; Lamb, A.; Blum, T.E.
2012-01-01
Topography and near-surface heterogeneities lead to traveltime perturbations in surface land-seismic experiments. Usually, these perturbations are estimated and removed prior to further processing of the data. A common technique to estimate these perturbations is the delay-time method. We have devel
Comparison of methods for estimating herbage intake in grazing dairy cows
DEFF Research Database (Denmark)
Hellwing, Anne Louise Frydendahl; Lund, Peter; Weisbjerg, Martin Riis
2015-01-01
season, and the herbage intake was estimated twice during each season. Cows were on pasture from 8:00 until 15:00, and were subsequently housed inside and fed a mixed ration (MR) based on maize silage ad libitum. Herbage intake was estimated with nine different methods: (1) animal performance (2) intake...
The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probabi...
Bauer, Daniel J.; Sterba, Sonya K.
2011-01-01
Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…
Critical length sampling: a method to estimate the volume of downed coarse woody debris
Göran Ståhl; Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey
2010-01-01
In this paper, critical length sampling for estimating the volume of downed coarse woody debris is presented. Using this method, the volume of downed wood in a stand can be estimated by summing the critical lengths of down logs included in a sample obtained using a relascope or wedge prism; typically, the instrument should be tilted 90° from its usual...
Joint estimation of TOA and DOA in IR-UWB system using a successive propagator method
Wang, Fangqiu; Zhang, Xiaofei; Wang, Chenghua; Zhou, Shengkui
2015-10-01
Impulse radio ultra-wideband (IR-UWB) ranging and positioning require accurate estimation of time-of-arrival (TOA) and direction-of-arrival (DOA). With a two-antenna receiver, both the TOA and DOA parameters can be estimated via the two-dimensional (2D) propagator method (PM), in which the 2D spectral peak searching, however, incurs much higher computational complexity. This paper proposes a successive PM algorithm for joint TOA and DOA estimation in IR-UWB systems that avoids 2D spectral peak searching. The proposed algorithm first obtains initial TOA estimates for the two antennas from the propagation matrix, then successively applies one-dimensional (1D) local searches to refine the TOA estimates, and finally obtains the DOA estimates from the difference in the TOAs between the two antennas. The proposed algorithm, which only requires 1D local searches, avoids the high computational cost of the 2D-PM algorithm. Furthermore, it obtains automatically paired parameters and has better joint TOA and DOA estimation performance than the conventional PM algorithm, the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and the matrix pencil algorithm, while its estimates are very close to those of the 2D-PM algorithm. We also derive the mean square error of the TOA and DOA estimates of the proposed algorithm and the Cramér-Rao bound of TOA and DOA estimation. The simulation results verify the usefulness of the proposed algorithm.
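The final step, turning a TOA difference between the two antennas into a DOA, follows from plane-wave geometry: sin θ = c·Δτ/d. A minimal sketch (far-field assumption; the antenna spacing and delay are invented example values):

```python
import math

C = 3e8  # propagation speed, m/s

def doa_from_toa(toa1, toa2, spacing):
    """DOA from the TOA difference between two antennas separated by
    `spacing` metres: sin(theta) = c * (toa1 - toa2) / spacing
    (far-field plane-wave assumption). Returns degrees."""
    return math.degrees(math.asin(C * (toa1 - toa2) / spacing))

# 0.5 m spacing: a 1 ns arrival difference maps to asin(0.6) ~ 36.9 deg
print(round(doa_from_toa(1e-9, 0.0, 0.5), 1))
```

Because the delay enters through an arcsine, DOA accuracy degrades near endfire (|sin θ| → 1), which is why precise TOA estimates from the 1D searches matter.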
Energy Technology Data Exchange (ETDEWEB)
Boubal, O.; Oksman, J. [Ecole Superieure d' Electricite, 91 - Gif-sur-Yvette (France)
1999-07-01
Knock in spark ignition engines works against car manufacturers' efforts to reduce fuel consumption and exhaust gas emissions. This article develops a signal processing method to quantify knock. After discussing some classical techniques of knock energy estimation, an acoustical measurement technique is presented. An original signal processing method, based on a parametric behavioral model of both knock and apparatus, and a special inversion technique are used to obtain the actual knock parameters. The knock-related parameters are computed in a two-step process: a deconvolution algorithm is used to obtain a signal made of unitary pulses, followed by an efficient inversion method. The whole process is applied to real data from a one-cylinder engine, and the results are compared to those obtained from an existing technique suited to a common industrial application. (authors)
Parameter Estimation from Near Stall Flight Data using Conventional and Neural-based Methods
Directory of Open Access Journals (Sweden)
S. Saderla
2016-12-01
Full Text Available The current research paper is an endeavour to estimate parameters from near-stall flight data of manned and unmanned research flight vehicles using conventional and neural-based methods. For an aircraft undergoing stall, the aerodynamic model at these high angles of attack becomes nonlinear due to the influence of unsteady, transient and flow-separation phenomena. In order to address these issues, Kirchhoff's flow separation theory was used to incorporate the nonlinearity into the aerodynamic model in terms of the flow separation point and stall characteristic parameters. The classical Maximum Likelihood Estimation (MLE) method and the Neural Gauss-Newton (NGN) method were employed to estimate the nonlinear parameters of two manned and one unmanned research aircraft. The estimated static stall parameter and break point for the flight vehicles under consideration were observed to be consistent across both methods. Moreover, the efficacy of the methods is also evident from the consistent estimates of the post-stall hysteresis time constant. It can also be inferred that the considered quasi-steady model adequately captures the drag and pitching moment coefficients in the post-stall regime. Confidence in these estimates is significantly enhanced by the observed low values of the Cramer-Rao bounds. Further, the estimated nonlinear parameters were validated by performing a proof-of-match exercise for the considered flight vehicles. Interestingly, the NGN method, which does not involve solving the equations of motion, was able to perform on a par with the MLE method.
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are dependent on the initial model; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from seismic data of limited spatial aperture and frequency bandwidth using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time-consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, while guaranteeing the estimation accuracy. Numerical examples on smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed times all demonstrate the validity of the proposed DDM.
Evaluation of multiple tracer methods to estimate low groundwater flow velocities.
Reimus, Paul W; Arnold, Bill W
2017-04-01
Four different tracer methods were used to estimate groundwater flow velocity at a multiple-well site in the saturated alluvium south of Yucca Mountain, Nevada: (1) two single-well tracer tests with different rest or "shut-in" periods, (2) a cross-hole tracer test with an extended flow interruption, (3) a comparison of two tracer decay curves in an injection borehole with and without pumping of a downgradient well, and (4) a natural-gradient tracer test. Such tracer methods are potentially very useful for estimating groundwater velocities when hydraulic gradients are flat (and hence uncertain) and also when water level and hydraulic conductivity data are sparse, both of which were the case at this test location. The purpose of the study was to evaluate the first three methods for their ability to provide reasonable estimates of relatively low groundwater flow velocities in such low-hydraulic-gradient environments. The natural-gradient method is generally considered to be the most robust and direct method, so it was used to provide a "ground truth" velocity estimate. However, this method usually requires several wells, so it is often not practical in systems with large depths to groundwater and correspondingly high well installation costs. The fact that a successful natural-gradient test was conducted at the test location offered a unique opportunity to compare the flow velocity estimates obtained by the more easily deployed and lower-risk methods with the ground-truth natural-gradient method. The groundwater flow velocity estimates from the four methods agreed very well with each other, suggesting that the first three methods all provided reasonably good estimates of groundwater flow velocity at the site. The advantages and disadvantages of the different methods, as well as some of the uncertainties associated with them, are discussed. Published by Elsevier B.V.
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
A method to estimate relative orientations of body segments during movement using accelerometry
Kortier, H.G.; Schenk, O.; Luinge, H.J.; Veltink, P.H.
2011-01-01
Quantitative assessment of human body movements includes the analysis of joint orientations. We propose a new method to estimate relative orientation between body segments using a single 3D accelerometer per segment.
National Aeronautics and Space Administration — The problem of estimating the aerodynamic models for flight control of damaged aircraft using an innovative differential vortex lattice method tightly coupled with...
A Novel Method for the Initial-Condition Estimation of a Tent Map
Institute of Scientific and Technical Information of China (English)
CHEN Xi; GAO Yong; YANG Yuan
2009-01-01
Based on the connection between the tent map and the sawtooth map or Bernoulli map, a novel method for the initial-condition estimation of the tent map is presented. In the method, the symbolic sequence generated from the tent map is first converted to the forms obtained from the sawtooth and Bernoulli maps; the relationship between the symbolic sequence and the initial condition of the tent map is then obtained from the initial-condition estimation equations, which are easily derived, and the estimation for the tent map is finally achieved. The method is computationally simple and the error of the estimator is less than 1/2^N. The method is verified by software simulation.
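The general idea of recovering a tent-map initial condition from its symbolic sequence can be sketched as follows; this is a generic backward-iteration illustration, not the paper's exact sawtooth/Bernoulli conversion:

```python
def tent(x):
    """Tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def symbols(x0, n):
    """Symbolic sequence: 0 when the left branch is used, 1 otherwise."""
    seq, x = [], x0
    for _ in range(n):
        seq.append(0 if x < 0.5 else 1)
        x = tent(x)
    return seq

def estimate_x0(seq):
    """Map [0, 1] backwards through the observed branches; the interval
    halves per symbol, so the midpoint error is below 1/2**N."""
    lo, hi = 0.0, 1.0
    for s in reversed(seq):
        if s == 0:                       # preimage under x -> 2x
            lo, hi = lo / 2, hi / 2
        else:                            # preimage under x -> 2(1 - x)
            lo, hi = 1 - hi / 2, 1 - lo / 2
    return (lo + hi) / 2

x0 = 0.3141
print(abs(estimate_x0(symbols(x0, 20)) - x0) < 1 / 2**20)  # → True
```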
Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2010-01-01
We consider the problem of estimating the fundamental frequency of periodic signals such as audio and speech. A novel estimation method based on polynomial rooting of the harmonic MUltiple SIgnal Classification (HMUSIC) is presented. By applying polynomial rooting, we obtain two significant improvements compared to HMUSIC. First, the proposed method yields an estimate of the fundamental frequency without a grid search as in HMUSIC, because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain a higher spectral resolution compared to HMUSIC, which is a property of polynomial rooting methods. Our simulation results show that the proposed method is applicable to real-life signals, and that in most cases we obtain a higher spectral resolution than HMUSIC.
Critical review of methods for the estimation of actual evapotranspiration in hydrological models
CSIR Research Space (South Africa)
Jovanovic, Nebojsa
2012-01-01
Full Text Available The chapter is structured in three parts, namely: i) A theoretical overview of evapotranspiration processes, including the principle of atmospheric demand-soil water supply, ii) A review of methods and techniques to measure and estimate actual...
Directory of Open Access Journals (Sweden)
V. E. Strizhius
2015-01-01
Full Text Available Approximate methods for estimating the fatigue durability of typical elements of composite airframe components, which can be recommended for use at the preliminary design stage of an airplane, are developed and presented.
Method for estimating the gas-dynamic stability margin of a gas turbine engine in service
Directory of Open Access Journals (Sweden)
І.Ф. Кінащук
2005-01-01
Full Text Available A method for estimating the gas-dynamic stability margin of the compressor of a gas turbine engine under operating conditions is considered. The method can be used in the diagnostics of gas turbine engines and gas turbine installations.
National Aeronautics and Space Administration — Estimation of aerodynamic models for the control of damaged aircraft using an innovative differential vortex lattice method tightly coupled with an extended Kalman...
Estimating Magic Numbers Larger Than 126 by Fermi-Yang Liming Method
Institute of Scientific and Technical Information of China (English)
LI Xian-Hui; ZHOU Zhi-Ning; ZHONG Yu-Shu; YANG Ze-Sen
2001-01-01
The Fermi-Yang Liming method is followed and developed to estimate new magic numbers in nuclei with a Woods-Saxon density function. The calculated results predict that the magic numbers next to 126 should be around 184 and 258.
Borodachev, S. M.
2016-06-01
A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
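The Kalman-filter view of RLS described above (a constant state observed through noisy regressions) can be sketched in pure Python; this is a generic textbook form, not the paper's derivation:

```python
def rls_update(theta, P, x, y):
    """One recursive-least-squares step, i.e. a Kalman filter measurement
    update for a constant state theta observed through y = x'theta + noise."""
    n = len(theta)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(x[i] * Px[i] for i in range(n))
    K = [Px[i] / denom for i in range(n)]                 # gain
    err = y - sum(x[i] * theta[i] for i in range(n))      # innovation
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[P[i][j] - K[i] * Px[j] for j in range(n)] for i in range(n)]
    return theta, P

# Recover y = 2*x1 - 3*x2 from noiseless observations:
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]):
    theta, P = rls_update(theta, P, x, 2.0 * x[0] - 3.0 * x[1])
print([round(t, 2) for t in theta])  # close to [2.0, -3.0]
```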
A Comparison of Iterative 2D-3D Pose Estimation Methods for Real-Time Applications
DEFF Research Database (Denmark)
Grest, Daniel; Krüger, Volker; Petersen, Thomas
2009-01-01
This work compares iterative 2D-3D pose estimation methods for use in real-time applications. The compared methods are publicly available as C++ code. One method is part of the OpenCV library, namely POSIT. Because POSIT is not applicable to planar 3D point configurations, we include the planar...
Institute of Scientific and Technical Information of China (English)
MA Qinghua; YANG Enhao
2000-01-01
An estimation method for solutions to the general linear system of Volterra-type integral inequalities containing several iterated integral functionals is obtained. This method is based on a result proved by the second author in J. Math. Anal. Appl. (1984). A certain two-dimensional system of nonlinear ordinary differential equations is also discussed to demonstrate the usefulness of our method.
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-08-29
Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, a source localization method (standardized low-resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity of, and temporal dependence between, the sources. Then, the dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between the active regions. The advantage of this method is that the activity of different brain parts is estimated simultaneously with the calculation of the effective connectivity between the active regions. By combining the dual Kalman filter with brain source localization methods, the source activity is updated over time in addition to the connectivity estimation. The performance of the proposed method was first evaluated on simulated EEG signals with simulated interacting connectivity between the active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. Across the different settings (simulated and real signals), the proposed method gives acceptable results with the least mean square error under noisy and real conditions.
Combining the triangle method with thermal inertia to estimate regional evapotranspiration
DEFF Research Database (Denmark)
Stisen, Simon; Sandholt, Inge; Nørgaard, Anette
2008-01-01
Spatially distributed estimates of evaporative fraction and actual evapotranspiration are pursued using a simple remote sensing technique based on a remotely sensed vegetation index (NDVI) and diurnal changes in land surface temperature. The technique, known as the triangle method, is improved … in surface temperature, dTs, with an interpretation of the triangular-shaped dTs-NDVI space, allows for a direct estimation of evaporative fraction. The mean daytime energy available for evapotranspiration (Rn-G) is estimated using several remote sensors and limited ancillary data. Finally, regional estimates …
Sunbuloglu, Emin; Bozdag, Ergun; Toprak, Tuncer; Islak, Civan
2013-01-01
This study aims at establishing a method of experimental parameter estimation for a large-deformation nonlinear viscoelastic continuous fibre-reinforced composite material model. Specifically, arterial tissue was investigated in the experimental research and parameter estimation studies, owing to the medical, scientific and socio-economic importance of soft tissue research. Using analytical formulations for specimens under combined inflation/extension/torsion of thick-walled cylindrical tubes, in vitro experiments were carried out with fresh sheep arterial segments, and parameter estimation procedures were carried out on the experimental data. Model restrictions are pointed out using the outcomes of the parameter estimation, and needs for further studies are discussed.
Application of a novel method for age estimation of a baleen whale and a porpoise
DEFF Research Database (Denmark)
Nielsen, Nynne H.; Garde, Eva; Heide-Jørgensen, Mads Peter
2013-01-01
Eyeballs from 121 fin whales (Balaenoptera physalus) and 83 harbor porpoises (Phocoena phocoena) were used for age estimation using the aspartic acid racemization (AAR) technique. The racemization rate (kAsp) for fin whales was established from 15 fetuses (age 0) and 15 adult whales where age was … ± 0.0018) were estimated, which is considerably higher than found for other cetaceans. Correlation between chosen age estimates from AAR and GLG counts indicated that AAR might be an alternative method for estimating age in marine mammals.
A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models
Holte, Sarah E.
2016-01-01
Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation of parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method since it is a modification of least squares methods known to be biased. Co...
Sparse Inverse Covariance Estimation via an Adaptive Gradient-Based Method
Sra, Suvrit; Kim, Dongmin
2011-01-01
We study the problem of estimating, from data, a sparse approximation to the inverse covariance matrix. Estimating a sparsity-constrained inverse covariance matrix is a key component in Gaussian graphical model learning, but one that is numerically very challenging. We address this challenge by developing a new adaptive gradient-based method that carefully combines gradient information with an adaptive step-scaling strategy, which results in a scalable, highly competitive method. Our algorithm...
Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;
1993-01-01
Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed…
Error estimates of H1-Galerkin mixed finite element method for Schrödinger equation
Institute of Scientific and Technical Information of China (English)
LIU Yang; LI Hong; WANG Jin-feng
2009-01-01
An H1-Galerkin mixed finite element method is discussed for a class of second-order Schrödinger equations. Optimal error estimates of semidiscrete schemes are derived for problems in one space dimension, and optimal error estimates are also derived for fully discrete schemes. It is shown that the H1-Galerkin mixed finite element approximations have the same rate of convergence as the classical mixed finite element methods without requiring the LBB consistency condition.
A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems
1980-03-01
early in 1950 [1]. This problem has been presented in the literature under a variety of names, including the newsvendor problem and the Christmas tree problem. Sok, Yong-u. Monterey, California: Naval Postgraduate School. http://hdl.handle.net/10945/19069
2015-08-24
Final Report: Information-Driven Blind Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks. The views, opinions and/or findings contained in this report are those of the author(s). We investigated different methods for blind Doppler shift estimation and compensation in underwater acoustic wireless sensor networks.
Using the Black Scholes method for estimating high cost illness insurance premiums in Colombia
Directory of Open Access Journals (Sweden)
Liliana Chicaíza
2009-04-01
Full Text Available This article applies the Black-Scholes option valuation formula to the calculation of high-cost illness reinsurance premiums in the Colombian health system. The coverage pattern used in reinsuring high-cost illnesses was replicated by means of a European call option contract. The option's relevant variables and parameters were adapted to an insurance market context. The premium estimated by the Black-Scholes method fell within the range of premiums estimated by the actuarial method.
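The Black-Scholes formula for a European call, which the article adapts to reinsurance premiums, can be sketched in its standard textbook form (the insurance-specific parameter mapping from the article is not reproduced here):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    maturity T (years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Classic textbook example: at-the-money, 1 year, r = 5%, sigma = 20%:
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # → 10.45
```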
Evaluation of Six Methods for Estimating Synonymous and Nonsynonymous Substitution Rates
Institute of Scientific and Technical Information of China (English)
Zhang Zhang; Jun Yu
2006-01-01
Methods for estimating synonymous and nonsynonymous substitution rates among protein-coding sequences adopt different mutation (substitution) models with subtle yet significant differences, which lead to different estimates of evolutionary information. Little attention has been devoted to comparing methods for obtaining reliable estimates, since the amount of sequence variation within a targeted dataset is always unpredictable. To our knowledge, there is little information available in the literature about the evaluation of these different methods. In this study, we compared six widely used methods and provide evaluation results using simulated sequences. The results indicate that incorporating sequence features (such as transition/transversion bias and nucleotide/codon frequency bias) into methods yields better performance. We recommend that conclusions related to or derived from Ka and Ks analyses should not be drawn readily from the results of a single method.
Design of a Direction-of-Arrival Estimation Method Used for an Automatic Bearing Tracking System
Guo, Feng; Liu, Huawei; Huang, Jingchang; Zhang, Xin; Zu, Xingshui; Li, Baoqing; Yuan, Xiaobing
2016-01-01
In this paper, we introduce a sub-band direction-of-arrival (DOA) estimation method suitable for employment within an automatic bearing tracking system. Inspired by the magnitude-squared coherence (MSC), we extend the MSC to the sub-band level and propose the sub-band magnitude-squared coherence (SMSC) to measure the coherence between the frequency sub-bands of wideband signals. We then design a sub-band DOA estimation method that uses the SMSC to choose a sub-band from the wideband signals for the bearing tracking system. The simulations demonstrate that the sub-band method offers a good tradeoff between wideband and narrowband methods in terms of estimation accuracy, spatial resolution, and computational cost. The proposed method was also tested in the field environment with the bearing tracking system, where it likewise showed good performance. PMID:27455267
A Robust Method for Relative Gravity Data Estimation with High Efficiency
Touati, F.; Idres, M.; Kahlouche, S.
2010-01-01
When gravimetric observations contain outliers, standard least squares (LS) estimation will likely give poor accuracy and unreliable parameter estimates. One typical approach to overcome this problem is to use robust estimation techniques. In this paper, we modified the robust estimator of Gervini and Yohai (2002) called REWLSE (Robust and Efficient Weighted Least Squares Estimator), which simultaneously combines high statistical efficiency and a high breakdown point, by replacing its weight function with a new one. This method reduces the impact of outliers and makes more use of the information provided by the data. To adapt this technique to relative gravity data, weights are computed using the empirical distribution of the residuals obtained initially by the LTS (Least Trimmed Squares) estimator and by minimizing the mean distances relative to the LS estimator without outliers. The robustness of the initial estimator is maintained by adapted cut-off values as suggested by the REWLSE method, which also allows a reasonable statistical efficiency. We demonstrate the advantages and pertinence of the REWLSE procedure on real and semi-simulated gravity data by comparing it with conventional LS and other robust approaches such as M- and MM-estimators.
A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding
Institute of Scientific and Technical Information of China (English)
Xue Mengfan; Li Xiaoping; Sun Haifeng; Fang Haiyan
2016-01-01
X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy, or involve the maximization of a generally non-convex objective function, thus resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on this, we recognize the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
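Epoch folding itself, the starting point of the proposed estimator, can be sketched in a few lines (a generic illustration; the paper's statistical treatment of the folded profile is not reproduced):

```python
def epoch_fold(arrival_times, period, n_bins=20):
    """Fold photon arrival times at an assumed pulse period into a
    binned pulse profile (photon counts per phase bin)."""
    profile = [0] * n_bins
    for t in arrival_times:
        phase = (t % period) / period            # phase in [0, 1)
        profile[int(phase * n_bins) % n_bins] += 1
    return profile

# Photons arriving once per period at phase 0.25 pile up in one bin:
times = [k + 0.25 for k in range(100)]
profile = epoch_fold(times, 1.0)
print(profile.index(max(profile)))  # → 5  (bin 5 of 20, phase 0.25)
```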
A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding
Directory of Open Access Journals (Sweden)
Xue Mengfan
2016-06-01
Full Text Available X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy, or involve the maximization of a generally non-convex objective function, thus resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on this, we recognize the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
One-repetition maximum bench press performance estimated with a new accelerometer method.
Rontu, Jari-Pekka; Hannula, Manne I; Leskinen, Sami; Linnamo, Vesa; Salmi, Jukka A
2010-08-01
The one-repetition maximum (1RM) is an important measure of muscular strength. The purpose of this study was to evaluate a new method to predict 1RM bench press performance from a submaximal lift. The developed method was evaluated using different load levels (50, 60, 70, 80, and 90% of 1RM). The subjects were active floorball players (n = 22). The new method is based on the assumption that 1RM can be estimated from the submaximal weight and the maximum acceleration of that weight during the lift. The submaximal bench press lift was recorded with a 3-axis accelerometer integrated into wrist equipment and a data acquisition card. The maximum acceleration was calculated from the sensor measurement data and analyzed on a personal computer with LabView-based software. The estimated 1RM results were compared with traditionally measured 1RM results of the subjects. A separate estimation equation was developed for each load level; that is, 5 different estimation equations were used, based on the measured 1RM values of the subjects. The mean (+/-SD) measured 1RM result was 69.86 (+/-15.72) kg. The means of the estimated 1RM values were 69.85-69.97 kg. The correlations between measured and estimated 1RM results were high (0.89-0.97; p < 0.001). The differences between the methods were very small (-0.11 to 0.01 kg) and not significantly different from each other. The results of this study showed promising prediction accuracy for estimating bench press performance from just a single submaximal bench press lift. The estimation accuracy is competitive with other known estimation methods, at least for the current study population.
Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data
Directory of Open Access Journals (Sweden)
Wei-Kuang Lai
2016-02-01
Full Text Available Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow from the numbers of HOs and NLUs, and the traffic density from the numbers of CAs and PLUs. The vehicle speeds can then be estimated from the estimated traffic flows and densities. For vehicle speed forecasting, a back-propagation neural network algorithm is considered to predict the future vehicle speed from the current traffic information (i.e., the vehicle speeds estimated from CFVD). In the experimental environment, this study adopted practical traffic information (i.e., traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program, and referred to the mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results show that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
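The estimation step rests on the hydrodynamic relation speed = flow / density. A deliberately simplified sketch (the scaling factors relating signal counts to flow and density are hypothetical placeholders, not the paper's calibrated models):

```python
def estimate_speed(handovers, normal_updates, call_arrivals, periodic_updates,
                   road_len_km=1.0, hours=1.0, phones_per_vehicle=1.0):
    """Sketch of the CFVD idea: flow (veh/h) taken proportional to
    HO + NLU counts, density (veh/km) proportional to CA + PLU counts,
    then speed = flow / density. All scaling factors are hypothetical."""
    flow = (handovers + normal_updates) / (phones_per_vehicle * hours)
    density = (call_arrivals + periodic_updates) / (phones_per_vehicle * road_len_km)
    return flow / density if density else float("nan")

# 2000 moving-vehicle events per hour over 25 present vehicles per km:
print(estimate_speed(1500, 500, 10, 15))  # → 80.0  (km/h)
```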
An indirect transmission measurement-based spectrum estimation method for computed tomography
Zhao, Wei; Schafer, Sebastian; Royalty, Kevin
2015-01-01
The characteristics of an x-ray spectrum can greatly influence imaging and related tasks. In practice, due to the pile-up effect of the detector, it is difficult to directly measure the spectrum of a CT scanner using an energy-resolved detector. An alternative solution is to estimate the spectrum from transmission measurements with a step phantom or another CT phantom. In this work, we present a new spectrum estimation method based on indirect transmission measurement and a model-spectra mixture approach. The estimated x-ray spectrum is expressed as a weighted summation of a set of model spectra, which significantly reduces the degrees of freedom (DOF) of the spectrum estimation problem. Next, an estimated projection can be calculated with the assumed spectrum. By iteratively updating the unknown weights, we minimize the difference between the estimated projection data and the raw projection data. The final spectrum is calculated from these calibrated weights and the model spectra. Both simulation and experim...
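The iterative weight update can be illustrated with a linearized toy version (the real forward model maps spectra to projections nonlinearly through Beer-Lambert attenuation; here the projections are assumed linear in the weights, and the gradient-descent settings are arbitrary):

```python
def fit_weights(model_projs, measured, iters=2000, lr=0.01):
    """Projected gradient descent on nonnegative mixture weights so the
    weighted sum of per-model projections matches the measurement."""
    n, m = len(model_projs), len(measured)
    w = [1.0 / n] * n                                    # uniform start
    for _ in range(iters):
        est = [sum(w[k] * model_projs[k][i] for k in range(n))
               for i in range(m)]
        grad = [sum(2.0 * (est[i] - measured[i]) * model_projs[k][i]
                    for i in range(m)) for k in range(n)]
        w = [max(0.0, w[k] - lr * grad[k]) for k in range(n)]
    return w

# Two hypothetical model spectra whose projections are orthogonal:
w = fit_weights([[1.0, 0.0], [0.0, 1.0]], [0.3, 0.7])
print([round(x, 3) for x in w])  # → [0.3, 0.7]
```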
An iterative method for coil sensitivity estimation in multi-coil MRI systems.
Ling, Qiang; Li, Zhaohui; Song, Kaikai; Li, Feng
2014-12-01
This paper presents an iterative coil sensitivity estimation method for multi-coil MRI systems. The proposed method works with coil images in the magnitude image domain. It determines a region of support (RoS), a region composed of the same type of tissue, by a region-growing algorithm that makes use of both the intensities and the intensity gradients of pixels. By repeating this procedure, it can determine multiple regions of support, which together cover most of the image area of interest. The union of these regions of support provides a rough estimate of the sensitivity of each coil, obtained by dividing the pixel intensities by the average intensity inside every region of support. The rough coil sensitivity estimate is then approximated by the product of multiple low-order polynomials, rather than a single one. The product of these polynomials provides a smooth estimate of the sensitivity of each coil. With the obtained coil sensitivities, a better reconstructed image can be produced, which in turn determines more correct regions of support and yields more precise estimates of the coil sensitivities. In other words, the method can be implemented iteratively to improve the estimation performance. The proposed method was verified on both simulated data and clinical data from different body parts. The experimental results confirm the superiority of our method to some conventional methods.
Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.
Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E
2016-12-20
Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
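The resampling scheme described above can be sketched in a few lines: seeds are redrawn with replacement, then each respondent's recruits are redrawn with replacement, recursively. The recruitment forest and binary attribute below are toy values, not the paper's data:

```python
import random

# Toy recruitment forest: each node is (binary attribute value, list of recruits).
forest = [
    (1, [(0, []), (1, [(1, []), (0, [])])]),
    (0, [(1, []), (0, [(0, [])])]),
]

def resample_tree(node, rng):
    value, children = node
    values = [value]
    # Redraw this respondent's recruits with replacement, recursing into each draw
    for child in (rng.choices(children, k=len(children)) if children else []):
        values.extend(resample_tree(child, rng))
    return values

def tree_bootstrap(forest, n_boot, seed=0):
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = []
        for tree in rng.choices(forest, k=len(forest)):   # resample seeds
            sample.extend(resample_tree(tree, rng))
        stats.append(sum(sample) / len(sample))           # e.g. estimated prevalence
    return sorted(stats)

boots = tree_bootstrap(forest, n_boot=1000)
ci = (boots[25], boots[-25])     # rough 95% percentile interval
```

Note that only the tree structure is resampled; the attribute plays no role in the resampling itself, which is why correlations between attributes can also be bootstrapped.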
Improved FRFT-based method for estimating the physical parameters from Newton's rings
Wu, Jin-Min; Lu, Ming-Feng; Tao, Ran; Zhang, Feng; Li, Yang
2017-04-01
Newton's rings are often encountered in interferometry, and by analyzing them we can estimate physical parameters such as the curvature radius and the rings' center. The fractional Fourier transform (FRFT) is capable of estimating these physical parameters from the rings despite noise and obstacles, but there is still a small deviation between the estimated coordinates of the rings' center and the actual values. The least-squares fitting method is widely used for its accuracy, but it is easily affected by the initial values. Nevertheless, with the estimates obtained from the FRFT, it is easy to meet the requirements on the initial values. The method proposed in this paper combines the advantages of the FRFT with those of the least-squares fitting method in analyzing Newton's rings fringe patterns. Its performance is assessed by analyzing simulated and actual Newton's rings images. The experimental results show that the proposed method is capable of estimating the parameters in the presence of noise and obstacles. Under the same conditions, the estimation results are better than those obtained with the original FRFT-based method, especially for the rings' center. Some applications are shown to illustrate that the improved FRFT-based method is an important technique for interferometric measurements.
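A least-squares center fit of the kind this pipeline ends with can be sketched with the Kåsa circle fit, which becomes linear after a change of variables; in the paper's pipeline the FRFT would supply the rough center estimate, while the ring points below are synthetic and noise-free (illustrative):

```python
import numpy as np

# Kasa least-squares circle fit: writing x^2 + y^2 = 2ax + 2by + c with
# c = r^2 - a^2 - b^2 makes the fit LINEAR in (a, b, c).
def fit_circle(x, y):
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Synthetic points on one ring: center (3, -1), radius 2
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
xc, yc, r = fit_circle(3.0 + 2.0 * np.cos(theta), -1.0 + 2.0 * np.sin(theta))
# xc, yc, r recover (3.0, -1.0, 2.0) for noise-free points
```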
Parameter estimation for MIMO system based on MUSIC and ML methods
Institute of Scientific and Technical Information of China (English)
Wei DONG; Jiandong LI; Zhuo LU; Linjing ZHAO
2009-01-01
The frequency offset and channel gain estimation problem for multiple-input multiple-output (MIMO) systems in the case of flat-fading channels is addressed. Based on the multiple signal classification (MUSIC) and the maximum likelihood (ML) methods, a new joint estimation algorithm of frequency offsets and channel gains is proposed. The new algorithm has three steps. A subset of frequency offsets is first estimated with the MUSIC algorithm. All frequency offsets in the subset are then identified with the ML method. Finally, channel gains are calculated with the ML estimator. The algorithm is a one-dimensional search scheme and therefore greatly decreases the complexity of joint ML estimation, which is essentially a multi-dimensional search scheme.
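The MUSIC step can be illustrated in its simplest form: locating the frequency of a single complex exponential via the noise-subspace pseudospectrum. This is a minimal sketch with a toy noise-free signal, not the paper's MIMO flat-fading model:

```python
import numpy as np

def music_freq(x, model_order, m=8, n_grid=2000):
    # Sample covariance from overlapping length-m snapshots
    snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snaps.T @ snaps.conj() / len(snaps)
    _, v = np.linalg.eigh(R)                 # eigenvalues ascending
    noise_sub = v[:, : m - model_order]      # noise subspace
    freqs = np.linspace(-0.5, 0.5, n_grid, endpoint=False)
    steer = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))
    # MUSIC pseudospectrum: large where steering vectors are (near-)orthogonal
    # to the noise subspace; the 1e-12 floor avoids division by zero
    p = 1.0 / (1e-12 + np.sum(np.abs(noise_sub.conj().T @ steer) ** 2, axis=0))
    return freqs[np.argmax(p)]

n = np.arange(64)
f_est = music_freq(np.exp(2j * np.pi * 0.123 * n), model_order=1)
```

This one-dimensional grid search over frequency is exactly the kind of search the abstract contrasts with a multi-dimensional joint ML search.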
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
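The ER iteration itself alternates between two constraints: impose the (estimated) Fourier magnitude, then restore the known intensities. A minimal 1-D sketch with an invented signal and mask, assuming the magnitude has already been estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.standard_normal(32)
known = np.ones(32, dtype=bool)
known[10:14] = False                       # the "missing area" of the patch
mag = np.abs(np.fft.fft(true))             # magnitude assumed known/estimated

def fourier_residual(x):
    return np.linalg.norm(np.abs(np.fft.fft(x)) - mag)

x = np.where(known, true, 0.0)             # init: zeros in the missing area
res0 = fourier_residual(x)
for _ in range(500):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))     # impose the Fourier magnitude
    x = np.real(np.fft.ifft(X))            # back to signal domain (real signal)
    x[known] = true[known]                 # impose the known intensities
res1 = fourier_residual(x)                 # ER error is non-increasing
```

The monitored residual `res1` is the kind of converged error the method uses to rank candidate known patches.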
A modified micrometeorological gradient method for estimating O3 dry deposition over a forest canopy
Directory of Open Access Journals (Sweden)
Z. Y. Wu
2015-01-01
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in air–surface exchange fluxes estimated with existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of the relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated against eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993–2000. All three gradient methods (AGM, MBR and MGM) produced diurnal cycles of O3 dry deposition velocity (Vd(O3)) similar to the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR and MGM values being 2.28, 1.45 and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters of the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions caused an uncertainty of only a few percent. The new method provides an alternative approach for monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.
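The AGM baseline that the MGM is compared against can be sketched in its simplest (neutral-stability) form, where the flux follows from a two-level concentration difference. All concentrations, heights, friction velocity and displacement height below are illustrative, not Harvard Forest data:

```python
import math

def agm_flux(c1, z1, c2, z2, ustar, d=0.0, k=0.40):
    """Neutral-stability aerodynamic gradient flux between heights z1 < z2 (m):
    F = -k u* (c2 - c1) / ln((z2-d)/(z1-d)); negative flux means deposition
    (concentration increasing with height). k is the von Karman constant."""
    return -k * ustar * (c2 - c1) / math.log((z2 - d) / (z1 - d))

# O3 (ug/m^3) increasing with height above a forest canopy -> deposition
flux = agm_flux(c1=38.0, z1=24.0, c2=40.0, z2=29.0, ustar=0.5, d=18.0)
vd = -flux / 40.0    # deposition velocity (m/s) referenced to the upper level
```

With these toy numbers the deposition velocity comes out near 1.6 cm/s, a plausible daytime magnitude; the MGM differs in placing one measurement level below the canopy top, where the gradient is larger.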
Hasuo, Emi; Nakajima, Yoshitaka; Tomimatsu, Erika; Grondin, Simon; Ueda, Kazuo
2014-03-01
A time interval between the onset and the offset of a continuous sound (filled interval) is often perceived to be longer than a time interval between two successive brief sounds (empty interval) of the same physical duration. The present study examined whether and how this phenomenon, sometimes called the filled duration illusion (FDI), occurs for short time intervals (40-520 ms). The investigation was conducted with the method of adjustment (Experiment 1) and the method of magnitude estimation (Experiment 2). When the method of adjustment was used, the FDI did not appear for the majority of the participants, but it appeared clearly for some participants. In the latter case, the amount of the FDI increased as the interval duration lengthened. The FDI was more likely to occur with magnitude estimation than with the method of adjustment. The participants who showed clear FDI with one method did not necessarily show such clear FDI with the other method.
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods, (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts), are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
MOTION ERROR ESTIMATION OF 5-AXIS MACHINING CENTER USING DBB METHOD
Institute of Scientific and Technical Information of China (English)
CHEN Huawei; ZHANG Dawei; TIAN Yanling; ICHIRO Hagiwara
2006-01-01
In order to estimate the motion errors of a 5-axis machining center, the double ball bar (DBB) method is adopted to realize the diagnosis procedure. The motion error sources of rotary axes in a 5-axis machining center comprise the alignment errors of the rotary axes and angular errors due to various factors, e.g. the inclination of the rotary axes. From a sensitivity viewpoint, each motion error may have a particular sensitive direction in which the deviation of the DBB error trace arises from only some specific error sources. The model of the DBB error trace is established according to spatial geometry theory. Accordingly, the sensitive direction of each motion error source is made clear through numerical simulation, and these directions are used as reference patterns for rotational error estimation. An estimation method is proposed to easily estimate the motion error sources of rotary axes in a quantitative manner. To verify the proposed DBB method for rotational error estimation, experimental tests are carried out on a 5-axis machining center M-400 (MORISEIKI). The effect of the mismatch of the DBB is also studied to guarantee the estimation accuracy. From the experimental data, it is noted that the proposed estimation method for the 5-axis machining center is feasible and effective.
Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries
Directory of Open Access Journals (Sweden)
Zhongyue Zou
2014-08-01
Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of SOC estimation, such as the estimation error distribution, the estimation rise time and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the driving conditions of an electrified vehicle, and a genetic algorithm is utilized to identify the optimal parameters of the Li-ion battery model. Simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed with respect to the four aspects to evaluate the four model-based SOC estimation methods.
Parent, Maxim; Niezgoda, Helen; Keller, Heather H; Chambers, Larry W; Daly, Shauna
2012-10-01
A variety of methods are available for assessing diet; however, many are impractical for large research studies in an institutional environment. Technology, specifically digital imaging, can make diet estimations more feasible for research. Our goal was to compare a digital imaging method of estimating regular and modified-texture main plate food waste with traditional on-site visual estimations, in a continuing and long-term care setting using a meal-tray delivery service. Food waste was estimated for participants on regular (n=36) and modified-texture (n=42) diets. A tracking system to ensure collection and digital imaging of all main meal plates was developed. Four observers used a modified Comstock method to assess food waste for vegetables, starches, and main courses on 551 main meal plates. Intermodal, inter-rater, and intra-rater reliability were calculated using intraclass correlation for absolute agreement. Intermodal reliability was based on one rater's assessments. The digital imaging method results were in high agreement with the real-time visual method for both regular and modified-texture food (intraclass correlation=0.90 and 0.88, respectively). Agreements between observers for regular diets were higher than those for modified-texture food (range=0.91 to 0.94; 0.82 to 0.91, respectively). Intra-rater agreements were very high for both regular and modified-texture food (range=0.93 to 0.99; 0.91 to 0.98). The digital imaging method is a reliable alternative to estimating regular and modified-texture food waste for main meal plates when compared with real-time visual estimation. Color, shape, reheating, mixing, and use of sauces made modified-texture food waste slightly more difficult to estimate, regardless of estimation method. Copyright © 2012 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Bryan, M.F.; Piepel, G.F.; Simpson, D.B.
1996-03-01
The high-level waste (HLW) vitrification plant at the Hanford Site was being designed to vitrify transuranic and high-level radioactive waste in borosilicate glass. Each batch of plant feed material must meet certain requirements related to plant performance, and the resulting glass must meet requirements imposed by the Waste Acceptance Product Specifications. Properties of a process batch and the resulting glass are largely determined by the composition of the feed material. Empirical models are being developed to estimate some property values from data on feed composition. Methods for checking and documenting compliance with feed and glass requirements must account for various types of uncertainties. This document focuses on the estimation, manipulation, and consequences of composition uncertainty, i.e., the uncertainty inherent in estimates of feed or glass composition. Three components of composition uncertainty will play a role in estimating and checking feed and glass properties: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. In this document, composition uncertainty and its components are treated in terms of variances and variance components for univariate situations, and covariance matrices and covariance components for multivariate situations. The importance of variance and covariance components stems from their crucial role in properly estimating the uncertainty in values calculated from a set of observations on a process batch. Two general types of methods for estimating uncertainty are discussed: (1) methods based on data, and (2) methods based on knowledge, assumptions, and opinions about the vitrification process. Data-based methods for estimating variances and covariance matrices are well known. Several types of data-based methods exist for the estimation of variance components; those based on the statistical method of analysis of variance are discussed, as are the strengths and weaknesses of this approach.
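The data-based separation of batch-to-batch from within-batch variance can be sketched with the one-way random-effects ANOVA (method-of-moments) estimator: the within-batch component is the within mean square, and the batch component is (MSB − MSW)/n. The composition measurements below are invented for illustration:

```python
import statistics

# Balanced design: k batches, n replicate composition measurements per batch
batches = [
    [10.1, 10.3, 10.2],
    [11.0, 11.2, 11.1],
    [10.6, 10.4, 10.5],
]
k = len(batches)                    # number of batches
n = len(batches[0])                 # replicates per batch
means = [statistics.fmean(b) for b in batches]
grand = statistics.fmean(means)     # grand mean (valid because design is balanced)

# Between-batch and within-batch mean squares
msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
msw = sum((x - m) ** 2 for b, m in zip(batches, means) for x in b) / (k * (n - 1))

sigma_within2 = msw                          # within-batch variance component
sigma_batch2 = max((msb - msw) / n, 0.0)     # batch-to-batch component, truncated at 0
```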
Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components
Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.
2015-03-01
Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased arrays and specific adaptive algorithms that allow flaws to be imaged under a complex and unknown interface. In this context, this paper presents a comparative study of the different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method in which flaws may be fully imaged with half-skip modes. In this method, both the surface and the back wall of a complex structure are estimated before imaging the flaws.
Development of rapid method for the estimation of reactive silica in fly ash
Energy Technology Data Exchange (ETDEWEB)
N.K. Katyal; J.M. Sharma; A.K. Dhawan; M.M. Ali; K. Mohan [National Council for Cement and Building Materials, Haryana (India)
2008-01-15
Reactive silica (SiO{sub 2}) is an important component of fly ash controlling its use in cement and building materials. A literature search shows that the available methods for the estimation of reactive silica are very time consuming and tedious: the conventional gravimetric method described in the standards requires a minimum of four days. In the current paper a rapid volumetric method has been developed that makes it possible to estimate reactive silica in fly ash in 4 h. In addition, a gravimetric method has been developed which takes two and a half days.
Konovalenko, Ivan; Kuznetsova, Elena
2015-02-01
In this paper, we consider the problem of estimating an object's velocity from a video stream by comparing three new velocity estimation methods: a vertical edge algorithm, a modified Lucas-Kanade method, and a feature points algorithm. As an applied example, the task of automatically evaluating vehicle velocities from video streams on toll roads is chosen. We took videos from cameras mounted on toll roads and annotated them to determine the true velocities. The proposed methods are compared with each other in terms of correct velocity detection. The practical relevance of this paper lies in the implementation of these methods and in overcoming the difficulties of their realization.
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next, we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches to the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables, we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
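The simplest of the estimators discussed above, k-NN probability estimation, fits in a few lines: the estimated P(class | x) is the fraction of the k nearest training points carrying that label. The 1-D training set below is a toy example (illustrative values):

```python
def knn_proba(train, x, k, classes=(0, 1)):
    """train: list of (feature, label) pairs; returns {label: estimated P(label | x)}."""
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return {c: sum(1 for _, y in neighbors if y == c) / k for c in classes}

train = [(0.0, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (1.0, 1)]
proba = knn_proba(train, x=0.55, k=3)   # nearest neighbors at 0.6, 0.4, 0.8
```

Consistency of this estimator (k growing with sample size, k/n → 0) is exactly the kind of result the nonparametric-regression embedding above delivers.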
Damavandi, Mohsen; Barbier, Franck; Leboucher, Julien; Farahpour, Nader; Allard, Paul
2009-09-01
Body segment moments of inertia (MOI) are estimated from data obtained from cadavers or living individuals. Though these methods can be valid for the general population, they are usually limited when applied to special populations (e.g., the obese). The effects of geometric methods (photogrammetry) and of two new methods, namely inverse dynamics and angular momentum, on MOI estimates in individuals of different body mass index (BMI) were compared to gain insight into their relative accuracy. The de Leva (1996) method was chosen as a criterion to determine how these methods behaved. The MOI methods did not differ in individuals with a normal BMI. On average, MOI values obtained with inverse dynamics and angular momentum were, respectively, 13.2% lower for lean participants and 17.9% higher for obese subjects than those obtained from the de Leva method. The average Pearson coefficient of correlation between the MOI values estimated by the de Leva method and the other methods was 0.76 (+/-0.31). Since the proposed methods make no assumptions about mass distribution and segment geometry, they appear to be more sensitive to changes in body morphology when estimating whole-body MOI values in lean and obese subjects.
Sampling Frequency Offset Estimation Methods for DVB-T/H Systems
Directory of Open Access Journals (Sweden)
Kyung Hoon Won
2010-03-01
A precise estimation and compensation of the SFO (Sampling Frequency Offset) is an important issue in OFDM (Orthogonal Frequency Division Multiplexing) systems, because a sampling frequency mismatch between the transmitter and the receiver dramatically degrades system performance due to the loss of orthogonality between the subcarriers. However, the conventional method suffers serious degradation of SFO estimation performance in low-SNR (Signal to Noise Ratio) or large-Doppler-frequency environments. Therefore, in this paper, we propose two SFO estimation methods that achieve stable operation in low-SNR and large-Doppler-frequency environments. The proposed SFO estimation/compensation methods are mainly specialized for the DVB (Digital Video Broadcasting) system, and we verified through extensive simulations that the proposed methods offer good performance and stable operation.
SAR images classification method based on Dempster-Shafer theory and kernel estimate
Institute of Scientific and Technical Information of China (English)
He Chu; Xia Guisong; Sun Hong
2007-01-01
To study scene classification in Synthetic Aperture Radar (SAR) images, a novel method based on kernel estimation, the Markov context and Dempster-Shafer evidence theory is proposed. Initially, a nonparametric Probability Density Function (PDF) estimation method is introduced to describe the scenes of SAR images. Then, under the Markov context, both the determinate PDF and the kernel estimation method are adopted, respectively, to form a primary classification. Next, the primary classification results are fused using evidence theory in an unsupervised way to obtain the scene classification. Finally, a regularization step is applied, in which an iterated maximum selection approach is introduced to control fragments and correct errors in the classification. Use of the kernel estimate and evidence theory makes it possible to describe complicated scenes with little prior knowledge and to eliminate the ambiguities of the primary classification results. Experimental results on real SAR images illustrate a rather impressive performance.
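The nonparametric PDF estimator underlying such a scheme is typically a kernel density estimate; a minimal Gaussian-kernel version, with invented data points and bandwidth:

```python
import math

def kde(data, x, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    norm = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / norm

data = [1.0, 1.2, 0.8, 1.1, 0.9]
density = kde(data, 1.0, h=0.3)    # highest near the bulk of the data
```

No parametric form is assumed for the scene statistics, which is what lets the method cope with complicated scenes given little prior knowledge.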
Cavuoti, Stefano; Brescia, Massimo; Vellucci, Civita; Tortora, Crescenzo; Longo, Giuseppe
2016-01-01
A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z's). A plethora of methods have been developed, based either on template model fitting or on empirical explorations of the photometric parameter space. Machine learning based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z Probability Density Function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use...
Tian, Li-Ping; Liu, Lizhi; Wu, Fang-Xiang
2010-01-01
Derived from biochemical principles, molecular biological systems can be described by a group of differential equations. Generally, these differential equations contain fractional functions plus polynomials (which we call an improper fractional model) as reaction rates. As a result, molecular biological systems are nonlinear in both parameters and states. It is well known that estimating parameters that enter a model nonlinearly is challenging. However, in fractional functions both the denominator and the numerator are linear in the parameters, and polynomials are also linear in the parameters. Based on this observation, we develop an iterative linear least squares method for estimating parameters in biological systems modeled by improper fractional functions. The basic idea is to transform the optimization of a nonlinear least squares objective function into iteratively solving a sequence of linear least squares problems. The developed method is applied to the estimation of parameters in a metabolism system. The simulation results show the superior performance of the proposed method for estimating parameters in such molecular biological systems.
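The linearization trick can be sketched on the simplest fractional rate, a Michaelis–Menten-like law v = a·x/(b + x): multiplying through by the denominator gives a·x − v·b = v·x, which is linear in (a, b), and reweighting by the current denominator estimate approximates the original nonlinear objective. Toy, noise-free data below (with noise, the reweighting iterations are what matter); this is a sketch of the idea, not the paper's exact algorithm:

```python
import numpy as np

a_true, b_true = 2.0, 0.5
x = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
v = a_true * x / (b_true + x)            # "measured" rates (noise-free)

b_est = 1.0                              # initial guess for the denominator
for _ in range(20):
    w = 1.0 / (b_est + x)                # weights from the current estimate
    # v*(b + x) = a*x  =>  a*x - v*b = v*x, linear in (a, b)
    A = np.column_stack([x, -v]) * w[:, None]
    rhs = (v * x) * w
    a_est, b_est = np.linalg.lstsq(A, rhs, rcond=None)[0]
```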
Method of estimating mechanical stress on Si body of MOSFET using drain–body junction current
Seo, Ji-Hoon; Kim, Gang-Jun; Son, Donghee; Lee, Nam-Hyun; Kang, Bongkoo
2017-01-01
A simple and accurate method of estimating the mechanical stress σ on the Si body of a MOSFET is proposed. This method measures the doping concentration of the body, N_d, and the onset voltage V_hl for high-level injection of the drain–body junction, uses N_d, the ideality factor η, and the Fermi potential φ_f ≈ V_hl/(2η) to calculate the intrinsic carrier concentration n_i of the Si body, and then uses the calculated n_i to obtain the bandgap energy E_g of the Si body. σ is estimated from E_g using deformation potential theory. The estimates of σ agree well with those obtained using previous methods. The proposed method requires one MOSFET, whereas the others require at least two MOSFETs, so the proposed method can give an absolute measurement of σ on the Si body of a MOSFET.
A novel method for estimating the initial rotor position of PM motors without the position sensor
Energy Technology Data Exchange (ETDEWEB)
Rostami, Alireza; Asaei, Behzad [School of Electrical and Computer Engineering, Faculty of Engineering, University of Tehran, Tehran (Iran)
2009-08-15
Permanent magnet (PM) motors have been used widely in industrial applications. However, the need for a position sensor is a drawback of their control system. Sensorless methods using the back-EMF (electromotive force) cannot detect the rotor position at a standstill; recently, a few methods have been proposed to detect the initial rotor position, but they have high estimation errors, which reduce the starting torque of the motor. Therefore, in this paper, a novel method to detect the initial rotor position of PM motors is proposed. First, using a space vector model, the response of the stator current space vector to the saturation of the stator core is analyzed; then a novel method based on the saturation effect is presented that estimates the initial rotor position with a maximum estimation error of less than 3.8. Simulation results confirm that this method is effective and precise, and that variation of the motor parameters does not affect its precision. (author)
Institute of Scientific and Technical Information of China (English)
YU Xin-yi; GAO Hai-bo; DENG Zong-quan
2009-01-01
Based on the study of a passive articulated rover, a complete suspension kinematics model from wheel to inertial reference frame is presented, which uses the D-H method of manipulators and a representation with Euler angles of pitch, roll and yaw. An improved contact model is adopted, aimed at loose and rough lunar terrain. Using this kinematics model and a numerical continuous and discrete Newton's method with an iterative factor, a numerical method for estimating the kinematic parameters of articulated rovers on loose and rough terrain is constructed. To demonstrate this numerical method, an example of a two-torsion-bar rocker-bogie lunar rover with eight wheels is presented. Simulation results show that the numerical method for estimating the kinematic parameters of articulated rovers based on the improved contact model can improve the precision of kinematic estimation on loose and rough terrain and decrease the errors caused by contact models established under general hypotheses.
Estimation of Large-Scale Implicit Models Using 2-Stage Methods
Directory of Open Access Journals (Sweden)
Rolf Henriksen
1985-01-01
The problem of estimating large-scale implicit (non-recursive) models by two-stage methods is considered. The first stage of the methods is used to construct or estimate an explicit form of the total model, by constructing a minimal stochastic realization of the system. This model is subsequently used in the second stage to generate instrumental variables for the purpose of estimating each sub-model separately. The latter stage can be carried out using a generalized least squares method, but most emphasis is put on utilizing decentralized filtering algorithms and a prediction error formulation. A note on the connection between the original TSLS method (two-stage least squares) and stochastic realization is also made.
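The instrumental-variable logic of two-stage least squares can be shown on the smallest possible case: a single equation y = β·x + e whose regressor x is correlated with the error e. Stage 1 regresses x on an instrument z; stage 2 regresses y on the fitted values. The synthetic data below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
z = rng.standard_normal(n)        # instrument: drives x, uncorrelated with e
e = rng.standard_normal(n)
x = 0.8 * z + 0.5 * e             # endogenous regressor (correlated with e)
y = 1.5 * x + e                   # true coefficient beta = 1.5

beta_ols = (x @ y) / (x @ x)      # biased upward because E[x e] > 0

gamma = (z @ x) / (z @ z)         # stage 1: regress x on z
x_hat = gamma * z                 # fitted (exogenous part of) x
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)   # stage 2: consistent for 1.5
```

The large-scale version above replaces this single regression per stage with a stochastic realization of the full model and decentralized estimation of each sub-model, but the role of the first stage, manufacturing instruments uncorrelated with the equation errors, is the same.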
A comparison of some methods to estimate the fatigue life of plain dents
Energy Technology Data Exchange (ETDEWEB)
Martins, Ricardo R.; Noronha Junior, Dauro B. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)
2009-12-19
This paper describes a method under development at the PETROBRAS R&D Center (CENPES) to estimate the fatigue life of plain dents. The method uses API Publication 1156 as a basis to estimate the fatigue life of dome-shaped plain dents and the Pipeline Defect Assessment Manual (PDAM) approach to take into account the uncertainty inherent in the fatigue phenomenon. The CENPES method, together with an empirical and a semi-empirical method available in the literature, was employed to estimate the fatigue lives of 10 plain dent specimens from Year 1 of an ongoing test program carried out by BMT Fleet Technology Limited with the support of the Pipeline Research Council International (PRCI). The results obtained with the different methods are presented and compared. Furthermore, some details are given on the numerical methodology proposed by PETROBRAS that has been used to describe the behavior of plain dents. (author)
An estimation method for InSAR interferometric phase combined with image auto-coregistration
Institute of Scientific and Technical Information of China (English)
LI Hai; LI Zhenfang; LIAO Guisheng; BAO Zheng
2006-01-01
In this paper we propose a method to estimate the InSAR interferometric phase of the steep terrain based on the terrain model of local plane by using the joint subspace projection technique proposed in our previous paper. The method takes advantage of the coherence information of neighboring pixel pairs to auto-coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can auto-coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide accurate estimate of the interferometric phase (interferogram) of very steep terrain even if the coregistration error reaches one pixel. The effectiveness of the method is verified via simulated data and real data.
Kai, Bo; Li, Runze; Zou, Hui
2011-02-01
The complexity of semiparametric models poses new challenges to statistical inference and model selection that frequently arise from real applications. In this work, we propose new estimation and variable selection procedures for the semiparametric varying-coefficient partially linear model. We first study quantile regression estimates for the nonparametric varying-coefficient functions and the parametric regression coefficients. To achieve nice efficiency properties, we further develop a semiparametric composite quantile regression procedure. We establish the asymptotic normality of proposed estimators for both the parametric and nonparametric parts and show that the estimators achieve the best convergence rate. Moreover, we show that the proposed method is much more efficient than the least-squares-based method for many non-normal errors and that it only loses a small amount of efficiency for normal errors. In addition, it is shown that the loss in efficiency is at most 11.1% for estimating varying coefficient functions and is no greater than 13.6% for estimating parametric components. To achieve sparsity with high-dimensional covariates, we propose adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and prove that the methods possess the oracle property. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. Finally, we apply the new methods to analyze the plasma beta-carotene level data.
Ariza, Adriana Alexandra Aparicio; Ayala Blanco, Elizabeth; García Sánchez, Luis Eduardo; García Sánchez, Carlos Eduardo
2015-06-01
Natural gas is a mixture that contains hydrocarbons and other compounds, such as CO2 and N2. Natural gas composition is commonly measured by gas chromatography, and this measurement is important for the calculation of some thermodynamic properties that determine its commercial value. The estimation of uncertainty in chromatographic measurement is essential for an adequate presentation of the results and a necessary tool for supporting decision making. Various approaches have been proposed for the uncertainty estimation in chromatographic measurement. The present work is an evaluation of three approaches of uncertainty estimation, where two of them (guide to the expression of uncertainty in measurement method and prediction method) were compared with the Monte Carlo method, which has a wider scope of application. The aforementioned methods for uncertainty estimation were applied to gas chromatography assays of three different samples of natural gas. The results indicated that the prediction method and the guide to the expression of uncertainty in measurement method (in the simple version used) are not adequate to calculate the uncertainty in chromatography measurement, because uncertainty estimations obtained by those approaches are in general lower than those given by the Monte Carlo method.
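A minimal sketch of the Monte Carlo approach that the comparison uses as its reference may help: sample the input quantities from their uncertainty distributions, push each draw through the measurement model, and read the combined uncertainty off the output sample. The two-component "heating value" model, its constants, and all uncertainties below are hypothetical illustrations, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurement model: a molar heating value computed from two
# component mole fractions. Constants and uncertainties are illustrative
# only; they are NOT taken from the paper.
def heating_value(x_ch4, x_c2h6):
    return 890.6 * x_ch4 + 1560.7 * x_c2h6  # kJ/mol, assumed constants

N = 100_000
# Draw the inputs from their assumed uncertainty distributions
x_ch4 = rng.normal(0.95, 0.002, N)    # mole fraction, standard uncertainty
x_c2h6 = rng.normal(0.03, 0.001, N)

samples = heating_value(x_ch4, x_c2h6)
mean = samples.mean()                 # best estimate
u = samples.std(ddof=1)               # Monte Carlo combined uncertainty
lo, hi = np.percentile(samples, [2.5, 97.5])  # 95% coverage interval
```

Because the whole distribution of the output is sampled, this approach also works when the model is nonlinear or the output distribution is skewed, which is where the simpler GUM-style propagation tends to underestimate uncertainty.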
Estimation Methods for the Multicollinearity Problem Combined with High Leverage Data Points
Directory of Open Access Journals (Sweden)
Moawad El-Fallah
2011-01-01
Problem statement: Least Squares (LS) has been the most popular method for estimating the parameters of a model due to its optimal properties and ease of computation. LS regression may be seriously affected by multicollinearity, which is a near-linear dependency between two or more explanatory variables in the regression model. Although LS estimates are unbiased in the presence of multicollinearity, they will be imprecise, with inflated standard errors of the estimated regression coefficients. Approach: In this study, we consider some alternative regression methods for estimating the regression parameters in the presence of multiple high leverage points, which cause the multicollinearity problem. These methods mainly depend on a one-step reweighted least squares, where the initial weight functions are determined by the Diagnostic-Robust Generalized Potentials (DRGP). The alternative methods proposed in this study are called GM-DRGP-L1, GM-DRGP-LTS, M-DRGP, MM-DRGP and DRGP-MM. Results: The empirical results of this study indicated that the DRGP-MM and the GM-DRGP-LTS offer a substantial improvement over other methods for correcting the problems of high leverage points enhancing multicollinearity. Conclusion: The study established that the DRGP-MM and the GM-DRGP-LTS methods are recommended to solve the multicollinearity problem with high leverage data points.
Comparison of mode estimation methods and application in molecular clock analysis
Hedges, S. Blair; Shah, Prachi
2003-01-01
BACKGROUND: Distributions of time estimates in molecular clock studies are sometimes skewed or contain outliers. In those cases, the mode is a better estimator of the overall time of divergence than the mean or median. However, different methods are available for estimating the mode. We compared these methods in simulations to determine their strengths and weaknesses and further assessed their performance when applied to real data sets from a molecular clock study. RESULTS: We found that the half-range mode and robust parametric mode methods have a lower bias than other mode methods under a diversity of conditions. However, the half-range mode suffers from a relatively high variance and the robust parametric mode is more susceptible to bias by outliers. We determined that bootstrapping reduces the variance of both mode estimators. Application of the different methods to real data sets yielded results that were concordant with the simulations. CONCLUSION: Because the half-range mode is a simple and fast method, and produced less bias overall in our simulations, we recommend the bootstrapped version of it as a general-purpose mode estimator and suggest a bootstrap method for obtaining the standard error and 95% confidence interval of the mode.
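As an illustration of the recommended estimator, here is a sketch of a half-range mode in the spirit of Bickel's estimator: repeatedly keep the half-width window containing the most points, then average the last few survivors. The tie-breaking and stopping rules used in the paper may differ from this toy version.

```python
import numpy as np

def half_range_mode(x):
    """Half-range mode (Bickel-style sketch): repeatedly keep the
    half-range window that contains the most points, then average the
    last two survivors. Exact tie-breaking details vary by author."""
    x = np.sort(np.asarray(x, dtype=float))
    while len(x) > 2:
        w = (x[-1] - x[0]) / 2.0
        if w == 0:
            break
        # count points falling in each window [x[i], x[i] + w]
        counts = np.searchsorted(x, x + w, side="right") - np.arange(len(x))
        i = np.argmax(counts)          # densest half-range window
        x = x[i:i + counts[i]]
    return x.mean()
```

Bootstrapping it, as the authors suggest, amounts to calling this function on resampled copies of the data and taking the mean and standard deviation of the resulting estimates.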
Asiri, Sharefa M.
2017-10-08
Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from a lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on some structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), which includes its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters
Methods for Estimating Mean Annual Rate of Earthquakes in Moderate and Low Seismicity Regions
Institute of Scientific and Technical Information of China (English)
Peng Yanju; Zhang Lifang; Lv Yuejun; Xie Zhuojuan
2012-01-01
Two kinds of methods for determining seismic parameters are presented, namely, the potential seismic source zoning method and the grid spatial smoothing method. The Gaussian smoothing method and the modified Gaussian smoothing method are described in detail, and a comprehensive analysis of the advantages and disadvantages of these methods is made. Then, we take central China as the study region, and use the Gaussian smoothing method and the potential seismic source zoning method to build seismic models to calculate the mean annual seismic rate. Seismic hazard is calculated using the probabilistic seismic hazard analysis method to construct the ground motion acceleration zoning maps. The differences between the maps and these models are discussed and the causes are investigated. The results show that the spatial smoothing method is suitable for estimating the seismic hazard over moderate and low seismicity regions or the hazard caused by background seismicity, while the potential seismic source zoning method is suitable for estimating the seismic hazard where the seismotectonics are well defined. Combining the spatial smoothing method and the potential seismic source zoning method, with an integrated account of the seismicity and known seismotectonics, is a feasible approach to estimate the seismic hazard in moderate and low seismicity regions.
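The Gaussian smoothing step can be sketched as a Frankel-style distance-weighted average of gridded earthquake counts; this toy version omits the paper's modification and any magnitude weighting, and the grid and correlation distance are illustrative assumptions.

```python
import numpy as np

def gaussian_smooth_counts(xy, counts, c):
    """Frankel-style Gaussian spatial smoothing of gridded earthquake
    counts: each cell's smoothed count is a distance-weighted average of
    all cells with kernel exp(-d^2 / c^2), where c is the correlation
    distance. Sketch only; the 'modified' variant is not reproduced."""
    # pairwise squared distances between cell centers, shape (n, n)
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / c**2)
    return (w * counts[None, :]).sum(1) / w.sum(1)
```

Smoothing spreads isolated epicenters into their neighborhood, which is exactly why the approach suits diffuse background seismicity better than well-delineated source zones.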
Improved Estimation of Subsurface Magnetic Properties using Minimum Mean-Square Error Methods
Energy Technology Data Exchange (ETDEWEB)
Saether, Bjoern
1997-12-31
This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior information, i.e., the geometries of the rock bodies and their susceptibilities. Uncertainties may be included into the estimation process. The computation exploits the subtle information inherent in magnetic data sets in an optimal way in order to tune the initial susceptibility model. The MMSE method includes a statistical framework that allows the computation not only of the estimated susceptibilities, given by the magnetic measurements, but also of the associated reliabilities of these estimations. This allows the evaluation of the reliabilities in the estimates before any measurements are made, an option, which can be useful for survey planning. The MMSE method has been tested on a synthetic data set in order to compare the effects of various prior information. When more information is given as input to the estimation, the estimated models come closer to the true model, and the reliabilities in their estimates are increased. In addition, the method was evaluated using a real geological model from a North Sea oil field, based on seismic data and well information, including susceptibilities. Given that the geometrical model is correct, the observed mismatch between the forward calculated magnetic anomalies and the measured anomalies causes changes in the susceptibility model, which may show features of interesting geological significance to the explorationists. Such magnetic anomalies may be due to small fractures and faults not detectable on seismic, or local geochemical changes due to the upward migration of water or hydrocarbons. 76 refs., 42 figs., 18 tabs.
A weighted least-squares method for parameter estimation in structured models
Galrinho, Miguel; Rojas, Cristian R.; Hjalmarsson, Håkan
2014-01-01
Parameter estimation in structured models is generally considered a difficult problem. For example, the prediction error method (PEM) typically gives a non-convex optimization problem, while it is difficult to incorporate structural information in subspace identification. In this contribution, we revisit the idea of iteratively using the weighted least-squares method to cope with the problem of non-convex optimization. The method is, essentially, a three-step method. First, a high order least...
NEW METHOD TO ESTIMATE SCALING OF POWER-LAW DEGREE DISTRIBUTION AND HIERARCHICAL NETWORKS
Institute of Scientific and Technical Information of China (English)
YANG Bo; DUAN Wen-qi; CHEN Zhong
2006-01-01
A new method and corresponding numerical procedure are introduced to estimate the scaling exponents of the power-law degree distribution and the hierarchical clustering function for complex networks. This method can overcome the bias and inaccuracy of the graphical linear fitting methods commonly used in current network research. Furthermore, it is verified to have higher goodness-of-fit than graphical methods by comparing the KS (Kolmogorov-Smirnov) test statistics for 10 CNN (Connecting Nearest-Neighbor) networks.
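The abstract does not spell out the estimator, so as a hedged illustration here is the standard maximum-likelihood alternative to graphical log-log line fitting for a continuous power law, together with the KS statistic used to judge goodness-of-fit; the paper's actual procedure may differ.

```python
import numpy as np

def powerlaw_mle_alpha(x, xmin):
    """Continuous power-law MLE (Clauset-style), an alternative to
    graphical log-log fitting, which is known to be biased."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()

def ks_distance(x, xmin, alpha):
    """KS statistic between the empirical tail and the fitted CDF
    P(X <= x) = 1 - (x / xmin)^(1 - alpha)."""
    x = np.sort(x[x >= xmin])
    emp = np.arange(1, len(x) + 1) / len(x)
    fit = 1.0 - (x / xmin) ** (1.0 - alpha)
    return np.abs(emp - fit).max()
```

On synthetic data drawn from a known power law, the MLE recovers the exponent closely while a least-squares fit to the log-log histogram typically does not, which is the bias the abstract refers to.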
Magnard, Christophe; Small, David; Meier, Erich
2015-01-01
The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single-pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) an ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the interme...
Comparison of methods for estimating motor unit firing rate time series from firing times.
Liu, Lukai; Bonato, Paolo; Clancy, Edward A
2016-12-01
The central nervous system regulates recruitment and firing of motor units to modulate muscle tension. Estimation of the firing rate time series is typically performed by decomposing the electromyogram (EMG) into its constituent firing times, then lowpass filtering a constituent train of impulses. Little research has examined the performance of different estimation methods, particularly in the inevitable presence of decomposition errors. The study of electrocardiogram (ECG) and electroneurogram (ENG) firing rate time series presents a similar problem, and has applied novel simulation models and firing rate estimators. Herein, we adapted an ENG/ECG simulation model to generate realistic EMG firing times derived from known rates, and assessed various firing rate time series estimation methods. ENG/ECG-inspired rate estimation worked exceptionally well when EMG decomposition errors were absent, but degraded unacceptably with decomposition error rates of ⩾1%. Typical EMG decomposition error rates-even after expert manual review-are 3-5%. At realistic decomposition error rates, more traditional EMG smoothing approaches performed best, when optimal smoothing window durations were selected. This optimal window was often longer than the 400ms duration that is commonly used in the literature. The optimal duration decreased as the modulation frequency of firing rate increased, average firing rate increased and decomposition errors decreased. Examples of these rate estimation methods on physiologic data are also provided, demonstrating their influence on measures computed from the firing rate estimate.
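A sketch of the traditional smoothing approach discussed above, assuming the decomposition has already produced firing times: impulses are placed at the firing times and filtered with a unit-area lowpass window. The Hann window shape and the 400 ms default are illustrative choices, not the paper's prescription.

```python
import numpy as np

def firing_rate(spike_times, fs, duration_s, win_s=0.4):
    """Smoothed firing-rate estimate: unit impulses at the decomposed
    firing times, lowpass filtered with a unit-area Hann window
    (default 400 ms, the common literature duration that the study
    finds is often too short). Edge handling is kept simple ('same'
    convolution); this is a sketch, not the authors' implementation."""
    n = int(round(duration_s * fs))
    train = np.zeros(n)
    idx = np.round(np.asarray(spike_times) * fs).astype(int)
    train[idx[idx < n]] = 1.0
    win = np.hanning(int(round(win_s * fs)))
    win /= win.sum() / fs          # unit area => output in spikes/s
    return np.convolve(train, win, mode="same")
```

For a steady 10 Hz train the estimate hovers around 10 spikes/s; a decomposition error (a missed or spurious firing) perturbs the estimate over one full window length, which is why window duration trades off error tolerance against tracking of rate modulation.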
Estimation of Oceanic Eddy Viscosity Profile and Wind Stress Drag Coefficient Using Adjoint Method
Directory of Open Access Journals (Sweden)
Qilin Zhang
2015-01-01
The adjoint method is used to assimilate pseudo-observations to simultaneously estimate the OEVP and the WSDC in an oceanic Ekman layer model. Five groups of experiments are designed to investigate the influences that the optimization algorithms, step-length, inverse integral time of the adjoint model, prescribed vertical distribution of eddy viscosity, and regularization parameter exert on the inversion results. Experimental results show that the best estimation results are obtained with the GD algorithm; the best estimation results are obtained when the step-length is equal to 1 in Group 2; in Group 3, 8 days of inverse integral time yields the best estimation results, and good assimilation efficiency is achieved by increasing iteration steps when the inverse integral time is reduced; in Group 4, the OEVP can be estimated for some specific distributions; however, when the VEVCs increase along with depth at the bottom of the water column, the estimation results are relatively poor. For this problem, we use an extrapolation method to deal with the VEVCs in layers in which the estimation results are poor. The regularization method with an appropriate regularization parameter can indeed improve the experimental results to some extent. In all experiments in Groups 2-3, the WSDCs are inverted successfully within 100 iterations.
SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation
Energy Technology Data Exchange (ETDEWEB)
Zhao, C; Jin, M [University of Texas at Arlington, Arlington, TX (United States); Ouyang, L; Wang, J [UT Southwestern Medical Center at Dallas, Dallas, TX (United States)
2015-06-15
Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) with different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of estimated scatter serves as a quantitative measure for the performance of different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of PSF and increased noise. The inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) can achieve 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation
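Of the three methods compared, Richardson-Lucy is the simplest to sketch in one dimension; this toy version (not the authors' code) shows the multiplicative update that recovers a signal blurred by a known, normalized PSF.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=50):
    """Richardson-Lucy deconvolution in 1-D (illustrative sketch):
    iteratively rescale the estimate by the back-projected ratio of the
    observed signal to the re-blurred estimate. Assumes psf sums to 1;
    the update preserves non-negativity."""
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        reblur = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(reblur, 1e-12)  # avoid divide-by-zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est
```

On a synthetic blurred step, the iterated estimate lands measurably closer to the true signal than the blurred observation itself, mirroring the RMSE improvement reported over the direct method.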
Directory of Open Access Journals (Sweden)
Hamid Reza Khalkhali
2016-09-01
Background: Often there is no access to a sufficient sample size to estimate prevalence using the direct estimator in all areas. The aim of this study was to compare the small-area Bayesian method and the direct method in estimating the prevalence of steatosis in obese and overweight children. Materials and Methods: This cross-sectional study was conducted on 150 overweight and obese children aged 2 to 15 years referred to the children's digestive clinic of Urmia University of Medical Sciences, Iran, in 2013. After body mass index (BMI) calculation, overweight and obese children were assessed with primary obesity screening tests. Children with steatosis confirmed by abdominal ultrasonography were then referred to the laboratory for further tests. Steatosis prevalence was estimated by the direct and Bayesian methods, and their efficiency was evaluated using the Jackknife mean-square error method. The study data were analyzed using the OpenBUGS 3.1.2 and R 2.15.2 software. Results: The findings indicated that estimates of steatosis prevalence in children using the Bayesian and direct methods were between 0.3098 to 0.493 and 0.355 to 0.560, respectively, in health districts; 0.3098 to 0.502, and 0.355 to 0.550 in education districts; 0.321 to 0.582, and 0.357 to 0.615 in age groups; 0.313 to 0.429, and 0.383 to 0.536 in sex groups. In general, according to the results, the mean-square error of the Bayesian estimation was smaller than that of the direct estimation (P
A Comparison of De-noising Methods for Differential Phase Shift and Associated Rainfall Estimation
Institute of Scientific and Technical Information of China (English)
胡志群; 刘察平; 吴林林; 魏庆
2015-01-01
Measured differential phase shift ΦDP is known to be a noisy, unstable polarimetric radar variable, such that the quality of ΦDP data has a direct impact on specific differential phase shift KDP estimation and, subsequently, on KDP-based rainfall estimation. Over the past decades, many ΦDP de-noising methods have been developed; however, the de-noising effects of these methods and their impact on KDP-based rainfall estimation lack comprehensive comparative analysis. In this study, simulated noisy ΦDP data were generated and de-noised by using several methods such as finite-impulse response (FIR), Kalman, wavelet, traditional mean, and median filters. The biases were compared between KDP from simulated and observed ΦDP radial profiles after de-noising by these methods. The results suggest that the complicated FIR, Kalman, and wavelet methods have a better de-noising effect than the traditional methods. After ΦDP was de-noised, the accuracy of the KDP-based rainfall estimation increased significantly based on the analysis of three actual rainfall events. The improvement in estimation was more obvious when KDP was estimated with ΦDP de-noised by the Kalman, FIR, and wavelet methods when the average rainfall was heavier than 5 mm h−1. However, the improvement was not significant when the precipitation intensity further increased to a rainfall rate beyond 10 mm h−1. The performance of wavelet analysis was found to be the most stable of these filters.
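A hedged sketch of the traditional baseline in this comparison: median-filter the ΦDP radial profile, then take KDP as half the range derivative of the smoothed profile. The window length and the synthetic profile in the test are illustrative; the FIR, Kalman, and wavelet filters from the paper are not reproduced here.

```python
import numpy as np

def kdp_from_phidp(phidp_deg, gate_km, win=11):
    """Traditional de-noising of a PhiDP radial profile with a sliding
    median filter, followed by KDP = 0.5 * d(PhiDP)/dr (deg/km).
    Simple sketch; edge gates are handled by edge-padding."""
    pad = win // 2
    padded = np.pad(phidp_deg, pad, mode="edge")
    smooth = np.array([np.median(padded[i:i + win])
                       for i in range(len(phidp_deg))])
    return 0.5 * np.gradient(smooth, gate_km)
```

Because KDP is a derivative, any residual ΦDP noise is amplified, which is why the more elaborate filters in the comparison pay off most at moderate rain rates.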
Directory of Open Access Journals (Sweden)
Karuna
2014-03-01
BACKGROUND: Accurate determination of fetal birth weight prior to delivery can have a significant bearing on management decisions in labor, thereby markedly improving perinatal outcome. OBJECTIVE: To assess fetal weight in term pregnancy by different clinical and ultrasonographic methods. MATERIALS AND METHODS: The study was conducted in the Obstetrics and Gynecology department of Dr. D.Y. Patil Medical College, Pimpri, Pune. Two hundred (200) women were selected by convenience sampling and studied for a period of one year, from October 2011 to September 2013. The fetal weight was estimated a week prior to delivery by both ultrasound and clinical examination, applying various formulas based on ultrasound and clinical parameters. RESULTS: All results arising from the different formulas were studied with descriptive statistics and compared against the actual weight at the time of delivery (gold standard). A box plot was used to differentiate the various methods used for estimation of fetal weight. It shows that the Johnson and Dawn formulas and weight in grams (Dare's formula) correspond with the baseline, indicating that they can predict birth weight correctly, and among these, Johnson showed promising results; formulas like Campbell, Comb's, Hadlock and Warsof did not predict birth weight as accurately as the three clinical methods used. Thus, in our study, clinical methods were better predictors of birth weight than ultrasonographic methods. CONCLUSIONS: Clinical methods for estimating fetal birth weight were found to be simpler and more reliable than ultrasonographic estimation methods, and thus hold importance in day-to-day practice, especially in countries like ours.
Li, Qiang; Wang, Zhongyu; Wang, Zhuoran; Yan, Hu
2015-06-01
A shock tube is usually used to excite the dynamic characteristics of the pressure sensor used in an aircraft. This paper proposes a novel estimation method for determining the dynamic characteristic parameters of the pressure sensor. A preprocessing operation based on Grey Model [GM(1,1)] and bootstrap method (BM) is employed to analyze the output of a calibrated pressure sensor under step excitation. Three sequences, which include the estimated value sequence, upper boundary, and lower boundary, are obtained. The processing methods on filtering and modeling are used to explore the three sequences independently. The optimal estimated, upper boundary, and lower boundary models are then established. The three models are solved, and a group of dynamic characteristic parameters corresponding to the estimated intervals are obtained. A shock tube calibration test consisting of two experiments is performed to validate the performance of the proposed method. The results show that the relative errors of the dynamic characteristic parameters of time and frequency domains do not exceed 9% and 10%, respectively. Moreover, the nominal and estimated values of the parameters fall into the estimated intervals limited by the upper and lower values.
Methods to estimate historical daily streamflow for ungaged stream locations in Minnesota
Lorenz, David L.; Ziegeweid, Jeffrey R.
2016-03-14
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water; however, streamgages cannot be installed at every location where streamflow information is needed. Therefore, methods for estimating streamflow at ungaged stream locations need to be developed. This report presents a statewide study to develop methods to estimate the structure of historical daily streamflow at ungaged stream locations in Minnesota. Historical daily mean streamflow at ungaged locations in Minnesota can be estimated by transferring streamflow data at streamgages to the ungaged location using the QPPQ method. The QPPQ method uses flow-duration curves at an index streamgage, relying on the assumption that exceedance probabilities are equivalent between the index streamgage and the ungaged location, and estimates the flow at the ungaged location using the estimated flow-duration curve. Flow-duration curves at ungaged locations can be estimated using recently developed regression equations that have been incorporated into StreamStats (http://streamstats.usgs.gov/), which is a U.S. Geological Survey Web-based interactive mapping tool that can be used to obtain streamflow statistics, drainage-basin characteristics, and other information for user-selected locations on streams.
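The QPPQ transfer described above can be sketched in a few lines: flows at the index streamgage are mapped to exceedance probabilities through the index flow-duration curve, and those probabilities are mapped back to flows through the ungaged site's (regression-estimated) curve. The piecewise-linear interpolation between curve points is an assumption of this sketch, not a detail from the report.

```python
import numpy as np

def qppq(index_flows, index_fdc, ungaged_fdc):
    """QPPQ sketch: Q (index gage) -> P (exceedance probability) ->
    Q (ungaged site). Each flow-duration curve is a (probability, flow)
    pair of arrays with exceedance probability increasing, so flows are
    decreasing along the curve."""
    p_idx, q_idx = index_fdc
    p_ung, q_ung = ungaged_fdc
    # Q -> P at the index gage (reverse arrays so xp is increasing)
    p = np.interp(index_flows, q_idx[::-1], p_idx[::-1])
    # P -> Q at the ungaged site
    return np.interp(p, p_ung, q_ung)
```

The key assumption the report names, that exceedance probabilities on a given day are equivalent at the two sites, is exactly the middle step of this mapping.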
Si, Weijian; Qu, Xinggen; Liu, Lutao
2014-01-01
A novel direction of arrival (DOA) estimation method in compressed sensing (CS) is presented, in which DOA estimation is considered as the joint sparse recovery from multiple measurement vectors (MMV). The proposed method is obtained by minimizing the modified-based covariance matching criterion, which is acquired by adding penalties according to the regularization method. This minimization problem is shown to be a semidefinite program (SDP) and transformed into a constrained quadratic programming problem for reducing computational complexity which can be solved by the augmented Lagrange method. The proposed method can significantly improve the performance especially in the scenarios with low signal to noise ratio (SNR), small number of snapshots, and closely spaced correlated sources. In addition, the Cramér-Rao bound (CRB) of the proposed method is developed and the performance guarantee is given according to a version of the restricted isometry property (RIP). The effectiveness and satisfactory performance of the proposed method are illustrated by simulation results.
On convergence of the Horn and Schunck optical-flow estimation method.
Mitiche, Amar; Mansouri, Abdol-Reza
2004-06-01
The purpose of this study is to prove convergence results for the Horn and Schunck optical-flow estimation method. Horn and Schunck stated optical-flow estimation as the minimization of a functional. When discretized, the corresponding Euler-Lagrange equations form a linear system of equations. We write this system explicitly and order the equations in such a way that its matrix is symmetric positive definite. This property implies the convergence of the Gauss-Seidel iterative resolution method, but does not afford a conclusion on the convergence of the Jacobi method. However, we prove directly that this method also converges. We also show that the matrix of the linear system is block tridiagonal. The blockwise iterations corresponding to this block tridiagonal structure converge for both the Jacobi and the Gauss-Seidel methods, and the Gauss-Seidel method is faster than the (sequential) Jacobi method.
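The two iterations under discussion can be sketched on a generic symmetric positive definite system (a small tridiagonal matrix here, not the actual Horn-Schunck matrix):

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k),
    where D is the diagonal of A. All components update in parallel
    from the previous iterate."""
    d = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - (A @ x - d * x)) / d
    return x

def gauss_seidel(A, b, iters=200):
    """Gauss-Seidel iteration: sweep the components in order, using
    already-updated values within the sweep; converges for any
    symmetric positive definite A."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x
```

As the abstract notes, positive definiteness alone guarantees Gauss-Seidel convergence but not Jacobi convergence; on this test matrix both converge, with Gauss-Seidel needing fewer sweeps.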
Methods for Estimating Water Withdrawals for Aquaculture in the United States, 2005
Lovelace, John K.
2009-01-01
Aquaculture water use is associated with raising organisms that live in water - such as finfish and shellfish - for food, restoration, conservation, or sport. Aquaculture production occurs under controlled feeding, sanitation, and harvesting procedures primarily in ponds, flow-through raceways, and, to a lesser extent, cages, net pens, and tanks. Aquaculture ponds, raceways, and tanks usually require the withdrawal or diversion of water from a ground or surface source. Most water withdrawn or diverted for aquaculture production is used to maintain pond levels and/or water quality. Water typically is added for maintenance of levels, oxygenation, temperature control, and flushing of wastes. This report documents methods used to estimate withdrawals of fresh ground water and surface water for aquaculture in 2005 for each county and county-equivalent in the United States, Puerto Rico, and the U.S. Virgin Islands by using aquaculture statistics and estimated water-use coefficients and water-replacement rates. County-level data for commercial and noncommercial operations compiled for the 2005 Census of Aquaculture were obtained from the National Agricultural Statistics Service. Withdrawals of water used at commercial and noncommercial operations for aquaculture ponds, raceways, tanks, egg incubators, and pens and cages for alligators were estimated and totaled by ground-water or surface-water source for each county and county equivalent. Use of the methods described in this report, when measured or reported data are unavailable, could result in more consistent water-withdrawal estimates for aquaculture that can be used by water managers and planners to determine water needs and trends across the United States. The results of this study were distributed to U.S. Geological Survey water-use personnel in each State during 2007. Water-use personnel are required to submit estimated withdrawals for all categories of use in their State to the U.S. Geological Survey National
Directory of Open Access Journals (Sweden)
A. J. Komkoua Mbienda
2013-01-01
The Lee-Kesler (LK) and Ambrose-Walton (AW) methods for estimating vapor pressures are tested against experimental data for a set of volatile organic compounds (VOC). The vapor pressure, required to determine the gas-particle partitioning of such organic compounds, is used as a parameter for simulating the dynamics of atmospheric aerosols. Here, we use the structure-property relationships of VOC to estimate vapor pressures. The accuracy of each of the aforementioned methods is also assessed for each class of compounds (hydrocarbons, monofunctionalized, difunctionalized, and tri- and more functionalized volatile organic species). It is found that the best method for each VOC depends on its functionality.
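As an illustration of the corresponding-states approach named above, here is a minimal sketch of the standard Lee-Kesler vapor-pressure correlation (the widely published f(0)/f(1) polynomial form); the benzene critical constants in the usage note are textbook values, not data from this study:

```python
import math

def lee_kesler_psat(T, Tc, Pc, omega):
    """Lee-Kesler corresponding-states estimate of saturation pressure.

    T, Tc  -- temperature and critical temperature (K)
    Pc     -- critical pressure (result is returned in the same unit)
    omega  -- acentric factor
    """
    Tr = T / Tc  # reduced temperature
    # Simple-fluid and correction terms of the Lee-Kesler correlation
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * math.log(Tr) + 0.169347 * Tr**6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * math.log(Tr) + 0.43577 * Tr**6
    return Pc * math.exp(f0 + omega * f1)

# Benzene (Tc = 562.05 K, Pc = 48.95 bar, omega = 0.210) at its normal
# boiling point, 353.25 K: the estimate should come out near 1 atm.
print(lee_kesler_psat(353.25, 562.05, 48.95, 0.210))  # ~1.01 bar
```

The correlation reduces to P = Pc at T = Tc by construction, which makes a convenient sanity check when implementing it.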
Estimating genetic correlations based on phenotypic data: a simulation-based method
Indian Academy of Sciences (India)
Elias Zintzaras
2011-04-01
Knowledge of genetic correlations is essential to understand the joint evolution of traits through correlated responses to selection, a difficult and seldom very precise task even with easy-to-breed species. Here, a simulation-based method to estimate genetic correlations and genetic covariances that relies only on phenotypic measurements is proposed. The method does not require any degree of relatedness in the sampled individuals. Extensive numerical results suggest that the proposed method may provide relatively efficient estimates regardless of sample sizes and contributions from common environmental effects.
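The starting point for any such phenotype-only approach is the phenotypic correlation itself, with resampling to gauge its precision. The following is a generic bootstrap sketch of that step, not the authors' specific simulation scheme:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length trait measurement lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def bootstrap_corr_ci(x, y, n_boot=1000, seed=0):
    """Nonparametric bootstrap 95% interval for the phenotypic correlation."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample individuals
        reps.append(pearson([x[i] for i in idx], [y[i] for i in idx]))
    reps.sort()
    return reps[int(0.025 * n_boot)], reps[int(0.975 * n_boot)]
```

Separating the genetic from the environmental component of this correlation is exactly where the paper's simulation-based machinery comes in; the sketch above covers only the observable quantity.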
A fast cross-entropy method for estimating buffer overflows in queueing networks
Boer, de P.T.; Kroese, D.P.; Rubinstein, R.Y.
2004-01-01
In this paper, we propose a fast adaptive importance sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First, we estimate the minimum cross-entropy tilting parameter for a small buffer level; next, we use this as a
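A minimal sketch of the idea behind importance sampling for overflow probabilities, here for a single birth-death walk with the classic exponential tilt that swaps arrival and service rates. This is an illustrative stand-in for the paper's adaptive multi-stage cross-entropy scheme, not the proposed algorithm itself:

```python
import random

def overflow_prob_is(lam, mu, B, n_runs, seed=0):
    """Importance-sampling estimate of P(level hits B before 0, starting
    from 1) for an M/M/1-type birth-death walk with arrival rate lam and
    service rate mu (lam < mu), simulated under the tilted measure that
    swaps the two rates."""
    rng = random.Random(seed)
    p = lam / (lam + mu)       # original up-step probability
    p_tilt = mu / (lam + mu)   # tilted up-step probability (rates swapped)
    total = 0.0
    for _ in range(n_runs):
        level, lr = 1, 1.0     # lr accumulates the likelihood ratio
        while 0 < level < B:
            if rng.random() < p_tilt:            # up-step under tilt
                lr *= p / p_tilt
                level += 1
            else:                                # down-step under tilt
                lr *= (1 - p) / (1 - p_tilt)
                level -= 1
        if level == B:                           # overflow reached
            total += lr
    return total / n_runs

# lam=1, mu=2, B=20: the exact gambler's-ruin answer is 1/(2**20 - 1),
# about 9.5e-7 -- far too rare to estimate by naive simulation.
print(overflow_prob_is(1.0, 2.0, 20, 5000))
```

For this simple walk the swapped-rate tilt gives a constant likelihood ratio on every overflow path, so the estimator has very low variance; the cross-entropy method generalizes this by learning a good tilting parameter adaptively, stage by stage, for networks where no closed-form tilt is known.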
Permanent Magnet Flux Online Estimation Based on Zero-Voltage Vector Injection Method
DEFF Research Database (Denmark)
Xie, Ge; Lu, Kaiyuan; Dwivedi, Sanjeet Kumar
2015-01-01
In this paper, a simple signal injection method is proposed for sensorless control of PMSM at low speed, which ideally requires only one voltage vector for position estimation. The proposed method is easy to implement, resulting in a low computation burden. No filters are needed for extracting the h...
A service based estimation method for MPSoC performance modelling
DEFF Research Database (Denmark)
Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand
2008-01-01
This paper presents an abstract service-based estimation method for MPSoC performance modelling which allows fast, cycle-accurate design space exploration of complex architectures, including multiprocessor configurations, at a very early stage in the design phase. The modelling method uses a service...... for various configurations of the system in order to explore the best possible implementation....
A POSTERIORI ERROR ESTIMATE OF THE DSD METHOD FOR FIRST-ORDER HYPERBOLIC EQUATIONS
Institute of Scientific and Technical Information of China (English)
康彤; 余德浩
2002-01-01
A posteriori error estimate of the discontinuous-streamline diffusion method for first-order hyperbolic equations was presented, which can be used to refine the spatial mesh adaptively. A numerical example is given to illustrate the accuracy and feasibility of this method.