portfolio optimization based on nonparametric estimation methods
mahsa ghandehari
2017-03-01
One of the major issues investors face in capital markets is deciding where to invest and how to select an optimal portfolio. This decision is made by assessing risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation serve as risk measures; however, asset returns are not necessarily normal and can differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; the approach is compared with the linear programming method. The data consist of monthly returns of 15 companies, selected in the winter of 1392 (Iranian calendar) from the top 50 companies on the Tehran Stock Exchange, with returns covering April 1388 to June 1393. The results show that the nonparametric method outperforms the linear programming method and is also much faster.
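As a rough sketch of how such a nonparametric (historical) CVaR portfolio could be set up: the synthetic return data, the 95% level, and the use of SciPy's SLSQP solver below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cvar(losses, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of losses (historical CVaR)."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def min_cvar_portfolio(returns, target_return, alpha=0.95):
    """Long-only weights minimizing historical CVaR s.t. mean return >= target."""
    n = returns.shape[1]
    mu = returns.mean(axis=0)

    def objective(w):
        losses = -returns @ w          # portfolio losses per period
        return empirical_cvar(losses, alpha)

    cons = [
        {"type": "eq",   "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: mu @ w - target_return},
    ]
    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

# Synthetic "monthly returns" of three assets, for illustration only
rng = np.random.default_rng(0)
monthly = rng.normal([0.01, 0.015, 0.008], [0.03, 0.06, 0.02], size=(60, 3))
w = min_cvar_portfolio(monthly, target_return=0.005)
```

The empirical tail average makes no distributional assumption, which is the point of the nonparametric framing above.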
Wavelet-Based Spectral Correlation Method for DPSK Chip Rate Estimation
Li Yingxiang; Xiao Xianci; Tai Hengming
2004-01-01
A wavelet-based spectral correlation algorithm to detect and estimate BPSK signal chip rate is proposed. Simulation results show that the proposed method can correctly estimate the BPSK signal chip rate, which may be corrupted by the quadratic characteristics of the spectral correlation function, in a low SNR environment.
A Channelization-Based DOA Estimation Method for Wideband Signals.
Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-07-04
In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.
Estimation of pump operational state with model-based methods
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina [Institute of Energy Technology, Lappeenranta University of Technology, P.O. Box 20, FI-53851 Lappeenranta (Finland); Kestilae, Juha [ABB Drives, P.O. Box 184, FI-00381 Helsinki (Finland)
2010-06-15
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently. (author)
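One way to picture the model-based idea above, estimating flow from quantities the frequency converter already knows, is to invert a fitted shaft-power-versus-flow curve together with the affinity laws. The curve points, speeds and powers below are invented illustrative numbers, not data from the paper.

```python
import numpy as np

# Published pump curve points at nominal speed n0 (assumed example data):
n0 = 1450.0                                      # rpm
Q0 = np.array([0.0, 10.0, 20.0, 30.0, 40.0])     # flow rate, m^3/h
P0 = np.array([2.0, 2.8, 3.5, 4.1, 4.6])         # shaft power, kW

coef = np.polyfit(Q0, P0, 2)                     # quadratic P(Q) model at n0

def estimate_flow(P_shaft, n):
    """Estimate flow by inverting the affinity-scaled P(Q) curve."""
    # Affinity laws: Q ~ n, P ~ n^3 -> refer measured power to nominal speed
    P_ref = P_shaft * (n0 / n) ** 3
    a, b, c = coef
    roots = np.roots([a, b, c - P_ref])
    # Keep the smallest non-negative real root (ascending branch of the curve)
    q_ref = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)
    return q_ref * (n / n0)                      # scale flow back to actual speed

# Shaft power and speed as a frequency converter might estimate them:
q = estimate_flow(P_shaft=2.5, n=1200.0)
```

This is only the QP-curve half of the story; the paper also analyzes the factors that limit the accuracy of such estimates.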
Moment-Method Estimation Based on Censored Sample
NI Zhongxin; FEI Heliang
2005-01-01
In reliability theory and survival analysis, the problem of point estimation based on censored samples has been discussed in many papers. However, most of them focus on the MLE, BLUE, etc.; little work has been done on moment-method estimation in the censored case. To make moment estimation systematic and unified, this paper puts forward the moment-method estimators (MEs) and modified moment-method estimators (MMEs) of the parameters based on type I and type II censored samples, involving the mean residual lifetime. Strong consistency and other properties are proved. Notably, in the exponential distribution the proposed moment-method estimators are exactly the MLEs. A simulation study shows that, in terms of bias and mean squared error, the MEs and MMEs are better than the MLEs and the "pseudo complete sample" technique introduced in Whitten et al. (1988); the superiority of the MEs is especially conspicuous when the sample is heavily censored.
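For the exponential case mentioned above, the type II censored estimator has a well-known closed form (total time on test divided by the number of observed failures), and it is both a moment-type estimator and the MLE. A small sketch, with simulated data and parameter values chosen purely for illustration:

```python
import numpy as np

def exp_mean_type2(sample, n, r):
    """Type II censored estimator of the exponential mean:
    total time on test divided by the number of observed failures.
    For the exponential this moment-type estimator coincides with the MLE."""
    x = np.sort(sample)[:r]              # the r smallest observed lifetimes
    ttt = x.sum() + (n - r) * x[-1]      # censored units survive past x_(r)
    return ttt / r

# Simulated lifetimes with true mean 2.0; observe only the first 150 of 200
rng = np.random.default_rng(1)
n, r, theta = 200, 150, 2.0
lifetimes = rng.exponential(theta, size=n)
theta_hat = exp_mean_type2(lifetimes, n, r)
```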
Accurate position estimation methods based on electrical impedance tomography measurements
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross-section. Its main advantages are non-invasiveness, low cost and radiation-free operation. Estimating the conductivity field yields low-resolution images compared with other technologies and has a high computational cost. In many applications, however, the target information lies in a low intrinsic dimensionality of the conductivity field. This work addresses the estimation of that low-dimensional information, proposing optimization-based and data-driven approaches. The accuracy of the results depends on modelling and experimental conditions: optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm, while data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also affect the results. To illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted-error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided good estimates in this case, but nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than those of the data-driven approaches. Position estimation mean squared errors, in both simulation and experimental conditions, were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
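The optimization route can be illustrated with a deliberately toy "forward model" (the distance-based response below is a stand-in, not an EIT solver): a weighted squared-error cost over 16 boundary readings, minimized with a derivative-free method, recovers the anomaly position.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for an EIT forward model: each electrode's reading depends
# only on its distance to the circular anomaly centre (NOT a real EIT solver).
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
electrodes = np.c_[np.cos(angles), np.sin(angles)]      # 16 electrodes, unit circle

def forward(pos):
    d = np.linalg.norm(electrodes - pos, axis=1)
    return 1.0 / (0.1 + d)                              # synthetic boundary response

true_pos = np.array([0.3, -0.2])
rng = np.random.default_rng(0)
measured = forward(true_pos) + rng.normal(0, 0.01, 16)  # noisy "measurements"
weight = 1.0 / 0.01 ** 2                                # inverse noise variance

def cost(pos):
    r = forward(pos) - measured
    return weight * (r @ r)                             # weighted squared error

# Derivative-free search, since gradients of a real FEM forward model are costly
res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
```

The weighting and the derivative-free solver mirror the combination the abstract reports as most accurate, but everything numeric here is invented.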
An Adaptive Background Subtraction Method Based on Kernel Density Estimation
Mignon Park
2012-09-01
In this paper, a pixel-based background modeling method using nonparametric kernel density estimation is proposed. To reduce the burden of image storage, we modify the original KDE method by initializing it with the first frame and updating it at every subsequent frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to adapt automatically to various environments and effectively extract the foreground. The method exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods to verify its performance.
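A minimal per-pixel KDE background model might look as follows. The buffer size, bandwidth and density threshold are invented, and the paper's adaptive threshold and learning-rate control are reduced here to a fixed threshold and a simple circular-buffer update.

```python
import numpy as np

class KDEBackground:
    """Per-pixel background model with a Gaussian kernel density estimate
    (simplified sketch of nonparametric background modelling)."""
    def __init__(self, first_frame, n_samples=20, bandwidth=10.0, thresh=1e-3):
        # Initialize the whole sample buffer from the first frame only
        self.samples = np.repeat(first_frame[None].astype(float), n_samples, axis=0)
        self.h = bandwidth
        self.thresh = thresh
        self.idx = 0

    def apply(self, frame, learn=True):
        f = frame.astype(float)
        # Kernel density of the new intensity under the stored samples
        diff = self.samples - f[None]
        dens = (np.exp(-0.5 * (diff / self.h) ** 2).mean(axis=0)
                / (self.h * np.sqrt(2 * np.pi)))
        fg = dens < self.thresh                     # low density -> foreground
        if learn:                                   # update background pixels only
            upd = ~fg
            self.samples[self.idx][upd] = f[upd]
            self.idx = (self.idx + 1) % len(self.samples)
        return fg

# Tiny synthetic example: a static background with one changed pixel
bg = np.full((4, 4), 100, dtype=np.uint8)
model = KDEBackground(bg)
frame = bg.copy()
frame[0, 0] = 200                                   # the "foreground" pixel
mask = model.apply(frame)
```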
Power Network Parameter Estimation Method Based on Data Mining Technology
ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian
2008-01-01
Parameter values, which in reality change with circumstances, weather, load level, etc., strongly affect the result of state estimation. A new parameter estimation method based on data mining technology is proposed. A clustering method is used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing techniques are applied to treat isolated points, missing data and noisy data in the samples of each classified group. The measurement data belonging to each class are introduced into a linear regression equation, and the regression coefficients and actual parameters are obtained by the least squares method. A practical system demonstrates the correctness, reliability and strong practicability of the proposed method.
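A minimal sketch of the cluster-then-regress idea on synthetic data; the two load regimes, the "true" resistances and the simple threshold used as a stand-in for the clustering step are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic SCADA history: current I and voltage drop dV over one line.
# The effective resistance differs between low-load and high-load regimes,
# standing in for the weather/temperature dependence described above.
I_low  = rng.uniform(50, 100, 300)
dV_low = 0.10 * I_low + rng.normal(0, 0.2, 300)
I_high  = rng.uniform(200, 300, 300)
dV_high = 0.12 * I_high + rng.normal(0, 0.2, 300)
I  = np.concatenate([I_low, I_high])
dV = np.concatenate([dV_low, dV_high])

# One-dimensional "clustering" by load level (a stand-in for clustering
# full SCADA records), followed by least squares within each cluster.
labels = (I > 150).astype(int)
params = {}
for k in (0, 1):
    x, y = I[labels == k], dV[labels == k]
    # Least squares fit of dV = R * I  ->  R = (x.y) / (x.x)
    params[k] = (x @ y) / (x @ x)
```

The per-cluster least squares step recovers a different parameter value for each operating regime, which is exactly what a single global fit would smear together.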
A method for density estimation based on expectation identities
Peralta, Joaquín; Loyola, Claudia; Loguercio, Humberto; Davis, Sergio
2017-06-01
We present a simple and direct method for non-parametric estimation of a one-dimensional probability density, based on the application of the recent conjugate variables theorem. The method expands the logarithm of the probability density ln P(x|I) in terms of a complete basis and numerically solves for the coefficients of the expansion using a linear system of equations. No Monte Carlo sampling is needed. We present preliminary results that show the practical usefulness of the method for modeling statistical data.
Fast LCMV-based Methods for Fundamental Frequency Estimation
Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll
2013-01-01
Recently, optimal linearly constrained minimum variance (LCMV) filtering methods have been applied to fundamental frequency estimation. Such estimators often yield preferable performance but suffer from being computationally cumbersome as the resulting cost functions are multimodal with narrow...... as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity...... be efficiently updated when new observations become available. The resulting time-recursive updating can reduce the computational complexity even further. The experimental results show that the performances of the proposed methods are comparable or better than that of other competing methods in terms of spectral...
Pipeline heating method based on optimal control and state estimation
Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu
2010-07-01
In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions. An alternative flow assurance method is to heat the pipeline. This concept, known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach to the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example illustrates how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermophysical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
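The Bayesian-filter idea can be pictured with a single-node (lumped) thermal model: predict the cross-section temperature from the heating input, then correct the prediction with a noisy surface measurement. All constants below are illustrative assumptions, and a scalar Kalman filter stands in for the paper's full finite-volume formulation.

```python
import numpy as np

# Lumped one-node sketch of the cross-section temperature during shutdown:
#   T[k+1] = a*T[k] + (1-a)*T_sea + b*q[k] + w,    y[k] = T[k] + v
a, b, T_sea = 0.95, 0.5, 4.0           # assumed model constants
Q, R = 0.01, 0.25                      # process / measurement noise variances

rng = np.random.default_rng(3)
T_true, T_hat, P = 60.0, 50.0, 10.0    # deliberately wrong initial estimate
q = 1.0                                # constant heating input (cable power)
estimates = []
for k in range(200):
    # "Plant": the true temperature and its noisy surface measurement
    T_true = a * T_true + (1 - a) * T_sea + b * q + rng.normal(0, np.sqrt(Q))
    y = T_true + rng.normal(0, np.sqrt(R))
    # Kalman predict
    T_hat = a * T_hat + (1 - a) * T_sea + b * q
    P = a * P * a + Q
    # Kalman update with the surface measurement
    K = P / (P + R)
    T_hat += K * (y - T_hat)
    P *= (1 - K)
    estimates.append((T_true, T_hat))
```

In the paper, the reconstructed temperatures then feed the optimal controller; here the filter alone is shown.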
Moment-Based Method to Estimate Image Affine Transform
FENG Guo-rui; JIANG Ling-ge
2005-01-01
The estimation of affine transforms is a crucial problem in the image recognition field. This paper draws on properties that are invariant under translation, rotation and scaling, and proposes a simple method to estimate the affine transform kernel of a two-dimensional gray image. Maps applied to the original image produce correlated points that accurately reflect the affine transform features of the image. Furthermore, the unknown variables in the transform kernel are calculated. The whole scheme refers only to first-order moments and therefore has very good stability.
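To see why first-order moments are attractive here: the intensity centroid moves exactly with the image, so centroid differences recover a translation directly. A minimal sketch (the patch and shift are invented, and the paper's full method also handles rotation and scaling):

```python
import numpy as np

def centroid(img):
    """First-order image moments give the intensity centroid (x_bar, y_bar)."""
    total = img.sum()
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    return np.array([(xs * img).sum(), (ys * img).sum()]) / total

rng = np.random.default_rng(4)
img = np.zeros((64, 64))
img[20:30, 15:25] = rng.uniform(0.5, 1.0, (10, 10))    # a bright patch

shift = (7, 5)                                          # (dx, dy), ground truth
moved = np.roll(np.roll(img, shift[1], axis=0), shift[0], axis=1)

est = centroid(moved) - centroid(img)                   # translation estimate
```

Because only sums of intensities are involved, the estimate is stable under noise, which matches the stability claim above.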
Fast, moment-based estimation methods for delay network tomography
Lawrence, Earl Christophre [Los Alamos National Laboratory; Michailidis, George [U OF MICHIGAN; Nair, Vijayan N [U OF MICHIGAN
2008-01-01
Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.
Switching Equalization Algorithm Based on a New SNR Estimation Method
2007-01-01
It is well-known that turbo equalization with the max-log-map (MLM) rather than the log-map (LM) algorithm is insensitive to signal to noise ratio (SNR) mismatch. As our first contribution, an improved MLM algorithm called scaled max-log-map (SMLM) algorithm is presented. Simulation results show that the SMLM scheme can dramatically outperform the MLM without sacrificing the robustness against SNR mismatch. Unfortunately, its performance is still inferior to that of the LM algorithm with exact SNR knowledge over the class of high-loss channels. As our second contribution, a switching turbo equalization scheme, which switches between the SMLM and LM schemes, is proposed to practically close the performance gap. It is based on a novel way to estimate the SNR from the reliability values of the extrinsic information of the SMLM algorithm.
Estimating genetic correlations based on phenotypic data: a simulation-based method
Elias Zintzaras
2011-04-01
Knowledge of genetic correlations is essential to understand the joint evolution of traits through correlated responses to selection, a task that is difficult and seldom very precise even with easy-to-breed species. Here, a simulation-based method to estimate genetic correlations and genetic covariances that relies only on phenotypic measurements is proposed. The method does not require any degree of relatedness in the sampled individuals. Extensive numerical results suggest that the proposed method may provide relatively efficient estimates regardless of sample size and contributions from common environmental effects.
A New Pitch Estimation Method Based on AMDF
Huan Zhao
2013-10-01
In this paper, a new modified average magnitude difference function (MAMDF) is proposed that is robust for pitch estimation in noise-corrupted speech. Traditional pitch estimation techniques are prone to detecting an erroneous pitch period, and their performance degrades in the presence of background noise. When computed over speech samples, the MAMDF presented in this paper strengthens the characteristics of the pitch period and reduces the influence of background noise. MAMDF can therefore not only mitigate the drawback caused by the decreasing trend of the difference function over lag, but also overcome errors caused by severe variation between neighboring samples. Experiments on the CSTR database show that MAMDF is greatly superior to AMDF and CAMDF in both clean and noisy speech environments, exhibiting prominent precision and robustness in pitch estimation.
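For reference, the baseline AMDF that this paper modifies can be sketched in a few lines: the function D(tau) = mean |x[n] - x[n+tau]| dips at the pitch period. The frame length, search range and synthetic 100 Hz sine are illustrative choices, not the paper's test material.

```python
import numpy as np

def amdf_pitch(x, fs, fmin=60.0, fmax=400.0):
    """Estimate pitch with the average magnitude difference function:
    D(tau) = mean |x[n] - x[n + tau]|, minimized over the pitch lag range."""
    tau_min, tau_max = int(fs / fmax), int(fs / fmin)
    n = len(x) - tau_max
    d = np.array([np.abs(x[:n] - x[tau:tau + n]).mean()
                  for tau in range(tau_min, tau_max + 1)])
    return fs / (tau_min + np.argmin(d))

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
voiced = np.sin(2 * np.pi * 100 * t)        # synthetic 100 Hz "voiced" frame
f0 = amdf_pitch(voiced, fs)
```

On noisy or strongly varying speech this plain AMDF is exactly where the spurious minima described above appear, which is what the proposed MAMDF addresses.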
An Adaptive Finite Element Method Based on Optimal Error Estimates for Linear Elliptic Problems
汤雁
2004-01-01
This paper is the third in a series on adaptive finite element methods based on optimal error estimates for linear elliptic problems on concave corner domains. In the preceding two papers (part 1: Adaptive finite element method based on optimal error estimate for linear elliptic problems on concave corner domain; part 2: Adaptive finite element method based on optimal error estimate for linear elliptic problems on nonconvex polygonal domains), we presented adaptive finite element methods based on the energy norm and the maximum norm. In this paper, an important result is presented and analyzed: the algorithm for error control in the energy norm and maximum norm in parts 1 and 2 of this series is based on this result.
An estimation method of the fault wind turbine power generation loss based on correlation analysis
Zhang, Tao; Zhu, Shourang; Wang, Wei
2017-01-01
A method for estimating the power generation loss of a faulty wind turbine is proposed in this paper. The wind speed at the faulty turbine is estimated, and the estimated generation loss is obtained by combining this estimate with the turbine's actual output power characteristic curve. For the wind speed estimation, correlation analysis is used: wind speed data from periods when the faulty turbine operated normally are selected, and regression analysis yields the wind speed estimate. Based on this estimation method, the paper presents an implementation in the wind turbine monitoring system and verifies the effectiveness of the proposed method.
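The two steps above (regression of the faulty turbine's local wind speed on a correlated reference, then passing the estimate through a power curve) can be sketched as follows. The reference turbine, regression coefficients, power curve and outage data are all invented illustrations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Wind speeds at a correlated reference (healthy) turbine, and at the
# now-faulty turbine back when it still operated normally:
v_ref_hist   = rng.uniform(4, 14, 500)
v_fault_hist = 0.9 * v_ref_hist + 0.3 + rng.normal(0, 0.3, 500)

# Regression of the faulty turbine's wind speed on the reference turbine's
A = np.c_[v_ref_hist, np.ones_like(v_ref_hist)]
coef, *_ = np.linalg.lstsq(A, v_fault_hist, rcond=None)

def power_curve(v):
    """Assumed example power curve (kW), shaped loosely like a real turbine's."""
    return np.clip(0.5 * np.clip(v - 3.0, 0, None) ** 3, 0, 2000.0)

# During the outage only the reference turbine's wind speed is available:
v_ref_outage = rng.uniform(6, 12, 144)            # e.g. 24 h of 10-min means
v_est = coef[0] * v_ref_outage + coef[1]          # estimated local wind speed
lost_energy_kwh = power_curve(v_est).sum() * (10 / 60)
```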
Joint DOA and Fundamental Frequency Estimation Methods based on 2-D Filtering
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2010-01-01
It is well-known that filtering methods can be used for processing of signals in both time and space. This comprises, for example, fundamental frequency estimation and direction-of-arrival (DOA) estimation. In this paper, we propose two novel 2-D filtering methods for joint estimation...... of the fundamental frequency and the DOA of spatio-temporarily sampled periodic signals. The first and simplest method is based on the 2-D periodogram, whereas the second method is a generalization of the 2-D Capon method. In the experimental part, both qualitative and quantitative measurements show that the proposed...... methods are well-suited for solving the joint estimation problem. Furthermore, it is shown that the methods are able to resolve signals separated sufficiently in only one dimension. In the case of closely spaced sources, however, the 2-D Capon-based method shows the best performance....
Generalized Agile Estimation Method
Shilpa Bahlerao
2011-01-01
Agile cost estimation remains an open research area due to the lack of algorithmic approaches for estimating cost, size and duration. The existing algorithmic approach, the Constructive Agile Estimation Algorithm (CAEA), is an iterative estimation method that incorporates various vital factors affecting project estimates. This method has many advantages but also some limitations, which may be due to factors such as the number of vital factors and the uncertainty involved in agile projects. A generalized agile estimation approach, however, may generate realistic estimates and eliminate the need for experts. In this paper, we propose the iterative Generalized Estimation Method (GEM) and present an algorithm based on it for agile projects, with case studies. The GEM-based algorithm incorporates various project domain classes and vital factors with prioritization levels. Further, it incorporates an uncertainty factor to quantify project risk when estimating cost, size and duration. It also provides flexibility to project managers in deciding on the number of vital factors, the uncertainty level and the project domain, thereby maintaining agility.
A new source number estimation method based on the beam eigenvalue
JIANG Lei; CAI Ping; YANG Juan; WANG Yi-ling; XU Dan
2007-01-01
Most source number estimation methods are based on the eigenvalues obtained by decomposing the covariance matrix, as in the MUSIC algorithm. To develop a source number estimation method that works at lower signal-to-noise ratios and is suitable for both correlated and uncorrelated impinging signals, a new method called the beam eigenvalue method (BEM) is proposed in this paper. By analyzing the spatial power spectrum and the correlation of the line array, the covariance matrix is constructed in a new way, determined by the line array shape for a given signal frequency. Both the theoretical analysis and the simulation results show that the BEM can estimate the source number for correlated signals and is more effective at lower signal-to-noise ratios than the usual source number estimation methods.
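The covariance-eigenvalue route that the paper builds on can be illustrated with the classic MDL criterion on a simulated uncorrelated-source scenario; this is the conventional baseline, not the proposed BEM, and the array geometry, SNR and DOAs are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n_snap = 8, 1000                         # sensors, snapshots

# Two uncorrelated far-field sources on a half-wavelength uniform line array
doas = np.deg2rad([10.0, 40.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
N = rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap))
X = A @ S + 0.1 * N

R = X @ X.conj().T / n_snap                 # sample covariance matrix
ev = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending

def mdl_source_count(ev, n_snap):
    """Classic MDL criterion on the covariance eigenvalues."""
    m = len(ev)
    mdl = []
    for k in range(m):
        tail = ev[k:]                       # candidate noise eigenvalues
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geo/arith mean
        mdl.append(-n_snap * (m - k) * np.log(ratio)
                   + 0.5 * k * (2 * m - k) * np.log(n_snap))
    return int(np.argmin(mdl))

k_hat = mdl_source_count(ev, n_snap)
```

At low SNR or with correlated sources this eigenvalue split blurs, which is precisely the regime the proposed BEM targets.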
Multi-agent coordination strategy estimation method based on control domain
2001-01-01
To estimate group competition and multi-agent coordination strategies, this paper introduces a notion based on the multi-agent group. According to the control domain, it analyzes the multi-agent strategy during competition at the macroscopic level. The approach has been adopted in robot soccer, and the results show that the method does not depend on the competition result: it can objectively and quantitatively estimate the coordination strategy.
A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding
Xue Mengfan; Li Xiaoping; Sun Haifeng; Fang Haiyan
2016-01-01
X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV, and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy or involve the maximization of a generally non-convex objective function, resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on these, we recognize the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
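Epoch folding itself is easy to sketch: photon arrival times are reduced modulo a trial period, histogrammed into phase bins, and the period whose folded profile deviates most from flat (a chi-square statistic) is kept. The pulse period, photon model and search grid below are synthetic illustrations, not the paper's FFT-based estimator.

```python
import numpy as np

def fold_profile(toas, period, n_bins=32):
    """Epoch folding: histogram photon arrival phases at a trial period."""
    phases = (toas % period) / period
    prof, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return prof

def chi2_period_search(toas, trial_periods, n_bins=32):
    """Pick the trial period whose folded profile deviates most from flat
    (Pearson chi-square statistic against a uniform profile)."""
    stats = []
    for p in trial_periods:
        prof = fold_profile(toas, p, n_bins)
        expected = prof.mean()
        stats.append(((prof - expected) ** 2 / expected).sum())
    return trial_periods[int(np.argmax(stats))]

# Synthetic "pulsar photons": arrivals cluster at one phase of a 33 ms pulse
rng = np.random.default_rng(7)
true_p = 0.033
n_pulses = 3000
toas = (np.arange(n_pulses) * true_p            # one photon per pulse
        + 0.5 * true_p                          # pulse peak at mid-period
        + rng.normal(0, 0.002, n_pulses))       # arrival-time jitter
trials = np.linspace(0.0320, 0.0340, 201)
p_hat = chi2_period_search(toas, trials)
```

The grid search over trial periods is exactly the kind of cost the paper's fast estimator is designed to avoid.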
Zemek, Radim; Hara, Shinsuke; Yanagihara, Kentaro; Kitayama, Ken-Ichi
In a centralized localization scenario, the limited throughput of the central node constrains the number of target node locations that can be estimated simultaneously. To overcome this limitation, we propose a method that effectively decreases the traffic load associated with target node localization, and therefore increases the number of target node locations that can be estimated simultaneously in a localization system based on the received signal strength indicator (RSSI) and maximum likelihood estimation. Our proposed method utilizes a threshold that limits the amount of RSSI data forwarded to the central node. As the threshold is crucial to the method, we further propose a method to determine its value theoretically. We verified the proposed method experimentally in various environments, and the results revealed that it can reduce the load by 32-64% without significantly affecting the estimation accuracy.
Novel Channel Estimation Method Based on Decision-Directed in OFDM
BU Xiang-yuan; ZHANG Jian-kang; YANG Jing
2009-01-01
Based on an analysis of decision-directed (DD) channel estimation using training symbols, a novel DD channel estimation method is proposed for orthogonal frequency division multiplexing (OFDM) systems. The proposed algorithm takes the impact of decision errors into account and calculates their effect on the channel state information of the next symbol duration. Analysis shows that error propagation can be effectively restrained and that channel variation is tracked well. Simulation results demonstrate that both the signal error rate (SER) and the normalized mean square error (NMSE) performance of the proposed method are better than those of the traditional DD (DD+IS) and maximum likelihood estimation (DD+MLE) methods.
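A bare-bones DD loop, without the paper's decision-error compensation, looks as follows: a training symbol seeds the channel estimate, then each data symbol is equalized, hard-decided, and the decisions are reused to refresh the estimate. The static channel, QPSK modulation, noise level and smoothing factor are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
n_sc, n_sym = 64, 50                              # subcarriers, OFDM symbols
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

H = rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)   # static channel (sketch)
X = qpsk[rng.integers(0, 4, size=(n_sym, n_sc))]         # transmitted symbols
noise = rng.normal(size=(n_sym, n_sc)) + 1j * rng.normal(size=(n_sym, n_sc))
Y = H[None] * X + 0.05 * noise                           # received subcarriers

# Symbol 0 is a known training symbol; afterwards run decision-directed updates
H_est = Y[0] / X[0]
alpha = 0.5                                       # tracking / smoothing factor
for k in range(1, n_sym):
    eq = Y[k] / H_est                             # equalize with current estimate
    dec = qpsk[np.argmin(np.abs(eq[:, None] - qpsk[None]), axis=1)]  # hard decision
    H_est = (1 - alpha) * H_est + alpha * Y[k] / dec     # DD update
```

Whenever a hard decision is wrong, the update above injects an error into the next symbol's estimate; restraining that propagation is the contribution of the proposed method.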
Sparse Inverse Covariance Estimation via an Adaptive Gradient-Based Method
Sra, Suvrit; Kim, Dongmin
2011-01-01
We study the problem of estimating from data, a sparse approximation to the inverse covariance matrix. Estimating a sparsity constrained inverse covariance matrix is a key component in Gaussian graphical model learning, but one that is numerically very challenging. We address this challenge by developing a new adaptive gradient-based method that carefully combines gradient information with an adaptive step-scaling strategy, which results in a scalable, highly competitive method. Our algorithm...
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving the patch's phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
Application of age estimation methods based on teeth eruption: how easy is Olze method to use?
De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C
2014-09-01
The development of new methods for age estimation has become an increasingly urgent issue because of growing immigration, in order to estimate accurately the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist, none with specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to quantify the interobserver error. Results showed that the interobserver error varies between 43 and 57% for the right lower third molar (M48) and between 23 and 49% for the left lower third molar (M38). A chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply by inadequately trained personnel, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.
Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries
Zhongyue Zou
2014-08-01
Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Different from the existing literature, this work evaluates different aspects of SOC estimation, such as the estimation error distribution, the estimation rise time, and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the driving situations of an electrified vehicle, and a genetic algorithm is utilized to identify the model parameters to find the optimal parameters of the Li-ion battery model. Simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects to evaluate the four model-based SOC estimation methods.
Cavuoti, Stefano; Brescia, Massimo; Vellucci, Civita; Tortora, Crescenzo; Longo, Giuseppe
2016-01-01
A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z's). A plethora of methods have been developed, based either on template model fitting or on empirical exploration of the photometric parameter space. Machine learning based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimates within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z Probability Density Function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use...
A service based estimation method for MPSoC performance modelling
Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand
2008-01-01
This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service...... for various configurations of the system in order to explore the best possible implementation....
Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data
Wei-Kuang Lai
2016-02-01
Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow in accordance with the amounts of HOs and NLUs and to estimate the traffic density in accordance with the amounts of CAs and PLUs. Then, the vehicle speeds can be estimated in accordance with the estimated traffic flows and estimated traffic densities. For vehicle speed forecasting, a back-propagation neural network algorithm is considered to predict the future vehicle speed in accordance with the current traffic information (i.e., the estimated vehicle speeds from CFVD). In the experimental environment, this study adopted practical traffic information (i.e., traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program and referred to the mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results illustrate that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
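The estimation chain described above reduces to the hydrodynamic relation speed = flow / density once the event counts are mapped to flow and density. A minimal sketch, in which the calibration factors `alpha` and `beta` are assumed placeholders rather than the paper's fitted models:

```python
def estimate_speed(handovers, nlus, call_arrivals, periodic_lus,
                   cell_length_km, period_h, alpha=1.0, beta=1.0):
    """Hypothetical CFVD sketch: flow from handover/location-update counts,
    density from call-arrival/periodic-update counts, speed = flow / density.
    alpha and beta are assumed factors mapping event counts to vehicles."""
    flow = alpha * (handovers + nlus) / period_h            # vehicles per hour
    density = beta * (call_arrivals + periodic_lus) / cell_length_km  # veh/km
    return flow / density                                    # km per hour
```

For example, 60 flow-related events in one hour and 6 density-related events over a 2 km segment give 60 veh/h over 3 veh/km, i.e. 20 km/h.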
Improved FRFT-based method for estimating the physical parameters from Newton's rings
Wu, Jin-Min; Lu, Ming-Feng; Tao, Ran; Zhang, Feng; Li, Yang
2017-04-01
Newton's rings are often encountered in interferometry, and by analyzing them we can estimate physical parameters such as the curvature radius and the rings' center. The fractional Fourier transform (FRFT) is capable of estimating these physical parameters from the rings despite noise and obstacles, but there is still a small deviation between the estimated coordinates of the rings' center and the actual values. The least-squares fitting method is widely used for its accuracy, but it is easily affected by the initial values. Nevertheless, the estimated results from the FRFT readily meet the requirements on the initial values. In this paper, the proposed method combines the advantages of the FRFT with those of the least-squares fitting method in analyzing Newton's rings fringe patterns. Its performance is assessed by analyzing simulated and actual Newton's rings images. The experimental results show that the proposed method is capable of estimating the parameters in the presence of noise and obstacles. Under the same conditions, the estimation results are better than those obtained with the original FRFT-based method, especially for the rings' center. Some applications are shown to illustrate that the improved FRFT-based method is an important technique for interferometric measurements.
Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2010-01-01
We consider the problem of estimating the fundamental frequency of periodic signals such as audio and speech. A novel estimation method based on polynomial rooting of the harmonic MUltiple SIgnal Classification (HMUSIC) is presented. By applying polynomial rooting, we obtain two significant...... improvements compared to HMUSIC. First, by using the proposed method we can obtain an estimate of the fundamental frequency without doing a grid search as in HMUSIC. This is because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain...... a higher spectral resolution compared to HMUSIC, which is a property of polynomial rooting methods. Our simulation results show that the proposed method is applicable to real-life signals, and that we in most cases obtain a higher spectral resolution than HMUSIC....
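The root-selection step (estimating a frequency as the argument of the root closest to the unit circle) can be illustrated in isolation. This is a toy sketch, not the HMUSIC polynomial itself; the polynomial coefficients are assumed to be given:

```python
import numpy as np

def freq_from_roots(coeffs, fs):
    """Sketch of the rooting idea: among all roots of the polynomial,
    take the one closest to the unit circle and return its argument,
    scaled to a frequency for sampling rate fs (assumed toy example)."""
    roots = np.roots(coeffs)
    best = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]
    return np.angle(best) * fs / (2 * np.pi)
```

With a polynomial constructed to have a root at exp(j*0.2) and a spurious root at 0.5 inside the circle, the function picks the on-circle root and returns the corresponding frequency without any grid search.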
Asiri, Sharefa M.
2017-10-08
Partial Differential Equations (PDEs) are commonly used to model complex systems that arise, for example, in biology, engineering, and chemistry. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs; these can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stopping condition, and they lack robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), which includes its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters
An indirect transmission measurement-based spectrum estimation method for computed tomography
Zhao, Wei; Schafer, Sebastian; Royalty, Kevin
2015-01-01
The characteristics of an x-ray spectrum can greatly influence imaging and related tasks. In practice, due to the pile-up effect of the detector, it is difficult to directly measure the spectrum of a CT scanner using an energy-resolved detector. An alternative solution is to estimate the spectrum using transmission measurements with a step phantom or other CT phantom. In this work, we present a new spectrum estimation method based on indirect transmission measurements and a model-spectra mixture approach. The estimated x-ray spectrum is expressed as a weighted summation of a set of model spectra, which can significantly reduce the degrees of freedom (DOF) of the spectrum estimation problem. Next, an estimated projection can be calculated with the assumed spectrum. By iteratively updating the unknown weights, we minimize the difference between the estimated projection data and the raw projection data. The final spectrum is calculated with these calibrated weights and the model spectra. Both simulation and experim...
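The weighted-summation idea can be sketched with a direct linear fit: for a step phantom, the transmitted intensity is linear in the mixing weights, so a least-squares fit (ideally a nonnegative one) recovers them. A minimal noiseless sketch with made-up three-energy-bin model spectra; the paper's iterative update on projection data is replaced here by a direct solve:

```python
import numpy as np

def estimate_weights(model_spectra, mu, thicknesses, measured_intensity):
    """Hypothetical sketch: express the spectrum as a mix of model spectra
    and fit the mixing weights to step-phantom transmission measurements.

    model_spectra:      (n_models, n_energies) assumed model spectra
    mu:                 (n_energies,) linear attenuation coefficients
    thicknesses:        (n_steps,) step-phantom thicknesses
    measured_intensity: (n_steps,) transmitted intensities
    """
    # Beer-Lambert attenuation per energy bin; intensity is linear in weights.
    T = np.exp(-np.outer(thicknesses, mu))     # (n_steps, n_energies)
    A = T @ model_spectra.T                    # (n_steps, n_models)
    w = np.linalg.lstsq(A, measured_intensity, rcond=None)[0]
    w = np.clip(w, 0.0, None)   # crude nonnegativity; a real fit would use NNLS
    return w / w.sum()
```

In the noiseless case the fit recovers the true mixture exactly; with noise, a proper nonnegative least-squares or iterative projection-matching step (as in the paper) would be needed.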
Parameter estimation for MIMO system based on MUSIC and ML methods
Wei DONG; Jiandong LI; Zhuo LU; Linjing ZHAO
2009-01-01
The frequency offset and channel gain estimation problem for multiple-input multiple-output (MIMO) systems in the case of flat-fading channels is addressed. Based on the multiple signal classification (MUSIC) and maximum likelihood (ML) methods, a new joint estimation algorithm for frequency offsets and channel gains is proposed. The new algorithm has three steps. A subset of frequency offsets is first estimated with the MUSIC algorithm. All frequency offsets in the subset are then identified with the ML method. Finally, channel gains are calculated with the ML estimator. The algorithm is a one-dimensional search scheme and therefore greatly decreases the complexity of joint ML estimation, which is essentially a multi-dimensional search scheme.
Parameter Estimation from Near Stall Flight Data using Conventional and Neural-based Methods
S. Saderla
2016-12-01
The current research paper is an endeavour to estimate parameters from near-stall flight data of manned and unmanned research flight vehicles using conventional and neural-based methods. For an aircraft undergoing stall, the aerodynamic model at these high angles of attack becomes nonlinear due to the influence of unsteady, transient and flow separation phenomena. In order to address these issues, Kirchhoff's flow separation theory was used to incorporate the nonlinearity into the aerodynamic model in terms of the flow separation point and stall characteristic parameters. The classical Maximum Likelihood Estimation (MLE) method and the Neural Gauss-Newton (NGN) method have been employed to estimate the nonlinear parameters of two manned and one unmanned research aircraft. The estimated static stall parameter and the break point, for the flight vehicles under consideration, were observed to be consistent across both methods. Moreover, the efficacy of the methods is also evident from the consistent estimates of the post-stall hysteresis time constant. It can also be inferred that the considered quasi-steady model is able to adequately capture the drag and pitching moment coefficients in the post-stall regime. The confidence in these estimates has been significantly enhanced by the observed lower values of the Cramer-Rao bounds. Further, the estimated nonlinear parameters were validated by performing a proof-of-match exercise for the considered flight vehicles. Interestingly, the NGN method, which does not involve solving the equations of motion, was able to perform on a par with the MLE method.
SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation
Zhao, C; Jin, M [University of Texas at Arlington, Arlington, TX (United States); Ouyang, L; Wang, J [UT Southwestern Medical Center at Dallas, Dallas, TX (United States)
2015-06-15
Purpose: To investigate whether deconvolution methods can improve scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSFs) of different widths to mimic the blurring effect of the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, (1) inverse filtering, (2) Wiener, and (3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (the "direct method") leads to large RMSE values, which increase with the width of the PSF and with the noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium-PSF, medium-noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise, and Richardson-Lucy works better for wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation.
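A Wiener deconvolution of the kind compared above can be sketched in one dimension, assuming circular convolution, a known PSF, and a scalar noise-to-signal ratio (all simplifications relative to the 2-D projection setting of the abstract):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Minimal 1-D Wiener deconvolution sketch (assumed circular blur,
    known PSF, scalar noise-to-signal ratio nsr)."""
    H = np.fft.fft(psf, n=len(blurred))
    G = np.fft.fft(blurred)
    # Wiener filter: conj(H) / (|H|^2 + NSR); nsr trades noise vs. sharpness.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(W * G))
```

With nsr -> 0 this degenerates to inverse filtering (noted above as practically useless under noise); a larger nsr damps the frequencies where the PSF response is weak.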
SAR images classification method based on Dempster-Shafer theory and kernel estimate
He Chu; Xia Guisong; Sun Hong
2007-01-01
To study scene classification in Synthetic Aperture Radar (SAR) images, a novel method based on kernel estimation, the Markov context, and Dempster-Shafer evidence theory is proposed. Initially, a nonparametric Probability Density Function (PDF) estimation method is introduced to describe the scene of SAR images. Then, under the Markov context, both the determinate PDF and the kernel estimation method are adopted, respectively, to form a primary classification. Next, the primary classification results are fused using evidence theory in an unsupervised way to obtain the scene classification. Finally, a regularization step is used, in which an iterated maximum selection approach is introduced to control the fragments and correct the errors of the classification. Use of the kernel estimate and evidence theory makes it possible to describe complicated scenes with little prior knowledge and to eliminate the ambiguities of the primary classification results. Experimental results on real SAR images illustrate a rather impressive performance.
A numerical integration-based yield estimation method for integrated circuits
Liang Tao; Jia Xinzhang
2011-01-01
A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method integrates the joint probability density function over the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution are converted to follow a multivariate normal distribution by using the Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction in model parameter estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function, are used to compare our method with Monte Carlo based methods, including Latin hypercube sampling and importance sampling, under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to the other methods with respect to accuracy and efficiency in all of the given cases. Therefore, our method is more suitable for parametric yield optimization.
Si, Weijian; Qu, Xinggen; Liu, Lutao
2014-01-01
A novel direction of arrival (DOA) estimation method in compressed sensing (CS) is presented, in which DOA estimation is considered as the joint sparse recovery from multiple measurement vectors (MMV). The proposed method is obtained by minimizing the modified-based covariance matching criterion, which is acquired by adding penalties according to the regularization method. This minimization problem is shown to be a semidefinite program (SDP) and transformed into a constrained quadratic programming problem for reducing computational complexity which can be solved by the augmented Lagrange method. The proposed method can significantly improve the performance especially in the scenarios with low signal to noise ratio (SNR), small number of snapshots, and closely spaced correlated sources. In addition, the Cramér-Rao bound (CRB) of the proposed method is developed and the performance guarantee is given according to a version of the restricted isometry property (RIP). The effectiveness and satisfactory performance of the proposed method are illustrated by simulation results.
SOC Estimation of LiFePO4 Battery based on Improved Ah Integral Method
Zheng ZHU
2013-07-01
State of charge (SOC) is among the most important state parameters of an energy storage system, as it makes it possible to predict the available mileage of an electric vehicle. In fact, the accuracy of SOC estimation plays a vital role in the usability and security of the battery. To fully consider practical demands, a novel method to predict the SOC of a LiFePO4 battery is presented in this paper, which defines correction coefficients separately for the two working conditions of charging and discharging. Based on effective factors such as coulombic efficiency, charge and discharge current, and temperature, an Ah-integral SOC estimation method with two kinds of efficiency correction coefficients is established through an extensive experimental study. Experiments prove that the SOC estimation error is less than 5%. Compared with the original Ah method, the improved Ah method is more advantageous in accuracy and reliability.
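The corrected Ah-integral update can be sketched as follows; the coefficient values and the sign convention (positive current charges the battery) are illustrative assumptions, not the paper's calibrated values:

```python
def update_soc(soc, current_a, dt_s, capacity_ah, eta_chg=0.98, eta_dis=1.0):
    """One Ah-integral SOC update step with separate correction
    coefficients for charging and discharging (values are illustrative).
    Positive current charges the battery; SOC is clamped to [0, 1]."""
    eta = eta_chg if current_a > 0 else eta_dis
    # Integrate charge: I * dt in ampere-seconds, capacity in ampere-hours.
    soc += eta * current_a * dt_s / (3600.0 * capacity_ah)
    return min(max(soc, 0.0), 1.0)
```

For example, discharging a 100 Ah pack at 10 A for one hour with a unity discharge coefficient moves the SOC from 0.5 to 0.4; in the paper, the coefficients would additionally depend on current and temperature.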
Shaolong Chen
2016-01-01
Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and achieves desirable performance for parameter estimation of nonlinear systems.
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
A joint sparse representation-based method for double-trial evoked potentials estimation.
Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing
2013-12-01
In this paper, we present a novel approach to the evoked potential estimation problem. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimulation of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potential estimation method that takes full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of the evoked potentials and the randomness of the spontaneous electroencephalogram, the two consecutive observations of the evoked potentials are considered as superpositions of a common component and unique components; second, making use of their characteristics, two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method to extract the common component of the double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method.
Grid impedance estimation based hybrid islanding detection method for AC microgrids
Ghzaiel, Walid; Jebali-Ben Ghorbal, Manel; Slama-Belkhodja, Ilhem;
2017-01-01
This paper focuses on a hybrid islanding detection algorithm for parallel-inverters-based microgrids. The proposed algorithm is implemented on the unit ensuring the control of the intelligent bypass switch connecting or disconnecting the microgrid from the utility. This method employs a grid impe...... that the resonance excitation is canceled and the resistive and inductive grid impedance parts are estimated. Simulation results are carried out to illustrate the effectiveness of the proposed method....
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution (CPD) method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). The CPD allows computation of probabilities for small time intervals of interest (e.g., no-alibi intervals of suspects) within the large true death time interval. In light of the importance of the CPD for the conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval increase with increasing input deviation; otherwise, the CPD-computed probabilities decrease. We therefore advise against using the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimates still overlap the true death time interval.
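The basic CPD computation can be sketched by modelling the death time estimate as a Gaussian truncated to the externally known interval. The Gaussian form and the parameter values are illustrative assumptions, not the original Biermann-Potente formulation:

```python
from math import erf, sqrt

def cpd_probability(t_est, sigma, bound_lo, bound_hi, sub_lo, sub_hi):
    """Sketch of the CPD idea: model the death time estimate as a Gaussian
    with mean t_est and s.d. sigma, truncate it to [bound_lo, bound_hi]
    (victim last seen alive / found dead), and return the conditional
    probability of the sub-interval [sub_lo, sub_hi] of interest."""
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    z = lambda t: (t - t_est) / sigma
    total = Phi(z(bound_hi)) - Phi(z(bound_lo))       # mass inside the bounds
    part = Phi(z(sub_hi)) - Phi(z(sub_lo))            # mass in the sub-interval
    return part / total
```

Shifting `t_est` in this sketch shows the paradox discussed above: for a sub-interval at the boundary nearest the (erroneous) estimate, the conditional probability grows as the estimate drifts further toward, or past, that boundary.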
A novel position estimation method based on displacement correction in AIS.
Jiang, Yi; Zhang, Shufang; Yang, Dongkai
2014-09-17
A new position estimation method using the signals from two automatic identification system (AIS) stations is proposed in this paper. The time of arrival (TOA) method is enhanced with a displacement correction, so that the vessel's position can be determined even in the situation where it can receive the signals from only two AIS base stations. An implementation scheme based on the mathematical model is presented. Furthermore, a performance analysis is carried out to illustrate the relation between the positioning errors and the displacement vector provided by auxiliary sensors. Finally, the positioning method is verified and its performance is evaluated by simulation. The results show that the positioning accuracy is acceptable.
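Two TOA ranges define two circles that intersect in at most two candidate positions, and the displacement correction then selects the right one. The geometric step alone can be sketched as follows (station coordinates and ranges are assumed known; the ambiguity resolution via the displacement vector is omitted):

```python
from math import hypot, sqrt

def toa_two_station(p1, p2, d1, d2):
    """Two-station TOA sketch: intersect the two range circles centred on
    stations p1 and p2 with radii d1 and d2; returns both candidate
    positions (the displacement vector from auxiliary sensors would pick
    the correct one, as in the paper's correction step)."""
    (x1, y1), (x2, y2) = p1, p2
    d = hypot(x2 - x1, y2 - y1)                    # station separation
    a = (d1**2 - d2**2 + d**2) / (2 * d)           # distance to chord midpoint
    h = sqrt(max(d1**2 - a**2, 0.0))               # half chord length
    xm = x1 + a * (x2 - x1) / d
    ym = y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d  # offset along the chord
    return (xm + ox, ym - oy), (xm - ox, ym + oy)
```

The `max(..., 0.0)` guard tolerates slightly inconsistent ranges (circles that just fail to touch because of measurement noise) by collapsing the two candidates onto the midpoint.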
Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao
2017-05-01
Cable overheating lowers the cable insulation level, accelerates insulation aging, and can even cause short-circuit faults. Cable overheating risk identification and warning are necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safety and reliability of distribution network operation. First, a cable impedance estimation model is established using the least-squares method with data from the distribution SCADA system to improve the impedance parameter estimation accuracy. Second, the threshold value of cable impedance is calculated from historical data, and the forecast value of cable impedance is calculated from future forecasting data from the distribution SCADA system. Third, a library of cable overheating risk warning rules is established; the cable impedance forecast value is calculated, the rate of change of the impedance is analyzed, and the overheating risk of the cable line is then flagged according to the rules library and the relationship between impedance and line temperature rise. The overheating risk warning method is simulated in the paper. The simulation results show that the method can accurately identify the impedance and forecast the temperature rise of cable lines in a distribution network. The result of the overheating risk warning can provide a decision basis for operation, maintenance, and repair.
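The least-squares impedance estimation step can be sketched with voltage-drop and current phasors. Treating the cable as a single series impedance Z = R + jX is an assumed simplification of the SCADA-based model:

```python
import numpy as np

def estimate_impedance(v_drop, current):
    """Least-squares estimate of a cable's series impedance R + jX from
    phasor samples (hypothetical sketch): v_drop ~ Z * current.

    v_drop, current: complex arrays of voltage-drop and current phasors.
    """
    # Split into real equations: Re(v) = R*Re(i) - X*Im(i),
    #                            Im(v) = R*Im(i) + X*Re(i).
    A = np.vstack([np.column_stack([current.real, -current.imag]),
                   np.column_stack([current.imag,  current.real])])
    b = np.concatenate([v_drop.real, v_drop.imag])
    R, X = np.linalg.lstsq(A, b, rcond=None)[0]
    return complex(R, X)
```

Tracking the estimated R over time, which rises with conductor temperature, is the physical link the warning rules above exploit.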
A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation
Tang, Hong; Park, Yongwan; Qiu, Tianshuang
2007-12-01
This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in term of "biased" or "unbiased" is discussed. The scale factors, which may be used to evaluate NLOS propagation level, can be estimated by the proposed method. AOA seen at base stations may be corrected to some degree. The performance comparisons among the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS error effectively, and it is attractive for location estimation in cellular networks.
Gang Zhang
2017-07-01
The Sacramento model is widely used in hydrological forecasting; its accuracy and performance are primarily determined by the model parameters, so parameter estimation plays a key role. This paper presents a multi-step parameter estimation method, which divides the parameter estimation of the Sacramento model into three steps and performs the optimization step by step. We first use the immune clonal selection algorithm (ICSA) to solve the nonlinear objective function of parameter estimation, and compare the parameter calibration results on ideal artificial data with the Shuffled Complex Evolution (SCE-UA), Parallel Genetic Algorithm (PGA), and Serial Master-slave Swarms Shuffling Evolution Algorithm Based on Particle Swarm Optimization (SMSE-PSO). The comparison shows that ICSA has the best convergence, efficiency, and precision. We then apply ICSA to the parameter estimation of single-step and multi-step Sacramento models and simulate 32 floods based on application examples from the Dongyang and Tantou river basins for validation. The results of the multi-step method based on ICSA show higher accuracy and a 100% qualified rate, indicating higher precision and reliability, with great potential to improve the Sacramento model and hydrological forecasting.
Schall, Mark C; Fethke, Nathan B; Chen, Howard; Gerr, Fred
2015-05-01
The performance of an inertial measurement unit (IMU) system for directly measuring thoracolumbar trunk motion was compared to that of the Lumbar Motion Monitor (LMM). Thirty-six male participants completed a simulated material handling task with both systems deployed simultaneously. Estimates of thoracolumbar trunk motion obtained with the IMU system were processed using five common methods for estimating trunk motion characteristics. Measurements obtained from IMUs secured to both the sternum and the pelvis had smaller root-mean-square differences and mean bias relative to the LMM than measurements obtained solely from a sternum-mounted IMU. Fusing IMU accelerometer measurements with IMU gyroscope and/or magnetometer measurements was observed to increase comparability to the LMM. The results suggest that, in field-based studies, investigators should consider computing thoracolumbar trunk motion from the estimates of multiple IMUs combined with fusion algorithms, rather than from a single accelerometer secured to the sternum.
A Copula-Based Method for Estimating Shear Strength Parameters of Rock Mass
Da Huang
2014-01-01
The shear strength parameters (i.e., the internal friction coefficient f and cohesion c) are very important in rock engineering, especially for the stability analysis and reinforcement design of slopes and underground caverns. In this paper, a probabilistic, Copula-based method is proposed for estimating the shear strength parameters of rock mass. The optimal Copula functions between rock mass quality Q and f, and between Q and c, for marbles are established based on correlation analyses of the results of 12 sets of in situ tests in the exploration adits of the Jinping I-Stage Hydropower Station. Although the Copula functions are derived from the in situ tests on marbles, they can be extended to other types of rock mass with similar geological and mechanical properties. For another 9 sets of in situ tests, used as an extended application, the estimated values of f and c from the Copula-based method achieve better accuracy than the results from the Hoek-Brown criterion. Therefore, the proposed Copula-based method is an effective tool for estimating rock strength parameters.
Simple new methods to estimate global solar radiation based on meteorological data in Egypt
El-Metwally, Mossad
2004-01-01
Three simple methods to estimate global solar radiation are proposed, in addition to an earlier one (Solar Energy 63 (1998) 147). All were tested seasonally and under different sky conditions at seven locations in Egypt. The methods use ground-based measurements of maximum and minimum temperature, daily mean cloud cover, and extraterrestrial global radiation. The average root mean square difference (RMSD) between observed and estimated global radiation across all tested locations was around 10% for the new methods and 13% for the Supit-Van Kappel method. The coefficient of determination R2 was higher for the new methods at all tested locations. Better results were obtained when applying the new methods to individual seasons. The differences in root mean square error (RMSE) between the new methods and the Ångström-Prescott method, which is based on sunshine duration data, were less than 1.0 MJ m-2 day-1 at all sites. On the whole, the performance statistics demonstrate that the new methods compare favorably with the Ångström-Prescott method.
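The Ångström-Prescott relation referenced above can be sketched in a few lines; the coefficients a = 0.25 and b = 0.50 are the commonly cited defaults, not the site-specific values fitted in this study:

```python
def angstrom_prescott(H0, n, N, a=0.25, b=0.50):
    """Estimate daily global solar radiation H (MJ m-2 day-1).

    H0: extraterrestrial radiation, n: actual sunshine hours,
    N: maximum possible sunshine hours. a and b are empirical
    coefficients (0.25/0.50 are widely used defaults; the study's
    fitted values are not given in the abstract).
    """
    return H0 * (a + b * n / N)

# Mostly clear day: n close to N gives H near (a + b) * H0
print(round(angstrom_prescott(H0=30.0, n=10.0, N=12.0), 2))  # 20.0
```

Overcast days drive n/N toward zero, so the estimate floors at a·H0, which is why the coefficient a is sometimes read as the diffuse fraction under full cloud cover.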
A case-base sampling method for estimating recurrent event intensities.
Saarela, Olli
2016-10-01
Case-base sampling provides an alternative to risk set sampling based methods to estimate hazard regression models, in particular when absolute hazards are also of interest in addition to hazard ratios. The case-base sampling approach results in a likelihood expression of the logistic regression form, but instead of categorized time, such an expression is obtained through sampling of a discrete set of person-time coordinates from all follow-up data. In this paper, in the context of a time-dependent exposure such as vaccination, and a potentially recurrent adverse event outcome, we show that the resulting partial likelihood for the outcome event intensity has the asymptotic properties of a likelihood. We contrast this approach to self-matched case-base sampling, which involves only within-individual comparisons. The efficiency of the case-base methods is compared to that of standard methods through simulations, suggesting that the information loss due to sampling is minimal.
Checchi Francesco
2013-01-01
Background: Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but it is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Methods: Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced), ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Results: Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site, and most of these reported people per household rather than per structure. The imagery-based method had a precision relative to reference population figures of … Conclusions: In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current applications. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs, or multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
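The core calculation in the Methods section is a single multiplication; a minimal sketch with hypothetical counts and occupancy figures:

```python
def estimate_population(structure_count, occupancy_reports):
    """Population = structure count x mean people-per-structure.

    structure_count: manual count of assumed residential structures
    on the satellite image. occupancy_reports: people-per-structure
    figures from published reports (values here are illustrative).
    """
    mean_occupancy = sum(occupancy_reports) / len(occupancy_reports)
    return structure_count * mean_occupancy

# 2,000 counted roofs, three occupancy reports
print(round(estimate_population(2000, [4.5, 5.0, 5.5])))  # 10000
```

Since most available reports give people per household rather than per structure, a real application would also need a household-to-structure correction factor, which this sketch omits.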
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
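The global error problem above is solved iteratively with the symmetric Gauß-Seidel method; a generic sketch on a small symmetric positive-definite system, not the paper's finite element matrices:

```python
import numpy as np

def sym_gauss_seidel(A, b, x0, sweeps=5):
    """Symmetric Gauss-Seidel for A x = b: each iteration is one
    forward sweep followed by one backward sweep over the unknowns."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):            # forward sweep
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        for i in reversed(range(n)):  # backward sweep
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sym_gauss_seidel(A, b, np.zeros(2), sweeps=10)
print(np.allclose(A @ x, b, atol=1e-6))  # True
```

As the abstract notes, only a few such sweeps are needed when the goal is a rough but directionally reliable approximation of the error, not an exact solve.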
Inertial Sensor-Based Methods in Walking Speed Estimation: A Systematic Review
Qingguo Li
2012-05-01
Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to the improving performance and decreasing cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic search engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.
Inertial sensor-based methods in walking speed estimation: a systematic review.
Yang, Shuozhi; Li, Qingguo
2012-01-01
Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to the improving performance and decreasing cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic search engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.
Olga L. Quintero
Biotechnological processes represent a challenge for the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation by Zymomonas mobilis (Z. mobilis) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits oscillatory behavior in the process variables due to the influence of inhibition dynamics (the rate of ethanol concentration acting on biomass, substrate, and product concentrations). In this work a new solution for the control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through dynamic analysis with a tested biomass estimator for Z. mobilis and without the use of complex calculations.
Underwater terrain positioning method based on least squares estimation for AUV
Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing
2015-12-01
To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established, using multi-beam data as the underwater terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed to address the defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for the pseudo-localization that appears in areas of flat topographic features, effectively reducing the impact of pseudo positioning points on matching accuracy and improving positioning accuracy in flat terrain areas. Simulation experiments based on electronic charts and multi-beam sea trial data show that the drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirements of accurate underwater positioning.
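A toy one-dimensional stand-in for the least-squares terrain matching idea: slide the measured depth profile along a stored terrain profile and keep the offset that minimizes the squared error. The paper's method operates on two-dimensional multi-beam terrain surfaces; the depth values below are illustrative:

```python
def match_position(measured, terrain_map):
    """Return the offset in a 1-D terrain profile that minimizes
    the sum of squared depth differences (least-squares criterion)."""
    n = len(measured)
    best, best_err = None, float("inf")
    for off in range(len(terrain_map) - n + 1):
        err = sum((m - t) ** 2
                  for m, t in zip(measured, terrain_map[off:off + n]))
        if err < best_err:
            best, best_err = off, err
    return best

seabed = [10, 12, 15, 14, 11, 9, 8, 13]       # stored depth profile
print(match_position([15, 14, 11], seabed))   # 2
```

In flat terrain many offsets give nearly identical residuals, which is exactly the pseudo-localization problem the abstract addresses with the Fisher method as a secondary criterion.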
Ray-Based and Graph-Based Methods for Fiber Bundle Boundary Estimation
Bauer, Miriam H A; Kuhnt, Daniela; Barbieri, Sebastiano; Klein, Jan; Hahn, Horst K; Freisleben, Bernd; Nimsky, Christopher
2011-01-01
Diffusion Tensor Imaging (DTI) provides the possibility of estimating the location and course of eloquent structures in the human brain. Knowledge about these structures is of high importance for preoperative planning of neurosurgical interventions and for intraoperative guidance by neuronavigation, in order to minimize postoperative neurological deficits. Therefore, the segmentation of these structures as closed, three-dimensional objects is necessary. In this contribution, two methods for fiber bundle segmentation between two defined regions are compared using software phantoms (an abstract model and an anatomical phantom modeling the right corticospinal tract). One method uses evaluation points from sampled rays as candidates for boundary points; the other sets up a directed and weighted (depending on a scalar measure) graph and performs a min-cut for optimal segmentation results. The comparison is done using the Dice Similarity Coefficient (DSC), a measure of spatial overlap between different segmentation results.
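The DSC used for the comparison is straightforward to compute; a minimal sketch on binary segmentation masks represented as sets of voxel indices:

```python
def dice_similarity(mask_a, mask_b):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for two
    binary segmentation masks given as sets of voxel indices.
    Returns 1.0 for identical masks, 0.0 for disjoint ones."""
    intersection = len(mask_a & mask_b)
    return 2.0 * intersection / (len(mask_a) + len(mask_b))

a = {(0, 0), (0, 1), (1, 0)}
b = {(0, 1), (1, 0), (1, 1)}
print(round(dice_similarity(a, b), 3))  # 0.667
```

Because the DSC normalizes overlap by the mean mask size, it is less sensitive to the absolute size of the segmented bundle than raw voxel counts, which is why it is the standard overlap measure in segmentation studies.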
A systematic method based on statistical pattern recognition for estimating product quality on-line
Anonymous
2003-01-01
To avoid the complexity of building mechanistic models by studying the inner nature of the object, a systematic method based on statistical pattern recognition is developed to estimate product quality on-line. The mapping relationship between a feature space and a product quality space can be built using regression analysis, and by applying clustering analysis the product quality space can be partitioned automatically. Eventually, on-line estimation of product quality can be accomplished by sorting the mapped data in the partitioned quality space. A concrete problem with a relatively small ratio of training data to input variables is presented; by applying the method described above, a satisfying result was achieved. Furthermore, the question of choosing suitable mapping methods is briefly discussed.
Estimating soil organic carbon stocks and spatial patterns with statistical and GIS-based methods.
Junjun Zhi
Accurately quantifying soil organic carbon (SOC) is considered fundamental to studying soil quality, modeling the global carbon cycle, and assessing global climate change. This study evaluated the uncertainties caused by up-scaling of soil properties from the county scale to the provincial scale and from the lower-level classification of Soil Species to Soil Group, using four methods: the mean, median, Soil Profile Statistics (SPS), and pedological professional knowledge-based (PKB) methods. For the SPS method, SOC stock is calculated at the county scale by multiplying the mean SOC density value of each soil type in a county by its corresponding area. For the mean or median method, the SOC density value of each soil type is calculated using the provincial arithmetic mean or median. For the PKB method, the SOC density value of each soil type is calculated at the county scale considering soil parent materials and the spatial locations of all soil profiles. A newly constructed 1:50,000 soil survey geographic database of Zhejiang Province, China, was used for the evaluation. Results indicated that with soil classification levels up-scaling from Soil Species to Soil Group, the variation of estimated SOC stocks among different soil classification levels was clearly lower than that among different methods. The difference in the estimated SOC stocks among the four methods was lowest at the Soil Species level. The differences in SOC stocks among the mean, median, and PKB methods for different Soil Groups resulted from the differences in the procedure of aggregating soil profile properties to represent the attributes of one soil type. Compared with the other three estimation methods (i.e., the SPS, mean, and median methods), the PKB method holds significant promise for characterizing spatial differences in SOC distribution because the spatial locations of all soil profiles are considered during the aggregation procedure.
A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles
Hong Zhang
2015-12-01
Estimating the residual capacity or state-of-charge (SoC) of commercial batteries on-line, without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV) designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity in EV batteries and other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature, and current). To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV). The charging/discharging efficiencies, used for compensating the Coulombic losses, are calculated by the lossy counting-based method. The measurement drift resulting from the current sensor is amended with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.
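Plain Coulomb counting, the baseline the paper improves on, can be sketched as follows. The fixed efficiency and pack parameters are illustrative; the paper's contribution is estimating the efficiencies on-line with lossy counting rather than fixing them:

```python
def coulomb_count(soc0, currents_a, dt_s, capacity_ah, efficiency=0.98):
    """Coulomb-counting SoC update: integrate current over time and
    scale by a charging/discharging efficiency (a fixed illustrative
    value here). Discharge current is positive; SoC is clamped to
    [0, 1]. Any current-sensor bias accumulates step by step, which
    is the cumulative-error weakness noted in the abstract."""
    soc = soc0
    for i in currents_a:
        soc -= efficiency * i * dt_s / (capacity_ah * 3600.0)
        soc = min(1.0, max(0.0, soc))
    return soc

# 10 A discharge for one hour from a 50 Ah pack starting at 90% SoC
print(round(coulomb_count(0.9, [10.0] * 3600, 1.0, 50.0), 3))  # 0.704
```

With a perfect efficiency of 1.0 the same run would land at exactly 0.70; the 0.004 gap is the modeled Coulombic loss.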
Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods
Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo
2004-01-01
In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed. First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and a combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm's heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information in the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates. Although satellite estimates of latent heating are based upon instantaneous, footprint-scale data, suppression …
Validation of a new method for estimating VO2max based on VO2 reserve.
Swain, David P; Parrott, James A; Bennett, Anna R; Branch, J David; Dowling, Elizabeth A
2004-08-01
The American College of Sports Medicine's (ACSM) preferred method for estimating maximal oxygen consumption (VO2max) has been shown to overestimate VO2max, possibly due to the short length of the cycle ergometry stages. This study validates a new method that uses a final 6-min stage and estimates VO2max from the relationship between heart rate reserve (HRR) and VO2 reserve. A cycle ergometry protocol was designed to elicit 65-75% HRR in the fifth and sixth minutes of the final stage. Maximal workload was estimated by dividing the workload of the final stage by the fraction of HRR attained. VO2max was then estimated using the ACSM metabolic equation for cycling. After the 6-min stage was completed, an incremental test to maximal effort was used to measure actual VO2max. Forty-nine subjects completed a pilot study using one protocol to reach the 6-min stage, and 50 additional subjects completed a modified protocol. The pilot study obtained a valid estimate of VO2max (r = 0.91, SEE = 3.4 mL·min(-1)·kg(-1)) with no over- or underestimation (mean estimated VO2max = 35.3 mL·min(-1)·kg(-1), mean measured VO2max = 36.1 mL·min(-1)·kg(-1)), but the average %HRR achieved in the 6-min stage was 78%, with several subjects attaining heart rates considered too high for submaximal fitness testing. The second study also obtained a valid estimate of VO2max (r = 0.89, SEE = 4.0 mL·min(-1)·kg(-1)) with no over- or underestimation (mean estimated VO2max = 36.7 mL·min(-1)·kg(-1), mean measured VO2max = 36.9 mL·min(-1)·kg(-1)), and the average %HRR achieved in the 6-min stage was 64%. A new method for estimating VO2max from submaximal cycling based on VO2 reserve has been found to be valid and more accurate than previous methods.
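The two estimation steps described above can be sketched directly. The ACSM leg-cycling equation VO2 = 10.8·W/M + 7 (mL·min-1·kg-1) is the commonly quoted form from the ACSM guidelines; the workload, %HRR, and body mass below are illustrative:

```python
def estimate_vo2max(stage_watts, pct_hrr, body_mass_kg):
    """Estimate VO2max from one 6-min submaximal cycling stage.

    Step 1 (the study's premise that %HRR tracks %VO2 reserve):
    maximal workload ~= stage workload / fraction of HRR reached.
    Step 2: convert workload to VO2 with the ACSM leg-cycling
    equation VO2 = 10.8 * W / M + 7 (mL·min-1·kg-1)."""
    max_watts = stage_watts / pct_hrr
    return 10.8 * max_watts / body_mass_kg + 7.0

# 120 W stage reaching 70% HRR, 75 kg rider
print(round(estimate_vo2max(120.0, 0.70, 75.0), 1))  # 31.7
```

Note how sensitive the estimate is to the measured %HRR: reading 65% instead of 70% on the same stage would raise the estimate by roughly two units, which is why the protocol targets a tightly controlled 65-75% HRR window.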
Latitudinal GRBR-TEC estimation in Southeast Asia region based on the two-station method
Watthanasangmechai, Kornyanat; Yamamoto, Mamoru; Saito, Akinori; Tsugawa, Takuya; Yokoyama, Tatsuhiro; Supnithi, Pornchai; Yatini, Clara Yono
2014-10-01
Total electron content (TEC) is an important parameter for revealing latitudinal ionospheric structures, such as the equatorial ionization anomaly (EIA) in Southeast Asia. Understanding the EIA is beneficial for studying equatorial spread F. To reveal the structures, the absolute TEC as a function of latitude must be accurately determined. In early 2012, we expanded a GNU Radio Beacon Receiver (GRBR) network to provide latitudinal coverage in the Thailand-Indonesia sector. We employed the GRBR network to receive VHF and UHF signals from polar low-Earth-orbit satellites. The TEC offset is an unknown parameter in the absolute TEC estimation process. We propose a new technique based on the two-station method to estimate the offset for the latitudinal TEC estimation, and it works better than the original method for a sparse network. The TEC estimation system requires two iterations to minimize the root-mean-square error (RMSE). Once the RMSE reaches the global minimum, the absolute TECs are estimated simultaneously over five GRBR stations. GPS-TECs from local stations are used as the initial guess of the offset estimation. The height of the ionospheric pierce point is determined from the ionosonde hmF2. As a result, the latitudinal GRBR-TEC was successfully estimated from the polar orbit satellites. The two EIA humps were clearly captured by the GRBR-TEC. The result was well verified with the TEC reconstructed from the C/NOFS density data and the ionosonde bottomside data. This is a significant step showing that the GRBR is a useful tool for the study of low-latitude ionospheric features.
DYNAMIC PARAMETERS ESTIMATION OF INTERFEROMETRIC SIGNALS BASED ON SEQUENTIAL MONTE CARLO METHOD
M. A. Volynsky
2014-05-01
The paper deals with the sequential Monte Carlo method applied to the problem of estimating the parameters of interferometric signals. The method is based on statistical approximation of the posterior probability density of the parameters. A detailed description of the algorithm is given. It is shown that the residual between prediction and observation can be used as the criterion for selecting, at each step of the algorithm, the elements of the generated set. The influence of the input parameters on the performance of the algorithm was analyzed. The standard deviation of the amplitude estimation error for typical signals was found to be about 10% of the maximum amplitude value, and the phase estimation error was shown to have a normal distribution. In particular, the influence of the number of selected parameter vectors on the estimation results was examined: based on simulation results for the considered class of signals, it is recommended to select 30% of the number of generated vectors, and increasing the number of generated vectors beyond 150 does not significantly improve the quality of the estimates. The sequential Monte Carlo method is recommended for dynamic processing of interferometric signals in cases where high robustness to nonlinear changes of the signal parameters and to random noise is required.
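A minimal bootstrap sequential Monte Carlo sketch in the spirit of the method: particles carrying an amplitude hypothesis are weighted by the residual between prediction and observation and then resampled. The signal model and all tuning constants below are illustrative, not the paper's:

```python
import math
import random

def particle_filter(observations, n_particles=300, seed=1):
    """Track the amplitude of a noisy cosine fringe signal
    s_k = a_k * cos(0.5 * k) + noise with a bootstrap particle
    filter. Resampling keeps particles whose predictions best
    match the observation (residual-based selection)."""
    rng = random.Random(seed)
    parts = [rng.uniform(0.0, 2.0) for _ in range(n_particles)]
    for k, z in enumerate(observations):
        # predict: random-walk evolution of the amplitude
        parts = [a + rng.gauss(0.0, 0.02) for a in parts]
        # weight: Gaussian likelihood of the prediction residual
        w = [math.exp(-0.5 * ((z - a * math.cos(0.5 * k)) / 0.1) ** 2)
             for a in parts]
        total = sum(w) or 1.0
        # multinomial resampling proportional to the weights
        parts = rng.choices(parts, weights=[x / total for x in w],
                            k=n_particles)
    return sum(parts) / n_particles  # posterior mean estimate

obs = [1.0 * math.cos(0.5 * k) for k in range(40)]  # noise-free, a = 1
print(abs(particle_filter(obs) - 1.0) < 0.2)  # True
```

The random-walk step plays the role of the paper's dynamic parameter model, and the fraction of particles effectively retained at each resampling mirrors the 30% selection rule recommended in the abstract.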
The Interpolation Method for Estimating the Above-Ground Biomass Using Terrestrial-Based Inventory
I Nengah Surati Jaya
2014-08-01
This paper examined several methods for interpolating biomass in logged-over dry land forest using terrestrial-based forest inventory in Labanan, East Kalimantan and Lamandau, Kota Wringing Barat, Central Kalimantan. The plot distances examined were 1,000−1,050 m for Labanan and 1,000−899 m for Lamandau. The main objective of this study was to find the interpolation method giving the most accurate prediction of the spatial distribution of forest biomass for dry land forest. Two main interpolation approaches were examined: (1) a deterministic approach using the IDW method, and (2) a geostatistical approach using Kriging with spherical, circular, linear, exponential, and Gaussian models. The results at both sites consistently showed that the IDW method was better than the Kriging method for estimating the spatial distribution of biomass. Validation using a chi-square test showed that the IDW interpolation provided accurate biomass estimation. In terms of the percentage of mean deviation (MD%), the IDW with a power parameter (p) of 2 also provided relatively low values, i.e., only 15% for Labanan, East Kalimantan Province and 17% for Lamandau, Kota Wringing Barat, Central Kalimantan Province. In general, the IDW interpolation method provided better results than Kriging, with the Kriging method yielding MD% of about 27% and 21% for the Lamandau and Labanan sites, respectively.
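The IDW estimator with p = 2, the setting the study found to perform well, can be sketched as follows; plot coordinates and biomass values are illustrative:

```python
def idw_estimate(x, y, samples, power=2.0):
    """Inverse Distance Weighting: the value at (x, y) is the
    weighted mean of sampled plots with weights 1 / d^p.
    samples: list of (sx, sy, value) inventory plots."""
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value              # exactly on a sample plot
        w = d2 ** (-power / 2.0)      # 1 / d^power
        num += w * value
        den += w
    return num / den

plots = [(0.0, 0.0, 100.0), (2.0, 0.0, 200.0)]  # biomass in t/ha
print(idw_estimate(1.0, 0.0, plots))  # midpoint -> 150.0
```

Raising p makes the surface honor nearby plots more strongly and smooths less between them, which is the trade-off behind the study's choice of p = 2.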
Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-02-01
A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A wide plethora of methods have been developed, based either on template model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimates within the photometric ranges covered by the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), because the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, also used to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest, and the standard K-Nearest Neighbors models.
A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks
Xuerong Cui
2015-11-01
Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a way to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used for land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay, or range, between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results are presented for the International Telecommunications Union (ITU) vehicular multipath channel, and show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
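The third step, peak picking with a skewness-driven dynamic threshold, can be sketched as follows. The threshold formula and its constants are illustrative stand-ins, not the paper's exact rule:

```python
def skewness(xs):
    """Sample skewness of a correlation-magnitude sequence."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / (m2 ** 1.5 or 1.0)   # guard against a constant input

def detect_peak(corr, base_factor=3.0, skew_gain=0.5):
    """Return the index of the first correlation value above a
    dynamic threshold that grows with the skewness of the
    correlation output. A strongly skewed output indicates a
    dominant peak, so the threshold is raised to reject multipath
    side lobes (base_factor and skew_gain are illustrative)."""
    mean = sum(corr) / len(corr)
    thr = mean * (base_factor + skew_gain * skewness(corr))
    for i, c in enumerate(corr):
        if c > thr:
            return i
    return None

corr = [0.1, 0.2, 0.1, 0.15, 5.0, 0.2, 0.1]  # correlation magnitudes
print(detect_peak(corr))  # 4
```

Tying the threshold to skewness rather than a fixed multiple of the mean is what gives the method its robustness at low SNR: noisy, near-symmetric correlation outputs get a lower threshold than outputs with one dominant peak.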
A NEW DE-NOISING METHOD BASED ON 3-BAND WAVELET AND NONPARAMETRIC ADAPTIVE ESTIMATION
Li Li; Peng Yuhua; Yang Mingqiang; Xue Peijun
2007-01-01
Wavelet de-noising is well known as an important method of signal de-noising. Recently, most research efforts on wavelet de-noising have focused on how to select the threshold, where Donoho's method is widely applied. Compared with the traditional 2-band wavelet, the 3-band wavelet has advantages in many respects. Based on this theory, an adaptive signal de-noising method in the 3-band wavelet domain based on nonparametric adaptive estimation is proposed. The experimental results show that, in the 3-band wavelet domain, the proposed method preserves detail better and improves the signal-to-noise ratio of the reconstructed signal more than Donoho's method.
Data Based Parameter Estimation Method for Circular-scanning SAR Imaging
Chen Gong-bo
2013-06-01
Full Text Available The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode whose image quality is closely related to the accuracy of the imaging parameters, especially given the inaccuracy of the platform's real speed of motion. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocity of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic concepts of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first derived. The optimal parameter approximation based on the least-squares criterion is then obtained by solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.
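The least-squares step can be illustrated with a simplified two-parameter Doppler model. The paper's actual formulation couples platform velocity with the antenna scanning angle, so treat the model fd ≈ (2/λ)(vx·cos θ + vy·sin θ) below, and all the variable names, as assumptions for the sketch.

```python
import math

def fit_velocity(angles, dopplers, wavelength):
    """Least-squares fit of platform velocity components (vx, vy) from
    Doppler centroids fd_i measured at scan angles theta_i, by solving the
    2x2 normal equations of the linear model fd = a*cos(t) + b*sin(t)."""
    c = [math.cos(t) for t in angles]
    s = [math.sin(t) for t in angles]
    scc = sum(x * x for x in c)
    sss = sum(x * x for x in s)
    scs = sum(x * y for x, y in zip(c, s))
    sfc = sum(f * x for f, x in zip(dopplers, c))
    sfs = sum(f * x for f, x in zip(dopplers, s))
    det = scc * sss - scs * scs
    a = (sfc * sss - sfs * scs) / det   # a = 2*vx / wavelength
    b = (scc * sfs - scs * sfc) / det   # b = 2*vy / wavelength
    return a * wavelength / 2.0, b * wavelength / 2.0
```

With noise-free synthetic Doppler measurements the fit recovers the velocity components exactly, which is the behaviour the least-square criterion guarantees for this linear model.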
Weijian Si
2014-01-01
Full Text Available A novel direction of arrival (DOA) estimation method in compressed sensing (CS) is presented, in which DOA estimation is treated as joint sparse recovery from multiple measurement vectors (MMV). The proposed method minimizes a modified covariance matching criterion, obtained by adding penalties according to the regularization method. This minimization problem is shown to be a semidefinite program (SDP) and is transformed into a constrained quadratic programming problem to reduce computational complexity; the latter can be solved by the augmented Lagrange method. The proposed method significantly improves performance, especially in scenarios with low signal-to-noise ratio (SNR), small numbers of snapshots, and closely spaced correlated sources. In addition, the Cramér-Rao bound (CRB) of the proposed method is derived, and a performance guarantee is given according to a version of the restricted isometry property (RIP). The effectiveness and satisfactory performance of the proposed method are illustrated by simulation results.
A radiative transfer model-based method for the estimation of grassland aboveground biomass
Quan, Xingwen; He, Binbin; Yebra, Marta; Yin, Changming; Liao, Zhanmang; Zhang, Xueting; Li, Xing
2017-02-01
This paper presents a novel method to derive grassland aboveground biomass (AGB) based on the PROSAILH (PROSPECT + SAILH) radiative transfer model (RTM). Two variables, leaf area index (LAI, m^2/m^2, defined as the one-sided leaf area per unit of horizontal ground area) and dry matter content (DMC, g/cm^2, defined as the dry matter per unit leaf area), were retrieved using PROSAILH and reflectance data from the Landsat 8 OLI product. The product LAI × DMC was taken as the estimated grassland AGB according to these definitions. The well-known ill-posed inversion problem when inverting PROSAILH was alleviated using ecological criteria to constrain the simulation scenario and therefore the number of simulated spectra. The presented method was applied in a case study to estimate the AGB of a plateau grassland in China. The results were compared to those obtained using an exponential regression, partial least squares regression (PLSR), and an artificial neural network (ANN). The RTM-based method offered higher accuracy (R^2 = 0.64 and RMSE = 42.67 g/m^2) than the exponential regression (R^2 = 0.48 and RMSE = 41.65 g/m^2) and the ANN (R^2 = 0.43 and RMSE = 46.26 g/m^2). Compared with PLSR, the proposed method offered a better determination coefficient (PLSR: R^2 = 0.55) but a higher RMSE (PLSR: RMSE = 37.79 g/m^2). Although it is still necessary to test these methodologies in other areas, the RTM-based method offers greater robustness and reproducibility for estimating grassland AGB at large scale without the need to collect field measurements, and it is therefore considered the most promising methodology.
D. Sümeyra Demirkıran
2014-03-01
Full Text Available The concept of age estimation plays an important role in both civil law and the regulation of criminal behavior. In forensic medicine, age estimation is performed at the request of individuals as well as of the courts. This study aims to compile the methods of age estimation and to make recommendations for solving the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. To estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA), and the "Adli Tıpta Yaş Tayini (ATYT)" books are used. According to the forensic age estimations described in the ATYT book, bone age is found to be on average 2 years older than chronological age, especially in puberty. For age estimation from the teeth, the Demirjian method is used. Over time, different methods have been developed by modifying the Demirjian method; however, no accurate method has been found. Histopathological studies have been done on bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronological age. Current age estimation methods raise important ethical and legal issues, especially for teenagers. It is therefore necessary to prepare bone-age atlases appropriate to our society by collecting the findings of studies in Turkey. Another recommendation is for the courts to pay particular attention to age-raising trials of teenage women and to give special emphasis to birth and population records.
A Novel Method Based on Oblique Projection Technology for Mixed Sources Estimation
Weijian Si
2014-01-01
Full Text Available Reducing the computational complexity of localization algorithms for near-field and far-field sources is a serious problem in array signal processing. A novel algorithm for mixed-source location estimation based on oblique projection is proposed in this paper. The sources are estimated at two different stages, and the sensor noise power is estimated and eliminated from the covariance matrix, which improves the accuracy of the mixed-source estimation. Using the idea of compression, the range information of near-field sources is obtained by searching part of the Fresnel area instead of the whole area, which reduces the processing time. Compared with traditional algorithms, the proposed algorithm has lower computational complexity and can resolve two closely spaced sources with high resolution and accuracy. The duplication of range estimation is also avoided. Finally, simulation results are provided to demonstrate the performance of the proposed method.
Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua
2017-05-01
Full Text Available To reinforce the correlation analysis of threat factors in risk assessment, a dynamic safety-risk assessment method based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the weight of each threat indicator, and determines information system risk levels by combining this with state estimation theory. To improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: all particles are clustered and each centroid serves as the representative of its cluster, which reduces the computational load. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
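The k-means reduction of the particle set can be sketched as follows for one-dimensional particle states; the cluster count, iteration budget, and seed are hypothetical choices, and a real particle filter would also carry the particle weights along.

```python
import random

def kmeans_particles(particles, k, iters=20, seed=0):
    """Lloyd's k-means on 1-D particle states. Each centroid then stands in
    for its whole cluster, shrinking the particle set from len(particles)
    down to k representatives and so reducing the filtering cost."""
    rng = random.Random(seed)
    centroids = rng.sample(particles, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in particles:
            # assign each particle to its nearest centroid
            j = min(range(k), key=lambda j: (p - centroids[j]) ** 2)
            clusters[j].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)
```

On a bimodal particle cloud the two centroids settle near the two modes, so downstream computations operate on 2 representatives instead of the full set.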
A Novel Position Estimation Method Based on Displacement Correction in AIS
Yi Jiang
2014-09-01
Full Text Available A new position estimation method using the signals from two automatic identification system (AIS) base stations is proposed in this paper. The time-of-arrival (TOA) method is enhanced with a displacement correction, so that a vessel's position can be determined even when it can receive the signals from only two AIS base stations. An implementation scheme based on the mathematical model is presented. Furthermore, a performance analysis illustrates the relation between the positioning errors and the displacement vector provided by auxiliary sensors. Finally, the positioning method is verified and its performance evaluated by simulation. The results show that the positioning accuracy is acceptable.
Ghzaiel, Walid; Jebali-Ben Ghorbal, Manel; Slama-Belkhodja, Ilhem
2014-01-01
This paper presents a hybrid islanding detection algorithm integrated on the distributed generation unit closest to the point of common coupling of a microgrid based on parallel inverters, where one of them is responsible for controlling the system. The method is based on resonance excitation under...... parameters, both the resistive and inductive parts, from the injected resonance frequency determination. Finally, the inverter will disconnect the microgrid from the faulty grid and reconnect the parallel inverter system to the controllable distributed system in order to ensure high power quality. This paper...... shows that grid impedance variation estimation can be an efficient method for islanding detection in microgrid systems. Theoretical analysis and simulation results are presented to validate the proposed method....
The rejection of vibrations in adaptive optics systems using a DFT-based estimation method
Kania, Dariusz; Borkowski, Józef
2016-04-01
Adaptive optics systems are commonly used in many optical setups to reduce perturbations and increase system performance. A problem in such systems is undesirable vibration caused by effects such as shaking of the whole structure or the tracking process. This paper presents a frequency, amplitude, and phase estimation method for a multifrequency signal that can be used to reject these vibrations adaptively. The estimation method is based on the FFT procedure. The undesirable signals are usually exponentially damped harmonic oscillations. The estimation error depends on several parameters and consists of a systematic component and a random component. The systematic error depends on the signal phase, the number of samples N in the measurement window, the value of CiR (the number of signal periods in the measurement window), the THD value, and the time window order H. The random error depends mainly on the noise variance and the SNR value. This paper examines the influence of the signal phase and the estimation of the parameters of exponentially damped sinusoids. The error signals are periodic, with a period tied to the signal period and the sliding measurement window. For CiR = 1.6 and a damping ratio of 0.1%, the error was of the order of 10^-5 Hz/Hz, 10^-4 V/V, and 10^-4 rad for the frequency, amplitude, and phase estimation, respectively. The information provided in this paper can be used to determine the approximate efficiency of the vibration elimination process before starting it.
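A bare-bones version of DFT-based single-tone estimation is sketched below. It is exact only when an integer number of periods fits the window (integer CiR, no leakage); the paper's method additionally handles damping, non-integer CiR, and windowing, none of which this sketch attempts.

```python
import cmath
import math

def dft_estimate(x, fs):
    """Estimate the frequency, amplitude, and phase of a single real tone
    from the peak bin of a naive DFT (positive frequencies only)."""
    n = len(x)
    half = n // 2
    bins = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(half)]
    k = max(range(1, half), key=lambda k: abs(bins[k]))  # skip the DC bin
    freq = k * fs / n
    amp = 2.0 * abs(bins[k]) / n       # factor 2: energy split with the mirror bin
    phase = cmath.phase(bins[k])
    return freq, amp, phase
```

For a 1.5 V cosine at 5 Hz with phase 0.3 rad sampled at 64 Hz over exactly one second, the three parameters are recovered to machine precision.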
Capesius, Joseph P.; Arnold, L. Rick
2012-01-01
The U.S. Geological Survey, in cooperation with the Colorado Water Conservation Board, compared two methods for estimating base flow in three reaches of the South Platte River between Denver and Kersey, Colorado. The two methods compared in this study are the Mass Balance and the Pilot Point methods. Base-flow estimates made with the two methods were based upon a 54-year period of record (1950 to 2003).
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolating and estimating the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method, which is applicable in situ. It requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction
Li Lei
2015-04-01
Full Text Available Based on coherent accumulation matrix reconstruction, a novel decorrelation method for Direction Of Arrival (DOA) estimation of coherent signals using a small number of samples is proposed. First, the Signal-to-Noise Ratio (SNR) is improved by performing a coherent accumulation operation on the array's observed data. Then, according to the structural characteristics of the accumulated snapshot vector, an equivalent covariance matrix, whose rank equals the number of array elements, is constructed. The rank of this matrix is proved to be determined solely by the number of incident signals, which realizes the decorrelation of coherent signals. Compared with the spatial smoothing method, the proposed method performs better by effectively avoiding aperture loss, with high-resolution characteristics and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.
Chu Kiong Loo
2011-01-01
Full Text Available A brain-computer interface (BCI) enables direct communication between a brain and a computer, translating brain activity into computer commands through preprocessing, feature extraction, and classification operations. Feature extraction is crucial, as it has a substantial effect on classification accuracy and speed. While fractal dimension has been successfully used in various domains to characterize data exhibiting fractal properties, its use in motor imagery-based BCI is more recent. In this study, commonly used fractal dimension estimation methods for characterizing time series (Katz's method, Higuchi's method, the rescaled range method, and Renyi's entropy) were evaluated for feature extraction in motor imagery-based BCI by conducting offline analyses of a two-class motor imagery dataset. Different classifiers (fuzzy k-nearest neighbours (FKNN), support vector machine, and linear discriminant analysis) were tested in combination with these methods to determine the methodology with the best performance. This methodology was then modified by implementing the time-dependent fractal dimension (TDFD), differential fractal dimension, and differential signals methods to determine whether the results could be further improved. Katz's method with FKNN yielded the highest classification accuracy of 85%, and a further improvement of 3% was achieved by implementing the TDFD method.
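Katz's method, the best performer here, is easy to state. This sketch follows the common planar-curve formulation (consecutive samples treated as unit-spaced points), which may differ in detail from the variant used in the study.

```python
import math

def katz_fd(series):
    """Katz fractal dimension of a 1-D time series:
    FD = log10(n) / (log10(n) + log10(d / L)), where n is the number of
    steps, L the total curve length, and d the farthest distance of any
    point from the first sample."""
    n = len(series) - 1
    L = sum(math.hypot(1.0, series[i + 1] - series[i]) for i in range(n))
    d = max(math.hypot(i, series[i] - series[0]) for i in range(1, n + 1))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))
```

A straight line has dimension exactly 1, while a rapidly alternating signal scores higher, which is what makes the measure usable as a compact EEG feature.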
Bu, Guochao; Wang, Pei
2016-04-01
Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it supports modeling of the height, volume, biomass, and carbon sequestration potential of a tree through empirical allometric scaling equations. To extract the DBH automatically and accurately from single-scan TLS data within a certain range, we propose an adaptive circle-ellipse fitting method based on a point cloud transect. The proposed method corrects the error incurred by simple circle fitting when a tree is slanted. A slanted tree is detected by circle-ellipse fitting analysis, and the corresponding slant angle is found from the ellipse fitting result. With this information, the DBH of the tree can be recalculated by reslicing the point cloud data at breast height. Artificial stem data simulated by a cylindrical model of leaning trees and scanning data acquired with the RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results showed that the proposed method can detect leaning trees and accurately estimate their DBH.
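The simple circle-fitting step that the method builds on can be sketched with the algebraic (Kasa) least-squares fit on a 2-D stem transect; the adaptive circle-ellipse switching and slant-angle correction of the paper are not reproduced here.

```python
import math

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_circle(pts):
    """Kasa algebraic circle fit: least-squares solution of
    x^2 + y^2 + D*x + E*y + F = 0; returns (cx, cy, radius), so the DBH
    estimate of an upright stem is simply 2 * radius."""
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    D, E, F = solve3(ata, atb)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)
```

For points lying exactly on a circle the fit is exact; for a leaning stem the transect becomes elliptical, which is precisely the bias the paper's ellipse analysis detects and corrects.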
Diaz, P. M. A.; Feitosa, R. Q.; Sanches, I. D.; Costa, G. A. O. P.
2016-06-01
This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is treated as an optimization problem whose goal is to find the transition matrix that maximizes the CRF performance on a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in southeastern Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method substantially outperforms estimates based on joint or conditional class transition probabilities, which rely on training samples.
Bi, Hui; Zhang, Bingchen; Hong, Wen
2016-07-01
The elevation image quality of tomographic synthetic aperture radar (TomoSAR) data depends mainly on the elevation aperture size, the number of baselines, and the baseline distribution. In TomoSAR, because the limited baselines are irregularly distributed, the elevation imaging quality is always unacceptable with the conventional spectral analysis approach. Therefore, for a given limited number of irregular baselines, completing the data for an unobserved virtual uniform baseline distribution should be addressed to improve the quality of spectral analysis-based TomoSAR reconstruction. We propose an Lq (0 < q ≤ 1) regularization-based optimization to this end, before calculating the data for the virtual baseline distribution from the acquisitions and the transformation matrix. Finally, the elevation reflectivity function is recovered using the spectral analysis method based on the estimated data. Compared with results reconstructed only from the limited irregular acquisitions, the image recovered using the dataset with a virtual uniform baseline distribution improves the elevation image quality in an efficient manner.
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
A depth estimation method based on geometric transformation for stereo light microscope.
Fan, Shengli; Yu, Mei; Wang, Yigang; Jiang, Gangyi
2014-01-01
Stereo light microscopes (SLM), with their narrow field of view and shallow depth of field, are widely used in micro-domain research. In this paper, we propose a depth estimation method for micro objects based on geometric transformation. By analyzing the optical imaging geometry, the geometric transformation distance is defined and the depth-distance relation expression is obtained. The parameters of the geometric transformation and the expression are calibrated with calibration board images captured with the aid of a precise motorized stage. The depth of a micro object can then be estimated by calculating the geometric transformation distance. The proposed depth-distance relation expression is verified in an experiment in which the depth map of an Olanzapine tablet surface is reconstructed.
Yong Huang
2017-01-01
Full Text Available Relationships between the radar reflectivity factor and rainfall differ across precipitation cloud systems. In this study, cloud systems are first classified into five categories using radar and satellite data to improve the radar quantitative precipitation estimation (QPE) algorithm. Second, the errors of multi-radar QPE algorithms are assumed to differ between convective and stratiform clouds. QPE data are then derived with the Z-R, Kalman filter (KF), optimum interpolation (OI), Kalman filter plus optimum interpolation (KFOI), and average calibration (AC) methods, based on error analysis over the Huaihe River Basin. For the flood in early July 2007, the KFOI is applied to obtain the QPE product. Applications show that the KFOI can improve the precision of precipitation estimates for multiple precipitation types.
A feedback-based inverse heat transfer method to estimate unperturbed temperatures in wellbores
Espinosa-Paredes, Gilberto [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F. 09340 (Mexico); Espinosa-Martinez, Erick G. [Retorno Quebec 6, Col. Burgos de Cuernavaca 62580, Temixco, Mor. (Mexico)
2009-01-15
This paper presents a feedback-based strategy for solving an inverse heat transfer problem: the estimation of unperturbed formation temperatures (UFT) from measured temperatures in wellbores. The feedback function uses the error between the measured and estimated temperatures during the shut-in process. An inverse heat transfer problem is solved because the UFT represent the unknown initial conditions, while the measured wellbore temperatures represent the particular solution of the PDEs governing heat transfer in the formation and wellbore system. The performance of the method is illustrated via numerical simulations of two wells: (a) oil well FE-1227 from the Gulf of Mexico maritime zone and (b) well CP-0512 from the Cerro Prieto Mexican geothermal field. (author)
Anonymous
2010-01-01
Quantitative estimation of vegetation water content (VWC) using optical remote sensing techniques is helpful in forest fire assessment, agricultural drought monitoring, and crop yield estimation. This paper reviews research advances in VWC retrieval using spectral reflectance, spectral water index, and radiative transfer model (RTM) methods. It also evaluates the reliability of VWC estimation using spectral water indices from observation data and the RTM. Focusing on the two main definitions of VWC, the fuel moisture content (FMC) and the equivalent water thickness (EWT), the retrieval accuracies of FMC and EWT using vegetation water indices are analyzed. Moreover, the measured information and the dataset are used to estimate VWC; the results show significant correlations among three kinds of vegetation water indices (i.e., WSI, NDII, NDWI1640, WI/NDVI) and the canopy FMC of winter wheat (n = 45). Finally, future development directions for VWC detection based on optical remote sensing techniques are summarized.
Comparison of machine-learning methods for above-ground biomass estimation based on Landsat imagery
Wu, Chaofan; Shen, Huanhuan; Shen, Aihua; Deng, Jinsong; Gan, Muye; Zhu, Jinxia; Xu, Hongwei; Wang, Ke
2016-07-01
Biomass is one significant biophysical parameter of a forest ecosystem, and accurate biomass estimation on the regional scale provides important information for carbon-cycle investigation and sustainable forest management. In this study, Landsat satellite imagery data combined with field-based measurements were integrated through comparisons of five regression approaches [stepwise linear regression, K-nearest neighbor, support vector regression, random forest (RF), and stochastic gradient boosting] with two different candidate variable strategies to implement the optimal spatial above-ground biomass (AGB) estimation. The results suggested that RF algorithm exhibited the best performance by 10-fold cross-validation with respect to R2 (0.63) and root-mean-square error (26.44 ton/ha). Consequently, the map of estimated AGB was generated with a mean value of 89.34 ton/ha in northwestern Zhejiang Province, China, with a similar pattern to the distribution mode of local forest species. This research indicates that machine-learning approaches associated with Landsat imagery provide an economical way for biomass estimation. Moreover, ensemble methods using all candidate variables, especially for Landsat images, provide an alternative for regional biomass simulation.
A NOVEL ESTIMATION METHOD OF TIMING OFFSET FOR OFDM BASED WLAN SYSTEMS
Liu Shouyin; Chong Jongwha
2005-01-01
The conventional timing synchronization methods based on time-domain correlation suffer from a timing metric plateau in the Additive White Gaussian Noise (AWGN) channel and from estimation errors in multipath fading channels. To resolve these problems, this paper proposes a novel timing metric using the characteristics of the long training symbols in IEEE 802.11a, and a new timing recovery method based on this metric, for Orthogonal Frequency Division Multiplexing (OFDM)-based WLAN systems. The proposed timing metric is defined as the sum of the absolute values of the imaginary parts of all the subcarrier samples. It exhibits a unique characteristic that is very sensitive to the true synchronization point: it takes its minimum value at the true synchronization point and its maximum around it. The simulation results show that the timing synchronization performance is significantly improved; as a result, the probability of erroneous estimation is lower than 10^-4 when the Signal-to-Noise Ratio (SNR) exceeds 10 dB.
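The proposed metric is simple to sketch: demodulate the candidate window, divide by the known real-valued training subcarriers, and sum the absolute imaginary parts, which vanish only when the window is aligned. The 16-point symbol and training values below are toy stand-ins for the IEEE 802.11a long training sequence, and the naive DFT stands in for the FFT.

```python
import cmath
import math

def ifft_symbol(train_freq):
    """Time-domain training symbol from its (real-valued) subcarrier values."""
    n = len(train_freq)
    return [sum(train_freq[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def timing_metric(rx, d, train_freq):
    """Sum of |Im| of demodulated subcarriers for a window starting at d;
    minimal at the true symbol start, where no residual rotation remains."""
    n = len(train_freq)
    win = rx[d:d + n]
    total = 0.0
    for k, tk in enumerate(train_freq):
        if tk == 0:
            continue  # unused subcarrier
        xk = sum(win[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        total += abs((xk / tk).imag)
    return total

def find_sync(rx, train_freq, search):
    """Pick the candidate offset with the smallest timing metric."""
    return min(search, key=lambda d: timing_metric(rx, d, train_freq))
```

With the training symbol embedded at sample 5, the metric is essentially zero there and clearly positive at misaligned offsets, so the minimum pinpoints the true synchronization point.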
Costate Estimation of PMP-Based Control Strategy for PHEV Using Legendre Pseudospectral Method
Hanbing Wei
2016-01-01
Full Text Available The costate value plays a significant role in the application of a PMP-based control strategy for a PHEV: it is critical for the terminal SOC of the battery at the destination and for the corresponding equivalent fuel consumption. However, it is not convenient to choose an appropriate costate under real driving conditions. In this paper, the PMP-based optimal control problem of a PHEV is converted into a nonlinear programming (NLP) problem. By means of the KKT conditions, the costate can be approximated as the KKT multipliers of the NLP divided by the LGL weights. A general costate estimation approach is thus proposed for a predefined driving condition. A dynamic model was established in Matlab/Simulink to prove the effectiveness of the method. Simulation results demonstrate that the method presented in this paper deduces values closer to the global optimum than a constant initial costate does. This approach can be used for the initial costate and jump condition estimation of PMP-based control strategies for PHEVs.
Power System Real-Time Monitoring by Using PMU-Based Robust State Estimation Method
Zhao, Junbo; Zhang, Gexiang; Das, Kaushik;
2016-01-01
. To be specific, an adaptive weight assignment function to dynamically adjust the measurement weights based on the distance of large unwanted disturbances from the PMU measurements is proposed to increase algorithm robustness. Furthermore, a statistical test-based interpolation matrix H updating judgment strategy...... is proposed. The processed and resynced PMU information is used as a priori information and incorporated into the modified weighted least squares estimation to address the imperfect synchronization between supervisory control and data acquisition measurements and PMU measurements. Finally, the innovation analysis......-based bad data (BD) detection method, which can handle the smearing effect and critical measurement errors, is presented. We evaluate PRSEM using IEEE benchmark test systems and a realistic utility system. The numerical results indicate that, in short computation time, PRSEM can effectively track...
The Interpolation Method for Estimating the Above-Ground Biomass Using Terrestrial-Based Inventory
I Nengah Surati Jaya
2014-09-01
Full Text Available This paper examined several methods for interpolating biomass in logged-over dryland forest using terrestrial-based forest inventory in Labanan, East Kalimantan, and Lamandau, Kotawaringin Barat, Central Kalimantan. The plot distances examined were 1,000−1,050 m for Labanan and 1,000−899 m for Lamandau. The main objective of this study was to find the interpolation method giving the most accurate prediction of the spatial distribution of forest biomass in dryland forest. Two main interpolation approaches were examined: (1) a deterministic approach using the IDW method, and (2) a geostatistical approach using Kriging with spherical, circular, linear, exponential, and Gaussian models. The results at both sites consistently showed that the IDW method was better than the Kriging method for estimating the spatial distribution of biomass. Validation using a chi-square test showed that the IDW interpolation provided accurate biomass estimates. In terms of the percentage mean deviation (MD(%)), the IDW with power parameter p = 2 also gave relatively low values: only 15% for Labanan, East Kalimantan Province, and 17% for Lamandau, Kotawaringin Barat, Central Kalimantan Province. In general, the IDW interpolation method gave better results than Kriging, which yielded MD(%) of about 27% and 21% for the Lamandau and Labanan sites, respectively. Keywords: deterministic, geostatistics, IDW, Kriging, above-ground biomass
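The IDW estimator with power parameter p is straightforward to sketch. The sketch below assumes inventory plots given as (x, y, value) triples; coordinate system and units are whatever the inventory uses.

```python
def idw(known, x, y, p=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value)
    samples, with weights 1 / distance**p; p = 2 matches the setting the
    study found to give low mean deviation."""
    num = den = 0.0
    for xi, yi, v in known:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v              # exact hit on a sample plot
        w = d2 ** (-p / 2.0)      # 1 / distance**p
        num += w * v
        den += w
    return num / den
```

Midway between two plots with equal distances the estimate is their plain average, and it reproduces the sample value exactly at a plot location, both defining properties of IDW.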
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz; Xi, Baike; Feng, Zhe; Dong, Xiquan
2016-08-01
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
Meng, S.; Xie, X.
2014-12-01
Hydrological model performance is usually not as good as expected owing to limited measurements and imperfect parameterization, which is attributable to uncertainties in model parameters and model structures. In applications, a general assumption is held that model parameters are constant under stationary conditions during the simulation period, and the parameters are generally prescribed through calibration with observed data. In reality, however, model parameters related to the physical or conceptual characteristics of a catchment will vary under nonstationary conditions in response to climate transition and land use alteration. These parameter changes are especially evident in long-term hydrological simulations. Therefore, the assumption of constant parameters under nonstationary conditions is inappropriate, and it propagates errors from the parameters to the outputs during simulation and prediction. Even though a few studies have acknowledged such parameter change, little attention has been paid to the estimation of changing parameters. In this study, we employ an ensemble Kalman filter (EnKF) based method to trace parameter changes in real time. Through synthetic experiments, the capability of the EnKF-based method is demonstrated by assimilating runoff observations into a rainfall-runoff model, i.e., the Xinanjiang model. In addition to the stationary condition, three typical nonstationary conditions are considered, i.e., leap, linear and Ω-shaped transitions. To examine the robustness of the method, different errors from rainfall input, modelling and observations are investigated. The shuffled complex evolution (SCE-UA) algorithm is applied under the same conditions for comparison. The results show that the EnKF-based method is capable of capturing the general pattern of the parameter changes even for high levels of uncertainty. It provides better estimates than the SCE-UA method does by taking advantage of real...
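The core of such an EnKF-based parameter tracer fits in a few lines. The sketch below tracks a single parameter k of a toy linear rainfall-runoff model q = k·rain (a deliberate stand-in for the Xinanjiang model, which is far richer); all names and noise levels are illustrative assumptions:

```python
import random

def enkf_track_parameter(rain, q_obs, n_ens=100, k0=0.5,
                         k_spread=0.2, evo_sd=0.02, obs_sd=0.05, seed=1):
    """Trace a time-varying parameter k of q = k * rain with a
    stochastic ensemble Kalman filter.

    A small random-walk perturbation (evo_sd) keeps ensemble spread
    alive so the filter can follow nonstationary parameter changes.
    Returns the ensemble-mean estimate of k at each time step.
    """
    rng = random.Random(seed)
    ens = [k0 + rng.gauss(0, k_spread) for _ in range(n_ens)]
    history = []
    for r, q in zip(rain, q_obs):
        # forecast: perturb parameters, simulate runoff per member
        ens = [k + rng.gauss(0, evo_sd) for k in ens]
        sim = [k * r for k in ens]
        k_mean = sum(ens) / n_ens
        q_mean = sum(sim) / n_ens
        cov_kq = sum((k - k_mean) * (s - q_mean)
                     for k, s in zip(ens, sim)) / (n_ens - 1)
        var_q = sum((s - q_mean) ** 2 for s in sim) / (n_ens - 1)
        gain = cov_kq / (var_q + obs_sd ** 2)
        # analysis: nudge each member toward a perturbed observation
        ens = [k + gain * (q + rng.gauss(0, obs_sd) - s)
               for k, s in zip(ens, sim)]
        history.append(sum(ens) / n_ens)
    return history
```

Starting from a wrong prior (k0 = 0.5), the ensemble mean converges toward the true parameter as runoff observations are assimilated, which is the mechanism the abstract exploits for the leap, linear and Ω-shaped transitions.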
De-interlacing using nonlocal costs and Markov-chain-based estimation of interpolation methods.
Vedadi, Farhang; Shirani, Shahram
2013-04-01
A new method of de-interlacing is proposed. De-interlacing is revisited as the problem of assigning a sequence of interpolation methods (interpolators) to a sequence of missing pixels of an interlaced frame (field). Under this formulation, our de-interlacing algorithm (de-interlacer) undergoes transitions from one interpolation method to another as it moves from one missing pixel position to the horizontally adjacent missing pixel position in a missing row of a field. We assume a discrete countable-state Markov-chain model on the sequence of interpolators (Markov-chain states), which are selected from a user-defined set of candidate interpolators. Estimating the optimum sequence of interpolators under this Markov-chain model requires the definition of an efficient cost function as well as a global optimization technique. Our algorithm is the first to use a nonlocal cost (NLC) scheme. The proposed algorithm uses the NLC not only to measure the fitness of an interpolator at a missing pixel position, but also to derive an approximation for the transition matrix (TM) of the Markov chain of interpolators. The TM in our algorithm is frame-variate, i.e., the algorithm updates the TM for each frame automatically. The algorithm finally uses the Viterbi algorithm to find the globally optimal sequence of interpolators given the defined cost function and the neighboring original pixels at hand. Next, we introduce a new MAP-based formulation that estimates not the best sequence of interpolators but the best interpolator at each missing pixel successively, using the forward-backward algorithm. Simulation results show that, while competitive with each other on different test sequences, the proposed methods (one using the Viterbi and the other the forward-backward algorithm) are superior to recently proposed state-of-the-art de-interlacing algorithms. Finally, we propose motion...
Wang, Z.; Lu, K.; Ye, Y.
2011-01-01
According to the saliency of the permanent magnet synchronous motor (PMSM), information on the rotor position is implied in the behavior of the stator inductances due to the magnetic saturation effect. Research has focused on initial rotor position estimation of the PMSM by injecting modulated pulse voltage vec.... The experimental results show that the proposed method estimates the initial rotor position reliably and efficiently. The method is also simple and achieves satisfactory estimation accuracy....
Kevin S. Laves; Susan C. Loeb
2005-01-01
It is commonly assumed that population estimates derived from trapping small mammals are accurate and unbiased or that estimates derived from different capture methods are comparable. We captured southern flying squirrels (Glaucomys volans) using two methods to study their effect on red-cockaded woodpecker (Picoides borealis) reproductive success. Southern flying...
Zhi-jia LIN; Zhuo ZHUANG BU
2014-01-01
An enriched goal-oriented error estimation method with extended degrees of freedom is developed to estimate the error in the continuum-based shell extended finite element method. It leads to high-quality local error bounds in three-dimensional fracture mechanics simulations that involve enrichments to resolve the singularity at the crack tip. This enriched goal-oriented error estimation makes it possible to evaluate simulations with the continuum-based shell extended finite element method. By comparing the reliability of the stress intensity factor calculation in stretching and bending, the accuracy of the continuum-based shell extended finite element method simulation is evaluated, and the sources of error are discussed.
MR-based water content estimation in cartilage: design and validation of a method
Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen
map based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside cartilage tissue. The amount of water estimated using this method was similar to that obtained by the freeze-dry procedure...... cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained...... data were analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique removes all the water that a tissue contains) and we measured the water they contained. Results: The 37 degree Celsius system and the analysis can be reproduced in a similar way. MR T1...
Laves, Kevin S.; Loeb, Susan C.
2006-01-01
ABSTRACT.—It is commonly assumed that population estimates derived from trapping small mammals are accurate and unbiased or that estimates derived from different capture methods are comparable. We captured southern flying squirrels (Glaucomys volans) using two methods to study their effect on red-cockaded woodpecker (Picoides borealis) reproductive success. Southern flying squirrels were captured at and removed from 30 red-cockaded woodpecker cluster sites during March to July 1994 and 1995 using Sherman traps placed in a grid encompassing a red-cockaded woodpecker nest tree and by hand from red-cockaded woodpecker cavities. Totals of 195 (1994) and 190 (1995) red-cockaded woodpecker cavities were examined at least three times each year. Trappability of southern flying squirrels in Sherman traps was significantly greater in 1995 (1.18%; 22,384 trap nights) than in 1994 (0.42%; 20,384 trap nights), and capture rate of southern flying squirrels in cavities was significantly greater in 1994 (22.7%; 502 cavity inspections) than in 1995 (10.8%; 555 cavity inspections). However, more southern flying squirrels were captured per cavity inspection than per Sherman trap night in both years. Male southern flying squirrels were more likely to be captured from cavities than in Sherman traps in 1994, but not in 1995. Both male and female juveniles were more likely to be captured in cavities than in traps in both years. In 1994 males in reproductive condition were more likely to be captured in cavities than in traps and in 1995 we captured significantly more reproductive females in cavities than in traps. Our data suggest that population estimates based solely on one trapping method may not represent true population size or structure of southern flying squirrels.
Motion estimation using low-band-shift method for wavelet-based moving-picture coding.
Park, H W; Kim, H S
2000-01-01
The discrete wavelet transform (DWT) offers the advantages of multiresolution analysis and subband decomposition, and has been used successfully in image processing. However, the shift-variant property is intrinsic to the decimation process of the wavelet transform, and it makes wavelet-domain motion estimation and compensation inefficient. To overcome the shift-variant property, a low-band-shift method is proposed and a motion estimation and compensation method in the wavelet domain is presented. The proposed method has performance superior to conventional motion estimation methods in terms of the mean absolute difference (MAD) as well as subjective quality. The proposed method can serve as a reference for motion estimation in the wavelet domain, much as full-search block matching does in the spatial domain.
The Description of Shale Reservoir Pore Structure Based on Method of Moments Estimation
Li, Wenjie; Wang, Changcheng; Shi, Zejin; Wei, Yi; Zhou, Huailai; Deng, Kun
2016-01-01
Shale has been considered a good gas reservoir due to its abundant interior nanoscale pores. Thus, the study of the pore structure of shale is of great significance for the evaluation and development of shale oil and gas. To date, the most widely used approaches for studying shale pore structure include image analysis, radiation and fluid invasion methods. Detailed pore structures can be studied intuitively by image analysis and radiation methods, but the results obtained are quite sensitive to sample preparation, equipment performance and experimental operation. In contrast, the fluid invasion method can be used to obtain information on pore size distribution and pore structure, but the relatively simple parameters derived cannot be used to evaluate the pore structure of shale comprehensively and quantitatively. To characterize the nanoscale pore structure of shale reservoirs more effectively and expand current research techniques, we propose a new method based on gas adsorption experimental data and the method of moments to describe the pore structure parameters of shale reservoirs. Combined with the geological mixture empirical distribution and the method-of-moments estimation principle, the new method calculates the characteristic parameters of shale, including the mean pore size (x¯), standard deviation (σ), skewness (Sk) and variation coefficient (c). These values are found by reconstructing the grouping intervals of observation values and optimizing algorithms for eigenvalues. This approach assures a more effective description of the characteristics of nanoscale pore structures. Finally, the new method has been applied to analyze the Yanchang shale in the Ordos Basin (China) and the Longmaxi shale from the Sichuan Basin (China). The results reveal the pore characteristics of shale well, indicating the feasibility of this new method in the study of the pore structure of shale reservoirs. PMID:26992168
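The four characteristic parameters named in this abstract (x̄, σ, Sk, c) are standard grouped-data moments. A generic sketch of their computation from binned observations (the bins and counts below are illustrative, not the paper's adsorption data or its interval-reconstruction step):

```python
def grouped_moments(intervals, freqs):
    """Method-of-moments characteristics from grouped (binned) data.

    intervals: (lower, upper) pore-size bins; freqs: counts per bin.
    Returns mean x̄, standard deviation σ, skewness Sk and variation
    coefficient c, computed from bin midpoints.
    """
    n = sum(freqs)
    mids = [(lo + hi) / 2 for lo, hi in intervals]
    mean = sum(m * f for m, f in zip(mids, freqs)) / n
    m2 = sum(f * (m - mean) ** 2 for m, f in zip(mids, freqs)) / n
    m3 = sum(f * (m - mean) ** 3 for m, f in zip(mids, freqs)) / n
    std = m2 ** 0.5
    skew = m3 / std ** 3 if std > 0 else 0.0    # third standardized moment
    c = std / mean if mean != 0 else float("inf")
    return mean, std, skew, c
```

A symmetric pore-size histogram yields Sk = 0; a long tail of large pores pushes Sk positive, which is the kind of distributional signature the paper uses to characterize shale reservoirs.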
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
Savanevych, V E; Sokovikova, N S; Bezkrovny, M M; Vavilova, I B; Ivashchenko, Yu M; Elenin, L V; Khlamov, S V; Movsesian, Ia S; Dashkova, A M; Pogorelov, A V
2015-01-01
We describe a new iteration method to estimate asteroid coordinates, which is based on the subpixel Gaussian model of a discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, being more flexible in adapting to any form of object image, has high measurement accuracy along with low computational complexity due to a maximum likelihood procedure, which is implemented to obtain the best fit instead of a least-squares method and the Levenberg-Marquardt algorithm for minimisation of the quadratic form. Since 2010, the method has been tested as the basis of our CoLiTec (Collection Light Technology) software, which has been installed at several observatories of the world with the ai...
Case-Based Reasoning Method in Cost Estimation of Drilling Wells
Hossein Shams Mianaei; Seyed Hossein Iranmanesh
2013-01-01
The aim of the study is the cost estimation of drilling wells using the Case-Based Reasoning (CBR) method, which is built on the idea of reusing solutions of previously solved problems in order to solve new, similar problems. In companies or organizations where cost estimation, scheduling, design, planning and project activities ...
Novel method of ordinal bearing estimation for more sources based on oblique projector
Sun Wei; Bai Jianlin; Wang Kai
2009-01-01
A novel direction of arrival (DOA) estimation method is proposed for the case where uncorrelated, correlated, and coherent sources coexist in a color noise field. The uncorrelated and correlated sources are first estimated using a conventional spatial spectrum estimation method; then the noise and uncorrelated sources, which have Toeplitz structure, are eliminated by differencing; finally, by exploiting the properties of oblique projection, the contributions of correlated sources are eliminated from the covariance matrix so that only the coherent sources remain. The coherent sources can then be estimated by modified spatial smoothing. The number of sources resolved by this approach can exceed the number of array elements without repeatedly estimating correlated sources. Simulation results demonstrate the effectiveness and efficiency of the proposed method.
Suh, Jong Hwan
2016-01-01
In recent years, the anonymous nature of the Internet has made it difficult to detect manipulated user reputations in social media, as well as to ensure the quality of users and their posts. To deal with this, this study designs and examines an automatic approach that adopts writing style features to estimate user reputations in social media. Under varying ways of defining Good and Bad classes of user reputations based on the collected data, it evaluates the classification performance of state-of-the-art methods: four writing style features, i.e. lexical, syntactic, structural, and content-specific, and eight classification techniques, i.e. four base learners (C4.5, Neural Network (NN), Support Vector Machine (SVM), and Naïve Bayes (NB)) and four Random Subspace (RS) ensemble methods based on the four base learners. With South Korea's Web forum, Daum Agora, selected as the test bed, the experimental results show that the configuration of the full feature set containing content-specific features with RS-SVM (combining RS and SVM) gives the best classification accuracy when the test-bed poster reputations are segmented strictly into Good and Bad classes by the portfolio approach. Pairwise t tests on accuracy confirm two expectations from the literature review: first, feature sets that add content-specific features outperform the others; second, ensemble learning methods are more viable than base learners. Moreover, among the four ways of defining the classes of user reputations, i.e. like, dislike, sum, and portfolio, the results show that the portfolio approach gives the highest accuracy.
Estimating the Capacity of Urban Transportation Networks with an Improved Sensitivity Based Method
Muqing Du
2015-01-01
Full Text Available The throughput of a given transportation network is always of interest to the traffic administration, in order to evaluate the benefit of a transportation construction or expansion project before its implementation. The model of transportation network capacity formulated as a mathematical program with equilibrium constraints (MPEC) well defines this problem. For practical applications, a modified sensitivity-analysis-based (SAB) method is developed to estimate the solution of this bilevel model. The highly efficient origin-based (OB) algorithm is extended for the precise solution of the combined model, which is integrated in the network capacity model. The sensitivity analysis approach is also modified to simplify the inversion of the Jacobian matrix in large-scale problems. The solution produced in every iteration of SAB is constrained to be feasible to guarantee the success of the heuristic search. The numerical experiments show that the accuracy of the derivatives used for the linear approximation can significantly affect the convergence of the SAB method. The results also show that the proposed method can obtain good suboptimal solutions from different starting points in the test examples.
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations while guaranteeing estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed times all demonstrate the validity of the proposed DDM.
Su, Ying; Wang, Xiulin; Li, Keqiang; Liang, Shengkang; Qian, Guodong; Jin, Hong; Dai, Aiquan
2014-09-01
At present, the monitoring network of China cannot provide sufficient data to estimate land-based pollutant loads entering the sea, and estimation methods are used imprecisely. In this study, the selection of monitoring stations, monitoring frequency, and pollutant load estimation methods was studied taking Qingdao City, a typical coastal city in China, as an example. Land-based pollutant loads from Qingdao were estimated, and load distribution, density, and composition were analyzed to identify the key pollution source regions (SRs) that need to be monitored and controlled. Results show that the administrative land area of Qingdao can be divided into 25 sea-sink source regions (SSRs). A total of 14 more rivers and 62 industrial enterprises should be monitored to determine the comprehensive pollutant loads of the city. Furthermore, the monitoring frequency of rivers should not be less than three times per year; a monitoring frequency of five or more times is preferable. The pollutant load estimates obtained with different estimation methods vary substantially; estimates using ratio-based methods were 10 and 22 % higher than those using monitoring-based methods in terms of chemical oxygen demand (COD) and total nitrogen (TN), respectively. Non-point sources contributed the majority of the pollutant loads, at about 70 % of the total COD and 60 % of the total TN.
Hardware design and implementation of fast DOA estimation method based on multicore DSP
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The platform shows several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage and high-speed data transmission, which enable it to meet the constraints of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on these timing statistics, we present a parallel processing strategy that distributes the task of DOA estimation across the cores of the platform. Experimental results demonstrate that the processing capability of the platform meets the constraints of real-time DOA estimation.
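The real-valued MUSIC estimator used in this record is beyond an abstract-sized sketch, but the DOA grid search it accelerates can be illustrated with the simplest estimator of the same family, a conventional delay-and-sum beamforming scan over a uniform linear array. This is explicitly not the paper's algorithm (MUSIC resolves much closer sources); names and parameters are ours:

```python
import cmath
import math

def beamform_doa(snapshots, d_over_lambda=0.5, grid_deg=range(-90, 91)):
    """Conventional beamforming DOA scan for a uniform linear array.

    snapshots: list of per-snapshot complex element vectors.
    Returns the grid angle (degrees) maximizing average steered power.
    """
    m = len(snapshots[0])
    best_angle, best_power = None, -1.0
    for ang in grid_deg:
        # inter-element phase shift for a plane wave from `ang`
        phase = 2 * math.pi * d_over_lambda * math.sin(math.radians(ang))
        steer = [cmath.exp(1j * phase * k) for k in range(m)]
        power = 0.0
        for x in snapshots:
            y = sum(s.conjugate() * xi for s, xi in zip(steer, x))
            power += abs(y) ** 2            # steered output power
        if power > best_power:
            best_angle, best_power = ang, power
    return best_angle
```

The per-angle steering computations are independent, which is exactly the property that makes the grid search easy to distribute across DSP cores as described in the abstract.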
A Novel OD Estimation Method Based on Automatic Vehicle Identification Data
Sun, Jian; Feng, Yu
With the development and application of Automatic Vehicle Identification (AVI) technologies, a novel high-resolution OD estimation method is proposed based on AVI detector information. Four detected categories (Ox + Dy, Ox/Dy + Path(s), Ox/Dy, Path(s)) are distinguished in the first step. Then the initial OD matrix is updated using the Ox + Dy sample information, taking AVI detector errors into account. Borrowing from particle filtering, the link-path relationship data are revised using the last three categories of information based on Bayesian inference, and the possible trajectories and ODs are finally determined using a Monte Carlo random process. Finally, in line with the current application of video detectors in Shanghai, the North-South expressway was selected as the testbed, including 17 OD pairs and 9 AVI detectors. The results show that the calculated average relative error is 12.09% under the constraints that the simulation error is under 15% and the detector error is about 10%. They also show that this method is highly efficient and can fully use partial vehicle trajectories, which satisfies dynamic traffic management applications in practice.
Interpolation method for live weight estimation based on age in Japanese quails
Senol Celik
Full Text Available ABSTRACT The objective of this study was to demonstrate live weight estimation based on age by using the Newton interpolation method for male and female quails over seven weeks of fattening. A total of 138 day-old quail chicks were used in the study. The study used a 6th-degree polynomial interpolation for the function values obtained at seven equal intervals from 7 to 49 days. Live weight increase was predicted for male and female quails between the 7th and 49th days using Newton interpolation. Daily live weight increase for male and female quails based on observed live weights was determined. Female quails displayed more live weight increase after the 19th day compared with males. Average live weight increase until the 49th day was 3.81 g for male quails and 4.63 g for females. The highest live weight increase was observed during the fourth week for all quails. The sum of squared errors and the coefficient of determination (R2) for fit of the model were calculated and the F test was performed. For both male and female quails, the F value obtained by Newton interpolation was very large, the sum of squared errors was approximately zero, and R2 was 0.999. The interpolation method is suitable for breeding studies.
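Newton's divided-difference interpolation, as used in this study, is a classical construction: with seven weekly (age, weight) points it yields exactly the 6th-degree polynomial the abstract describes. A generic sketch (the data in the usage test are illustrative, not the paper's quail weights):

```python
def newton_interpolator(xs, ys):
    """Build the Newton divided-difference interpolating polynomial
    through the points (xs[i], ys[i]); returns a callable p(x).

    With n points, p is the unique polynomial of degree <= n-1
    passing through all of them.
    """
    n = len(xs)
    coef = list(ys)                      # divided-difference table, in place
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])

    def p(x):
        # Horner-like evaluation of the Newton form
        acc = coef[-1]
        for i in range(n - 2, -1, -1):
            acc = acc * (x - xs[i]) + coef[i]
        return acc

    return p
```

Because the polynomial passes through every observed point exactly, the in-sample sum of squared errors is zero and R² is essentially 1, which matches the fit statistics reported in the abstract; the caveat is that a 6th-degree interpolant can oscillate between and beyond the data points.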
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows us to select an estimator from a collection of models. Then, we present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates by continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the kind of coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method that is developed, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low calculating complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit instead of a least-squares method and Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1(Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
Liu Estimator Based on An M Estimator
Hatice ŞAMKAR
2010-01-01
Full Text Available Objective: In multiple linear regression analysis, multicollinearity and outliers are two main problems. In the presence of multicollinearity, biased estimation methods such as ridge regression, the Stein estimator, principal component regression and the Liu estimator are used. On the other hand, when outliers exist in the data, the use of robust estimators that reduce the effect of outliers is preferred. Material and Methods: In this study, to cope with the combined problem of multicollinearity and outliers, the Liu estimator based on an M estimator (Liu M estimator) is studied. In addition, the mean square error (MSE) criterion is used to compare the Liu M estimator with the Liu estimator based on the ordinary least squares (OLS) estimator. Results: OLS, Huber M, Liu and Liu M estimates, and the MSEs of these estimates, were calculated for a data set taken from a study of determinants of physical fitness. The Liu M estimator gave the best performance on the data set: both MSE(β̂LM) = 0.0078 < MSE(β̂M) = 0.0508 and MSE(β̂LM) = 0.0078 < MSE(β̂L) = 0.0085. Conclusion: When there are both outliers and multicollinearity in a dataset, using robust estimators reduces the effect of outliers but does not solve the multicollinearity problem. On the other hand, using biased methods solves the multicollinearity problem, but the effect of outliers on the estimates remains. In the occurrence of both multicollinearity and outliers in a dataset, it has been shown that combining the methods designed to deal with these problems is better than using them individually.
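The two ingredients this abstract combines can be sketched for the simplest one-predictor case: a Huber M estimate of the slope (robust to outliers) followed by Liu shrinkage, where (s + d)/(s + 1) with s = Σx² is the scalar form of (X′X + I)⁻¹(X′X + dI). The paper works with multiple predictors; the data, d and tuning constant below are illustrative assumptions:

```python
def liu_m_slope(x, y, d=0.5, c=1.345, iters=20):
    """One-predictor sketch of a Liu estimator built on an M estimator.

    Step 1: Huber-weighted IRLS slope through the origin, with a MAD
    scale estimate, downweighting large residuals (outliers in y).
    Step 2: Liu shrinkage (s + d)/(s + 1), the scalar analogue of
    (X'X + I)^{-1} (X'X + d I) applied to the robust slope.
    """
    n = len(x)
    beta = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    for _ in range(iters):
        resid = [yi - beta * xi for xi, yi in zip(x, y)]
        # robust scale: median absolute residual / 0.6745
        scale = sorted(abs(r) for r in resid)[n // 2] / 0.6745 or 1.0
        w = [1.0 if abs(r) <= c * scale else c * scale / abs(r)
             for r in resid]
        beta = (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
                / sum(wi * xi * xi for wi, xi in zip(w, x)))
    s = sum(xi * xi for xi in x)
    return (s + d) / (s + 1.0) * beta   # Liu shrinkage of the M slope
```

On data with a single gross outlier, the IRLS loop recovers the clean slope while plain OLS is pulled far off, mirroring the MSE ordering reported in the abstract.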
A Five-Parameter Wind Field Estimation Method Based on Spherical Upwind Lidar Measurements
Kapp, S.; Kühn, M.
2014-12-01
Turbine-mounted scanning lidar systems of the focussed continuous-wave type are considered for sensing approaching wind fields. The quality of the wind information depends on the lidar technology itself, but also substantially on the scanning technique and reconstruction algorithm. In this paper a five-parameter wind field model comprising mean wind speed, vertical and horizontal linear shear, and homogeneous direction angles is introduced. A corresponding parameter estimation method is developed based on the assumption of upwind lidar measurements scanned over spherical segments. As a main advantage of this method, all parameters relevant to wind turbine control can be provided. Moreover, the ability to distinguish between shear and skew potentially increases the quality of the resulting feedforward pitch angles when compared with three-parameter methods. It is shown that a minimum of three measurements, each in turn from two independent directions, is necessary for the application of the algorithm, whereas simpler measurements, each taken from only one direction, are not sufficient.
Kyoung Ae Kong
2016-04-01
Full Text Available Background: Smoking is a major modifiable risk factor for premature mortality. Estimating the smoking-attributable burden is important for public health policy. Typically, prevalence-based or smoking impact ratio (SIR) based methods are used to derive estimates, but there is controversy over which method is more appropriate for country-specific estimates. We compared smoking-attributable fractions (SAFs) of deaths estimated by these two methods. Methods: To estimate SAFs in 2012, we used several different prevalence-based approaches with no lag and with 10- and 20-year lags. For the SIR-based method, we obtained lung cancer mortality rates from the Korean Cancer Prevention Study (KCPS) and from the United States-based Cancer Prevention Study-II (CPS-II). The relative risks for the diseases associated with smoking were also obtained from these cohort studies. Results: For males, SAFs obtained using KCPS-derived SIRs were similar to those obtained using prevalence-based methods. For females, SAFs obtained using KCPS-derived SIRs were markedly greater than all prevalence-based SAFs. Differences in prevalence-based SAFs by time-lag period were minimal among males, but SAFs obtained using longer-lagged prevalence periods were significantly larger among females. SAFs obtained using CPS-II-based SIRs were lower than KCPS-based SAFs by >15 percentage points for most diseases, with the exceptions of lung cancer and chronic obstructive pulmonary disease. Conclusions: SAFs obtained using prevalence- and SIR-based methods were similar for males. However, neither prevalence-based nor SIR-based methods resulted in precise SAFs among females. The characteristics of the study population should be carefully considered when choosing a method to estimate SAF.
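The two estimation routes compared in this abstract both reduce to Levin's attributable-fraction formula; the SIR method simply replaces the surveyed smoking prevalence with an "effective" prevalence inferred from excess lung cancer mortality. A minimal sketch (one common form of the SIR; the numbers in the test are illustrative, not the study's Korean data):

```python
def saf_prevalence(p_current, p_former, rr_current, rr_former):
    """Prevalence-based smoking-attributable fraction (Levin's formula
    extended to current and former smokers):

        SAF = E / (1 + E),
        E = p_c*(RR_c - 1) + p_f*(RR_f - 1)
    """
    excess = p_current * (rr_current - 1) + p_former * (rr_former - 1)
    return excess / (1 + excess)

def smoking_impact_ratio(lung_ca_observed, lung_ca_never, rr_lung):
    """Smoking impact ratio: the effective smoking prevalence implied
    by lung cancer mortality -- excess observed lung cancer mortality
    scaled by the excess lung cancer risk of smokers.  The result is
    plugged into the same Levin formula in place of prevalence."""
    return (lung_ca_observed - lung_ca_never) / (lung_ca_never * (rr_lung - 1))
```

When self-reported prevalence understates true lifetime exposure, as the abstract suggests for Korean women, the SIR route yields the larger SAF because it is anchored to observed lung cancer mortality rather than to surveys.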
Methods for estimating the semivariogram
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have compared different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...... is insensitive to the choice of estimation method, but also that the uncertainties of predictions were reduced when applying maximum likelihood....
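As a self-contained illustration of the least-squares side of such a comparison (on synthetic one-dimensional data, not the Kattegat salinity set), the classical Matheron estimator of the empirical semivariogram can be fitted with an exponential model; the grid search here is a deliberately transparent stand-in for a proper least-squares solver:

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Matheron's classical estimator: half the mean squared increment
    over all point pairs whose separation is within tol of each lag."""
    d = np.abs(coords[:, None] - coords[None, :])
    sq = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)  # upper triangle: each pair once
        gamma.append(0.5 * sq[mask].mean())
    return np.array(gamma)

def fit_exponential(lags, gamma, sills, ranges):
    """Least-squares fit of gamma(h) = sill * (1 - exp(-h / range)),
    done by brute-force grid search for transparency."""
    best, best_err = None, np.inf
    for s in sills:
        for r in ranges:
            err = np.sum((s * (1.0 - np.exp(-lags / r)) - gamma) ** 2)
            if err < best_err:
                best, best_err = (s, r), err
    return best

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 100.0, 200))              # 1-D sampling locations
z = np.sin(x / 10.0) + 0.1 * rng.standard_normal(200)  # spatially correlated field
lags = np.arange(2.0, 30.0, 2.0)
gamma = empirical_semivariogram(x, z, lags, tol=1.0)
sill, rnge = fit_exponential(lags, gamma,
                             np.linspace(0.1, 2.0, 40), np.linspace(1.0, 40.0, 40))
```

The model form and data here are assumptions for illustration; the paper compares six least-squares variants against maximum likelihood, which instead maximizes the Gaussian likelihood of the data directly.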
Permanent Magnet Flux Online Estimation Based on Zero-Voltage Vector Injection Method
Xie, Ge; Lu, Kaiyuan; Dwivedi, Sanjeet Kumar
2015-01-01
In this paper, a simple signal injection method is proposed for sensorless control of a PMSM at low speed, which ideally requires only one voltage vector for position estimation. The proposed method is easy to implement, resulting in a low computation burden. No filters are needed for extracting the h...
A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays
Fenggang Sun
2016-08-01
Full Text Available The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array, which consists of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and the candidate DOAs are recovered according to the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity–performance tradeoff as compared to other existing methods.
A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays.
Sun, Fenggang; Gao, Bin; Chen, Lizhen; Lan, Peng
2016-08-25
The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array, which consists of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and the candidate DOAs are recovered according to the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff as compared to other existing methods.
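The per-subarray building block of this approach is standard ESPRIT on a uniform linear array. The sketch below implements plain least-squares ESPRIT on a single simulated ULA (it omits the co-prime angle mapping and combining step of the paper); array size, source angles, and noise level are made-up test values:

```python
import numpy as np

def esprit_doa(X, n_sources, spacing=0.5):
    """Least-squares ESPRIT on a uniform linear array.
    X: (n_sensors, n_snapshots) complex array output; spacing in wavelengths."""
    R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    Es = vecs[:, -n_sources:]              # signal subspace
    # rotational invariance between the two maximally overlapping subarrays
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / (2.0 * np.pi * spacing)))

# two uncorrelated sources at -20 and +30 degrees on an 8-element ULA
rng = np.random.default_rng(1)
n, snaps = 8, 500
doas = np.radians([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n), np.sin(doas)))
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
noise = 0.05 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
est = np.sort(esprit_doa(A @ S + noise, 2))
```

In the co-prime setting, this routine would be run once per sparse subarray (with the appropriate inter-element spacing), after which the ambiguous candidate angles are intersected across the two subarrays.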
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and a high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches.
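The recommended trapezoidal variant can be sketched on a hypothetical one-parameter toy model dx/dt = -θx (not the paper's HIV model), with a moving average standing in for the penalized-spline smoother:

```python
import numpy as np

# Toy model: dx/dt = -theta * x with theta = 0.7 (hypothetical values);
# observations are noisy samples of the analytic solution.
rng = np.random.default_rng(2)
theta_true, h = 0.7, 0.05
t = np.arange(0.0, 5.0, h)
x_obs = 3.0 * np.exp(-theta_true * t) + 0.01 * rng.standard_normal(t.size)

# Step 1: smooth the noisy state (a moving average stands in for the
# penalized-spline smoother used by the authors).
k = 5
xs = np.convolve(x_obs, np.ones(k) / k, mode="valid")

# Step 2: plug the smoothed states into the trapezoidal rule
#   x_{i+1} - x_i = -(theta * h / 2) * (x_i + x_{i+1})
# and solve the resulting one-parameter linear regression in closed form.
y = xs[1:] - xs[:-1]
z = 0.5 * h * (xs[1:] + xs[:-1])
theta_hat = -(z @ y) / (z @ z)
```

For multi-parameter ODE systems the same plug-in step yields a (possibly nonlinear) regression problem in the parameter vector rather than this scalar closed form.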
Residual-based a posteriori error estimation for multipoint flux mixed finite element methods
Du, Shaohong
2015-10-26
A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.
Wang, Z.; Lu, K.; Ye, Y.
2011-01-01
According to the saliency of the permanent magnet synchronous motor (PMSM), information on the rotor position is embedded in the stator inductances due to the magnetic saturation effect. This research focused on initial rotor position estimation of a PMSM by injecting modulated pulse voltage...... vectors. The relationship between the inductance variations and voltage vector positions was studied. The effect of inductance variation on estimation accuracy was studied as well. An improved five-pulse injection method was proposed to improve the estimation accuracy by choosing optimized voltage vectors...
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-02-20
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in active sonar detection systems. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
An Entropy-Based Propagation Speed Estimation Method for Near-Field Subsurface Radar Imaging
Pistorius Stephen
2010-01-01
Full Text Available During the last forty years, Subsurface Radar (SR) has been used in an increasing number of noninvasive/nondestructive imaging applications, ranging from landmine detection to breast imaging. To properly assess the dimensions and locations of the targets within the scan area, SR data sets have to be reconstructed. This process usually requires knowledge of the propagation speed in the medium, which is usually obtained by performing an offline measurement on a representative sample of the materials that form the scan region. Nevertheless, in some novel near-field SR scenarios, such as Microwave Wood Inspection (MWI) and Breast Microwave Radar (BMR), the extraction of a representative sample is not an option due to the noninvasive requirements of the application. A novel technique to determine the propagation speed of the medium based on the use of an information theory metric is proposed in this paper. The proposed method uses the Shannon entropy of the reconstructed images as the focal quality metric to generate an estimate of the propagation speed in a given scan region. The performance of the proposed algorithm was assessed using data sets collected from experimental setups that mimic the dielectric contrast found in BMR and MWI scenarios. The proposed method yielded accurate results and exhibited an execution time on the order of seconds.
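The focal-quality metric itself is easy to illustrate: a well-focused reconstruction concentrates energy in few pixels and therefore has low Shannon entropy, while a defocused one smears it. The sketch below uses synthetic Gaussian blobs in place of actual SR reconstructions, so the "speed search" reduces to picking the sharpest image:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy of a nonnegative image, treating its normalized
    magnitudes as a probability distribution (low entropy = well focused)."""
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A focused reconstruction concentrates energy in one pixel; a defocused
# one smears the same energy over the whole scene.
focused = np.zeros((64, 64))
focused[32, 32] = 1.0
defocused = np.ones((64, 64))

# Stand-in for the speed search: reconstructions at different candidate
# propagation speeds are mimicked by Gaussian blobs of increasing blur;
# the estimate is the candidate whose image minimizes the entropy.
yy, xx = np.mgrid[0:64, 0:64]
widths = [1.0, 3.0, 9.0]
entropies = [shannon_entropy(np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * w * w)))
             for w in widths]
best = widths[int(np.argmin(entropies))]
```

In the actual method, each candidate propagation speed produces a full radar reconstruction, and the speed whose reconstruction minimizes this entropy is selected.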
Nishii, Taiki; Komada, Satoshi; Yashiro, Daisuke; Hirai, Junji
2013-01-01
Conventional estimation methods distribute tension to muscles by solving optimization problems, because the system is redundant. The theory of functionally different effective muscles, based on three antagonistic pairs of muscle groups in limbs, has made it possible to calculate the maximum joint torque of each pair, i.e., the functionally different effective muscle force. Based on this theory, a method to estimate muscular tension has been proposed, in which the joint torque of each muscle group is derived by multiplying the functionally different effective muscle force, the muscular activity pattern for the direction of tip force, and the ratio of tip force to maximum output force. The estimation accuracy of this method is as good as that of Crowninshield's method; moreover, it also reduces the computation time when the estimation concerns a selected muscle group.
Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation
Bourennane Salah
2007-01-01
Full Text Available A new method for simultaneous range and bearing estimation of buried objects in the presence of unknown Gaussian noise is proposed. This method uses the MUSIC algorithm with the noise subspace estimated from the slice fourth-order cumulant matrix of the received data. The higher-order statistics aim at removing the unknown additive Gaussian noise. The bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed that includes the acoustic scattering model at each sensor. The range and bearing of the objects at each sensor are expressed as functions of those at the first sensor. This improves object localization in both the near-field and far-field zones of the sensor array. Finally, the performance of the proposed method is validated on data recorded during experiments in a water tank.
Estimation Method of Path-Selecting Proportion for Urban Rail Transit Based on AFC Data
Feng Zhou
2015-01-01
Full Text Available With the successful application of automatic fare collection (AFC) systems in urban rail transit (URT), passengers' travel times are recorded, which makes it possible to analyze passengers' path selection from AFC data. In this paper, the distribution characteristics of the components of travel time were analyzed, and an estimation method for the path-selecting proportion was proposed. This method uses travel time data of single-path ODs from the AFC system to estimate the distribution parameters of the components of travel time, mainly including entry walking time (ewt), exit walking time (exwt), and transfer walking time (twt). Then, for multipath ODs, the distribution of each path's travel time can be calculated given its components' distributions. After that, each path's path-selecting proportion can be estimated. Finally, simulation experiments were designed to verify the estimation method, and the results show that the error rate is less than 2%. Compared with traditional flow assignment models, the estimation method can significantly reduce the cost of manual surveys and provides a new way to calculate the path-selecting proportion for URT.
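A minimal sketch of the proportion-estimation idea, under the simplifying assumption that each path's travel-time distribution is Gaussian with known parameters (all numbers below are hypothetical): the path-selecting proportion is then the mixing weight of the resulting mixture, which can be fitted to the observed AFC travel times by EM:

```python
import numpy as np

# Hypothetical two-path OD pair: per-path travel-time distributions (min)
# assumed known, e.g. assembled from ewt/exwt/twt estimated on single-path ODs.
rng = np.random.default_rng(3)
true_p = 0.7                      # 70% of riders take path 1 (ground truth)
n = 5000
pick = rng.random(n) < true_p
times = np.where(pick, rng.normal(30.0, 2.0, n), rng.normal(38.0, 3.0, n))

def npdf(x, mu, sd):
    """Gaussian density, vectorized."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# EM for the mixing weight only (component parameters held fixed):
p = 0.5                           # initial guess of path 1's proportion
for _ in range(50):
    r = p * npdf(times, 30.0, 2.0)
    r = r / (r + (1.0 - p) * npdf(times, 38.0, 3.0))  # responsibilities
    p = r.mean()                                      # updated proportion
```

The paper's method additionally estimates the component distributions themselves from single-path ODs before this assignment step.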
Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew
2017-01-11
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in a phantom. Moreover, in a cohort of 14 healthy subjects, the STP and STTH methods improved both the shear wave velocity measurement precision and the.
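The group-velocity fit that these peak-selection schemes feed can be sketched on synthetic plane-wave motion data: keep only spatiotemporal samples above an amplitude threshold (the STTH idea) and fit a line through the retained peaks. All parameters below are made up, not the paper's imaging settings:

```python
import numpy as np

# Synthetic shear wave: a Gaussian pulse traveling at c_true across the ROI.
c_true = 2.0                                  # shear wave speed, m/s
x = np.linspace(0.0, 0.02, 40)                # lateral positions, m
t = np.linspace(0.0, 0.015, 150)              # slow time, s
T, X = np.meshgrid(t, x)                      # motion indexed [space, time]
motion = np.exp(-((X - c_true * T) / 0.002) ** 2)
rng = np.random.default_rng(4)
motion += 0.05 * rng.standard_normal(motion.shape)

# Amplitude thresholding: keep only high-motion spatiotemporal samples,
# then fit x ~ c * t to the retained peaks by least squares.
xi, ti = np.where(motion > 0.8)
slope = np.polyfit(t[ti], x[xi], 1)[0]        # estimated group velocity, m/s
```

A TTP-style estimator would instead take only the time of the single maximum per spatial location; thresholding over the full space-time plane retains more samples and is less sensitive to isolated noisy peaks.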
Ishiyama, Gail; Geiger, Christopher; Lopez, Ivan A; Ishiyama, Akira
2011-03-15
The objective of this study was to make direct comparisons of the estimates of spiral and vestibular neuronal number in human archival temporal bone specimens using design-based stereology with those using the assumption-based Abercrombie method. Archival human temporal bone specimens from subjects ranging in age from 16 to 80 years old were used. The number of spiral and vestibular ganglia neurons within the counting areas was estimated using the stereology-optical disector technique and compared with estimates obtained using the assumption-based Abercrombie method on the same specimens. Using the optical disector method, there was an average of 41,480 (coefficient of variation=0.12) spiral ganglia neurons and 28,930 (coefficient of variation=0.15) vestibular ganglia neurons. The mean coefficient of error was 0.076 for the spiral ganglion estimates, and 0.091 for the vestibular ganglion estimates. Using the Abercrombie correction method of two-dimensional analysis, an average of 23,110 (coefficient of variation of 0.08) spiral ganglia neurons, and 16,225 vestibular ganglia neurons (coefficient of variation of 0.15) was obtained. We found that there was a large disparity between the estimates with a significant 44% underestimation of the spiral and vestibular ganglion counts obtained using the Abercrombie method when compared with estimates using the optical disector method.
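The assumption-based correction referred to here is the standard Abercrombie formula N = n·T/(T + D), where n is the raw two-dimensional profile count, T the section thickness, and D the mean nuclear (or nucleolar) diameter; the numbers below are illustrative, not taken from the study:

```python
# Abercrombie correction: converts raw profile counts from sectioned tissue
# into an estimate of the true particle number.
def abercrombie(raw_count, section_thickness_um, mean_diameter_um):
    return raw_count * section_thickness_um / (section_thickness_um + mean_diameter_um)

# e.g. 60,000 raw profiles counted in 20 um sections, 12 um mean nuclear diameter
corrected = abercrombie(60000, 20.0, 12.0)
```

Because the correction assumes a single representative particle diameter and uniform sectioning, it is sensitive to shrinkage and diameter mis-measurement, which is consistent with the underestimation relative to the design-based optical disector reported above.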
Anonymous
2007-01-01
Representing earthquake ground motion as a time-varying ARMA model, the instantaneous spectrum can be determined solely by the time-varying coefficients of the corresponding ARMA model. In this paper, an unscented Kalman filter is applied to estimate the time-varying coefficients. The comparison between the estimation results of the unscented Kalman filter and Kalman filter methods shows that the unscented Kalman filter can represent the distribution of the spectral peaks in the time-frequency plane more precisely than the Kalman filter, and its time and frequency resolution is finer, which ensures its better ability to track the local properties of earthquake ground motions and to identify systems with nonlinearity or abruptness. Moreover, the estimation results of ARMA models with different orders indicate that the theoretical frequency resolving power of the ARMA model, which was usually ignored in previous studies, has a great effect on the estimation precision of the instantaneous spectrum and should be taken as one of the key factors in the order selection of the ARMA model.
A SO(3)-invariant variational method for depth field estimation based on inertial and camera data
Zarrouati, Nadege; Rouchon, Pierre
2011-01-01
In this paper, we use known camera motion associated with a video sequence of a static scene in order to estimate and incrementally refine the surrounding depth field. We exploit the SO(3)-invariance of the brightness and depth field dynamics to customize standard image processing techniques. Inspired by the Horn-Schunck method, we propose an SO(3)-invariant cost minimized by the depth field. At each time, this provides a diffusion equation on the unit Riemannian sphere that characterizes the estimated depth field. Written in pinhole coordinates, this scalar diffusion equation is numerically solved to provide, in real time, a depth field estimation of the entire field of view. On synthetic data, quantitative comparison with asymptotic observers merging direct optical flow estimation (by the Horn-Schunck and TV-L1 methods) and camera motion illustrates the performance of the proposed method. Implementation on a real sequence of images shows that these estimations are accurate in regions where the depth field is continuous...
Kohei Arai
2016-10-01
Full Text Available A method for Near-Infrared (NIR) reflectance estimation from visible camera data, based on regression, is proposed for Normalized Difference Vegetation Index (NDVI) estimation, together with its application to insect damage detection in rice paddy fields. Through experiments at rice paddy fields situated at the Saga Prefectural Agriculture Research Institute (SPARI) in Saga city, Kyushu, Japan, it is found that there is a high correlation between NIR reflectance and green-band reflectance. Therefore, it is possible to estimate NIR reflectance from visible camera data, which makes it possible to estimate NDVI from drone-mounted visible camera data. As the protein content in rice crops is highly correlated with the NIR intensity, or reflectance, of rice leaves, it is possible to estimate rice crop quality with drone-based visible camera data.
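The regression step can be sketched as follows, with made-up reflectance values and coefficients (an actual calibration must come from paired field measurements): fit NIR against the green band, predict NIR from green, then form NDVI with the measured red band:

```python
import numpy as np

# Synthetic calibration data (hypothetical coefficients, not SPARI values):
# green and red band reflectances plus a "true" NIR correlated with green.
rng = np.random.default_rng(5)
green = rng.uniform(0.05, 0.30, 200)
nir = 1.8 * green + 0.04 + 0.01 * rng.standard_normal(200)
red = rng.uniform(0.03, 0.10, 200)

# Linear regression of NIR on green, then NDVI from the predicted NIR:
#   NDVI = (NIR - red) / (NIR + red)
slope, intercept = np.polyfit(green, nir, 1)
nir_hat = slope * green + intercept
ndvi = (nir_hat - red) / (nir_hat + red)
```

In practice the regression would be calibrated per sensor and lighting condition before being applied to drone imagery.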
Shielding and activity estimator for template-based nuclide identification methods
Nelson, Karl Einar
2013-04-09
According to one embodiment, a method for estimating an activity of one or more radio-nuclides includes receiving one or more templates, the one or more templates corresponding to one or more radio-nuclides which contribute to a probable solution; receiving one or more weighting factors, each weighting factor representing a contribution of one radio-nuclide to the probable solution; computing an effective areal density for each of the one or more radio-nuclides; computing an effective atomic number (Z) for each of the one or more radio-nuclides; computing an effective metric for each of the one or more radio-nuclides; and computing an estimated activity for each of the one or more radio-nuclides. In other embodiments, computer program products, systems, and other methods are presented for estimating an activity of one or more radio-nuclides.
WANG Zi-yang; WU Gang; CHEN Wei
2007-01-01
A new model predictive control (MPC) algorithm for nonlinear systems is presented, its stabilizing property is proved, and its attractive regions are estimated. The presented method is based on the feasible solution, which makes the attractive regions much larger than those of the normal MPC controller based on the optimal solution.
Lin, Mo; Li, Rui; Li, Jilin
2007-11-01
This paper deals with several key points of Medium-altitude Earth Orbit Local User Terminals (MEOLUT) in the Cospas-Sarsat Medium-altitude Earth Orbit Search and Rescue (MEOSAR) system, including frequency of arrival (FOA) and time of arrival (TOA) estimation algorithms and signal processing techniques. Based on an analytical description of the distress beacon, improved TOA and FOA estimation methods are proposed. An improved FOA estimation method integrating bi-FOA measurement, the FFT method, the Rife algorithm, and a Gaussian window is proposed to improve the accuracy of FOA estimation. In addition, a TPD algorithm and signal correlation techniques are used to achieve high TOA estimation performance. Parameter estimation problems are solved by the proposed FOA/TOA methods under quite poor carrier-to-noise ratios (C/N0). A number of simulations demonstrate the improvements: FOA and TOA estimation errors are lower than 0.1 Hz and 11 μs, respectively, meeting the demanding system requirements for MEOSAR MEOLUTs.
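The FFT-plus-Rife core of such an FOA estimator can be sketched for a single complex tone; the sampling rate, tone frequency, and noise level below are hypothetical, and the Gaussian windowing and bi-FOA refinements of the paper are omitted:

```python
import numpy as np

def rife_foa(x, fs):
    """Coarse FFT peak plus Rife two-point interpolation for the
    frequency of arrival of a single complex tone."""
    N = x.size
    X = np.abs(np.fft.fft(x))
    k = int(np.argmax(X))                     # coarse bin estimate
    # interpolate toward the larger neighboring bin
    r = 1 if X[(k + 1) % N] > X[k - 1] else -1
    delta = X[(k + r) % N] / (X[k] + X[(k + r) % N])
    return (k + r * delta) * fs / N

fs, N, f0 = 8000.0, 1024, 1237.9              # made-up test values, Hz
n = np.arange(N)
rng = np.random.default_rng(6)
tone = np.exp(2j * np.pi * f0 * n / fs)
noisy = tone + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
f_hat = rife_foa(noisy, fs)
```

The interpolation shrinks the estimation error from the FFT bin width (fs/N, here about 7.8 Hz) down to a small fraction of a bin at moderate SNR.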
Erik Cuevas
2015-01-01
Full Text Available In this paper, a new method for robustly estimating multiple-view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC) algorithm and the evolutionary harmony search (HS) method. With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random, as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robustness of RANSAC. The method is generic, and its use is illustrated by the estimation of homographies on synthetic and real images. Additionally, to demonstrate the performance of the proposed approach in a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
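A minimal sketch of the RANSAC half of the idea, using 2-D line fitting as a stand-in for homography estimation and plain uniform sampling in place of the harmony-search-guided sampling proposed in the paper (all data synthetic):

```python
import numpy as np

# Synthetic data: 80 inliers on y = 2x + 1 with small noise, 20 gross outliers.
rng = np.random.default_rng(9)
xin = rng.uniform(0.0, 10.0, 80)
inliers = np.column_stack([xin, 2.0 * xin + 1.0 + 0.05 * rng.standard_normal(80)])
outliers = rng.uniform(0.0, 10.0, (20, 2)) * [1.0, 3.0]
pts = np.vstack([inliers, outliers])

# Classic RANSAC loop: sample a minimal set, fit, count the consensus.
best_model, best_count = None, -1
for _ in range(200):
    i, j = rng.choice(len(pts), 2, replace=False)
    (x1, y1), (x2, y2) = pts[i], pts[j]
    if abs(x2 - x1) < 1e-9:
        continue                              # degenerate vertical sample
    a = (y2 - y1) / (x2 - x1)
    b = y1 - a * x1
    resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
    count = int(np.sum(resid < 0.2))          # inlier threshold (assumed)
    if count > best_count:
        best_model, best_count = (a, b), count
```

The paper's contribution replaces the uniform `rng.choice` draw with harmony-search improvisation, biasing new samples toward points that supported good models in earlier iterations.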
Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li Jianbao; Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)
2010-05-15
If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by the postprocessing of sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on the high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.
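The Yule-Walker step can be sketched as follows: solve the Toeplitz autocorrelation system for the AR coefficients, then evaluate the AR spectrum on a fine frequency grid. The two-tone test series below is synthetic, standing in for a short FDTD probe record:

```python
import numpy as np

def yule_walker_psd(x, order, nfft=4096):
    """AR spectral estimate via the Yule-Walker equations."""
    x = x - x.mean()
    N = x.size
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])              # AR coefficients
    sigma2 = r[0] - a @ r[1:]                  # innovation variance
    w = np.arange(nfft // 2) / nfft            # normalized frequency [0, 0.5)
    denom = np.abs(1.0 - np.exp(-2j * np.pi *
                   np.outer(w, np.arange(1, order + 1))) @ a) ** 2
    return w, sigma2 / denom

# a short two-mode series of the kind a field probe might record (synthetic)
rng = np.random.default_rng(7)
t = np.arange(256)
x = (np.sin(2 * np.pi * 0.12 * t) + 0.5 * np.sin(2 * np.pi * 0.31 * t)
     + 0.05 * rng.standard_normal(256))
w, psd = yule_walker_psd(x, order=8)
```

Because the AR model places sharp poles at the oscillation frequencies, both modes stand out clearly even from only 256 samples, which is the advantage over a plain FFT periodogram of a short record.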
Inventory-based estimation of forest biomass in Shitai County, China: A comparison of five methods
X. Tang
2016-12-01
Full Text Available Several comparative studies have reported that there can be great discrepancies between different methods used to estimate forest biomass. With the development of carbon markets, accurate estimation at the regional scale (i.e. county level) is becoming increasingly important for local government. In this study, we applied five methodologies [the continuous biomass expansion factor (CBEF) approach, the mean biomass density (MB) approach, the mean biomass expansion factor (MBEF) approach, the national continuous biomass expansion factors (NCBEF) proposed by Fang et al. (2002), and the standard IPCC approach] to estimate the total biomass of Shitai County, China. The CBEF approach is generally considered to provide the most realistic estimates of regional biomass because the CBEF reflects the change of the BEF with stand density, stand age and site conditions. The forests of the whole county were divided into four forest types, namely Chinese fir plantations (CF), hardwood broadleaved forests (HB), softwood broadleaved forests (SB) and masson pine forests (MP), according to the local forest management inventory of 2004. Generally, the MBEF approach overestimated forest biomass while the IPCC approach underestimated forest biomass for all forest types when the CBEF-derived biomass was used as a control. The MB approach provided the most similar biomass estimates for all forest types and could be an alternative approach when a CBEF equation is lacking in the study area. The total biomass derived from MBEF was highest at 1.44×10^7 t, followed by 1.32×10^7 t from CBEF, 1.31×10^7 t from NCBEF, 1.25×10^7 t from MB and 1.16×10^7 t from IPCC. Our results facilitate method selection for regional forest biomass estimation and provide statistical evidence for local government planning to enter the potential carbon market.
Zhou En; Wang Wenbo
2006-01-01
In this paper, the Moose scheme is used for frequency offset estimation in OFDMA uplink systems, because the signals from different users can be easily distinguished in the frequency domain. However, differential multiple access interference (MAI) deteriorates the frequency offset estimation performance, especially in interleaved OFDMA systems. Analysis and simulation results show that frequency offset estimation by the Moose scheme is more robust in block OFDMA systems than in interleaved OFDMA systems. An iterative interference cancellation method is therefore proposed to suppress the differential MAI in interleaved OFDMA systems, of which the Moose scheme is the special case with a single iteration. Simulation results demonstrate that the proposed method improves performance as the number of iterations increases. Considering both performance and complexity, the proposed method with two iterations is selected. A full comparison of the proposed iterative method with two iterations against that with one iteration (the conventional Moose scheme) is given in the paper, which demonstrates the performance gain obtained by the interference cancellation operation in interleaved OFDMA systems.
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem-solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
Numerical Validation of a Diurnal Streamflow-Pattern- Based Evapotranspiration Estimation Method
Gribovszki, Zoltán
2011-01-01
Full Text Available The evapotranspiration (ET) estimation method by Gribovszki et al. (2010b) has so far been validated at only one catchment, because good-quality discharge time series with the required high temporal resolution can probably be found at only a handful of watersheds worldwide. To fill in the gap of measured data, synthetic groundwater discharge values were produced by a 2D finite element model representing a small catchment. Geometrical and soil physical parameters of the numerical model were changed systematically, and it was checked how well the model reproduced the prescribed ET time series. The tests corroborated that the ET-estimation method is applicable for catchments underlain by a shallow aquifer. The slope of the riparian zone has a strong impact on the accuracy of the ET results when the slope is steep; however, the method proved to be reliable for gentle or horizontal riparian zone surfaces, which are more typical in reality. Likewise, errors slightly increase with the decrease of riparian zone width, and unless this width is comparable to the width of the stream (the case of a narrow riparian zone), the ET estimates stay fairly accurate. The steepness of the valley slope had no significant effect on the results, but an increase of the stream width (over 4 m) strongly influences the ET estimation results, so this method can only be used for small headwater catchments. Finally, even a magnitude change in the prescribed ET rates had only a small effect on the estimation accuracy. The soil physical parameters, however, strongly influence the accuracy of the method. The model-prescribed ET values are recovered exactly only for the sandy-loam aquifer, because only in this case was the model groundwater flow system similar to the assumed, theoretical one. For a low hydraulic conductivity aquifer (e.g. clay, silt), root water uptake creates a considerably depressed water table under the riparian zone, therefore the method underestimates the ET. In a sandy
A Novel Strain-Based Method to Estimate Tire Conditions Using Fuzzy Logic for Intelligent Tires.
Garcia-Pozuelo, Daniel; Olatunbosun, Oluremi; Yunta, Jorge; Yang, Xiaoguang; Diaz, Vicente
2017-02-10
The so-called intelligent tires are one of the most promising research fields for automotive engineers. These tires are equipped with sensors that provide information about vehicle dynamics. Up to now, commercial intelligent tires only provide information about inflation pressure, and their contribution to stability control systems is currently very limited. Nowadays, one of the major problems for intelligent tire development is how to embed feasible and low-cost sensors to obtain reliable information such as inflation pressure, vertical load, or rolling speed. These parameters provide key information for vehicle dynamics characterization. In this paper, we propose a novel algorithm based on fuzzy logic to estimate the mentioned parameters by means of a single strain-based system. Experimental tests have been carried out to prove the suitability and durability of the proposed on-board strain sensor system, as well as its low-cost advantages, and the accuracy of the obtained estimations by means of fuzzy logic.
Doppler Spectrum-Based NRCS Estimation Method for Low-Scattering Areas in Ocean SAR Images
Hui Meng
2017-02-01
Full Text Available The image intensities of low-backscattering areas in synthetic aperture radar (SAR) images are often seriously contaminated by the system noise floor and by the azimuthal ambiguity signal from adjacent high-backscattering areas. Hence, the image intensity of low-backscattering areas does not correctly reflect the backscattering intensity, which causes confusion in subsequent image processing or interpretation. In this paper, a method is proposed to estimate the normalized radar cross-section (NRCS) of low-backscattering areas by utilizing the differences between noise, azimuthal ambiguity, and signal in the Doppler frequency domain of single-look SAR images; the aim is to eliminate the effect of system noise and azimuthal ambiguity. Analysis shows that, for a spaceborne SAR with a noise-equivalent sigma zero (NESZ) of −25 dB and a single-look pixel of 8 m × 5 m, the NRCS-estimation precision of this method can reach −38 dB at a resolution of 96 m × 100 m. Three examples are given to validate the advantages of this method in estimating low NRCS and filtering the azimuthal ambiguity.
Cheng, Wei; Zhang, Zhousuo; Cao, Hongrui; He, Zhengjia; Zhu, Guanwen
2014-04-25
This paper investigates one eigenvalue decomposition-based source number estimation method and three information-based source number estimation methods, namely the Akaike Information Criterion (AIC), Minimum Description Length (MDL) and Bayesian Information Criterion (BIC), and improves BIC into an Improved BIC (IBIC) that is more efficient and easier to calculate. The performances of the above source number estimation methods are studied comparatively with numerical case studies, which contain a linear superposition case and a case with both linear superposition and nonlinear modulation mixing. A test bed with three sound sources is constructed to test the performance of these methods on mechanical systems, and source separation is carried out to validate the effectiveness of the experimental studies. This work can benefit model order selection, complexity analysis of a system, and applications of source separation to mechanical systems for condition monitoring and fault diagnosis purposes.
pyParticleEst: A Python Framework for Particle-Based Estimation Methods
Jerker Nordh
2017-06-01
Full Text Available Particle methods such as the particle filter and particle smoothers have proven very useful for solving challenging nonlinear estimation problems in a wide variety of fields during the last decade. However, there are still very few tools available to support and assist researchers and engineers in applying the vast number of methods in this field to their own problems. This paper identifies the operations common to these methods and describes a software framework that uses this information to provide a flexible and extensible foundation for solving a large variety of problems in this domain. Code reuse thereby reduces the implementation burden and lowers the barrier of entry to this exciting field of methods. The software implementation presented in this paper is freely available, permissively licensed under the GNU Lesser General Public License, and runs on a large number of hardware and software platforms, making it usable in a wide variety of scenarios.
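The operations such a framework abstracts — propagate, weight, resample — can be illustrated with a minimal bootstrap particle filter. The scalar model, noise levels and particle count below are illustrative assumptions, not taken from pyParticleEst.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar model: x[t+1] = 0.5*x[t] + v,  y[t] = x[t] + e
T, N = 100, 500                    # time steps, particles
proc_std, meas_std = 0.5, 0.3

# Simulate a trajectory and noisy measurements
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.5 * x_true[t - 1] + proc_std * rng.standard_normal()
y = x_true + meas_std * rng.standard_normal(T)

# Bootstrap particle filter: propagate, weight, resample
particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = 0.5 * particles + proc_std * rng.standard_normal(N)
    # Importance weights from the Gaussian measurement likelihood
    w = np.exp(-0.5 * ((y[t] - particles) / meas_std) ** 2)
    w /= w.sum()
    estimates[t] = np.dot(w, particles)
    # Multinomial resampling to avoid weight degeneracy
    particles = particles[rng.choice(N, size=N, p=w)]

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
```

For this linear-Gaussian toy case the filter estimate should track the state with error below the raw measurement noise; the framework generalizes exactly these steps to user-defined models.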
Kyu-Sik Park
2015-01-01
Full Text Available Hanger cables in suspension bridges are partly constrained by horizontal clamps, so existing tension estimation methods based on a single-cable model are prone to larger errors as the cable gets shorter and becomes more sensitive to flexural rigidity. Therefore, inverse analysis and system identification methods based on finite element models have recently been suggested. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An Bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression method for calculating tension, in terms of natural frequency errors. However, in model-based methods the estimation error of the tension can vary with the accuracy of the finite element model. In particular, the boundary conditions affect the results more profoundly as the cable gets shorter, so it is important to identify the boundary conditions experimentally when possible. The FE-model-based tension estimation using system identification can take various boundary conditions into account, and since it is not sensitive to the number of natural frequency inputs, its practical applicability is high.
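For reference, the string-theory baseline that the model-based methods are compared against follows directly from the taut-string natural frequencies. The cable length, mass per unit length and frequency below are made-up illustrative values, not data from the Gwang-An Bridge.

```python
import math

def tension_from_frequency(f_n, n, length_m, mass_per_m):
    """Taut-string estimate used by the classical method:
    f_n = (n / (2*L)) * sqrt(T / m)  =>  T = 4*m*L^2*(f_n/n)^2.

    Flexural rigidity and clamp constraints are ignored, which is exactly
    why this baseline degrades for short, stiff hanger cables.
    """
    return 4.0 * mass_per_m * length_m ** 2 * (f_n / n) ** 2

# Hypothetical hanger cable: 12 m long, 50 kg/m, first-mode frequency 8 Hz
T_est = tension_from_frequency(f_n=8.0, n=1, length_m=12.0, mass_per_m=50.0)  # ~1.84 MN

# Consistency check: the frequency implied by the estimated tension
f_back = (1 / (2 * 12.0)) * math.sqrt(T_est / 50.0)
```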
Toma-Danila, Dragos; Florinela Manea, Elena; Ortanza Cioflan, Carmen
2014-05-01
Bucharest, the capital of Romania (1,678,000 inhabitants in 2011), is one of the big cities in Europe most exposed to seismic damage. The major earthquakes affecting the city originate in the Vrancea region. The Vrancea intermediate-depth source statistically generates 2-3 shocks with moment magnitude >7.0 per century. Although the focal distance is greater than 170 km, the historical records (from the 1838, 1894, 1908, 1940 and 1977 events) reveal severe effects in the Bucharest area, e.g. intensities of IX (MSK) for the 1940 event. During the 1977 earthquake, 1420 people were killed and 33 large buildings collapsed. The present-day building stock is vulnerable both due to construction (material, age) and to soil conditions (high amplification, generated within the weakly consolidated Quaternary deposits, whose thickness varies from 250 to 500 m throughout the city). Of 2563 old buildings evaluated by experts, 373 are likely to experience severe damage or collapse in the next major earthquake. The total number of residential buildings in 2011 was 113,900. In order to guide mitigation measures, different studies have tried to estimate the seismic risk of Bucharest in terms of buildings, population or economic damage probability. Unfortunately, most of them were based on incomplete sets of data, regarding either the hazard or the building stock in detail. However, during the DACEA Project, the National Institute for Earth Physics, together with the Technical University of Civil Engineering Bucharest and the NORSAR Institute, managed to compile a database of buildings in southern Romania (according to the 1999 census), with 48 associated capacity and fragility curves. Until now, the developed real-time estimation system had not been implemented for Bucharest. This paper presents more than an adaptation of this system to Bucharest; first, we analyze the previous seismic risk studies from a SWOT perspective. This reveals that most of the studies don't use
Methods for estimating properties of hydrocarbons comprising asphaltenes based on their solubility
Schabron, John F.; Rovani, Jr., Joseph F.
2016-10-04
Disclosed herein is a method of estimating a property of a hydrocarbon comprising the steps of: preparing a liquid sample of a hydrocarbon, the hydrocarbon having asphaltene fractions therein; precipitating at least some of the asphaltenes of a hydrocarbon from the liquid sample with one or more precipitants in a chromatographic column; dissolving at least two of the different asphaltene fractions from the precipitated asphaltenes during a successive dissolution protocol; eluting the at least two different dissolved asphaltene fractions from the chromatographic column; monitoring the amount of the fractions eluted from the chromatographic column; using detected signals to calculate a percentage of a peak area for a first of the asphaltene fractions and a peak area for a second of the asphaltene fractions relative to the total peak areas, to determine a parameter that relates to the property of the hydrocarbon; and estimating the property of the hydrocarbon.
Causal Effect Estimation Methods
2014-01-01
The relationship between two popular modeling frameworks for causal inference from observational data, namely the causal graphical model and the potential outcome causal model, is discussed. It is shown how some popular causal effect estimators found in applications of the potential outcome causal model, such as the inverse probability of treatment weighted estimator and the doubly robust estimator, can be obtained by using the causal graphical model. We confine ourselves to the simple case of binary outcome and treatment vari...
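A minimal sketch of the inverse-probability-of-treatment-weighted (IPTW) estimator mentioned above, on synthetic confounded data. The data-generating model, effect size and sample size are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic observational data with confounding: covariate x drives both
# treatment assignment and outcome, so a naive comparison is biased.
x = rng.standard_normal(n)
p_treat = sigmoid(x)                        # true propensity score
a = rng.binomial(1, p_treat)                # binary treatment
y = rng.binomial(1, sigmoid(0.5 * a + x))   # binary outcome; true effect enters via a

# Ground-truth average treatment effect for this simulation
ate_true = np.mean(sigmoid(0.5 + x) - sigmoid(x))

# Naive difference of means (confounded by x)
naive = y[a == 1].mean() - y[a == 0].mean()

# IPTW: weight each unit by the inverse probability of its observed
# treatment (the true propensity here; in practice it is estimated,
# e.g. with logistic regression)
w1, w0 = a / p_treat, (1 - a) / (1 - p_treat)
iptw = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()
```

The weighted (Hájek-normalized) contrast should land near the true effect while the naive contrast stays visibly biased.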
Estimation of Power/Energy Losses in Electric Distribution Systems based on an Efficient Method
Gheorghe Grigoras
2013-09-01
Full Text Available Estimation of power/energy losses constitutes an important tool for efficient planning and operation of electric distribution systems, especially in a free energy market environment. For further development of energy-loss reduction plans and for prioritizing different measures and investment projects, an analysis of the nature and causes of losses in the system and in its different parts is needed. In this paper, an efficient method for the power flow problem of medium-voltage distribution networks under a lack of information about the nodal loads is presented. Using this method, the power/energy losses in power transformers and lines can be obtained. The test results, obtained for a real 20 kV distribution network from Romania, confirmed the validity of the proposed method.
A method to estimate emission rates from industrial stacks based on neural networks.
Olcese, Luis E; Toselli, Beatriz M
2004-11-01
This paper presents a technique based on artificial neural networks (ANN) to estimate pollutant emission rates from industrial stacks, on the basis of pollutant concentrations measured on the ground. The ANN is trained on data generated by the ISCST3 model, which is widely accepted for evaluating the dispersion of primary pollutants as part of an environmental impact study. Simulations using theoretical values and comparisons with field data were performed, with good results at predicting emission rates in both cases. Applying this technique would allow the local environmental authority to control emissions from industrial plants without the need to perform direct measurements inside the plant.
Yi-Xiong Zhang
2017-05-01
Full Text Available In wideband radar systems, the performance of motion parameter estimation can significantly affect object detection and the quality of inverse synthetic aperture radar (ISAR) imaging. Although traditional motion parameter estimation methods can reduce the range migration (RM) and Doppler frequency migration (DFM) effects in ISAR imaging, their computational complexity is high. In this paper, we propose a new fast, non-searching method for motion parameter estimation based on the cross-correlation of adjacent echoes (CCAE) for wideband LFM signals. A cross-correlation operation is carried out on two adjacent echo signals, and the motion parameters are then calculated by estimating the frequency of the correlation result. The proposed CCAE method can be applied directly to the stretching system commonly adopted in wideband radar systems. Simulation results demonstrate that the new method achieves better estimation performance, with much lower computational cost, than existing methods. Experimental results on real radar datasets are also presented to verify the effectiveness and superiority of the proposed method compared to state-of-the-art existing methods.
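The core of the CCAE idea — estimating the frequency of the conjugate product of adjacent echoes instead of searching over motion parameters — can be sketched as follows. The sample rate, frequencies and signal model are illustrative assumptions, not the paper's radar parameters.

```python
import numpy as np

fs, n = 1.0e4, 4096                 # sample rate (Hz) and samples, assumed values
t = np.arange(n) / fs

# Two adjacent "echoes": identical waveforms except for a frequency offset
# standing in for the motion-induced shift between pulses after dechirp.
f0, df = 1000.0, 73.0               # base frequency and inter-pulse offset (assumed)
s1 = np.exp(2j * np.pi * f0 * t)
s2 = np.exp(2j * np.pi * (f0 + df) * t)

# Conjugate product of adjacent echoes leaves a single complex exponential
# at the offset frequency, so no parameter search is needed.
prod = s2 * np.conj(s1)

# Frequency of the correlation result from the FFT peak
spec = np.abs(np.fft.fft(prod))
freqs = np.fft.fftfreq(n, d=1 / fs)
df_est = freqs[np.argmax(spec)]
```

The FFT-bin estimate recovers the offset to within one frequency bin (fs/n); a real implementation would refine the peak by interpolation.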
Optical Remote Sensing Method to Estimate Green Tide Biomass Based on Floating Algae Index
Hu, Lianbo; Hu, Chuanmin; He, Mingxia
2014-11-01
The Floating Algae Index (FAI) was developed to detect various floating algae in open-ocean environments using the medium-resolution (250- and 500-m) data from operational MODIS (Moderate Resolution Imaging Spectroradiometer) instruments. The FAI method has been routinely used to identify and calculate the area covered by green tide in the Yellow Sea (YS) since 2009. In addition to the green tide coverage area, knowledge of the biomass is also important for studying green tide recycling, nutrient load and carbon cycling, and for government management. In this study, in situ experiments were conducted to simultaneously measure the biomass and reflectance spectra of green tide on the sea surface in coastal waters off Qingdao on 9 and 11 June 2013. The in situ measurements showed a high correlation between green tide biomass and FAI, from which an empirical method to estimate biomass from FAI could be developed.
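The FAI itself is a baseline-subtraction index. A sketch of its computation is below; the MODIS band centers follow the common convention and the reflectance values are hypothetical inputs, not data from this study.

```python
def floating_algae_index(r_red, r_nir, r_swir,
                         lam_red=645.0, lam_nir=859.0, lam_swir=1240.0):
    """FAI as a baseline subtraction: NIR reflectance minus the value of the
    linear red-to-SWIR baseline evaluated at the NIR wavelength.

    Inputs are Rayleigh-corrected reflectances; the default MODIS band
    centers (nm) are the usual choice and are assumptions here.
    """
    baseline = r_red + (r_swir - r_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return r_nir - baseline

# Hypothetical pixels: floating algae raise NIR reflectance, open water does not
fai_algae = floating_algae_index(r_red=0.05, r_nir=0.20, r_swir=0.03)
fai_water = floating_algae_index(r_red=0.05, r_nir=0.04, r_swir=0.03)
```

Positive FAI flags algae-covered pixels; the empirical biomass model described in the abstract would then map FAI values like these to biomass per unit area.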
Kinematics of Gait: New Method for Angle Estimation Based on Accelerometers
Dejan B. Popović
2011-11-01
Full Text Available A new method for estimating the angles of leg segments and joints, using accelerometer arrays attached to body segments, is described. An array consists of two accelerometers mounted on a rigid rod. The absolute angle of each body segment was determined by band-pass filtering the differences between signals from parallel axes of the two accelerometers mounted on the same rod. Joint angles were evaluated by subtracting the absolute angles of the neighboring segments. This method eliminates the need for double integration as well as the drift typical of double integration. The efficiency of the algorithm is illustrated by experimental results involving healthy subjects who walked on a treadmill at various speeds, ranging between 0.15 m/s and 2.0 m/s. The validation was performed by comparing the estimated joint angles with joint angles measured with flexible goniometers. The discrepancies were assessed by the differences between the two sets of data (found to be below 6 degrees) and by the Pearson correlation coefficient (greater than 0.97 for the knee angle and greater than 0.85 for the ankle angle).
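The key point — differencing parallel-axis accelerometers on one rigid rod cancels gravity and common translation, leaving a term proportional to angular acceleration that can be integrated in a limited band without drift — can be sketched on simulated data. The sampling rate, rod spacing, gait frequency and noise level are illustrative assumptions.

```python
import numpy as np

fs = 200.0                 # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
d = 0.1                    # spacing of the two accelerometers on the rod (m), assumed

# Simulated segment angle (gait-like oscillation). For two sensors on one
# rigid rod, the difference of the parallel axes is a2 - a1 = d * theta''(t).
f_gait = 1.0
omega = 2 * np.pi * f_gait
theta_true = 0.3 * np.sin(omega * t)
acc_diff = -d * omega ** 2 * 0.3 * np.sin(omega * t)
acc_diff = acc_diff + 0.01 * np.random.default_rng(0).standard_normal(t.size)  # noise

# Double integration performed spectrally inside a pass band (0.3-5 Hz):
# this band limiting is what removes the drift of time-domain integration.
spec = np.fft.rfft(acc_diff)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.3) & (freqs < 5.0)
theta_spec = np.zeros_like(spec)
theta_spec[band] = spec[band] / (-d * (2 * np.pi * freqs[band]) ** 2)
theta_est = np.fft.irfft(theta_spec, n=t.size)

err_deg = np.degrees(np.max(np.abs(theta_est - theta_true)))
```

On this toy signal the reconstruction error stays well inside the sub-6-degree discrepancy reported in the validation.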
A Novel Sampling Method for Satellite-Based Offshore Wind Resource Estimation
Badger, Merete; Badger, Jake; Hasager, Charlotte Bay
Synthetic aperture radar (SAR) measurements from satellites can be used to estimate the spatial wind speed variation offshore in great detail. The radar senses cm-scale roughness at the sea surface, which can be translated to wind speed at a height of 10 m using an empirical geophysical model […] -based wind climatology have improved gradually as more data were collected. The satellite scenes have been treated as random samples and weighted equally in our previous analyses. Here we introduce a novel sampling strategy based on the wind class methodology that is normally applied in numerical modeling […] climatologically representative large-scale meteorological conditions for the region of interest. The wind classes are used to make the most representative selection of satellite images from the ENVISAT image catalogue. A minimum of one satellite image is chosen per wind class. The frequency of occurrence of each…
Physically-based Methods for the Estimation of Crop Water Requirements from E.O. Optical Data
The estimation of evapotranspiration (ET) represents the basic information for evaluating crop water requirements. A widely used method to compute ET is based on the so-called "crop coefficient" (Kc), defined as the ratio of total evapotranspiration to reference evapotranspiration ET0. The val...
Christ, Theodore J.; Monaghen, Barbara D.; Zopluoglu, Cengiz; Van Norman, Ethan R.
2013-01-01
Curriculum-based measurement of oral reading (CBM-R) is used to index the level and rate of student growth across the academic year. The method is frequently used to set student goals and monitor student progress. This study examined the diagnostic accuracy and quality of growth estimates derived from pre-post measurement using CBM-R data. A…
Software Development Cost Estimation Methods
Bogdan Stepien
2003-01-01
Full Text Available Early estimation of project size and completion time is essential for successful project planning and tracking. Multiple methods have been proposed to estimate software size and cost parameters. The suitability of an estimation method depends on many factors, such as the software application domain, product complexity, availability of historical data and team expertise. The most common and widely used estimation techniques are described and analyzed. Current research trends in software cost estimation are also presented.
MR-based water content estimation in cartilage: design and validation of a method
Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen;
Purpose: Design and validation of an MR-based method that allows calculation of the water content in cartilage tissue. Methods and Materials: Cartilage-tissue T1-map-based water content MR sequences were used on a system stabilized at 37 °C. The T1-map intensity signal was analyzed on 6 […] cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained […] data were analyzed and the water content calculated. The same samples were then freeze-dried (this technique removes all the water a tissue contains) and the water they contained was measured. Results: The 37 °C system and the analysis can be reproduced in a similar way. MR T1…
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-09-27
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative of the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations, with the reference estimation given by the Ogata-Banks solution, where part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimation of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
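The Ogata-Banks solution used as the physically-based data model has the closed form below. The velocity, dispersion coefficient and observation distance are assumed illustrative values, not site parameters from the EIT experiment.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """1-D advection-dispersion with continuous injection at x = 0 into an
    initially clean semi-infinite domain (Ogata & Banks, 1961):
    C/C0 = 0.5 * [erfc((x - v t)/(2 sqrt(D t))) + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
    """
    term1 = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
    term2 = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
    return 0.5 * c0 * (term1 + term2)

# Hypothetical monitoring point 10 m downstream; v and D are assumed values
v, D, x = 0.5, 0.2, 10.0          # m/day, m^2/day, m
early = ogata_banks(x, t=1.0, v=v, D=D)     # front has not arrived yet
late = ogata_banks(x, t=200.0, v=v, D=D)    # breakthrough complete
```

The solution rises from ~0 before the advective front arrives (t << x/v) to the injected concentration c0 long after, which is the breakthrough behavior fitted to the monitoring data.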
Lu, Tao; Hu, Guorui
2016-04-01
In engineering seismology, the seismic permanent displacement of a near-fault site is often obtained by processing the ground motion accelerogram recorded by the instrument at a station. Because of differences in the choice of estimation method and algorithm parameters, strongly different permanent-displacement results are often obtained. The reliability of the methods has not been proved in practice, and the selection of the algorithm parameters has to be considered carefully. To address this problem, an experimental study of permanent displacement estimated from accelerograms was carried out using a large shaking table and a sliding mechanism in an earthquake engineering laboratory. In the experiments, the large shaking table generated dynamic excitation without permanent displacement, the sliding mechanism fixed on the shaking table generated the permanent displacement, and the accelerogram containing the permanent-displacement information was recorded by the instrument on the sliding mechanism. The permanent displacement was then obtained from the accelerogram and compared with the displacement values obtained by a displacement meter and by digital close-range photogrammetry. The experimental study showed that a reliable permanent displacement can be obtained by the existing processing method under simple laboratory conditions, provided that the algorithm parameters are selected carefully.
Asiri, Sharefa M.
2016-10-20
In this paper, a modulating functions-based method is proposed for estimating space-time-dependent unknowns in one-dimensional partial differential equations. The proposed method reduces the problem to a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
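The mechanism behind the method can be shown on a toy problem. The paper treats PDEs; the scalar ODE below is only an illustration of how multiplying by a modulating function and integrating by parts turns parameter estimation into an algebraic problem without differentiating noisy data.

```python
import numpy as np

# Toy illustration on y' = a*y: multiply by a modulating function phi with
# phi(0) = phi(1) = 0 and integrate by parts, so the derivative moves off
# the (noisy) data:
#   int(phi * y') dt = -int(phi' * y) dt = a * int(phi * y) dt
# => a = -int(phi' * y) / int(phi * y), an algebraic relation in a.

t = np.linspace(0.0, 1.0, 2001)
a_true = 2.0
y = np.exp(a_true * t)
y = y + 0.001 * np.random.default_rng(3).standard_normal(t.size)  # measurement noise

phi = t ** 2 * (1 - t) ** 2                          # vanishes at both endpoints
dphi = 2 * t * (1 - t) ** 2 - 2 * t ** 2 * (1 - t)   # exact derivative of phi

# Riemann sums suffice on this dense grid (both integrands vanish at the ends)
a_est = -(dphi * y).sum() / (phi * y).sum()
```

Because only integrals of the data appear, the estimate is robust to the added noise, which is the property the paper exploits in the PDE setting.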
Leandro Vargas-Meléndez
2016-08-01
Full Text Available This article presents a novel estimator based on sensor fusion, which combines a Neural Network (NN) with a Kalman filter in order to estimate the vehicle roll angle. The NN estimates a "pseudo-roll angle" through variables that are easily measured with Inertial Measurement Unit (IMU) sensors. An IMU is a device commonly used for vehicle motion detection, and its cost has decreased in recent years. The pseudo-roll angle is fed into the Kalman filter in order to filter noise and minimize the variance of the norm and maximum errors of the estimation. The NN has been trained on J-turn maneuvers, double lane change maneuvers and lane change maneuvers at different speeds and road friction coefficients. The proposed method takes the vehicle non-linearities into account, thus yielding good roll angle estimation. Finally, the proposed estimator has been compared with one that uses the suspension deflections to obtain the pseudo-roll angle. Experimental results show the effectiveness of the proposed NN and Kalman filter-based estimator.
A Study on the Estimation Method of Risk Based Area for Jetty Safety Monitoring
Byeong-Wook Nam
2015-09-01
Full Text Available Recently, the importance of safety-monitoring systems was highlighted by an unprecedented collision between a ship and a jetty in Yeosu. Accordingly, in this study, we introduce the concept of a risk-based area and develop a methodology for a jetty safety-monitoring system. By calculating the risk-based areas for a ship and a jetty, the risk of collision is evaluated. To calculate the risk-based areas, we employed an automatic identification system for the ship, stopping-distance equations, and the regulation velocity near the jetty. In this paper, we suggest a risk calculation method for jetty safety monitoring that can determine the collision probability in real time and predict collisions from the amount of overlap between the two calculated risk-based areas. A test was conducted at a jetty control center at GS Caltex, and the effectiveness of the proposed risk calculation method was verified. The method is currently applied to the jetty-monitoring system at GS Caltex in Yeosu for the prevention of collisions.
A real-time gaze position estimation method based on a 3-D eye model.
Park, Kang Ryoung
2007-02-01
This paper proposes a new gaze-detection method based on the 3-D eye position and the gaze vector of the human eyeball. Seven new developments compared to previous works are presented. First, a system of three cameras, i.e., one wide-view camera and two narrow-view cameras, is proposed. The narrow-view cameras use autozooming, focusing, panning, and tilting procedures (based on the detected 3-D eye feature position) for gaze detection. This allows for natural head and eye movement by users. Second, previous gaze-detection research used one or multiple illuminators without considering the specular reflection (SR) problems they cause for users who wear glasses. To solve this problem, a method based on dual illuminators is proposed in this paper. Third, the proposed method does not require user-dependent calibration, so all procedures for detecting gaze position operate automatically without human intervention. Fourth, the intrinsic characteristics of the human eye, such as the disparity between the pupillary and visual axes, are considered in order to obtain accurate gaze positions. Fifth, all the coordinates obtained by the left and right narrow-view cameras, the wide-view camera, and the monitor are unified, which simplifies the complex 3-D conversion calculation and allows calculation of the 3-D feature position and the gaze position on the monitor. Sixth, to improve eye-detection performance with the wide-view camera, an adaptive selection method is used, involving an IR-LED on/off scheme, an AdaBoost classifier, and a principal component analysis method based on the number of SR elements. Finally, the proposed method uses an eigenvector matrix (instead of simply averaging six gaze vectors) in order to obtain a more accurate final gaze vector that can compensate for noise. Experimental results show that the root mean square error of
Primeau, Charlotte; Friis, Laila Saidane; Sejrsen, Birgitte;
2016-01-01
OBJECTIVES: To develop a series of regression equations for estimating age from the length of long bones for archaeological sub-adults when aging from dental development cannot be performed. Further, to compare derived ages when using these regression equations and two other methods. MATERIAL […] on a modern population (Maresh: Human growth and development, pp 155-200), and, lastly, based on archaeological data with known ages (Rissech et al.: Forensic Sci Int 180, 1-9). As the growth of long bones is known to be non-linear, it was tested whether the regression model could be improved by applying a quadratic model. RESULTS: Comparison between estimated ages revealed that the modern data result in lower estimated ages when compared to the Danish regression equations. The estimated ages using the Danish regression equations and the regression equations developed by Rissech et al. […]
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of a numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model of cutting error estimation is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on gear cutting theory. To verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.
Chaozhou; LU; Yanfen; LUO
2014-01-01
The existing methods for calculating the number of agricultural surplus laborers share a common flaw: they cannot reflect the impact of changes in agricultural production technical efficiency on surplus labor. Based on the basic principle of the stochastic frontier production function, this paper calculates the agricultural production technical efficiency of each province and selects the province with the highest technical efficiency, assuming that its agricultural labor is fully utilized and that it has no agricultural surplus labor. With the ratio of agricultural labor to agricultural output value in this province as a reference, the paper calculates the number of agricultural surplus laborers in the other provinces. This calculation method makes up for the shortcomings of the existing methods; it reflects the relationship between the number of agricultural surplus laborers and production technical efficiency.
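The benchmark-ratio calculation described in this abstract reduces to simple arithmetic. The sketch below illustrates the idea under invented figures; the province names, units, and the `surplus_labor` helper are all hypothetical, not the paper's data:

```python
# Sketch of the benchmark-ratio surplus-labor calculation (illustrative only).

def surplus_labor(labor, output, benchmark):
    """Estimate agricultural surplus labor per province.

    labor, output: dicts mapping province -> labor force and output value.
    benchmark: the province assumed fully efficient (highest technical
    efficiency), i.e. taken to have no surplus labor.
    """
    ratio = labor[benchmark] / output[benchmark]  # labor needed per unit output
    return {p: max(labor[p] - ratio * output[p], 0.0) for p in labor}

labor = {"A": 120.0, "B": 200.0, "C": 90.0}   # hypothetical labor (10k persons)
output = {"A": 60.0, "B": 80.0, "C": 45.0}    # hypothetical output value
# Province A has the best labor/output ratio here, so it serves as benchmark:
print(surplus_labor(labor, output, "A"))      # → {'A': 0.0, 'B': 40.0, 'C': 0.0}
```

Provinces whose labor/output ratio exceeds the benchmark's are assigned the excess as surplus; the benchmark itself gets zero by construction.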
Han Wang
2016-06-01
Full Text Available The conventional channel estimation methods based on a preamble for filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) systems in mobile-to-mobile sensor networks are inefficient. By utilizing the intrinsic sparsity of wireless channels, channel estimation is treated as a compressive sensing (CS) problem to improve the estimation performance. In this paper, an Adaptive Regularized Compressive Sampling Matching Pursuit (ARCoSaMP) algorithm is proposed. Unlike earlier greedy algorithms, the new algorithm achieves accurate reconstruction by choosing the support set adaptively and exploiting a regularization process, which performs a second selection of atoms in the support set even though the sparsity of the channel is unknown. Simulation results show that CS-based methods obtain a significant channel estimation performance improvement over conventional preamble-based methods. The proposed ARCoSaMP algorithm outperforms the conventional sparse adaptive matching pursuit (SAMP) algorithm, and provides even better results than the more advanced greedy compressive sampling matching pursuit (CoSaMP) algorithm without prior knowledge of the channel sparsity.
Wang, Han; Du, Wencai; Xu, Lingwei
2016-06-24
The conventional channel estimation methods based on a preamble for filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) systems in mobile-to-mobile sensor networks are inefficient. By utilizing the intrinsic sparsity of wireless channels, channel estimation is treated as a compressive sensing (CS) problem to improve the estimation performance. In this paper, an Adaptive Regularized Compressive Sampling Matching Pursuit (ARCoSaMP) algorithm is proposed. Unlike earlier greedy algorithms, the new algorithm achieves accurate reconstruction by choosing the support set adaptively and exploiting a regularization process, which performs a second selection of atoms in the support set even though the sparsity of the channel is unknown. Simulation results show that CS-based methods obtain a significant channel estimation performance improvement over conventional preamble-based methods. The proposed ARCoSaMP algorithm outperforms the conventional sparse adaptive matching pursuit (SAMP) algorithm, and provides even better results than the more advanced greedy compressive sampling matching pursuit (CoSaMP) algorithm without prior knowledge of the channel sparsity.
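The CoSaMP baseline that ARCoSaMP builds on can be sketched compactly. The following is a generic noiseless CoSaMP, not the paper's ARCoSaMP variant; the matrix sizes, sparsity level, and test signal are illustrative:

```python
import numpy as np

def cosamp(Phi, y, K, n_iter=20):
    """Minimal CoSaMP: greedy recovery of a K-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        proxy = Phi.T @ (y - Phi @ x)               # correlate with residual
        omega = np.argsort(np.abs(proxy))[-2 * K:]  # 2K strongest atoms
        support = np.union1d(omega, np.flatnonzero(x))
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-K:]           # prune back to K atoms
        x = np.zeros(n)
        x[keep] = b[keep]
    return x

rng = np.random.default_rng(0)
n, m, K = 100, 40, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]              # sparse "channel"
x_hat = cosamp(Phi, Phi @ x_true, K)
print(np.allclose(x_hat, x_true, atol=1e-8))
```

ARCoSaMP's contribution, per the abstract, is to pick the support adaptively and regularize the atom selection so that K need not be known in advance; the fixed-K pruning step above is exactly what it relaxes.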
A Bayesian Target Predictor Method based on Molecular Pairing Energies estimation.
Oliver, Antoni; Canals, Vincent; Rosselló, Josep L
2017-03-06
Virtual screening (VS) is applied in the early drug discovery phases for the quick inspection of huge molecular databases to identify the compounds most likely to bind to a given drug target. In this context, compact molecular models are needed for database screening and precise target prediction in reasonable times. In this work we present a new compact energy-based model and test its application to virtual screening and target prediction. The model can be used to quickly identify active compounds in huge databases based on the estimation of the molecule's pairing energies. The largest molecular polar regions, along with their geometrical distribution, are captured by a short set of smart energy vectors. The model is tested using similarity searches within the Directory of Useful Decoys (DUD) database, and the results obtained are considerably better than previously published models. As a target prediction methodology, we propose a Bayesian classifier that uses a combination of different active compounds to build an energy-dependent probability distribution function for each target.
A Bayesian Target Predictor Method based on Molecular Pairing Energies estimation
Oliver, Antoni; Canals, Vincent; Rosselló, Josep L.
2017-03-01
Virtual screening (VS) is applied in the early drug discovery phases for the quick inspection of huge molecular databases to identify the compounds most likely to bind to a given drug target. In this context, compact molecular models are needed for database screening and precise target prediction in reasonable times. In this work we present a new compact energy-based model and test its application to virtual screening and target prediction. The model can be used to quickly identify active compounds in huge databases based on the estimation of the molecule's pairing energies. The largest molecular polar regions, along with their geometrical distribution, are captured by a short set of smart energy vectors. The model is tested using similarity searches within the Directory of Useful Decoys (DUD) database, and the results obtained are considerably better than previously published models. As a target prediction methodology, we propose a Bayesian classifier that uses a combination of different active compounds to build an energy-dependent probability distribution function for each target.
Zhi Qiu
2015-02-01
Full Text Available This paper presents a hybrid damage detection method based on the continuous wavelet transform (CWT) and modal parameter identification techniques for beam-like structures. First, two mode shape estimation methods, herein referred to as the quadrature peaks picking (QPP) and rational fraction polynomial (RFP) methods, are used to identify the first four mode shapes of an intact beam-like structure from a hammer/accelerometer modal experiment. The results are compared and validated against a numerical simulation in ABAQUS. To compare the damage detection effectiveness of the QPP-based and RFP-based methods when applying the CWT technique, the first two mode shapes calculated by the QPP and RFP methods are analyzed using the CWT. Experiments performed on different damage scenarios for beam-like structures show that, owing to its superior denoising characteristics, the RFP-based (RFP-CWT) technique gives a clearer indication of the damage location than the conventionally used QPP-based (QPP-CWT) method. Finally, an overall evaluation of the damage detection is outlined, as the identification results suggest that the newly proposed RFP-CWT method is accurate and reliable in detecting damage locations on beam-like structures.
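The core of the CWT step is that a wavelet responds strongly to the local slope discontinuity that damage introduces into a mode shape, while ignoring the smooth global shape. A minimal single-scale sketch, with a synthetic mode shape and an invented damage location (not the paper's experimental data or its QPP/RFP pipeline):

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet, a common choice for mode-shape CWT."""
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

# Hypothetical first mode shapes of a beam: intact vs. damaged, where damage
# adds a small slope discontinuity at x = 0.6 L.
x = np.linspace(0.0, 1.0, 201)
intact = np.sin(np.pi * x)
damaged = intact + 0.02 * np.maximum(x - 0.6, 0.0)

# Single-scale CWT of the mode-shape difference: the wavelet integrates to
# zero, so it is blind to smooth/linear trends but peaks at the kink.
coeffs = np.convolve(damaged - intact, ricker(31, 4.0), mode="same")
interior = np.abs(coeffs[20:-20])      # discard convolution edge effects
print(x[20 + np.argmax(interior)])     # location of the damage-induced peak
```

The printed location lands at the assumed damage position (x = 0.6); in practice the full method applies this across scales and to experimentally identified mode shapes.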
Zhu, J. G.; Sun, Z. C.; Wei, X. Z.; Dai, H. F.
2015-01-01
The thermal management of power batteries in EVs (electric vehicles) and HEVs (hybrid electric vehicles) has been widely discussed, and EIS (electrochemical impedance spectroscopy) is an effective experimental method for testing and estimating battery status. First, an electrochemistry-based impedance matrix analysis for lithium-ion batteries is developed to describe the impedance response in electrochemical impedance spectroscopy. A method based on electrochemical impedance spectroscopy measurement is then proposed to estimate the internal temperature of a power lithium-ion battery by analyzing the phase shift and magnitude of the impedance at different ambient temperatures. In the experimental study, SoC (state of charge) and temperature affect the impedance characteristics of the battery over different frequency ranges; the effect of SoH (state of health) on the impedance spectrum is also discussed preliminarily. The excitation frequency chosen for estimating the internal temperature therefore lies in the frequency range that is significantly influenced by temperature but not by SoC or SoH. The intrinsic relationship between phase shift and temperature is established at the chosen excitation frequency, and the temperature dependence of the impedance magnitude is also studied. In practical applications, the internal temperature can be estimated from the measured phase shift and impedance magnitude. Verification experiments are conducted to validate the estimation method. Finally, an estimation strategy and an on-line estimation system implementation scheme utilizing the battery management system are presented to demonstrate the engineering value.
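Once the phase-shift-versus-temperature relationship is established at the chosen excitation frequency, estimation amounts to inverting a calibration curve. A minimal sketch with invented calibration values (the real curve, frequency, and battery chemistry are the paper's, not reproduced here):

```python
import numpy as np

# Hypothetical calibration: impedance phase shift (degrees) measured at one
# fixed excitation frequency across known cell temperatures (illustrative).
temps_c = np.array([-10.0, 0.0, 10.0, 25.0, 40.0])
phase_deg = np.array([-35.0, -28.0, -22.0, -15.0, -10.0])  # monotonic in T

def estimate_temperature(measured_phase):
    """Invert the calibration curve: phase shift -> internal temperature."""
    return np.interp(measured_phase, phase_deg, temps_c)

print(estimate_temperature(-22.0))   # → 10.0
```

`np.interp` requires the calibration abscissa to be increasing, which holds here because the assumed phase shift grows monotonically with temperature; the same inversion can be done with the impedance magnitude.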
A Simple Fusion Method for Image Time Series Based on the Estimation of Image Temporal Validity
Mar Bisquert
2015-01-01
Full Text Available High-spatial-resolution satellites usually have the constraint of a low temporal frequency, which leads to long periods without information in cloudy areas, while low-spatial-resolution satellites have a higher revisit frequency. Combining information from high- and low-spatial-resolution satellites is considered a key factor for studies that require dense time series of high-resolution images, e.g., crop monitoring. Several fusion methods exist in the literature, but they are time-consuming and complicated to implement, and the local evaluation of the fused images is rarely analyzed. In this paper, we present a simple and fast fusion method based on a weighted average of two input images (H and L), which are weighted by their temporal validity with respect to the image to be fused. The method was applied to two years (2009–2010) of Landsat and MODIS (MODerate resolution Imaging Spectroradiometer) images acquired over a cropped area in Brazil. The fusion method was evaluated at global and local scales. The results show that the fused images reproduced reliable crop temporal profiles and correctly delineated the boundaries between two neighboring fields. The greatest advantages of the proposed method are its execution time and ease of use, which allow us to obtain a fused image in less than five minutes.
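The weighted-average idea can be sketched in a few lines. The inverse-time weighting below is one plausible choice of "temporal validity"; the paper's exact weighting function may differ, and the image values are invented:

```python
import numpy as np

def fuse(img_h, img_l, dt_h, dt_l):
    """Weighted average of a high- (H) and low- (L) resolution image.

    dt_h, dt_l: days between each input image and the prediction date;
    temporally closer images receive larger weights.
    """
    w_h = 1.0 / (abs(dt_h) + 1.0)   # +1 avoids division by zero on same-day
    w_l = 1.0 / (abs(dt_l) + 1.0)
    return (w_h * img_h + w_l * img_l) / (w_h + w_l)

h = np.full((2, 2), 0.8)   # hypothetical NDVI-like values from Landsat
l = np.full((2, 2), 0.4)   # from MODIS, resampled to the same grid
print(fuse(h, l, dt_h=16, dt_l=1))   # the closer MODIS image dominates
```

Because the fusion is a pixel-wise weighted mean, the cost is a single pass over the arrays, which is consistent with the abstract's claim of sub-five-minute execution.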
Sun, Bingxiang; Jiang, Jiuchun; Zheng, Fangdan; Zhao, Wei; Liaw, Bor Yann; Ruan, Haijun; Han, Zhiqiang; Zhang, Weige
2015-05-01
State of health (SOH) estimation is critical for the battery management system to ensure the safety and reliability of EV battery operation. Here, we used a unique hybrid approach to enable complex SOH estimations. The approach hybridizes the Delphi method, known for its simplicity and effectiveness in applying weighting factors for complicated decision-making, and grey relational grade analysis (GRGA) for multi-factor optimization. Six critical factors were considered for SOH estimation: peak power at 30% state-of-charge (SOC); capacity; the voltage drop at 30% SOC with a C/3 pulse; the temperature rises at the end of discharge and charge at 1C, respectively; and the open-circuit voltage at the end of charge after a 1-h rest. The weighting of these factors for SOH estimation was scored by the 'experts' in the Delphi method, indicating the influencing power of each factor on SOH. The parameters of these factors expressing the battery state variations are optimized by GRGA. Eight battery cells were used to illustrate the principle and methodology of estimating SOH by this hybrid approach, and the results were compared with those based on capacity and power capability. The contrast among the different SOH estimations is discussed.
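The GRGA half of the hybrid can be written down directly: each factor's deviation from a reference sequence is turned into a grey relational coefficient, and the Delphi weights combine the coefficients into a single grade. The factor values, weights, and the conventional distinguishing coefficient rho = 0.5 below are illustrative, not the paper's:

```python
import numpy as np

def grey_relational_grade(reference, comparison, weights, rho=0.5):
    """Grey relational grade of a comparison sequence vs. a reference.

    Sequences are assumed already normalized to [0, 1]; rho is the
    conventional distinguishing coefficient.
    """
    delta = np.abs(reference - comparison)
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return float(np.sum(weights * coeff))

# Hypothetical normalized values of six factors for one aged cell, compared
# against a fresh-cell reference; weights could come from Delphi scoring.
ref = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
cell = np.array([0.9, 0.85, 0.8, 0.95, 0.9, 0.88])
w = np.array([0.25, 0.25, 0.15, 0.1, 0.1, 0.15])   # sums to 1
print(grey_relational_grade(ref, cell, w))
```

A grade near 1 means the cell's factor profile closely tracks the fresh reference (high SOH); lower grades indicate degradation.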
Gualda Guilherme A.R.
2005-01-01
Full Text Available An important drawback of the electron microprobe is its inability to quantify Fe3+/Fe2+ ratios in routine work. Although these ratios can be calculated, there is no unique criterion that can be applied to all amphiboles. Using a large data set of calcic, sodic-calcic, and sodic amphibole analyses from A-type granites and syenites from southern Brazil, we assess the choices made by the method of Schumacher (1997, Canadian Mineralogist, 35: 238-246), which uses the average between selected maximum and minimum estimates. The maximum estimates selected most frequently are: 13 cations excluding Ca, Na, and K (13eCNK), 66%; sum of Si and Al equal to 8 (8SiAl), 17%; and 15 cations excluding K (15eK), 8%. These selections are appropriate on crystallochemical grounds. Minimum estimates are mostly all iron as Fe2+ (all Fe2, 71%), and are clearly inadequate. Hence, maximum estimates should better approximate the actual values. To test this, complete analyses were selected from the literature, and calculated and measured values were compared. 13eCNK and maximum estimates are precise and accurate (concordance correlation coefficient rc ≥ 0.85). As expected, averages yield poor estimates (rc = 0.56). We recommend, thus, that maximum estimates be used for calcic, sodic-calcic, and sodic amphiboles.
Mente, Carsten; Prade, Ina; Brusch, Lutz; Breier, Georg; Deutsch, Andreas
2011-07-01
Lattice-gas cellular automata (LGCAs) can serve as stochastic mathematical models for collective behavior (e.g. pattern formation) emerging in populations of interacting cells. In this paper, a two-phase optimization algorithm for global parameter estimation in LGCA models is presented. In the first phase, local minima are identified through gradient-based optimization. Algorithmic differentiation is adopted to calculate the necessary gradient information. In the second phase, for global optimization of the parameter set, a multi-level single-linkage method is used. As an example, the parameter estimation algorithm is applied to a LGCA model for early in vitro angiogenic pattern formation.
Estimation of Permeability from NMR Logs Based on Formation Classification Method in Tight Gas Sands
Wei Deng-Feng
2015-10-01
Full Text Available The Schlumberger Doll Research (SDR) model and the cross plot of porosity versus permeability cannot be directly used in tight gas sands. In this study, the hydraulic flow unit (HFU) approach is introduced to classify rocks and determine the parameters involved in the SDR model. Based on differences in the flow zone indicator (FZI), 87 core samples, drilled from tight gas sandstone reservoirs of the E basin in northwest China and subjected to laboratory NMR measurements, were classified into three types, and the parameters involved in the SDR model were calibrated separately. Relationships between porosity and permeability were also established. A statistical model is used to calculate a consecutive FZI from conventional logs. Field examples illustrate that the calibrated SDR models are applicable to permeability estimation; models established from routine core analysis results are effective in reservoirs with permeability lower than 0.3 mD, while the unified SDR model is only valid in reservoirs with permeability ranging from 0.1 to 0.3 mD.
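The SDR model being calibrated has the standard form k = C · φ⁴ · T2lm², with the coefficient (and sometimes the exponents) fitted per rock type. A sketch with invented per-facies coefficients, standing in for the paper's HFU-based calibration:

```python
# SDR permeability model with per-facies coefficients (values illustrative,
# not the paper's calibration; units assumed: phi as fraction, T2lm in ms,
# k in mD).

def sdr_permeability(phi, t2lm, c=4.0, m=4.0, n=2.0):
    """SDR model: k = c * phi**m * t2lm**n."""
    return c * phi ** m * t2lm ** n

# Hypothetical coefficients from an HFU-style three-type classification:
facies_coeff = {"HFU1": 2.0, "HFU2": 4.0, "HFU3": 8.0}
k = sdr_permeability(0.08, 15.0, c=facies_coeff["HFU2"])
print(k)   # permeability estimate for an HFU2 sample
```

The abstract's point is precisely that a single (C, m, n) triple fails in tight sands, so each FZI-defined class gets its own calibrated coefficients.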
Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid
2017-04-01
Since statistical downscaling methods are the most widely used models for hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study was carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based model, correlation coefficients were computed between a few selected large-scale predictor variables and the local-scale predictands to select the most effective predictors. The selected predictors were then assessed considering the grid location for the site in question. To increase the accuracy of the AI-based downscaling model, pre-processing was applied to the precipitation time series: the precipitation data derived from the various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. The results indicated a similar trend between GCM and station precipitation data; however, the station data are non-stationary while the GCM data are not. Finally, the AI-based downscaling model was applied to several GCMs with the selected predictors, targeting the local precipitation time series as the predictand. The results of this step were used to produce multiple ensembles of downscaled AI-based models.
Wang, Xiuhong; Mao, Xingpeng; Wang, Yiming; Zhang, Naitong; Li, Bo
2016-09-15
Based on sparse representations, the problem of two-dimensional (2-D) direction of arrival (DOA) estimation is addressed in this paper. A novel sparse 2-D DOA estimation method, called Dimension Reduction Sparse Reconstruction (DRSR), is proposed with pairing by Spatial Spectrum Reconstruction of Sub-Dictionary (SSRSD). By utilizing the angle decoupling method, which transforms the 2-D estimation into two independent one-dimensional (1-D) estimations, the high computational complexity induced by a large 2-D redundant dictionary is greatly reduced. Furthermore, a new angle matching scheme, SSRSD, which is less sensitive to sparse reconstruction error and has a higher pair-matching probability, is introduced. The proposed method can be applied to any type of orthogonal array without requiring a large number of snapshots or a priori knowledge of the number of signals. Theoretical analyses and simulation results show that the DRSR-SSRSD method performs well for coherent signals, with performance approaching the Cramér-Rao bound (CRB), even under single-snapshot and low signal-to-noise ratio (SNR) conditions.
Xiuhong Wang
2016-09-01
Full Text Available Based on sparse representations, the problem of two-dimensional (2-D) direction of arrival (DOA) estimation is addressed in this paper. A novel sparse 2-D DOA estimation method, called Dimension Reduction Sparse Reconstruction (DRSR), is proposed with pairing by Spatial Spectrum Reconstruction of Sub-Dictionary (SSRSD). By utilizing the angle decoupling method, which transforms the 2-D estimation into two independent one-dimensional (1-D) estimations, the high computational complexity induced by a large 2-D redundant dictionary is greatly reduced. Furthermore, a new angle matching scheme, SSRSD, which is less sensitive to sparse reconstruction error and has a higher pair-matching probability, is introduced. The proposed method can be applied to any type of orthogonal array without requiring a large number of snapshots or a priori knowledge of the number of signals. Theoretical analyses and simulation results show that the DRSR-SSRSD method performs well for coherent signals, with performance approaching the Cramér–Rao bound (CRB), even under single-snapshot and low signal-to-noise ratio (SNR) conditions.
A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis
Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo
2017-06-01
Diagnosing motor bearing faults under variable speed remains challenging. In this study, a new computer-vision-based order tracking method is proposed to address this problem. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the time-domain signal is transformed into an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified on two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that avoids the installation of a tachometer and simultaneously overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
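The equi-angle resampling step can be sketched as follows: integrate the IRS curve to get cumulative shaft angle, then sample the signal at uniform angle increments by interpolation. The speed profile, signal, and sample rates below are invented for illustration:

```python
import numpy as np

def equi_angle_resample(signal, fs, irs_hz, samples_per_rev=64):
    """Resample a time-domain signal at uniform shaft-angle increments,
    given the instantaneous rotating speed (e.g. from video analysis)."""
    t = np.arange(len(signal)) / fs
    # Cumulative shaft angle in revolutions (trapezoidal integration of IRS):
    revs = np.concatenate(([0.0],
                           np.cumsum((irs_hz[1:] + irs_hz[:-1]) / 2) / fs))
    uniform_revs = np.arange(0.0, revs[-1], 1.0 / samples_per_rev)
    t_at_angle = np.interp(uniform_revs, revs, t)   # invert revs(t)
    return np.interp(t_at_angle, t, signal)

# A 5th-order tone under a linear speed ramp smears in the frequency domain
# but becomes a pure tone in the angular domain (all values illustrative):
fs = 5000
t = np.arange(0.0, 2.0, 1.0 / fs)
speed = 20.0 + 10.0 * t                       # shaft speed in Hz, ramping up
sig = np.sin(5 * 2 * np.pi * np.cumsum(speed) / fs)
resampled = equi_angle_resample(sig, fs, speed)
```

An FFT of `resampled` then shows a sharp peak at order 5 regardless of the speed ramp; the full method applies the same idea to the envelope signal to read off the fault characteristic order.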
Drews, Martin; Lauritzen, Bent; Madsen, Henrik
2005-01-01
A Kalman filter method is discussed for on-line estimation of radioactive release and atmospheric dispersion from a time series of off-site radiation monitoring data. The method is based on a state space approach, where a stochastic system equation describes the dynamics of the plume model parameters, and the observables are linked to the state variables through a static measurement equation. The method is analysed for three simple state space models using experimental data obtained at a nuclear research reactor. Compared to direct measurements of the atmospheric dispersion, the Kalman filter estimates are found to agree well with the measured parameters, provided that the radiation measurements are spread out in the cross-wind direction. For less optimal detector placement it proves difficult to distinguish variations in the source term and plume height; yet the Kalman filter yields consistent...
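A scalar toy version of this state-space setup makes the mechanics concrete: a random-walk system equation for the source term and a static measurement equation linking it to a detector reading. All values (release rate, dispersion factor, noise variances) are illustrative; the paper's models are multivariate and physics-based:

```python
import numpy as np

# Scalar state-space sketch: x_k = x_{k-1} + w_k (random-walk source term),
# y_k = c * x_k + v_k (static measurement equation, detector dose rate).
rng = np.random.default_rng(1)
n, q_true = 200, 5.0
c = 0.8                                         # dose per unit release rate
y = c * q_true + 0.5 * rng.standard_normal(n)   # noisy detector time series

q_est, p = 0.0, 10.0            # initial state estimate and its variance
q_var, r_var = 1e-4, 0.25       # process and measurement noise variances
for yk in y:
    p += q_var                          # predict (random walk)
    k = p * c / (c * c * p + r_var)     # Kalman gain
    q_est += k * (yk - c * q_est)       # update with the innovation
    p *= (1 - k * c)
print(q_est)                            # settles near q_true = 5.0
```

The same predict/update cycle, with vector states and a plume-model measurement operator, is what runs on-line against the monitoring network.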
Oh, Sun Ryung; Park, Hyun Sun [POSTECH, Pohang (Korea, Republic of); Kim, Moo Hwan [KAERI, Daejeon (Korea, Republic of)
2016-05-15
The sodium-cooled fast reactor (SFR) is a Generation IV reactor type that has been extensively researched since the 1950s. A strong advantage of the SFR is its liquid sodium coolant, well known for its superior thermal properties. However, in the event of pipe leakage or rupture, the liquid sodium coolant poses a critical issue due to its high chemical reactivity, which can lead to fire or explosion. Because of these safety concerns, the dispersion of nanoparticles in liquid sodium has been proposed to reduce the chemical reactivity of sodium. For sodium-based titanium nanofluid (NaTiNF), the suppression of chemical reactivity when interacting with water has been demonstrated both experimentally and theoretically [1,2]. Suppressing chemical reactivity without much loss of sodium's high heat transfer capability is critical. As no research has been conducted on applying the 3-omega sensor in liquid metals or high-temperature liquids, the sensor was developed for use in NaTiNF and for validating effective thermal conductivity models. Based on the acquired effective thermal conductivity of NaTiNF, existing effective thermal conductivity models are evaluated. Thermal conductivity measurements were performed by the 3-omega method at three temperatures, 120, 150, and 180 °C, for both pure liquid sodium and NaTiNF. With a 3-omega sensor, thermal conductivity measurement of liquid metals can be conducted more conveniently at lab scale, and the possibility of measuring the thermal conductivity of a high-temperature liquid metal with dispersed metallic nanoparticles is demonstrated. Unlike water- or oil-based nanofluids, NaTiNF exhibits reduced thermal conductivity compared with liquid sodium. Various nanofluid models are plotted, and it is concluded that the MSBM, which considers interfacial resistance and Brownian motion, can be used in predicting
Ghajarnia, Navid; Arasteh, Peyman D.; Araghinejad, Shahab; Liaghat, Majid A.
2016-07-01
Incorrect estimation of rainfall occurrence, known as a false alarm (FA), is one of the major sources of bias error in satellite-based precipitation estimation products and can cause significant problems during bias reduction and calibration. In this paper, a hybrid statistical method is introduced to detect FA events in the PERSIANN dataset over the Urmia Lake basin in northwest Iran. The main FA detection model is based on Bayes' theorem, with four predictor parameters as its input dataset: PERSIANN rainfall estimates, brightness temperature (Tb), precipitable water (PW) and near-surface air temperature (Tair). To reduce the dimensions of the input dataset by summarizing its most important modes of variability and correlation with the reference dataset, singular value decomposition (SVD) is used. The application of the Bayesian-SVD method to FA detection in the Urmia Lake basin results in a trade-off between FA detection and the loss of Hit events. The results show that the proposed method detects about 30% of FA events at the cost of losing about 12% of Hit events, with better capability observed in cold seasons.
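The Bayesian core of such a detector can be sketched with a single predictor. The class statistics and prior below are invented for illustration (the paper additionally uses PW, Tair, the rainfall estimate itself, and an SVD dimension reduction):

```python
from math import exp, pi, sqrt

# Toy Bayesian false-alarm scorer: given a pixel flagged as rainy, score how
# likely the flag is a false alarm from its brightness temperature Tb alone.

def gauss(x, mu, sigma):
    """Gaussian class-conditional likelihood."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

p_fa = 0.3                        # prior: share of rainy flags that are FAs
mu_hit, sd_hit = 220.0, 15.0      # cold cloud tops accompany real rain (K)
mu_fa, sd_fa = 260.0, 12.0        # warmer tops accompany false alarms (K)

def posterior_fa(tb):
    """P(FA | Tb) by Bayes' theorem."""
    num = p_fa * gauss(tb, mu_fa, sd_fa)
    return num / (num + (1.0 - p_fa) * gauss(tb, mu_hit, sd_hit))

print(posterior_fa(265.0) > 0.9)   # warm pixel: almost surely a false alarm
print(posterior_fa(215.0) < 0.1)   # cold pixel: almost surely a real event
```

Thresholding this posterior produces exactly the trade-off the abstract describes: a lower threshold catches more FAs but discards more genuine Hit events.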
Nielen, M.; Spronk, I.; Davids, R.; Korevaar, J.; Poos, R.; Hoeymans, N.; Opstelten, W.; Sande, M. van der; Biermans, M.; Schellevis, F.; Verheij, R.
2016-01-01
Background & Aim: Routinely recorded electronic health records (EHRs) from general practitioners (GPs) are increasingly available and provide valuable data for estimating incidence and prevalence rates of diseases in the general population. Valid morbidity rates are essential for patient management
MR-based Water Content Estimation in Cartilage: Design and Validation of a Method
Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sofie; Ringgaard, Steffen
2012-01-01
system (the closest to the body temperature) we measured, using the modified MR sequences, the T1 map intensity signal on 6 cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis a T1 intensity signal map software analyzer was customized and programmed. Finally, we validated the method by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data were analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique allows removal of all the water a tissue contains) and we measured the water they contained. Results: We could reproduce the 37-degree Celsius system twice and could perform the measurements in a similar way. We found that the MR T1-map-based water content sequences can provide information that, after being analyzed with special software, can...
On the Estimate Method of Construction Engineering Cost Based on the RS-GA-NNA Model
Xie Zheng
2012-07-01
Full Text Available Given the low level of intelligence and accuracy in the valuation of civil architecture projects, this study puts forward a construction engineering cost estimation method based on artificial intelligence that combines rough set theory, a genetic algorithm and a neural network algorithm. First, rough set theory is used to reduce the discrete attributes and optimize the input variables of the BP neural network. Then the global search capability of the genetic algorithm is used to optimize the initial weights and threshold values of the BP neural network. The new algorithm combines the global random search capability of the genetic algorithm with the learning ability and robustness of the neural network, so computational speed and accuracy are significantly improved over traditional methods. An empirical analysis of a case selected from a city in Hunan Province shows that the new model can, relying on engineering features, assess construction costs scientifically and objectively, and has high practical value.
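The GA stage of such a hybrid can be sketched with a minimal real-coded genetic algorithm. For brevity, a linear scorer stands in for the BP network, and all data and hyperparameters are invented; in the full method the best individual found would seed the BP training:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy cost data: two reduced project features -> unit cost (numbers invented).
X = rng.uniform(0.0, 1.0, (30, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 + 0.05 * rng.standard_normal(30)

def mse(w):
    """Fitness: error of a linear scorer standing in for the BP network."""
    return float(np.mean((X @ w[:2] + w[2] - y) ** 2))

# Minimal real-coded GA: tournament selection, blend crossover, mutation,
# and elitism so the best-so-far individual is never lost.
pop = rng.uniform(-5.0, 5.0, (40, 3))
for _ in range(150):
    fit = np.array([mse(ind) for ind in pop])
    a, b = rng.integers(0, 40, 40), rng.integers(0, 40, 40)
    parents = pop[np.where(fit[a] < fit[b], a, b)]           # tournaments
    alpha = rng.uniform(0.0, 1.0, (40, 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += 0.1 * rng.standard_normal(children.shape)    # mutation
    children[0] = pop[np.argmin(fit)]                        # keep the elite
    pop = children
best = pop[np.argmin([mse(ind) for ind in pop])]
print(mse(best))   # should approach the noise floor of the toy data
```

The point of the GA here is exactly the abstract's: a broad random search over the weight space before gradient-based BP training, to avoid poor local initializations.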
Methods of statistical model estimation
Hilbe, Joseph
2013-01-01
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th
An Estimation Method of Stress in Soft Rock Based on In-situ Measured Stress in Hard Rock
LI Wen-ping; LI Xiao-qin; SUN Ru-hua
2007-01-01
The law of variation of deep rock stress in gravitational and tectonic stress fields is analyzed based on the Hoek-Brown strength criterion. In the gravitational stress field, rocks in the shallow area are in an elastic state and the deep, relatively soft rock may be in a plastic state; in the tectonic stress field, by contrast, the relatively soft rock in the shallow area is in a plastic state and the deep rock in an elastic state. A method is proposed to estimate stress values in coal and soft rock based on in-situ stress measurements in hard rock. The estimation method depends on the type of stress field and the stress state. Equations for rock stress are presented for the elastic, plastic and critical states. The critical state is a special stress state that marks the conversion from the elastic to the plastic state in the gravitational stress field, and from the plastic to the elastic state in the tectonic stress field. Two case studies show that the estimation method is feasible.
Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay
2015-01-01
In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which is used to control and terminate the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.
Shindo, J.; Bregt, A.K.; Hakamata, T.
1995-01-01
A simplified steady-state mass balance model for estimating critical loads was applied to a test area in Japan to evaluate its applicability. Three criteria for acidification limits were used. Mean values and spatial distribution patterns of critical load values calculated by these criteria differed
Qian Bin; Yang Wanlin; Wan Qun
2007-01-01
Under dense urban fading environments, the performance of joint multipath parameter estimation methods based on the traditional point-signal model degrades seriously. In this paper, a new space-time signal model based on a multipath distribution function is given after a new space-time manifold is reconstructed. The joint space-time signal subspace is then obtained by converting the acquired channel from the time domain to the frequency domain. The space-time spectrum is formulated from the space sub-matrix and the time sub-matrix taken out of the joint space-time signal subspace, and the parameters are estimated by searching for the minimum eigenvalues of the space and time matrices. Finally, a space-time parameter matching process is performed using the orthogonality between the joint noise subspace and the space-time manifold. In contrast with traditional MUSIC, the algorithm presented here needs only two one-dimensional searches and is not sensitive to the form of the distribution function.
Rulin Huang
2017-04-01
Existing collision avoidance methods for autonomous vehicles ignore the driving intent of detected vehicles and thus cannot satisfy the requirements of autonomous driving in urban environments, because of their high false detection rate for collisions with vehicles on winding roads and their missed detection rate for collisions with maneuvering vehicles. This study introduces an intent-estimation- and motion-model-based (IEMMB) method to address these disadvantages. First, a state vector is constructed by combining the road structure and the moving state of detected vehicles. A Gaussian mixture model is used to learn the maneuvering patterns of vehicles from collected data, and the patterns are used to estimate the driving intent of the detected vehicles. Then, a desirable long-term trajectory is obtained by weighting time and comfort. The long-term trajectory and the short-term trajectory, which is predicted using a constant yaw rate motion model, are fused to achieve an accurate trajectory. Finally, considering the moving state of the autonomous vehicle, collisions can be detected and avoided. Experiments have shown that the intent estimation method performed well, achieving an accuracy of 91.7% on straight roads and 90.5% on winding roads, much higher than that of the method that ignores the road structure. The average collision detection distance is increased by more than 8 m. In addition, the maximum yaw rate and acceleration during an evasive maneuver are decreased, indicating an improvement in driving comfort.
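The intent-estimation step rests on fitting a Gaussian mixture to maneuvering features. A minimal one-dimensional EM sketch, with an invented lateral-velocity feature rather than the study's road-structure state vector, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training feature: lateral velocity of observed vehicles (m/s).
# Lane-keeping samples cluster near 0; lane-change samples near 1.2.
keep = rng.normal(0.0, 0.15, 300)
change = rng.normal(1.2, 0.25, 200)
x = np.concatenate([keep, change])

# Two-component 1D Gaussian mixture fitted by expectation-maximization.
mu = np.array([0.1, 1.0])
var = np.array([0.5, 0.5])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each sample.
    p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances.
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    pi = n / len(x)

def intent(v_lat):
    # Posterior probability that an observed vehicle is changing lanes.
    p = pi * np.exp(-(v_lat - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return p[np.argmax(mu)] / p.sum()

print(intent(0.05), intent(1.1))
```

The fitted posterior plays the role of the learned maneuvering pattern: new observations are scored against the mixture to yield an intent probability.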
Natori, Youichi; Kawamoto, Kazuhiko; Takahashi, Hiroshi; Hirota, Kaoru
A traffic accident prediction method using a priori knowledge based on accident data is proposed for safe driving support. Implementation is achieved by an algorithm using particle filtering and fuzzy inference to estimate accident risk factors. With this method, the distance between the host vehicle and a vehicle ahead and their relative velocity and relative acceleration are obtained from the results of particle filtering of driving data and are used as attributes to build the relative driving state space. The attributes are evaluated as likelihoods and then consolidated as a risk level using fuzzy inference. Experimental validation was done using videos of general driving situations obtained with an on-vehicle CCD camera and one simulated accident situation created based on the video data. The results show that high risk levels were calculated with the proposed method in the early stages of the accident situations.
Primeau, Charlotte; Friis, Laila; Sejrsen, Birgitte; Lynnerup, Niels
2016-01-01
To develop a series of regression equations for estimating age from the length of long bones for archaeological sub-adults when aging from dental development cannot be performed; further, to compare the ages derived when using these regression equations and two other methods. A total of 183 skeletal sub-adults from the Danish medieval period were aged from radiographic images. Linear regression formulae were then produced for individual bones. Age was then estimated from femur length using three different methods: the equations developed in this study, data based on a modern population (Maresh: Human growth and development (1970) pp 155-200), and, lastly, archaeological data with known ages (Rissech et al.: Forensic Sci Int 180 (2008) 1-9). As growth of long bones is known to be non-linear, it was tested whether the regression model could be improved by applying a quadratic model. Comparison between estimated ages revealed that the modern data result in lower estimated ages when compared to the Danish regression equations. The estimated ages using the Danish regression equations and the regression equations developed by Rissech et al. (Forensic Sci Int 180 (2007) 1-9) were very similar, if not identical. This indicates that the growth of the two archaeological populations was not that dissimilar. This would suggest that the regression equations developed in this study may potentially be applied to archaeological material outside Denmark as well as later than the medieval period, although this would require further testing. The quadratic equations are suggested to yield more accurate ages than using simple linear regression equations. © 2015 Wiley Periodicals, Inc.
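The core of the method, fitting linear and quadratic regressions of age on bone length and comparing their fit, can be sketched as follows. The data points below are invented for illustration and are not the Danish measurements:

```python
import numpy as np

# Illustrative femur diaphyseal lengths (mm) vs age (years) -- invented values
# showing the typical non-linear growth pattern, not the study's data.
length = np.array([75, 120, 160, 200, 240, 280, 310, 340, 370, 400], float)
age = np.array([0.1, 0.8, 1.8, 3.2, 5.0, 7.2, 9.0, 11.0, 13.5, 16.0])

lin = np.polyfit(length, age, 1)    # age ~ a*length + b
quad = np.polyfit(length, age, 2)   # age ~ a*length^2 + b*length + c

def rmse(coeffs):
    # Root-mean-square error of the fitted polynomial on the sample.
    return np.sqrt(np.mean((np.polyval(coeffs, length) - age) ** 2))

print(rmse(lin), rmse(quad))        # the quadratic captures the curvature better
```

Because growth is non-linear, the quadratic term absorbs the curvature that a straight line misses, which is the study's argument for preferring quadratic equations.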
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
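The SVD-based reduction can be sketched as follows: a hypothetical sensitivity matrix maps many health parameters to a few outputs, and the leading right singular vectors define a low-dimensional tuning vector that reproduces the outputs in a least-squares sense. Dimensions and values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sensitivity matrix: 4 measured outputs x 10 health parameters.
H = rng.standard_normal((4, 10))

U, s, Vt = np.linalg.svd(H, full_matrices=False)

k = 3                   # number of tuning parameters the Kalman filter can carry
V_k = Vt[:k].T          # maps k tuning parameters back to health-parameter space
H_k = H @ V_k           # reduced sensitivity matrix used in the filter model

# Least-squares representation of an arbitrary degradation vector h:
h = rng.standard_normal(10)
q = V_k.T @ h           # tuning-parameter values best approximating h's effect
approx_outputs = H_k @ q

err = np.linalg.norm(H @ h - approx_outputs) / np.linalg.norm(H @ h)
print(err)              # relative output error of the rank-k representation
```

Projecting onto the top-k right singular vectors is exactly the least-squares-optimal rank-k representation of the health parameters' effect on the outputs, which is what makes the reduced tuning vector estimable by a Kalman filter without discarding more output information than necessary.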
LIANG Hui; ZHAO Wei; DAI Dejun; ZHANG Jun
2014-01-01
Diapycnal mixing is important in oceanic circulation. An inverse method, in which a semi-explicit scheme is applied to discretize the one-dimensional temperature diffusion equation, is established to estimate the vertical temperature diffusion coefficient from observed temperature profiles. The sensitivity of the inverse model under idealized and actual conditions is tested in detail. The inverse model is found to be feasible in a range of situations that preserve its stability, and can be considered an efficient way to estimate the temperature diffusion coefficient in the weak-current regions of the ocean. Here, hydrographic profiles from Argo floats are used to estimate the temporal and spatial distribution of vertical mixing in the north central Pacific based on this inverse method. It is further found that vertical mixing in the upper ocean displays a distinct seasonal variation, with amplitude decreasing with depth, and that vertical mixing over rough topography is stronger than that over smooth topography. It is suggested that high-resolution profiles from Argo floats and a more reasonable design of the inverse scheme will help in understanding mixing processes.
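The inverse idea, recovering the diffusion coefficient from successive temperature profiles via the discretized diffusion equation, can be sketched on synthetic data. For simplicity this uses a plain explicit scheme rather than the paper's semi-explicit one, and noiseless profiles (real Argo data would need noise handling and regularization):

```python
import numpy as np

# Forward model: one-dimensional vertical diffusion  dT/dt = kappa * d2T/dz2.
nz, dz, dt = 50, 1.0, 0.1
kappa_true = 0.8
z = np.arange(nz) * dz
T = 20.0 - 0.1 * z + np.exp(-((z - 25) / 5) ** 2)   # synthetic initial profile

def step(T, kappa):
    # One explicit time step; boundary values are held fixed.
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz ** 2
    return T + dt * kappa * lap

T_next = step(T, kappa_true)            # "observed" profile at the next time

# Inverse step: recover kappa from the two profiles by least squares over
# the same discretized equation at the interior points.
lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz ** 2
dTdt = (T_next[1:-1] - T[1:-1]) / dt
kappa_est = np.sum(dTdt * lap) / np.sum(lap ** 2)
print(kappa_est)
```

With consistent discretizations the least-squares ratio recovers the coefficient exactly; the substance of the paper's method lies in making this inversion stable when the profiles are real, noisy observations.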
Guo Yuqin; Li Fuzhu; Jiang Hong; Wang Xiaochun
2005-01-01
According to the characteristics of a complex cover panel, its geometry is described by a NURBS surface, which has great descriptive capability. With reference to the surface classification determined by Gaussian curvature, the ratio of the mid-surface area before and after development is derived from the displacement variation of the mid-surface along the normal direction of the sheet metal during forming. On this basis, using the curve development theory of differential geometry, a novel diagonal point-by-point surface development method is put forward to estimate the blank contour of a complex cover panel efficiently. The validity of the proposed surface development method is verified by comparing its development result for a sample with that of an available one-step method.
Hyuck-Jin Park; Jung-Yoon Jang; Jung-Hyun Lee
2017-01-01
The physically based model has been widely used in rainfall-induced shallow landslide susceptibility analysis because of its capacity to reproduce the physical processes governing landslide occurrence...
Hamideh Nouri
2016-06-01
Despite being the driest inhabited continent, Australia has one of the highest per capita water consumptions in the world. In addition, instead of having fit-for-purpose water supplies (using different qualities of water for different applications), highly treated drinking water is used for nearly all of Australia's urban water supply needs, including landscape irrigation. The water requirement of urban landscapes, particularly urban parklands, is of growing concern. The estimation of evapotranspiration (ET) and subsequently plant water requirements in urban vegetation needs to consider the heterogeneity of plants, soils, water, and climate characteristics. This research contributes to a broader effort to establish sustainable irrigation practices within the Adelaide Parklands in Adelaide, South Australia. In this paper, two practical ET estimation approaches are compared to a detailed Soil Water Balance (SWB) analysis over a one-year period. One approach is the Water Use Classification of Landscape Plants (WUCOLS) method, which is based on expert opinion on the water needs of different classes of landscape plants. The other is a remote sensing approach based on the Enhanced Vegetation Index (EVI) from Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra satellite. Both methods require knowledge of reference ET calculated from meteorological data. The SWB determined that plants consumed 1084 mm·yr−1 of water in ET with an additional 16% lost to drainage past the root zone, an amount sufficient to keep salts from accumulating in the root zone. ET by MODIS EVI was 1088 mm·yr−1, very close to the SWB estimate, while WUCOLS estimated the total water requirement at only 802 mm·yr−1, 26% lower than the SWB estimate and 37% lower than the amount actually added including the drainage fraction. Individual monthly ET by MODIS was not accurate, but these errors were cancelled out to give good agreement on an annual time step. We
Estimating the direction of innovative change based on theory and mixed methods
Geurts, Petrus A.T.M.; Roosendaal, Hans E.
2001-01-01
In predicting the direction of innovative change the question arises of the valid measurement of yet unknown variables. We developed and applied a research method that combines qualitative and quantitative elements in one interview format and an analysis tool suitable for these data. An important
CHAKKOR SAAD
2014-05-01
Electrical energy production based on wind power has become one of the most popular renewable resources in recent years because it provides reliable, clean energy at minimum cost. The major challenge for wind turbines is the electrical and mechanical failures that can occur at any time, causing breakdowns and damage and therefore leading to machine downtime and loss of energy production. To circumvent this problem, several tools and techniques have been developed to enhance fault detection and diagnosis from the stator current signature of wind turbine generators. Among these are the parametric, or super-resolution, frequency estimation methods, which provide fine spectrum estimates and can be useful for this purpose. Given the plurality of these algorithms, a comparative performance analysis is made to evaluate their robustness on several metrics: accuracy, dispersion, computation cost, perturbations and fault severity. Finally, simulation results in Matlab with the most common faults indicate that the ESPRIT and R-MUSIC algorithms have a high capability of correctly identifying the frequencies of fault characteristic components; a performance ranking is carried out to demonstrate the efficiency of the studied methods in fault detection.
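As an illustration of the super-resolution family compared here, a minimal MUSIC-style frequency estimator on a synthetic fault tone can be written as below. The fault frequency, noise level and covariance order are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

fs = 1000.0
f_fault = 95.0                    # hypothetical fault characteristic frequency
n = 512
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_fault * t) + 0.5 * rng.standard_normal(n)

# MUSIC: eigendecompose the sample covariance of windowed snapshots, then
# search for steering vectors nearly orthogonal to the noise subspace.
m = 40                                           # covariance order
snap = np.lib.stride_tricks.sliding_window_view(x, m)
R = snap.T @ snap / snap.shape[0]
eigvals, V = np.linalg.eigh(R)                   # eigenvalues ascending
En = V[:, :-2]          # noise subspace (a real sinusoid spans 2 eigenvectors)

freqs = np.linspace(50, 150, 2001)
a = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(m)))  # steering vectors
pseudo = 1.0 / np.linalg.norm(a.conj() @ En, axis=1) ** 2     # MUSIC spectrum
f_est = freqs[np.argmax(pseudo)]
print(f_est)
```

The pseudospectrum peaks where the steering vector falls in the signal subspace, which is why such methods can resolve closely spaced fault components that an ordinary periodogram would smear together.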
Deceuster, John; Etienne, Adélaïde; Robert, Tanguy; Nguyen, Frédéric; Kaufmann, Olivier
2014-04-01
Several techniques are available to estimate the depth of investigation or to identify possible artifacts in dc resistivity surveys. Commonly, the depth of investigation (DOI) is mainly estimated by using an arbitrarily chosen cut-off value on a selected indicator (resolution, sensitivity or DOI index). Ranges of cut-off values are recommended in the literature for the different indicators. However, small changes in threshold values may induce strong variations in the estimated depths of investigation. To overcome this problem, we developed a new statistical method to estimate the DOI of dc resistivity surveys based on a modified DOI index approach. This method is composed of 5 successive steps. First, two inversions are performed by using different resistivity reference models for the inversion (0.1 and 10 times the arithmetic mean of the logarithm of the observed apparent resistivity values). Inversion models are extended to the edges of the survey line and to a depth range of three times the pseudodepth of investigation of the largest array spacing used. In step 2, we compute the histogram of a newly defined scaled DOI index. Step 3 consists of the fitting of the mixture of two Gaussian distributions (G1 and G2) to the cumulative distribution function of the scaled DOI index values. Based on this fitting, step 4 focuses on the computation of an interpretation index (II) defined for every cell j of the model as the relative probability density that the cell j belongs to G1, which describes the Gaussian distribution of the cells with a scaled DOI index close to 0.0. In step 5, a new inversion is performed by using a third resistivity reference model (the arithmetic mean of the logarithm of the observed apparent resistivity values). The final electrical resistivity image is produced by using II as alpha blending values allowing the visual discrimination between well-constrained areas and poorly-constrained cells.
Wu, Xianhua; Wei, Guo; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian
2014-01-01
Concentrating on the consuming coefficient, the partition coefficient, and the Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effect. Quantitative estimates are then obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that economic losses are noticeably reduced by the preventive strategies developed from both the meteorological information and the internal relevance (interdependency) of the industrial economic system. Another finding is that the ratio of input to complete economic effect of meteorological services is about 1 : 108.27-1 : 183.06, remarkably different from a previous estimate based on the Delphi method (1 : 30-1 : 51). In particular, the economic effects of meteorological services are higher for nontraditional users in manufacturing, wholesale and retail trades, the services sector, tourism, culture and art, and lower for traditional users in agriculture, forestry, livestock, fishery, and the construction industries.
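The Leontief-inverse calculation at the core of the input-output method can be sketched with an invented three-sector coefficient matrix (the values below are illustrative, not Jiangxi data):

```python
import numpy as np

# Hypothetical 3-sector direct-consumption (technical coefficient) matrix A:
# A[i, j] = input from sector i needed per unit of output of sector j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])

# Leontief inverse (I - A)^-1 converts a final-demand change into the total
# (direct + indirect, i.e. "complete") output required across all sectors.
L = np.linalg.inv(np.eye(3) - A)

# Hypothetical direct output protected by meteorological services, by sector:
delta_demand = np.array([1.0, 0.5, 0.2])

total_effect = L @ delta_demand
print(total_effect)                               # complete effect per sector
print(total_effect.sum() / delta_demand.sum())    # amplification ratio > 1
```

Because the Leontief inverse equals I + A + A² + ..., the complete effect always exceeds the direct one, which is how interdependency in the industrial system amplifies the value of the services.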
Liang, Xian-hua; Sun, Wei-dong
2011-06-01
Inventory checking is one of the most significant tasks for grain reserves and plays a very important role in the macro-control of food supply and food security. A simple, fast and accurate method is needed to obtain internal structure information and further estimate the volume of grain in storage. In our developed system, a specially designed multi-site laser scanning system is used to acquire range data clouds of the internal structure of the grain storage. Because of the seriously uneven distribution of the range data, the data are first preprocessed by an adaptive re-sampling method to reduce redundancy as well as noise. The range data are then segmented and useful features, such as plane and cylinder information, are extracted. With these features a coarse registration between all the single-site range data sets is done, and an Iterative Closest Point (ICP) algorithm is then carried out to achieve fine registration. Taking advantage of the fact that the structure of a grain storage is well defined and its types are limited, a fast automatic registration method based on an a priori model is proposed to register the multi-site range data more efficiently. After integration of the multi-site range data, the grain surface is finally reconstructed by a Delaunay-based algorithm and the grain volume is estimated by numerical integration. The proposed method has been applied to two common types of grain storage; experimental results show that it is effective and accurate, and that it avoids the accumulation of errors that arises when registering overlapped areas pair-wise.
An adaptive segment method for smoothing lidar signal based on noise estimation
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise is characterized by the 3σ spread of the background signal, and an integer N is defined for finding the changing positions in the signal curve. If the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points from each signal, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration of the average smoothing method; two or three iterations are enough. In the ASSM the signals are smoothed in the spatial domain rather than the frequency domain, so frequency-domain disturbances are avoided. In the experimental work, a lidar echo was simulated as if created by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise of the environment and the detector. The ASSM was applied to the noisy echo to filter the noise; in the test, N was set to 3 and the number of iterations to two. The results show that the signal can be smoothed adaptively by the ASSM, although N and the number of iterations may need to be optimized when the ASSM is applied to a different lidar.
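The segmentation-then-smoothing idea can be sketched as follows on a synthetic echo. For simplicity this uses clipped averaging windows at the segment edges instead of the paper's iterative end-point correction, and the echo shape and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated echo: flat background with one strong layer return (illustrative).
clean = np.concatenate([np.full(100, 1.0), np.full(50, 5.0), np.full(100, 1.0)])
sigma = 0.2                                    # known noise level
noisy = clean + sigma * rng.standard_normal(clean.size)

N = 3
# Segment end points: positions where the jump between neighbouring samples
# exceeds 3*N*sigma, i.e. cannot plausibly be noise.
jumps = np.flatnonzero(np.abs(np.diff(noisy)) > 3 * N * sigma) + 1
edges = np.concatenate([[0], jumps, [noisy.size]])

smoothed = noisy.copy()
for a, b in zip(edges[:-1], edges[1:]):
    w = max(1, (b - a) // 2)                   # window = half the segment length
    for i in range(a, b):
        # Average strictly within the segment, so sharp edges are not blurred.
        lo, hi = max(a, i - w // 2), min(b, i + w // 2 + 1)
        smoothed[i] = noisy[lo:hi].mean()

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed - clean) ** 2))
print(rmse_noisy, rmse_smooth)
```

Smoothing each segment independently is what lets the method suppress noise aggressively on the flat parts while leaving the sharp layer boundaries intact.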
Risser, Dennis W.; Thompson, Ronald E.; Stuckey, Marla H.
2008-01-01
A method was developed for making estimates of long-term, mean annual ground-water recharge from streamflow data at 80 streamflow-gaging stations in Pennsylvania. The method relates mean annual base-flow yield derived from the streamflow data (as a proxy for recharge) to the climatic, geologic, hydrologic, and physiographic characteristics of the basins (basin characteristics) by use of a regression equation. Base-flow yield is the base flow of a stream divided by the drainage area of the basin, expressed in inches of water basinwide. Mean annual base-flow yield was computed for the period of available streamflow record at continuous streamflow-gaging stations by use of the computer program PART, which separates base flow from direct runoff on the streamflow hydrograph. Base flow provides a reasonable estimate of recharge for basins where streamflow is mostly unaffected by upstream regulation, diversion, or mining. Twenty-eight basin characteristics were included in the exploratory regression analysis as possible predictors of base-flow yield. Basin characteristics found to be statistically significant predictors of mean annual base-flow yield during 1971-2000 at the 95-percent confidence level were (1) mean annual precipitation, (2) average maximum daily temperature, (3) percentage of sand in the soil, (4) percentage of carbonate bedrock in the basin, and (5) stream channel slope. The equation for predicting recharge was developed using ordinary least-squares regression. The standard error of prediction for the equation on log-transformed data was 9.7 percent, and the coefficient of determination was 0.80. The equation can be used to predict long-term, mean annual recharge rates for ungaged basins, providing that the explanatory basin characteristics can be determined and that the underlying assumption is accepted that base-flow yield derived from PART is a reasonable estimate of ground-water recharge rates. For example, application of the equation for 370
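The regression step, ordinary least squares of base-flow yield on basin characteristics, can be sketched on synthetic basins whose predictors mimic the five significant ones. All values and coefficients below are invented for illustration, not the Pennsylvania data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic basins: predictors loosely mimicking the significant ones
# (precipitation, max temperature, % sand, % carbonate, channel slope).
n = 80
Xb = np.column_stack([
    rng.uniform(35, 50, n),       # mean annual precipitation (inches)
    rng.uniform(50, 65, n),       # average maximum daily temperature (F)
    rng.uniform(5, 60, n),        # percentage of sand in the soil
    rng.uniform(0, 80, n),        # percentage of carbonate bedrock
    rng.uniform(0.001, 0.05, n),  # stream channel slope
])
beta_true = np.array([0.02, -0.01, 0.004, 0.003, 5.0])   # invented effects
log_yield = Xb @ beta_true + 0.5 + 0.05 * rng.standard_normal(n)

# Ordinary least squares on the log-transformed base-flow yield.
A = np.column_stack([np.ones(n), Xb])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, log_yield, rcond=None)

pred = A @ coef
ss_res = np.sum((log_yield - pred) ** 2)
ss_tot = np.sum((log_yield - log_yield.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(r2)                                    # coefficient of determination
```

Once the coefficients are in hand, recharge at an ungaged basin is predicted simply by evaluating the fitted equation on that basin's characteristics, which is the intended use of the study's equation.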
Estimating kinetic parameters of complex catalytic reactions using a curve resolution based method
Cruz, S.C.; Rothenberg, G.; Westerhuis, J.A.; Smilde, A.K.
2008-01-01
A Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) based algorithm is used to extract kinetic parameters from on-line FT-NIR data of a series of Heck reactions between iodobenzene and n-butyl acrylate (NBA), measured at different temperatures with different catalysts. Four
A method for real-time condition monitoring of haul roads based on Bayesian parameter estimation
Heyns, T
2012-04-01
and to the vehicles. A recent idea is that vehicle on-board data collection systems could be used to monitor haul roads on a real-time basis by means of vibration signature analysis. This paper proposes a methodology based on Bayesian regression to isolate the effect...
Train velocity estimation method based on an adaptive filter with fuzzy logic
Pichlík, Petr; Zděnek, Jiří
2017-03-01
The train velocity is difficult to determine when it is measured only on the driven or braked wheelsets of the locomotive. In this case, the calculated train velocity differs from the actual train velocity due to slip velocity or skid velocity, respectively. The train velocity is needed for the proper operation of the locomotive controller. For this purpose, an adaptive filter tuned by fuzzy logic is designed and described in the paper. The filter calculates the train longitudinal velocity from the locomotive wheelset velocity, and the fuzzy logic tunes the filter according to the actual wheelset acceleration and jerk. The simulation results are based on data measured on a real freight train, and they show that the calculated velocity corresponds to the actual train velocity.
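A crude sketch of the idea is an adaptive first-order filter whose correction gain is cut when the wheelset measurement is implausible; a two-level rule stands in here for the paper's fuzzy tuning, and the speed profile, noise and slip event are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

dt = 0.1
t = np.arange(0, 60, dt)
v_true = 0.3 * t                               # train speed, steady acceleration
wheel = v_true + 0.05 * rng.standard_normal(t.size)
wheel[200:260] += 3.0                          # slip event: the wheelset spins up

# Adaptive filter: trust the measurement normally, but almost freeze the
# estimate when the innovation is too large to be genuine train dynamics.
v_est = np.empty_like(wheel)
v_est[0] = wheel[0]
for i in range(1, t.size):
    innovation = wheel[i] - v_est[i - 1]
    gain = 0.3 if abs(innovation) < 0.5 else 0.01   # two-level "fuzzy" rule
    v_est[i] = v_est[i - 1] + gain * innovation

err_wheel = np.max(np.abs(wheel - v_true))     # raw wheel-speed error (slip)
err_filt = np.max(np.abs(v_est - v_true))      # filtered estimate error
print(err_wheel, err_filt)
```

During the slip the raw wheel speed is off by metres per second while the filtered estimate coasts on the plausible train dynamics; a genuine fuzzy system would grade the gain continuously from acceleration and jerk rather than switching between two levels.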
Hulley, Glynn C.; Hook, Simon J.
2012-10-01
Land Surface Temperature (LST) has been identified by NASA and other international organizations as an important Earth System Data Record (ESDR). An ESDR is defined as a long-term, well calibrated and validated data set. Identifying uncertainties in LST products with coarse spatial resolutions (>10 km) such as those from hyperspectral infrared sounders is notoriously difficult due to the challenges of making reliable in situ measurements representative of the spatial scales of the output products. In this study we utilize a Radiance-based (R-based) LST method for estimating uncertainties in the Atmospheric Infrared Sounder (AIRS) v5 LST product. The R-based method provides estimates of the true LST using a radiative closure simulation without the need for in situ measurements, and requires input air temperature, relative humidity profiles and emissivity data. The R-based method was employed at three validation sites over the Namib Desert, Gran Desierto, and Redwood National Park for all AIRS observations from 2002 to 2010. Results showed daytime LST root-mean square errors (RMSE) of 2-3 K at the Namib and Desierto sites, and 1.5 K at the Redwood site. Nighttime LST RMSEs at the two desert sites were a factor of two less when compared to daytime results. Positive daytime LST biases were found at each site due to an underestimation of the daytime AIRS v5 longwave spectral emissivity, while the reverse occurred at nighttime. In the AIRS v6 product (release 2012), LST biases and RMSEs will be reduced significantly due to improved methodologies for the surface retrieval and emissivity first guess.
A blowing-based method of detecting trunk and estimating root position for weeding mobile robots
Zhang, Fan; Matsushita, Akihiko; Kaneko, Shun'ichi; Tanaka, Takayuki
2008-11-01
Because the vineyard areas in Hokkaido are extremely large, it is very difficult and laborious to eradicate weeds by hand. To solve this problem, we developed a dynamic image measurement technique that can be applied to weeding robots in vineyards. The outstanding feature of this technique is that it can discriminate between weeds and trunks correctly and efficiently. We also attempt to measure the root position of the trunk accurately, and a new method for measuring grape trunks that are partially occluded in the vineyard has also been developed in this paper.
Yao, Yu; Cheng, Kai; Zhou, Zhi-Jie; Zhang, Bang-Cheng; Dong, Chao; Zheng, Sen
2015-11-01
A tracked vehicle has been widely used in exploring unknown environments and in military fields. In current methods for adapting to soil conditions, the soil parameters need to be given in advance, and the traction performance cannot always be satisfied on soft soil. To solve this problem, it is essential to estimate track-soil parameters in real time. Therefore, a detailed mathematical model is proposed for the first time. Furthermore, a novel algorithm composed of a Kalman filter (KF) and an improved strong tracking filter (ISTF) is developed for online track-soil estimation and named KF-ISTF. In this method, the KF is used to estimate slip parameters, and the ISTF is used to estimate motion states. The key soil parameters can then be estimated by using a suitable soil model. The experimental results show that, equipped with the estimation algorithm, the proposed model can be used to estimate the track-soil parameters and make the traction performance satisfy the soil conditions.
Matsumoto, S.
2016-09-01
The stress field is a key factor controlling earthquake occurrence and crustal evolution. In this study, we propose an approach for determining the stress field in a region using seismic moment tensors, based on the classical equation in plasticity theory. Seismic activity is a phenomenon that relaxes crustal stress and creates plastic strain in a medium because of faulting, which suggests that the medium could behave as a plastic body. Using the constitutive relation in plastic theory, the increment of the plastic strain tensor is proportional to the deviatoric stress tensor. Simple mathematical manipulation enables the development of an inversion method for estimating the stress field in a region. The method is tested on shallow earthquakes occurring on Kyushu Island, Japan.
Song, Sangha; Elguezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2014-01-01
Skin surface irregularity is the most common side effect after liposuction. To reduce this, it is necessary to devise a systematic method to provide structural composition details of skin layers, such as fat thickness and fat boundary tilt angle, for the plastic surgeon. Several commercial portable devices are available to measure skin layer information, working on the principle of a near-infrared technique using the light penetration properties of tissue in optical windows. However, these can only measure general fat thickness and not the structural compositions of skin layers with irregularities. Therefore, our goal in this paper is to propose a method to estimate the structural compositions of skin layers by analyzing and validating the relationship between light distribution and structural composition from simulation data based on specific structural conditions.
Alaniz, Alex; Kallel, Faouzi; Hungerford, Ed; Ophir, Jonathan
2002-01-01
The effects of high intensity focused ultrasound (HIFU)-induced continuously varying thermal gradients on sound ray propagation were modeled theoretically. This modeling was based on Fermat's variational principle of least time for rays propagating in a continuously varying thermal gradient described by a radially symmetric heat equation. Such thermal lenses dynamically affect HIFU beam focusing, and simultaneously create ultrasonic geometric and intensity distortions and artifacts in monitoring devices. Techniques which are based upon ultrasonic cross-correlation methods, such as elastography and two-dimensional temperature estimation, also suffer distortion effects and generate artifacts.
Probability Estimation Method Based on Truncated Estimation
李熔
2014-01-01
Whether sparse signals can be correctly reconstructed with high probability is an important question in compressive sensing theory. The sparsity of the signal and the correlation properties of the atoms in the redundant dictionary are the key factors in studying this question. Using the concept of cumulative coherence, this paper proposes a method, based on truncated estimation, for estimating the probability that the cumulative coherence satisfies the constraint bound. With this method, one can judge whether a selected measurement matrix can correctly reconstruct the original signal. Matlab simulations verify that, with a Gaussian random matrix as the measurement matrix, the original signal can be reconstructed with high probability under the OMP reconstruction algorithm, and they also confirm that the proposed method is reasonable.
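For context, the simulation described above can be sketched as follows: sparse recovery by orthogonal matching pursuit (OMP) from Gaussian random measurements. Dimensions and sparsity below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS on support
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 256, 5
A = rng.standard_normal((n, m)) / np.sqrt(n)  # Gaussian measurement matrix
x_true = np.zeros(m)
idx = rng.choice(m, size=k, replace=False)
x_true[idx] = rng.standard_normal(k)
y = A @ x_true
x_rec = omp(A, y, k)
print(np.max(np.abs(x_rec - x_true)))  # near zero when recovery succeeds
```

With this many Gaussian measurements relative to the sparsity, exact recovery happens with overwhelming probability, which is the kind of statement the paper's truncated-estimation bound quantifies.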
Shen Min-Fen; Liu Ying; Lin Lan-Xin
2009-01-01
A novel computationally efficient algorithm in terms of the time-varying symbolic dynamic method is proposed to estimate the unknown initial conditions of coupled map lattices (CMLs). The presented method combines symbolic dynamics with time-varying control parameters to develop a time-varying scheme for estimating the initial condition of multi-dimensional spatiotemporal chaotic signals. The performances of the presented time-varying estimator in both noiseless and noisy environments are analysed and compared with the common time-invariant estimator. Simulations are carried out and the obtained results show that the proposed method provides an efficient estimation of the initial condition of each lattice in the coupled system. The algorithm cannot yield an asymptotically unbiased estimation due to the effect of the coupling term, but the estimation with the time-varying algorithm is closer to the Cramer-Rao lower bound (CRLB) than that with the time-invariant estimation method, especially at high signal-to-noise ratios (SNRs).
Jiang Wei; Xiang Haige
2004-01-01
This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.
Yub Raj Neupane
2014-07-01
The aim of the present work was to develop and validate a reverse-phase high-performance liquid chromatography (RP-HPLC) method for the determination of Decitabine (DCB). The developed method was further applied to observe the degradation of DCB under various stress conditions. Methods: Chromatographic separation was achieved on a C18 Agilent column (250 × 4.6 mm, particle size 5 μm), using ammonium acetate (0.01 M) as the mobile phase with a flow rate of 1 mL/min and an injection volume of 20 μL. Quantification was carried out with a UV detector at 230 nm using a linear calibration curve in the concentration range of 10–100 μg/mL based on peak area. The developed method was validated for linearity, accuracy, precision, and robustness. Results: Linearity was found in the range of 10–100 μg/mL with a high correlation coefficient, r2 = 0.9994. The limit of detection (LOD) and the limit of quantification (LOQ) were found to be 1.92 μg/mL and 5.82 μg/mL, respectively. Moreover, the validated method was applied to study the degradation profile of DCB under various stress degradation conditions. Examination of the different stress conditions showed that DCB was highly susceptible to oxidative degradation, with 31.24% of the drug degraded. In acidic and alkaline conditions, the drug was degraded by 21.03% and 12.16%, respectively, while thermal and photolytic conditions caused the least degradation, i.e. 0.21% and 0.3%, respectively. Conclusion: The proposed method was found to be sensitive and specific, and was successfully applied for the estimation of DCB in bulk drug and lipid-based nanoparticles.
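The linearity and LOD/LOQ figures quoted above follow the standard calibration-curve calculation (ICH-style: LOD = 3.3·σ/S and LOQ = 10·σ/S, with σ the residual standard deviation and S the slope). A sketch with made-up peak-area data, not the study's measurements:

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. peak area
conc = np.array([10, 20, 40, 60, 80, 100], dtype=float)
area = np.array([128.0, 253.1, 502.4, 755.2, 1004.8, 1251.9])

slope, intercept = np.polyfit(conc, area, 1)         # least-squares line
resid = area - (slope * conc + intercept)
sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # residual std deviation

lod = 3.3 * sigma / slope    # limit of detection
loq = 10.0 * sigma / slope   # limit of quantification

ss_res = np.sum(resid**2)
ss_tot = np.sum((area - area.mean())**2)
r2 = 1.0 - ss_res / ss_tot   # coefficient of determination
print(slope, lod, loq, r2)
```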
Age Estimation Methods in Forensic Odontology
Phuwadon Duangto
2016-12-01
Forensically, age estimation is a crucial step for biological identification. Currently, there are many methods, of variable accuracy, to predict the age of dead or living persons, such as physical examination, radiographs of the left hand, and dental assessment. Age estimation using radiographic tooth development has been found to be an accurate method because tooth development is mainly genetically influenced and less affected by nutritional and environmental factors. The Demirjian et al. method has long been the most commonly used radiological technique for dental age estimation in many populations. This method, based on tooth developmental changes, is easy to apply since the different stages of tooth development are clearly defined. The aim of this article is to elaborate on age estimation using tooth development, with a focus on the Demirjian et al. method.
Corrado Dimauro
2010-01-01
Two methods of SNP pre-selection based on single-marker regression for the estimation of genomic breeding values (G-EBVs) were compared using simulated data provided by the XII QTL-MAS workshop: (i) Bonferroni correction of the significance threshold and (ii) a permutation test to obtain the reference distribution of the null hypothesis and identify significant markers at the P<0.01 and P<0.001 significance thresholds. From the set of markers significant at P<0.001, random subsets of 50% and 25% of the markers were extracted to evaluate the effect of further reducing the number of significant SNPs on G-EBV predictions. The Bonferroni correction method allowed the identification of 595 significant SNPs that gave the best G-EBV accuracies in the prediction generations (82.80%). The permutation methods gave slightly lower G-EBV accuracies even though a larger number of SNPs resulted significant (2,053 and 1,352 for the 0.01 and 0.001 significance thresholds, respectively). Interestingly, halving or quartering the number of SNPs significant at P<0.001 resulted in only a slight decrease in G-EBV accuracies. The genetic structure of the simulated population, with few QTL carrying large effects, might have favoured the Bonferroni method.
Eunmi Kim
2014-09-01
Recently, flood damage from frequent localized downpours in cities has been increasing on account of abnormal climate phenomena and the growth of impermeable areas due to urbanization. This study suggests a method to estimate real-time flood risk on roads for drivers based on accumulated rainfall. Because rainfall is not measured directly on roads, the amount of rainfall on a road link, an intensive quantity, is calculated using the revised method for estimating missing rainfall in meteorology. For real-time computation, we use the inverse distance weighting (IDW) method, which suits the computing system and is commonly used for precipitation due to its simplicity. Together with the real-time accumulated rainfall, the flooding history, the rainfall ranges that caused flooding in previous rainfall records, and the frequency probability of precipitation are used to determine the flood risk on roads. The result of a simulation using the suggested algorithms shows a high concordance rate between actual flooded areas in the past and flooded areas derived from the simulation for the study region in Busan, Korea.
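The IDW step above can be sketched directly: each target point gets a weighted average of station rainfall, with weights proportional to inverse distance raised to a power. Coordinates and values below are illustrative assumptions.

```python
import numpy as np

def idw(stations, rain, targets, power=2.0):
    """Inverse-distance-weighted rainfall at target points.

    stations: (n, 2) station coordinates; rain: (n,) observed rainfall;
    targets: (m, 2) points (e.g. road-link centroids) to estimate."""
    est = np.empty(len(targets))
    for i, p in enumerate(targets):
        d = np.linalg.norm(stations - p, axis=1)
        if np.any(d < 1e-12):                  # target coincides with a station
            est[i] = rain[np.argmin(d)]
            continue
        w = 1.0 / d**power
        est[i] = np.sum(w * rain) / np.sum(w)  # weighted average
    return est

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([4.0, 8.0, 6.0, 10.0])         # accumulated rainfall (mm)
targets = np.array([[5.0, 5.0], [0.0, 0.0]])
out = idw(stations, rain, targets)
print(out)  # centre point: equidistant, so the plain mean; station point: exact
```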
Li, Meng; Huang, Zhonghua
2016-10-01
Signal processing for an ultra-wideband radio fuze receiver involves several challenges: it requires high real-time performance, the output signal is mixed with broadband noise, and the signal-to-noise ratio (SNR) decreases with increased detection range. The adaptive line enhancement method is used to filter the output signal of the ultra-wideband radio fuze receiver, thereby suppressing the wideband noise and extracting the target characteristic signal. The estimation algorithm for the filter's input correlation matrix is based on the delay factor of the adaptive line enhancer. The proposed adaptive algorithm was used to filter and reduce noise in the output signal from the fuze receiver. Simulation results showed that the SNR of the output signal after adaptive noise reduction was improved by 20 dB, compared with an improvement of around 10 dB after finite impulse response (FIR) filtering.
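An adaptive line enhancer of the kind described can be sketched with a plain LMS predictor: the filter predicts the current sample from a delayed copy of the input, so the correlated narrowband component survives while broadband noise, which decorrelates over the delay, is rejected. The delay, tap count, step size, and signal model below are illustrative assumptions.

```python
import numpy as np

def line_enhancer(x, delay=16, taps=32, mu=0.005):
    """LMS adaptive line enhancer: predict the narrowband component of x
    from a delayed copy; broadband noise decorrelates over the delay."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for i in range(delay + taps, len(x)):
        u = x[i - delay - taps + 1 : i - delay + 1][::-1]  # delayed tap vector
        y[i] = w @ u
        e = x[i] - y[i]           # prediction error drives adaptation
        w += 2 * mu * e * u
    return y

rng = np.random.default_rng(0)
n = 8000
t = np.arange(n)
s = np.sin(2 * np.pi * 0.05 * t)            # target narrowband signal
x = s + 0.5 * rng.standard_normal(n)        # buried in broadband noise
y = line_enhancer(x)
tail = slice(n // 2, n)                     # region after convergence
print(np.var(x[tail] - s[tail]), np.var(y[tail] - s[tail]))
```

The second printed variance (enhancer output error) should be well below the first (raw input error), which is the noise-reduction effect the abstract quantifies in dB.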
Binnendijk Erika
2012-10-01
Abstract Background Most healthcare spending in developing countries is private out-of-pocket. One explanation for the low penetration of health insurance is that poorer individuals doubt their ability to enforce insurance contracts. Community-based health insurance (CBHI) schemes are a solution, but launching CBHI requires obtaining accurate local data on morbidity, healthcare utilization and other details to inform package design and pricing. We developed the “Illness Mapping” (IM) method for data collection (faster and cheaper than household surveys). Methods IM is a modification of two non-interactive consensus group methods (Delphi and Nominal Group Technique) to operate as an interactive method. We elicited estimates from “experts” in the target community on morbidity and healthcare utilization. Interaction between facilitator and experts became essential to bridge literacy constraints and to reach consensus. The study was conducted in Gaya District, Bihar (India) during April-June 2010. The intervention included the IM and a household survey (HHS). IM included 18 women’s and 17 men’s groups. The HHS was conducted in 50 villages with 1,000 randomly selected households (6,656 individuals). Results We found good agreement between the two methods on the overall prevalence of illness (IM: 25.9% ± 3.6; HHS: 31.4%) and on the prevalence of acute (IM: 76.9%; HHS: 69.2%) and chronic illnesses (IM: 20.1%; HHS: 16.6%). We also found good agreement on the incidence of deliveries (IM: 3.9% ± 0.4; HHS: 3.9%) and on hospital deliveries (IM: 61.0% ± 5.4; HHS: 51.4%). For hospitalizations, we obtained a lower estimate from the IM (1.1%) than from the HHS (2.6%). The IM required less time and less person-power than a household survey, which translates into reduced costs. Conclusions We have shown that our Illness Mapping method can be carried out at lower financial and human cost for sourcing essential local data, at acceptably accurate levels. In view of the good fit of
Order statistics and inference: estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Tunnel Cost-Estimating Methods.
1981-10-01
[OCR residue from the scanned report; recoverable details: "Tunnel Cost-Estimating Methods," Technical Report GL-81-10 by R. D. Bennett, U.S. Army Engineer Waterways Experiment Station, Vicksburg, October 1981. The listing fragment describes a LINING routine that calculates the lining costs and the formwork cost for a tunnel or shaft segment.]
Cisternas, Miriam G.; Murphy, Louise; Sacks, Jeffrey J.; Solomon, Daniel H.; Pasta, David J.; Helmick, Charles G.
2015-01-01
Objective Provide a contemporary estimate of osteoarthritis (OA) by comparing the accuracy and prevalence of alternative definitions of OA. Methods The Medical Expenditure Panel Survey (MEPS) household component (HC) records respondent-reported medical conditions as open-ended responses; professional coders translate these responses into ICD-9-CM codes for the medical conditions files. Using these codes and other data from the MEPS-HC medical conditions files, we constructed three case definitions of OA and assessed them against medical provider diagnoses of ICD-9-CM 715 [osteoarthrosis and allied disorders] in a MEPS subsample. The three definitions were: 1) strict = ICD-9-CM 715; 2) expanded = ICD-9-CM 715, 716 [other and unspecified arthropathies], or 719 [other and unspecified disorders of joint]; and 3) probable = strict, or expanded plus respondent-reported prior diagnosis of OA or other arthritis excluding rheumatoid arthritis (RA). Results Sensitivity and specificity of the three definitions were: strict – 34.6% and 97.5%; expanded – 73.8% and 90.5%; and probable – 62.9% and 93.5%. Conclusion The strict definition of OA (ICD-9-CM 715) excludes many individuals with OA. The probable definition has the optimal combination of sensitivity and specificity relative to the two other MEPS-based definitions and yields a national annual estimate of 30.8 million adults with OA (13.4% of the US adult population) for 2008–2011. PMID:26315529
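The sensitivity and specificity figures above come directly from the 2×2 confusion counts of each case definition against the provider-diagnosis gold standard. A sketch with hypothetical counts (not the MEPS data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical subsample: 200 provider-diagnosed cases, 800 non-cases
sens, spec = sens_spec(tp=148, fn=52, tn=724, fp=76)
print(round(sens, 3), round(spec, 3))  # 0.74 0.905
```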
Fries, K. J.; Kerkez, B.; Gronewold, A.; Lenters, J. D.
2014-12-01
We introduce a novel energy balance method to estimate evaporation across large lakes using real-time data from moored buoys and mobile, satellite-tracked drifters. Our work is motivated by the need to improve our understanding of the water balance of the Laurentian Great Lakes basin, a complex hydrologic system that comprises 90% of the United States' and 20% of the world's fresh surface water. Recently, the lakes experienced record-setting water level drops despite above-average precipitation, and given that lake surface area comprises nearly one third of the entire basin, evaporation is suspected to be the primary driver behind the decrease in water levels. There has historically been a need to measure evaporation over the Great Lakes, and recent hydrological phenomena (including not only record low levels, but also extreme changes in ice cover and surface water temperatures) underscore the urgency of addressing that need. Our method tracks the energy fluxes of the lake system, namely net radiation, heat storage and advection, and the Bowen ratio. By measuring each of these energy budget terms and combining the results with mass-transfer-based estimates, we can calculate real-time evaporation rates on sub-hourly timescales. To mitigate the cost-prohibitive nature of large-scale, distributed energy flux measurements, we present a novel approach that leverages existing investments in seasonal buoys (which, while providing intensive, high-quality data, are costly and sparsely distributed across the surface of the Great Lakes) and integrates data from less costly satellite-tracked drifters. The result is an unprecedented, hierarchical sensor and modeling architecture that can be used to derive estimates of evaporation in real time through cloud-based computing. We discuss recent deployments of sensor-equipped buoys and drifters, which are beginning to provide us with some of the first in situ measurements of overlake evaporation from Earth's largest lake
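In its simplest form, the energy-balance bookkeeping above reduces to Bowen-ratio partitioning of available energy: the Bowen ratio β splits net radiation minus storage into sensible and latent heat, and the latent heat flux converts to an evaporation rate. All numbers below are illustrative assumptions, not buoy data.

```python
# Bowen-ratio energy-balance sketch: partition available energy into
# sensible and latent heat, then convert latent heat flux to evaporation.
GAMMA = 0.066    # psychrometric constant (kPa/degC), near sea level
LV = 2.45e6      # latent heat of vaporization (J/kg)
RHO_W = 1000.0   # density of water (kg/m^3)

def evaporation_mm_per_day(rn, storage, dT, de):
    """rn: net radiation (W/m^2); storage: heat storage/advection (W/m^2);
    dT, de: water-air temperature (degC) and vapour-pressure (kPa) gradients."""
    bowen = GAMMA * dT / de                 # ratio of sensible to latent heat
    le = (rn - storage) / (1.0 + bowen)     # latent heat flux (W/m^2)
    return le / (LV * RHO_W) * 1000.0 * 86400.0   # -> mm/day

e = evaporation_mm_per_day(rn=150.0, storage=30.0, dT=2.0, de=0.8)
print(round(e, 2))
```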
Abdollah BORHANIFAR
2013-01-01
In this study the fractional Poisson equation is scrutinized through finite differences using the shifted Grünwald estimate. A novel numerical method is proposed. The existence and uniqueness of the solution of the fractional Poisson equation are proved. Exact and numerical solutions are constructed and compared. The numerical results show the efficiency of the proposed method.
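The shifted Grünwald estimate rests on the Grünwald-Letnikov weights g_k = (-1)^k C(α, k), which obey the recursion g_0 = 1, g_k = g_{k-1}(k - 1 - α)/k; for integer α they reduce to the classical finite-difference stencils. A sketch (the recursion is standard; the printed cases are only a sanity check):

```python
def grunwald_weights(alpha, n):
    """First n Grunwald-Letnikov weights g_k = (-1)^k * C(alpha, k),
    via the recursion g_k = g_{k-1} * (k - 1 - alpha) / k."""
    g = [1.0]
    for k in range(1, n):
        g.append(g[-1] * (k - 1 - alpha) / k)
    return g

w1 = grunwald_weights(1.0, 4)   # reduces to the first-difference stencil
w2 = grunwald_weights(2.0, 4)   # reduces to the second-difference stencil
wf = grunwald_weights(1.5, 4)   # genuinely fractional weights
print(w1, w2, wf)
```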
Azarm, M.A.; Hsu, F.; Martinez-Guridi, G. [Brookhaven National Lab., Upton, NY (United States); Vesely, W.E. [Science Applications International Corp., Dublin, OH (United States)
1993-07-01
This report introduces a new perspective on the basic concept of dependent failures, where the definition of dependency is based on clustering in the failure times of similar components. This perspective has two significant implications: firstly, it relaxes the conventional assumption that dependent failures must be simultaneous and result from a severe shock; secondly, it allows the analyst to use all the failures in a time continuum to estimate the potential for multiple failures in a window of time (e.g., a test interval), therefore arriving at a more accurate value for system unavailability. In addition, the models developed here provide a method for plant-specific analysis of dependency, reflecting the plant-specific maintenance practices that reduce or increase the contribution of dependent failures to system unavailability. The proposed methodology can be used for screening analysis of failure data to estimate the fraction of dependent failures among the failures. In addition, the proposed method can evaluate the impact of the observed dependency on system unavailability and plant risk. The formulations derived in this report have undergone various levels of validation through computer simulation studies and pilot applications. The pilot applications of these methodologies showed that the contribution of dependent failures of diesel generators in one plant was negligible, while in another plant it was quite significant. It also showed that in the plant with a significant contribution of dependency to Emergency Power System (EPS) unavailability, the contribution changed with time. Similar findings were reported for the Containment Fan Cooler breakers. Drawing such conclusions about system performance would not have been possible with any other reported dependency methodologies.
Zhang, Dong; Zhang, Xiaolei; Yuan, Jianzheng; Ke, Rui; Yang, Yan; Hu, Ying
2016-01-01
The Laplace-Fourier domain full waveform inversion can simultaneously restore both the long-wavelength and the intermediate-to-short-wavelength information of velocity models because of its unique characteristic of complex frequencies. This approach solves the problem of conventional frequency-domain waveform inversion, in which the inversion result depends excessively on the initial model due to the lack of low-frequency information in seismic data. Nevertheless, Laplace-Fourier domain waveform inversion requires substantial computational resources and long computation times because the inversion must be implemented on different combinations of multiple damping constants and multiple frequencies, namely the complex frequencies, which are much more numerous than the Fourier frequencies. However, if the entire target model is computed on every complex frequency (as in conventional frequency-domain inversion), excessively redundant computation occurs. In Laplace-Fourier domain waveform inversion, the maximum depth penetrated by the seismic wave decreases greatly because of the exponential damping applied to the seismic record, especially with larger damping constants. Thus, the depth of the area effectively inverted at a complex frequency tends to be much less than the model depth. In this paper, we propose a method for quantitative estimation of the effective inversion depth in the Laplace-Fourier domain inversion based on the principles of seismic wave propagation and mathematical analysis. According to the estimated effective inversion depth, we can invert and update only the model area above the effective depth for every complex frequency without loss of accuracy in the final inversion result. Thus, redundant computation is eliminated, and the efficiency of the Laplace-Fourier domain waveform inversion is improved. The proposed method was tested in numerical experiments. The experimental results show that
2008-01-01
Two residual-based a posteriori error estimators of the nonconforming Crouzeix-Raviart element are derived for elliptic problems with Dirac delta source terms. One estimator is shown to be reliable and efficient, which yields global upper and lower bounds for the error in the piecewise W1,p seminorm. The other one is proved to give a global upper bound of the error in the Lp-norm. By taking the two estimators as refinement indicators, adaptive algorithms are suggested, which are experimentally shown to attain optimal convergence orders.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and the regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area, with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized least-squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated for multicollinearity with other basin characteristics (variance inflation factor). The equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization. The applicability and accuracy of the regional regression equations depend on the basin characteristics measured for an ungaged location on a stream being within the range of those used to develop the equations.
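The multicollinearity screening mentioned above uses the variance inflation factor (VIF): regress each candidate basin characteristic on the others and compute VIF_j = 1/(1 - R_j²). A sketch on synthetic data (illustrative columns, not the study's basin characteristics):

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X: regress column j on
    the remaining columns (plus intercept) and return 1 / (1 - R^2)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid @ resid / np.sum((y - y.mean())**2)
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal(200)
b = rng.standard_normal(200)             # independent of a -> VIF near 1
c = a + 0.1 * rng.standard_normal(200)   # nearly collinear with a -> large VIF
v = vif(np.column_stack([a, b, c]))
print(v)
```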
Freeman, Vincent L; Boylan, Emma E; Pugach, Oksana; Mclafferty, Sara L; Tossas-Milligan, Katherine Y; Watson, Karriem S; Winn, Robert A
2017-08-20
To address locally relevant cancer-related health issues, health departments frequently need data beyond those contained in standard census-area-based statistics. We describe a geographic information system-based method for calculating age-standardized cancer incidence rates in non-census-defined geographical areas using publicly available data. Aggregated records of cancer cases diagnosed from 2009 through 2013 in each of Chicago's 77 census-defined community areas were obtained from the Illinois State Cancer Registry. Areal interpolation through dasymetric mapping of census blocks was used to redistribute populations and case counts from community areas to Chicago's 50 politically defined aldermanic wards, and ward-level age-standardized 5-year cumulative incidence rates were calculated. Potential errors in redistributing populations between geographies were limited (r = 0.99, mean difference = -4 persons). A map overlay of safety-net primary care clinic locations and ward-level incidence rates for advanced-stage cancers revealed potential pathways for prevention. Areal interpolation through dasymetric mapping can estimate cancer rates in non-census-defined geographies. This can address gaps in local cancer-related health data, inform health resource advocacy, and guide community-centered cancer prevention and control.
Dong, Guangzhong; Chen, Zonghai; Wei, Jingwen; Zhang, Chenbin; Wang, Peng
2016-01-01
The state-of-energy of lithium-ion batteries is an important evaluation index for energy storage systems in electric vehicles and smart grids. To improve the battery state-of-energy estimation accuracy and reliability, an online model-based estimation approach is proposed against uncertain dynamic load currents and environment temperatures. Firstly, a three-dimensional response surface open-circuit-voltage model is built up to improve the battery state-of-energy estimation accuracy, taking various temperatures into account. Secondly, a total-available-energy-capacity model that involves temperatures and discharge rates is reconstructed to improve the accuracy of the battery model. A dual-filter algorithm based on an extended Kalman filter and a particle filter is then developed to establish an online model-based estimator for the battery state-of-energy. The extended Kalman filter is employed to update parameters of the battery model using real-time battery current and voltage at each sampling interval, while the particle filter is applied to estimate the battery state-of-energy. Finally, the proposed approach is verified by experiments conducted on a LiFePO4 lithium-ion battery under different operating currents and temperatures. Experimental results indicate that the battery model simulates battery dynamics robustly with high accuracy, and the estimates of the dual filters converge to the real state-of-energy within an error of ±4%.
Loefgren, Martin (Kemakta Konsult AB, Stockholm (Sweden)); Vecernik, Petr; Havlova, Vaclava (Waste Disposal Dept., Nuclear Research Institute Rez plc. (Czech Republic))
2009-11-15
factors and generic surface conductivities, and fairly good agreement was obtained. Part 1 suffered from methodology problems, which ultimately led to poor reproducibility and accuracy. Here a single sample was in sequence saturated with the 0.001, 0.03, 0.5, 0.1 and 1.0 M NaCl electrolytes. The aim was to see whether the apparent formation factor increasingly overestimates the formation factor with decreasing electrical conductivity of the pore water. Notwithstanding the experimental problems and errors, it was shown that this is clearly the case. For the 0.001 M NaCl electrolyte, and for this particular sample, the apparent formation factor overestimates the formation factor by at least one order of magnitude. The measured apparent formation factors were compared with modelled apparent formation factors, where the input data were the sample's measured formation factor and surface conductivity, and fairly good agreement was obtained. The formation factors obtained by the TEM method were comparable with those obtained in the previous through-diffusion experiments on the same samples. Especially for the Forsmark samples of part 2, the TEM results agreed with the through-diffusion results, indicating that anion exclusion is not a major issue. From a comparison of the TEM formation factors, obtained with the anionic tracer iodide, and estimated formation factors based on the resistivity methods, it is indicated that anion exclusion should not reduce the effective diffusivity by more than a factor of a few
Parameter estimation methods for chaotic intercellular networks.
Mariño, Inés P; Ullner, Ekkehard; Zaikin, Alexey
2013-01-01
We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
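The simplest member of the ABC family used above is rejection ABC: draw candidate parameters from a prior, simulate the system, and keep candidates whose simulated trajectory lies within a tolerance of the observation; averaging the accepted values gives the estimate. A sketch on a single logistic map as a stand-in for the coupled-repressilator network (prior range, tolerance, and trajectory length are all assumptions):

```python
import numpy as np

def simulate(r, x0=0.2, n=6):
    """Short logistic-map trajectory x_{k+1} = r * x_k * (1 - x_k)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = r * x[k] * (1.0 - x[k])
    return x

rng = np.random.default_rng(0)
r_true = 3.8
observed = simulate(r_true)

accepted = []
for _ in range(20_000):                     # rejection ABC loop
    r = rng.uniform(3.5, 4.0)               # draw from the prior
    if np.max(np.abs(simulate(r) - observed)) < 0.01:   # tolerance test
        accepted.append(r)

r_hat = float(np.mean(accepted))            # average over accepted values
print(len(accepted), round(r_hat, 3))
```

Because the short trajectory depends sensitively on r, only candidates close to the true parameter survive the tolerance test, and their average lands near r_true.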
T.A. Dyachenko
2012-12-01
The concepts of the market competitive environment and the strength of an enterprise's competitive position, and the connection between these terms, have been defined. The subjects and factors that affect them have been identified. A methodical approach to their estimation has been developed.
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan
2017-07-01
We estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV’s ability to meet estimated city electricity consumption varies widely—from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city’s estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with core MAD software. This presentation gives an example of adapting the MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed and it is expected that the open source nature of the project will engender the development of additional model drivers by 3rd party scientists.
Zeeshan Ali Siddiqui
2016-01-01
Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. Just as hardware systems are presently constructed from kits of parts, software systems may also be assembled from components, and it is more reliable to reuse software than to create it anew. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; some components are of critical usage, captured by the usage frequency of the component. The usage frequency decides the weight of each component, and according to their weights, the components contribute to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps us find the most suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and dimensionless measurement realizes the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
JinKui Wu; ShiWei Liu; LePing Ma; Jia Qin; JiaXin Zhou; Hong Wei
2016-01-01
The accuracy of spatial interpolation of precipitation data is determined by the actual spatial variability of the precipitation, the interpolation method, and the distribution of observatories, the selection of which is particularly important. In this paper, three spatial sampling programs, including spatial random sampling, spatial stratified sampling, and spatial sandwich sampling, are used to analyze the data from meteorological stations of northwestern China. We compared the accuracy of ordinary Kriging interpolation methods on the basis of the sampling results. The error values of the regional annual precipitation interpolation based on spatial sandwich sampling, including ME (0.1513), RMSE (95.91), ASE (101.84), MSE (−0.0036), and RMSSE (1.0397), were optimal under the premise of abundant prior knowledge. The result of spatial stratified sampling was poor, and spatial random sampling was even worse. Spatial sandwich sampling was the best sampling method, which minimized the error of regional precipitation estimation. It had a higher degree of accuracy compared with the other two methods and a wider scope of application.
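The error statistics reported above (ME, RMSE, MSE) can be reproduced from cross-validation residuals in a few lines. A minimal sketch in Python; the observed/estimated values are invented for illustration, not the station data from the paper:

```python
import math

def interpolation_errors(observed, estimated):
    """Mean error (bias) and root-mean-square error of interpolation
    estimates against held-out observations."""
    residuals = [e - o for o, e in zip(observed, estimated)]
    n = len(residuals)
    me = sum(residuals) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    return me, rmse

# Hypothetical annual precipitation (mm): observed vs. cross-validated estimates
obs = [120.0, 95.0, 143.0, 80.0]
est = [118.0, 99.0, 140.0, 83.0]
me, rmse = interpolation_errors(obs, est)
```

An ME near zero combined with a small RMSE, as reported for the sandwich-sampling design above, indicates low bias and low scatter.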
Chang, Koan-Yuh [Department of Electronic Engineering, Chienkuo Technology University, No. 1, Chiehshou N. Rd., Changhua City 500 (China); Lin, Huan-Jung [Department of Aeronautical Engineering, National Formosa University, No. 64, Wen-Hwa Rd., Huwei Jen, Yunlin 632 (China); Chen, Pang-Chia [Department of Electrical Engineering, Kao Yuan University, No. 1821, Jhong-Shan Rd., Lu-Jhu Township, Kaohsiung 821 (China)
2009-02-15
In this paper, a new approach to estimate the optimal performance of an unknown proton exchange membrane fuel cell (PEMFC) has been proposed. This proposed approach combines the Taguchi method and the numerical PEMFC model. Simulation results obtained using the Taguchi method help to determine the value of control factors that represent the tested unknown PEMFC. The objective of reducing both fuel consumption and operation cost can be achieved by determining the parameters for the unknown PEMFC. In addition, the optimal operation power for the tested unknown PEMFC can also be predicted. Experimental results on the test equipment show that the proposed approach is effective in optimal performance estimation for the tested unknown PEMFC, thus demonstrating the success achieved by combining the Taguchi method and the numerical PEMFC model. (author)
Yu.S. Shypulina
2013-03-01
Full Text Available The aim of the article. The aim of the article is to study the role of the innovative culture of society and of the organization in the formation of a favorable environment for an active transition to an innovative way of development, and to develop methodological principles for a multifactor analysis of the innovative culture of society and of the organization in their logical interrelation. The results of the analysis. The author has defined an extended set of factors that influence the realization of the main functions of society's innovative culture (innovation, selection, translation), which the author considers one of the macroenvironment factors that create favorable conditions for innovative development. Methodical principles for the quantitative evaluation of society's innovative culture, including evaluation of its individual functions, are developed. On this basis, the current state of innovative culture in Ukraine is estimated, and its low level is indicated. It is proved that the innovative culture of an organization (company or institution) is part of its innovative development potential and of its intellectual capital, which confirms the major role of innovative culture in forming an innovation-friendly management environment. The relationships between the structural elements of innovative culture, intellectual capital and innovation potential are defined. A criteria base and a methodical approach to the quantitative multifactor assessment of organizational innovative culture, including assessment of its individual components, are formed. The methodical approach is tested on the assessment of the innovative culture of innovative SMEs in Sumy region. Conclusions and directions of further researches. The results, conclusions and recommendations of the article allow concluding that the author developed a theoretical, methodological and methodical basis for the analysis of society's innovative culture. The author sees it as part of the macroenvironment and of the internal innovative culture of the organization, and also as part of its
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, possibly with a strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors from several insect species were used, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, that window size at which the absolute values of these errors are equal and opposite, thus cancelling each other, allowing minimally biased measurement of neural coding.
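The Shannon information rate mentioned above is the integral of log2(1 + SNR(f)) over frequency; a hedged sketch as a Riemann sum over uniform bins, with illustrative SNR values rather than photoreceptor data:

```python
import math

def shannon_rate(snr_spectrum, df):
    """Shannon information rate in bits/s from a sampled SNR spectrum,
    approximating the integral of log2(1 + SNR(f)) with bins of width df (Hz)."""
    return df * sum(math.log2(1.0 + s) for s in snr_spectrum)

# Four 10-Hz bins with SNR = 1 each: 4 bins * 10 Hz * 1 bit = 40 bits/s
rate = shannon_rate([1.0, 1.0, 1.0, 1.0], df=10.0)
```

Because the SNR spectrum is itself built from power spectral estimates, the window-size trade-off described above propagates directly into this rate.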
WU Jing-min; ZUO Hong-fu; CHEN Yong
2005-01-01
A particle swarm optimization (PSO) algorithm improved by an immunity algorithm (IA) is presented. Memory and self-regulation mechanisms of the IA are used to keep PSO from plunging into local optima, while vaccination and immune selection mechanisms are used to prevent oscillation during the evolutionary process. The algorithm is introduced through an application to direct maintenance cost (DMC) estimation of aircraft components. Experimental results show that the algorithm is simple to compute and runs quickly. It resolves the combinatorial optimization problem of component DMC estimation with simple and readily available parameters, achieves higher accuracy than individual methods such as PLS, BP and v-SVM, and also performs better than other combined methods such as basic PSO with a BP neural network.
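For orientation, a plain PSO kernel is sketched below, without the immune-memory, vaccination, or immune-selection extensions the paper adds; the constants and the test function are illustrative choices, not the paper's setup:

```python
import random

def pso(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimize f over a box with canonical inertia-weight PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for a DMC cost model
sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=2)
```

The IA extensions in the paper would act between iterations, e.g. by re-injecting remembered good particles and filtering new ones by affinity.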
Ying-Chih Lai
2016-05-01
Full Text Available The demand for pedestrian navigation has increased along with the rapid progress in mobile and wearable devices. This study develops an accurate and usable Step Length Estimation (SLE) method for a Pedestrian Dead Reckoning (PDR) system with features including a wide range of step lengths, a self-contained system, and real-time computing, based on multi-sensor fusion and Fuzzy Logic (FL) algorithms. The wide-range SLE developed in this study was achieved by using a knowledge-based method to model the walking patterns of the user. The input variables of the FL are step strength and frequency, and the output is the estimated step length. Moreover, a waist-mounted sensor module has been developed using low-cost inertial sensors. Since low-cost sensors suffer from various errors, a calibration procedure has been utilized to improve accuracy. The proposed PDR scheme demonstrates its ability to be implemented on waist-mounted devices in real time and is suitable for the indoor and outdoor environments considered in this study without the need for map information or any pre-installed infrastructure. The experimental results show that the maximum distance error was within 1.2% of 116.51 m in an indoor environment and 1.78% of 385.2 m in an outdoor environment.
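The abstract does not give the fuzzy rule base itself; a widely used non-fuzzy baseline for waist-mounted step length estimation is the Weinberg approximation, sketched here with an assumed gain K (this is a comparison point, not the paper's FL method):

```python
def weinberg_step_length(acc_window, k=0.5):
    """Weinberg approximation: step length ~ K * (a_max - a_min)^(1/4),
    where acc_window holds vertical acceleration samples for one step
    and K is a per-user calibration gain (assumed value here)."""
    return k * (max(acc_window) - min(acc_window)) ** 0.25

# One step's worth of hypothetical vertical acceleration samples, m/s^2
sl = weinberg_step_length([9.0, 11.0, 10.0], k=0.5)
```

An FL scheme like the paper's replaces the fixed power law with rules over step strength and frequency, which is what widens the usable step-length range.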
Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong
2016-07-01
Since the main power source of a hybrid electric vehicle (HEV) is the power battery, the predicted performance of the power battery, especially state-of-charge (SOC) estimation, has attracted great attention in the HEV field. However, SOC estimates are often not sufficiently precise, so the running performance of the HEV is adversely affected. A variable structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under a fixed driving condition, is presented. First, the general lower-order battery equivalent circuit model (GLM), which includes a coulomb accumulation model, an open-circuit voltage model and the SOC output model, is established, and the off-line and online model parameters are calculated with hybrid pulse power characterization (HPPC) test data. Next, a VSEKF estimation method for SOC, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, determined according to the different values of open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, faster convergence and more accurate simulation results are obtained with the VSEKF method. The error rate of SOC estimation with the VSEKF method lies in the range of 5% to 10%, compared with 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for the lithium-ion battery cell and pack obtained with the VSEKF method is significantly improved compared with the Ah integration method and the EKF method, and the VSEKF method can be widely used for SOC estimation in lithium-ion packs of HEVs under practical driving conditions.
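The ampere-hour integration component that the VSEKF blends with the EKF is plain coulomb counting; a minimal sketch (the sign convention, efficiency factor, and numbers are assumptions, not taken from the paper):

```python
def soc_ah_integration(soc0, currents_a, dt_s, capacity_ah, efficiency=1.0):
    """Coulomb counting: SOC decreases by I*dt/(3600*C) per step
    (discharge current taken as positive)."""
    soc, history = soc0, []
    for i in currents_a:
        soc -= efficiency * i * dt_s / 3600.0 / capacity_ah
        soc = min(max(soc, 0.0), 1.0)   # clamp to physical range
        history.append(soc)
    return history

# 10 A constant discharge for one hour empties a 10 Ah cell from full
hist = soc_ah_integration(1.0, [10.0] * 3600, dt_s=1.0, capacity_ah=10.0)
```

Coulomb counting drifts with current-sensor bias and an unknown initial SOC, which is why the paper re-anchors it with an EKF driven by open-circuit voltage.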
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization
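The multicollinearity screen above (variance inflation factor < 2.5) reduces, for a two-predictor model, to VIF = 1/(1 - r^2), where r is the Pearson correlation between the two candidate basin characteristics; a sketch with made-up values:

```python
import math

def vif_two_predictors(x1, x2):
    """Variance inflation factor for either predictor in a two-predictor
    model: 1 / (1 - r^2), r being the Pearson correlation of x1 and x2."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    v1 = sum((a - m1) ** 2 for a in x1)
    v2 = sum((b - m2) ** 2 for b in x2)
    r = cov / math.sqrt(v1 * v2)
    return 1.0 / (1.0 - r * r)

# Two nearly collinear hypothetical basin characteristics
vif = vif_two_predictors([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 5.0])
```

A value this far above the 2.5 threshold would exclude one of the pair from a candidate regression model.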
Engelsma, K A; Veerkamp, R F; Calus, M P L; Bijma, P; Windig, J J
2012-06-01
Genetic diversity is often evaluated using pedigree information. Currently, diversity can be evaluated in more detail over the genome based on large numbers of SNP markers. Pedigree- and SNP-based diversity were compared for two small related groups of Holstein animals genotyped with the 50 k SNP chip, genome-wide, per chromosome and for part of the genome examined. Diversity was estimated with coefficient of kinship (pedigree) and expected heterozygosity (SNP). SNP-based diversity at chromosome regions was determined using 5-Mb sliding windows, and significance of difference between groups was determined by bootstrapping. Both pedigree- and SNP-based diversity indicated more diversity in one of the groups; 26 of the 30 chromosomes showed significantly more diversity for the same group, as did 25.9% of the chromosome regions. Even in small populations that are genetically close, differences in diversity can be detected. Pedigree- and SNP-based diversity give comparable differences, but SNP-based diversity shows on which chromosome regions these differences are based. For maintaining diversity in a gene bank, SNP-based diversity gives a more detailed picture than pedigree-based diversity. © 2012 Blackwell Verlag GmbH.
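Expected heterozygosity, the SNP-based diversity measure used above, is 2p(1 - p) per SNP for alt-allele frequency p, averaged over the SNPs in a window; a sketch on toy diploid genotype counts:

```python
def expected_heterozygosity(genotypes):
    """Mean expected heterozygosity over SNPs. Each entry holds per-animal
    counts (0, 1 or 2) of the alternative allele at one SNP."""
    he = []
    for snp in genotypes:
        p = sum(snp) / (2.0 * len(snp))   # alt-allele frequency (diploid)
        he.append(2.0 * p * (1.0 - p))
    return sum(he) / len(he)

# Two toy SNPs typed in four animals: one at p = 0.5, one monomorphic
he = expected_heterozygosity([[0, 1, 2, 1], [0, 0, 0, 0]])
```

Applied per 5-Mb window, this is the quantity whose between-group differences the paper tests by bootstrapping.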
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
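Two of the three semivariogram models used above have standard closed forms (the truncated fractal model is omitted here); `a` is the range and `sill` the variance plateau, with generic parameter names:

```python
import math

def spherical(h, a, sill):
    """Spherical semivariogram: reaches the sill exactly at range a."""
    if h >= a:
        return sill
    x = h / a
    return sill * (1.5 * x - 0.5 * x ** 3)

def exponential(h, a, sill):
    """Exponential semivariogram with practical range a (95% of sill at h = a)."""
    return sill * (1.0 - math.exp(-3.0 * h / a))
```

In the charts the paper describes, these model curves link the areal-to-vertical variance ratio to the interwell autocorrelation range.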
System and method for traffic signal timing estimation
Dumazert, Julien
2015-12-30
A method and system for estimating traffic signal timing. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on the transition times of probe vehicles starting after a traffic signal turns green.
Parameter estimation methods for chaotic intercellular networks.
Inés P Mariño
Full Text Available We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
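The ABC schemes above (the MCMC and SMC variants) share the basic accept-if-close idea, which the simplest rejection form illustrates; the toy model here is a Gaussian mean, not the repressilator network:

```python
import random
import statistics

def abc_rejection(observed_stat, simulate, prior_sample, eps, n_accept, seed=0):
    """Simplest ABC: keep prior draws whose simulated summary statistic
    lands within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) < eps:
            accepted.append(theta)
    return accepted

# Toy model (not the repressilator network): the summary statistic is the
# mean of 50 draws from N(theta, 1); the true parameter is 2.0
simulate = lambda theta, rng: statistics.mean(rng.gauss(theta, 1.0) for _ in range(50))
prior = lambda rng: rng.uniform(0.0, 4.0)
posterior = abc_rejection(2.0, simulate, prior, eps=0.2, n_accept=200)
estimate = statistics.mean(posterior)   # posterior-mean estimate, close to 2.0
```

In the paper's setting, `simulate` integrates the coupled chaotic system and the summary statistic is a synchronization error, with the MCMC and SMC variants replacing blind rejection.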
Chen, Tianwen; Ryali, Srikanth; Qin, Shaozheng; Menon, Vinod
2013-11-15
Intrinsic functional connectivity analysis using resting-state functional magnetic resonance imaging (rsfMRI) has become a powerful tool for examining brain functional organization. Global artifacts such as physiological noise pose a significant problem in estimation of intrinsic functional connectivity. Here we develop and test a novel random subspace method for functional connectivity (RSMFC) that effectively removes global artifacts in rsfMRI data. RSMFC estimates the partial correlation between a seed region and each target brain voxel using multiple subsets of voxels sampled randomly across the whole brain. We evaluated RSMFC on both simulated and experimental rsfMRI data and compared its performance with standard methods that rely on global mean regression (GSReg) which are widely used to remove global artifacts. Using extensive simulations we demonstrate that RSMFC is effective in removing global artifacts in rsfMRI data. Critically, using a novel simulated dataset we demonstrate that, unlike GSReg, RSMFC does not artificially introduce anti-correlations between inherently uncorrelated networks, a result of paramount importance for reliably estimating functional connectivity. Furthermore, we show that the overall sensitivity, specificity and accuracy of RSMFC are superior to GSReg. Analysis of posterior cingulate cortex connectivity in experimental rsfMRI data from 22 healthy adults revealed strong functional connectivity in the default mode network, including more reliable identification of connectivity with left and right medial temporal lobe regions that were missed by GSReg. Notably, compared to GSReg, negative correlations with lateral fronto-parietal regions were significantly weaker in RSMFC. Our results suggest that RSMFC is an effective method for minimizing the effects of global artifacts and artificial negative correlations, while accurately recovering intrinsic functional brain networks.
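The core quantity in RSMFC is a partial correlation: correlate seed and target after regressing out other voxels' signals. For a single confound this reduces to correlating OLS residuals, sketched here on synthetic series with a shared driver (not fMRI data):

```python
import math
import random

def residualize(y, z):
    """Residuals of y after simple OLS regression on z (with intercept)."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    beta = sum((a - mz) * (b - my) for a, b in zip(z, y)) / sum((a - mz) ** 2 for a in z)
    return [b - (my + beta * (a - mz)) for a, b in zip(z, y)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y with the linear effect of z removed."""
    return pearson(residualize(x, z), residualize(y, z))

# x and y are driven by a shared global signal z plus independent noise
rng = random.Random(0)
z = [rng.gauss(0.0, 1.0) for _ in range(500)]
x = [v + rng.gauss(0.0, 1.0) for v in z]
y = [v + rng.gauss(0.0, 1.0) for v in z]
raw = pearson(x, y)          # inflated by the shared driver
adj = partial_corr(x, y, z)  # near zero once z is regressed out
```

RSMFC conditions on many random voxel subsets instead of a single global mean, which is what avoids the artificial anti-correlations that GSReg introduces.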
Lifetime estimation methods in power transformer insulation
Mohammad Ali Taghikhani
2012-10-01
Full Text Available Mineral oil in a power transformer plays an important role in cooling, insulation aging and chemical reactions such as oxidation. Increases in oil temperature cause a loss of oil quality, so the oil should be checked regularly. Studies have been carried out on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and other characteristics of power transformer oil. In this paper, the first method for estimating the life of power transformer insulation (oil) is based on the Arrhenius law, which can quantify the loss of oil quality and estimate the remaining life. The second method studied for estimating transformer life is the prediction of paper insulation life at a temperature of 160 °C.
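The Arrhenius-law step amounts to scaling a reference insulation life by an exponential temperature factor; the activation energy and reference values below are illustrative assumptions, not figures from the paper:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_life(life_ref_h, temp_ref_c, temp_c, activation_energy_ev=1.1):
    """Remaining-life estimate at temp_c (hours), scaled from a reference
    life at temp_ref_c via the Arrhenius law."""
    t_ref = temp_ref_c + 273.15
    t = temp_c + 273.15
    return life_ref_h * math.exp(
        activation_energy_ev / BOLTZMANN_EV * (1.0 / t - 1.0 / t_ref))

# With these assumed values, a 10 degC rise above the reference
# temperature cuts the estimated life roughly in half
ratio = arrhenius_life(1000.0, 110.0, 120.0) / 1000.0
```

The exponential sensitivity to temperature is why even modest overloads dominate insulation aging in this kind of model.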
Drews, Martin; Lauritzen, Bent; Madsen, Henrik
2005-01-01
parameters, and the observables are linked to the state variables through a static measurement equation. The method is analysed for three simple state space models using experimental data obtained at a nuclear research reactor. Compared to direct measurements of the atmospheric dispersion, the Kalman filter … estimates are found to agree well with the measured parameters, provided that the radiation measurements are spread out in the cross-wind direction. For less optimal detector placement it proves difficult to distinguish variations in the source term and plume height; yet the Kalman filter yields consistent … scheme are outlined, to account for realistic accident scenarios…
Nizamuddin, Mohammad; Akhand, Kawsar; Roytman, Leonid; Kogan, Felix; Goldberg, Mitch
2015-06-01
Rice is the dominant food crop of Bangladesh, accounting for about 75 percent of agricultural land use, and Bangladesh is currently the world's fourth largest rice-producing country. Rice provides about two-thirds of the total calorie supply, about one-half of agricultural GDP, and one-sixth of the national income in Bangladesh. Aus is one of the main rice varieties in Bangladesh. Crop production, especially of rice, the main food staple, is highly susceptible to climate change and variability; any change in climate will thus increase uncertainty regarding rice production, as climate is a major cause of year-to-year variability in rice productivity. This paper shows the application of remote sensing data for estimating Aus rice yield in Bangladesh, combining official statistics of rice yield with satellite data acquired in real time from the Advanced Very High Resolution Radiometer (AVHRR) sensor; the Principal Component Regression (PCR) method was used to construct a model. The simulated result was compared with official agricultural statistics, showing that the error of estimation of Aus rice yield was less than 10%. Remote sensing, therefore, is a valuable tool for estimating crop yields well in advance of harvest, and at a low cost.
Statistical Method of Estimating Nigerian Hydrocarbon Reserves
Jeffrey O. Oseh
2015-01-01
Full Text Available Hydrocarbon reserves are basic to planning and investment decisions in the petroleum industry, so their proper estimation is of considerable importance in oil and gas production. The estimation of hydrocarbon reserves in the Niger Delta region of Nigeria has been very popular, and very successful, in the Nigerian oil and gas industry for the past 50 years. In order to fully estimate the hydrocarbon potential of the Nigerian Niger Delta region, a clear understanding of the reservoir geology and production history is needed. Reserves estimation of most fields is often performed through material balance and volumetric methods; alternatively, a simple estimation model and least squares regression may be appropriate. This model is based on extrapolation of the additional reserves due to the exploratory drilling trend and of the additional-reserve factor due to revision of existing fields. The estimation model, used alongside linear regression analysis in this study, gives improved estimates for the fields considered, and hence can be used for other Nigerian fields with recent production history.
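The least-squares trend extrapolation mentioned above can be sketched as a one-variable OLS fit to cumulative reserve estimates; the figures below are invented for illustration, not Nigerian field data:

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

# Hypothetical cumulative reserves (billion bbl) by year index
years = [0, 1, 2, 3, 4]
reserves = [30.0, 31.5, 33.0, 34.5, 36.0]   # exactly linear, for illustration
a, b = linear_fit(years, reserves)
forecast = a + b * 6   # extrapolate two years past the data
```

In practice the revision factor for existing fields would be modeled as a separate additive term rather than folded into a single trend line.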
Valdivia, Valeska
2014-01-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims. Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods. We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results. We find that the accuracy for the extinction of the tree-based method is better than 10%, while the ...
Fedorov, Andriy; Fluckiger, Jacob; Ayers, Gregory D; Li, Xia; Gupta, Sandeep N; Tempany, Clare; Mulkern, Robert; Yankeelov, Thomas E; Fennessy, Fiona M
2014-05-01
Multi-parametric Magnetic Resonance Imaging, and specifically Dynamic Contrast Enhanced (DCE) MRI, play increasingly important roles in detection and staging of prostate cancer (PCa). One of the actively investigated approaches to DCE MRI analysis involves pharmacokinetic (PK) modeling to extract quantitative parameters that may be related to microvascular properties of the tissue. It is well-known that the prescribed arterial blood plasma concentration (or Arterial Input Function, AIF) input can have significant effects on the parameters estimated by PK modeling. The purpose of our study was to investigate such effects in DCE MRI data acquired in a typical clinical PCa setting. First, we investigated how the choice of a semi-automated or fully automated image-based individualized AIF (iAIF) estimation method affects the PK parameter values; and second, we examined the use of method-specific averaged AIF (cohort-based, or cAIF) as a means to attenuate the differences between the two AIF estimation methods. Two methods for automated image-based estimation of individualized (patient-specific) AIFs, one of which was previously validated for brain and the other for breast MRI, were compared. cAIFs were constructed by averaging the iAIF curves over the individual patients for each of the two methods. Pharmacokinetic analysis using the Generalized kinetic model and each of the four AIF choices (iAIF and cAIF for each of the two image-based AIF estimation approaches) was applied to derive the volume transfer rate (K(trans)) and extravascular extracellular volume fraction (ve) in the areas of prostate tumor. Differences between the parameters obtained using iAIF and cAIF for a given method (intra-method comparison) as well as inter-method differences were quantified. The study utilized DCE MRI data collected in 17 patients with histologically confirmed PCa. Comparison at the level of the tumor region of interest (ROI) showed that the two automated methods resulted in
Contour Estimation by Array Processing Methods
Bourennane Salah
2006-01-01
Full Text Available This work is devoted to the estimation of rectilinear and distorted contours in images by high-resolution methods. In the case of rectilinear contours, it has been shown that it is possible to transpose this image processing problem to an array processing problem. The existing straight-line characterization method called subspace-based line detection (SLIDE) leads to models with orientations and offsets of straight lines as the desired parameters. Firstly, a high-resolution array processing method yields the orientation of the lines. Secondly, their offset can be estimated either by the well-known extension of the Hough transform or by another method, namely the variable speed propagation scheme, which belongs to the field of array processing applications. We associate it with the method called "modified forward-backward linear prediction" (MFBLP). The signal generation process devoted to straight-line retrieval is retained for the case of distorted contour estimation. This issue is handled for the first time thanks to an inverse problem formulation and a phase model determination. The proposed method is initialized by means of the SLIDE algorithm.
BAI Xingyu; YANG Desen; ZHAO Chunhui
2007-01-01
In order to solve the problem of DOA (Direction of Arrival) estimation of underwater distant wideband targets, a novel coherent signal-subspace method based on the cross spectral matrix of pressure and particle velocity using the Acoustic Vector Sensor Array (AVSA) is proposed in this paper. The proposed method differs from existing AVSA-based DOA estimation methods in using the particle velocity information of the Acoustic Vector Sensor (AVS) as an independent array element. It is entirely based on the combined information processing of pressure and particle velocity, namely the P-V cross spectrum, and has better DOA estimation performance than existing methods in an isotropic noise field. By theoretical analysis, both the focusing principle and the eigendecomposition theory based on the P-V cross spectral matrix are given. At the same time, the corresponding criterion for source number detection is also presented. Computer simulations with data from lake trials demonstrate that the proposed method is effective and clearly outperforms existing methods in resolution and accuracy in the case of low Signal-to-Noise Ratio (SNR).
Brix, Gunnar [Federal Office for Radiation Protection, Department of Medical and Occupational Radiation Protection, Oberschleissheim (Germany); Zwick, Stefan [German Cancer Research Center (DKFZ), Department of Medical Physics in Radiology, Heidelberg (Germany); University Hospital Freiburg, Department of Radiology, Medical Physics, Freiburg (Germany); Griebel, Juergen [Federal Office for Radiation Protection, Department of Medical and Occupational Radiation Protection, Oberschleissheim (Germany); Fink, Christian [University Medical Center Mannheim, University of Heidelberg, Institute of Clinical Radiology and Nuclear Medicine, Mannheim (Germany); Kiessling, Fabian [RWTH-Aachen University, Department of Experimental Molecular Imaging, Aachen (Germany)
2010-09-15
Tissue perfusion is frequently determined from dynamic contrast-enhanced CT or MRI image series by means of the steepest slope method. It was thus the aim of this study to systematically evaluate the reliability of this analysis method on the basis of simulated tissue curves. 9600 tissue curves were simulated for four noise levels, three sampling intervals and a wide range of physiological parameters using an axially distributed reference model and subsequently analysed by the steepest slope method. Perfusion is systematically underestimated with errors becoming larger with increasing perfusion and decreasing intravascular volume. For curves sampled after rapid contrast injection with a temporal resolution of 0.72 s, the bias was less than 23% when the mean residence time of tracer molecules in the intravascular distribution space was greater than 6 s. Increasing the sampling interval and the noise level substantially reduces the accuracy and precision of estimates, respectively. The steepest slope method allows absolute quantification of tissue perfusion in a computationally simple and numerically robust manner. The achievable degree of accuracy and precision is considered to be adequate for most clinical applications. (orig.)
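The steepest slope calculation described above reduces to dividing the maximum upslope of the tissue enhancement curve by the peak of the arterial input function. A minimal sketch in Python; the synthetic curves below are illustrative, not the simulated curves of the study:

```python
import numpy as np

def perfusion_steepest_slope(t, c_tissue, c_arterial):
    """Estimate perfusion as the maximum upslope of the tissue
    enhancement curve divided by the peak of the arterial input
    curve (the classic steepest slope / maximal slope model)."""
    slope = np.gradient(c_tissue, t)          # dC_t/dt at each sample
    return slope.max() / c_arterial.max()

# Illustrative curves: a Gaussian arterial bolus and an exponential
# tissue uptake curve, sampled at 0.1 s.
t = np.linspace(0.0, 10.0, 101)
c_a = 5.0 * np.exp(-0.5 * (t - 3.0) ** 2)     # arterial bolus, peak 5.0
c_t = 1.0 - np.exp(-0.8 * t)                  # tissue uptake curve
f = perfusion_steepest_slope(t, c_t, c_a)
```

As the abstract notes, coarser sampling flattens the measured maximum slope, which is one source of the systematic underestimation.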
Tejesh C Anandaswamy
2016-01-01
Background and Aims: Subclavian central venous catheterisation (CVC) is employed in critically ill patients requiring long-term central venous access. There is no gold standard for estimating the depth of insertion. In this study, we compared the landmark topographic method with the formula technique for estimating the depth of insertion of right subclavian CVCs. Methods: Two hundred and sixty patients admitted to the Intensive Care Unit requiring subclavian CVC were randomly assigned to either the topographic method or the formula method (130 in each group). Catheter tip position in relation to the carina was measured on a post-procedure chest X-ray. The primary endpoint was the need for catheter repositioning. The Mann–Whitney test and Chi-square test were performed for statistical analysis using SPSS for Windows version 18.0 (Armonk, NY: IBM Corp.). Results: Nearly half the catheters positioned by both methods were situated >1 cm below the carina and required repositioning. Conclusion: Neither technique was effective in estimating the approximate depth of insertion of right subclavian CVCs.
Broekroelofs, J; Stegeman, CA; Navis, GJ; de Haan, J; van der Bij, W; de Zeeuw, D; de Jong, PE
2000-01-01
Background: Progressive renal function loss during long-term follow up is common after lung transplantation and close monitoring is warranted, Since changes in creatinine generation and excretion may occur after lung transplantation, the reliability of creatinine-based methods of renal function asse
刘文卿
2011-01-01
Ridge estimation is an effective method for dealing with multicollinearity in multiple linear regression; it is a biased shrinkage estimator. Compared with the ordinary least squares estimate, the ridge estimate reduces the mean squared error of the parameter estimates but increases the residual sum of squares, so the fit deteriorates. This paper proposes an improved method, based on a universal ridge estimate, that counteracts the excess shrinkage of the ordinary ridge estimate: it improves the goodness of fit and reduces the growth of the residual sum of squares relative to the ridge estimate.
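The ordinary ridge estimate this paper starts from has the closed form beta(k) = (X'X + kI)^(-1) X'y. A sketch of that baseline estimator on nearly collinear data (the paper's improved universal ridge variant is not reproduced here; the demo data are invented):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimate: beta = (X'X + k*I)^{-1} X'y.  A positive k
    shrinks the coefficients, trading a small increase in residual
    sum of squares for a lower MSE under multicollinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Two nearly collinear predictors: the OLS solution (k = 0) is
# typically wildly unstable here, while the ridge solution is not.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + 1e-3 * rng.normal(size=100)       # almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.1, size=100)
beta_ols = ridge(X, y, 0.0)                 # for comparison
beta_ridge = ridge(X, y, 1.0)
```

Since x1 and x2 are nearly identical, the ridge solution splits the true unit coefficient roughly evenly between them, so the coefficients sum to about one.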
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The false matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibrating amplitudes of the video become increasingly large.
Research on Estimates of Xi’an City Life Garbage Pay-As-You-Throw Based on Two-part Tariff method
Yaobo, Shi; Xinxin, Zhao; Fuli, Zheng
2017-05-01
Domestic waste is a quasi-public good, so its pricing cannot be separated from the pricing principles of public economics. Based on the two-part tariff method for urban public utilities, this paper designs a pricing model to match the charging method and estimates the pay-as-you-throw standard using data from the past five years in Xi'an. Finally, the paper summarizes the main results and proposes corresponding policy recommendations.
Kamio, Koichiro; Matsushita, Ikumi; Tanaka, Goh; Ohashi, Jun; Hijikata, Minako; Nakata, Koh; Tokunaga, Katsushi; Azuma, Arata; Kudoh, Shoji; Keicho, Naoto
2004-09-01
Haplotype-based human genome research is important for identifying disease susceptibility genes efficiently. Although haplotype reconstruction by statistical methods is widely used, direct haplotype determination by molecular techniques has also been developed as a complement to statistical estimation. In this study, we demonstrate a molecular haplotyping method making use of single-strand conformation polymorphism (SSCP) gels. We identified 10 common SNPs and a dinucleotide insertion/deletion polymorphism within the 2-kb region upstream of the transcription initiation site of MUC5B and determined the haplotype structure, dividing the region into two DNA fragments. Real haplotypes were determined unambiguously by our SSCP-based analysis with fragments longer than 1 kb. Haplotypes reconstructed from diploid genotypes in the same region by statistical methods, including the EM algorithm, were also evaluated. Direct comparison between statistical estimation and direct determination of haplotypes revealed that major haplotypes containing multiple marker sites in strong LD are estimated with great accuracy, but that the variety of haplotypes reflecting weak LD is not reconstructed precisely enough. Our data can be helpful in implementing molecular haplotyping or statistical estimation, since the choice between these methods may depend on the haplotype structure.
A Modified Extended Bayesian Method for Parameter Estimation
Anonymous
2007-01-01
This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the a priori estimation is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be automatically updated during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, this new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
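The idea of taking the a priori mean from the previous iterate can be sketched for a linear model, where each step becomes a Tikhonov-regularized solve centred on the last estimate. The demo data and regularization weight are illustrative assumptions, not the paper's examples:

```python
import numpy as np

def extended_bayes_linear(X, y, lam, n_iter=20):
    """Sketch of the iterated scheme for a linear model: at each step
    the a priori mean is the previous estimate, i.e. we minimise
    ||y - X t||^2 + lam * ||t - t_prev||^2  (a Tikhonov-type term),
    so every solve is regularised and numerically stable."""
    p = X.shape[1]
    t = np.zeros(p)                              # initial prior mean
    A = X.T @ X + lam * np.eye(p)
    for _ in range(n_iter):
        t = np.linalg.solve(A, X.T @ y + lam * t)
    return t

# Noiseless, well-conditioned demo: the iterates approach the
# least-squares solution while each individual step stays regularised.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, 2.0])
theta = extended_bayes_linear(X, y, lam=1.0)
```

The fixed point of the iteration satisfies X'X t = X'y, so for well-posed problems it agrees with least squares; the benefit of the regularized steps shows up when X'X is ill-conditioned.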
朱正为; 周建江; 郭玉英
2015-01-01
The accuracy of the background clutter model is a key factor determining the performance of a constant false alarm rate (CFAR) target detection method. The G0 distribution is one of the best statistical models for background clutter in synthetic aperture radar (SAR) images and can accurately model various complex background clutters. However, its application is greatly limited by the complexity of its parameter estimation and the difficulty of obtaining the local detection threshold. To solve these problems, a SAR CFAR target detection method is proposed that uses a G0-distribution clutter model whose parameters are estimated by combining the method of log-cumulants (MoLC) with the method of moments (MoM). In this method, the G0 distribution models the background clutter, a new MoLC+MoM parameter estimation method coupled with a fast iterative algorithm estimates the G0 distribution parameters, and a refined bisection method obtains the local detection threshold for CFAR detection, which greatly improves the computational efficiency, detection performance and environmental adaptability of CFAR detection. Experimental results show that the proposed SAR CFAR target detection method performs well in various complex background clutter environments.
Berg, Casper Willestofte; Nielsen, Anders; Kristensen, Kasper
2014-01-01
Indices of abundance from fishery-independent trawl surveys constitute an important source of information for many fish stock assessments. Indices are often calculated using area-stratified sample means on age-disaggregated data, and finally treated in stock assessment models as independent...... observations. We evaluate a series of alternative methods for calculating indices of abundance from trawl survey data (delta-lognormal, delta-gamma, and Tweedie, using Generalized Additive Models) as well as different error structures for these indices when used as input in an age-based stock assessment model...... the different indices produced. The stratified mean method is found to be much more imprecise than the alternatives based on GAMs, which are found to be similar. Having time-varying index variances is found to be of minor importance, whereas the independence assumption is not only violated but has significant impact...
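For reference, the delta-lognormal index mentioned above combines a binomial part (the probability of a non-zero haul) with the lognormal mean of the positive catches. A minimal unstratified sketch with invented catch data (the paper's GAM-based versions are considerably more elaborate):

```python
import numpy as np

def delta_lognormal_index(catches):
    """Delta-lognormal index of abundance: the proportion of non-zero
    hauls times the lognormal mean exp(mu + sigma^2 / 2) of the
    positive catches.  Simplest unstratified case only."""
    catches = np.asarray(catches, dtype=float)
    pos = catches[catches > 0]
    p = len(pos) / len(catches)       # probability of a non-zero haul
    logs = np.log(pos)
    return p * np.exp(logs.mean() + logs.var(ddof=1) / 2.0)

# Seven hauls, three of them empty (illustrative numbers).
index = delta_lognormal_index([0, 0, 3.2, 1.5, 0, 8.1, 2.4])
```

The delta (zero-inflated) construction is what lets the index cope with the many empty hauls typical of trawl survey data.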
Kim, Huiyong; Hwang, Sung June; Lee, Kwang Soon
2015-02-03
Among various CO2 capture processes, the aqueous amine-based absorption process is considered the most promising for near-term deployment. However, the performance evaluation of newly developed solvents still requires complex and time-consuming procedures, such as pilot plant tests or the development of a rigorous simulator. The absence of an accurate and simple method for calculating energy performance at an early stage of process development has lengthened, and raised the cost of, the development of economically feasible CO2 capture processes. In this paper, a novel but simple method to reliably calculate the regeneration energy of a standard amine-based carbon capture process is proposed. Careful examination of stripper behavior and exploitation of the energy balance equations around the stripper allow the regeneration energy to be calculated using only vapor-liquid equilibrium and caloric data. The reliability of the proposed method was confirmed by comparison with rigorous simulations for two well-known solvents, monoethanolamine (MEA) and piperazine (PZ). The proposed method can predict the regeneration energy at various operating conditions with greater simplicity, speed, and accuracy than methods proposed in previous studies. This enables faster and more precise screening of solvents and faster optimization of process variables, and can ultimately accelerate the development of economically deployable CO2 capture processes.
Mishra, Wageesh; Davies, Jackie A
2014-01-01
A study of the kinematics and arrival times of CMEs at Earth, derived from time-elongation maps (J-maps) constructed from STEREO/Heliospheric Imager (HI) observations, provides an opportunity to understand the heliospheric evolution of CMEs in general. We implement various reconstruction techniques, based on the use of time-elongation profiles of propagating CMEs viewed from single or multiple vantage points, to estimate the dynamics of three geo-effective CMEs. We use the kinematic properties, derived from analysis of the elongation profiles, as inputs to the Drag Based Model for the distance beyond which the CMEs cannot be tracked unambiguously in the J-maps. The ambient solar wind into which these CMEs, which travel with different speeds, are launched, is different. Therefore, these CMEs will evolve differently throughout their journey from the Sun to 1 AU. We associate the CMEs, identified and tracked in the J-maps, with signatures observed in situ near 1 AU by the WIND spacecraft. By deriving the kinemat...
Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.
2015-11-01
Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex must begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may stem from intentional distortion of measurements aimed at reducing payments for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method, based on state estimation theory, for verifying the validity of measurement information in networks that transport energy resources such as electricity, heat, petroleum, and gas. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy all the state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called, in estimation theory, an estimation remainder. Large values of estimation remainders indicate large errors in particular energy resource measurements. Using the presented method, it is possible to improve the validity of energy resource measurements, to estimate the observability of the transportation network, to eliminate
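The remainder test can be illustrated with a toy weighted least-squares state estimation: redundant meters on one transportation main should all read the same flow, and the meter with the largest estimation remainder is the suspect one. The network and readings below are invented for illustration:

```python
import numpy as np

def state_estimate(z, H, w):
    """Weighted least-squares state estimation: find x minimising
    sum_i w_i * (z_i - (H x)_i)^2.  The estimation remainders
    r = z - H x_hat flag suspect measurements (large |r_i| suggests
    a gross error in measurement i)."""
    W = np.diag(w)
    x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    return x, z - H @ x

# One main metered at both ends plus a consumer meter: all three
# should read the same flow x.  The consumer meter (z[2]) under-reads.
H = np.array([[1.0], [1.0], [1.0]])
z = np.array([100.0, 101.0, 80.0])
x_hat, r = state_estimate(z, H, np.ones(3))
```

In a real network H encodes the flow-conservation equations of the graph, and the weights reflect meter accuracies; the principle of screening by remainders is the same.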
XIA Guangrong; LIU Xingzhao
2004-01-01
In this paper, the existing moment-type method is first analyzed and shown to diverge easily. Based on the Empirical characteristic function (ECF), two innovative moment-type methods are then proposed to estimate the parameters of the Probability density function (PDF) of the mixture of Gaussian and Symmetric α-stable (SαS) distributions. One is named the modified moment-type method, the other the improved moment-type method. The latter is mainly discussed. Compared with the existing moment-type method, it is more robust, converges better, and overcomes the constraint in using the ECF. Furthermore, it avoids solving complicated equations. Monte Carlo simulation experiments show that the improved moment-type method has excellent accuracy and requires less computation than the existing moment-type method.
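The ECF underlying both proposed estimators can be sketched for the pure SαS case: since |phi(t)| = exp(-(gamma*|t|)^alpha), regressing log(-log|phi_hat(t)|) on log|t| recovers alpha as the slope and gamma from the intercept. This is a simplified, Koutrouvelis-style sketch, not the paper's mixture estimator; the evaluation points ts are an assumption:

```python
import numpy as np

def ecf_sas_params(x, ts=(0.5, 1.0, 1.5)):
    """Estimate (alpha, gamma) of a symmetric alpha-stable sample from
    the empirical characteristic function phi_hat(t) = mean(exp(i*t*x)).
    For SaS, |phi(t)| = exp(-(gamma*|t|)^alpha), so
    log(-log|phi_hat(t)|) = alpha*log|t| + alpha*log(gamma)."""
    ts = np.asarray(ts)
    phi = np.array([np.exp(1j * t * x).mean() for t in ts])
    y = np.log(-np.log(np.abs(phi)))
    slope, intercept = np.polyfit(np.log(ts), y, 1)
    alpha = slope
    gamma = np.exp(intercept / alpha)
    return alpha, gamma

# Gaussian data are the alpha = 2 special case; with sigma = sqrt(2)
# the SaS scale is gamma = 1.
rng = np.random.default_rng(1)
a, g = ecf_sas_params(rng.normal(scale=np.sqrt(2.0), size=200_000))
```

Note that very small |t| values should be avoided: there -log|phi_hat(t)| is close to zero and the ECF sampling noise dominates, which is one of the constraints the paper's improved method addresses.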
Peng, Da; Yin, Cheng; Zhao, Hu; Liu, Wei
2016-12-01
Pore structure and mineral matrix elastic moduli are indispensable in rock physics models. We propose an estimation method for pore structure and mineral moduli based on the Kuster-Toksöz model and Biot's coefficient. In this technique, pore aspect ratios at five different scales, from 10^0 to 10^-4, are considered; Biot's coefficient is used to determine bounds on the mineral moduli; and an estimation procedure combined with a simulated annealing (SA) algorithm is developed to handle real logs or laboratory measurements. The proposed method is applied to parameter estimation on 28 sandstone samples whose properties have been measured in the laboratory. The water-saturated data are used for estimating pore structure and mineral moduli, and the oil-saturated data are used for testing the estimated parameters through fluid substitution in the Kuster-Toksöz model. Comparing the fluid substitution results with laboratory measurements, we find that the relative errors of the P-wave and S-wave velocities are all less than 5%, which indicates that the estimation results are accurate.
Estimation Method of Homography Matrix Based on Statistical Optimization
杨利峰; 谢世朋
2012-01-01
An image studied in computer science is a projective transformation of the real world (two- or three-dimensional Euclidean space) onto the image plane. Plane projective transformation (homography) estimation is a key step in feature-based object detection, registration, recognition, three-dimensional reconstruction and other tasks, but robust and accurate estimation of the homography matrix remains an unsolved problem. Our study found that the usual direct homography estimation methods based on points and lines can produce relatively large errors. To address this, the paper introduces a homography estimation method based on statistical optimization, which estimates the homography matrix by computing and optimizing its covariance tensor. Simple experiments comparing the estimation results of the statistical optimization method with those of the normalized direct linear transformation (DLT) method show that the statistically optimized estimation method is more effective.
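For comparison, the normalized DLT baseline mentioned in the abstract can be sketched as follows: Hartley normalization of both point sets, two linear equations per correspondence, and the null vector of the stacked system from an SVD. The statistically optimized method itself is not reproduced; the test homography and points are invented:

```python
import numpy as np

def normalise(pts):
    """Hartley normalisation: translate to the centroid and scale so
    the mean distance from the origin is sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.linalg.norm(pts - c, axis=1).mean()
    return np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])

def dlt_homography(src, dst):
    """Normalised DLT: each correspondence (x,y)->(u,v) contributes two
    rows of A h = 0; h is the singular vector of A with the smallest
    singular value, denormalised afterwards."""
    Ts, Td = normalise(src), normalise(dst)
    sh = (Ts @ np.column_stack([src, np.ones(len(src))]).T).T
    dh = (Td @ np.column_stack([dst, np.ones(len(dst))]).T).T
    rows = []
    for (x, y, _), (u, v, _) in zip(sh, dh):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = np.linalg.inv(Td) @ Vt[-1].reshape(3, 3) @ Ts
    return H / H[2, 2]

# Recover a known homography from six exact correspondences.
H_true = np.array([[1.0, 0.2, 3.0],
                   [0.1, 1.1, -2.0],
                   [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3], [4, 1]], float)
dh = (H_true @ np.column_stack([src, np.ones(6)]).T).T
dst = dh[:, :2] / dh[:, 2:]
H = dlt_homography(src, dst)
```

With noisy correspondences the DLT minimizes an algebraic rather than a statistical error, which is exactly the weakness the covariance-based optimization targets.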
Risser, Dennis W.; Gburek, William J.; Folmar, Gordon J.
2005-01-01
This study by the U.S. Geological Survey (USGS), in cooperation with the Agricultural Research Service (ARS), U.S. Department of Agriculture, compared multiple methods for estimating ground-water recharge and base flow (as a proxy for recharge) at sites in east-central Pennsylvania underlain by fractured bedrock and representative of a humid-continental climate. This study was one of several within the USGS Ground-Water Resources Program designed to provide an improved understanding of methods for estimating recharge in the eastern United States. Recharge was estimated on a monthly and annual basis using four methods: (1) unsaturated-zone drainage collected in gravity lysimeters, (2) daily water balance, (3) water-table fluctuations in wells, and (4) equations of Rorabaugh. Base flow was estimated by streamflow-hydrograph separation using the computer programs PART and HYSEP. Estimates of recharge and base flow were compared for an 8-year period (1994-2001) coinciding with operation of the gravity lysimeters at an experimental recharge site (Masser Recharge Site) and a longer 34-year period (1968-2001) for which climate and streamflow data were available on a 2.8-square-mile watershed (WE-38 watershed). Estimates of mean-annual recharge at the Masser Recharge Site and WE-38 watershed for 1994-2001 ranged from 9.9 to 14.0 inches (24 to 33 percent of precipitation). Recharge, in inches, from the various methods was: unsaturated-zone drainage, 12.2; daily water balance, 12.3; Rorabaugh equations with PULSE, 10.2, or RORA, 14.0; and water-table fluctuations, 9.9. Mean-annual base flow from streamflow-hydrograph separation ranged from 9.0 to 11.6 inches (21-28 percent of precipitation). Base flow, in inches, from the various methods was: PART, 10.7; HYSEP Local Minimum, 9.0; HYSEP Sliding Interval, 11.5; and HYSEP Fixed Interval, 11.6. Estimating recharge from multiple methods is useful, but the inherent differences of the methods must be considered when comparing
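Of the four methods above, the water-table fluctuation method (3) is the simplest to state: recharge equals specific yield times the head rise attributed to a recharge event. A sketch with illustrative values (the Sy and head rise are not from the study):

```python
def wtf_recharge(head_rise_ft, specific_yield):
    """Water-table fluctuation method: recharge = Sy * delta_h, where
    delta_h is the water-table rise attributed to a recharge event and
    Sy is the aquifer's specific yield (dimensionless)."""
    return specific_yield * head_rise_ft

# e.g. a 2.5 ft water-table rise in an aquifer with Sy = 0.08
r = wtf_recharge(2.5, 0.08)    # 0.2 ft of recharge for this event
```

The method's main uncertainty lies in choosing Sy and in attributing the rise to recharge rather than to other influences, which is part of why the study compares it against lysimeter and water-balance estimates.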
Gmel, G
2000-01-01
Smoking prevalence rates in Switzerland in the 1990s have been estimated from Perma data, available quarterly since 1991, as well as from the data of the first and second Swiss Health Surveys, conducted in 1992/93 and 1997. Both sources--each providing data on more than 10,000 respondents--were large-scale surveys that used different but complementary survey designs. The probabilistic sampling design of the Health Surveys assures representative findings; the Perma data, although obtained through a non-probabilistic sampling design, permit trend analysis, as Perma uses multiple measurement points and time-series methodology can therefore be applied. Both Perma and the Health Surveys yielded approximately the same prevalence of 37% male smokers in 1992/93 and 39% in 1997. For females, Perma gave prevalence rates 4% higher than the Health Surveys (Surveys 1992/93: 24%; 1997: 31%). For both sexes the increase in total smoking prevalence was accounted for mainly by adolescents and young adults. Whereas the Surveys showed an increase from 29% to 41% (18% to 39%) in males (females) aged 15 to 19 years, the corresponding increase derived from Perma was 50% lower. Except for this youngest age group, differences between the methods remained within standard statistical norms. There is no doubt, however, that smoking in adolescents increased between 1992/93 and 1997.
Tuominen, Pekko; Tuononen, Minttu
2017-06-01
One of the key elements in short-term solar forecasting is the detection of clouds and their movement. This paper discusses a new method for extracting cloud cover and cloud movement information from ground-based camera images using neural networks and the Lucas-Kanade method. Two novel features of the algorithm are that it performs well both inside and outside the circumsolar region, i.e. the vicinity of the sun, and that it can distinguish a threefold sun state. More precisely, the sun state can be detected to be clear, partly covered by clouds, or overcast. This is possible due to the absence of a shadow band in the imaging system. Visual validation showed that the new algorithm performed well in detecting clouds of varying color and contrast in situations regarded as difficult for commonly used thresholding methods. Cloud motion fields were computed from two consecutive sky images by solving the optical flow problem with the fast-to-compute Lucas-Kanade method. A local filtering scheme developed in this study was used to remove noisy motion vectors, and it is shown that this filtering technique yields a motion field with locally nearly uniform directions and smooth global changes in direction trends. Thin, transparent clouds still pose a challenge for detection and leave room for future improvements to the algorithm.
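The Lucas-Kanade step reduces, for a single window, to a 2x2 least-squares system in the image gradients. A minimal single-window sketch on a synthetic shifted blob (not the paper's sky images, which use many local windows plus the filtering scheme):

```python
import numpy as np

def lucas_kanade(im0, im1):
    """Single-window Lucas-Kanade: solve
    [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] v = -[sum IxIt; sum IyIt]
    for the motion vector v over the whole patch."""
    Ix = np.gradient(im0, axis=1)   # spatial gradients of frame 0
    Iy = np.gradient(im0, axis=0)
    It = im1 - im0                  # temporal difference
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)    # (vx, vy) in pixels

# Smooth blob shifted one pixel to the right between frames.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 32) ** 2) / 50.0)
v = lucas_kanade(blob(31.0), blob(32.0))
```

The linearization behind this system assumes small displacements, which is why cloud-motion pipelines typically work on consecutive frames at a high capture rate.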
Lazri, Mourad; Ameur, Zohra; Ameur, Soltane; Mohia, Yacine; Brucker, Jean Michel; Testud, Jacques
2013-10-01
The ultimate objective of this paper is the estimation of rainfall over an area in Algeria using data from the SEVIRI radiometer (Spinning Enhanced Visible and Infrared Imager). To achieve this aim, we use a new Convective/Stratiform Rain Area Delineation Technique (CS-RADT). The satellite rainfall retrieval technique is based on various spectral parameters of SEVIRI that express microphysical and optical cloud properties. It uses a multispectral thresholding technique to distinguish between stratiform and convective clouds. This technique (CS-RADT) is applied to the complex situation of the Mediterranean climate of this region. The tests have been conducted during the rainy seasons of 2006/2007 and 2010/2011 where stratiform and convective precipitation is recorded. The developed scheme (CS-RADT) is calibrated by instantaneous meteorological radar data to determine thresholds, and then rain rates are assigned to each cloud type by using radar and rain gauge data. These calibration data are collocated with SEVIRI data in time and space.
Painter, Colin C.; Heimann, David C.; Lanning-Rush, Jennifer L.
2017-08-14
A study was done by the U.S. Geological Survey in cooperation with the Kansas Department of Transportation and the Federal Emergency Management Agency to develop regression models to estimate peak streamflows of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent at ungaged locations in Kansas. Peak streamflow frequency statistics from selected streamgages were related to contributing drainage area and average precipitation using generalized least-squares regression analysis. The peak streamflow statistics were derived from 151 streamgages with at least 25 years of streamflow data through 2015. The developed equations can be used to predict peak streamflow magnitude and frequency within two hydrologic regions that were defined based on the effects of irrigation. The equations developed in this report are applicable to streams in Kansas that are not substantially affected by regulation, surface-water diversions, or urbanization. The equations are intended for use for streams with contributing drainage areas ranging from 0.17 to 14,901 square miles in the nonirrigation effects region and 1.02 to 3,555 square miles in the irrigation-affected region, corresponding to the range of drainage areas of the streamgages used in the development of the regional equations.
Estimating Decision Indices Based on Composite Scores
Knupp, Tawnya Lee
2009-01-01
The purpose of this study was to develop an IRT model that would enable the estimation of decision indices based on composite scores. The composite scores, defined as a combination of unidimensional test scores, were either a total raw score or an average scale score. Additionally, estimation methods for the normal and compound multinomial models…
Amoiridis, Anastasios; Anurag, Anup; Ghimire, Pramod
2015-01-01
Temperature estimation is of great importance for performance and reliability of IGBT power modules in converter operation as well as in active power cycling tests. It is common to be estimated through Thermo-Sensitive Electrical Parameters such as the forward voltage drop (Vce) of the chip. This...
SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE
Ming HAN; Yuanyao DING
2004-01-01
This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
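The zero-failure case can be made concrete: with a Beta(1, b) prior, observing zero failures in n tests gives a posterior mean failure probability of 1/(n + b + 1), and averaging over a hyperprior on b yields an expected Bayesian estimate. The Beta(1, b) family, the Uniform(0, c) hyperprior, and the value of c below are illustrative assumptions in the spirit of the method, not necessarily the paper's exact construction:

```python
import numpy as np

def e_bayes_failure_prob(n, c=5.0):
    """Expected Bayesian estimate of failure probability for
    zero-failure data: a Beta(1, b) prior gives posterior mean
    1/(n + b + 1); averaging over b ~ Uniform(0, c) yields
    log((n + c + 1) / (n + 1)) / c."""
    return np.log((n + c + 1.0) / (n + 1.0)) / c

p_hat = e_bayes_failure_prob(n=50)   # 50 tests, no observed failures
```

Unlike the maximum likelihood estimate, which is identically zero for zero-failure data, this estimate stays positive and decreases smoothly as the number of successful tests grows.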
Methods of Estimating Strategic Intentions
1982-05-01
of events, coding categories. 2. Weighting Data: policy capturing, Bayesian methods, correlation and variance analysis. 3. Characterizing Data: memory aids, fuzzy sets, factor analysis. 4. Assessing Covariations: actuarial models, backcasting, bootstrapping. 5. Cause and Effect Assessment: causal search, causal analysis, search trees, stepping analysis, hypothesis, regression analysis. 6. Predictions: backcasting, bootstrapping, decision
Chao WU; Lu-ping XU; Hua ZHANG; Wen-bo ZHAO
2015-01-01
Weak L1 signal acquisition in a high dynamic environment primarily faces a challenge: the integration peak is negatively influenced by the possible bit sign reversal every 20 ms and by the frequency error. The block accumulating semi-coherent integration of correlations (BASIC) is a state-of-the-art method, but calculating the inter-block conjugate products restricts BASIC in low signal-to-noise ratio (SNR) acquisition. We propose a block zero-padding method based on a discrete chirp-Fourier transform (DCFT) for parameter estimation in weak-signal and high dynamic environments. Compared with the conventional receiver architecture that uses closed-loop acquisition and tracking, it is more suitable for open-loop acquisition. The proposed method combines the DCFT and block zero-padding. In this way, the post-correlation signal is coherently post-integrated with the bit sequence stripped off, and the high dynamic parameters are precisely estimated using a threshold set based on a false alarm probability. In addition, the detection performance of the proposed method is analyzed. Simulation results show that, compared with the BASIC method, the proposed method can precisely detect the high dynamic parameters at lower SNR when the length of the received signal is fixed.
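A brute-force DCFT can be written as one FFT per chirp-rate bin; a peak in the resulting (frequency, chirp-rate) plane then jointly reveals the Doppler frequency and Doppler rate, the high-dynamic parameters in question. A sketch (a prime transform length is used, since the DCFT has peak ambiguities at composite lengths; the signal and its bins are illustrative):

```python
import numpy as np

def dcft(x):
    """Discrete chirp-Fourier transform:
    X[k, l] = sum_n x[n] * exp(-j*2*pi*(k*n + l*n^2)/N).
    Implemented as one FFT per chirp-rate bin l: multiplying by
    exp(-j*2*pi*l*n^2/N) de-chirps the signal, then the FFT scans k."""
    N = len(x)
    n = np.arange(N)
    out = np.empty((N, N), dtype=complex)
    for l in range(N):
        out[:, l] = np.fft.fft(x * np.exp(-2j * np.pi * l * n**2 / N))
    return out

# Chirp with frequency bin 7 and chirp-rate bin 3 (N = 61, prime):
# the DCFT magnitude peaks at exactly (k, l) = (7, 3).
N = 61
n = np.arange(N)
sig = np.exp(2j * np.pi * (7 * n + 3 * n**2) / N)
X = np.abs(dcft(sig))
k, l = np.unravel_index(X.argmax(), X.shape)
```

In an acquisition receiver the peak magnitude would additionally be compared against a threshold chosen for a target false alarm probability, as the abstract describes.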
Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling
2013-11-01
Taking Qihe County in Shandong Province of East China as the study area, soil samples were collected in the field. Based on hyperspectral reflectance measurements of the soil samples and first-derivative transformation, the spectra were denoised and compressed by discrete wavelet transform (DWT), the variables for the soil alkali-hydrolysable nitrogen quantitative estimation models were selected by genetic algorithms (GA), and the estimation models for soil alkali-hydrolysable nitrogen content were built using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS) could not only compress the spectral variables and reduce the number of model variables, but also improve the quantitative estimation accuracy of soil alkali-hydrolysable nitrogen content. Based on the level 1-2 low-frequency DWT coefficients, and despite the large reduction in spectral variables, the calibration models achieved prediction accuracy equal to or higher than that of the full soil spectra. The model based on the second-level low-frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg x kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali-hydrolysable nitrogen content.
Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively.
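Regional peak-flow equations of this kind are typically power laws in basin characteristics, fit as linear models in log space. A sketch of the fitting step on synthetic gage data, using ordinary least squares as a stand-in for the generalized least-squares procedure such studies actually use:

```python
import numpy as np

def fit_peakflow_model(area, precip, q):
    """Fit log10(Q) = a + b*log10(A) + c*log10(P) by least squares,
    i.e. the power-law form Q = 10^a * A^b * P^c typical of regional
    peak-flow regression equations."""
    X = np.column_stack([np.ones(len(q)), np.log10(area), np.log10(precip)])
    coef, *_ = np.linalg.lstsq(X, np.log10(q), rcond=None)
    return coef

# Synthetic gages following Q = 50 * A^0.7 * P^1.2 exactly, so the
# fit recovers the generating coefficients.
rng = np.random.default_rng(2)
A = rng.uniform(1, 1000, size=40)
P = rng.uniform(10, 40, size=40)
Q = 50.0 * A**0.7 * P**1.2
a, b, c = fit_peakflow_model(A, P, Q)
```

With real annual-peak data, one such equation is fit per annual exceedance probability and per hydrologic region, and the stated drainage-area limits bound where the equations may be applied.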
Haque, M F; Iqbal, M M; Ahmed, Z; Sultan, T; Rahman, M; Quddus, S; Rahman, M Q; Ahmed, N N
2013-10-01
Accurate estimation of the glomerular filtration rate (GFR) is essential for the evaluation of patients with chronic kidney disease (CKD). The present study compared the modified Gates GFR with laboratory-measured creatinine clearance (CCR) and the MDRD formula-based estimated GFR. One hundred eighty patients with previously diagnosed diabetic nephropathy were selected. At the time of evaluation, the patients' blood glucose was controlled and serum creatinine was stable. CCR was then measured, and GFR was estimated by the modified Gates method and the MDRD method. All patients were categorized into the 5 stages of CKD. They were matched for age, BMI, blood pressure, duration of diabetes, blood sugar and HbA1C levels. The Gates GFR in stage 2 (70±13) and stage 3 (48±12) was closer to MDRD in stage 2 (77±8) and stage 3 (43±7). The CCR was closer in stage 1 (110±52) and stage 4 (30±10) to MDRD in stage 1 (112±13) and stage 4 (21±4). Association analysis showed that MDRD GFR had the highest correlation with Gates GFR (r=0.86; p<…). Estimated GFR (eGFR) by the different methods varied significantly from each other at different stages of chronic kidney disease (CKD) in type 2 diabetic nephropathy subjects.
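For context, a common form of the MDRD estimate discussed above is the four-variable IDMS-traceable equation; the abstract does not state which MDRD variant the study used, so the coefficients below are the standard published ones, not necessarily the study's.

```python
def mdrd_egfr(scr_mg_dl, age_years, female=False, black=False):
    """Four-variable MDRD study equation (IDMS-traceable form).
    Returns eGFR in mL/min/1.73 m^2 from serum creatinine (mg/dL),
    age (years), sex, and race."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: serum creatinine 1.0 mg/dL, age 60, male
print(round(mdrd_egfr(1.0, 60), 1))
```

An eGFR in the 60-89 range corresponds to CKD stage 2 in the staging the study applies.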
Xiang Wang; Zhitao Huang; Yiyu Zhou
2014-01-01
This paper deals with the blind separation of nonstationary sources and direction-of-arrival (DOA) estimation in the underdetermined case, when there are more sources than sensors. We assume the sources to be time-frequency (TF) disjoint to a certain extent. In particular, the number of sources present at any TF neighborhood is strictly less than the number of sensors. We can identify the real number of active sources and achieve separation in any TF neighborhood by the sparse representation method. Compared with the subspace-based algorithm under the same sparseness assumption, which suffers from extra noise effects since it cannot estimate the true number of active sources, the proposed algorithm can estimate the number of active sources and their corresponding TF values in any TF neighborhood simultaneously. Another contribution of this paper is a new estimation procedure for the DOA of sources in the underdetermined case, which combines the TF sparseness of sources with a clustering technique. Simulation results demonstrate the validity and high performance of the proposed algorithm in both blind source separation (BSS) and DOA estimation.
Estimating Predictability: Redundancy and Surrogate Data Method
Pecen, L
1995-01-01
A method for estimating the theoretical predictability of time series is presented, based on information-theoretic functionals (redundancies) and the surrogate data technique. The redundancy, designed for a chosen model and a prediction horizon, evaluates the amount of information, in bits, between a model input (e.g., lagged versions of the series) and a model output (i.e., the series lagged by the prediction horizon from the model input). This value, however, is influenced by the method and precision of the redundancy estimation, and is therefore a) normalized by the maximum possible redundancy (given by the precision used), and b) compared to the redundancies obtained from two types of surrogate data in order to obtain a reliable classification of a series as either unpredictable or predictable. The type of predictability (linear or nonlinear) and its level can be further evaluated. The method is demonstrated using a numerically generated time series as well as high-frequency foreign exchange data and the theoretical ...
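The redundancy described above is, for the simplest model, the mutual information between a series and its lagged copy. A rough histogram (plug-in) estimate in bits, without the normalization and surrogate comparison the paper adds, might look like this; the bin count and test signals are illustrative assumptions.

```python
import numpy as np

def redundancy_bits(x, y, bins=16):
    """Histogram (plug-in) estimate of the mutual information I(X;Y) in bits,
    i.e. the 'redundancy' between a model input and a model output series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)           # marginal of X
    py = pxy.sum(axis=0, keepdims=True)           # marginal of Y
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
s = rng.normal(size=5000)
noise = rng.normal(size=5000)
print(redundancy_bits(s[:-1], s[1:]))        # near zero: white noise is unpredictable
print(redundancy_bits(s, s + 0.1 * noise))   # large: input and output strongly dependent
```

The paper's surrogate comparison exists precisely because this plug-in estimate is biased upward for finite data, so "near zero" must be judged against surrogate redundancies rather than against zero itself.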
JIANG Xiaokui; SUN Chao; FANG Jie
2003-01-01
Phase errors in synthetic aperture sonar (SAS) imaging must be reduced to less than one eighth of a wavelength to avoid image destruction. Most phase errors result from platform motion errors, with sway, yaw and surge being the most important error sources. The phase error of a wideband synthetic aperture sonar is modeled, and solutions for sway, yaw and surge motion estimation based on the raw sonar echo data with a Displaced Phase Center Antenna (DPCA) method are proposed and their implementations detailed in this paper. It is shown that sway estimates can be obtained from the correlation lag and phase difference between the returns at coincident phase centers. An estimate of yaw is also possible if this technique is applied to more than one overlapping phase center position. Surge estimates can be obtained by identifying pairs of phase centers with a maximum correlation coefficient. The method works only if the platform velocity is low enough that a number of phase centers from adjacent pings overlap.
Channel estimation in DCT-based OFDM.
Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing
2014-01-01
This paper derives the channel estimation of a discrete cosine transform (DCT) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been shown to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparsity of the wireless channel into account. Simulation results show that the CS-based channel estimation performs better than LS, while MMSE achieves optimal performance thanks to its prior knowledge of the channel statistics.
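A minimal sketch of pilot-aided LS estimation on synthetic data, with a crude per-tone MMSE-style shrinkage; the pilot design and noise level are illustrative assumptions, and a full MMSE estimator would also exploit the channel's frequency-domain correlation, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pilots = 64
# QPSK pilots with unit magnitude (hypothetical pilot design)
x = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n_pilots))
# Rayleigh-fading channel coefficient per pilot tone, unit average power
h = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)
noise_var = 0.01
y = h * x + np.sqrt(noise_var / 2) * (rng.normal(size=n_pilots)
                                      + 1j * rng.normal(size=n_pilots))

# Least-squares estimate: invert the known pilot symbol on each tone.
h_ls = y / x

# Per-tone MMSE-style shrinkage toward zero, assuming unit channel power;
# a proper MMSE filter would use the full channel covariance across tones.
h_mmse = h_ls / (1.0 + noise_var)

mse_ls = np.mean(np.abs(h_ls - h) ** 2)
print(f"LS MSE per tone: {mse_ls:.4f}")   # close to the noise variance
```

With unit-magnitude pilots, the LS error per tone equals the additive noise on that tone, which is why MMSE-style filtering (using channel statistics) can improve on it.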
Nouri, Hamideh; Glenn, Edward P.; Beecham, Simon; Chavoshi Boroujeni, Sattar; Sutton, Paul; Alaghmand, Sina; Nagler, Pamela L.; Noori, Behnaz
2016-01-01
Despite being the driest inhabited continent, Australia has one of the highest per capita water consumptions in the world. In addition, instead of having fit-for-purpose water supplies (using different qualities of water for different applications), highly treated drinking water is used for nearly all of Australia’s urban water supply needs, including landscape irrigation. The water requirement of urban landscapes, and particularly urban parklands, is of growing concern. The estimation of ET and subsequently plant water requirements in urban vegetation needs to consider the heterogeneity of plants, soils, water and climate characteristics. Accurate estimation of evapotranspiration (ET), which is the main component of a plant’s water requirement, in urban parks is highly desirable because this water maintains the health of green infrastructure and this in turn provides essential ecosystem services. This research contributes to a broader effort to establish sustainable irrigation practices within the Adelaide Parklands in Adelaide, South Australia.
Outlier Mining Based on Principal Component Estimation
Hu Yang; Ting Yang
2005-01-01
Outlier mining is an important aspect of data mining, and outlier mining based on the Cook distance is the most commonly used. However, when the data exhibit multicollinearity, the traditional Cook method is no longer effective. Given the strengths of principal component estimation, we use it in place of least squares estimation and derive a Cook distance measure based on principal component estimation, which can be used in outlier mining. We also investigate related theoretical and application issues.
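For reference, the classical least-squares Cook distance that the paper modifies can be computed directly from the hat matrix; this sketch uses ordinary least squares, not the principal component estimation the authors substitute, and the data are invented for illustration.

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each observation of an OLS fit y ~ X
    (X should include an intercept column)."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat (projection) matrix
    h = np.diag(H)                                # leverages
    e = y - H @ y                                 # residuals
    s2 = e @ e / (n - p)                          # residual variance
    return (e**2 / (p * s2)) * h / (1 - h)**2

rng = np.random.default_rng(2)
x = rng.normal(size=30)
y = 2 * x + rng.normal(scale=0.3, size=30)
y[0] += 10                                        # plant an outlier
X = np.column_stack([np.ones(30), x])
d = cooks_distance(X, y)
print(d.argmax())                                 # the planted outlier dominates
```

Under multicollinearity, `X.T @ X` becomes near-singular and this measure degrades, which is the gap the principal-component-based Cook distance is meant to fill.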
Dimension reduction based on weighted variance estimate
ZHAO JunLong; XU XingZhong
2009-01-01
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes the Sliced Average Variance Estimate (SAVE) as a special case. A bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension. The selected best estimate usually performs better than existing methods such as Sliced Inverse Regression (SIR) and SAVE. Methods such as SIR and SAVE usually put the same weight on each observation when estimating the central subspace (CS). By introducing a weight function, WVE puts different weights on different observations according to their distance from the CS. The weight function gives WVE very good performance in general and in complicated situations, for example when the distribution of the regressor deviates severely from the elliptical distribution on which many methods, such as SIR, are based. Compared with many existing methods, WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established. Simulations comparing the performance of WVE with other existing methods confirm the advantage of WVE.
Digital Forensics Analysis of Spectral Estimation Methods
Mataracioglu, Tolga
2011-01-01
Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. In today's world, it is widely used to secure information. In this paper, the traditional spectral estimation methods are introduced. The performance of each method is examined by comparing all of the spectral estimation methods. Finally, from these performance analyses, a brief summary of the pros and cons of the spectral estimation methods is given. We also give a steganography demo by hiding information in a sound signal and recovering the information (i.e., the true frequency of the information signal) from the sound by means of the spectral estimation methods.
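As a toy version of the demo described above, a classical periodogram can recover the frequency of a weak tone embedded in a sound signal; the carrier and hidden-tone frequencies below are hypothetical, and real steganalysis would be far subtler.

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# "Cover" sound at 50 Hz plus a weak embedded tone at 137 Hz
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 137 * t)

# Classical periodogram: squared magnitude of the DFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Ignore the strong 50 Hz carrier and pick the next spectral peak.
mask = freqs > 100
print(freqs[mask][spectrum[mask].argmax()])   # 137.0
```

Higher-resolution spectral estimators (e.g., autoregressive or subspace methods) trade this simple DFT periodogram for better frequency resolution at low SNR, which is the comparison the paper carries out.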
Engelsma, K.A.; Veerkamp, R.F.; Calus, M.P.L.; Bijma, P.; Windig, J.J.
2012-01-01
Genetic diversity is often evaluated using pedigree information. Currently, diversity can be evaluated in more detail over the genome based on large numbers of SNP markers. Pedigree- and SNP-based diversity were compared for two small related groups of Holstein animals genotyped with the 50 k SNP
Thampi, Smitha V.; Bagiya, Mala S.; Chakrabarty, D.; Acharya, Y. B.; Yamamoto, M.
2014-12-01
A GNU Radio Beacon Receiver (GRBR) system for total electron content (TEC) measurements using 150 and 400 MHz transmissions from Low-Earth Orbiting Satellites (LEOS) was fabricated in house and has been operational at Ahmedabad (23.04°N, 72.54°E geographic, dip latitude 17°N) since May 2013. This system receives the 150 and 400 MHz transmissions from high-inclination LEOS. The first few days of observations are presented in this work to demonstrate the efficacy of an ensemble average method for converting relative TECs to absolute TECs. This method is a modified version of the differential Doppler-based method proposed by de Mendonca (1962) and is suitable even for ionospheric regions with large spatial gradients. Comparison with TECs derived from a collocated GPS receiver shows that the absolute TECs estimated by this method are reliable over regions with large spatial gradients. The method is useful even when only one receiving station is available. The differences between these observations are discussed to bring out the importance of the spatial differences between the ionospheric pierce points of these satellites. A few examples of the latitudinal variation of TEC during different local times using GRBR measurements are also presented, which demonstrates the potential of radio beacon measurements in capturing the large-scale plasma transport processes in the low-latitude ionosphere.
István Makra
2015-01-01
• The concentration of virus nanoparticles can be calculated based on the two measured scattered light intensities by knowing the refractive index of the dispersing solution, of the polymer and virus nanoparticles as well as their relative sphere equivalent diameters.
Asakura, Tomoyuki; Miyazawa, Yoshiyuki; Usuda, Shigeru
2017-01-01
[Purpose] Fluidity in the sit-to-walk task has been quantitatively measured with three-dimensional motion analysis system. The purpose of this study was to determine the validity of an accelerometer-based method for estimating fluidity in community-dwelling elderly individuals. [Subjects and Methods] Seventeen community-dwelling elderly females performed a sit-to-walk task. The motion was recorded by an accelerometer, a three-dimensional motion analysis system and a foot pressure sensor simultaneously. The timings of events determined from the acceleration waveform were compared to the timings determined from the three-dimensional motion analysis data (task onset, maximum trunk inclination) or foot pressure sensor data (first heel strike). Regression analysis was used to estimate the fluidity index from the duration between events. [Results] The characteristics of the acceleration waveform were similar to those previously reported in younger adults. Comparisons of event timings from accelerometer and motion analysis system data indicated no systematic bias. Regression analysis showed that the duration from maximum trunk inclination to the first heel strike was the best predictor of fluidity index. [Conclusion] An accelerometer-based method using the duration between characteristic events may be used to precisely and conveniently assess fluidity in a sit-to-walk task in a community setting.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given r...
2006-03-01
cations industry are Frequency Division Multiple Access (FDMA) and Code Division Multiple Access (CDMA). More detailed discussion of these schemes...
Nielsen, Anker; Bertelsen, Niels Haldor; Wittchen, Kim Bjarne
2013-01-01
The Energy Performance Directive requires energy certifications for buildings. This is implemented in Denmark so that houses that are sold must have an energy performance label based on an evaluation from a visit to the building. The result is that only a small part of the existing houses has an ...
Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.
2016-01-01
18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies have mostly been validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), whereas the majority of cancer lesions do not fulfil these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice; and 2) to develop a strategy for obtaining anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV with a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed under both ideal (e.g., spherical objects with uniform radioactivity concentration) and non-ideal (e.g., non-spherical objects with non-uniform radioactivity concentration) conditions. The strategy for obtaining phantoms with synthetic realistic lesions (e.g., with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
An Age Estimation Method Based on Facial Images
罗佳佳; 蔡超
2012-01-01
Research on age estimation has a significant impact on human-computer interaction. In this paper, an age estimation method based on facial images is proposed. The new method establishes a face anthropometry template based on craniofacial growth pattern theory to obtain facial geometric proportion features, extracts texture features of local facial areas using a fractional differential approach, and combines these two kinds of features into personal age feature vectors. Machine learning methods such as clustering algorithms are used to obtain an age-feature knowledge matrix, and in age estimation this knowledge matrix votes on the estimated age of the input facial image. Experimental results show that the estimation error is small and the classification accuracy is close to human judgment.
Nieuwenhout, F; van der Borg, N; van Sark, W.G.J.H.M.; Turkenburg, W.C.
2007-01-01
In order to evaluate the performance of solar home systems (SHSs), data on local insolation is a prerequisite. We present a new method to estimate insolation if direct measurements are unavailable. This method comprises estimation of daily irradiation by correlating photovoltaic (PV) module currents
Nieuwenhout, F; van den Borg, N.; van Sark, W.G.J.H.M.; Turkenburg, W.C.
2006-01-01
In order to evaluate the performance of solar home systems (SHS), data on local insolation is a prerequisite. We present the outline of a new method to estimate insolation if direct measurements are unavailable. This method comprises estimation of daily irradiation by correlating photovoltaic (PV)-m
Konrad, Paul Markus
2014-01-01
All across Europe, a drama of historical proportions is unfolding as the debt crisis continues to rock the worldwide financial landscape. Whilst insecurity rises, the general public, policy makers, scientists and academics are searching high and low for independent and objective analyses that may help to assess this unusual situation. For more than a century, rating agencies had developed methods and standards to evaluate and analyze companies, projects or even sovereign countries. However, due to their dated internal processes, the independence of these rating agencies is being questioned, ra
Tian, Jialin; Madaras, Eric I.
2009-01-01
The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.
Terashi, Genki; Takeda-Shitaka, Mayuko; Kanou, Kazuhiko; Iwadate, Mitsuo; Takaya, Daisuke; Umeyama, Hideaki
2007-12-01
We participated in rounds 6-12 of the critical assessment of predicted interaction (CAPRI) contest as the SKE-DOCK server and human teams. The SKE-DOCK server is based on simple geometry docking and a knowledge-based scoring function. The procedure is summarized in the following three steps: (1) protein docking according to shape complementarity, (2) evaluating complex models, and (3) repacking side-chains of models. The SKE-DOCK server did not make use of biological information. On the other hand, the human team tried various intervention approaches. In this article, we describe in detail the processes of the SKE-DOCK server, together with results and reasons for success and failure. Good predicted models were obtained for target 25 by both the SKE-DOCK server and human teams. When the modeled receptor proteins were superimposed on the experimental structures, the smallest ligand-rmsd values, corresponding to the rmsd between the model and experimental structures, were 3.307 and 3.324 Å, respectively. Moreover, the two teams obtained 4 and 2 acceptable models for target 25. The overall result for both the SKE-DOCK server and human teams was medium accuracy for one target (target 25) out of nine. (c) 2007 Wiley-Liss, Inc.
Valdivia, Valeska; Hennebelle, Patrick
2014-11-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled, and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for fast estimates of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy of the tree-based method is better than 10% for the extinction, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds, we present the probability distribution function of gas, the associated temperature per density bin, and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree; it is thus suitable for parallel computing. We show that screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We
An estimated method of visibility for a remote sensing system based on LabVIEW and Arduino
Chen, Xiaochuan; Ruan, Chi; Zheng, Hairong
2017-02-01
Visibility data have long been needed for traffic meteorological monitoring and warning systems, but visibility has traditionally been monitored with expensive special-purpose equipment. Visibility degradation in fog is due to light scattering by fog droplets, which transition from aerosols via activation. Considering the strong correlation between PM2.5 (particulate matter with diameters less than 2.5 μm) mass concentration and visibility, regression models can be useful tools for retrieving visibility data from available PM2.5 data. In this study, PM2.5 is measured with low-cost commercial equipment. The experimental results indicate that relative humidity is the key factor affecting the accuracy of the correlation between PM2.5 and visibility, with the strongest correlation located in a specific RH range. Using an Arduino as the controller, we design and implement a wireless serial acquisition and control system based on LabVIEW and Arduino; this system achieves real-time synchronized Web publishing. Test results indicate that the system has a friendly interface, high reliability and good expansibility; moreover, it can retrieve visibility data from available PM2.5 data that are easy to acquire with low-cost sensors along the highway.
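The retrieval idea, regressing (inverse) visibility on PM2.5 within a favorable RH band, can be sketched on synthetic data. The coefficients, the RH cutoff, and the Koschmieder-style relation used to generate the data are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic training data (hypothetical): extinction grows with PM2.5,
# and visibility ~ 3.912 / extinction (Koschmieder relation).
pm25 = rng.uniform(10, 150, 200)                       # ug/m^3
rh = rng.uniform(30, 95, 200)                          # percent
extinction = 0.005 * pm25 + rng.normal(0, 0.01, 200)   # 1/km
visibility = 3.912 / np.clip(extinction, 0.05, None)   # km

# RH strongly perturbs the PM2.5-extinction link, so fit only in a
# (hypothetical) low-RH band where the correlation is strongest.
band = rh < 70
coeffs = np.polyfit(pm25[band], 1 / visibility[band], 1)

def estimate_visibility(pm25_value):
    """Retrieve visibility (km) from a PM2.5 reading via the fitted model."""
    return 1.0 / np.polyval(coeffs, pm25_value)

print(f"PM2.5 = 80 ug/m^3 -> visibility ~ {estimate_visibility(80):.1f} km")
```

Fitting 1/visibility rather than visibility keeps the regression linear in PM2.5, since extinction (the physical quantity) is what scales with particle concentration.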
Kaliszan, Michał
2013-09-01
This paper presents a verification of a thermodynamic model allowing estimation of the time of death (TOD) by calculating the post mortem interval (PMI) based on a single eyeball temperature measurement at the death scene. The study was performed on 30 cases with known PMI, ranging from 1 h 35 min to 5 h 15 min, using pin probes connected to a high-precision electronic thermometer (Dostmann-electronic). The measured eye temperatures ranged from 20.2 to 33.1°C. Rectal temperature was measured at the same time and ranged from 32.8 to 37.4°C. Ambient temperatures, which ranged from -1 to 24°C, environmental conditions (still air to light wind) and the amount of hair on the head were also recorded in every case. PMI was calculated using a formula based on Newton's law of cooling, previously derived and successfully tested in comprehensive studies on pigs and a few human cases. Thanks to the significantly faster post mortem decrease of eye temperature, a residual or nonexistent plateau effect in the eye, and practically no influence of body mass, TOD in the human death cases could be estimated with good accuracy. The highest TOD estimation errors for post mortem intervals up to around 5 h were 1 h 16 min, 1 h 14 min and 1 h 03 min in three of the 30 cases, while for the remaining 27 cases the error was not more than 47 min. The mean error for all 30 cases was ±31 min. This indicates that the proposed method offers quite good precision in the early post mortem period, with an accuracy of ±1 h at a 95% confidence interval. On the basis of the presented method, TOD can also be calculated at the death scene with the use of a proposed portable electronic device (TOD-meter).
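The underlying calculation is Newton's law of cooling solved for time. The cooling constant below is a hypothetical placeholder, since the paper's calibrated formula is not reproduced in the abstract.

```python
import math

def pmi_hours(eye_temp, ambient_temp, k_per_hour, initial_temp=37.0):
    """Post mortem interval from Newton's law of cooling:
    T(t) = Ta + (T0 - Ta) * exp(-k t), solved for t."""
    return -math.log((eye_temp - ambient_temp) /
                     (initial_temp - ambient_temp)) / k_per_hour

# Hypothetical cooling constant; the real value must be calibrated
# experimentally (as the pig and human studies cited above did).
k = 0.28
print(f"PMI ~ {pmi_hours(eye_temp=28.0, ambient_temp=18.0, k_per_hour=k):.1f} h")
```

The formula is only usable while the eye is still measurably warmer than the surroundings, which matches the method's stated validity window of roughly the first five hours.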
Liang, Zhang; Yanqing, Hou; Jie, Wu
2016-06-01
The multi-antenna synchronized receiver (using a common clock) is widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve the positioning or AD accuracy. Thus, the line bias (LB) parameter (fractional bias isolating) should be calibrated in the single-differenced phase equations. In past decades, researchers estimated the LB as a constant parameter in advance and compensated for it in real time. However, the constant-LB assumption is inappropriate in practical applications because of changes in the physical length and permittivity of the cables, caused by environmental temperature variation and the instability of the receiver's inner circuit transmitting delay. Considering the LB drift (or colored LB) in practical circumstances, this paper proposes a real-time estimator using an auto-regressive moving average (ARMA) based prediction/whitening filter model or a moving average (MA) based constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2) and ARMA(2, 1), are applied for the LB prediction. The real-time relative positioning model using the ARMA-based predicted LB is derived, and it is theoretically proved that its positioning accuracy is better than that of the traditional double-differenced carrier phase (DDCP) model. The drifting LB is defined with a phase-temperature changing-rate integral function, which is a random walk process if the phase-temperature changing rate is white noise, and is validated by analysis of the AR model coefficient. The auto-covariance function shows that the LB is indeed varying in time and that estimating it as a constant is not safe, which is also demonstrated by the analysis of the LB variation of each visible satellite during a zero- and short-baseline BDS/GPS experiment. Compared to the DDCP approach, in the zero-baseline experiment, the LB constant
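As a minimal illustration of the ARMA-based idea, an AR(1) model can be fitted to a simulated drifting line bias and used for one-step prediction; the simulation parameters are invented for illustration, and the paper's estimator covers richer ARMA cases.

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of an AR(1) model x_t = c + phi * x_{t-1} + e_t."""
    x_prev, x_next = series[:-1], series[1:]
    phi, c = np.polyfit(x_prev, x_next, 1)
    return c, phi

def predict_next(series, c, phi):
    """One-step-ahead prediction of the (drifting) line bias."""
    return c + phi * series[-1]

rng = np.random.default_rng(4)
# Simulated slowly drifting line bias (temperature-driven, near-random-walk)
lb = np.empty(500)
lb[0] = 0.1
for t in range(1, 500):
    lb[t] = 0.02 + 0.95 * lb[t - 1] + rng.normal(scale=0.005)

c, phi = fit_ar1(lb)
print(f"estimated phi = {phi:.2f}")           # close to the true 0.95
print(f"next LB prediction = {predict_next(lb, c, phi):.3f}")
```

A phi estimate near 1 is exactly the signature of the random-walk-like drift the paper describes, and it is why treating the LB as a constant is unsafe.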
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (cost calculator) together with a pilot test of its utility in the United States where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in the United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches with some possible advantages to the use of the cost calculator approach, especially in the use of this method for estimating child welfare costs in relation to the incorporation of evidence-based interventions into routine practice.
Ranran Li
2015-09-01
An integrated approach using the inverse method and a Bayesian approach, combined with a lake eutrophication water quality model, was developed for parameter estimation and water environmental capacity (WEC) analysis. The model was used to support load reduction and effective water quality management in the Taihu Lake system in eastern China. Water quality was surveyed yearly from 1987 to 2010. Total nitrogen (TN) and total phosphorus (TP) were selected as water quality model variables. Decay rates of TN and TP were estimated using the proposed approach. WECs of TN and TP in 2011 were determined based on the estimated decay rates. Results showed that the historical loading was beyond the WEC; thus, reduction of nitrogen and phosphorus input is necessary to meet water quality goals. The WEC and allowable discharge capacity (ADC) in 2015 and 2020 were then predicted, and the reduction ratios of ADC during these years were also provided. All of these enable decision makers to assess the influence of each loading, visualize potential load reductions under different water quality goals, and formulate a reasonable water quality management strategy.
Bayesian Inference Methods for Sparse Channel Estimation
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation … analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed …
Methods for estimating loads transported by rivers
T. S. Smart
1999-01-01
Ten methods for estimating the loads of constituents in a river were tested using data from the River Don in North-East Scotland. By treating loads derived from flow and concentration data collected every 2 days as a truth to be predicted, the ten methods were assessed for use when concentration data are collected fortnightly or monthly by sub-sampling from the original data. Estimates of coefficients of variation, bias and mean squared errors of the methods were compared; no method consistently outperformed all others, and different methods were appropriate for different constituents. The widely used interpolation methods can be improved upon substantially by modelling the relationship of concentration with flow or seasonality, but only if these relationships are strong enough.
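The gain from modelling the concentration-flow relationship can be illustrated on synthetic data. Below, a naive interpolation-type estimator (mean concentration times mean flow) is compared with a flow-weighted ratio estimator that exploits the continuous flow record; both are generic textbook estimators under assumed numbers, not necessarily among the exact ten methods tested in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 365
flow = np.exp(rng.normal(1.0, 0.4, n))              # daily mean flow (arbitrary units)
conc = 2.0 + 0.8 * flow + rng.normal(0.0, 0.2, n)   # concentration rises with flow
true_load = float(np.sum(conc * flow))              # "truth" from the full daily record

sub = slice(0, n, 14)                               # fortnightly concentration sampling

# Interpolation-type estimator: product of the sub-sample means, scaled to n days
est1 = float(np.mean(conc[sub]) * np.mean(flow[sub]) * n)

# Flow-weighted (ratio) estimator: flow-weighted mean concentration times total flow
est2 = float(np.sum(conc[sub] * flow[sub]) / np.sum(flow[sub]) * np.sum(flow))

print(abs(est1 - true_load) / true_load, abs(est2 - true_load) / true_load)
```

Because concentration covaries with flow, the product-of-means estimator misses the covariance term, while the ratio estimator recovers most of it from the fortnightly sub-sample.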
Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.
2015-12-01
A Web-Based Computer System (RPM-WEBBSYS) has been developed for the application of the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery processes that occur during well completion. RPM-WEBBSYS has been programmed using advances in information technology to perform SFT computations more efficiently. RPM-WEBBSYS can be easily and rapidly executed from any computing device (e.g., personal computers and portable devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where a good match between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected from well drilling and shut-in operations, where the typical problems of under- and over-estimation of the SFT (exhibited by most of the existing analytical methods) were effectively corrected.
Robust time and frequency domain estimation methods in adaptive control
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. Its development was motivated by the need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model, as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model, must be provided. Two estimation methods are presented for finding parameter estimates and, hence, a nominal model. The first is based on the well-developed field of time-domain parameter estimation. The second uses a type of weighted least-squares fit to a frequency-domain estimated model. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through simulations.
Joint 2-D DOA and Noncircularity Phase Estimation Method
Wang Ling
2012-03-01
Classical joint estimation methods require a large amount of computation and a multidimensional search. To avoid these shortcomings, a novel joint two-dimensional (2-D) direction of arrival (DOA) and noncircularity phase estimation method based on three orthogonal linear arrays is proposed. The 3-D parameter estimation problem can be transformed into three parallel 2-D parameter estimations according to the characteristics of the three orthogonal linear arrays. Furthermore, each 2-D parameter estimation can be reduced to a 1-D estimation by simultaneously using the rotational invariance property of the signal subspace and the orthogonality of the noise subspace in every subarray. Ultimately, the algorithm achieves joint estimation and pairing of the parameters with a single eigen-decomposition of the extended covariance matrix. The proposed algorithm is applicable to low-SNR and small-snapshot scenarios and can estimate 2(M − 1) signals. Simulation results verify that the proposed algorithm is effective.
A SPECTROPHOTOMETRIC METHOD TO ESTIMATE PIPERINE IN PIPER SPECIES
1998-01-01
A simple, rapid and economical procedure for the estimation of piperine in different Piper species by UV spectrophotometry was developed and is described. The method is based on the identification of piperine by TLC and on its ultraviolet absorbance maximum in alcohol at 328 nm.
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from…
Estimating Tree Height-Diameter Models with the Bayesian Method
Xiongqing Zhang
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. Common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both approaches showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors or the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
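Why informative priors narrow the credible bands can already be seen in a conjugate normal-mean toy model (not the paper's Weibull height-diameter model; all numbers below, including the "height summary", are hypothetical):

```python
# Conjugate normal model: y_i ~ N(theta, sigma2) with sigma2 known,
# prior theta ~ N(m0, v0). The posterior is N(m1, v1) with
#   v1 = 1 / (1/v0 + n/sigma2),  m1 = v1 * (m0/v0 + n*ybar/sigma2)
def posterior(ybar, n, sigma2, m0, v0):
    v1 = 1.0 / (1.0 / v0 + n / sigma2)
    m1 = v1 * (m0 / v0 + n * ybar / sigma2)
    return m1, v1

ybar, n, sigma2 = 18.2, 30, 4.0                               # hypothetical data summary
m_info, v_info = posterior(ybar, n, sigma2, m0=18.0, v0=0.25)  # informative prior
m_flat, v_flat = posterior(ybar, n, sigma2, m0=0.0, v0=1e6)    # near-flat prior
print(v_info < v_flat)   # informative prior yields the narrower credible band
```

With a near-flat prior the posterior mean collapses to the sample mean (the classical estimate), while the informative prior shrinks the posterior variance, mirroring the narrower credible bands reported in the abstract.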
DOA estimation based on distributed subspace method
郭俊颖; 刘庆华
2013-01-01
DOA estimation based on a centralized subspace method requires a fusion center; a decentralized subspace estimation method is therefore proposed. The distributed algorithm, which uses only near-neighbor communication, needs no routes to transmit data from all nodes to a fusion center; each sensor estimates only its corresponding row of the subspace matrix. Simulation results indicate that the decentralized algorithm achieves performance similar to the centralized algorithm, i.e., it tracks the principal subspace of the signal covariance matrix well, and when applied to DOA estimation it accurately resolves several source signals. The decentralized algorithm removes the difficulty of processing all data at a fusion center and provides a solution for important problems in large sensor networks.
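As a point of reference for the centralized baseline the distributed algorithm is compared against, a minimal subspace (MUSIC-style) DOA estimate on a uniform linear array might look like the sketch below; the array geometry, SNR, and source angles are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 8, 200                          # sensors (half-wavelength ULA), snapshots
true_deg = np.array([-20.0, 15.0])     # hypothetical source directions

def steer(deg):
    """ULA steering vectors for half-wavelength element spacing."""
    k = np.pi * np.sin(np.deg2rad(deg))
    return np.exp(1j * np.outer(np.arange(M), k))

A = steer(true_deg)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
R = X @ X.conj().T / N                 # sample covariance matrix

w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
En = V[:, :M - 2]                      # noise subspace (2 sources assumed known)

grid = np.arange(-90.0, 90.01, 0.1)
P = 1.0 / np.sum(np.abs(En.conj().T @ steer(grid)) ** 2, axis=0)  # MUSIC spectrum

peaks = [i for i in range(1, len(grid) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
est = np.sort(grid[sorted(peaks, key=lambda i: P[i])[-2:]])
print(est)                             # near the true directions
```

The decentralized method of the abstract distributes exactly this subspace computation: each sensor maintains only its own row of the (noise or signal) subspace matrix via neighbor communication.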
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
Espinoza-Ojeda, O. M.; Santoyo, E.
2016-08-01
A new practical method based on logarithmic transformation regressions was developed for the determination of static formation temperatures (SFTs) in geothermal, petroleum and permafrost bottomhole temperature (BHT) data sets. The new method involves the application of multiple linear and polynomial (from quadratic to eighth-order) regression models to BHT and log-transformed (Tln) shut-in times. Selection of the best regression models was carried out by using four statistical criteria: (i) the coefficient of determination as a fitting quality parameter; (ii) the sum of the normalized squared residuals; (iii) the absolute extrapolation, as a dimensionless statistical parameter that enables the accuracy of each regression model to be evaluated through the extrapolation of the last measured temperature of the data set; and (iv) the deviation percentage between the measured and predicted BHT data. The best regression model was used for reproducing the thermal recovery process of the boreholes, and for the determination of the SFT. The original thermal recovery data (BHT and shut-in time) were used to demonstrate the new method's prediction efficiency. The prediction capability of the new method was additionally evaluated by using synthetic data sets where the true formation temperature (TFT) was known with accuracy. With these purposes, a comprehensive statistical analysis was carried out through the application of the well-known F-test and Student's t-test and the error percentage or statistical differences computed between the SFT estimates and the reported TFT data. After applying the new log-transformation regression method to a wide variety of geothermal, petroleum, and permafrost boreholes, it was found that the polynomial models were generally the best regression models that describe their thermal recovery processes. These fitting results suggested the use of this new method for the reliable estimation of SFT. Finally, the practical use of the new method was
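The core regression step, fitting a polynomial to BHT against log-transformed shut-in time and checking it by predicting a held-out last measurement (the "absolute extrapolation" idea of criterion iii), can be sketched on synthetic recovery data with a known formation temperature. The exponential recovery model and every number below are assumptions, not the paper's data:

```python
import numpy as np

# Synthetic shut-in recovery toward a known true formation temperature (TFT)
TFT = 120.0                                  # degC, known truth for validation
t = np.arange(6.0, 49.0, 6.0)                # shut-in times, hours
bht = TFT - 25.0 * np.exp(-t / 20.0)         # assumed BHT recovery log (noise-free)

# Quadratic regression of BHT against log-transformed shut-in time, fitted on
# all but the last point; the held-out point checks the extrapolation accuracy.
coef = np.polyfit(np.log(t[:-1]), bht[:-1], deg=2)
pred_last = np.polyval(coef, np.log(t[-1]))
print(abs(pred_last - bht[-1]))              # extrapolation error, degC
```

In the paper this extrapolation check is one of four criteria used to rank the linear through eighth-order candidates before the selected model is used to estimate the SFT.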
Method of moments estimation of GO-GARCH models
Boswijk, H.P.; van der Weide, R.
2009-01-01
We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to
A LEVEL-VALUE ESTIMATION METHOD FOR SOLVING GLOBAL OPTIMIZATION
WU Dong-hua; YU Wu-yang; TIAN Wei-wen; ZHANG Lian-sheng
2006-01-01
A level-value estimation method was illustrated for solving the constrained global optimization problem. The equivalence between the root of a modified variance equation and the optimal value of the original optimization problem is shown. An alternate algorithm based on the Newton's method is presented and the convergence of its implementable approach is proved. Preliminary numerical results indicate that the method is effective.
Nieuwenhout, Frans; Van der Borg, Nico [Energy Research Centre of the Netherlands, Petten (Netherlands); Van Sark, Wilfried; Turkenburg, Wim [Copernicus Institute for Sustainable Development and Innovation, Utrecht University (Netherlands). Department of Science, Technology and Society
2006-09-15
In order to evaluate the performance of solar home systems (SHS), data on local insolation is a prerequisite. We present the outline of a new method to estimate insolation if direct measurements are unavailable. This method comprises estimation of daily irradiation by correlating photovoltaic (PV)-module currents from a number of solar home systems, located a few kilometres apart. The objective is to obtain reliable daily and monthly insolation figures that are representative for an area of a few square kilometres. (author)
Maximum Likelihood DOA Estimator based on Grid Hill Climbing Method
艾名舜; 马红光
2011-01-01
The maximum likelihood estimator for direction of arrival (DOA) possesses optimal theoretical performance but high computational complexity. Treating the estimation as an optimization problem of a high-dimensional nonlinear function, a novel algorithm is proposed to reduce the computational load. First, the beamforming method is adopted to estimate the spatial spectrum roughly, and a group of initial solutions obeying a "pre-estimated distribution" is constructed from the spatial spectrum information; these initial solutions fall with high probability within the local attraction basin of the global optimum. The solution with the maximum fitness is then selected as the starting point of a local search. The grid hill-climbing method (GHCM), a local search that takes a grid as its search unit, is an improved version of the traditional hill-climbing method and is more efficient and stable, so it is adopted to obtain the global optimum solution. The proposed algorithm obtains accurate DOA estimates at lower computational cost, and simulations show that it is more efficient than the maximum likelihood DOA estimator based on PSO.
A comprehensive estimation method for enterprise capability
Tetiana Kuzhda
2015-11-01
In today's highly competitive business world, the need for efficient enterprise capability management is greater than ever. As more enterprises compete on a global scale, the effective use of enterprise capability will become imperative for improving their business activities. The definition of the socio-economic capability of the enterprise is given and the main components of enterprise capability are pointed out. A comprehensive method to estimate enterprise capability that takes into account both social and economic components is offered, and a methodical approach to the integrated estimation of enterprise capability is developed. The novelty lies in including a summary measure of the social component of enterprise capability when defining the integrated index of enterprise capability. The practical significance of the methodological approach is that it allows assessing enterprise capability comprehensively by combining two kinds of estimates, social and economic, and converting them into a single integrated indicator. It provides a comprehensive approach to the socio-economic estimation of enterprise capability, sets a formal basis for decision making, and helps allocate enterprise resources reasonably. Practical implementation of this method will reflect the current condition and trends of the enterprise and help make forecasts and plans for its development and the efficient use of its capability.
Research on the estimation method for Earth rotation parameters
Yao, Yibin
2008-12-01
In this paper, methods of earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. Two different ways to estimate ERP are involved: one is the parameter transformation method, and the other is direct adjustment with restrictive conditions. The IGS daily SINEX files produced by GPS tracking stations can be used to estimate ERP, and the parameter transformation method can simplify the process. The processing results indicate that a systematic error will exist in the ERP estimated from GPS observations alone. Why this distinct systematic error exists in the ERP derived from daily GPS SINEX files, whether it affects the estimation of other parameters, and how large its influence is, need further study.
Training Methods for Image Noise Level Estimation on Wavelet Components
A. De Stefano
2004-12-01
The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The widely used method is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage to extract parameters which are then used in the application stage; the sets used for training and testing (13 and 5 images, respectively) are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed the superiority of the training-based methods for the images and the range of noise levels considered.
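For a 1-D signal, the MAD-style baseline the paper compares against can be sketched with finest-scale Haar detail coefficients (a deliberate simplification of the 2-D wavelet setting; 0.6745 is the standard Gaussian consistency constant, and the test signal below is an assumption):

```python
import numpy as np

def noise_sigma_mad(x):
    """MAD-based noise estimate from finest-scale (Haar) detail coefficients
    of a 1-D signal: sigma ~ median(|d|) / 0.6745."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]                 # even length for pairing
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # Haar detail coefficients
    return np.median(np.abs(d)) / 0.6745

rng = np.random.default_rng(3)
t = np.linspace(0.0, 4.0 * np.pi, 10000)
noisy = np.sin(t) + 2.0 * rng.standard_normal(t.size)
print(noise_sigma_mad(noisy))   # close to the true sigma of 2.0
```

The estimate is robust because the smooth signal contributes almost nothing to the finest-scale details, so their median absolute value is driven by the noise alone; this is the model assumption the paper's training-based methods aim to relax.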
On methods of estimating cosmological bulk flows
Nusser, Adi
2015-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, $\bf B$, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of $\bf B$ as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer $\bf B$ for either of these definitions, which coincide only for a constant velocity field. We focus on the Wiener Filtering (WF, Hoffman et al. 2015) and the Constrained Minimum Variance (CMV, Feldman et al. 2010) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute $\bf B$ in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer $\bf B$ directly from the observed velocities for the second definition of $\bf B$. The WF ...
Entropy Based Modelling for Estimating Demographic Trends.
Guoqi Li
In this paper, an entropy-based method is proposed to forecast the demographic changes of countries. We formulate the estimation of future demographic profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases over time. The proposed method involves three stages: (1) prediction of the age distribution of a country's population based on an "age-structured population model"; (2) estimation of the age distribution of each individual household size with an entropy-based formulation based on an "individual household size model"; and (3) estimation of the number of each household size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated on real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables.
Pflugradt, Maik; Geissdoerfer, Kai; Goernig, Matthias; Orglmeister, Reinhold
2017-01-01
Automatic detection of ectopic beats has become a thoroughly researched topic, with literature providing manifold proposals typically incorporating morphological analysis of the electrocardiogram (ECG). Although being well understood, its utilization is often neglected, especially in practical monitoring situations like online evaluation of signals acquired in wearable sensors. Continuous blood pressure estimation based on pulse wave velocity considerations is a prominent example, which depends on careful fiducial point extraction and is therefore seriously affected during periods of increased occurring extrasystoles. In the scope of this work, a novel ectopic beat discriminator with low computational complexity has been developed, which takes advantage of multimodal features derived from ECG and pulse wave relating measurements, thereby providing additional information on the underlying cardiac activity. Moreover, the blood pressure estimations’ vulnerability towards ectopic beats is closely examined on records drawn from the Physionet database as well as signals recorded in a small field study conducted in a geriatric facility for the elderly. It turns out that a reliable extrasystole identification is essential to unsupervised blood pressure estimation, having a significant impact on the overall accuracy. The proposed method further convinces by its applicability to battery driven hardware systems with limited processing power and is a favorable choice when access to multimodal signal features is given anyway. PMID:28098831
SHA Yun-dong; GUO Xiao-peng; LIAO Lian-fang; XIE Li-juan
2011-01-01
For the sonic fatigue problem of an aero-engine combustor liner structure under random acoustic loadings, an effective method for predicting the fatigue life of a structure under random loadings was studied. First, the probability distribution of the Von Mises stress of a thin-walled structure under random loadings was studied; the analysis suggested that the probability density function of the Von Mises stress process accords approximately with a two-parameter Weibull distribution, and formulas for calculating the Weibull parameters were given. Based on the Miner linear theory, a method for predicting random sonic fatigue life based on the stress probability density was developed, and a model for fatigue life prediction was constructed. As an example, an aero-engine combustor liner structure was considered. The power spectral density (PSD) of the vibrational stress response was calculated using a coupled FEM/BEM (finite element method/boundary element method) model, and the fatigue life was estimated using the constructed model; considering the influence of the wide frequency band, the calculated results were modified. Comparative analysis shows that the sonic fatigue estimates for the combustor liner structure obtained using the Weibull distribution of the Von Mises stress are somewhat more conservative than those using the Dirlik distribution. The results show that the methods presented in this paper are practical for the random fatigue life analysis of aeronautical thin-walled structures.
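The Miner-rule step can be sketched numerically: with a two-parameter Weibull stress-amplitude density and a Basquin-type S-N curve N(s) = C / s^m, the expected damage rate involves E[s^m], which for a Weibull has the closed form scale^m * Gamma(1 + m/k). All parameter values below are hypothetical:

```python
import math
import numpy as np

# Hypothetical Weibull stress-amplitude distribution and S-N curve parameters
k, scale = 2.0, 50.0        # Weibull shape and scale of stress amplitude, MPa
m, C = 4.0, 1.0e16          # S-N curve N(s) = C / s**m
nu0 = 100.0                 # mean rate of stress cycles, Hz

s = np.linspace(1e-3, 400.0, 200001)
pdf = (k / scale) * (s / scale) ** (k - 1.0) * np.exp(-((s / scale) ** k))

# Miner's rule: expected damage per cycle is E[s**m] / C; integrate numerically
y = s ** m * pdf
Esm_num = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s)))   # trapezoidal rule
Esm_exact = scale ** m * math.gamma(1.0 + m / k)               # Weibull closed form

life_s = C / (nu0 * Esm_num)          # expected fatigue life, seconds
print(Esm_num, Esm_exact, life_s)
```

The numeric integral reproduces the closed form, and the same integral with the Dirlik density in place of the Weibull one would give the alternative estimate the abstract compares against.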
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
1995-01-01
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
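A minimal sketch of a Hill-type estimate of the correlation exponent (without the bootstrap refinement the paper proposes): for small r the pairwise-distance law P(d < r) ~ const * r^nu holds with nu = 1 for a one-dimensional uniform sample, so the mean log-spacing of the smallest distances estimates 1/nu. The sample below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
x = rng.uniform(0.0, 1.0, n)          # 1-D uniform sample: correlation exponent is 1

# All pairwise distances (upper triangle, no self-pairs)
d = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]

# Hill-type estimator on the k smallest distances
k = 1000
r = np.sort(d)[: k + 1]
nu_hat = 1.0 / float(np.mean(np.log(r[k] / r[:k])))
print(nu_hat)                         # near 1 for this sample
```

The classical Hill estimator applies the same log-spacing average to the largest order statistics of a heavy-tailed sample; here it is mirrored onto the lower tail of the distance distribution.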
Kumari, Rekha; Varghese, Anitha; George, Louis
2017-01-01
Absorption and fluorescence studies on novel Schiff bases (E)-4-(4-(4-nitro benzylideneamino)benzyl)oxazolidin-2-one (NBOA) and (E)-4-(4-(4-chlorobenzylidene amino)benzyl)oxazolidin-2-one (CBOA) were recorded in a series of twelve solvents upon increasing polarity at room temperature. Large Stokes shift indicates bathochromic fluorescence band for both the molecules. The photoluminescence properties of Schiff bases containing electron withdrawing and donating substituents were analyzed. Intramolecular charge transfer behavior can be studied based on the influence of different substituents in Schiff bases. Changes in position and intensity of absorption and fluorescence spectra are responsible for the stabilization of singlet excited-states of Schiff base molecules with different substituents, in polar solvents. This is attributed to the Intramolecular charge transfer (ICT) mechanism. In case of electron donating (-Cl) substituent, ICT contributes largely to positive solvatochromism when compared to electron withdrawing (-NO2) substituent. Ground-state and singlet excited-state dipole moments of NBOA and CBOA were calculated experimentally using solvent polarity function approaches given by Lippert-Mataga, Bakhshiev, Kawskii-Chamma-Viallet and Reichardt. Due to considerable π- electron density redistribution, singlet excited-state dipole moment was found to be greater than ground-state dipole moment. Ground-state dipole moment value which was determined by quantum chemical method was used to estimate excited-state dipole moment using solvatochromic correlations. Kamlet-Abboud-Taft and Catalan multiple linear regression approaches were used to study non-specific solute-solvent interaction and hydrogen bonding interactions in detail. Optimized geometry and HOMO-LUMO energies of NBOA and CBOA have been determined by DFT and TD-DFT/PCM (B3LYP/6-311G (d, p)). Mulliken charges and molecular electrostatic potential have also been evaluated from DFT calculations.
Point estimation of root finding methods
2008-01-01
This book sets out to state computationally verifiable initial conditions for predicting the immediate appearance of guaranteed and fast convergence of iterative root-finding methods. Attention is paid to iterative methods for the simultaneous determination of polynomial zeros in the spirit of Smale's point estimation theory, introduced in 1986. Some basic concepts and Smale's theory for Newton's method, together with its modifications and higher-order methods, are presented in the first two chapters. The remaining chapters contain the author's recent results on initial conditions guaranteeing the convergence of a wide class of iterative methods for solving algebraic equations. These conditions are of practical interest since they depend only on available data: the information of a function whose zeros are sought and initial approximations. The convergence approach presented can be applied in designing a package for the simultaneous approximation of polynomial zeros.
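Smale-style point estimation can be illustrated on a quadratic: for f(x) = x^2 - 2 the invariant alpha(f, z) = beta * gamma reduces to beta = |f/f'| and gamma = |f''/(2 f')|, and an initial point with alpha below the commonly quoted threshold (13 - 3*sqrt(17))/4 is an "approximate zero", i.e. Newton iteration from it is guaranteed to converge. This is a minimal sketch, not the book's general polynomial machinery:

```python
import math

def alpha(z):
    """Smale's alpha invariant for f(x) = x**2 - 2 at the point z."""
    f, df, ddf = z * z - 2.0, 2.0 * z, 2.0
    return abs(f / df) * abs(ddf / (2.0 * df))

z = 1.5
# z passes the computationally verifiable initial condition
assert alpha(z) < (13.0 - 3.0 * math.sqrt(17.0)) / 4.0

for _ in range(6):                       # Newton iteration
    z = z - (z * z - 2.0) / (2.0 * z)
print(z)                                 # converges quadratically toward sqrt(2)
```

The point of the theory is exactly what the assert checks: the condition is computable from the data at the initial point alone, before any iteration is run.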
Muscle parameters estimation based on biplanar radiography.
Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W
2016-11-01
The evaluation of muscle and joint forces in vivo is still a challenge. Musculo-skeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculo-skeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of the bones and skin envelope was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shape of the reference model to the X-ray model. Twelve musculo-skeletal models were reconstructed and compared to their reference. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, this method provided an accurate estimation of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography.
The estimation method of GPS instrumental biases
2001-01-01
A model for estimating global positioning system (GPS) instrumental biases and methods to calculate the relative instrumental biases of satellites and receivers are presented. The calculated results for the GPS instrumental biases, the relative instrumental biases of satellites and receivers, and the total electron content (TEC) are also shown. Finally, the stability of the GPS instrumental biases, as well as that of the satellite and receiver instrumental biases, is evaluated, indicating that they are very stable over a period of two and a half months.
Lifetime estimation methods in power transformer insulation
Mohammad Ali Taghikhani
2012-01-01
Mineral oil in a power transformer plays an important role in cooling and insulation, and is subject to aging and chemical reactions such as oxidation. Increases in oil temperature cause a loss of quality, so the oil should be checked regularly. Studies have been carried out on power transformer oils of different ages used in the Iranian power grid to identify the true relationship between age and other characteristics of the oil. In this paper the first method to estimate the life of p...
Landau, William Michael; Liu, Peng
2013-01-01
A central goal of RNA sequencing (RNA-seq) experiments is to detect differentially expressed genes. In the ubiquitous negative binomial model for RNA-seq data, each gene is given a dispersion parameter, and correctly estimating these dispersion parameters is vital to detecting differential expression. Since the dispersions control the variances of the gene counts, underestimation may lead to false discovery, while overestimation may lower the rate of true detection. After briefly reviewing several popular dispersion estimation methods, this article describes a simulation study that compares them in terms of point estimation and the effect on the performance of tests for differential expression. The methods that maximize the test performance are the ones that use a moderate degree of dispersion shrinkage: the DSS, Tagwise wqCML, and Tagwise APL. In practical RNA-seq data analysis, we recommend using one of these moderate-shrinkage methods with the QLShrink test in the QuasiSeq R package. PMID:24349066
An evaluation of methods for estimating decadal stream loads
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended-sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between
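Of the methods named above, Beale's ratio estimator is simple enough to sketch. The following is a minimal, hedged implementation of the unstratified bias-corrected form, assuming paired concentration-discharge samples and a complete daily flow record; variable names and units are illustrative, not taken from the study:

```python
import numpy as np

def beale_ratio_load(sample_conc, sample_flow, all_flow):
    """Unstratified Beale bias-corrected ratio estimator of mean daily load.

    sample_conc: constituent concentration on sampled days
    sample_flow: discharge on the same sampled days
    all_flow:    the complete daily discharge record
    Returns the estimated mean daily load in concentration*flow units.
    """
    l = np.asarray(sample_conc, dtype=float) * np.asarray(sample_flow, dtype=float)
    q = np.asarray(sample_flow, dtype=float)
    n = len(q)
    lbar, qbar = l.mean(), q.mean()
    s_lq = np.sum((l - lbar) * (q - qbar)) / (n - 1)   # sample covariance
    s_qq = np.sum((q - qbar) ** 2) / (n - 1)           # sample variance of flow
    bias_corr = (1 + s_lq / (n * lbar * qbar)) / (1 + s_qq / (n * qbar ** 2))
    return np.mean(all_flow) * (lbar / qbar) * bias_corr
```

When concentration is constant the sampled load-to-flow ratio is exact and the bias correction reduces to 1, so the estimate is simply the mean flow times that concentration.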
Simultaneous estimation of esomeprazole and domperidone by UV spectrophotometric method
Prabu S
2008-01-01
A novel, simple, sensitive and rapid spectrophotometric method has been developed for the simultaneous estimation of esomeprazole and domperidone. The method involves solving simultaneous equations based on absorbance measurements at two wavelengths, 301 nm and 284 nm, the λmax of esomeprazole and domperidone, respectively. Beer's law was obeyed in the concentration ranges of 5-20 µg/ml and 8-30 µg/ml for esomeprazole and domperidone, respectively. The method was found to be precise, accurate, and specific. The proposed method was successfully applied to the estimation of esomeprazole and domperidone in a combined solid dosage form.
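The simultaneous-equations approach described above reduces to a 2×2 linear system: the absorbance at each wavelength is the sum of the Beer's-law contributions of the two drugs. A minimal sketch, with hypothetical absorptivity values (in practice these are measured from pure standards of each drug):

```python
import numpy as np

# Hypothetical absorptivities (absorbance per µg/ml), illustrative only.
# rows: wavelengths [301 nm, 284 nm]; columns: [esomeprazole, domperidone]
E = np.array([[0.045, 0.020],
              [0.018, 0.050]])

def concentrations(a301, a284):
    """Solve the two simultaneous Beer's-law equations
    A(lambda) = e_eso(lambda)*C_eso + e_dom(lambda)*C_dom
    for the two concentrations."""
    return np.linalg.solve(E, np.array([a301, a284]))
```

Given the measured absorbances of a mixture at 301 nm and 284 nm, `concentrations` returns the two drug concentrations directly.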
Nieuwenhout, F.; Van der Borg, N. [Energy Research Centre of the Netherlands, Petten (Netherlands); Van Sark, W.; Turkenburg, W. [Utrecht University (Netherlands). Copernicus Institute for Sustainable Development and Innovation, Department of Science, Technology and Society
2006-07-01
In order to evaluate the performance of solar home systems (SHSs), data on local insolation is a prerequisite. We present a new method to estimate insolation if direct measurements are unavailable. This method comprises estimation of daily irradiation by correlating photovoltaic (PV) module currents from a number of SHSs located a few kilometres apart. The method was tested with a 3-year time series for nine SHSs in a remote area in Indonesia. Verification with reference cell measurements over a 2-month period showed that our method could determine average daily irradiation with a mean bias error of 1.3%. Daily irradiation figures showed a standard error of 5%. The systematic error in this method is estimated to be around 10%. Especially if calibration with measurements during a short period is possible, the proposed method provides more accurate monthly insolation figures compared with the readily available satellite data from the NASA SSE database. An advantage of the proposed method over satellite data is that irradiation figures can be calculated on a daily basis, while the SSE database only provides monthly averages. It is concluded that the new method is a valuable tool to obtain information on insolation when long-term measurements are absent. (author)
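The core idea can be sketched as follows: each module current scales roughly linearly with irradiance, so after removing each system's own scale factor, the cross-system mean tracks daily insolation, and a short reference-cell series fixes the absolute calibration. This is a hedged reconstruction, not the authors' exact algorithm:

```python
import numpy as np

def estimate_daily_irradiation(currents, ref_irradiation):
    """currents: (n_days, n_systems) daily-integrated PV module currents (Ah).
    ref_irradiation: short reference-cell series (kWh/m2/day) covering the
    first len(ref_irradiation) days, used only for absolute calibration.
    """
    norm = currents / np.nanmean(currents, axis=0)   # remove per-system scale
    index = np.nanmean(norm, axis=1)                 # relative daily insolation
    k = len(ref_irradiation)
    calib = np.nanmean(ref_irradiation) / np.nanmean(index[:k])
    return index * calib
```

Averaging the normalized currents across systems a few kilometres apart suppresses per-system effects (shading, load behaviour) that are uncorrelated between sites.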
Advancing methods for global crop area estimation
King, M. L.; Hansen, M.; Adusei, B.; Stehman, S. V.; Becker-Reshef, I.; Ernst, C.; Noel, J.
2012-12-01
Cropland area estimation is a challenge, made difficult by the variety of cropping systems, including crop types, management practices, and field sizes. A MODIS-derived indicator mapping product (1) developed from 16-day MODIS composites has been used to target crop type at national scales for the stratified sampling (2) of higher spatial resolution data in a standardized approach to estimating cultivated area. A global prototype is being developed using soybean, a global commodity crop with recent LCLUC dynamics and a relatively unambiguous spectral signature, for the United States, Argentina, Brazil, and China, which together represent nearly ninety percent of soybean production. Supervised classification of soy cultivated area is performed for 40 km2 sample blocks using time-series Landsat imagery. This method, given appropriate data for representative sampling with higher spatial resolution, represents an efficient and accurate approach for large-area crop type estimation. Results for the United States sample blocks have exhibited strong agreement with the National Agricultural Statistics Service's (NASS) Cropland Data Layer (CDL). A confusion matrix showed 91.56% agreement and a kappa of 0.67 between the two products. Field measurements and RapidEye imagery have been collected for the USA, Brazil and Argentina to further assess product accuracies. The results of this research will demonstrate the value of MODIS crop type indicator products and Landsat sample data in estimating soybean cultivated area at national scales, enabling an internally consistent global assessment of annual soybean production.
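The agreement statistics quoted above (percent agreement and Cohen's kappa) can be computed directly from a confusion matrix of pixel counts; a minimal sketch:

```python
import numpy as np

def agreement_and_kappa(cm):
    """Percent agreement and Cohen's kappa from a square confusion matrix
    of pixel counts (product A on rows, product B on columns)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    p_o = np.trace(cm) / total                           # observed agreement
    p_e = (cm.sum(axis=1) @ cm.sum(axis=0)) / total**2   # chance agreement
    return p_o, (p_o - p_e) / (1 - p_e)
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it is reported alongside the raw agreement percentage.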
A Novel Monopulse Angle Estimation Method for Wideband LFM Radars
Yi-Xiong Zhang
2016-06-01
Traditional monopulse angle estimation is mainly based on phase-comparison and amplitude-comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has received little attention in previous works. As noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in accumulating energy from the high resolution range profile (HRRP) of the monopulse channels. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate performance similar to the traditional amplitude-comparison method, indicating that the proposed method can be adopted for angle estimation. With the proposed method, future radars may need only wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen anti-jamming capability. More importantly, the estimated angle does not become ambiguous at arbitrary angles, which significantly extends the usable angle range in wideband radars.
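A simplified way to see how angle estimation reduces to frequency estimation: multiplying one channel's LFM echo by the conjugate of a delayed copy yields a complex sinusoid whose frequency K·τ encodes the inter-channel delay (and hence the angle). The sketch below illustrates that reduction with illustrative parameter values; it is not the paper's exact CCF algorithm:

```python
import numpy as np

fs = 1e6        # sample rate (Hz)      -- illustrative values throughout
K = 1e8         # LFM chirp rate (Hz/s)
T = 1e-3        # pulse length (s)
tau = 2e-4      # inter-channel delay to recover (s)

t = np.arange(int(T * fs)) / fs
s1 = np.exp(1j * np.pi * K * t ** 2)           # LFM echo in channel 1
s2 = np.exp(1j * np.pi * K * (t - tau) ** 2)   # delayed echo in channel 2

beat = s1 * np.conj(s2)                        # complex tone at f = K*tau
n_fft = 8 * len(beat)                          # zero-pad for a finer peak
spec = np.abs(np.fft.fft(beat, n_fft))
freqs = np.fft.fftfreq(n_fft, 1 / fs)
f_hat = abs(freqs[np.argmax(spec)])            # dominant frequency
tau_hat = f_hat / K                            # recovered delay
```

Because the delay appears as a single tone, the whole echo energy accumulates into one spectral peak, which is what makes the approach robust at low SNR.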
Recursive algorithm for the two-stage EFOP estimation method
LUO GuiMing; HUANG Jian
2008-01-01
A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method is proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance and can precisely identify models from fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method and requires too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Simulation examples are included to demonstrate the validity of the new method.
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
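As an illustration of the kind of Kalman filtering step applied before the PCT, here is a minimal constant-velocity Kalman filter run over a noisy 1-D marker trajectory; all tuning values are hypothetical, not from the study:

```python
import numpy as np

def kalman_smooth(z, dt=0.01, q=1.0, r=0.04):
    """Constant-velocity Kalman filter over a noisy 1-D marker trajectory z.
    q (process noise intensity) and r (measurement noise variance) are
    hypothetical tuning values. Returns the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x = np.array([z[0], 0.0])        # state: [position, velocity]
    P = np.eye(2)
    out = []
    for zk in z:
        x = F @ x                    # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R          # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

The filter trades measurement noise against the motion model, which is why the filtered signal is smoother than the raw marker data without the phase distortion of a simple low pass filter.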
Rank-based camera spectral sensitivity estimation.
Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal
2016-04-01
In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab (a difficult and lengthy procedure) or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera RGB response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve of the sensor. Technically, each rank pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a "raw mode." Experiments validate our method.
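The rank-order constraint described above is linear: if stimulus i produces a larger response than stimulus j, the sensor s must satisfy (c_i - c_j)·s > 0, where c_i is the colour signal of stimulus i. A small sketch with synthetic data (the Gaussian "sensor" and random stimuli are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_wave, n_stimuli = 31, 50
stimuli = rng.random((n_stimuli, n_wave))     # hypothetical colour signals

# Gaussian "sensor", used here only to generate the rank order of responses
true_s = np.exp(-0.5 * ((np.arange(n_wave) - 15) / 5.0) ** 2)
resp = stimuli @ true_s

# each ranked pair (i, j) with resp[i] > resp[j] gives (c_i - c_j) . s > 0
order = np.argsort(resp)
A = stimuli[order[1:]] - stimuli[order[:-1]]  # rows must satisfy A @ s > 0

def feasible(s):
    """True if candidate sensor s is consistent with every rank constraint."""
    return bool(np.all(A @ s > 0))
```

Each row of `A` is one half-space constraint; intersecting them carves out the feasible region of sensor space that the paper searches.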
Multiadaptive Galerkin Methods for ODEs III: A Priori Error Estimates
Logg, Anders
2012-01-01
The multiadaptive continuous/discontinuous Galerkin methods mcG(q) and mdG(q) for the numerical solution of initial value problems for ordinary differential equations are based on piecewise polynomial approximation of degree q on partitions in time with time steps which may vary for different components of the computed solution. In this paper, we prove general order a priori error estimates for the mcG(q) and mdG(q) methods. To prove the error estimates, we represent the error in terms of a discrete dual solution and the residual of an interpolant of the exact solution. The estimates then follow from interpolation estimates, together with stability estimates for the discrete dual solution.
Maryam Farhadian
2015-01-01
Background: Prediction models are used in a variety of medical domains, and they are frequently built from experience which constitutes data acquired from actual cases. This study aimed to analyze the potential of artificial neural networks and logistic regression techniques for estimation of hearing impairment among industrial workers. Materials and Methods: A total of 210 workers employed in a steel factory (in the west of Iran) were selected, and their occupational exposure histories were analyzed. The hearing loss thresholds of the studied workers were determined using a calibrated audiometer. The personal noise exposures were also measured using a noise dosimeter in the workstations. Data obtained from five variables, which can influence the hearing loss, were used as input features, and the hearing loss thresholds were considered as target feature of the prediction methods. Multilayer feedforward neural networks and logistic regression were developed using MATLAB R2011a software. Results: Based on the World Health Organization classification for the grades of hearing loss, 74.2% of the studied workers have normal hearing thresholds, 23.4% have slight hearing loss, and 2.4% have moderate hearing loss. The accuracy and kappa coefficient of the best developed neural networks for prediction of the grades of hearing loss were 88.6 and 66.30, respectively. The accuracy and kappa coefficient of the logistic regression were also 84.28 and 51.30, respectively. Conclusion: Neural networks could provide more accurate predictions of the hearing loss than logistic regression. The prediction method can provide reliable and comprehensible information for occupational health and medicine experts.
Adaptive Methods for Permeability Estimation and Smart Well Management
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
Rosecrance, R. C.; Johnson, L.; Soderstrom, D.
2016-12-01
Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low-flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an ImageJ thresholding routine. Correlations between the various estimates are discussed.
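A common way to turn NDVI into fractional cover is linear unmixing between bare-soil and full-canopy NDVI endmembers. The sketch below uses hypothetical endmember values; SIMS and the study above may use different relationships:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def fractional_cover(ndvi_val, ndvi_soil=0.15, ndvi_full=0.85):
    """Linear-unmixing estimate of fractional green canopy cover (Fc).
    ndvi_soil and ndvi_full are hypothetical endmembers; site-specific
    values should be calibrated against ground measurements."""
    fc = (ndvi_val - ndvi_soil) / (ndvi_full - ndvi_soil)
    return np.clip(fc, 0.0, 1.0)
```

Ground-based estimates such as the light bar or tree-crown measurements are exactly what would be used to calibrate the two endmembers for a given orchard.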
Hesham M. Sallam
2016-03-01
The Fayum Depression of Egypt has yielded fossils of hystricognathous rodents from multiple Eocene and Oligocene horizons that range in age from ∼37 to ∼30 Ma and document several phases in the early evolution of crown Hystricognathi and one of its major subclades, Phiomorpha. Here we describe two new genera and species of basal phiomorphs, Birkamys korai and Mubhammys vadumensis, based on rostra and maxillary and mandibular remains from the terminal Eocene (∼34 Ma) Fayum Locality 41 (L-41). Birkamys is the smallest known Paleogene hystricognath, has very simple molars, and, like derived Oligocene-to-Recent phiomorphs (but unlike contemporaneous and older taxa) apparently retained dP4∕4 late into life, with no evidence for P4∕4 eruption or formation. Mubhammys is very similar in dental morphology to Birkamys, and also shows no evidence for P4∕4 formation or eruption, but is considerably larger. Though parsimony analysis with all characters equally weighted places Birkamys and Mubhammys as sister taxa of extant Thryonomys to the exclusion of much younger relatives of that genus, all other methods (standard Bayesian inference, Bayesian “tip-dating,” and parsimony analysis with scaled transitions between “fixed” and polymorphic states) place these species in more basal positions within Hystricognathi, as sister taxa of Oligocene-to-Recent phiomorphs. We also employ tip-dating as a means for estimating the ages of early hystricognath-bearing localities, many of which are not well-constrained by geological, geochronological, or biostratigraphic evidence. By simultaneously taking into account phylogeny, evolutionary rates, and uniform priors that appropriately encompass the range of possible ages for fossil localities, dating of tips in this Bayesian framework allows paleontologists to move beyond vague and assumption-laden “stage of evolution” arguments in biochronology to provide relatively rigorous age assessments of poorly
A fusion method for estimate of trajectory
吴翊; 朱炬波
1999-01-01
The multiple-station method is important in missile and space tracking systems. A fusion method is presented. Based on the theory of multiple-station tracking, and starting with an investigation of the location precision of a single station, a recognition model for occasional system error is constructed, and a principle for preventing pollution by occasional system error is presented. Theoretical analysis and simulation results prove the proposed method correct.
Renxin Xiao
2016-03-01
In order to properly manage the lithium-ion batteries of electric vehicles (EVs), it is essential to build a battery model and estimate the state of charge (SOC). In this paper, fractional-order forms of the Thevenin and partnership for a new generation of vehicles (PNGV) models are built, whose parameters, including the fractional orders and the corresponding resistance and capacitance values, are identified simultaneously using a genetic algorithm (GA). The relationships between the different model parameters and SOC are established and analyzed. The calculation precision of the fractional order model (FOM) and the integral order model (IOM) is validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs simulate the output voltage more accurately and that the fractional order EKF (FOEKF) estimates the SOC more precisely under dynamic conditions.
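For readers unfamiliar with the pipeline, here is a minimal EKF for SOC estimation on an integer-order first-order Thevenin model (the paper's fractional-order models require a fractional-calculus state update, which is omitted here). All parameter values, including the linear OCV curve, are hypothetical:

```python
import numpy as np

# Hypothetical first-order (integer) Thevenin parameters
Q_cap = 7200.0             # capacity (As), i.e. 2 Ah
R0, R1, C1 = 0.05, 0.02, 1000.0
dt = 1.0                   # time step (s)

def ocv(soc):              # hypothetical linear OCV curve
    return 3.2 + 0.9 * soc

def ekf_soc(currents, voltages, soc0=0.5):
    """EKF over state x = [SOC, U1]; discharge current is positive.
    Measurement model: v = ocv(SOC) - U1 - R0*i."""
    a = np.exp(-dt / (R1 * C1))
    F = np.array([[1.0, 0.0], [0.0, a]])
    x = np.array([soc0, 0.0])
    P = np.diag([0.1, 0.01])
    Q = np.diag([1e-6, 1e-5])
    R = 1e-3
    H = np.array([0.9, -1.0])   # measurement Jacobian (OCV slope is 0.9)
    socs = []
    for i, v in zip(currents, voltages):
        # predict: coulomb counting for SOC, RC relaxation for U1
        x = np.array([x[0] - i * dt / Q_cap, a * x[1] + R1 * (1 - a) * i])
        P = F @ P @ F.T + Q
        # update against the measured terminal voltage
        y = v - (ocv(x[0]) - x[1] - R0 * i)
        S = H @ P @ H + R
        K = P @ H / S
        x = x + K * y
        P = P - np.outer(K, H) @ P
        socs.append(x[0])
    return np.array(socs)
```

Even when started at a wrong initial SOC, the voltage feedback pulls the estimate back toward the true trajectory, which is the practical advantage of the EKF over open-loop coulomb counting.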
Estimating seismic demand parameters using the endurance time method
Ramin MADARSHAHIAN; Homayoon ESTEKANCHI; Akbar MAHVASHMOHAMMADI
2011-01-01
The endurance time (ET) method is a time history based dynamic analysis in which structures are subjected to gradually intensifying excitations and their performances are judged based on their responses at various excitation levels. Using this method, the computational effort required for estimating probable seismic demand parameters can be reduced by an order of magnitude. Calculation of the maximum displacement or target displacement is a basic requirement for estimating performance based on structural design. The purpose of this paper is to compare the results of the nonlinear ET method with the nonlinear static pushover (NSP) method of FEMA 356 by evaluating performances and target displacements of steel frames. This study will lead to a deeper insight into the capabilities and limitations of the ET method. The results are further compared with those of the standard nonlinear response history analysis. We conclude that results from the ET analysis are in proper agreement with those from standard procedures.
Information-theoretic methods for estimating complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Applicability of available methods for incidence estimation among blood donors
Shtmian Zou; Edward P.Notari IV; Roger Y.Dodd
2010-01-01
Incidence rates of major transfusion-transmissible viral infections have been estimated through widely used seroconversion approaches and recently developed methods. A quality database of blood donors and donations with the capacity to track the donation history of each donor is the basis for incidence estimation and many other epidemiological studies. Depending on the available data, different approaches have been used to determine incidence rates based on conversion from uninfected to infected status among repeat donors.
A Computationally Efficient Method for Polyphonic Pitch Estimation
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the fast resonator time-frequency image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
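The harmonic-grouping step can be sketched as a toy salience function: for each candidate pitch, sum the spectral energy at its first few harmonics and pick the peaks. The paper computes this from the RTFI energy spectrum; the sketch below substitutes a plain windowed FFT:

```python
import numpy as np

def pitch_salience(x, fs, f_candidates, n_harm=5):
    """Toy harmonic-grouping salience: for each candidate f0, sum windowed
    FFT energy at its first n_harm harmonics."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    sal = []
    for f0 in f_candidates:
        bins = [int(np.argmin(np.abs(freqs - k * f0)))
                for k in range(1, n_harm + 1)]
        sal.append(spec[bins].sum())
    return np.array(sal)
```

Harmonic grouping is what separates the true fundamental from sub-harmonic candidates: an octave-down candidate only captures half of the tone's partials, so its summed energy is lower.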
Aircraft Combat Survivability Estimation and Synthetic Tradeoff Methods
LI Shu-lin; LI Shou-an; LI Wei-ji; LI Dong-xia; FENG Feng
2005-01-01
A new concept is proposed: susceptibility, vulnerability, reliability, maintainability and supportability should be the essential factors of aircraft combat survivability. A weight coefficient method and a synthetic method are proposed to estimate aircraft combat survivability based on these essential factors. Considering that enhancing aircraft combat survivability incurs cost, a synthetic tradeoff model between aircraft combat survivability and life cycle cost is built. The estimation methods and the synthetic tradeoff model will be helpful for aircraft combat survivability design and enhancement.
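A weight coefficient method of this kind presumably aggregates the five factor scores into a single index; the sketch below is a hedged illustration in which the linear form, the weights, and the per-unit-cost tradeoff are all assumptions, not the paper's actual model:

```python
FACTORS = ("susceptibility", "vulnerability", "reliability",
           "maintainability", "supportability")

def survivability_index(scores, weights):
    """Weighted-coefficient synthesis of the five factor scores (each
    normalized to [0, 1]); weights are hypothetical design priorities
    that must sum to 1."""
    assert len(scores) == len(weights) == len(FACTORS)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, scores))

def survivability_per_cost(scores, weights, life_cycle_cost):
    """Crude tradeoff figure: survivability achieved per unit of cost."""
    return survivability_index(scores, weights) / life_cycle_cost
```

Comparing `survivability_per_cost` across candidate designs is one simple way to express the survivability-versus-cost tradeoff the abstract describes.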
Farhadian, Maryam; Aliabadi, Mohsen; Darvishi, Ebrahim
2015-01-01
Prediction models are used in a variety of medical domains, and they are frequently built from experience which constitutes data acquired from actual cases. This study aimed to analyze the potential of artificial neural networks and logistic regression techniques for estimation of hearing impairment among industrial workers. A total of 210 workers employed in a steel factory (in West of Iran) were selected, and their occupational exposure histories were analyzed. The hearing loss thresholds of the studied workers were determined using a calibrated audiometer. The personal noise exposures were also measured using a noise dosimeter in the workstations. Data obtained from five variables, which can influence the hearing loss, were used as input features, and the hearing loss thresholds were considered as target feature of the prediction methods. Multilayer feedforward neural networks and logistic regression were developed using MATLAB R2011a software. Based on the World Health Organization classification for the grades of hearing loss, 74.2% of the studied workers have normal hearing thresholds, 23.4% have slight hearing loss, and 2.4% have moderate hearing loss. The accuracy and kappa coefficient of the best developed neural networks for prediction of the grades of hearing loss were 88.6 and 66.30, respectively. The accuracy and kappa coefficient of the logistic regression were also 84.28 and 51.30, respectively. Neural networks could provide more accurate predictions of the hearing loss than logistic regression. The prediction method can provide reliable and comprehensible information for occupational health and medicine experts.
Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe;
2016-01-01
. In 2013, 48,310 (90% plausibility range: 39,900-45,560) MSM were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. 62% of the total HIV-positive population are thought...... to have viral load plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data, however plausibility...
Asif, Ali
There is an increasing concern about excessive use of herbicides for weed control in arable lands. Usually the whole field is sprayed uniformly, while the distribution of weeds often is non-uniform. Often there are spots in a field where weed pressure is very low and has no significant effect on ...... to estimate infestation of weeds at early growth stage. The image analysis method was further developed to estimate colour response of applying increasing doses of herbicides in selectivity experiments and to evaluate the weed-suppressing effect of mulches....
Parameter Estimation of ARMAX Model Based on Wavelet RLS Method
李振强
2012-01-01
For the linear ARMAX model with noise-corrupted output data, a parameter estimation method is proposed that estimates the model parameters directly from the input-output data in the wavelet domain. The least squares (LS) method is a standard tool for parameter estimation in the time domain; with the development of the wavelet transform, wavelet-domain processing has come to play an important role in signal processing. After the wavelet transform, the signal carries both time and frequency characteristics, and denoising in the wavelet domain is more effective than in either the time or the frequency domain. The model parameters are then estimated by a recursive least squares method in the wavelet domain; simulations comparing it with time-domain recursive least squares show the proposed method to be feasible and effective.
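The core idea can be sketched with a batch (non-recursive) least squares fit of a first-order ARX model after a single-level orthonormal Haar transform; the model orders, parameter values, and noise level are assumptions for the example, and the paper's recursive updates and wavelet-domain denoising step are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 257
u = rng.standard_normal(N)                # input sequence
a_true, b_true = 0.7, 1.5                 # hypothetical ARX parameters
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

# Regression form: y(k) = a*y(k-1) + b*u(k-1)
Y = y[1:]
Phi = np.column_stack([y[:-1], u[:-1]])

def haar(v):
    """One level of the orthonormal Haar transform (length must be even)."""
    e, o = v[0::2], v[1::2]
    return np.concatenate([e + o, e - o]) / np.sqrt(2.0)

# Transform the output and each regressor column into the wavelet domain;
# because the transform is orthonormal, the LS solution is preserved, and a
# thresholding/denoising step could be inserted here before solving.
Yw = haar(Y)
Phiw = np.apply_along_axis(haar, 0, Phi)

theta_w = np.linalg.lstsq(Phiw, Yw, rcond=None)[0]   # wavelet-domain LS
theta_t = np.linalg.lstsq(Phi, Y, rcond=None)[0]     # time-domain LS
```

With no thresholding the two estimates coincide, which is the sanity check below; the wavelet domain becomes advantageous once noisy coefficients are shrunk before the solve.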
McCarthy, Peter M.; Sando, Roy; Sando, Steven K.; Dutton, DeAnn M.
2016-04-05
The U.S. Geological Survey, in cooperation with the Montana Department of Environmental Quality and the Montana Department of Natural Resources and Conservation, developed regional regression equations based on basin and streamflow characteristics for streamflow-gaging stations through water year 2009 that can be used to estimate streamflow characteristics for ungaged sites in western Montana. The regression equations allow estimation of low-flow frequencies; mean annual and mean monthly streamflows; and the 20-, 50-, and 80-percent durations for annual and monthly duration streamflows for ungaged sites in western Montana that are unaffected by regulation.
Discharge estimation based on machine learning
Zhu JIANG; Hui-yan WANG; Wen-wu SONG
2013-01-01
To overcome the limitations of traditional stage-discharge models in describing the dynamic characteristics of a river, a non-parametric machine learning regression method, locally weighted regression, has been used to estimate discharge. To improve the precision and efficiency of river discharge estimation, a novel machine learning method is proposed: the clustering-tree weighted regression method. First, the training instances are clustered. Second, the k-nearest neighbor method is used to assign new stage samples to the best-fit cluster. Finally, the daily discharge is estimated. In the estimation process, the interference of irrelevant information can be avoided, so that the precision and efficiency of daily discharge estimation are improved. Observed data from the Luding Hydrological Station were used for testing. The simulation results demonstrate that the precision of this method is high, providing a new effective method for discharge estimation.
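The baseline technique, plain locally weighted regression for a stage-discharge relation, can be sketched as below; the rating-curve constants and bandwidth are invented for the example, and the paper's clustering-tree refinement is omitted.

```python
import numpy as np

stage = np.linspace(0.5, 5.0, 100)          # observed stage h (m), synthetic
discharge = 3.0 * stage ** 1.8              # synthetic rating curve Q(h)

def lwr_predict(h0, h, q, bandwidth=0.3):
    """Locally weighted linear fit around h0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((h - h0) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(h), h - h0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ q)
    return beta[0]                          # local intercept = prediction at h0

q_hat = lwr_predict(2.3, stage, discharge)
q_true = 3.0 * 2.3 ** 1.8
rel_err = abs(q_hat - q_true) / q_true
```

The clustering-tree variant would first restrict `h` and `q` to the cluster selected by the k-nearest neighbor step, which is what avoids the "irrelevant information" the abstract mentions.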
An Estimation Method for the Number of Carrier Frequencies
Xiong Peng
2015-01-01
Full Text Available This paper proposes a method that uses AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, radar pulse signal forms are complex and changeable, and the single pulse with multiple carrier frequencies is among the most typical, for example the frequency shift keying (FSK) signal, the hybrid FSK/linear frequency modulation (FSK-LFM) signal, and the hybrid FSK/bi-phase shift keying (FSK-BPSK) signal. For this kind of multi-carrier single pulse, the method fits an AR model to the complex signal and then computes its power spectrum with the Burg algorithm. Experimental results show that the method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
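A minimal sketch of the idea, not the paper's code: compute a Burg AR spectrum of a two-tone pulse and count its spectral peaks as carrier candidates. The tone frequencies (in cycles/sample), AR order, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = np.arange(256)
x = (np.cos(2 * np.pi * 0.10 * n) + np.cos(2 * np.pi * 0.27 * n)
     + 0.05 * rng.standard_normal(256))

def burg_ar(x, order):
    """Burg recursion; returns AR coefficients a (a[0]=1) and residual power."""
    f = x.astype(float).copy()            # forward prediction errors
    b = x.astype(float).copy()            # backward prediction errors
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)
    for _ in range(order):
        fk, bk = f[1:], b[:-1]
        k = -2.0 * np.dot(fk, bk) / (np.dot(fk, fk) + np.dot(bk, bk))
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]               # Levinson-style coefficient update
        f, b = fk + k * bk, bk + k * fk
        e *= 1.0 - k * k
    return a, e

a, e = burg_ar(x, order=12)
freqs = np.linspace(0.01, 0.49, 2000)
z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
psd = e / np.abs(z @ a) ** 2

# Count local maxima of the AR spectrum as carrier-frequency candidates.
peaks = [i for i in range(1, len(psd) - 1)
         if psd[i] > psd[i - 1] and psd[i] > psd[i + 1]]
top2 = sorted(freqs[sorted(peaks, key=lambda i: psd[i])[-2:]])
```

In practice a threshold relative to the spectral floor would decide how many of the local maxima count as genuine carriers.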
Vortex-Based Aero- and Hydrodynamic Estimation
Hemati, Maziar Sam
Flow control strategies often require knowledge of unmeasurable quantities, thus presenting a need to reconstruct flow states from measurable ones. In this thesis, the modeling, simulation, and estimator design aspects of flow reconstruction are considered. First, a vortex-based aero- and hydrodynamic estimation paradigm is developed to design a wake sensing algorithm for aircraft formation flight missions. The method assimilates wing distributed pressure measurements with a vortex-based wake model to better predict the state of the flow. The study compares Kalman-type algorithms with particle filtering algorithms, demonstrating that the vortex nonlinearities require particle filters to yield adequate performance. Furthermore, the observability structure of the wake is shown to have a negative impact on filter performance regardless of the algorithm applied. It is demonstrated that relative motions can alleviate the filter divergence issues associated with this observability structure. In addition to estimator development, the dissertation addresses the need for an efficient unsteady multi-body aerodynamics testbed for estimator and controller validation studies. A pure vortex particle implementation of a vortex panel-particle method is developed to satisfy this need. The numerical method is demonstrated on the impulsive startup of a flat plate as well as the impulsive startup of a multi-wing formation. It is clear, from these validation studies, that the method is able to accommodate the unsteady wake effects that arise in formation flight missions. Lastly, successful vortex-based estimation is highly dependent on the reliability of the low-order vortex model used in representing the flow of interest. The present treatise establishes a systematic framework for vortex model improvement, grounded in optimal control theory and the calculus of variations. By minimizing model predicted errors with respect to empirical data, the shortcomings of the baseline vortex model
Improved Phasor Estimation Method for Dynamic Voltage Restorer Applications
Ebrahimzadeh, Esmaeil; Farhangi, Shahrokh; Iman-Eini, Hossein;
2015-01-01
The dynamic voltage restorer (DVR) is a series compensator for distribution system applications, which protects sensitive loads against voltage sags by fast voltage injection. The DVR must estimate the magnitude and phase of the measured voltages to achieve the desired performance. This paper...... proposes a phasor parameter estimation algorithm based on a recursive variable and fixed data window least error squares (LES) method for the DVR control system. The proposed algorithm, in addition to decreasing the computational burden, improves the frequency response of the control scheme based...... on the fixed data window LES method. The DVR control system based on the proposed algorithm provides a better compromise between the estimation speed and accuracy of the voltage and current signals and can be implemented using a simple and low-cost processor. The results of the studies indicate...
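The least error squares phasor idea can be sketched for a single fixed data window: project the samples onto a cosine/sine basis and read off magnitude and phase. The 50 Hz fundamental, 1 kHz sampling rate, and signal parameters below are assumptions for illustration, not the paper's setup, and the recursive/variable-window logic is omitted.

```python
import numpy as np

f0, fs = 50.0, 1000.0
t = np.arange(20) / fs                  # one 50 Hz cycle at 1 kHz sampling
mag_true, ph_true = 1.2, 0.5
v = mag_true * np.cos(2 * np.pi * f0 * t + ph_true)   # measured voltage window

# v = A*cos(wt) + B*sin(wt) with A = mag*cos(phase), B = -mag*sin(phase)
H = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
A, B = np.linalg.lstsq(H, v, rcond=None)[0]
mag = np.hypot(A, B)
phase = np.arctan2(-B, A)
```

A DVR controller would repeat this fit each sample over a sliding window, shortening the window during sags to trade accuracy for speed.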
Method and system for estimating herbage uptake of an animal
2011-01-01
The invention relates to a method and a system for estimating the feeding value or the amount of consumed herbage of grazing animals. The estimated herbage uptake is based on measured and possibly estimated data which is supplied as input data to a mathematical model. Measured input data may...... by the model and possibly provided as output data. Measurements may be obtained by a sensor module carried by the animal and the measurements may be wirelessly transmitted from the sensor module to a receiver, possibly via relay transceivers....
Remote sensing image fusion based on Bayesian linear estimation
GE ZhiRong; WANG Bin; ZHANG LiMing
2007-01-01
A new remote sensing image fusion method based on statistical parameter estimation is proposed in this paper. More specifically, Bayesian linear estimation (BLE) is applied to observation models between remote sensing images with different spatial and spectral resolutions. The proposed method only estimates the mean vector and covariance matrix of the high-resolution multispectral (MS) images, instead of assuming a joint distribution between the panchromatic (PAN) image and the low-resolution multispectral image. Furthermore, the proposed method can enhance the spatial resolution of several principal components of the MS images, while the traditional Principal Component Analysis (PCA) method is limited to enhancing only the first principal component. Experimental results with real MS images and a PAN image from Landsat ETM+ demonstrate that the proposed method performs better than traditional methods based on statistical parameter estimation, the PCA-based method, and the wavelet-based method.
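The Bayesian linear (MMSE) estimator underlying such fusion has the closed form x̂ = μ + CHᵀ(HCHᵀ + R)⁻¹(y − Hμ) for a prior x ~ N(μ, C) and observation y = Hx + n, n ~ N(0, R). The two-pixel "high-res from low-res" toy setup below is invented for illustration and is far smaller than a real image model.

```python
import numpy as np

mu = np.array([10.0, 12.0])             # prior mean of two high-res pixels
C = np.array([[4.0, 2.0],
              [2.0, 4.0]])              # prior covariance (correlated pixels)
H = np.array([[0.5, 0.5]])              # low-res pixel = average of the two
R = np.array([[0.1]])                   # observation noise variance

y = np.array([14.0])                    # observed low-res value
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Bayesian gain
x_hat = mu + (K @ (y - H @ mu)).ravel()        # posterior mean estimate
```

The estimate moves both pixels toward the observed average while respecting their prior correlation, which is the mechanism BLE uses to sharpen MS bands with PAN detail.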
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
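The exponential variant of the approach can be sketched with a log-linear least squares fit to contaminant mass discharge (CMD) data; the initial mass is then the integral m0/k of the fitted curve. The discharge parameters and monitoring times below are synthetic assumptions, and the power-function variant would be fitted the same way on log-log axes.

```python
import numpy as np

m0_true, k_true = 50.0, 0.8              # hypothetical discharge parameters
t = np.linspace(0.1, 6.0, 60)            # monitoring times (e.g. years)
cmd = m0_true * np.exp(-k_true * t)      # noiseless CMD record m(t)

# Log-linear least squares: ln m = ln m0 - k*t
slope, intercept = np.polyfit(t, np.log(cmd), 1)
k_hat, m0_hat = -slope, np.exp(intercept)
mass_initial = m0_hat / k_hat            # integral of m(t) over [0, inf)

# "Measured" mass removed over the finite record, for comparison
mass_removed = np.sum(0.5 * (cmd[1:] + cmd[:-1]) * np.diff(t))
```

With real early-time data, the gap between `mass_initial` and the mass removed to date is exactly the remaining mass the method is trying to predict.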
Garofalo, Matteo; Nieus, Thierry; Massobrio, Paolo; Martinoia, Sergio
2009-01-01
Functional connectivity of in vitro neuronal networks was estimated by applying different statistical algorithms to data collected by Micro-Electrode Arrays (MEAs). First we tested these "connectivity methods" on neuronal network models of increasing complexity and evaluated their performance in terms of the ROC (Receiver Operating Characteristic) and the PPC (Positive Precision Curve), a newly defined complementary measure developed specifically for identifying functional links. Then, the algorithms that best estimated the actual connectivity of the network models were used to extract functional connectivity from cultured cortical networks coupled to MEAs. Among the proposed approaches, Transfer Entropy and Joint-Entropy showed the best results, suggesting these methods as good candidates for extracting functional links in actual neuronal networks from multi-site recordings. PMID:19652720
On event based state estimation
Sijs, J.; Lazar, M.
2009-01-01
To reduce the amount of data transfer in networked control systems and wireless sensor networks, measurements are usually taken only when an event occurs, rather than at each synchronous sampling instant. However, this complicates estimation and control problems considerably. The goal of this paper
Grey Prediction Based Software Stage-Effort Estimation
WANG Yong; SONG Qinbao; SHEN Junyi
2007-01-01
The software stage-effort estimation can be used to dynamically adjust the software project schedule and thus help finish the project on budget. This paper presents a grey Verhulst model based method for stage-effort estimation during the software development process, with a bias correction technique used to improve the estimation accuracy. The proposed method was evaluated on a large-scale industrial software engineering database. The results are very encouraging and indicate that the method has considerable potential.
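A bare-bones grey Verhulst fit (without the paper's bias correction) can be sketched as follows: cumulative effort x1 is modeled by the whitening equation dx1/dt + a·x1 = b·x1², whose parameters are estimated by least squares on the discretized form x0(k) + a·z1(k) = b·z1(k)². All effort numbers below are synthetic.

```python
import numpy as np

a, b = -0.5, -0.005                      # true whitening-equation parameters
x1_0 = 5.0                               # initial cumulative effort
k = np.arange(0, 16, dtype=float)
# Exact Verhulst solution (an S-curve saturating at a/b = 100)
x1 = a * x1_0 / (b * x1_0 + (a - b * x1_0) * np.exp(a * k))

x0 = np.diff(x1)                         # stage efforts (increments)
z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean-generated) values
G = np.column_stack([-z1, z1 ** 2])
a_hat, b_hat = np.linalg.lstsq(G, x0, rcond=None)[0]

# Predict the cumulative curve back from the fitted parameters
x1_pred = a_hat * x1[0] / (b_hat * x1[0] + (a_hat - b_hat * x1[0]) * np.exp(a_hat * k))
rel_err = np.max(np.abs(x1_pred - x1) / x1)
```

The residual error here comes purely from the trapezoidal discretization; the bias correction the abstract mentions targets exactly that kind of systematic deviation.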
Autoregressive Methods for Spectral Estimation from Interferograms.
1986-09-19
Forman/Steele/Vanasse [12] phase filter approach, which approximately removes the linear phase distortion introduced into the interferogram by retardation...band interferogram for the spectrum to be analyzed. The symmetrizing algorithm, based on the Forman/Steele/Vanasse method [12], computes a phase filter from
A Group Contribution Method for Estimating Cetane and Octane Numbers
Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group
2016-07-28
Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available, and the only reliable information may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is the estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Published group contribution methods are often limited in the types of functional groups covered and in their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
A new method for parameter estimation in nonlinear dynamical equations
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM), exploiting the self-organizing, adaptive and self-learning features of EM that are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic model, the Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation whether some or all parameters of the Lorenz equations are unknown. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method was investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, influences the parameter estimation more than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
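A toy version of evolutionary parameter estimation on the Lorenz equations can be sketched with an elitist mutate-and-select loop (far simpler than a full EM scheme): candidates are scored by the mismatch between their short RK4 trajectory and a reference trajectory. The bounds, population size, and mutation schedule are assumptions.

```python
import numpy as np

def lorenz_rhs(s, p):
    x, y, z = s
    sigma, rho, beta = p
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def trajectory(p, s0=np.array([1.0, 1.0, 1.0]), dt=0.01, steps=150):
    s, out = s0.copy(), []
    for _ in range(steps):               # classic RK4 integration
        k1 = lorenz_rhs(s, p)
        k2 = lorenz_rhs(s + 0.5 * dt * k1, p)
        k3 = lorenz_rhs(s + 0.5 * dt * k2, p)
        k4 = lorenz_rhs(s + dt * k3, p)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(s)
    return np.array(out)

true_p = np.array([10.0, 28.0, 8.0 / 3.0])
ref = trajectory(true_p)                 # "observed" data
cost = lambda p: np.mean((trajectory(p) - ref) ** 2)

rng = np.random.default_rng(3)
lo, hi = np.array([5.0, 20.0, 1.0]), np.array([15.0, 35.0, 4.0])
pop = rng.uniform(lo, hi, size=(15, 3))
initial_best = min(cost(p) for p in pop)
best = min(pop, key=cost)
for gen in range(25):                    # elitist mutate-and-select loop
    sigma_mut = 0.5 * (hi - lo) * 0.9 ** gen
    cand = np.clip(best + sigma_mut * rng.standard_normal((15, 3)), lo, hi)
    best = min(np.vstack([cand, best]), key=cost)
final_best = cost(best)
```

Keeping the incumbent in every generation guarantees the best cost never worsens; the shrinking mutation scale plays the role of the convergence the abstract reports.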
Predictive visual tracking based on least absolute deviation estimation
Rongtai Cai; Yanjie Wang
2008-01-01
To cope with occlusion and intersection between targets and the environment, location prediction is employed in the visual tracking system. The target trace is fitted by sliding subsection polynomials based on least absolute deviation (LAD) estimation, and the future location of the target is predicted from the fitted trace. Experimental results show that the proposed location prediction algorithm based on LAD estimation has significant robustness advantages over least squares (LS) estimation, and it is more effective than LS-based methods in visual tracking.
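The robustness claim can be illustrated by fitting a line to a trace with one occlusion-like outlier, solving the LAD problem via iteratively reweighted least squares (one common approach, not necessarily the paper's solver). The data values are invented.

```python
import numpy as np

x = np.arange(10.0)
y = 1.0 + 2.0 * x                        # true trace: intercept 1, slope 2
y[-1] += 50.0                            # a single gross outlier

X = np.column_stack([np.ones_like(x), x])
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least squares

beta = beta_ls.copy()
for _ in range(100):                     # IRLS for L1: weights ~ 1/|residual|
    r = y - X @ beta
    w = 1.0 / np.maximum(np.abs(r), 1e-8)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
beta_lad = beta
```

The LS slope is dragged well away from 2 by the single outlier, while the LAD fit essentially passes through the nine clean points.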
Xiangwei Guo
2016-02-01
Full Text Available An estimation of the power battery state of charge (SOC) is related to the energy management, the battery cycle life and the use cost of electric vehicles. When a lithium-ion power battery is used in an electric vehicle, the SOC displays a very strong time-dependent nonlinearity under the influence of random factors such as the working conditions and the environment. Hence, research on estimating the SOC of a power battery for an electric vehicle is of great theoretical significance and application value. In this paper, according to the dynamic response of the power battery terminal voltage during a discharging process, the second-order RC circuit is first used as the equivalent model of the power battery. Subsequently, on the basis of this model, the least squares (LS) method with a forgetting factor and the adaptive unscented Kalman filter (AUKF) algorithm are used jointly to estimate the power battery SOC. Simulation experiments show that the joint estimation algorithm proposed in this paper has higher precision and better convergence under initial-value error than a single AUKF algorithm.
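The forgetting-factor recursive least squares stage can be sketched in isolation (the AUKF step is omitted); a generic two-regressor linear model stands in for the second-order RC battery model, and all values are assumptions.

```python
import numpy as np

theta_true = np.array([1.5, -0.8])       # hypothetical model parameters
lam = 0.98                               # forgetting factor
theta = np.zeros(2)                      # parameter estimate
P = 1000.0 * np.eye(2)                   # large initial covariance

for k in range(300):
    phi = np.array([np.sin(0.1 * k), np.cos(0.05 * k)])  # regressor vector
    y = phi @ theta_true                                  # noiseless output
    K = P @ phi / (lam + phi @ P @ phi)                   # gain
    theta = theta + K * (y - phi @ theta)                 # estimate update
    P = (P - np.outer(K, phi) @ P) / lam                  # covariance update
```

The forgetting factor lam < 1 discounts old data so the estimator can track the slow parameter drift a real battery exhibits.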
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
陈国志; 陈隆道; 蔡忠法
2011-01-01
An interharmonic frequency estimation algorithm based on the propagator method (PM) was proposed in order to reduce the computational complexity of multiple signal classification (MUSIC). The propagator can be used to construct the noise subspace without estimating the covariance matrix or performing an eigendecomposition, and without prior knowledge of the number of interharmonics; the frequency estimation performance of the PM-based MUSIC algorithm is almost the same as that of MUSIC. A complex-domain adaptive linear neural network (Adaline) is constructed to estimate the amplitudes and phases of the harmonics and interharmonics. Its input variables and weights are only half as many as those of a real-domain Adaline, which simplifies the network structure, and training with the Levenberg-Marquardt (LM) algorithm greatly reduces the number of learning iterations. The simulation results show that the algorithm can quickly and accurately estimate the frequencies, amplitudes and phases of interharmonics without synchronous sampling.
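For context, subspace frequency estimation of two tones can be sketched as below; the paper's propagator avoids the eigendecomposition, but for brevity this sketch uses the standard eigendecomposition-based MUSIC. Frequencies (in cycles/sample), snapshot length, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, K = 200, 10, 2                     # samples, snapshot length, tone count
n = np.arange(N)
x = (np.exp(2j * np.pi * 0.12 * n) + 0.8 * np.exp(2j * np.pi * 0.31 * n)
     + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

# Sample covariance from sliding snapshots
snaps = np.array([x[i:i + M] for i in range(N - M)])
R = snaps.T @ snaps.conj() / len(snaps)

eigval, eigvec = np.linalg.eigh(R)       # ascending eigenvalues
En = eigvec[:, :M - K]                   # noise subspace (smallest eigenvalues)

freqs = np.linspace(0.0, 0.5, 2001)
steer = np.exp(2j * np.pi * np.outer(np.arange(M), freqs))
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ steer) ** 2, axis=0)

peaks = [i for i in range(1, len(pseudo) - 1)
         if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
top2 = sorted(freqs[sorted(peaks, key=lambda i: pseudo[i])[-2:]])
```

The propagator variant would build the noise subspace from a partition of the data matrix instead of `eigh`, which is where the complexity saving comes from.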
A Maximum-Entropy Method for Estimating the Spectrum
[No author listed]
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form Ŝ(ω) = (a/8)H̄²(2π)^(d+1) ω^(-(d+2)) exp[-b(2π/ω)^n], by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions and is superior to the periodogram method, which is not suitable for processing comparatively short or strongly unsteady signals because of its severe boundary effects and some inherent defects of the FFT. The newly derived spectral estimator works fairly well even when the sample data sets are very short and unsteady, and its reliability and efficiency have been preliminarily demonstrated.
Joint Pitch and DOA Estimation Using the ESPRIT method
Wu, Yuntao; Leshem, Amir; Jensen, Jesper Rindom
2015-01-01
In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and then the ESPRIT method based on subspace techniques that exploits...... method is illustrated on a synthetic signal as well as real-life recorded data....
Broadband DOA Estimation Based on Nested Arrays
Zhi-bo Shen
2015-01-01
Full Text Available Direction of arrival (DOA) estimation is a crucial problem in electronic reconnaissance. A novel broadband DOA estimation method utilizing nested arrays is devised in this paper, which is capable of estimating the frequencies and DOAs of multiple narrowband signals in a broad band, even though they may have different carrier frequencies. The proposed method converts the DOA estimation of multiple signals with different frequencies into spatial frequency estimation. Then, the DOAs and frequencies are pair-matched by sparse recovery. The nested arrays make it possible to significantly increase the degrees of freedom (DOF), so that the number of sources can exceed the number of physical sensors. In addition, the method achieves high estimation precision without a two-dimensional search over the frequency and angle domains. The validity of the proposed method is verified by theoretical analysis and simulation results.
Weibull Parameters Estimation Based on Physics of Failure Model
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
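One of the estimation techniques the abstract names, least squares fitting of Weibull parameters, can be sketched with a probability-plot regression on simulated failure times (stand-ins for solder-joint fatigue lives; the true shape/scale values are assumptions). Maximum likelihood would typically refine these starting values.

```python
import numpy as np

rng = np.random.default_rng(5)
shape_true, scale_true = 2.0, 3.0
u = rng.uniform(size=1000)
life = scale_true * (-np.log(1.0 - u)) ** (1.0 / shape_true)  # inverse-CDF sampling

# Weibull plot: ln(-ln(1-F)) = k*ln(t) - k*ln(lambda) is linear in ln(t)
t = np.sort(life)
F = (np.arange(1, len(t) + 1) - 0.5) / len(t)   # plotting positions
yy = np.log(-np.log(1.0 - F))
xx = np.log(t)
slope, intercept = np.polyfit(xx, yy, 1)
shape_hat = slope
scale_hat = np.exp(-intercept / slope)
```

The fitted shape parameter is what distinguishes infant mortality (k < 1) from wear-out failure (k > 1) in the reliability analysis.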
Analysis and estimation of risk management methods
Kankhva Vadim Sergeevich
2016-05-01
Full Text Available At the present time risk management is an integral part of state policy in all countries with developed market economies, and companies dealing with consulting services and implementation of risk management systems have carved out a niche. Unfortunately, conscious preventive risk management in Russia is still far from being a standardized process in construction company activity, which often leads to scandals and disapproval in cases of unprofessional project implementation. The authors present the results of an investigation into the modern understanding of the existing methodology classifications and offer their own classification matrix of risk management methods. The developed matrix is based on an analysis of each method in terms of the incoming and outgoing transformed information, which may include different elements of the risk control stages; the offered approach thus allows the possibilities of each method to be analyzed.
SURF-based position estimation method using aerial image sequences
乔奎贤; 赵妮; 李耀军
2013-01-01
Vision sensors are widely used in aerial unmanned aircraft navigation and positioning tasks. For the position parameter estimation of unmanned aircraft, this paper presents a SURF-feature-based image registration algorithm that is robust to the rotation, scale change and noise of aerial image sequences, enabling accurate position estimation. A SURF scale space is constructed, extremum points are located with the fast Hessian matrix, and 64-dimensional SURF feature descriptors are computed for the aerial images; feature points are then matched based on the trace of the Hessian matrix, and the RANSAC algorithm removes outlier matches to obtain an accurate position estimate. Experiments on two real aerial image sequences verify the effectiveness of the proposed position estimation algorithm.
Evaluation of non cyanide methods for hemoglobin estimation
Vinaya B Shah
2011-01-01
Full Text Available Background: The hemiglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the disposal of large volumes of waste reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: the alkaline hematin detergent (AHD-575) method using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting-diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples from infants and compared across these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation between the methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of the determinations, and cost less than the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe, quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated into hemoglobinometers
Sakashita, Makiko; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Nawano, Shigeru
2007-03-01
This paper presents a method for extracting multiple organs from four-phase contrast-enhanced CT images taken at different contrast timings (non-contrast, early, portal, and late phases). First, we apply a median filter to each CT image and align the four-phase CT images by non-rigid volumetric image registration. Then, a three-dimensional joint histogram of CT values is computed from the three contrast-enhanced phases (early, portal, and late). We assume that this histogram is a mixture of normal distributions corresponding to the liver, spleen, kidney, vein, artery, muscle, and bone regions. The EM algorithm is employed to estimate each normal distribution, and organ labels are assigned to each voxel using the Mahalanobis distance measure. Connected component analysis is applied to correct the shape of each organ region. After that, the pancreas region is extracted from the non-contrast CT images with the other extracted organ and vessel regions excluded; the EM algorithm is again employed to estimate the distribution of CT values inside the pancreas. We applied this method to seven cases of four-phase CT images. The extraction results show that the proposed method extracts multiple organs satisfactorily.
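The EM-plus-Mahalanobis core of the pipeline can be sketched in one dimension with a two-component normal mixture; the "CT values" and component parameters below are synthetic and the real method works on a 3-D joint histogram with many components.

```python
import numpy as np

rng = np.random.default_rng(6)
cts = np.concatenate([rng.normal(40.0, 8.0, 500),     # e.g. organ A voxels
                      rng.normal(110.0, 12.0, 500)])  # e.g. organ B voxels

mu = np.array([0.0, 150.0])              # deliberately rough initial means
var = np.array([100.0, 100.0])
pi = np.array([0.5, 0.5])
for _ in range(100):                     # EM iterations
    # E-step: responsibilities under each normal component
    dens = pi * np.exp(-0.5 * (cts[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, variances
    nk = resp.sum(axis=0)
    pi = nk / len(cts)
    mu = (resp * cts[:, None]).sum(axis=0) / nk
    var = (resp * (cts[:, None] - mu) ** 2).sum(axis=0) / nk

# Label voxels by squared Mahalanobis distance (1-D case: (x-mu)^2 / var)
labels = np.argmin((cts[:, None] - mu) ** 2 / var, axis=1)
```

In the paper's setting each component's mean is a 3-vector and each variance a 3x3 covariance matrix, but the E/M updates have the same structure.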
Jimenez Mena, Belen; Verrier, Etienne; Hospital, Frederic
We performed a simulation study of several estimators of the effective population size (Ne): NeH = estimator based on the rate of decrease in heterozygosity; NeT = estimator based on the temporal method; NeLD = linkage disequilibrium-based method. We first focused on NeH, which presented...... under scenarios of 3 and 20 bi-allelic loci. Increasing the number of loci largely improved the performance of NeT and NeLD. We highlight the value of NeT and NeLD when large numbers of bi-allelic loci are available, which is nowadays the case for SNPs markers....
The Lyapunov dimension and its estimation via the Leonov method
Kuznetsov, N.V., E-mail: nkuznetsov239@gmail.com
2016-06-03
Highlights: • A survey of the effective analytical approach to Lyapunov dimension estimation proposed by Leonov is presented. • The invariance of the Lyapunov dimension under diffeomorphisms and its connection with the Leonov method are demonstrated. • For discrete-time dynamical systems an analog of the Leonov method is suggested. - Abstract: Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localizing the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
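The Lyapunov dimension discussed above is, in the Kaplan-Yorke formulation, computed directly from the ordered Lyapunov exponents. A small sketch, using approximate literature values of the Lorenz-attractor exponents as an illustrative input:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke dimension from Lyapunov exponents sorted in decreasing order:
    d = j + (sum of the j largest exponents) / |lambda_{j+1}|,
    where j is the largest index keeping the partial sum non-negative."""
    lam = sorted(exponents, reverse=True)
    s = 0.0
    for j, l in enumerate(lam):
        if s + l < 0:
            return j + s / abs(l)
        s += l
    return float(len(lam))  # all partial sums non-negative: full dimension

# Approximate literature values of the Lorenz-attractor exponents (illustrative)
print(kaplan_yorke_dimension([0.9056, 0.0, -14.5723]))  # close to 2.06
```

This is the numerical counterpart to the analytical Leonov estimates surveyed in the paper: the formula consumes exponents however they were obtained.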
GEOMETRIC METHOD OF SEQUENTIAL ESTIMATION RELATED TO MULTINOMIAL DISTRIBUTION MODELS
WEI Bocheng; LI Shouye
1995-01-01
In the 1980s, differential geometric methods were successfully used to study curved exponential families and normal nonlinear regression models. This paper presents a new geometric structure for studying multinomial distribution models that contain a set of nonlinear parameters. Based on this geometric structure, the authors study several asymptotic properties of sequential estimation. The bias, the variance and the information loss of the sequential estimates are given from a geometric viewpoint, and a limit theorem connecting the observed and expected Fisher information is obtained in terms of curvature measures. The results show that the sequential estimation procedure has some better properties which are generally impossible for nonsequential estimation procedures.
Simplified triangle method for estimating evaporative fraction over soybean crops
Silva-Fuzzo, Daniela Fernanda; Rocha, Jansle Vieira
2016-10-01
Accurate estimates are emerging with technological advances in remote sensing, and the triangle method has been shown to be a useful tool for estimating the evaporative fraction (EF). The purpose of this study was to estimate the EF using the triangle method at the regional level. We used data from the Moderate Resolution Imaging Spectroradiometer (MODIS) orbital sensor, namely surface temperature and vegetation index products, for a 10-year period (2002/2003 to 2011/2012) of cropping seasons in the state of Paraná, Brazil. The triangle method showed consistent results for the EF, and validation of the estimates against observed data from a climatological water balance gave values >0.8 for Willmott's modified "d" and R2 values between 0.6 and 0.7 for some counties. The errors were low for all years analyzed, and the test showed that the estimated data are very close to the observed data. Based on the statistical validation, we can say that the triangle method is a consistent tool: it is useful because it uses only remote sensing images as inputs, and it can support large-scale agroclimatic monitoring, especially in countries of great territorial extent such as Brazil, which lacks a denser network of meteorological ground stations and whose station coverage leaves large areas without data.
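The abstract does not give the exact EF formulation used; as an illustration, one common simplified triangle-method parameterization interpolates a Priestley-Taylor-type coefficient between the dry and wet edges of the temperature-vegetation space. All edge temperatures and the phi_max value below are assumed, illustrative numbers:

```python
import numpy as np

def evaporative_fraction(ts, t_dry, t_wet, phi_max=1.26):
    """Simplified triangle-method EF: a pixel's surface temperature ts is
    scaled linearly between the dry edge (EF near 0) and the wet edge
    (EF near phi_max, the Priestley-Taylor-type maximum). This is one
    common parameterization, not necessarily the paper's exact form."""
    phi = phi_max * (t_dry - ts) / (t_dry - t_wet)
    return np.clip(phi, 0.0, phi_max)

# Hypothetical MODIS-like surface temperatures (K) between the two edges
ts = np.array([300.0, 310.0, 320.0])
print(evaporative_fraction(ts, t_dry=325.0, t_wet=295.0))
```

Hotter pixels (closer to the dry edge) get lower EF, which is the core of the triangle interpretation.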
APJE-SLIM Based Method for Marine Human Error Probability Estimation
席永涛; 陈伟炯; 夏少生; 张晓东
2011-01-01
Safety is the eternal theme of the shipping industry. Research shows that human error is the main cause of maritime accidents. To study marine human errors, the performance shaping factors (PSF) are discussed, and the human error probability (HEP) is estimated under the influence of the PSF. Based on a detailed investigation of human errors in collision avoidance behavior, the most critical task in navigation, and of the associated PSF, the human reliability of mariners in collision avoidance is analyzed using an integration of APJE and SLIM. Results show that PSF such as fatigue and health status, knowledge, experience and training, task complexity, and safety management and organizational effectiveness have varying influence on the HEP; if the level of the PSF is improved, the HEP can be decreased. Using APJE to determine the absolute human error probabilities of the extreme points solves the problem that reference-point probabilities are hard to obtain in the SLIM method, and yields marine HEPs under different kinds and levels of PSF influence.
Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling
ZHOU Dan; LI Chengrong; WANG Zhongdong
2013-01-01
The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of the derived Weibull lifetime model. Six popular parameter estimation methods (the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10,000 sets of lifetime data, each with a sampling size of 40 to 1,000 and a censoring rate of 90%, were obtained by Monte Carlo simulation for each scenario. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence bands, are obtained. Cross-comparison of these results reveals that, among the six methods, the maximum likelihood method is the best, since it provides the most accurate Weibull parameters, i.e., parameters having the smallest bias in both mean and median values, as well as the shortest 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
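Of the six estimators compared, the recommended maximum likelihood method has a compact form for the two-parameter Weibull distribution: a Newton iteration on the profile score equation for the shape parameter, with the scale then recovered in closed form. A sketch on synthetic lifetimes (not the paper's 10,000-set censored Monte Carlo design):

```python
import numpy as np

def weibull_mle(x, iters=200):
    """Maximum-likelihood fit of a two-parameter Weibull distribution
    (complete, uncensored data). Solves the profile score equation
    g(k) = A(k) - 1/k - mean(log x) = 0 for the shape k by Newton's
    method, where A(k) = sum(x^k log x) / sum(x^k); the scale follows
    as (mean(x^k))^(1/k)."""
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        a = (xk * lx).sum() / xk.sum()
        g = a - 1.0 / k - lx.mean()            # score equation g(k)
        b = (xk * lx ** 2).sum() / xk.sum()
        dg = b - a * a + 1.0 / k ** 2          # derivative g'(k)
        k -= g / dg
    scale = (x ** k).mean() ** (1.0 / k)
    return k, scale

rng = np.random.default_rng(1)
sample = rng.weibull(2.5, 1000) * 10.0   # synthetic "lifetimes": shape 2.5, scale 10
k_hat, lam_hat = weibull_mle(sample)
print(round(k_hat, 2), round(lam_hat, 2))
```

Handling the 90% censoring rate used in the paper would require the censored likelihood; the uncensored case above shows the core mechanics.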
Bai, Yuxiang; Wang, Jinpeng; Bashari, Mohanad; Hu, Xiuting [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China); Feng, Tao [School of Perfume and Aroma Technology, Shanghai Institute of Technology, Shanghai 201418 (China); Xu, Xueming [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China); Jin, Zhengyu, E-mail: jinlab2008@yahoo.com [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China); Tian, Yaoqi, E-mail: yqtian@jiangnan.edu.cn [The State Key Laboratory of Food Science and Technology, School of Food Science and Technology, Jiangnan University, Wuxi 214122 (China)
2012-08-10
Highlights: • We develop a TGA method for the measurement of the stoichiometric ratio. • A series of formulas are deduced to calculate the stoichiometric ratio. • Four α-CD-based inclusion complexes were successfully prepared. • The developed method is applicable. - Abstract: An approach mainly based on thermogravimetric analysis (TGA) was developed to evaluate the stoichiometric ratio (SR, guest to host) of guest-α-cyclodextrin (guest-α-CD) inclusion complexes (4-cresol-α-CD, benzyl alcohol-α-CD, ferrocene-α-CD and decanoic acid-α-CD). Data obtained from Fourier transform-infrared (FT-IR) spectroscopy showed that all the α-CD-based inclusion complexes were successfully prepared in solid-state form. The stoichiometric ratios of α-CD to the respective guests (4-cresol, benzyl alcohol, ferrocene and decanoic acid) determined by the developed method were 1:1, 1:2, 2:1 and 1:2, respectively. These SR data agreed well with the previously reported X-ray diffraction (XRD) method and with NMR confirmatory experiments, except that the SR of decanoic acid, with its larger size and longer chain, was not consistent. It is therefore suggested that the TGA-based method is applicable for determining the stoichiometric ratio of polycrystalline α-CD-based inclusion complexes with smaller, shorter-chain guests.
ESTIMATION OF STATURE BASED ON FOOT LENGTH
Vidyullatha Shetty
2015-01-01
Full Text Available BACKGROUND: Stature is the height of the person in the upright posture. It is an important measure of physical identity. Estimation of body height from its segments or dismembered parts has important implications for the identification of living or dead human bodies or of remains recovered from disasters or other similar conditions. OBJECTIVE: Stature is an important indicator for identification. There are numerous means to establish stature, and their significance lies in the simplicity of measurement, applicability and accuracy of prediction. The aim of our study was to review the relationship between foot length and body height. METHODS: The present study reviews various prospective studies conducted to estimate stature. All measurements were taken using standard measuring devices and standard anthropometric techniques. RESULTS: This review shows that there is a correlation between stature and foot dimensions; it is found to be positive and statistically highly significant. Prediction of stature was found to be most accurate by multiple regression analysis. CONCLUSIONS: Stature and gender estimation can be done using foot measurements. The study will help in medico-legal cases in establishing the identity of an individual, and would be useful for anatomists and anthropologists in calculating stature based on foot length.
A method of complex background estimation in astronomical images
Popowicz, Adam
2016-01-01
In this paper, we present a novel approach to the estimation of strongly varying backgrounds in astronomical images by means of small-object removal and subsequent interpolation of the missing pixels. The method is based on the analysis of a pixel's local neighborhood and utilizes the morphological distance transform. In contrast to popular background estimation techniques, our algorithm allows for accurate extraction of complex structures, like galaxies or nebulae. Moreover, it does not require multiple tuning parameters, since it relies on physical properties of CCD image sensors: the gain and the read-out noise characteristics. A comparison with other widely used background estimators revealed the higher accuracy of the proposed technique. The superiority of the novel method is especially significant for the most challenging fluctuating backgrounds. The size of the filtered-out objects is tunable, so the algorithm can eliminate a wide range of foreground structures, including dark current impulses and cosmic ra...
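The algorithm itself (morphological distance transform plus interpolation) is more involved than can be shown here; the toy sketch below only illustrates the noise-model idea the abstract mentions: flagging pixels that exceed the local median by a multiple of a gain/read-noise-based CCD noise estimate. Box size, threshold and noise parameters are assumed, illustrative values:

```python
import numpy as np

def estimate_background(img, box=8, k=3.0, read_noise=5.0, gain=1.0):
    """Toy background sketch (not the paper's distance-transform algorithm):
    pixels brighter than the local median by more than k * expected CCD noise
    are treated as objects and replaced by that local median. The noise model
    combines photon noise (via gain) and read-out noise, echoing the abstract's
    point that these two sensor properties replace tuning parameters."""
    h, w = img.shape
    bg = img.copy()
    for y in range(0, h, box):
        for x in range(0, w, box):
            tile = img[y:y + box, x:x + box]
            med = np.median(tile)
            noise = np.sqrt(np.maximum(med, 0.0) / gain + read_noise ** 2)
            patch = bg[y:y + box, x:x + box]
            patch[tile > med + k * noise] = med   # remove small bright objects
    return bg

rng = np.random.default_rng(2)
sky = 100.0 + rng.normal(0.0, 5.0, (64, 64))   # smooth background + noise
sky[30, 30] += 500.0                           # a "star"
bg = estimate_background(sky)
print(abs(bg[30, 30] - 100.0) < 30.0)          # star removed from background
```

A real pipeline would interpolate the masked pixels smoothly rather than patching in the tile median.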
Assessment of radiation-based methods to estimate potential evapotranspiration
左德鹏; 徐宗学; 程磊; 赵芳芳
2011-01-01
Based on historical data from four stations in different climate zones, eight radiation-based PET estimation methods were selected and compared, with PET computed by the FAO56-PM method as the reference; the applicability of all methods in the different climate zones was then evaluated against 20 cm pan evaporation. The results show that, with their original parameters, the Hargreaves method gives small errors in both monthly and multi-year mean monthly PET across the climate zones, while the other methods produce larger errors. After parameter calibration, the estimates of all eight radiation-based methods improve markedly in every climate zone, and the errors decrease as station humidity increases; the Makkink method has the smallest errors at the arid and semi-arid Minqin and Yan'an stations, while the Hargreaves method performs best at the semi-humid and humid Houma and Shanghai stations. After calibration, the correlation coefficients between all methods and 20 cm pan evaporation at the four stations exceed 0.9. For the calibrated radiation-based methods, the Makkink method is recommended for arid and semi-arid zones and the Hargreaves method for humid and semi-humid zones. Potential evapotranspiration is an important component of the hydrological cycle and a key input to hydrological models. Accurate estimation of potential evapotranspiration is of great significance for water resources planning and management, important for improving the utilization of agricultural water resources, and helpful for estimating ecological water requirements. A variety of potential evapotranspiration estimation methods have been developed around the world, but most are suitable only for specific climatic zones, and using them without calibration can result in large errors. Potential evapotranspiration estimation methods used in China are generally taken directly from other countries, so a comparative evaluation of these methods is particularly needed. Methods to estimate potential evapotranspiration can be divided into temperature-based, radiation-based, mass-transfer and combination methods. However, most of these methods need various data. For instance, the FAO56-PM method is considered the most accurate method to estimate potential
Method for estimating spin-spin interactions from magnetization curves
Tamura, Ryo; Hukushima, Koji
2017-02-01
We develop a method to estimate the spin-spin interactions in a Hamiltonian from an observed magnetization curve by machine learning based on Bayesian inference. In our method, plausible spin-spin interactions are determined by maximizing the posterior distribution, which is the conditional probability of the spin-spin interactions in the Hamiltonian given a magnetization curve with observation noise. The conditional probability is obtained with Markov chain Monte Carlo simulations combined with an exchange Monte Carlo method. The efficiency of our method is tested using synthetic magnetization curve data, and the results show that the spin-spin interactions are estimated with high accuracy. In particular, the relevant terms of the spin-spin interactions are successfully selected from the redundant interaction candidates by the L1 regularization in the prior distribution.
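The paper's estimator maximizes a posterior via MCMC with an exchange Monte Carlo method; as a much-simplified stand-in, the sketch below shows only the role of the L1 (Laplace) prior in selecting relevant interactions, using a linear surrogate model and proximal-gradient (ISTA) optimization. The design matrix, true couplings and regularization strength are all assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
# Linear surrogate: observations y = X @ J_true + noise, where J_true holds a
# few relevant "interactions" among many redundant candidates. This stands in
# for the paper's nonlinear magnetization model and MCMC posterior search.
n, p = 200, 20
X = rng.normal(size=(n, p))
J_true = np.zeros(p)
J_true[[2, 7, 11]] = [1.5, -2.0, 1.0]          # the relevant couplings
y = X @ J_true + 0.1 * rng.normal(size=n)

# ISTA: proximal-gradient maximization of the log-posterior
# (Gaussian likelihood + Laplace/L1 prior), i.e. the lasso.
lam = 5.0
step = 1.0 / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant
J = np.zeros(p)
for _ in range(2000):
    grad = X.T @ (X @ J - y)
    J = J - step * grad
    J = np.sign(J) * np.maximum(np.abs(J) - step * lam, 0.0)  # soft threshold

print(np.nonzero(np.abs(J) > 0.1)[0])   # indices of the selected interactions
```

The L1 prior zeroes out the redundant candidates exactly, which is the selection effect the abstract reports for the full MCMC machinery.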
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.
2016-09-19
three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations. For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations.The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent
Empirical Analysis of Value-at-Risk Estimation Methods Using Extreme Value Theory
Anonymous
2001-01-01
This paper investigates methods of value-at-risk (VaR) estimation using extreme value theory (EVT). It compares two estimation methods, the "two-step subsample bootstrap" based on moment estimation and maximum likelihood estimation (MLE), according to their theoretical bases and computation procedures. The estimation results are then analyzed together with those of the normal method and the empirical method. Empirical research on foreign exchange data shows that the EVT methods have good characteristics for estimating VaR under extreme conditions and that the "two-step subsample bootstrap" method is preferable to MLE.
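The moment-based tail estimation underlying such EVT methods can be illustrated with the classical Hill estimator and a Weissman-type quantile extrapolation (this is a generic EVT-VaR sketch, not the paper's two-step subsample bootstrap; the choices of k and q are illustrative):

```python
import numpy as np

def hill_var(losses, k, q):
    """EVT-based VaR sketch: Hill estimator of the tail index from the k
    largest losses, then the Weissman extreme-quantile extrapolation
    VaR_q = x_(k) * (k / (n * (1 - q)))^xi."""
    x = np.sort(losses)[::-1]                        # descending order
    xi = np.mean(np.log(x[:k]) - np.log(x[k]))       # Hill tail-index estimate
    n = len(losses)
    return x[k] * (k / (n * (1.0 - q))) ** xi

rng = np.random.default_rng(4)
# Heavy-tailed synthetic "losses": classical Pareto with tail index 3,
# so the true 99.9% quantile is 0.001**(-1/3) = 10.
losses = rng.pareto(3.0, 10000) + 1.0
print(round(hill_var(losses, k=500, q=0.999), 2))
```

Extrapolating beyond the sample like this is exactly where EVT outperforms the normal and empirical methods under extreme conditions, as the abstract notes; the sensitivity to k is what motivates subsample-bootstrap selection schemes.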
Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation
Sharad Damodar Gore
2009-10-01
Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived, and results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
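For orientation, the estimators being combined can be sketched in a few lines. The URR form below, with a prior point added back into the normal equations, follows one published formulation and is an assumption on our part, since the abstract does not spell it out:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ordinary_ridge(X, y, k):
    """ORR: shrink OLS by adding k to the diagonal of X'X."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def unbiased_ridge(X, y, k, j):
    """URR (one published form, assumed here): ridge with a prior point j
    added back, beta = (X'X + kI)^{-1} (X'y + k*j); unbiased when j = beta."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + k * j)

# Nearly collinear design, the multicollinearity setting of the paper
rng = np.random.default_rng(5)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
beta = np.array([1.0, 2.0])
y = X @ beta + 0.1 * rng.normal(size=100)

print(np.round(ordinary_ridge(X, y, k=0.1), 2))
```

With near-collinear columns, OLS coefficients are unstable individually while ridge stabilizes them; only their sum is well identified, which is the behavior MMSE comparisons are probing.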
Travel Demand Analysis Method Based on OD Estimation
纪魁; 曹国华
2014-01-01
This paper addresses the traffic demand analysis problem for projects with wide-ranging impacts and complex road systems. A new traffic demand analysis method based on OD estimation is proposed on the TransCAD platform. Using the background traffic flow as the link flow and setting up virtual partitions, OD estimation can be carried out. According to the relationship between the OD matrix and the PA matrix, the passenger flow of the virtual partitions can be obtained. Forecasts for the virtual partitions and the study area are then produced by four-stage transport demand forecasting. Finally, the proposed method is demonstrated on a traffic demand analysis for the adjustment of a city's regulatory detailed plan, which shows that the method is reasonable and effective. Compared with the traditional "manual" approach of superimposing assigned flows onto background traffic, the proposed method avoids overestimating congestion on arterial roads while underestimating it on minor roads, and the analysis can be refined down to the branch-road level.
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimating ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (one-way coupled to PIHM) and the fixed-seasonal LAI method. Simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and small uncertainty in the groundwater table elevation and streamflow. Hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant-physiology-based method. The implication of this research is that the overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
The deposit size frequency method for estimating undiscovered uranium deposits
McCammon, R.B.; Finch, W.I.
1993-01-01
The deposit size frequency (DSF) method has been developed as a generalization of the method that was used in the National Uranium Resource Evaluation (NURE) program to estimate the uranium endowment of the United States. The DSF method overcomes difficulties encountered during the NURE program when geologists were asked to provide subjective estimates of (1) the endowed fraction of an area judged favorable (factor F) for the occurrence of undiscovered uranium deposits and (2) the tons of endowed rock per unit area (factor T) within the endowed fraction of the favorable area. Because the magnitudes of factors F and T were unfamiliar to nearly all of the geologists, most responded by estimating the number of undiscovered deposits likely to occur within the favorable area and the average size of these deposits. The DSF method combines factors F and T into a single factor (F·T) that represents the tons of endowed rock per unit area of the undiscovered deposits within the favorable area. Factor F·T, provided by the geologist, is the estimated number of undiscovered deposits per unit area in each of a number of specified deposit-size classes. The number of deposit-size classes and the size interval of each class are based on data collected from deposits in known (control) areas. The DSF method affords greater latitude in making subjective estimates than the NURE method and draws more on the everyday experience of exploration geologists. Using the DSF method, new assessments have been made for the "young, organic-rich" surficial uranium deposits in Washington and Idaho and for the solution-collapse breccia pipe uranium deposits in the Grand Canyon region in Arizona and adjacent Utah. © 1993 Oxford University Press.
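The bookkeeping implied by factor F·T can be sketched as a simple area-weighted sum over deposit-size classes. All class sizes, densities and areas below are hypothetical, for illustration only:

```python
# DSF-style endowment bookkeeping: the geologist supplies, per deposit-size
# class, an estimated number of undiscovered deposits per unit area; the
# endowment is the area-weighted sum over classes. Numbers are hypothetical.

size_classes = [            # (mean tons of endowed rock per deposit, deposits per 1000 km^2)
    (50_000, 1.2),
    (200_000, 0.4),
    (1_000_000, 0.05),
]
favorable_area_km2 = 8_000

endowment_tons = sum(
    tons_per_deposit * deposits_per_1000km2 * favorable_area_km2 / 1000.0
    for tons_per_deposit, deposits_per_1000km2 in size_classes
)
print(f"{endowment_tons:,.0f} tons of endowed rock")
```

Binning by size class, rather than eliciting a single F and T, is precisely the generalization the abstract describes.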
A method for estimating optical properties of dusty cloud
Tianhe Wang; Jianping Huang
2009-01-01
Based on the scattering properties of nonspherical dust aerosol, a new method is developed for retrieving the dust aerosol optical depths of dusty clouds. Dusty clouds are defined as hybrid systems of dust plume and cloud. The new method is based on transmittance measurements from a surface-based multi-filter rotating shadowband radiometer (MFRSR) and cloud parameters from lidar measurements. It uses the difference in absorption between dust aerosols and water droplets to distinguish and estimate the optical properties of dusts and clouds, respectively. This new retrieval method is not sensitive to the retrieval error of the cloud properties, and the maximum absolute deviations of the dust aerosol and total optical depths for the thin dusty cloud retrieval algorithm are only 0.056 and 0.1, respectively, for the given possible uncertainties. The retrieval error for thick dusty cloud mainly depends on the lidar-based total dusty cloud properties.
马贤颖; 吴欣
2013-01-01
The characteristics of the traditional WideBand Delphi method, the COSMIC method and the analogy method are analyzed and compared, and an improved software cost estimation method is proposed. The WideBand Delphi method is taken as the prototype, and the COSMIC and analogy methods are incorporated into the experts' estimation process to overcome the WideBand Delphi method's over-reliance on expert experience. Furthermore, this method extends the application fields of the COSMIC method via analogy. Combining multiple methods ensures both the soundness of the estimation method and the accuracy of the results. Comparison on practical estimation data shows that the improved method has higher accuracy.
Bayesian methods to estimate urban growth potential
Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.
2017-01-01
Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners' perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States' most rapidly urbanizing regions: the area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region's development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners' intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially explicit trend-based urban development potential model, derived from remotely sensed imagery and observed changes in the region's socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region's historical development patterns. By integrating empirical social preference data into spatially explicit urban growth models, we begin to more realistically capture the processes, as well as the patterns, that drive the location, magnitude and rates of urban growth.
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
Ahmad Nicknam; Reza Abbasnia; Yasser Eslamian; Mohsen Bozorgnasab; Ehsan Adeli Mosabbeb
2010-06-01
We determine the source parameters for the 2003 Bam, Iran, earthquake (Mw 6.5) using an empirical Green's function summation approach to model ground motions recorded by two strong-motion stations at approximately 45 km epicentral distance. We introduce a genetic algorithm technique to optimize the fit to the observed elastic response spectra. The proposed genetic algorithm allows us to explore the sensitivity of the results to multiple source parameters, including hypocenter location, focal mechanism (strike and dip), P-wave velocity at depth, fault dimension, and rupture and healing velocities. We simulated the three components of the seismogram at one distant station, Mohammad-Abad, by means of an inversion solution technique, and predicted the seismograms at another distant station, Abaragh, using the estimated model parameters. The closer agreement of our synthesized seismograms with the observed data, compared with the results of other investigators, supports the reliability of the estimated seismological parameters and the applicability of our technique. A series of sensitivity analyses is performed to demonstrate the influence of individual model parameter variations on the defined error value. Using the empirical Green's function summation method, our inferred source parameters provide a basis for predicting main-shock shaking and guiding retrofitting efforts at sites where strong-motion data are unavailable, for example, the historical buildings at the Arg-e Bam site damaged during the 2003 earthquake.
A study of methods to estimate debris flow velocity
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Debris flow velocities are commonly back-calculated from superelevation events, which require subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radius-of-curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among the different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
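The conventional superelevation back-calculation the paper critiques rests on the forced-vortex equation; a minimal sketch with hypothetical channel-bend inputs:

```python
import math

def superelevation_velocity(radius_m, superelevation_m, width_m, k=1.0):
    """Back-calculated velocity from a superelevation event using the
    forced-vortex equation v = sqrt(k * g * Rc * dh / b), where Rc is the
    bend's radius of curvature, dh the superelevation, b the flow width,
    and k a correction factor often taken as 1. Inputs are illustrative."""
    g = 9.81  # gravitational acceleration, m/s^2
    return math.sqrt(k * g * radius_m * superelevation_m / width_m)

# Hypothetical bend: 50 m radius of curvature, 1.5 m superelevation, 10 m width
print(round(superelevation_velocity(50.0, 1.5, 10.0), 1))  # m/s
```

The subjectivity the abstract highlights enters through radius_m: as the paper found, the radius estimate, and hence the velocity, depends on how much of the channel is considered and on the mapping scale.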
Global parameter estimation methods for stochastic biochemical systems
Poovathingal Suresh
2010-08-01
Full Text Available Abstract Background The importance of stochasticity in cellular processes involving low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single molecule levels. The purpose of this work is thus to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter
Li, Jing; Wang, Min-Yan; Zhang, Jian; He, Wan-Qing; Nie, Lei; Shao, Xia
2013-12-01
VOC emissions from petrochemical storage tanks are among the important emission sources in the petrochemical industry. To determine the amount of VOC emissions from petrochemical storage tanks, the Tanks 4.0.9d model is utilized to calculate VOC emissions from different kinds of storage tanks. As an example, VOC emissions from a horizontal tank, a vertical fixed roof tank, an internal floating roof tank and an external floating roof tank were calculated. The handling of site meteorological information, sealing information, tank content information and unit conversion when using the Tanks 4.0.9d model in China was also discussed. The Tanks 4.0.9d model can be used to estimate VOC emissions from petrochemical storage tanks in China as a simple and highly accurate method.
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
Richardson, John G.
2009-11-17
An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.
Seasonal adjustment methods and real time trend-cycle estimation
Bee Dagum, Estela
2016-01-01
This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematic treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...
Liu, Jingwei; Liu, Yi; Xu, Meizhi
2015-01-01
A parameter estimation method for the Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. The formulae for the WNLS estimates (WNLSE) are derived, and the empirical weight function and the heteroscedasticity problem are discussed. The effects of parameter estimation based on the maximum likelihood estimation (MLE) method, the least squares estimation (LSE) method and the weighted nonlinear least squares estimation (WNLSE) method are al...
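The WNLS idea for the JM model can be sketched as follows. For a fixed fault count N the mean interfailure time E[T_i] = 1/(φ(N−i+1)) is linear in 1/φ, so φ has a closed-form weighted least-squares solution and N can be found by a grid search. The weight choice (inverse squared mean, since Var[T_i] ∝ 1/λ_i²) and the noise-free demo data are assumptions for illustration, not the paper's exact formulae.

```python
import numpy as np

# Noise-free demo: interfailure times set exactly to their JM means,
# E[T_i] = 1 / (phi * (N0 - i)), i = 0..n-1; real data would be noisy.
N0, phi, n = 40, 0.02, 30
t = 1.0 / (phi * (N0 - np.arange(n)))

def jm_wnls(t, N_max=200):
    """Grid over N; for each N the model t_i ~ beta * x_i is linear in beta = 1/phi."""
    n = len(t)
    best = (None, None, np.inf)
    for N in range(n, N_max + 1):
        x = 1.0 / (N - np.arange(n))
        w = 1.0 / x ** 2                # assumed weights ~ 1/Var, Var[T_i] ∝ 1/λ_i²
        beta = np.sum(w * x * t) / np.sum(w * x * x)   # closed-form WLS slope
        sse = np.sum(w * (t - beta * x) ** 2)
        if sse < best[2]:
            best = (N, 1.0 / beta, sse)
    return best

N_hat, phi_hat, sse = jm_wnls(t)
print(N_hat, phi_hat)                   # recovers N0 = 40 and phi = 0.02 on noise-free data
```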
Estimation of water percolation by different methods using TDR
Alisson Jadavi Pereira da Silva
2014-02-01
Full Text Available Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate the percolation estimated by time-domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods and by the change in water storage in the soil profile at 16 points of moisture measurement at different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was installed in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated based on continuous monitoring with TDR, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter and it has advantages over the other evaluated methods, of which the most relevant are the possibility of estimating percolation in short time intervals and exemption from the predetermination of soil hydraulic properties such as water retention and hydraulic conductivity. The estimates obtained by the Darcy-Buckingham equation for percolation levels using the function K(θ) predicted by the method of Hillel et al. (1972) provided water percolation estimates compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
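The storage-change approach described above reduces to a few lines: with the surface sealed against evaporation, the water lost from the monitored profile between two TDR readings must have drained downward. The numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical data: volumetric water content θ (cm³/cm³) at 4 depths,
# layer thickness 20 cm, read hourly by TDR probes.
layer_cm = 20.0
theta = np.array([
    [0.42, 0.41, 0.40, 0.39],   # t = 0 h
    [0.40, 0.40, 0.39, 0.38],   # t = 1 h
    [0.39, 0.39, 0.38, 0.38],   # t = 2 h
])

storage = theta.sum(axis=1) * layer_cm   # water storage S in the profile (cm of water)
percolation = -np.diff(storage)          # cm drained per interval (no evaporation)
print(storage, percolation)
```

Here storage falls from 32.4 to 31.4 to 30.8 cm, giving percolation of 1.0 and 0.6 cm in the two hourly intervals.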
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Julius Hannink
2017-08-01
Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
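A minimal version of one double-integration scheme used in such pipelines: integrate stride acceleration twice and remove drift by forcing zero velocity at both stride boundaries, where the foot is known to be flat on the ground. The 1-D signal and sampling rate are synthetic; a real pipeline would first rotate the accelerometer samples to the world frame using an orientation estimate.

```python
import numpy as np

fs = 100.0                                  # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)             # one stride of 1 s
acc = np.sin(2 * np.pi * t) * 5.0 + 0.2     # "measured" accel with a constant bias

vel = np.cumsum(acc) / fs                   # naive velocity integral (drifts)
drift = np.linspace(0.0, vel[-1], len(vel)) # linear drift model between ZUPT events
vel_corrected = vel - drift                 # zero velocity at start and end of stride
pos = np.cumsum(vel_corrected) / fs         # stride trajectory

print(abs(vel_corrected[-1]))               # 0 by construction
```

The linear de-drifting absorbs the integrated sensor bias (here 0.2 m/s accumulated over the stride) that would otherwise corrupt the position estimate.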
Smit, E M
1979-06-01
An ultra-micro method for the determination of the total nitrogen content of biological fluids and suspensions is described, based on digestion in sulphuric acid and an enzymatic determination of the ammonia formed with glutamate dehydrogenase (EC 1.4.1.3). The proposed method yields the same results as the classical Kjeldahl procedure, but is less time-consuming. The detection limit for nitrogen, without loss of precision and accuracy, is much lower than in the original Kjeldahl procedure, on the order of 35 ng N per sample.
Methods of Mmax Estimation East of the Rocky Mountains
Wheeler, Russell L.
2009-01-01
Several methods have been used to estimate the magnitude of the largest possible earthquake (Mmax) in parts of the Central and Eastern United States and adjacent Canada (CEUSAC). Each method has pros and cons. The largest observed earthquake in a specified area provides an unarguable lower bound on Mmax in the area. Beyond that, all methods are undermined by the enigmatic nature of geologic controls on the propagation of large CEUSAC ruptures. Short historical-seismicity records decrease the defensibility of several methods that are based on characteristics of small areas in most of CEUSAC. Methods that use global tectonic analogs of CEUSAC encounter uncertainties in understanding what 'analog' means. Five of the methods produce results that are inconsistent with paleoseismic findings from CEUSAC seismic zones or individual active faults.
Michal Vondra
2009-01-01
Full Text Available The application of methods based on measurements of photosynthesis efficiency is now more and more popular, used not only for the evaluation of the efficiency of herbicides but also for the estimation of their phytotoxicity to the cultivated crop. These methods also make it possible to determine differences in the sensitivity of cultivars and/or hybrids to individual herbicides. The advantage of these methods lies above all in the speed and accuracy of measurement. In a field experiment, the sensitivity of several selected grain maize hybrids (EDENSTAR, NK AROBASE, NK LUGAN, LG 33.30 and NK THERMO) to the herbicide CALLISTO 480 SC + ATPLUS 463 was tested for a period of three years. The sensitivity to a registered dose of 0.25 l ha−1 + 0.5 % was measured by means of the apparatus PS1 meter, which measures the reflected radiation. Measurements of the sensitivity of hybrids were performed on the 2nd, 3rd, 4th, 5th and 8th day after the application of the tested herbicide, i.e. in the growing stage of the 3rd-5th leaf. Plant material was harvested using a small-plot combine harvester SAMPO 2010. Samples were weighed and converted to the yield at 15 % moisture in grain DM. The obtained three-year results did not demonstrate differences in sensitivity of the tested hybrids to the registered dose of the herbicide CALLISTO 480 SC + ATPLUS 463 (i.e. 0.25 l ha−1 + 0.5 %). Recorded results indicated that for the majority of tested hybrids the most critical were the 4th and the 5th day after the application; on these days the average PS1 values were the highest overall. In 2005 and 2007, none of the tested hybrids exceeded the limit value of 15 (which would indicate a certain decrease in the efficiency of photosynthesis). Although in 2006 three of the tested hybrids showed a certain decrease in photosynthetic activity (i.e. EDENSTAR and NK AROBASE on the 3rd day and NK LUGAN on the 2nd-4th day after the application), no visual symptoms
Fault Tolerant Matrix Pencil Method for Direction of Arrival Estimation
Yerriswamy, T; 10.5121/sipij.2011.2306
2011-01-01
Continuing to estimate the direction-of-arrival (DOA) of signals impinging on an antenna array even when a few elements of the underlying Uniform Linear Antenna Array (ULA) fail to work is of practical interest in RADAR, SONAR and wireless radio communication systems. This paper proposes a new technique to estimate the DOAs when a few elements are malfunctioning. The technique combines a Singular Value Thresholding (SVT) based Matrix Completion (MC) procedure with the Direct Data Domain (D^3) based Matrix Pencil (MP) method. When element failure is observed, MC is first performed to recover the missing data from the failed elements, and then the MP method is used to estimate the DOAs. We also propose a very simple technique to detect the locations of the failed elements, which is required to perform the MC procedure. We provide simulation studies to demonstrate the performance and usefulness of the proposed technique. The results indicate a better performance of the proposed DOA estimation scheme under...
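The SVT building block can be illustrated on its own. The sketch below runs the standard SVT iteration (shrink the singular values, then correct on the observed entries) on a rank-1 matrix with entries missing at random. The paper's failed-element pattern (whole rows missing) needs additional array structure that this generic demo does not model, and the tau/step values follow common rules of thumb rather than the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

m, n = 8, 20
X = np.outer(rng.standard_normal(m), rng.standard_normal(n))   # rank-1 data matrix
mask = rng.random((m, n)) < 0.7                                # ~70% entries observed

tau = 5.0 * np.sqrt(m * n)        # shrinkage threshold (rule of thumb)
step = 1.2 / 0.7                  # step size scaled by inverse sampling rate
Y = np.zeros_like(X)
for _ in range(500):
    U, sv, Vh = np.linalg.svd(Y, full_matrices=False)
    Z = (U * np.maximum(sv - tau, 0.0)) @ Vh     # singular value shrinkage
    Y += step * np.where(mask, X - Z, 0.0)       # correct on observed entries only

rel_err = np.linalg.norm(Z - X) / np.linalg.norm(X)
print(rel_err)
```

For a rank-1 matrix with this sampling rate the iteration recovers the missing entries to small relative error; `Z` after the loop is the completed matrix.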
Leontine Alkema
Full Text Available BACKGROUND: In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. METHODS: We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. FINDINGS: Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. CONCLUSIONS: The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues.
Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen
2014-01-01
Background In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954
Trimmed Likelihood-based Estimation in Binary Regression Models
Cizek, P.
2005-01-01
The binary-choice regression models such as probit and logit are typically estimated by the maximum likelihood method. To improve its robustness, various M-estimation based procedures were proposed, which however require bias corrections to achieve consistency, and their resistance to outliers is rela
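One simple way to realize the trimming idea (a sketch of the concept only, not the paper's estimator or its theory): fit the logit by Newton's method, keep the h observations with the largest log-likelihood contributions, and refit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic logit data with 5% of labels flipped to mimic outliers.
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 2.0])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)
y[:10] = 1.0 - y[:10]                      # contaminate 10 of 200 labels

def logit_mle(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):                 # Newton-Raphson (IRLS) updates
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1.0 - mu)
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - mu))
    return beta

def trimmed_logit(X, y, h, rounds=10):
    idx = np.arange(len(y))
    for _ in range(rounds):
        beta = logit_mle(X[idx], y[idx])
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        ll = y * np.log(mu + 1e-12) + (1 - y) * np.log(1 - mu + 1e-12)
        idx = np.argsort(ll)[-h:]          # keep the h best-fitting observations
    return beta

beta_hat = trimmed_logit(X, y, h=180)
print(beta_hat)
```

Discarding the worst-fitting 10% lets the refit largely ignore the flipped labels; note that trimming tends to inflate the slope, one reason the formal estimator needs care.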
Adaptive Spectral Estimation Methods in Color Flow Imaging.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Avdal, Jorgen; Lovstakken, Lasse
2016-11-01
Clutter rejection for color flow imaging (CFI) remains a challenge due to either a limited amount of temporal samples available or nonstationary tissue clutter. This is particularly the case for interleaved CFI and B-mode acquisitions. Low velocity blood signal is attenuated along with the clutter due to the long transition band of the available clutter filters, causing regions of biased mean velocity estimates or signal dropouts. This paper investigates how adaptive spectral estimation methods, Capon and the blood iterative adaptive approach (BIAA), can be used to estimate the mean velocity in CFI without prior clutter filtering. The approach is based on confining the clutter signal to a narrow spectral region around the zero Doppler frequency while keeping the spectral side lobes below the blood signal level, allowing the clutter signal to be removed by thresholding in the frequency domain. The proposed methods are evaluated using computer simulations, flow phantom experiments, and in vivo recordings from the common carotid artery and jugular vein of healthy volunteers. The Capon and BIAA methods could estimate low blood velocities, which are normally attenuated by polynomial regression filters, and may potentially give better estimation of mean velocities for CFI at a higher computational cost. The Capon method decreased the bias by 81% in the transition band of the used polynomial regression filter for small packet size (N = 8) and low SNR (5 dB). Flow phantom and in vivo results demonstrate that the Capon method can provide color flow images and flow profiles with lower variance and bias, especially in regions close to the artery walls.
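The Capon (minimum variance) spectral step can be illustrated in isolation: estimate a covariance from overlapping slow-time subvectors, then evaluate P(f) = 1/(a(f)^H R^-1 a(f)) on a frequency grid. The packet size, SNR and diagonal loading below are illustrative, and the clutter handling and velocity conversion of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

N, M, L = 8, 4, 64                      # packet size, subvector length, grid points
f_true = 0.18                           # normalized Doppler frequency
x = np.exp(2j * np.pi * f_true * np.arange(N)) + \
    0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Covariance from overlapping length-M subvectors, plus diagonal loading.
subs = np.array([x[i:i + M] for i in range(N - M + 1)])
R = (subs.T @ subs.conj()) / len(subs) + 1e-3 * np.eye(M)

freqs = np.linspace(-0.5, 0.5, L, endpoint=False)
Rinv = np.linalg.inv(R)
P = []
for f in freqs:
    a = np.exp(2j * np.pi * f * np.arange(M))   # Doppler steering vector
    P.append(1.0 / np.real(a.conj() @ Rinv @ a))
f_hat = freqs[int(np.argmax(P))]
print(f_hat)                            # near f_true = 0.18
```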
Parameter estimation method for blurred cell images from fluorescence microscope
He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin
2016-10-01
Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, resulting in a low signal-to-noise ratio (SNR) and poor image quality, hence affecting the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images based on the power-law properties of the power spectrum of cell images is proposed. The circular radon transform (CRT) is used to identify the zero mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm. Following this, the parameters are optimized through the gradient descent method. Using synthetic experiments, it was confirmed that the proposed method effectively increased the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results on actual microscopic cell images verified the superiority of the proposed parameter estimation method for blurred microscopic cell images over other methods, in terms of qualitative visual sense as well as quantitative gradient and PSNR.
ICA Model Order Estimation Using Clustering Method
P. Sovka
2007-12-01
Full Text Available In this paper a novel approach to independent component analysis (ICA) model order estimation for movement electroencephalogram (EEG) signals is described. The application is targeted at brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). Selecting only the movement-related ICs might increase the BCI EEG classification score. The real number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) to estimate the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring the higher-order signal statistics. In this work, we use another approach: the selection of highly correlated ICs from several ICA runs. The ICA model order estimation is done at significance level α = 0.05 and the model order is more or less dependent on the ICA algorithm and its parameters.
New method of estimation of cosmic ray nucleus energy
Korotkova, N A; Postnikov, E B; Roganova, T M; Sveshnikova, L G; Turundaevskij, A N
2002-01-01
The new approach to the estimation of primary cosmic nucleus energy is presented. It is based on measurement of the spatial density of secondary particles, originated in nuclear interactions in the target and enhanced by a thin converter layer. The proposed method allows the creation of a relatively lightweight apparatus of large area and large geometrical factor and can be applied in satellite and balloon experiments for all nuclei in a wide energy range of 10^11-10^16 eV/particle. The physical basis of the method, a full Monte Carlo simulation, and the field of application are presented.
Estimating monthly temperature using point based interpolation techniques
Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi
2013-04-01
This paper discusses the use of point based interpolation to estimate the temperature at unsampled meteorological stations in Peninsular Malaysia using data for the year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
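Of the two interpolators, IDW is the simpler and fits in a few lines: weight each station by the inverse squared distance to the target point. Coordinates and temperatures below are made up for illustration.

```python
import numpy as np

# Hypothetical stations as (longitude, latitude) and their temperatures in °C.
stations = np.array([[101.7, 3.1], [100.3, 5.4], [103.8, 1.5], [102.2, 6.1]])
temps = np.array([27.4, 26.8, 27.9, 26.5])

def idw(xy, stations, values, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at point xy."""
    d = np.linalg.norm(stations - xy, axis=1)
    if d.min() < eps:                   # query point coincides with a station
        return values[d.argmin()]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

print(idw(np.array([102.0, 4.0]), stations, temps))
```

By construction the estimate is a convex combination of the station values, so it always lies within their range, and the method interpolates the data exactly at station locations.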
Liu Xiqiang; Zhou Huilan; Li Hong; Gai Dianguang
2000-01-01
Based on the propagation characteristics of shear waves in anisotropic layers, the correlation among several splitting shear-wave identification methods has been studied. This paper puts forward a method for estimating splitting shear-wave phases and its reliability by using the assumption that the variances of the noise and the useful signal data obey a normal distribution. To check the validity of the new method, the identification results and error estimation corresponding to a 95% confidence level, obtained by analyzing simulated signals, are given.
Image-based spectral transmission estimation using "sensitivity comparison".
Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani
2017-01-20
Although digital cameras have been used for spectral reflectance estimation, transmission measurement has rarely been considered in studies. This study presents a method named sensitivity comparison (SC) for spectral transmission estimation. The method needs neither a priori knowledge of the samples nor statistical information from a given reflectance dataset. As with spectrophotometers, the SC method needs one shot for calibration and another for measurement. The method exploits the sensitivity of the camera in the absence and presence of transparent colored objects to estimate transmission. 138 Kodak Wratten gelatin filter transmissions were used to evaluate the proposed method. Using modeling of the imaging system at different levels of noise, the performance of the proposed method was compared with a training-based Matrix R method. To check the performance of the SC method in practice, 33 man-made colored transparent films were used with a conventional three-channel camera. The method generated promising results under different error metrics.
Quantum Estimation Methods for Quantum Illumination.
Sanz, M; Las Heras, U; García-Ripoll, J J; Solano, E; Di Candia, R
2017-02-17
Quantum illumination consists in shining quantum light on a target region immersed in a bright thermal bath with the aim of detecting the presence of a possible low-reflective object. If the signal is entangled with the receiver, then a suitable choice of the measurement offers a gain with respect to the optimal classical protocol employing coherent states. Here, we tackle this detection problem by using quantum estimation techniques to measure the reflectivity parameter of the object, showing an enhancement in the signal-to-noise ratio up to 3 dB with respect to the classical case when implementing only local measurements. Our approach employs the quantum Fisher information to provide an upper bound for the error probability, supplies the concrete estimator saturating the bound, and extends the quantum illumination protocol to non-Gaussian states. As an example, we show how Schrödinger's cat states may be used for quantum illumination.
Methods for estimating production and utilization of paper birch saplings
US Fish and Wildlife Service, Department of the Interior — Development of technique to estimate browse production and utilization. Developed a set of methods for estimating annual production and utilization of paper birch...
Enhancing Use Case Points Estimation Method Using Soft Computing Techniques
Nassif, Ali Bou; Capretz, Luiz Fernando; Ho, Danny
2016-01-01
Software estimation is a crucial task in software engineering. Software estimation encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle when the details of software have not been revealed yet. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of the important metric inputs and consequently,...
Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao ... estimation. Moreover, simulations on real-life data indicate that the NLS and aNLS methods are applicable even when reverberation is present and the noise is not white Gaussian.
A review of the methods for neuronal response latency estimation
Levakovaa, Marie; Tamborrino, Massimiliano; Ditlevsen, Susanne;
2015-01-01
Neuronal response latency is usually vaguely defined as the delay between the stimulus onset and the beginning of the response. It contains important information for the understanding of the temporal code. For this reason, the detection of the response latency has been extensively studied ... in the last twenty years, yielding different estimation methods. They can be divided into two classes, one of them including methods based on detecting an intensity change in the firing rate profile after the stimulus onset and the other containing methods based on detection of spikes evoked ... by the stimulation using interspike intervals and spike times. The aim of this paper is to present a review of the main techniques proposed in both classes, highlighting their advantages and shortcomings.
Range-based estimation of quadratic variation
Christensen, Kim; Podolskij, Mark
This paper proposes using realized range-based estimators to draw inference about the quadratic variation of jump-diffusion processes. We also construct a range-based test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient...
Ashot Davtian
2011-05-01
Full Text Available Two methods for the estimation of the number per unit volume N_V of spherical particles are discussed: the (physical) disector (Sterio, 1984) and Saltykov's estimator (Saltykov, 1950; Fullman, 1953). A modification of Saltykov's estimator is proposed which reduces the variance. Formulae for bias and variance are given for both the disector and the improved Saltykov estimator for the case of randomly positioned particles. They enable the comparison of the two estimators with respect to their precision in terms of mean squared error.
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
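The first-generation strict clock reduces to a one-line formula: with a constant substitution rate r, two lineages that are d substitutions per site apart diverged t = d / (2r) years ago (the factor 2 counts both descending branches). Illustrative numbers only:

```python
# Strict-clock divergence time: t = d / (2 * r).
d = 0.12          # substitutions per site between the two sequences (assumed)
r = 1.5e-9        # substitutions per site per year (assumed clock rate)
t = d / (2 * r)
print(t)          # 40 million years
```

Second-generation methods first test whether the two lineages really share the rate r; third- and fourth-generation methods drop the shared-rate assumption altogether.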
Particle Size Estimation Based on Edge Density
WANG Wei-xing
2005-01-01
Given image sequences of closely packed particles, the underlying aim is to estimate diameters without explicit segmentation. In a way, this is similar to the task of counting objects without directly counting them. Such calculations may, for example, be useful for fast estimation of particle size in different application areas. The topic is that of estimating the average size (average diameter) of packed particles from formulas involving edge density, using edges obtained by moment-based thresholding. An average shape factor, obtained for some frames from a crude partial segmentation, is involved in the calculations. Measurement results from about 80 frames have been analyzed.
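The edge-density idea in this abstract can be sketched as follows. This is a minimal illustration, not the paper's exact calibration: it assumes that for densely packed, roughly convex particles the perimeter per unit area scales as shape_factor / diameter (exactly 4/d for circles), so the average diameter can be read off a binary edge map.

```python
import numpy as np

def average_particle_size(binary_edges: np.ndarray, shape_factor: float = 4.0) -> float:
    """Estimate average particle diameter from an edge map.

    Assumes perimeter per unit area ~ shape_factor / diameter
    (4/d for circular particles), hence d ~ shape_factor / edge_density.
    The default shape factor is an illustrative assumption.
    """
    edge_density = float(binary_edges.mean())  # fraction of edge pixels
    if edge_density == 0:
        raise ValueError("no edges detected")
    return shape_factor / edge_density

# Toy edge map: every 5th row marked as edge pixels -> edge density 0.2
edges = np.zeros((100, 100))
edges[::5, :] = 1.0
size = average_particle_size(edges)  # 4 / 0.2 = 20.0
```

In practice the edge map would come from moment-based thresholding of the particle images, and the shape factor from a crude partial segmentation of a few frames, as the abstract describes.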
Neeraj Tiwari
2014-06-01
Full Text Available Under inclusion probability proportional to size (IPPS) sampling, the exact second-order inclusion probabilities are often very difficult to obtain; hence the variance of the Horvitz-Thompson estimator and the Sen-Yates-Grundy estimator of the variance of the Horvitz-Thompson estimator are difficult to compute. Researchers have therefore developed alternative variance estimators based on approximations of the second-order inclusion probabilities in terms of the first-order inclusion probabilities. We have numerically compared the performance of the various alternative approximate variance estimators using the split method of sample selection.
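The estimators being compared can be illustrated with a short sketch. The Horvitz-Thompson total is standard; the approximate variance shown is the Hartley-Rao form, chosen here as one representative of the family of approximations that need only first-order inclusion probabilities (the abstract does not say which approximations were compared, so this is an assumed example).

```python
import numpy as np

def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimator of the population total: sum(y_i / pi_i)."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    return float(np.sum(y / pi))

def hartley_rao_variance(y, pi):
    """Hartley-Rao approximate variance of the HT total.

    Uses only first-order inclusion probabilities pi_i, avoiding the
    second-order probabilities needed by the Sen-Yates-Grundy estimator.
    """
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    n = len(y)
    t_hat = np.sum(y / pi)
    return float(n / (n - 1)
                 * np.sum((1 - (n - 1) / n * pi) * (y / pi - t_hat / n) ** 2))

# Equal-probability sample of n=3 from N=10: pi_i = n/N = 0.3
total = horvitz_thompson_total([1.0, 2.0, 3.0], [0.3, 0.3, 0.3])  # 20.0
```

When every y_i/pi_i is identical the approximate variance is zero, which is a quick sanity check on any implementation in this family.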
Mineral resources estimation based on block modeling
Bargawa, Waterman Sulistyana; Amri, Nur Ali
2016-02-01
The estimation in this paper uses three kinds of block models: nearest neighbor polygon, inverse distance squared, and ordinary kriging. These techniques are weighting schemes based on the principle that a block's grade is a linear combination of the grades of the samples around the block being estimated. The case study, in the Pongkor area, is a gold-silver resource model of quartz veins presumed to have formed by an epithermal hydrothermal process. Resource modeling includes data entry, statistical and variography analysis, topographic and geological modeling, block model construction, estimation parameters, model presentation, and tabulation of mineral resources. The skewed grade distribution is handled with a robust semivariogram. The mineral resource classification generated in this model is based on an analysis of the kriging standard deviation and the number of samples used in the estimation of each block. The results are used to evaluate the performance of the OK and IDS estimators. Based on visual and statistical analysis, it is concluded that the OK model gives estimates closer to the data used for modeling.
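The "linear combination of sample grades" principle can be made concrete with the simplest of the three schemes, inverse distance squared. This is a generic sketch of IDS block estimation, not the paper's Pongkor workflow; the coordinates and grades are made up.

```python
import numpy as np

def idw_block_estimate(block_center, sample_xy, sample_grade,
                       power: float = 2.0, eps: float = 1e-12) -> float:
    """Inverse-distance-weighted grade for one block.

    The block grade is a linear combination of sample grades with
    weights 1/d^power; power=2 gives inverse distance squared (IDS).
    """
    d = np.linalg.norm(np.asarray(sample_xy, float)
                       - np.asarray(block_center, float), axis=1)
    w = 1.0 / np.maximum(d, eps) ** power  # eps guards a sample on the block center
    return float(np.sum(w * np.asarray(sample_grade, float)) / np.sum(w))

# Two samples equidistant from the block center -> equal weights
grade = idw_block_estimate((1.0, 0.0), [(0.0, 0.0), (2.0, 0.0)], [1.0, 3.0])  # 2.0
```

Ordinary kriging replaces these fixed distance weights with weights derived from the fitted semivariogram, which is why it can also report a kriging standard deviation for resource classification.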
Green's function based density estimation
Kovesarki, Peter; Brock, Ian C.; Nuncio Quiroz, Adriana Elizabeth [Physikalisches Institut, Universitaet Bonn (Germany)
2012-07-01
A method was developed based on Green's function identities to estimate probability densities. It can be used for likelihood estimation and for binary classification. It offers several advantages over neural networks, boosted decision trees and other regression-based classifiers: for example, it is less prone to overtraining, and it is much easier to combine several samples. Some capabilities are demonstrated using ATLAS data.
Wavelet Based Semi-blind Channel Estimation For Multiband OFDM
Sadough, Sajad; Jaffrot, Emmanuel; Duhamel, Pierre
2007-01-01
This paper introduces an expectation-maximization (EM) algorithm within a wavelet-domain Bayesian framework for semi-blind channel estimation of multiband OFDM-based UWB communications. A prior distribution is chosen for the wavelet coefficients of the unknown channel impulse response in order to model the sparseness of the wavelet representation. This prior yields, in maximum a posteriori estimation, a thresholding rule within the EM algorithm. We particularly focus on reducing the number of estimated parameters by iteratively discarding ``insignificant'' wavelet coefficients from the estimation process. Simulation results using UWB channels issued from both models and measurements show that under sparsity conditions, the proposed algorithm outperforms pilot-based channel estimation in terms of mean square error and bit error rate, and enhances the estimation accuracy with less computational complexity than traditional semi-blind methods.
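The core idea of discarding insignificant wavelet coefficients can be sketched with a one-level Haar transform and hard thresholding. This is a toy stand-in for the paper's MAP-derived thresholding rule inside EM: the transform, the hard threshold, and the function names here are all illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform: approximation + detail coeffs."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def threshold_channel_estimate(h_noisy, thr: float):
    """Hard-threshold detail coefficients, exploiting the sparsity of the
    channel impulse response in the wavelet domain."""
    a, d = haar_dwt(h_noisy)
    d = np.where(np.abs(d) >= thr, d, 0.0)  # discard insignificant coefficients
    return haar_idwt(a, d)
```

With threshold 0 the estimate reproduces the input exactly; as the threshold grows, fewer detail coefficients survive and the number of estimated parameters shrinks, which is the complexity-reduction effect the abstract describes.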
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems: a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition, or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation, which have weaker requirements than the already-popular iterative methods. Specifically, they do not require a first approximation of a root vector, and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation, from the mathematical fundamentals, through algorithm development, to the application of the method in, for instance, aeroplane flight dynamic...
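The appeal of polynomial approximation over iterative root-finding can be shown with the most basic instance: least-squares fitting of a polynomial to time-series data, which needs no initial guess and no derivatives. This is a generic illustration, not the monograph's algorithm; the model and coefficients are made up.

```python
import numpy as np

# Synthetic time series from a known quadratic "dynamic model":
# x(t) = 0.5 t^2 - 3 t + 2
t = np.linspace(0.0, 1.0, 50)
x = np.polyval([0.5, -3.0, 2.0], t)

# Vandermonde design matrix (columns t^2, t, 1), solved by least squares.
# No starting point or gradient of the model is required.
A = np.vander(t, 3)
est, *_ = np.linalg.lstsq(A, x, rcond=None)
# est recovers [0.5, -3.0, 2.0]
```

With noisy measurements the same one-shot solve returns the best-fitting parameters in the least-squares sense, which is the kind of robustness the blurb contrasts with iterative methods.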
Stamatakis, Emmanuel; Hamer, Mark; O'Donovan, Gary; Batty, George David; Kivimaki, Mika
2013-03-01
Cardiorespiratory fitness (CRF) is a key predictor of chronic disease, particularly cardiovascular disease (CVD), but its assessment usually requires exercise testing which is impractical and costly in most health-care settings. Non-exercise testing cardiorespiratory fitness (NET-F)-estimating methods are a less resource-demanding alternative, but their predictive capacity for CVD and total mortality has yet to be tested. The objective of this study is to examine the association of a validated NET-F algorithm with all-cause and CVD mortality. The participants were 32,319 adults (14,650 men) aged 35-70 years who took part in eight Health Survey for England and Scottish Health Survey studies between 1994 and 2003. Non-exercise testing cardiorespiratory fitness (a metabolic equivalent of VO2max) was calculated using age, sex, body mass index (BMI), resting heart rate, and self-reported physical activity. We followed participants for mortality until 2008. Two thousand one hundred and sixty-five participants died (460 cardiovascular deaths) during a mean 9.0 [standard deviation (SD) = 3.6] year follow-up. After adjusting for potential confounders including diabetes, hypertension, smoking, social class, alcohol, and depression, a higher fitness score according to the NET-F was associated with a lower risk of mortality from all-causes (hazard ratio per SD increase in NET-F 0.85, 95% confidence interval: 0.78-0.93 in men; 0.88, 0.80-0.98 in women) and CVD (men: 0.75, 0.63-0.90; women: 0.73, 0.60-0.92). Non-exercise testing cardiorespiratory fitness had a better discriminative ability than any of its components (CVD mortality c-statistic: NET-F = 0.70-0.74; BMI = 0.45-0.59; physical activity = 0.60-0.64; resting heart rate = 0.57-0.61). The sensitivity of the NET-F algorithm to predict events occurring in the highest risk quintile was better for CVD (0.49 in both sexes) than all-cause mortality (0.44 and 0.40 for men and women, respectively). The specificity for all
Activity Recognition Using Biomechanical Model Based Pose Estimation
Reiss, Attila; Hendeby, Gustaf; Bleser, Gabriele; Stricker, Didier
2010-01-01
In this paper, a novel activity recognition method based on signal-oriented and model-based features is presented. The model-based features are calculated from shoulder and elbow joint angles and torso orientation, provided by upper-body pose estimation based on a biomechanical body model. The recognition performance of signal-oriented and model-based features is compared within this paper, and the potential of improving recognition accuracy by combining the two approaches is proved: the accu...
PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...
2014-12-31
Dec 31, 2014 ... analysis revealed that the MLM was the most accurate model ... obtained using the empirical method as the same formula is used ... and applied meteorology, American Meteorological Society, October 1986, vol. 25, pp.