Estimation of scale parameters of logistic distribution by linear functions of sample quantiles
(no author listed)
2001-01-01
The large-sample estimation of the standard deviation of the logistic distribution employs asymptotically best linear unbiased estimators based on sample quantiles. The sample quantiles are determined from a pair of single spacings. Finally, a table of the variances and efficiencies of the estimator for 5 ≤ n ≤ 65 is provided, and a comparison is made with other linear estimators.
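The idea of linear estimation from sample quantiles can be illustrated numerically: the logistic distribution has quantile function Q(p) = μ + s·ln(p/(1−p)), so a symmetric pair of sample quantiles yields a simple estimate of the scale s and hence of the standard deviation sπ/√3. The sketch below uses a single quantile pair at an arbitrary level p = 0.1, not the paper's optimal ABLUE weighting:

```python
import numpy as np

def logistic_scale_from_quantiles(x, p=0.1):
    """Estimate the logistic scale s from a symmetric pair of sample
    quantiles, using Q(1-p) - Q(p) = 2*s*ln((1-p)/p)."""
    lo, hi = np.quantile(x, [p, 1.0 - p])
    return (hi - lo) / (2.0 * np.log((1.0 - p) / p))

rng = np.random.default_rng(0)
x = rng.logistic(loc=3.0, scale=2.0, size=100_000)

s_hat = logistic_scale_from_quantiles(x)
sigma_hat = s_hat * np.pi / np.sqrt(3.0)  # std. dev. of the logistic
print(round(float(s_hat), 2), round(float(sigma_hat), 2))
```

With the true scale s = 2, the standard deviation is 2π/√3 ≈ 3.63; the two-quantile estimate recovers both closely on a large sample.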
Efficient Quantile Estimation for Functional-Coefficient Partially Linear Regression Models
Zhangong ZHOU; Rong JIANG; Weimin QIAN
2011-01-01
Quantile estimation methods are proposed for the functional-coefficient partially linear regression (FCPLR) model, which combines the nonparametric model and the functional-coefficient regression (FCR) model. The local linear scheme and the integrated method are used to obtain local quantile estimators of all unknown functions in the FCPLR model. These resulting estimators are asymptotically normal, but each of them has a large variance. To reduce the variances of these quantile estimators, the one-step backfitting technique is used to obtain efficient quantile estimators of all unknown functions, and their asymptotic normality is derived. Two simulated examples are carried out to illustrate the proposed estimation methodology.
Linear Quantile Mixed Models: The lqmm Package for Laplace Quantile Regression
Marco Geraci
2014-05-01
Inference in quantile analysis has received considerable attention in recent years. Linear quantile mixed models (Geraci and Bottai 2014) represent a flexible statistical tool to analyze data from sampling designs such as multilevel, spatial, panel or longitudinal, which induce some form of clustering. In this paper, I will show how to estimate conditional quantile functions with random effects using the R package lqmm. Modeling, estimation and inference are discussed in detail using a real data example. A thorough description of the optimization algorithms is also provided.
Hierarchical linear regression models for conditional quantiles
TIAN Maozai; CHEN Gemai
2006-01-01
Quantile regression has several useful features and is therefore gradually developing into a comprehensive approach to the statistical analysis of linear and nonlinear response models, but it cannot deal effectively with data that have a hierarchical structure. In practice, the existence of such data hierarchies is neither accidental nor ignorable; it is a common phenomenon. To ignore this hierarchical data structure risks overlooking the importance of group effects, and may also render invalid many of the traditional statistical analysis techniques used for studying data relationships. On the other hand, hierarchical models take a hierarchical data structure into account and also have many applications in statistics, ranging from overdispersion to constructing min-max estimators. However, hierarchical models are essentially mean regression; therefore, they cannot be used to characterize the entire conditional distribution of a dependent variable given high-dimensional covariates. Furthermore, the estimated coefficient vector (marginal effects) is sensitive to an outlier observation on the dependent variable. In this article, a new approach is developed that is based on the Gauss-Seidel iteration and takes full advantage of both quantile regression and hierarchical models. On the theoretical front, we also consider the asymptotic properties of the new method, obtaining simple conditions for n^(1/2)-convergence and asymptotic normality. We also illustrate the use of the technique with real educational data, which are hierarchical, and show how the results can be explained.
Estimation of Conditional Quantile using Neural Networks
Kulczycki, P.; Schiøler, Henrik
1999-01-01
The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency under a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
Quantile Estimation in Dependent Sequences.
1981-09-01
Following Heidelberger and Welch (1980; 1981), we used K = 25 and d = 2, although for extreme quantiles in highly congested queues with short run lengths, d = 3 was … If Y_q(1) ≤ … ≤ Y_q(G) are the order statistics of Y_q1, …, Y_qG, then … is defined by (assuming G is odd) … random variables (NEAR(1) and GNEAR(1) processes; see Lawrance and Lewis (1981a) and (1981b)) and on waiting-time sequences in heavily congested …
On accuracy of upper quantiles estimation
Markiewicz, I.; Strupczewski, W. G.; Kochanek, K.
2010-11-01
Flood frequency analysis (FFA) entails the estimation of the upper tail of a probability density function (PDF) of annual peak flows obtained from either the annual maximum series or partial duration series. In hydrological practice, the properties of various methods of upper quantile estimation are identified with the case of a known population distribution function. In reality, the assumed hypothetical model differs from the true one, and one cannot assess the magnitude of the error caused by model misspecification with respect to any estimated statistic. Nevertheless, the opinion about the accuracy of upper quantile estimation methods formed from the case of a known population distribution function is upheld. This issue is the subject of the paper. The accuracy of large quantile assessments obtained from four estimation methods is compared for the two-parameter log-normal and log-Gumbel distributions and their three-parameter counterparts, i.e., the three-parameter log-normal and GEV distributions. The cases of true and false hypothetical models are considered. The accuracy of flood quantile estimates depends on the sample size and the distribution type (both true and hypothetical), and strongly depends on the estimation method. In particular, the maximum likelihood method loses its advantageous properties in the case of model misspecification.
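The effect of a false hypothetical model on an upper quantile can be illustrated with a toy sketch (not the paper's experiment): annual peaks are drawn from a Gumbel distribution, but a two-parameter log-normal model is fitted by maximum likelihood and used to estimate the 100-year (1% AEP) quantile. The distributions, parameters, and sample size here are illustrative assumptions:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(11)
x = rng.gumbel(loc=100.0, scale=30.0, size=10_000)  # "true" annual peaks

p = 0.99  # non-exceedance probability of the 100-year flood
# True Gumbel quantile: loc - scale * ln(-ln p)
q_true = float(100.0 - 30.0 * np.log(-np.log(p)))

# Hypothetical (misspecified) two-parameter log-normal, fitted by ML:
logx = np.log(x)
mu, sigma = logx.mean(), logx.std()
q_lognormal = float(np.exp(mu + sigma * NormalDist().inv_cdf(p)))

print(round(q_true, 1), round(q_lognormal, 1))
```

Even with a large sample, the misspecified model's upper-tail quantile differs systematically from the true one, which is the phenomenon the paper quantifies.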
Sampling errors of quantile estimations from finite samples of data
Roy, Philippe; Gachon, Philippe
2016-01-01
Empirical relationships are derived for the expected sampling error of quantile estimations using Monte Carlo experiments for two frequency distributions commonly encountered in climate sciences. The relationships found are expressed as a scaling factor times the standard error of the mean, providing a quick tool to estimate the uncertainty of quantiles for a given finite sample size.
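The scaling-factor idea can be reproduced in miniature with a Monte Carlo experiment (a sketch with an assumed Gaussian parent, sample size, and quantile level, not the authors' setup): the sampling standard deviation of an empirical quantile is estimated by repeated sampling and expressed as a multiple of the standard error of the mean.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, p = 100, 20_000, 0.95

# Monte Carlo estimate of the sampling error (standard deviation) of the
# empirical 95th percentile of a standard-normal sample of size n.
samples = rng.standard_normal((reps, n))
q_hat = np.quantile(samples, p, axis=1)
mc_err = q_hat.std(ddof=1)

sem = 1.0 / np.sqrt(n)   # standard error of the mean (sigma = 1)
scaling = mc_err / sem   # quantile error as a multiple of the SEM
print(round(float(mc_err), 3), round(float(scaling), 2))
```

For the normal 95th percentile the asymptotic scaling factor is √(p(1−p))/φ(Q(p)) ≈ 2.1, and the Monte Carlo estimate lands near that value.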
Return-Volatility Relationship: Insights from Linear and Non-Linear Quantile Regression
D.E. Allen (David); A.K. Singh (Abhay); R.J. Powell (Robert); M.J. McAleer (Michael); J. Taylor (James); L. Thomas (Lyn)
2013-01-01
The purpose of this paper is to examine the asymmetric relationship between price and implied volatility and the associated extreme quantile dependence using linear and non-linear quantile regression approaches. Our goal in this paper is to demonstrate that the relationship between the …
Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation
Sun, Ying
2015-09-01
Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.
Simulation and Estimation of Extreme Quantiles and Extreme Probabilities
Guyader, Arnaud, E-mail: arnaud.guyader@uhb.fr [Universite Rennes 2 (France); Hengartner, Nicolas [Los Alamos National Laboratory, Information Sciences Group (United States); Matzner-Lober, Eric [Universite Rennes 2 (France)
2011-10-15
Let X be a random vector with distribution μ on ℝ^d and let Φ be a mapping from ℝ^d to ℝ. That mapping acts as a black box, e.g., the result from some computer experiments for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile, or a quantile given a tail probability. The algorithm improves upon existing multilevel splitting methods and can be analyzed using Poisson process tools that lead to an exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking.
Estimating conditional quantiles with the help of the pinball loss
Steinwart, Ingo (DOI: 10.3150/10-BEJ267)
2011-01-01
The so-called pinball loss for estimating conditional quantiles is a well-known tool in both statistics and machine learning. So far, however, little work has been done to quantify the efficiency of this tool for nonparametric approaches. We fill this gap by establishing inequalities that describe how close approximate pinball risk minimizers are to the corresponding conditional quantile. These inequalities, which hold under mild assumptions on the data-generating distribution, are then used to establish so-called variance bounds, which recently turned out to play an important role in the statistical analysis of (regularized) empirical risk minimization approaches. Finally, we use both types of inequalities to establish an oracle inequality for support vector machines that use the pinball loss. The resulting learning rates are min-max optimal under some standard regularity assumptions on the conditional quantile.
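The pinball (check) loss and its quantile-minimizing property can be sketched as follows. The grid search stands in for a proper optimizer, and the exponential data are an illustrative assumption:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Pinball (check) loss: tau*(y-q) if y >= q, else (tau-1)*(y-q)."""
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(1)
y = rng.exponential(scale=1.0, size=50_000)
tau = 0.9

# Minimize the empirical pinball risk over a grid of candidate values:
grid = np.linspace(0.0, 6.0, 2001)
risks = [pinball_loss(y, q, tau) for q in grid]
q_star = float(grid[int(np.argmin(risks))])

print(round(q_star, 2), round(float(np.quantile(y, tau)), 2))
```

The empirical pinball-risk minimizer coincides (up to grid resolution) with the empirical 0.9-quantile, and both approach the true value ln 10 ≈ 2.30 for the Exp(1) distribution.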
Kernel conditional quantile estimator under left truncation for functional regressors
Nacéra Helal
2016-01-01
Let \(Y\) be a random real response which is subject to left-truncation by another random variable \(T\). In this paper, we study kernel conditional quantile estimation when the covariable \(X\) takes values in an infinite-dimensional space. A kernel conditional quantile estimator is given under some regularity conditions, among them a small-ball probability condition, and its strong uniform almost sure convergence rate is established. Some special cases are studied to show how our work extends results given in the literature. Simulations are drawn to lend further support to our theoretical results and to assess the behavior of the estimator for finite samples with different rates of truncation and sample sizes.
Estimating conditional quantiles with the help of the pinball loss
Steinwart, Ingo [Los Alamos National Laboratory
2008-01-01
Using the so-called pinball loss for estimating conditional quantiles is a well-known tool in both statistics and machine learning. So far, however, only little work has been done to quantify the efficiency of this tool for non-parametric (modified) empirical risk minimization approaches. The goal of this work is to fill this gap by establishing inequalities that describe how close approximate pinball risk minimizers are to the corresponding conditional quantile. These inequalities, which hold under mild assumptions on the data-generating distribution, are then used to establish so-called variance bounds which recently turned out to play an important role in the statistical analysis of (modified) empirical risk minimization approaches. To illustrate the use of the established inequalities, we then use them to establish an oracle inequality for support vector machines that use the pinball loss. Here, it turns out that we obtain learning rates which are optimal in a min-max sense under some standard assumptions on the regularity of the conditional quantile function.
Quantile estimation for a non-geometric ergodic Markov chain
Ramirez-Nafarrate, Adrian; Muñoz, David F.
2013-10-01
Simulation has been successfully used for estimating performance measures (e.g., mean, variance, and quantiles) of complex systems, such as queueing and inventory systems. However, parameter estimation using simulation may be a difficult task under some conditions. In this paper, we present a counterexample for which traditional simulation methods do not allow us to estimate the accuracy of the point estimators of the steady-state mean and risk performance measures. The counterexample is based on a Markov chain with a continuous state space and non-geometric ergodicity. Simulation of this Markov chain shows that neither multiple-replication nor batch-based methodologies can produce asymptotically valid confidence intervals for the point estimators.
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability-weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least squares maximum entropy quantile function method (LSMEQFM) and a variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately from small samples but inaccurately from very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based version being the most stable and accurate on very small samples (10 samples).
Bache, Stefan Holst
A new and alternative quantile regression estimator is developed, and it is shown that the estimator is root-n-consistent and asymptotically normal. The estimator is based on a minimax 'deviance function' and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed model is applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems
1980-03-01
… early in 1950 [1]. This problem has been presented in the literature under a variety of names, including the newsvendor problem and the Christmas tree problem. … Sok, Yong-u. Monterey, California: Naval Postgraduate School. http://hdl.handle.net/10945/19069
Quantiles, parametric-select density estimation, and bi-information parameter estimators
Parzen, E.
1982-01-01
A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.
Hao, Lingxin
2007-01-01
Quantile Regression, the first book of Hao and Naiman's two-book series, establishes the seldom-recognized link between inequality studies and quantile regression models. Though separate methodological literature exists for each subject, the authors seek to explore the natural connections between this increasingly sought-after tool and research topics in the social sciences. Quantile regression as a method does not rely on assumptions as restrictive as those for classical linear regression; though more traditional models such as least squares linear regression are more widely utilized, Hao …
Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit-transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0–3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when the percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m in height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of …
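The three-step procedure (jitter, logit-transform, estimate, back-transform) can be sketched in an intercept-only setting, i.e. estimating a marginal rather than conditional quantile. The bounds, simulated counts, and number of jitter repetitions below are illustrative assumptions, and no covariates or linear quantile regression fit are included:

```python
import numpy as np

def logistic_quantile_count(y, tau, lower=-0.5, upper=3.5, reps=50, rng=None):
    """Intercept-only sketch of logistic quantile estimation for a bounded
    count: (1) jitter the counts to a continuous variable, (2) logit-
    transform to (lower, upper), (3) take the empirical tau-quantile on the
    logit scale, then back-transform (quantiles are equivariant to monotone
    maps). Repeat over jitterings and average."""
    rng = rng or np.random.default_rng(0)
    est = []
    for _ in range(reps):
        z = y + rng.uniform(0.0, 1.0, size=len(y)) - 0.5   # jitter
        u = (z - lower) / (upper - lower)                  # map into (0, 1)
        logit_q = np.quantile(np.log(u / (1.0 - u)), tau)  # quantile on logit scale
        est.append(lower + (upper - lower) / (1.0 + np.exp(-logit_q)))
    return float(np.mean(est))

rng = np.random.default_rng(5)
y = rng.integers(0, 4, size=2000)   # simulated fledgling counts in {0,1,2,3}
med = logistic_quantile_count(y, 0.5, rng=rng)
print(round(med, 2))
```

For counts uniform on {0, 1, 2, 3}, the jittered variable is continuous uniform on (−0.5, 3.5), so the back-transformed median should land near 1.5.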
Shao, Yuehong; Wu, Junmei; Li, Min
2017-01-01
The quantile estimates and spatiotemporal consistency of extreme precipitation are studied by regional linear frequency analysis for the Huaihe River basin in China. Firstly, the study area is categorized into six homogeneous regions by using cluster analysis, a heterogeneity measure, and a discordancy measure. In the next step, we determine the optimum distribution for each homogeneous region by using two criteria: Monte Carlo simulations and the root-mean-square error (RMSE) of the sample L-moments. A diagram of L-moment ratios is used to further judge and validate the optimum distribution. The generalized extreme value (GEV), generalized normal (GNO), and generalized logistic (GLO) distributions for the 24-h duration are determined to be the more appropriate distributions based on the two criteria, the L-moment ratio plot, and the tail thickness of the curve in adjacent regions. A summary assessment can provide the more reasonable distribution, which avoids arbitrary results from a single test. An important practical element of this study that was missing from previous works is the quantile spatiotemporal consistency analysis, which helps identify non-monotonicity among quantiles at different durations and reduces the gradient of estimates in the adjacent regions. Abnormality and spatial discontinuity can be removed by distributing the surplus of the ratio and two different interpolations. A complete set of spatiotemporally consistent quantile estimates for various durations (24 h, 3 days, 5 days, and 7 days) and return periods (from 2 to 1000 years) can be obtained by using the abovementioned method in the study area, which are in agreement with the observed precipitation extremes. This will provide an important basis for hydrometeorological research, which is of significant scientific and practical merit.
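Sample L-moments, the basic ingredient of regional linear frequency analysis, can be computed directly from the probability-weighted moments b0, b1, b2. A minimal sketch using the unbiased PWM estimators, with Gumbel test data as an illustrative assumption:

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) and L-skewness t3,
    computed from unbiased probability-weighted moments b0, b1, b2."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

rng = np.random.default_rng(7)
x = rng.gumbel(loc=0.0, scale=1.0, size=200_000)
l1, l2, t3 = sample_l_moments(x)
print(round(l1, 3), round(l2, 3), round(t3, 3))
```

For the standard Gumbel distribution the population values are l1 = γ ≈ 0.577, l2 = ln 2 ≈ 0.693, and t3 ≈ 0.170, which the sample L-moments reproduce; the L-moment ratio t3 is what the diagrams described in the abstract plot region by region.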
Gupta, Cherry; Cobre, Juliana; Polpo, Adriano; Sinha, Debjayoti
2016-09-01
Existing cure-rate survival models are generally not convenient for modeling and estimating the survival quantiles of a patient with specified covariate values. This paper proposes a novel class of cure-rate model, the transform-both-sides cure-rate model (TBSCRM), that can be used to make inferences about both the cure-rate and the survival quantiles. We develop the Bayesian inference about the covariate effects on the cure-rate as well as on the survival quantiles via Markov Chain Monte Carlo (MCMC) tools. We also show that the TBSCRM-based Bayesian method outperforms existing cure-rate models based methods in our simulation studies and in application to the breast cancer survival data from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) database.
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with nonparametric methods of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
A Kernel-Type Estimator of a Quantile Function under Randomly Truncated Data
(no author listed)
2006-01-01
A kernel-type estimator of the quantile function Q(p) = inf{t: F(t) ≥ p},0 ≤ p ≤ 1, is proposed based on the kernel smoother when the data are subjected to random truncation. The Bahadur-type representations of the kernel smooth estimator are established, and from Bahadur representations the authors can show that this estimator is strongly consistent, asymptotically normal, and weakly convergent.
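A kernel-smoothed quantile estimator of this general kind can be sketched (for untruncated data) as a weighted average of order statistics, with weights obtained by integrating a kernel over the intervals ((i−1)/n, i/n]. The Gaussian kernel and the bandwidth below are illustrative choices, and this sketch omits the random-truncation adjustment that is the paper's subject:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def kernel_quantile(x, p, h=0.05):
    """Kernel-smoothed quantile estimator: a weighted average of the order
    statistics with Gaussian-kernel weights
    w_i = Phi((i/n - p)/h) - Phi(((i-1)/n - p)/h), renormalized."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    grid = np.arange(n + 1) / n
    w = np.array([norm_cdf((grid[i + 1] - p) / h) - norm_cdf((grid[i] - p) / h)
                  for i in range(n)])
    w /= w.sum()
    return float(np.dot(w, xs))

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
q50 = kernel_quantile(x, 0.5)
q90 = kernel_quantile(x, 0.9)
print(round(q50, 2), round(q90, 2))
```

On standard normal data the smoothed estimates should land near the true quantiles 0 and 1.28, with the smoothing trading a little bias for reduced variance relative to the raw sample quantile.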
Over, Thomas; Saito, Riki J.; Veilleux, Andrea; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey
2016-06-28
This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois, based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, generalized skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at …
Estimating Flood Quantiles on the Basis of Multi-Event Rainfall Simulation – Case Study
Jarosińska Elżbieta
2015-12-01
This paper presents an approach to estimating the probability distribution of annual discharges Q based on rainfall-runoff modelling using multiple rainfall events. The approach is based on prior knowledge about the probability distribution of annual maximum daily totals of rainfall P in a natural catchment, random disaggregation of the totals into hourly values, and rainfall-runoff modelling. The presented Multi-Event Simulation of Extreme Flood (MESEF) method combines the design event method based on single-rainfall-event modelling and the continuous simulation method used for estimating the maximum discharges of a given exceedance probability using rainfall-runoff models. In the paper, the flood quantiles were estimated using the MESEF method, and then compared to the flood quantiles estimated using a classical statistical method based on observed data.
Roscoe, K. L.; Weerts, A. H.
2012-04-01
Water level predictions in rivers are used by operational managers to make water management decisions. Such decisions can concern water routing in times of drought, operation of weirs, and actions for flood protection, such as evacuation. Understanding the uncertainty in the predictions can help managers make better-informed decisions. Conditional quantile regression is a method that can be used to determine the uncertainty in forecasted water levels by providing an estimate of the probability density function of the error in the prediction conditional on the forecasted water level. To derive this relationship, a series of forecasts and errors in the forecasts (residuals) are required. Thus, conditional quantile regressions can be derived for locations where both observations and forecasts are available. However, 1D-hydraulic models that are used for operational forecasting produce forecasts at intermediate points where no measurements are available but for which predictive uncertainty estimates are also desired for decision making. The objective of our study is to test if interpolation methods can be used to adequately estimate conditional quantile regressions at these in-between locations. For this purpose, five years of hindcasts were used at seven stations along the IJssel River in the Netherlands. Residuals in water level hindcasts were interpolated at the five intermediate stations. The interpolation was based solely on distance, and the interpolated residuals were compared to the measured residuals at the intermediate stations. The interpolated residuals estimated the measured residuals well, especially for longer lead times. Quantile regression was then carried out using the series of forecasts and interpolated residuals at the intermediate stations. The interpolated quantile regressions were compared with regressions calibrated using the actual residuals at the intermediate stations. Results show that even a simple interpolation based
Fitzenberger, Bernd; Wilke, Ralf Andreas
2015-01-01
Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by m...... treatment of the topic is based on the perspective of applied researchers using quantile regression in their empirical work....
Abobaker M. Jaber
2014-01-01
Empirical mode decomposition (EMD) is particularly useful in analyzing nonstationary and nonlinear time series. However, only partial data within boundaries are available because of the bounded support of the underlying time series. Consequently, the application of EMD to finite time series data results in large biases at the edges by increasing the bias and creating artificial wiggles. This study introduces a new two-stage method to automatically decrease the boundary effects present in EMD. At the first stage, local polynomial quantile regression (LLQ) is applied to provide an efficient description of the corrupted and noisy data. The remaining series is assumed to be hidden in the residuals. Hence, EMD is applied to the residuals at the second stage. The final estimate is the summation of the fitting estimates from LLQ and EMD. Simulation was conducted to assess the practical performance of the proposed method. Results show that the proposed method is superior to classical EMD.
Girinoto, Sadik, Kusman; Indahwati
2017-03-01
The National Socio-Economic Survey samples are designed to produce parameter estimates for planned domains (provinces and districts). Estimation for unplanned domains (sub-districts and villages) is limited in its ability to yield reliable direct estimates. One possible solution to this problem is to employ small area estimation techniques. The popular choice of small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. An alternative approach for this purpose is M-quantile regression for small area estimation, based on modeling quantile-specific coefficients of the conditional distribution of the study variable given auxiliary covariates. It yields outlier-robust estimation through the influence function of the M-estimator type and requires no strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor district. We also compare the results with direct estimates. The results show the framework may be preferable when the direct estimate reports no incidence of poverty at all in a small area.
Haben, Stephen
2016-01-01
We present a model for generating probabilistic forecasts by combining kernel density estimation (KDE) and quantile regression techniques, as part of the probabilistic load forecasting track of the Global Energy Forecasting Competition 2014. The KDE method is initially implemented with a time-decay parameter. We later improve this method by conditioning on the temperature or the period of the week to provide more accurate forecasts. Second, we develop a simple but effective quantile regression forecast. The novel aspects of our methodology are two-fold. First, we introduce symmetry into the time-decay parameter of the kernel density estimation based forecast. Second, we combine three probabilistic forecasts with different weights for different periods of the month.
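A time-decayed KDE of the kind described above can be sketched in a few lines. The function name, the exponential-decay weighting scheme and the Gaussian kernel are illustrative assumptions, not the competition entry itself:

```python
import numpy as np

def time_decay_kde(history, grid, bandwidth=1.0, decay=0.99):
    """Kernel density estimate over `grid`, down-weighting older
    observations by decay**age (the most recent observation has age 0)."""
    history = np.asarray(history, dtype=float)
    grid = np.asarray(grid, dtype=float)
    ages = np.arange(len(history))[::-1]     # last observation has age 0
    w = decay ** ages
    w = w / w.sum()                          # normalise weights
    # Gaussian kernels centred on each past observation
    z = (grid[:, None] - history[None, :]) / bandwidth
    dens = (w[None, :] * np.exp(-0.5 * z**2)).sum(axis=1)
    return dens / (bandwidth * np.sqrt(2 * np.pi))
```

With `decay=1.0` this reduces to an ordinary weighted KDE; smaller values concentrate the estimate on recent load behaviour.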
Conditional Quantile Processes based on Series or Many Regressors
Belloni, Alexandre; Fernandez-Val, Ivan
2011-01-01
Quantile regression (QR) is a principal regression method for analyzing the impact of covariates on outcomes. The impact is described by the conditional quantile function and its functionals. In this paper we develop the nonparametric QR series framework, covering many regressors as a special case, for performing inference on the entire conditional quantile function and its linear functionals. In this framework, we approximate the entire conditional quantile function by a linear combination of series terms with quantile-specific coefficients and estimate the function-valued coefficients from the data. We develop large sample theory for the empirical QR coefficient process, namely we obtain uniform strong approximations to the empirical QR coefficient process by conditionally pivotal and Gaussian processes, as well as by gradient and weighted bootstrap processes. We apply these results to obtain estimation and inference methods for linear functionals of the conditional quantile function, such as the conditiona...
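The conditional quantile function underlying QR is tied to the check (pinball) loss ρ_τ(u) = u(τ − 1{u < 0}). A minimal sketch (function names are mine, not from the paper) shows that minimising this loss over a constant recovers a sample quantile, which is the building block the series estimator generalises to covariates:

```python
import numpy as np

def pinball(u, tau):
    """Check (pinball) loss: tau*u for u >= 0, (tau-1)*u otherwise."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def quantile_by_loss(y, tau):
    """Brute-force minimiser of the empirical pinball loss over a constant.
    A minimiser always lies at a data point, so searching y suffices."""
    losses = [pinball(y - c, tau).sum() for c in y]
    return y[int(np.argmin(losses))]
```

For τ = 0.5 and an odd sample size this returns the sample median; other τ give the corresponding sample quantiles.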
Tail index and quantile estimation with very high frequency data
J. Daníelsson (Jón); C.G. de Vries (Casper)
1997-01-01
A precise estimation of the tail shape of forex returns is of critical importance for proper risk assessment. We improve upon the efficiency of conventional estimators that rely on a first order expansion of the tail shape, by using the second order expansion. Here we advocate a moments
Kowalski, Amanda
2016-01-02
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates
Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh
2016-10-01
We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km² flood-prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the
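Once GEV parameters are in hand, the T-year flood quantile is the quantile with non-exceedance probability 1 − 1/T. A minimal sketch using scipy (not the authors' Bayesian code; note the assumption that scipy's shape parameter c equals −ξ under the sign convention common in hydrology):

```python
import numpy as np
from scipy.stats import genextreme

def gev_flood_quantile(loc, scale, xi, return_period):
    """T-year flood quantile of a GEV(loc, scale, xi) distribution.
    scipy's genextreme uses shape c = -xi relative to the usual
    hydrological parameterisation."""
    p = 1.0 - 1.0 / return_period     # non-exceedance probability
    return genextreme.ppf(p, c=-xi, loc=loc, scale=scale)
```

For ξ = 0 this reduces to the Gumbel quantile loc − scale·ln(−ln p), a useful sanity check.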
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall
Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik
2016-02-01
Rainfall is one of the climatic elements with high diversity and has many negative impacts, especially extreme rainfall. Therefore, several methods are required to minimize the damage that may occur. So far, global circulation models (GCMs) are the best method to forecast global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique to develop the relationship between GCM output as global-scale independent variables and rainfall as a local-scale response variable. GCM output is difficult to assess against observations because it is high-dimensional and its variables are multicollinear. Common methods used to handle this problem are principal components analysis (PCA) and partial least squares regression. A newer alternative is the lasso. The lasso has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression is a method that can be used to detect extreme rainfall in both the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (extreme wet in January, February and December) in Indramayu could be predicted properly by the model at the 90th quantile.
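Quantile regression with an L1 (lasso) penalty can be posed as a single linear program, since both the check loss and the penalty are piecewise linear. The sketch below is illustrative, not the authors' implementation: the intercept is omitted and all coefficients are penalised, both simplifying assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lasso_quantile_regression(X, y, tau=0.9, lam=0.1):
    """Solve min_b sum_i rho_tau(y_i - x_i'b) + lam*||b||_1 as an LP.
    Variables: [b+, b-, u+, u-], all non-negative, with b = b+ - b-
    and residual r = u+ - u-."""
    n, p = X.shape
    c = np.concatenate([lam * np.ones(2 * p),       # L1 penalty on b+/b-
                        tau * np.ones(n),           # positive residuals
                        (1 - tau) * np.ones(n)])    # negative residuals
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
    return res.x[:p] - res.x[p:2 * p]
```

Driving `lam` up shrinks coefficients exactly to zero, which is the automatic variable selection property mentioned above.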
Estimating Quantile Families of Loss Distributions for Non-Life Insurance Modelling via L-Moments
Gareth W. Peters
2016-05-01
This paper discusses different classes of loss models in non-life insurance settings. It then overviews the class of Tukey transform loss models that have not yet been widely considered in non-life insurance modelling, but offer opportunities to produce the flexible skewness and kurtosis features often required in loss modelling. In addition, these loss models admit explicit quantile specifications which make them directly relevant for quantile-based risk measure calculations. We detail various parameterisations and sub-families of the Tukey transform based models, such as the g-and-h, g-and-k and g-and-j models, including their properties of relevance to loss modelling. One of the challenges practitioners face when fitting such models is to perform robust estimation of the model parameters. In this paper we develop a novel, efficient, and robust procedure for estimating the parameters of this family of Tukey transform models, based on L-moments. It is shown to be more efficient than the current state-of-the-art estimation methods for such families of loss models while being simple to implement for practical purposes.
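Sample L-moments are simple linear functions of the order statistics. A sketch of the standard Hosking-style unbiased estimators of the first four L-moments (the function name is illustrative; this is not the authors' fitting procedure for the Tukey family):

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments via the unbiased b_r statistics:
    l1 = b0, l2 = 2*b1 - b0, l3 = 6*b2 - 6*b1 + b0,
    l4 = 20*b3 - 30*b2 + 12*b1 - b0."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    # return location, scale, L-skewness t3 and L-kurtosis t4
    return l1, l2, l3 / l2, l4 / l2
```

For a uniform(0, 1) sample, l1 ≈ 0.5, l2 ≈ 1/6 and t3 ≈ 0, which makes a convenient check.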
Bonde, Casper Stork; Graversen, Carina; Gregersen, Andreas Gregers;
2005-01-01
An important topic in Automatic Speech Recognition (ASR) is to reduce the effect of noise, in particular when mismatch exists between the training and application conditions. Many noise robustness schemes within the feature processing domain use as a prerequisite a noise estimate prior...... to the appearance of the speech signal, which requires noise-robust voice activity detection and assumptions of stationary noise. However, both of these requirements are often not met and it is therefore of particular interest to investigate methods like the Quantile Based Noise Estimation (QBNE) method which...... estimates the noise during speech and non-speech sections without the use of a voice activity detector. While the standard QBNE method uses a fixed pre-defined quantile across all frequency bands, this paper suggests adaptive QBNE (AQBNE), which adapts the quantile individually to each frequency band......
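The core of QBNE is simply a per-frequency-band quantile over the time frames of a power spectrogram, relying on speech being absent (or weak) in most frames of each band; AQBNE replaces the single fixed quantile with a band-specific one. A minimal sketch (function names and the frames-by-bands layout are assumptions):

```python
import numpy as np

def qbne(power, q=0.5):
    """Quantile-Based Noise Estimation: per-band q-quantile over time.
    power: spectrogram of shape (n_frames, n_bands)."""
    return np.quantile(power, q, axis=0)

def aqbne(power, q_per_band):
    """Adaptive QBNE: a separate quantile level per frequency band."""
    return np.array([np.quantile(power[:, k], q_per_band[k])
                     for k in range(power.shape[1])])
```

Because speech bursts occupy only a minority of frames, a low-to-middle quantile tracks the noise floor even without a voice activity detector.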
Quantile regression modeling for Malaysian automobile insurance premium data
Fuzi, Mohd Fadzli Mohd; Ismail, Noriszura; Jemain, Abd Aziz
2015-09-01
Quantile regression is more robust to outliers than mean regression models. Traditional mean regression models like the Generalized Linear Model (GLM) are not able to capture the entire distribution of premium data. In this paper we demonstrate how a quantile regression approach can be used to model net premium data to study the effects of changes in the estimates of regression parameters (rating classes) on the magnitude of the response variable (pure premium). We then compare the results of the quantile regression model with a Gamma regression model. The results from quantile regression show that some rating classes increase as the quantile increases and some decrease with decreasing quantile. Further, we found that the confidence interval of median regression (τ = 0.5) is always smaller than that of Gamma regression across all risk factors.
Estimating earnings losses due to mental illness: a quantile regression approach.
Marcotte, Dave E; Wilcox-Gök, Virginia
2003-09-01
The ability of workers to remain productive and sustain earnings when afflicted with mental illness depends importantly on access to appropriate treatment and on flexibility and support from employers. In the United States there is substantial variation in access to health care and sick leave and other employment flexibilities across the earnings distribution. Consequently, a worker's ability to work and how much his/her earnings are impeded likely depend upon his/her position in the earnings distribution. Because of this, focusing on average earnings losses may provide insufficient information on the impact of mental illness in the labor market. In this paper, we examine the effects of mental illness on earnings by recognizing that effects could vary across the distribution of earnings. Using data from the National Comorbidity Survey, we employ a quantile regression estimator to identify the effects at key points in the earnings distribution. We find that earnings effects vary importantly across the distribution. While average effects are often not large, mental illness more commonly imposes earnings losses at the lower tail of the distribution, especially for women. In only one case do we find an illness to have negative effects across the distribution. Mental illness can have larger negative impacts on economic outcomes than previously estimated, even if those effects are not uniform. Consequently, researchers and policy makers alike should not be placated by findings that mean earnings effects are relatively small. Such estimates miss important features of how and where mental illness is associated with real economic losses for the ill.
Tightness of M-estimators for multiple linear regression in time series
Johansen, Søren; Nielsen, Bent
We show tightness of a general M-estimator for multiple linear regression in time series. The positive criterion function for the M-estimator is assumed lower semi-continuous and sufficiently large for large argument: Particular cases are the Huber-skip and quantile regression. Tightness requires...
Time-adaptive quantile regression
Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik
2008-01-01
An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method...... and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power...... production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered....
Hosseini, Reza
2010-01-01
It is widely claimed that the quantile function is equivariant under increasing transformations. We show by a counterexample that this is not true (even for strictly increasing transformations). However, we show that the quantile function is equivariant under left continuous increasing transformations. We also provide an equivariance relation for continuous decreasing transformations. In the case that the transformation is not continuous, we show that while the transformed quantile at p can be arbitrarily far from the quantile of the transformed at p (in terms of absolute difference), the probability mass between the two is zero. We also show by an example that weighted definition of the median is not equivariant under even strictly increasing continuous transformations.
Quantile regression theory and applications
Davino, Cristina; Vistocco, Domenico
2013-01-01
A guide to the implementation and interpretation of quantile regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues concerning the validity of the model and diagnostic tools. Each methodological aspect is explored and
Strupczewski, Witold G.; Bogdanowich, Ewa; Debele, Sisay
2016-04-01
Under Polish climate conditions the series of Annual Maxima (AM) flows are usually a mixture of peak flows of thaw- and rainfall-originated floods. The northern, lowland regions are dominated by snowmelt floods whilst in mountainous regions the proportion of rainfall floods is predominant. In many stations the majority of AM can be of snowmelt origin, but the greatest peak flows come from rainfall floods, or vice versa. In a warming climate, precipitation is less likely to occur as snowfall. A shift from a snow- towards a rain-dominated regime results in a decreasing trend in the mean and standard deviation of winter peak flows, whilst rainfall floods do not exhibit any trace of non-stationarity. That is why simple forms of trend (i.e., linear trends) are more difficult to identify in AM time series than in Seasonal Maxima (SM), usually winter-season time series. Hence it is recommended to analyse trends in SM, where a trend in the standard deviation strongly influences the time-dependent upper quantiles. The uncertainty associated with the extrapolation of the trend makes it necessary to apply a trend relationship whose time derivative tends to zero, e.g. we can assume that a new climate equilibrium epoch is approaching, or that the time horizon is limited by the validity of the trend model. For both winter and summer SM time series, at least three distribution functions with a trend model in the location, scale and shape parameters are estimated by means of the GAMLSS package using ML techniques. The resulting trend estimates in mean and standard deviation are compared to the observed trends. Then, using AIC measures as weights, a multi-model distribution is constructed for each of the two seasons separately. Further, assuming mutual independence of the seasonal maxima, an AM model with time-dependent parameters can be obtained. The use of a multi-model approach can alleviate the effects of different and often contradictory trends obtained by using and identifying
Quantile Regression in the Study of Developmental Sciences
Petscher, Yaacov; Logan, Jessica A. R.
2014-01-01
Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of…
Cannon, Alex J.
2011-09-01
The qrnn package for R implements the quantile regression neural network, which is an artificial neural network extension of linear quantile regression. The model formulation follows from previous work on the estimation of censored regression quantiles. The result is a nonparametric, nonlinear model suitable for making probabilistic predictions of mixed discrete-continuous variables like precipitation amounts, wind speeds, or pollutant concentrations, as well as continuous variables. A differentiable approximation to the quantile regression error function is adopted so that gradient-based optimization algorithms can be used to estimate model parameters. Weight penalty and bootstrap aggregation methods are used to avoid overfitting. For convenience, functions for quantile-based probability density, cumulative distribution, and inverse cumulative distribution functions are also provided. Package functions are demonstrated on a simple precipitation downscaling task.
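The differentiable approximation mentioned above can be built by Huberising the check function near zero, so that gradient-based optimisers see a smooth surface. A sketch (the epsilon threshold and function names are illustrative assumptions, not necessarily qrnn's exact formulation):

```python
import numpy as np

def huber(u, eps=1e-3):
    """Huber function: quadratic within |u| <= eps, linear beyond."""
    a = np.abs(u)
    return np.where(a <= eps, u**2 / (2 * eps), a - eps / 2)

def smooth_pinball(u, tau, eps=1e-3):
    """Differentiable approximation to the check function rho_tau(u):
    replaces the kink at u = 0 with a quadratic of width eps."""
    return np.where(u >= 0, tau, 1 - tau) * huber(u, eps)
```

As eps shrinks, the approximation converges to the exact pinball loss, so eps trades off fidelity against smoothness for the optimiser.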
Linear Minimum variance estimation fusion
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that a general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random parameter case of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of local estimates, given by a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix C_k. Third, if the a priori information (the expectation and covariance) of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
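For the special case of mutually uncorrelated local estimates, the LMV fusion rule reduces to the familiar information-weighted average. A sketch under that simplifying assumption (the paper's general formula also covers coupled measurement noises, which this does not):

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Fuse unbiased local estimates x_i with covariances P_i, assumed
    mutually uncorrelated: x = (sum P_i^-1)^-1 * sum P_i^-1 x_i.
    Returns the fused estimate and its covariance."""
    infos = [np.linalg.inv(P) for P in covariances]   # information matrices
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused
```

Each local estimate is weighted by its information (inverse covariance), so the fused covariance is never larger than any single sensor's.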
Non-Stationary Hydrologic Frequency Analysis using B-Splines Quantile Regression
Nasri, B.; St-Hilaire, A.; Bouezmarni, T.; Ouarda, T.
2015-12-01
Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information for planning, design and management of hydraulic structures and water resources systems under the assumption of stationarity. However, with increasing evidence of a changing climate, it is possible that the assumption of stationarity is no longer valid and the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extreme flows based on B-splines quantile regression, which allows modelling of non-stationary data that depend on covariates. Such covariates may have linear or nonlinear dependence. A Markov Chain Monte Carlo (MCMC) algorithm is used to estimate quantiles and their posterior distributions. A coefficient of determination for quantile regression is proposed to evaluate the estimation of the proposed model at each quantile level. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in these variables and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharge with high annual non-exceedance probabilities. Keywords: quantile regression, B-splines functions, MCMC, streamflow, climate indices, non-stationarity.
Quantiles for Finite Mixtures of Normal Distributions
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
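A mixture quantile has no closed form, but it can be computed by root-finding on the mixture CDF; the point emphasised above is precisely that this differs from a linear combination of component quantiles (or densities). A sketch with illustrative names:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def mixture_quantile(p, weights, means, sds):
    """p-quantile of a finite normal mixture, found by inverting its CDF.
    Note: this is NOT a weighted combination of component quantiles."""
    cdf = lambda x: sum(w * norm.cdf(x, m, s)
                        for w, m, s in zip(weights, means, sds))
    # bracket the root well outside all components
    lo = min(means) - 10 * max(sds)
    hi = max(means) + 10 * max(sds)
    return brentq(lambda x: cdf(x) - p, lo, hi)
```

Since the mixture CDF is strictly increasing, `brentq` on a wide bracket always finds the unique quantile.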
On multivariate quantiles under partial ordering
Belloni, Alexandre
2009-01-01
This paper focuses on generalizing quantiles from the ordering point of view. We propose the concept of partial quantiles based on a given partial order. We establish that partial quantiles are equivariant under partial-order-preserving transformations of the data, display a concentration of measure phenomenon, generalize the concept of efficient frontier, and can measure dispersion from the partial order perspective. We also study several statistical aspects of partial quantiles. We provide estimators, associated rates of convergence, and asymptotic distributions that hold uniformly over a continuum of quantile indices. Furthermore, we provide procedures that can restore monotonicity properties that might have been disturbed by estimation error, and establish computational complexity bounds. Finally, we illustrate the concepts by discussing several theoretical examples and simulations. Empirical applications to compare intake nutrients within diets and to evaluate the performance of investment funds ar...
Hallin, Marc; Šiman, Miroslav; 10.1214/09-AOS723
2010-01-01
A new multivariate concept of quantile, based on a directional version of Koenker and Bassett's traditional regression quantiles, is introduced for multivariate location and multiple-output regression problems. In their empirical version, those quantiles can be computed efficiently via linear programming techniques. Consistency, Bahadur representation and asymptotic normality results are established. Most importantly, the contours generated by those quantiles are shown to coincide with the classical halfspace depth contours associated with the name of Tukey. This relation does not only allow for efficient depth contour computations by means of parametric linear programming, but also for transferring from the quantile to the depth universe such asymptotic results as Bahadur representations. Finally, linear programming duality opens the way to promising developments in depth-related multivariate rank-based inference.
Odry, Jean; Arnaud, Patrick
2016-04-01
The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km² mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by a high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using an areal reduction factor technique that is unique over the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is independently calibrated for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter can be regionalised with acceptable performance. The re-evaluation of some of the method's hypotheses is a necessary step before the regionalisation. In particular, the inclusion or modification of the spatial variability of imposed parameters (such as production and transfer reservoir size, base flow addition and the quantile aggregation function) should lead to more realistic values of the only calibrated parameter. The objective of the work presented
On Bayes linear unbiased estimation of estimable functions for the singular linear model
ZHANG Weiping; WEI Laisheng
2005-01-01
The unique Bayes linear unbiased estimator (Bayes LUE) of estimable functions is derived for the singular linear model. The superiority of Bayes LUE over the ordinary best linear unbiased estimator is investigated under the mean square error matrix (MSEM) criterion.
Estimators for a Large Quantile and the Upper Endpoint
何腊梅
2014-01-01
The estimation of the extreme value index is a primary problem in extreme value theory. In this paper, based on a Pickands-type estimator for the extreme value index, estimators for a large quantile and the upper endpoint of a probability distribution are established. Furthermore, the asymptotic properties of these estimators are discussed.
Kai, Bo; Li, Runze; Zou, Hui
2011-02-01
The complexity of semiparametric models poses new challenges to statistical inference and model selection that frequently arise from real applications. In this work, we propose new estimation and variable selection procedures for the semiparametric varying-coefficient partially linear model. We first study quantile regression estimates for the nonparametric varying-coefficient functions and the parametric regression coefficients. To achieve nice efficiency properties, we further develop a semiparametric composite quantile regression procedure. We establish the asymptotic normality of proposed estimators for both the parametric and nonparametric parts and show that the estimators achieve the best convergence rate. Moreover, we show that the proposed method is much more efficient than the least-squares-based method for many non-normal errors and that it only loses a small amount of efficiency for normal errors. In addition, it is shown that the loss in efficiency is at most 11.1% for estimating varying coefficient functions and is no greater than 13.6% for estimating parametric components. To achieve sparsity with high-dimensional covariates, we propose adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and prove that the methods possess the oracle property. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. Finally, we apply the new methods to analyze the plasma beta-carotene level data.
Functional data analysis of generalized regression quantiles
Guo, Mengmeng
2013-11-05
Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.
Non-crossing weighted kernel quantile regression with right censored data.
Bang, Sungwan; Eo, Soo-Heang; Cho, Yong Mee; Jhun, Myoungshic; Cho, HyungJun
2016-01-01
Regarding survival data analysis in regression modeling, multiple conditional quantiles are useful summary statistics to assess covariate effects on survival times. In this study, we consider an estimation problem of multiple nonlinear quantile functions with right censored survival data. To account for censoring in estimating a nonlinear quantile function, weighted kernel quantile regression (WKQR) has been developed by using the kernel trick and inverse-censoring-probability weights. However, the individually estimated quantile functions based on the WKQR often cross each other and consequently violate the basic properties of quantiles. To avoid this problem of quantile crossing, we propose the non-crossing weighted kernel quantile regression (NWKQR), which estimates multiple nonlinear conditional quantile functions simultaneously by enforcing the non-crossing constraints on kernel coefficients. The numerical results are presented to demonstrate the competitive performance of the proposed NWKQR over the WKQR.
吕亚召; 张日权; 赵为华; 刘吉彩
2014-01-01
This paper proposes a composite minimizing average check loss estimation (CMACLE) method to implement composite quantile regression (CQR) for partial linear single-index models (PLSIM). A consistent estimator of the parametric part, in the CQR sense, is first constructed using a high-dimensional kernel function; building on this consistent estimator, estimators of the parametric and nonparametric functions attaining the optimal convergence rates are then obtained via a single-index kernel function, and their asymptotic normality is established. The relative asymptotic efficiency of the CQR estimator of the PLSIM is compared with that of the minimum average variance estimation (MAVE). Furthermore, a variable selection method for the PLSIM under the CQR framework is proposed and shown to possess the oracle property. Simulation studies and a real data analysis verify the finite-sample performance of the proposed methods and confirm their merits.
A quantile count model of water depth constraints on Cape Sable seaside sparrows
Cade, B.S.; Dong, Q.
2008-01-01
1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous-year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that the negative effects of increasing water depth in breeding habitat on sparrow numbers depended on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. The greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous-year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous-year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous-year counts ≥ 1, conditions where the negative effect of water depth was readily apparent and fitted better by the quantile count model.
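The jittering device on which quantile count models of this kind are built can be sketched in a few lines. The following is a minimal illustration with simulated data; the variable names, the Poisson data-generating process and the chosen quantile are assumptions for the demo, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical counts vs. water depth (cm), loosely mimicking the setting.
depth = rng.uniform(0, 30, size=500)
counts = rng.poisson(np.exp(1.0 - 0.04 * depth))

# Jittering (Machado & Santos Silva 2005): adding Uniform(0, 1) noise to a
# count gives a continuous variable whose conditional quantiles can be fit
# with ordinary linear quantile regression and then mapped back to counts.
z = counts + rng.uniform(size=counts.shape)

def to_count_scale(q_z):
    # A fitted quantile q_z of the jittered variable corresponds to the
    # count quantile ceil(q_z) - 1, floored at zero.
    return np.maximum(np.ceil(q_z) - 1, 0)

# Back-transform an upper quantile of the jittered variable.
print(to_count_scale(np.quantile(z, 0.9)))
```

Because the jitter lies in [0, 1), flooring the jittered variable always recovers the original counts, which is what makes the back-transform well defined.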
Defining Sample Quantiles by the True Rank Probability
Lasse Makkonen
2014-01-01
Many definitions exist for sample quantiles and are included in statistical software. The need to adopt a standard definition of sample quantiles has been recognized, and different definitions have been compared in terms of satisfying some desirable properties, but no consensus has been found. We outline here that comparisons of the sample quantile definitions are irrelevant because the probabilities associated with order-ranked sample values are known exactly. Accordingly, the standard definition for sample quantiles should be based on the true rank probabilities. We show that this allows more accurate inference of the tails of the distribution, and thus improves estimation of the probability of extreme events.
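The contrast between the rank probability i/(n+1) and a common software convention can be made concrete. The toy data below are an assumption for illustration; the point is only that the two conventions diverge in the tail:

```python
import numpy as np

x = np.sort(np.array([3.1, 1.2, 4.7, 2.5, 5.9]))
n = len(x)

# The probability of non-exceedance of the i-th order statistic has
# expectation i/(n+1) regardless of the parent distribution -- the
# "true rank probability" the paper argues for.
p_rank = np.arange(1, n + 1) / (n + 1)

# Software definitions disagree; e.g. NumPy's default ("linear",
# Hyndman-Fan type 7) effectively uses (i-1)/(n-1) instead.
p_type7 = np.arange(0, n) / (n - 1)

# Interpolating the order statistics at a tail probability gives
# different quantile estimates under the two conventions.
q_rank = np.interp(0.9, p_rank, x)    # clamps to the largest observation
q_type7 = np.interp(0.9, p_type7, x)  # interpolates between the top two
print(q_rank, q_type7)
```

The gap between the two 0.9-quantile estimates is exactly the kind of tail discrepancy the abstract refers to.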
Linear parameter estimation of rational biokinetic functions
Doeswijk, T.G.; Keesman, K.J.
2009-01-01
For rational biokinetic functions such as the Michaelis-Menten equation, in general, a nonlinear least-squares method is a good estimator. However, a major drawback of a nonlinear least-squares estimator is that it can end up in a local minimum. Rearranging and linearizing rational biokinetic
Fliess, Michel; Sira-Ramirez, Hebertt
2007-01-01
Non-linear state estimation and some related topics, like parametric estimation, fault diagnosis, and perturbation attenuation, are tackled here via a new methodology in numerical differentiation. The corresponding basic system-theoretic definitions and properties are presented within the framework of differential algebra, which permits handling system variables and their derivatives of any order. Several academic examples and their computer simulations, with on-line estimations, illustrate our viewpoint.
Surface tensor estimation from linear sections
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
2015-01-01
From Crofton's formula for Minkowski tensors we derive stereological estimators of translation-invariant surface tensors of convex bodies in n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design-based setting we suggest three types of estimators. These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model-based setting.
Fliess, Michel; Join, Cédric; Sira-Ramirez, Hebertt
2008-01-01
Non-linear state estimation and some related topics, like parametric estimation, fault diagnosis, and perturbation attenuation, are tackled here via a new methodology in numerical differentiation. The corresponding basic system-theoretic definitions and properties are presented within the framework of differential algebra, which permits handling system variables and their derivatives of any order. Several academic examples and their computer simulations, with on-line ...
Group Lasso for high dimensional sparse quantile regression models
Kato, Kengo
2011-01-01
This paper studies the statistical properties of the group Lasso estimator for high dimensional sparse quantile regression models where the number of explanatory variables (or the number of groups of explanatory variables) is possibly much larger than the sample size while the number of variables in "active" groups is sufficiently small. We establish a non-asymptotic bound on the $\ell_2$-estimation error of the estimator. This bound explains situations under which the group Lasso estimator is potentially superior/inferior to the $\ell_1$-penalized quantile regression estimator in terms of the estimation error. We also propose a data-dependent choice of the tuning parameter to make the method more practical, by extending the original proposal of Belloni and Chernozhukov (2011) for the $\ell_1$-penalized quantile regression estimator. As an application, we analyze high dimensional additive quantile regression models. We show that under a set of primitive regularity conditions, the group Lasso estimator c...
Relative linear power contribution with estimation statistics
Lohnberg, P.
1983-01-01
The relative contribution of a noiselessly observed input signal to the power of a possibly disturbed observed stationary output signal from a linear system is expressed in terms of signal spectral densities. Approximations of estimator statistics and derived confidence limits agree fairly well with
Semiparametric Quantile Modelling of Hierarchical Data
Mao Zai TIAN; Man Lai TANG; Ping Shing CHAN
2009-01-01
The classic hierarchical linear model formulation provides considerable flexibility for modelling the random effects structure and a powerful tool for analyzing nested data that arise in various areas such as biology, economics and education. However, it assumes the within-group errors to be independently and identically distributed (i.i.d.) and the models at all levels to be linear. Most importantly, traditional hierarchical models (just like other ordinary mean regression methods) cannot characterize the entire conditional distribution of a dependent variable given a set of covariates and fail to yield robust estimators. In this article, we relax the i.i.d. and normality assumptions and develop so-called Hierarchical Semiparametric Quantile Regression Models, in which the within-group errors may be heteroscedastic and the models at some levels are allowed to be nonparametric. We present the ideas with a 2-level model. The level-1 model is specified as a nonparametric model, whereas the level-2 model is set as a parametric model. Under the proposed semiparametric setting, the vector of partial derivatives of the nonparametric function in level 1 becomes the response variable vector in level 2. The proposed method allows us to model the fixed effects in the innermost level (i.e., level 2) as a function of the covariates instead of as a constant effect. We outline some mild regularity conditions required for convergence and asymptotic normality of our estimators. We illustrate our methodology with a real hierarchical data set from a laboratory study and some simulation studies.
Estimation of linear functionals in emission tomography
Kuruc, A.
1995-08-01
In emission tomography, the spatial distribution of a radioactive tracer is estimated from a finite sample of externally-detected photons. We present an algorithm-independent theory of statistical accuracy attainable in emission tomography that makes minimal assumptions about the underlying image. Let f denote the tracer density as a function of position (i.e., f is the image being estimated). We consider the problem of estimating the linear functional Φ(f) ≡ ∫ φ(x) f(x) dx, where φ is a smooth function, from n independent observations identically distributed according to the Radon transform of f. Assuming only that f is bounded above and below away from 0, we construct statistically efficient estimators for Φ(f). By definition, the variance of the efficient estimator is a best-possible lower bound (depending on φ and f) on the variance of unbiased estimators of Φ(f). Our results show that, in general, the efficient estimator will have a smaller variance than the standard estimator based on the filtered-backprojection reconstruction algorithm. The improvement in performance is obtained by exploiting the range properties of the Radon transform.
Quantile regression applied to spectral distance decay
Rocchini, D.; Cade, B.S.
2008-01-01
Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trends in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The estimated decay rates were statistically nonzero, with greater species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
Quantile equivalence to evaluate compliance with habitat management objectives
Cade, Brian S.; Johnson, Pamela R.
2011-01-01
Equivalence estimated with linear quantile regression was used to evaluate compliance with habitat management objectives at Arapaho National Wildlife Refuge based on monitoring data collected in upland (5,781 ha; n = 511 transects) and riparian and meadow (2,856 ha; n = 389 transects) habitats from 2005 to 2008. Quantiles were used because the management objectives specified proportions of the habitat area that needed to comply with vegetation criteria. The linear model was used to obtain estimates that were averaged across 4 y. The equivalence testing framework allowed us to interpret confidence intervals for estimated proportions with respect to intervals of vegetative criteria (equivalence regions) in either a liberal, benefit-of-doubt or conservative, fail-safe approach associated with minimizing alternative risks. Simple Boolean conditional arguments were used to combine the quantile equivalence results for individual vegetation components into a joint statement for the multivariable management objectives. For example, management objective 2A required at least 809 ha of upland habitat with a shrub composition ≥0.70 sagebrush (Artemisia spp.), 20–30% canopy cover of sagebrush ≥25 cm in height, ≥20% canopy cover of grasses, and ≥10% canopy cover of forbs on average over 4 y. Shrub composition and canopy cover of grass each were readily met on >3,000 ha under either conservative or liberal interpretations of sampling variability. However, there were only 809–1,214 ha (conservative to liberal) with ≥10% forb canopy cover and 405–1,098 ha with 20–30% canopy cover of sagebrush ≥25 cm in height. Only 91–180 ha of uplands simultaneously met criteria for all four components, primarily because canopy cover of sagebrush and forbs was inversely related when considered at the spatial scale (30 m) of a sample transect. We demonstrate how the quantile equivalence analyses also can help refine the numerical specification of habitat objectives and explore
A quantile regression approach for modelling a Health-Related Quality of Life Measure
Giulia Cavrini
2013-05-01
Objective. The aim of this study is to propose a new approach for modeling the EQ-5D index and EQ-VAS in order to explain the effect of lifestyle determinants using quantile regression analysis. Methods. Data were collected within a cross-sectional study that involved a probabilistic sample of 1,622 adults randomly selected from the population registers of two Health Authorities of Bologna in northern Italy. The perceived health status of people was measured using the EQ-5D questionnaire. The Visual Analogue Scale included in the EQ-5D questionnaire (the EQ-VAS) and the EQ-5D index were used to obtain synthetic measures of quality of life. To model the EQ-VAS score and EQ-5D index, a quantile regression analysis was employed. Quantile regression is a way to estimate the conditional quantiles of the VAS score distribution in a linear model, in order to have a more complete view of possible associations between a measure of Health-Related Quality of Life (the dependent variable) and socio-demographic and determinant data. This methodological approach was preferred to an OLS regression because of the typical distributions of the EQ-VAS score and EQ-5D index. Main Results. The analysis suggested that age, gender, and comorbidity can explain variability in perceived health status measured by the EQ-5D index and the VAS.
An Entropic Estimator for Linear Inverse Problems
Amos Golan
2012-05-01
In this paper we examine an information-theoretic method for solving noisy linear inverse estimation problems which encompasses under a single framework a whole class of estimation methods. Under this framework, the prior information about the unknown parameters (when such information exists) and constraints on the parameters can be incorporated in the statement of the problem. The method builds on the basics of the maximum entropy principle and consists of transforming the original problem into the estimation of a probability density on an appropriate space naturally associated with the statement of the problem. This estimation method is generic in the sense that it provides a framework for analyzing non-normal models, is easy to implement and is suitable for all types of inverse problems, such as small, ill-conditioned or noisy data. First-order approximations, large sample properties and convergence in distribution are developed as well. Analytical examples, and statistics for model comparisons and evaluations that are inherent to this method, are discussed and complemented with explicit examples.
On Linear Coherent Estimation with Spatial Collaboration
Kar, Swarnendu
2012-01-01
We consider a power-constrained sensor network, consisting of multiple sensor nodes and a fusion center (FC), that is deployed for the purpose of estimating a common random parameter of interest. In contrast to the distributed framework, the sensor nodes are allowed to update their individual observations by (linearly) combining observations from neighboring nodes. The updated observations are communicated to the FC using an analog amplify-and-forward modulation scheme and through a coherent multiple access channel. The optimal collaborative strategy is obtained by minimizing the cumulative transmission power subject to a maximum distortion constraint. For the distributed scenario (i.e., with no observation sharing), the solution reduces to the power-allocation problem considered by [Xiao, TSP08]. Collaboration among neighbors significantly improves power efficiency of the network in the low local-SNR regime, as demonstrated through an insightful example and numerical simulations.
Statistical modelling with quantile functions
Gilchrist, Warren
2000-01-01
Galton used quantiles more than a hundred years ago in describing data. Tukey and Parzen used them in the 60s and 70s in describing populations. Since then, the authors of many papers, both theoretical and practical, have used various aspects of quantiles in their work. Until now, however, no one had put all the ideas together to form what turns out to be a general approach to statistics. Statistical Modelling with Quantile Functions does just that. It systematically examines the entire process of statistical modelling, starting with using the quantile function to define continuous distributions. The author shows that by using this approach, it becomes possible to develop complex distributional models from simple components. A modelling kit can be developed that applies to the whole model - deterministic and stochastic components - and this kit operates by adding, multiplying, and transforming distributions rather than data. Statistical Modelling with Quantile Functions adds a new dimension to the practice of stati...
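The "adding distributions" idea can be demonstrated in a few lines: the sum of two quantile functions is itself a valid quantile function, and sampling from the resulting model is just inverse-transform sampling. The component distributions and weights below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Component quantile functions, each monotone increasing on (0, 1).
logistic_Q = lambda p: np.log(p / (1 - p))
exponential_Q = lambda p: -np.log(1 - p)

# Addition rule: a weighted sum of quantile functions defines a new,
# skewed distribution directly through its quantile function.
def model_Q(p, loc=0.0, scale_l=1.0, scale_e=0.5):
    return loc + scale_l * logistic_Q(p) + scale_e * exponential_Q(p)

# Inverse-transform sampling comes for free: X = Q(U), U ~ Uniform(0, 1).
u = rng.uniform(size=100_000)
x = model_Q(u)

# The empirical median should match Q(0.5) = 0.5 * ln(2).
print(np.median(x), model_Q(0.5))
```

Note that the model is manipulated entirely through Q(p); no density or distribution function is ever written down, which is the point of the quantile-function approach.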
Quantile regression provides a fuller analysis of speed data.
Hewson, Paul
2008-03-01
Considerable interest already exists in assessing percentiles of speed distributions; for example, monitoring the 85th percentile speed is a common feature of the investigation of many road safety interventions. However, unlike the mean, where t-tests and ANOVA can be used to provide evidence of a statistically significant change, inference on these percentiles is much less common. This paper examines the potential role of quantile regression for modelling the 85th percentile, or any other quantile. Given that crash risk may increase disproportionately with increasing relative speed, it may be argued these quantiles are of more interest than the conditional mean. In common with the more usual linear regression, quantile regression admits a simple test as to whether the 85th percentile speed has changed following an intervention, in an analogous way to using the t-test to determine if the mean speed has changed, by considering the significance of parameters fitted to a design matrix. Having outlined the technique and briefly examined an application with a widely published dataset concerning speed measurements taken around the introduction of signs in Cambridgeshire, this paper will demonstrate the potential for quantile regression modelling by examining recent data from Northamptonshire collected in conjunction with a "community speed watch" programme. Freely available software is used to fit these models and it is hoped that the potential benefits of using quantile regression methods when examining and analysing speed data are demonstrated.
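Quantile regression replaces squared error with the asymmetric "check" (pinball) loss. A minimal numpy sketch of the scalar case, with simulated speed data assumed purely for illustration, shows that the minimizer of the summed check loss is exactly a sample 85th percentile:

```python
import numpy as np

def check_loss(u, tau):
    # Koenker's check (pinball) loss: tau*u for u >= 0, (tau - 1)*u otherwise.
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(1)
speeds = rng.normal(50, 8, size=1001)  # hypothetical spot speeds, mph
tau = 0.85

# Minimizing the summed check loss over a scalar recovers the tau-th
# sample quantile; quantile *regression* replaces the scalar with a
# linear predictor X @ beta and minimizes the same loss.
losses = [check_loss(speeds - c, tau).sum() for c in speeds]
q85 = speeds[np.argmin(losses)]

# With n*tau non-integer, the minimizer is the ceil(n*tau)-th order statistic.
print(q85, np.sort(speeds)[850])
```

In practice one would fit the conditional version with a linear programming solver (e.g. R's quantreg or statsmodels' QuantReg); the brute-force search here is only to make the loss geometry visible.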
THE RATES OF CONVERGENCE OF M-ESTIMATORS FOR PARTLY LINEAR MODELS IN DEPENDENT CASES
SHI Peide; CHEN Xiru
1996-01-01
Consider the partly linear model Y_i = X_i′β_0 + g_0(T_i) + e_i, where {(T_i, X_i)} is a strictly stationary sequence of random variables, the e_i are i.i.d. random errors, the Y_i are real-valued responses, β_0 is a d-vector of parameters, X_i is a d-vector of explanatory variables, and T_i is another explanatory variable ranging over a nondegenerate compact interval. Based on a segment of observations (T_1, X_1′, Y_1), …, (T_n, X_n′, Y_n), this article investigates the rates of convergence of the M-estimators for β_0 and g_0 obtained from the minimization problem min_{β ∈ R^d, g_n ∈ F_n} Σ_{i=1}^{n} ρ(Y_i − X_i′β − g_n(T_i)), where F_n is a space of B-spline functions of order m+1 and ρ(·) is a suitably chosen function. Under some regularity conditions, it is shown that the estimator of g_0 achieves the optimal global rate of convergence of estimators for nonparametric regression, and the estimator of β_0 is asymptotically normal. The M-estimators here include regression quantile estimators, L_1-estimators, L_p-norm estimators, Huber-type M-estimators and the usual least squares estimators. Applications of the asymptotic theory to testing the hypothesis H_0: A′β_0 = β are also discussed, where β is a given vector and A is a known d×d_0 matrix with rank d_0.
Quantile uncertainty and value-at-risk model risk.
Alexander, Carol; Sarabia, José María
2012-08-01
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
Sequential Confidence Bands for Quantile Densities Under Truncated and Censored Data
Yong Zhou; Liu-quan Sun
2005-01-01
In this paper an asymptotic distribution is obtained for the maximal deviation between the kernel quantile density estimator and the quantile density when the data are subject to random left truncation and right censorship. Based on this result we propose a fully sequential procedure for constructing a fixed-width confidence band for the quantile density on a finite interval and show that the procedure has the desired coverage probability asymptotically as the width of the band approaches zero.
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.
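The MSE gap the abstract refers to is easy to reproduce. The sketch below compares plain LS with the LMMSE benchmark computed from known statistics; the dimensions, noise level and white-x assumption are illustrative choices, and the BDU-ILS iteration itself is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, noise_var, trials = 20, 10, 4.0, 200
err_ls = err_lmmse = 0.0

for _ in range(trials):
    A = rng.normal(size=(n, m))
    x = rng.normal(size=m)  # white, unit-variance parameter vector
    y = A @ x + rng.normal(scale=np.sqrt(noise_var), size=n)

    # Ordinary LS: unbiased, but high MSE at this low SNR.
    x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

    # LMMSE with the true statistics known -- the regularized estimator
    # the BDU-ILS procedure is reported to approach for white x.
    x_lmmse = np.linalg.solve(A.T @ A + noise_var * np.eye(m), A.T @ y)

    err_ls += np.sum((x_ls - x) ** 2) / trials
    err_lmmse += np.sum((x_lmmse - x) ** 2) / trials

print(err_ls, err_lmmse)
```

The regularization term noise_var * I shrinks the solution toward zero, trading a little bias for a large variance reduction, which is why the averaged squared error of the LMMSE estimate comes out well below that of plain LS.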
Adjoint Error Estimation for Linear Advection
Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S
2011-03-30
An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
Estimation for the simple linear Boolean model
2006-01-01
We consider the simple linear Boolean model, a fundamental coverage process also known as the Markov/General/infinity queue. In the model, line segments of independent and identically distributed length are located at the points of a Poisson process. The segments may overlap, resulting in a pattern of "clumps"-regions of the line that are covered by one or more segments-alternating with uncovered regions or "spacings". Study and application of the model have been impeded by the difficult...
Linear Factor Models and the Estimation of Expected Returns
Sarisoy, Cisil; de Goeij, Peter; Werker, Bas
2016-01-01
Linear factor models of asset pricing imply a linear relationship between expected returns of assets and exposures to one or more sources of risk. We show that exploiting this linear relationship leads to statistical gains of up to 31% in variances when estimating expected returns on individual asse
Linear Factor Models and the Estimation of Expected Returns
Sarisoy, Cisil; de Goeij, Peter; Werker, Bas
2015-01-01
Estimating expected returns on individual assets or portfolios is one of the most fundamental problems of finance research. The standard approach, using historical averages, produces noisy estimates. Linear factor models of asset pricing imply a linear relationship between expected returns and exposures...
Algorithms for non-linear M-estimation
Madsen, Kaj; Edlund, O; Ekblom, H
1997-01-01
a sequence of estimation problems for linearized models is solved. In the testing, we apply four estimators to ten non-linear data fitting problems. The test problems are also solved by the generalized Levenberg-Marquardt method and the standard BFGS optimization method. It turns out that the new method...
Robust control of robots via linear estimated state feedback
Berghuis, Harry; Nijmeijer, Henk
1994-01-01
In this note we propose a robust tracking controller for robots that requires only position measurements. The controller consists of two parts: a linear observer part that generates an estimated error state from the error on the joint position, and a linear feedback part that utilizes this estimated...
Anke L B Günther
BACKGROUND: Breastfeeding may lower chronic disease risk by long-term effects on hormonal status and adiposity, but the relations remain uncertain. OBJECTIVE: To prospectively investigate the association of breastfeeding with the growth hormone (GH)/insulin-like growth factor (IGF) axis, insulin sensitivity, body composition and body fat distribution in younger adulthood (18-37 years). DESIGN: Data from 233 (54% female) participants of a German cohort, the Dortmund Nutritional and Anthropometric Longitudinally Designed (DONALD) Study, with prospective data on infant feeding were analyzed. Multivariable linear as well as quantile regression were performed with full breastfeeding (not: ≤2 weeks, short: 3-17 weeks, long: >17 weeks) as exposure and adult IGF-I, IGF binding proteins (IGFBP)-1, -2 and -3, homeostasis model assessment of insulin resistance (HOMA-IR), fat mass index, fat-free mass index, and waist circumference as outcomes. RESULTS: After adjustment for early-life and socio-economic factors, women who had been breastfed longer displayed higher adult IGFBP-2 (p(trend) = 0.02) and lower values of HOMA-IR (p(trend) = 0.004). Furthermore, in women breastfeeding duration was associated with a lower mean fat mass index (p(trend) = 0.01), fat-free mass index (p(trend) = 0.02) and waist circumference (p(trend) = 0.004) in young adulthood. However, there was no relation to IGF-I, IGFBP-1 or IGFBP-3 (all p(trend) > 0.05). Associations for IGFBP-2 and fat mass index were more pronounced at higher percentiles of the distribution, and for waist circumference at very low or high percentiles. In men, there was no consistent relation of breastfeeding with any outcome. CONCLUSIONS: Our data suggest that breastfeeding may have long-term, favorable effects on extremes of adiposity and insulin metabolism in women, but not in men. In both sexes, breastfeeding does not seem to induce programming of the GH-IGF axis.
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
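The LoLinR interface itself is not reproduced here, but the core technique the package wraps, a kernel-weighted local linear fit whose slope coefficient is the rate estimate, can be sketched as follows (function name and bandwidth choice are illustrative, not LoLinR's API):

```python
import numpy as np

def local_linear_slope(x, y, x0, h):
    """Estimate the slope of y(x) at x0 by weighted least squares on a
    locally linear model, using a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[1]  # local slope = estimated rate at x0

# Exactly linear series: the local slope recovers the true rate of 2.
x = np.linspace(0.0, 10.0, 101)
y = 2.0 * x + 1.0
print(local_linear_slope(x, y, x0=5.0, h=1.0))  # ≈ 2.0
```

In practice the bandwidth h (or, in LoLinR, the local window) controls how much of the noisy series influences the rate estimate at each point.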
Efficient estimation of moments in linear mixed models
Wu, Ping; Zhu, Li-Xing; 10.3150/10-BEJ330
2012-01-01
In the linear random effects model, when distributional assumptions such as normality of the error variables cannot be justified, moments may serve as alternatives to describe relevant distributions in neighborhoods of their means. Generally, estimators may be obtained as solutions of estimating equations. It turns out that there may be several equations, each of them leading to consistent estimators, in which case finding the efficient estimator becomes a crucial problem. In this paper, we systematically study estimation of moments of the errors and random effects in linear mixed models.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Virtual estimator for piecewise linear systems based on observability analysis.
Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés, Luis G; Beltrán, Carlos Daniel García
2013-02-27
This article proposes a virtual sensor for piecewise linear systems, based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents a detector of the active mode when the commutation sequences of the linear subsystems are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow its output to be estimated. A methodology for testing the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to illustrate the results.
Focused information criterion and model averaging based on weighted composite quantile regression
Xu, Ganggang
2013-08-13
We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion-based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.
A new estimate of the parameters in linear mixed models
王松桂; 尹素菊
2002-01-01
In linear mixed models there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects, with many statistical optimalities. The new method is applied to two important models used in the economic, financial, and mechanical fields. All estimates obtained have good statistical and practical meaning.
Juan ZHAO; Yunmin ZHU
2009-01-01
The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors give a comprehensive discussion of the relationship between the two estimates. Firstly, the authors consider the classical linear model, in which the coefficient matrix of the linear model is deterministic, and derive the necessary and sufficient condition for equivalence of the two estimates. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information about the parameter being estimated. Secondly, the authors consider the linear model with a random coefficient matrix, called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information about the parameter.
A Frisch-Newton Algorithm for Sparse Quantile Regression
Roger Koenker; Pin Ng
2005-01-01
Recent experience has shown that interior-point methods using a log-barrier approach are far superior to classical simplex methods for computing solutions to large parametric quantile regression problems. In many large empirical applications, the design matrix has a very sparse structure. A typical example is the classical fixed-effect model for panel data, where the parametric dimension of the model can be quite large but the number of non-zero elements is quite small. Adopting recent developments in sparse linear algebra, we introduce a modified version of the Frisch-Newton algorithm for quantile regression described in Portnoy and Koenker [28]. The new algorithm substantially reduces the storage (memory) requirements and increases computational speed. The modified algorithm also facilitates the development of nonparametric quantile regression methods. The pseudo design matrices employed in nonparametric quantile regression smoothing are inherently sparse in both the fidelity and roughness penalty components. Exploiting the sparse structure of these problems opens up a whole range of new possibilities for multivariate smoothing on large data sets via ANOVA-type decomposition and partial linear models.
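The Frisch-Newton interior-point iteration itself is beyond a short sketch, but the linear program it solves can be illustrated with a generic solver. This is a minimal sketch assuming SciPy is available; the real algorithm exploits sparsity rather than forming dense matrices as done here:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Fit a tau-th quantile regression via its standard LP formulation:
    min tau*1'u + (1-tau)*1'v  s.t.  X b + u - v = y,  u, v >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])        # [X | I | -I]
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)  # b free, u,v >= 0
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Exactly linear data: the median (tau = 0.5) fit recovers the coefficients.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 1, 50)])
y = X @ np.array([1.0, 3.0])
print(quantile_regression(X, y, tau=0.5))  # ≈ [1.0, 3.0]
```

The interior-point (log-barrier) machinery discussed above solves exactly this LP, but in a form where the sparse structure of X can be exploited.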
The quantile spectral density and comparison based tests for nonlinear time series
Lee, Junbum
2011-01-01
In this paper we consider tests for nonlinear time series, which are motivated by the notion of serial dependence. The proposed tests are based on comparisons with the quantile spectral density, which can be considered as a quantile version of the usual spectral density function. The quantile spectral density 'measures' the sequential dependence structure of a time series and is well defined under relatively weak mixing conditions. We propose an estimator for the quantile spectral density and derive its asymptotic sampling properties. We use the quantile spectral density to construct a goodness-of-fit test for time series and explain how this test can also be used for comparing the sequential dependence structure of two time series. The method is illustrated with simulations and some real data examples.
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the models considered because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
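A minimal sketch of the aggregate-then-regress approach evaluated above, on synthetic data (all numbers hypothetical, not the Upper Mississippi data):

```python
import numpy as np

# Hypothetical monitoring data: several samples per year (sampling variation
# around a true linear trend), aggregated to yearly means before fitting.
rng = np.random.default_rng(1)
years = np.arange(2000, 2010)
true_trend = 0.5
samples = 5.0 + true_trend * (years - 2000)[:, None] \
          + rng.normal(0, 0.1, (10, 8))            # 8 samples per year

yearly_mean = samples.mean(axis=1)                 # aggregation step
slope, intercept = np.polyfit(years - 2000, yearly_mean, 1)
print(round(slope, 2))  # close to the true trend of 0.5
```

As the abstract notes, this simple approach recovers the trend well, but the averaging step hides the split between sampling and process variation that state-space models are designed to separate.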
Vuori, Kaarina; Strandén, Ismo; Sevón-Aimonen, Marja-Liisa; Mäntysaari, Esa A
2006-01-01
A method based on Taylor series expansion for estimation of location parameters and variance components of non-linear mixed effects models was considered. An attractive property of the method is the opportunity for an easily implemented algorithm. Estimation of non-linear mixed effects models can be done by common methods for linear mixed effects models, and thus existing programs can be used after small modifications. The applicability of this algorithm in animal breeding was studied with simulation using a Gompertz function growth model in pigs. Two growth data sets were analyzed: a full set containing observations from the entire growing period, and a truncated time trajectory set containing animals slaughtered prematurely, which is common in pig breeding. The results from the 50 simulation replicates with full data set indicate that the linearization approach was capable of estimating the original parameters satisfactorily. However, estimation of the parameters related to adult weight becomes unstable in the case of a truncated data set.
Unstable volatility functions: the break preserving local linear estimator
Casas, Isabel; Gijbels, Irene
The objective of this paper is to introduce the break preserving local linear (BPLL) estimator for the estimation of unstable volatility functions. Breaks in the structure of the conditional mean and/or the volatility functions are common in Finance. Markov switching models (Hamilton, 1989) and t...
CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS
Liu Jixue; Chen Xiru
2005-01-01
Consistency of the LS estimator of the simple linear EV model is studied. It is shown that under some common assumptions of the model, weak and strong consistency of the estimator are equivalent, but this is not so for quadratic-mean consistency.
A least squares estimation method for the linear learning model
B. Wierenga (Berend)
1978-01-01
The author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a...
A Direct Estimation Approach to Sparse Linear Discriminant Analysis
Cai, Tony
2011-01-01
This paper considers sparse linear discriminant analysis of high-dimensional data. In contrast to the existing methods, which are based on separate estimation of the precision matrix $\Omega$ and the difference $\delta$ of the mean vectors, we introduce a simple and effective classifier by estimating the product $\Omega\delta$ directly through constrained $\ell_1$ minimization. The estimator can be implemented efficiently using linear programming, and the resulting classifier is called the linear programming discriminant (LPD) rule. The LPD rule is shown to have desirable theoretical and numerical properties. It exploits the approximate sparsity of $\Omega\delta$ and as a consequence allows cases where it can still perform well even when $\Omega$ and/or $\delta$ cannot be estimated consistently. Asymptotic properties of the LPD rule are investigated, and consistency and rate of convergence results are given. The LPD classifier has superior finite sample performance and significant computational advantages over the existing methods that req...
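For intuition, the classical plug-in discriminant rule that the LPD rule targets can be sketched in low dimension, where the precision matrix can simply be obtained by solving a linear system; the LPD rule itself replaces this step with constrained l1 estimation of the product in high dimension:

```python
import numpy as np

def fisher_rule(x, mu1, mu2, sigma):
    """Naive plug-in Fisher rule: assign x to class 1 when
    (x - (mu1 + mu2)/2)' Omega delta > 0, with Omega = inv(sigma),
    delta = mu1 - mu2. (LPD estimates Omega*delta directly instead.)"""
    beta = np.linalg.solve(sigma, mu1 - mu2)  # Omega * delta
    return 1 if (x - (mu1 + mu2) / 2) @ beta > 0 else 2

mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
print(fisher_rule(np.array([0.9, 0.1]), mu1, mu2, sigma))   # → 1
print(fisher_rule(np.array([-0.8, 0.2]), mu1, mu2, sigma))  # → 2
```

In the high-dimensional setting of the paper, sigma is not reliably invertible, which is precisely why the product is estimated directly.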
Adaptive Unified Biased Estimators of Parameters in Linear Model
Hu Yang; Li-xing Zhu
2004-01-01
To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions have been proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.
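As one concrete member of the class of biased estimators discussed above, the ridge estimator can be sketched as follows (a generic illustration, not the paper's unified framework):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives least squares.
    A positive k trades a little bias for reduced variance when the
    design matrix is ill-conditioned."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Nearly collinear columns: OLS (k = 0) is numerically fragile, while a
# small ridge penalty stabilizes the solve.
rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 1e-4 * rng.normal(size=100)])
y = X @ np.array([1.0, 1.0]) + 0.01 * rng.normal(size=100)
print(ridge(X, y, k=0.1))  # shrunken, stable coefficients (sum ≈ 2)
```

The sufficient conditions in the paper characterize when such a biased estimator beats least squares uniformly over the parameter space.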
Spectral Experts for Estimating Mixtures of Linear Regressions
Chaganty, Arun Tejasvi; Liang, Percy
2013-01-01
Discriminative latent-variable models are typically learned using EM or gradient-based optimization, which suffer from local optima. In this paper, we develop a new computationally efficient and provably consistent estimator for a mixture of linear regressions, a simple instance of a discriminative latent-variable model. Our approach relies on a low-rank linear regression to recover a symmetric tensor, which can be factorized into the parameters using a tensor power method. We prove rates of ...
ROBUST ESTIMATION IN PARTIAL LINEAR MIXED MODEL FOR LONGITUDINAL DATA
Qin Guoyou; Zhu Zhongyi
2008-01-01
In this article, a robust generalized estimating equation for the analysis of partial linear mixed models for longitudinal data is used. The authors approximate the nonparametric function by a regression spline. Under some regularity conditions, the asymptotic properties of the estimators are obtained. To avoid the computation of a high-dimensional integral, a robust Monte Carlo Newton-Raphson algorithm is used. Some simulations are carried out to study the performance of the proposed robust estimators. In addition, the authors study the robustness and the efficiency of the proposed estimators by simulation. Finally, two real longitudinal data sets are analyzed.
Estimation in partial linear EV models with replicated observations
CUI; Hengjian
2004-01-01
The aim of this work is to construct the parameter estimators in partial linear errors-in-variables (EV) models and explore their asymptotic properties. Unlike related references, the assumption of a known error covariance matrix is removed when the sample can be repeatedly drawn at each design point from the model. The estimators of the regression parameters of interest, the model error variance, and the nonparametric function are constructed. Under some regularity conditions, all of the estimators are proved to be strongly consistent. Meanwhile, the asymptotic normality of the estimator of the regression parameter is also presented. A simulation study is reported to illustrate our asymptotic results.
Adaptive quasi-likelihood estimate in generalized linear models
CHEN Xia; CHEN Xiru
2005-01-01
This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal, and the covariance matrix of the limit distribution coincides with that of the quasi-likelihood estimator for the case where the covariance matrix of the response variable is completely known.
Penalized maximum likelihood estimation for generalized linear point processes
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper with extensions to multivariate and additive model specifications. The methods are implemented in the R package ppstat.
Estimating WISC-IV indexes: proration versus linear scaling.
Glass, Laura A; Ryan, Joseph J; Bartels, Jared M; Morris, Jeri
2008-10-01
This investigation compared proration and linear scaling for estimating Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) verbal comprehension (VCI) and perceptual reasoning (PRI) composites from all relevant two-subtest combinations. Using 57 primary school students and 41 clinical referrals, actual VCI and PRI scores were highly correlated with estimated index scores based on proration and linear scaling (all rs ≥ .90). In the school sample, significant mean score differences between the actual and estimated composites were found in two comparisons; however, differences between mean scores were less than three points. No significant differences emerged in the clinical sample. Results indicate that any of the two-subtest combinations produced reasonably accurate estimates of actual indexes. There was no advantage of one computational method over the other. Copyright 2008 Wiley Periodicals, Inc.
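The arithmetic behind the two estimation methods can be sketched with hypothetical numbers (illustrative formulas only; actual Wechsler scoring uses normed tables):

```python
# Proration scales the sum of the k administered subtest scaled scores up
# to the full K-subtest sum; linear scaling instead maps the short-form
# sum onto the metric of the full composite via standardization.

def prorate(scores, k_full):
    """Scale the partial sum of scaled scores up to the full battery."""
    return sum(scores) * k_full / len(scores)

def linear_scale(scores, mean_short, sd_short, mean_full, sd_full):
    """Map the short-form sum onto the full-composite metric via z-scores.
    The means and SDs here are hypothetical norming constants."""
    z = (sum(scores) - mean_short) / sd_short
    return mean_full + z * sd_full

# Two of three subtests administered, scaled scores 12 and 10:
print(prorate([12, 10], k_full=3))                   # → 33.0
print(linear_scale([12, 10], 20.0, 5.0, 30.0, 7.5))  # ≈ 33.0
```

With consistent norming constants the two methods often agree closely, which matches the finding above that neither method had a clear advantage.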
Error Estimation for the Linearized Auto-Localization Algorithm
Fernando Seco
2012-02-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on such an approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
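The linearized trilateration step that LAL builds on can be sketched for the noise-free case (a generic formulation; variable names are illustrative): subtracting the first range equation from the others removes the quadratic term and leaves a linear system in the unknown position.

```python
import numpy as np

def trilaterate(beacons, d):
    """Linearized trilateration: subtracting the first equation
    |x - b_0|^2 = d_0^2 from |x - b_i|^2 = d_i^2 yields the linear system
    2 (b_i - b_0)' x = d_0^2 - d_i^2 + |b_i|^2 - |b_0|^2."""
    b0, d0 = beacons[0], d[0]
    A = 2.0 * (beacons[1:] - b0)
    rhs = d0**2 - d[1:]**2 + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2)
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_true = np.array([3.0, 4.0])
d = np.linalg.norm(beacons - x_true, axis=1)  # noise-free ranges
print(trilaterate(beacons, d))                # ≈ [3.0, 4.0]
```

With noisy ranges the same least-squares solve applies, and the first-order Taylor analysis described above quantifies how range errors propagate into the estimated positions.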
Estimation linear model using block generalized inverse of a matrix
Jasińska, Elżbieta; Preweda, Edward
2013-01-01
The work shows the principle of the generalized linear model and point estimation, which can be used as a basis for determining the status of movements and deformations of engineering objects. The structural model can be subjected to any boundary conditions, for example, to ensure the continuity of the deformations. Estimation by the method of least squares was carried out taking into account the Gauss-Markov conditions for quadratic forms, stored using the Lagrange function. The original sol...
Regime variance testing - a quantile approach
Gajda, Janusz; Wyłomańska, Agnieszka
2012-01-01
This paper is devoted to testing time series that exhibit behavior related to two or more regimes with different statistical properties. The motivation of our study is two real data sets from plasma physics with an observable two-regime structure. We develop an estimation procedure for the critical point at which the structure of the time series changes, and we propose three tests for recognizing such behavior. The presented methodology is based on the empirical second moment, and its main advantage is the lack of distributional assumptions. Moreover, we express the examined statistical properties in the language of empirical quantiles of the squared data, so the methodology is an extension of the approach known from the literature. We confirm the theoretical results by simulations and by analysis of real data from turbulent laboratory plasma.
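A crude sketch of locating a variance change point from the squared observations, in the spirit of the empirical-second-moment idea described above (the split criterion here is illustrative, not the paper's exact statistic):

```python
import numpy as np

def variance_change_point(x):
    """Pick the index splitting x into two segments with the smallest
    total within-segment variance of the squared observations."""
    x2 = x**2
    costs = [k * np.var(x2[:k]) + (len(x2) - k) * np.var(x2[k:])
             for k in range(2, len(x2) - 2)]
    return int(np.argmin(costs)) + 2

# Two regimes with different scales; the break is at index 300.
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 5, 300)])
print(variance_change_point(x))  # close to the true break at 300
```

The paper's tests go further by working with empirical quantiles of the squared data, which avoids distributional assumptions that a raw variance criterion implicitly relies on.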
Spatial Quantile Regression In Analysis Of Healthy Life Years In The European Union Countries
Trzpiot Grażyna
2016-12-01
The paper investigates the impact of selected factors on the healthy life years of men and women in the EU countries. Multiple quantile spatial autoregression models are used in order to account for substantial differences in healthy life years and life quality across the EU members. Quantile regression allows studying dependencies between variables in different quantiles of the response distribution. Moreover, this statistical tool is robust against violations of the classical regression assumption about the distribution of the error term. Parameters of the models were estimated using the instrumental variable method (Kim, Muller 2004), whereas the confidence intervals and p-values were bootstrapped.
Admissibilities of linear estimator in a class of linear models with a multivariate t error variable
(Anonymous)
2010-01-01
This paper discusses admissibilities of estimators in a class of linear models, which includes the following common models: the univariate and multivariate linear models, the growth curve model, the extended growth curve model, the seemingly unrelated regression equations, the variance components model, and so on. It is proved that admissible estimators of functions of the regression coefficient β in the class of linear models with multivariate t error terms, referred to as Model II, are also admissible in the case that the error terms have a multivariate normal distribution, under a strictly convex loss function or a matrix loss function. It is also proved under Model II that the usual estimators of β are admissible for p ≤ 2 with a quadratic loss function, and are admissible for any p with a matrix loss function, where p is the dimension of β.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Bistatic Sonar Localization Based on Best Linear Unbiased Estimation
(Anonymous)
2007-01-01
A best linear unbiased estimation (BLUE) algorithm for bistatic sonar localization is proposed. The Cramer-Rao bound for bistatic sonar and the geometrical dilution of precision (GDOP) under different conditions are given. Simulation results show that the location accuracy of the BLUE algorithm is higher than that of the weighted least squares method.
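The generic BLUE computation underlying the algorithm, here in its generalized least squares form and without the bistatic sonar geometry, can be sketched as:

```python
import numpy as np

def blue(X, y, sigma):
    """Best linear unbiased (generalized least squares) estimator:
    beta = (X' S^{-1} X)^{-1} X' S^{-1} y for error covariance S."""
    Si = np.linalg.inv(sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# Heteroscedastic errors: BLUE downweights the noisy observations,
# whereas ordinary least squares weights all of them equally.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.uniform(0, 1, 200)])
sd = np.where(np.arange(200) < 100, 0.1, 2.0)  # two noise regimes
y = X @ np.array([2.0, -1.0]) + sd * rng.normal(size=200)
print(blue(X, y, np.diag(sd**2)))  # ≈ [2.0, -1.0]
```

In the localization setting, X would encode the linearized measurement geometry and sigma the range-measurement error covariance.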
SNR Estimation in Linear Systems with Gaussian Matrices
Suliman, Mohamed A.
2017-09-27
This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) of a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one-sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model together with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge of the signal or the noise. Simulation results show that the proposed method is very accurate.
Yee LEUNG; WU Kefa; DONG Tianxin
2001-01-01
In this paper, a multivariate linear functional relationship model in which the covariance matrix of the observational errors is not restricted is considered. The parameter estimation of this model is discussed. The estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.
Competing Risks Quantile Regression at Work
Dlugosz, Stephan; Lo, Simon M. S.; Wilke, Ralf
2017-01-01
Despite its emergence as a frequently used method for the empirical analysis of multivariate data, quantile regression is yet to become a mainstream tool for the analysis of duration data. We present a pioneering empirical study on the grounds of a competing risks quantile regression model. We use...
Unconditional quantile regressions to determine the social gradient of obesity in Spain 1993-2014.
Rodriguez-Caro, Alejandro; Vallejo-Torres, Laura; Lopez-Valcarcel, Beatriz
2016-10-19
There is a well-documented social gradient in obesity in most developed countries. Many previous studies have conventionally categorised individuals according to their body mass index (BMI), focusing on those above a certain threshold and thus ignoring a large amount of the BMI distribution. Others have used linear BMI models, relying on mean effects that may mask substantial heterogeneity in the effects of socioeconomic variables across the population. In this study, we measure the social gradient of the BMI distribution of the adult population in Spain over the past two decades (1993-2014), using unconditional quantile regressions. We use three socioeconomic variables (education, income and social class) and evaluate differences in the corresponding effects on different percentiles of the log-transformed BMI distribution. Quantile regression methods have the advantage of estimating the socioeconomic effect across the whole BMI distribution allowing for this potential heterogeneity. The results showed a large and increasing social gradient in obesity in Spain, especially among females. There is, however, a large degree of heterogeneity in the socioeconomic effect across the BMI distribution, with patterns that vary according to the socioeconomic indicator under study. While the income and educational gradient is greater at the end of the BMI distribution, the main impact of social class is around the median BMI values. A steeper social gradient is observed with respect to educational level rather than household income or social class. The findings of this study emphasise the heterogeneous nature of the relationship between social factors and obesity across the BMI distribution as a whole. Quantile regression methods might provide a more suitable framework for exploring the complex socioeconomic gradient of obesity.
Linear minimax estimation for random vectors with parametric uncertainty
Bitar, E
2010-06-01
In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y , where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which is modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
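The paper reduces the exact linear minimax problem to a semidefinite program. As a much cruder illustration of the worst-case idea, the sketch below only searches over the LMMSE gains that are optimal at each vertex covariance and picks the one with the smallest worst-case MSE over the vertices; this brute-force restriction is a hypothetical simplification for intuition, not the paper's SDP method. The covariance matrices are made up.

```python
import numpy as np

def lmmse_gain(cov, nx):
    """LMMSE gain K for x_hat = K y, given the joint covariance of (x, y)."""
    Cxy = cov[:nx, nx:]
    Cyy = cov[nx:, nx:]
    return Cxy @ np.linalg.inv(Cyy)

def mse(K, cov, nx):
    """MSE E||x - K y||^2 of a linear estimator K under a given covariance."""
    Cxx, Cxy, Cyy = cov[:nx, :nx], cov[:nx, nx:], cov[nx:, nx:]
    return np.trace(Cxx - K @ Cxy.T - Cxy @ K.T + K @ Cyy @ K.T)

# Two vertex covariances of the uncertainty set (zero-mean, scalar x and y).
C1 = np.array([[2.0, 0.9], [0.9, 1.0]])
C2 = np.array([[2.0, -0.9], [-0.9, 1.5]])
candidates = [lmmse_gain(C, 1) for C in (C1, C2)]
worst = [max(mse(K, C, 1) for C in (C1, C2)) for K in candidates]
K_best = candidates[int(np.argmin(worst))]  # best of the restricted candidates
```

The true minimax gain generally lies outside this two-element candidate set, which is exactly why the paper's SDP formulation is needed.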
Quantile forecast discrimination ability and value
Bouallegue, Zied Ben; Friederichs, Petra
2015-01-01
While probabilistic forecast verification for categorical forecasts is well established, some of the existing concepts and methods have not found their equivalent for the case of continuous variables. New tools dedicated to the assessment of forecast discrimination ability and forecast value are introduced here, based on quantile forecasts being the base product for the continuous case (hence in a nonparametric framework). The relative user characteristic (RUC) curve and the quantile value plot allow analysing the performance of a forecast for a specific user in a decision-making framework. The RUC curve is designed as a user-based discrimination tool and the quantile value plot translates forecast discrimination ability in terms of economic value. The relationship between the overall value of a quantile forecast and the respective quantile skill score is also discussed. The application of these new verification approaches and tools is illustrated based on synthetic datasets, as well as for the case of global...
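The quantile skill score mentioned above is built from the quantile (pinball) loss. A minimal sketch, with a climatological reference forecast; the synthetic data and the toy "sharp" forecast are assumptions for illustration only.

```python
import numpy as np

def pinball_loss(obs, forecast, tau):
    """Mean quantile (pinball) loss of a tau-quantile forecast."""
    d = obs - forecast
    return np.mean(np.where(d >= 0, tau * d, (tau - 1) * d))

def quantile_skill_score(obs, forecast, reference, tau):
    """Skill relative to a reference forecast (1 = perfect, 0 = no skill)."""
    return 1.0 - pinball_loss(obs, forecast, tau) / pinball_loss(obs, reference, tau)

rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)
clim = np.full_like(obs, np.quantile(obs, 0.9))   # climatological reference
sharp = clim + rng.normal(0.0, 0.5, size=obs.size)  # hypothetical competing forecast
qss = quantile_skill_score(obs, sharp, clim, 0.9)
```

A perfect forecast has zero pinball loss, so its skill score is 1; the RUC curve and quantile value plot in the paper then translate such scores into user-specific discrimination and value.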
Mäntysaari Esa A
2006-06-01
A method based on Taylor series expansion for the estimation of location parameters and variance components of non-linear mixed effects models is considered. An attractive property of the method is that it leads to an easily implemented algorithm. Estimation of non-linear mixed effects models can be carried out with common methods for linear mixed effects models, so existing programs can be used after small modifications. The applicability of this algorithm in animal breeding was studied by simulation using a Gompertz function growth model in pigs. Two growth data sets were analyzed: a full set containing observations from the entire growing period, and a truncated time trajectory set containing animals slaughtered prematurely, which is common in pig breeding. The results from the 50 simulation replicates with the full data set indicate that the linearization approach was capable of estimating the original parameters satisfactorily. However, estimation of the parameters related to adult weight becomes unstable in the case of a truncated data set.
Precise Asymptotics of Error Variance Estimator in Partially Linear Models
Shao-jun Guo; Min Chen; Feng Liu
2008-01-01
In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7,8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results for partially linear regression models.
Autchariyapanitkul, K; S Chanaim; Sriboonchitta, S; DENOEUX, T
2014-01-01
We consider an inference method for prediction based on belief functions in quantile regression with an asymmetric Laplace distribution. We apply this method to the capital asset pricing model to estimate the beta coefficient and measure volatility under various market conditions at given quantiles. Likelihood-based belief functions are constructed from historical data of the securities in the S&P500 market. The results give us evidence on the systematic risk, in the f...
Quantile treatment effects of job loss on health.
Schiele, Valentin; Schmitz, Hendrik
2016-09-01
Studies on health effects of job loss mostly estimate mean effects. We argue that the effects might differ over the distribution of the health status and use quantile regression methods to provide a more complete picture. To take the potential endogeneity of job loss into account, we estimate quantile treatment effects where we rely on job loss due to plant closures. We find that the effect of job loss indeed varies across the mental and physical health distribution. Job loss due to plant closures affects physical health adversely for individuals in the middle and lower part of the health distribution while those in best physical condition do not seem to be affected. The results for mental health, though less distinct, point in the same direction. We find no effects on BMI.
Remote sensing image fusion based on Bayesian linear estimation
GE ZhiRong; WANG Bin; ZHANG LiMing
2007-01-01
A new remote sensing image fusion method based on statistical parameter estimation is proposed in this paper. More specifically, Bayesian linear estimation (BLE) is applied to observation models between remote sensing images with different spatial and spectral resolutions. The proposed method only estimates the mean vector and covariance matrix of the high-resolution multispectral (MS) images, instead of assuming the joint distribution between the panchromatic (PAN) image and the low-resolution multispectral image. Furthermore, the proposed method can enhance the spatial resolution of several principal components of MS images, while the traditional Principal Component Analysis (PCA) method is limited to enhancing only the first principal component. Experimental results with real MS images and a PAN image of Landsat ETM+ demonstrate that the proposed method performs better than traditional methods based on statistical parameter estimation, the PCA-based method and the wavelet-based method.
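The Bayesian linear estimator at the core of such methods has a closed form once a prior mean/covariance for the unknown and a noise covariance are specified. The sketch below shows the generic estimator only; the observation operator, dimensions, and covariances are invented and do not correspond to the paper's imaging model.

```python
import numpy as np

def ble_estimate(y, A, mu_x, C_x, R):
    """Bayesian linear (Gauss-Markov) estimate of x from y = A x + noise,
    using only the prior mean/covariance of x and the noise covariance R."""
    S = A @ C_x @ A.T + R             # innovation covariance
    K = C_x @ A.T @ np.linalg.inv(S)  # estimator gain
    return mu_x + K @ (y - A @ mu_x)

rng = np.random.default_rng(2)
mu_x = np.array([1.0, -2.0, 0.5])
C_x = np.diag([4.0, 1.0, 2.0])
A = rng.normal(size=(2, 3))           # hypothetical low-resolution observation operator
R = 0.1 * np.eye(2)
x_true = mu_x + np.linalg.cholesky(C_x) @ rng.normal(size=3)
y = A @ x_true + rng.multivariate_normal(np.zeros(2), R)
x_hat = ble_estimate(y, A, mu_x, C_x, R)
```

Note that when the observation equals the prior prediction (zero innovation), the estimate reduces to the prior mean, as expected.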
Unbiased bootstrap error estimation for linear discriminant analysis.
Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R
2014-12-01
Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
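The convex combination at issue is simple to state; what the paper contributes is the exact finite-sample weight. A minimal sketch of the combination itself, with the classical fixed 0.632 weight and made-up error values:

```python
def convex_bootstrap_error(resub_err, boot_err, w=0.632):
    """Convex combination of resubstitution and bootstrap error estimates.
    w = 0.632 is the classical fixed weight; the paper derives sample-size-
    and Bayes-error-dependent weights that make the estimator unbiased for LDA."""
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return (1.0 - w) * resub_err + w * boot_err

# Hypothetical values: optimistic resubstitution, pessimistic bootstrap.
est = convex_bootstrap_error(0.05, 0.20)
```

Replacing the constant `w` with the derived finite-sample weight is exactly the adjustment the paper shows can deviate substantially from 0.632.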
Estimating dynamic equilibrium economies: linear versus nonlinear likelihood
2004-01-01
This paper compares two methods for undertaking likelihood-based inference in dynamic equilibrium economies: a sequential Monte Carlo filter proposed by Fernández-Villaverde and Rubio-Ramírez (2004) and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. The authors report two main results...
Gradient estimates for parabolic and elliptic systems from linear laminates
Dong, Hongjie
2012-01-01
We establish several gradient estimates for second-order divergence type parabolic and elliptic systems. The coefficients and data are assumed to be Hölder or Dini continuous in the time variable and all but one spatial variables. This type of systems arises from the problems of linearly elastic laminates and composite materials. For the proof, we use Campanato's approach in a novel way. Non-divergence type equations under a similar condition are also discussed.
Application of linear mean-square estimation in ocean engineering
Wang, Li-ping; Chen, Bai-yu; Chen, Chao; Chen, Zheng-shou; Liu, Gui-lin
2016-03-01
Obtaining long-term observational data for the sea areas of interest is usually very hard or even impossible in practical offshore and ocean engineering situations. In this paper, a new way to extend short-term data series to long-term ones is developed by means of the linear mean-square estimation method. Long-term data for the sea areas of interest can be constructed from long-term data series obtained at neighbouring oceanographic stations, through relevance analysis of the different data series. This approach overcomes the time-series prediction method's overdependence on the length of the data series, as well as the limitation on the number of variables adopted in a multiple linear regression model. Storm surge data collected from three oceanographic stations located on the Shandong Peninsula are taken as examples to analyze the effect of the number of reference oceanographic stations (adjacent to the sea area of interest) and of the correlation coefficients between the sites selected for reference and for engineering project construction, respectively. By comparing the N-year return-period values calculated from observed raw data with those from data series extended from finite records by the linear mean-square estimation method, one can conclude that this method gives considerably good estimates in practical ocean engineering, in spite of the different extreme value distributions of the raw and processed data.
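The extension step amounts to a linear mean-square (least-squares) fit of the target station on the reference stations over their overlap period, then applying the fitted relation to the references' full history. A sketch on synthetic data; the station counts, overlap window, and noise level are assumptions for illustration.

```python
import numpy as np

def extend_series(short_target, long_refs_overlap, long_refs_full):
    """Extend a short record using long records from reference stations:
    fit target ~ references on the overlap, predict over the full history."""
    X = np.column_stack([np.ones(len(short_target)), long_refs_overlap])
    coef, *_ = np.linalg.lstsq(X, short_target, rcond=None)
    Xf = np.column_stack([np.ones(long_refs_full.shape[0]), long_refs_full])
    return Xf @ coef

rng = np.random.default_rng(3)
refs_full = rng.normal(size=(50, 2)) + 1.0  # 50 years at 2 reference stations
target_full = 0.4 * refs_full[:, 0] + 0.6 * refs_full[:, 1] \
    + 0.05 * rng.normal(size=50)
overlap = slice(40, 50)                     # only 10 years observed at the target
extended = extend_series(target_full[overlap], refs_full[overlap], refs_full)
```

The reconstructed long series can then feed an extreme-value analysis for N-year return periods, as in the paper.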
Exploratory quantile regression with many covariates: an application to adverse birth outcomes.
Burgette, Lane F; Reiter, Jerome P; Miranda, Marie Lynn
2011-11-01
Covariates may affect continuous responses differently at various points of the response distribution. For example, some exposure might have minimal impact on conditional means, whereas it might lower conditional 10th percentiles sharply. Such differential effects can be important to detect. In studies of the determinants of birth weight, for instance, it is critical to identify exposures like the one above, since low birth weight is a risk factor for later health problems. Effects of covariates on the tails of distributions can be obscured by models (such as linear regression) that estimate conditional means; however, effects on tails can be detected by quantile regression. We present 2 approaches for exploring high-dimensional predictor spaces to identify important predictors for quantile regression. These are based on the lasso and elastic net penalties. We apply the approaches to a prospective cohort study of adverse birth outcomes that includes a wide array of demographic, medical, psychosocial, and environmental variables. Although tobacco exposure is known to be associated with lower birth weights, the analysis suggests an interesting interaction effect not previously reported: tobacco exposure depresses the 20th and 30th percentiles of birth weight more strongly when mothers have high levels of lead in their blood compared with those who have low blood lead levels.
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.
Quantile forecast discrimination ability and value
Ben Bouallègue, Zied; Pinson, Pierre; Friederichs, Petra
2015-01-01
While probabilistic forecast verification for categorical forecasts is well established, some of the existing concepts and methods have not found their equivalent for the case of continuous variables. New tools dedicated to the assessment of forecast discrimination ability and forecast value are introduced here, based on quantile forecasts being the base product for the continuous case. The relative user characteristic (RUC) curve is designed as a user-based discrimination tool and the quantile value plot translates forecast discrimination ability in terms of economic value. The relationship between the overall value of a quantile forecast and the respective quantile skill score is also discussed. The application of these new verification approaches and tools is illustrated based on synthetic datasets.
Simulating Quantile Models with Applications to Economics and Management
Machado, José A. F.
2010-05-01
The massive increase in the speed of computers over the past forty years has changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems they can feasibly tackle. The new methods that use computer power intensively go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on the use of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions Using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
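Simulating a marginal distribution from an estimated conditional quantile process rests on the inverse-CDF principle: if U ~ Uniform(0,1), then Q(U | x) has the conditional distribution at x, and averaging over draws of x yields the (actual or counterfactual) marginal. The quantile function below is a made-up linear-in-quantile model, purely for illustration.

```python
import numpy as np

def simulate_from_quantile_process(quantile_fn, x, n_draws, rng):
    """Draw outcomes from an estimated conditional quantile process:
    for U ~ Uniform(0,1), Q(U | x) has the target conditional law."""
    u = rng.uniform(size=n_draws)
    return quantile_fn(u, x)

# Hypothetical fitted model Q(tau | x) = a(tau) + b(tau) * x.
def q_fn(tau, x):
    return (1.0 + 2.0 * tau) + (0.5 + tau) * x

rng = np.random.default_rng(4)
xs = rng.normal(loc=1.0, size=10000)  # covariate draws (could be counterfactual)
draws = simulate_from_quantile_process(q_fn, xs, 10000, rng)
```

Swapping in a counterfactual covariate distribution for `xs`, while keeping `q_fn` fixed, is the decomposition idea referenced in the lecture.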
Linear vs. nonlinear porosity estimation of NMR oil reservoir data
Mohsen Abdou Abou Mandour
2010-09-01
Nuclear magnetic resonance is widely used to assess oil reservoir properties, especially those that cannot be evaluated using conventional techniques. In this regard, porosity determination and the related estimation of the oil present play a very important role in assessing the economic value of oil wells. Nuclear magnetic resonance data are usually fit to a sum of decaying exponentials; the resulting distribution, i.e. the T2 distribution, is directly related to porosity determination. In this work, three reservoir core samples (a tight sandstone and two carbonate samples) were analyzed. The linear least squares (LLS) method and nonlinear least squares fitting using the Levenberg-Marquardt method were used to calculate the T2 distribution and the resulting incremental porosity. A parametric analysis of the two methods was performed to evaluate the impact of the number of exponentials and the effect of the regularization parameter (α) on the smoothing of the solution. The effect of the type of solution on porosity determination was also studied. It was found that 12 exponentials is the optimum number for both the linear and nonlinear solutions. It was also shown that the linear solution begins to be smooth at α = 0.5, which corresponds to the standard industrial value of the regularization parameter. The time needed for the linear solution is on the order of a few minutes, while the nonlinear solution takes a few hours. Although only small differences exist between the linear and nonlinear solutions, these small values make an appreciable difference in porosity: the nonlinear solution predicts 12% less porosity for the tight sandstone sample, and 4.5% and 13% more porosity for the two carbonate samples, respectively.
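The linear step of such a fit can be sketched as regularized linear least squares over a fixed T2 grid, here with a simple ridge (Tikhonov) penalty standing in for the regularization discussed above; the time grid, T2 grid, true amplitudes, and α value are invented for illustration.

```python
import numpy as np

def t2_distribution_lls(t, signal, t2_values, alpha=0.5):
    """Regularized linear least squares for a multi-exponential decay:
    min ||K a - signal||^2 + alpha ||a||^2 with K[i, j] = exp(-t_i / T2_j)."""
    K = np.exp(-np.outer(t, 1.0 / t2_values))
    A = K.T @ K + alpha * np.eye(len(t2_values))
    return np.linalg.solve(A, K.T @ signal)

t = np.linspace(0.001, 3.0, 300)
t2 = np.logspace(-2, 0.5, 12)      # 12 exponentials, as in the abstract
true_amp = np.zeros(12)
true_amp[4] = 1.0                  # hypothetical two-peak T2 distribution
true_amp[9] = 0.5
signal = np.exp(-np.outer(t, 1.0 / t2)) @ true_amp
amp = t2_distribution_lls(t, signal, t2, alpha=0.01)
porosity = amp.sum()               # incremental porosity sums the amplitudes
```

Because the columns of K are highly collinear, the regularization term is what keeps the amplitude solution stable; that is the smoothing trade-off the abstract's α sweep explores.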
A Comparison of Alternative Estimators of Linearly Aggregated Macro Models
Fikri Akdeniz
2012-07-01
This paper deals with the linear aggregation problem. For the true underlying micro relations, which explain the micro behavior of the individuals, no restrictive rank conditions are assumed. Thus the analysis is presented in a framework utilizing generalized inverses of singular matrices. We investigate several estimators for certain linear transformations of the systematic part of the corresponding macro relations. Homogeneity of micro parameters is discussed. Best linear unbiased estimation for micro parameters is described.
Adaptive Error Estimation in Linearized Ocean General Circulation Models
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
Estimation of Log-Linear-Binomial Distribution with Applications
Elsayed Ali Habib
2010-01-01
The log-linear-binomial distribution was introduced for describing the behavior of the sum of dependent Bernoulli random variables. The distribution is a generalization of the binomial distribution that allows construction of a broad class of distributions. In this paper, we consider the problem of estimating the two parameters of the log-linear-binomial distribution by the method of moments and by maximum likelihood. The distribution is used to fit genetic data and to obtain the sampling distribution of the sign test under dependence among trials.
Taming Chaos by Linear Regulation with Bound Estimation
Jiqiang Wang
2015-01-01
Chaos control has become an important area of research, and consequently many approaches have been proposed to control chaos. This paper proposes a linear regulation method. Unlike existing approaches, it can provide a region of attraction while estimating the bounding behaviour of the norm of the states. The proposed method also possesses design flexibility and can easily accommodate special requirements, such as that the control signal be generated via single-input, single-state, static feedback, and so forth. Applications to the Tigan system, the Genesio chaotic system, the novel chaotic system, and the Lorenz chaotic system justify the above claims.
The Optimal Selection for Restricted Linear Models with Average Estimator
Qichang Xie
2014-01-01
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent-error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error of the model-average fit. This model selection procedure is shown to be asymptotically optimal in the sense of attaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has efficiency comparable to some alternative model selection techniques.
Generalized linear model for estimation of missing daily rainfall data
Rahman, Nurul Aishah; Deni, Sayang Mohd; Ramli, Norazan Mohamed
2017-04-01
The analysis of rainfall data without missingness is vital in various applications, including climatological, hydrological and meteorological studies. The issue of missing data is a serious concern since it can introduce bias and lead to misleading conclusions. Five imputation methods, namely simple arithmetic averaging, the normal ratio method, inverse distance weighting, correlation coefficient weighting and the geographical coordinate method, have been used to estimate missing data. However, these imputation methods ignore the seasonality in rainfall datasets, accounting for which could give more reliable estimates. This study therefore aims to estimate missing values in daily rainfall data by using a generalized linear model with a gamma distribution and a Fourier series as the link function and smoothing technique, respectively. Forty years of daily rainfall data for the period from 1975 until 2014, covering seven stations in the Kelantan region, were selected for the analysis. The findings indicate that the imputation methods can provide more accurate estimates, based on the least mean absolute error, root mean squared error and coefficient of variation of the root mean squared error, when seasonality in the dataset is considered.
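Of the baseline imputation methods listed above, inverse distance weighting is the easiest to sketch: each neighbouring station's value is weighted by an inverse power of its distance. The station values, distances, and power below are made up for illustration.

```python
def idw_impute(neighbor_values, distances, power=2.0):
    """Inverse distance weighting estimate of a missing rainfall value.
    Nearby stations get larger weights; `power` controls how fast the
    influence decays with distance (2 is a common choice)."""
    weights = [d ** (-power) for d in distances]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, neighbor_values)) / total

# Hypothetical missing day: three neighbouring stations with known rainfall (mm)
# at distances of 5, 10 and 25 km from the target station.
estimate = idw_impute([12.0, 8.0, 20.0], [5.0, 10.0, 25.0])
```

The estimate is a convex combination of the neighbours, so it always lies between their minimum and maximum; the GLM approach in the study improves on this by also modelling seasonality.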
Leaf area estimation of cassava from linear dimensions
SAMARA ZANETTI
2017-08-01
The objective of this study was to determine predictor models of the leaf area of cassava from linear leaf measurements. The experiment was carried out in a greenhouse in the municipality of Botucatu, São Paulo state, Brazil. Stem cuttings with 5-7 nodes of the cultivar IAC 576-70 were planted in boxes filled with about 320 liters of soil, with soil moisture kept at field capacity, monitored by puncture tensiometers. At 80 days after planting, 140 leaves were randomly collected from the top, middle third and base of the cassava plants. We measured the length and width of the central lobe of the leaves, the number of lobes and the leaf area. The leaf area measurements were correlated with the length and width of the central lobe and the number of lobes, and fitted to polynomial and multiple regression models. The linear function using the length of the central lobe, LA = -69.91114 + 15.06462L, and the multiple linear functions LA = -69.9188 + 15.5102L + 0.0197726K - 0.0768998J and LA = -69.9346 + 15.0106L + 0.188931K - 0.0264323H are suitable models to estimate the leaf area of cassava cultivar IAC 576-70.
Spatial Signature Estimation with an Uncalibrated Uniform Linear Array
Xiang Cao
2015-06-01
In this paper, the problem of spatial signature estimation using a uniform linear array (ULA) with unknown sensor gain and phase errors is considered. As is well known, the directions-of-arrival (DOAs) can only be determined up to an unknown rotational angle in this array model. However, the phase ambiguity has no impact on the identification of the spatial signature. Two auto-calibration methods are presented for spatial signature estimation. In our methods, the rotational DOAs and model error parameters are first obtained, and the spatial signature is subsequently calculated. The first method extracts two subarrays from the ULA to construct an estimator, and elements of the array may be used several times in one subarray. The other fully exploits multiple invariances in the interior of the sensor array, formulating a multidimensional nonlinear problem that is solved with a Gauss-Newton iterative algorithm. The first method can provide excellent initial inputs for the second one. The effectiveness of the proposed algorithms is demonstrated by several simulation results.
Nearly best linear estimates of logistic parameters based on complete ordered statistics
无
2001-01-01
This paper deals with the determination of nearly best linear estimates of the location and scale parameters of a logistic population when both parameters are unknown, by introducing Blom's semi-empirical α, β-correction into the asymptotic mean and covariance formulae. With complete ordered samples taken into consideration, various nearly best linear estimates are established. The high efficiency of these estimators relative to the best linear unbiased estimators (BLUEs) and other linear estimators makes them useful in practice.
Modeling Autoregressive Processes with Moving-Quantiles-Implied Nonlinearity
Isao Ishida
2015-01-01
We introduce and investigate some properties of a class of nonlinear time series models based on moving sample quantiles in the autoregressive data generating process. We derive a test to detect this type of nonlinearity. Using daily realized volatility data for Standard & Poor's 500 (S&P 500) and several other indices, we obtained good performance from these models in an out-of-sample forecasting exercise compared with forecasts based on the usual linear heterogeneous autoregressive and other models of realized volatility.
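The building block of such models is the moving sample quantile of the recent history of the series. A minimal sketch of computing that regressor; the window length, quantile level, and volatility-like series are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def moving_quantile(x, window, tau):
    """Moving sample quantile: q_t = tau-quantile of (x_{t-window+1}, ..., x_t)."""
    return np.array([np.quantile(x[t - window + 1: t + 1], tau)
                     for t in range(window - 1, len(x))])

rng = np.random.default_rng(5)
x = np.abs(rng.normal(size=200))        # realized-volatility-like series
mq = moving_quantile(x, window=20, tau=0.75)
```

In a moving-quantile AR model, `mq` (suitably lagged) enters the autoregression alongside ordinary lags, which is what makes the dynamics nonlinear.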
K factor estimation in distribution transformers using linear regression models
Juan Miguel Astorga Gómez
2016-06-01
Background: Due to the massive incorporation of electronic equipment into distribution systems, distribution transformers are subject to operating conditions other than those they were designed for, because of the circulation of harmonic currents. It is necessary to quantify the effect produced by these harmonic currents in order to determine the capacity of the transformer to withstand the new operating conditions. The K factor is an indicator that estimates the ability of a transformer to withstand the thermal effects caused by harmonic currents. This article presents a linear regression model to estimate the value of the K factor from the total current harmonic content, obtained with low-cost equipment. Method: Two distribution transformers feeding different loads are studied; the variables current total harmonic distortion (THDi) and K factor are recorded, and the regression model that best fits the field data is determined. To select the regression model, the coefficient of determination R² and the Akaike Information Criterion (AIC) are used. With the selected model, the K factor is estimated under actual operating conditions. Results: Once the model was determined, it was found that for both the agricultural and the industrial mining load, the present harmonic content (THDi) exceeds the values these transformers can handle (average of 12.54% and minimum of 8.90% in the agricultural case, and average of 18.53% and minimum of 6.80% in the industrial mining case). Conclusions: When estimating the K factor using polynomial models, it was determined that the studied transformers cannot withstand the current total harmonic distortion of their loads. The appropriate K factor for the studied transformers should be 4; this would allow the transformers to support the current total harmonic distortion of their respective loads.
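The K factor itself is computed from the harmonic spectrum of the load current as the sum of squared per-unit harmonic currents weighted by the square of the harmonic order; the regression model in the article then predicts this quantity from THDi alone. The harmonic amplitudes below are made up to mimic a rectifier-like load.

```python
def k_factor(harmonic_rms):
    """K factor of a load current from its harmonic rms components.
    `harmonic_rms` maps harmonic order h (1 = fundamental) to rms current;
    K = sum(h^2 * Ih_pu^2), with Ih_pu the per-unit share of total rms."""
    total_sq = sum(i ** 2 for i in harmonic_rms.values())
    return sum((h * i) ** 2 for h, i in harmonic_rms.items()) / total_sq

# Hypothetical load with strong 5th and 7th harmonics (amperes rms).
K = k_factor({1: 100.0, 5: 20.0, 7: 14.0})
```

A purely sinusoidal current gives K = 1; harmonic content pushes K above 1, which is why transformers feeding electronic loads need a higher K rating.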
Best linear unbiased estimation of the nuclear masses
Bouriquet, Bertrand
2009-01-01
This paper presents methods to provide an optimal evaluation of the nuclear masses. The techniques used for this purpose come from data assimilation (DA) that allows combining, in an optimal and consistent way, information coming from experiment and from numerical modelling. Using all the available information, it leads to improve not only masses evaluations, but also their uncertainties. Each newly evaluated mass value is associated with some accuracy that is sensibly reduced with respect to the values given in tables, especially in the case of the less well-known masses. In this paper, we first introduce a useful tool of DA, the Best Linear Unbiased Estimation (BLUE). This BLUE method is applied to nuclear mass tables and some results of improvement are shown. Then finally, some post validation diagnostics, demonstrating that the method has been used in optimal conditions, are described and used to validate the results.
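In the simplest setting of independent measurements of a single quantity, the BLUE reduces to inverse-variance weighting, with a combined variance smaller than any individual one. The sketch below shows only this scalar special case, not the paper's full data-assimilation machinery; the measurement values and variances are invented.

```python
def blue_combine(values, variances):
    """BLUE of a common quantity from independent measurements.
    Weights are inversely proportional to the variances; the combined
    variance is smaller than that of the best single input."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * x for w, x in zip(weights, values)) / total
    return estimate, 1.0 / total

# Hypothetical mass excess (keV): one precise and one coarse measurement.
m_hat, var_hat = blue_combine([8071.0, 8075.0], [4.0, 16.0])
```

The reduction of the combined variance below the best single input is precisely the uncertainty improvement the paper reports for the evaluated masses.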
Parameter estimation and hypothesis testing in linear models
Koch, Karl-Rudolf
1999-01-01
The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...
Linear Estimation of Location and Scale Parameters Using Partial Maxima
Papadatos, Nickos
2010-01-01
Consider an i.i.d. sample X^*_1, X^*_2, ..., X^*_n from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima) sequence, X^*_{1:1}, X^*_{2:2}, ..., X^*_{n:n}, where X^*_{j:j} = max{X^*_1, ..., X^*_j}. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to the form of the well-known Lloyd's (1952, Least-squares estimation of location and scale parameters using order statistics, Biometrika, vol. 39, pp. 88-95) BLUEs, based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n), for a wide class of distributions.
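A quick numerical sketch of the partial-maxima setting (illustrative only; the paper's BLUE construction is more involved): the sequence X^*_{j:j} is a running maximum, so it contains very few distinct "record" values, which hints at why consistency of estimators based on it is delicate:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)             # i.i.d. sample
partial_max = np.maximum.accumulate(x)    # X*_{j:j} = max(X*_1, ..., X*_j)
n_records = np.unique(partial_max).size   # distinct "record" values observed
print("records observed:", n_records)
```

For n i.i.d. observations the expected number of records grows only like log n, so most of the partial-maxima sequence is constant.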
Binary Classifier Calibration Using an Ensemble of Linear Trend Estimation
Naeini, Mahdi Pakdaman; Cooper, Gregory F.
2017-01-01
Learning accurate probabilistic models from data is crucial in many practical tasks in data mining. In this paper we present a new non-parametric calibration method called ensemble of linear trend estimation (ELiTE). ELiTE utilizes the recently proposed ℓ1 trend filtering signal approximation method [22] to find the mapping from uncalibrated classification scores to the calibrated probability estimates. ELiTE is designed to address the key limitations of the histogram binning-based calibration methods which are (1) the use of a piecewise constant form of the calibration mapping using bins, and (2) the assumption of independence of predicted probabilities for the instances that are located in different bins. The method post-processes the output of a binary classifier to obtain calibrated probabilities. Thus, it can be applied with many existing classification models. We demonstrate the performance of ELiTE on real datasets for commonly used binary classification models. Experimental results show that the method outperforms several common binary-classifier calibration methods. In particular, ELiTE commonly performs statistically significantly better than the other methods, and never worse. Moreover, it is able to improve the calibration power of classifiers, while retaining their discrimination power. The method is also computationally tractable for large scale datasets, as it is practically O(N log N) time, where N is the number of samples.
Camacho, Oscar M; Eldridge, Alison; Proctor, Christopher J; McAdam, Kevin
2015-08-01
Approximately 100 toxicants have been identified in cigarette smoke, exposure to which has been linked to a range of serious diseases in smokers. Smoking machines have been used to quantify toxicant emissions from cigarettes for regulatory reporting. The World Health Organization Study Group on Tobacco Product Regulation has proposed a regulatory scenario to identify median values for toxicants found in commercially available products, which could be used to set mandated limits on smoke emissions. We present an alternative approach, which used quantile regression to estimate reference percentiles to help contextualise the toxicant yields of commercially available products with respect to a reference analyte, such as tar or nicotine. To illustrate this approach we examined four toxicants (acetone, N'-nitrosoanatabine, phenol and pyridine) with respect to tar, and explored the International Organization for Standardization (ISO) and Health Canada Intense (HCI) regimes. We compared this approach with other methods for assessing toxicants in cigarette smoke, such as ratios to nicotine or tar, and linear regression. We concluded that the quantile regression approach effectively represented data distributions across toxicants for both ISO and HCI regimes. This method provides robust, transparent and intuitive percentile estimates in relation to any desired reference value within the data space. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
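A hedged sketch of the reference-percentile idea on synthetic data (the variable names, numbers, and the linear-in-tar form are assumptions, not the authors' specification): conditional quantile lines are fitted by minimising the pinball (check) loss of Koenker and Bassett:

```python
import numpy as np
from scipy.optimize import minimize

def pinball(u, tau):
    # check loss: rho_tau(u) = u * (tau - 1{u < 0})
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def fit_linear_quantile(x, y, tau):
    # minimise sum_i rho_tau(y_i - b0 - b1 * x_i) over (b0, b1)
    loss = lambda beta: pinball(y - beta[0] - beta[1] * x, tau).sum()
    return minimize(loss, x0=np.zeros(2), method="Nelder-Mead",
                    options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6}).x

rng = np.random.default_rng(1)
tar = rng.uniform(1.0, 15.0, 400)                # mg/cig, synthetic
phenol = 0.8 * tar + rng.normal(0.0, 1.0, 400)   # synthetic toxicant yields
b50 = fit_linear_quantile(tar, phenol, 0.50)     # median reference line
b90 = fit_linear_quantile(tar, phenol, 0.90)     # 90th-percentile line
print("median line:", b50, "90th line:", b90)
```

The two fitted lines give the kind of percentile reference curves, conditional on tar, that the abstract describes.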
Penalized maximum likelihood estimation for generalized linear point processes
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we...
Competing Risks Quantile Regression at Work
Dlugosz, Stephan; Lo, Simon M. S.; Wilke, Ralf
2017-01-01
Despite its emergence as a frequently used method for the empirical analysis of multivariate data, quantile regression is yet to become a mainstream tool for the analysis of duration data. We present a pioneering empirical study on the grounds of a competing risks quantile regression model. We use large-scale maternity duration data with multiple competing risks derived from German linked social security records to analyse how public policies are related to the length of economic inactivity of young mothers after giving birth. Our results show that the model delivers detailed insights into the distribution of transitions out of maternity leave. It is found that cumulative incidences implied by the quantile regression model differ from those implied by a proportional hazards model. To foster the use of the model, we make an R-package (cmprskQR) available.
Admissible estimation of linear functions of characteristic values of a finite population
邹国华; 成平; 冯士雍
1997-01-01
The problem of admissibility of estimators is considered from the point of view of the superpopulation model. The necessary and sufficient conditions for linear estimators of an arbitrary linear function of characteristic values of a finite population to be admissible in the class of linear estimators or in the class of all estimators are obtained, respectively.
Liu Gang
2009-01-01
By using the methods of linear algebra and matrix inequality theory, we obtain the characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.
Testing for Stock Market Contagion: A Quantile Regression Approach
S.Y. Park (Sung); W. Wang (Wendun); N. Huang (Naijing)
2015-01-01
Regarding the asymmetric and leptokurtic behavior of financial data, we propose a new contagion test in the quantile regression framework that is robust to model misspecification. Unlike conventional correlation-based tests, the proposed quantile contagion test
Digital speech enhancement based on DTOMP and adaptive quantile
Wang, Anna; Zhou, Xiaoxing; Xue, Changliang; Sun, Xiyan; Sun, Hongying
2013-03-01
Compressed sensing (CS) is a new sampling theory, based on signal sparsity, that can effectively extract the information contained in a signal. This paper applies CS theory to digital speech enhancement: it proposes an adaptive quantile method for noise power estimation, combines it with an improved double-threshold orthogonal matching pursuit algorithm for speech reconstruction, and thereby achieves speech enhancement. Compared with simulation results for spectral subtraction and the subspace algorithm, the experimental results verify the feasibility and effectiveness of the proposed algorithm for speech enhancement.
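A minimal plain orthogonal matching pursuit sketch on synthetic data (the paper's improved double-threshold variant, DTOMP, is not reproduced here; dictionary and signal are illustrative assumptions):

```python
import numpy as np

def omp(D, y, n_nonzero):
    # Greedy OMP: pick the atom most correlated with the residual,
    # then re-fit all selected atoms by least squares.
    support, residual = [], y.copy()
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[10, 50, 100]] = [2.0, -1.5, 1.0]     # 3-sparse signal
y = D @ x_true                               # noiseless measurements
x_hat = omp(D, y, 3)
print(np.nonzero(x_hat)[0])
```

With a well-conditioned random dictionary and a sparse enough signal, the greedy selection recovers the true support exactly.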
BUSINESS GROWTH STRATEGIES OF ILLINOIS FARMS: A QUANTILE REGRESSION APPROACH
Hennings, Enrique; Katchova, Ani L.
2005-01-01
This study examines the business strategies employed by Illinois farms to maintain equity growth using quantile regression analysis. Using data from the Farm Business Farm Management system, this study finds that the effect of different business strategies on equity growth rates differs between quantiles. Financial management strategies have a positive effect for farms situated in the highest quantile of equity growth, while for farms in the lowest quantile the effect on equity growth is nega...
A Software Reliability Model Using Quantile Function
Bijamma Thomas
2014-01-01
We study a class of software reliability models using quantile functions. Various distributional properties of the class of distributions are studied. We also discuss the reliability characteristics of the class of distributions. Inference procedures on the parameters of the model, based on L-moments, are studied. We apply the proposed model to a real data set.
Moderate Deviations for M-estimators in Linear Models with φ-mixing Errors
Jun FAN
2012-01-01
In this paper, the moderate deviations for the M-estimators of the regression parameter in a linear model are obtained when the errors form a strictly stationary φ-mixing sequence. The results are applied to many different types of M-estimators, such as Huber's estimator, the Lp-regression estimator, the least squares estimator and the least absolute deviation estimator.
Xi-zhi Wu; Mao-zai Tian
2008-01-01
Quantile regression is gradually emerging as a powerful tool for estimating models of conditional quantile functions, and research in this area has therefore vastly increased in the past two decades. This paper, using the quantile regression technique, is the first comprehensive longitudinal study of mathematics participation data collected in Alberta, Canada. The major advantage of a longitudinal study is its capability to separate the so-called cohort and age effects in the context of population studies. One aim of this paper is to study whether family background factors alter the performance of the strongest students in mathematical achievement in the same way as that of weaker students, based on the large 2000, 2001 and 2002 mathematics participation longitudinal data set. The interesting findings suggest that there may be differential family background effects at different points of the conditional distribution of mathematical achievement.
Prediction of quantiles by statistical learning and application to GDP forecasting
Alquier, Pierre
2012-01-01
In this paper, we tackle the problem of prediction and confidence intervals for time series using a statistical learning approach and quantile loss functions. First, we show that the Gibbs estimator (also known as the Exponentially Weighted Aggregate) is able to predict as well as the best predictor in a given family for a wide set of loss functions. In particular, using the quantile loss function of Koenker and Bassett (1978), this allows us to build confidence intervals. We apply these results to the problem of prediction and confidence regions for the French Gross Domestic Product (GDP) growth, with promising results.
Horst Entorf
2015-07-01
Two alternative hypotheses – referred to as opportunity- and stigma-based behavior – suggest that the magnitude of the link between unemployment and crime also depends on preexisting local crime levels. In order to analyze the conjectured nonlinearities between both variables, we use quantile regressions applied to German district panel data. While both conventional OLS and quantile regressions confirm the positive link between unemployment and crime for property crimes, results for assault differ with respect to the method of estimation. Whereas conventional mean regressions do not show any significant effect (which would confirm the usual result found for violent crimes in the literature), quantile regression reveals that the size and importance of the relationship are conditional on the crime rate. The partial effect is significantly positive for moderately low and median quantiles of local assault rates.
Does intense monitoring matter? A quantile regression approach
Fekri Ali Shawtari
2017-06-01
Corporate governance has become a centre of attention in corporate management, at both micro and macro levels, due to the adverse consequences and repercussions of insufficient accountability. In this study, we take the Malaysian stock market as a sample to explore the impact of intense monitoring on the relationship between intellectual capital performance and market valuation. The objectives of the paper are threefold: (i) to investigate whether intense monitoring affects the intellectual capital performance of listed companies; (ii) to explore the impact of intense monitoring on firm value; (iii) to examine the extent to which directors serving on more than two board committees affect the linkage between intellectual capital performance and firm value. We employ two approaches, namely, Ordinary Least Squares (OLS) and the quantile regression approach. The purpose of the latter is to estimate and generate inference about conditional quantile functions. This method is useful when the conditional distribution does not have a standard shape, such as an asymmetric, fat-tailed, or truncated distribution. In terms of variables, intellectual capital is measured using the value added intellectual coefficient (VAIC), while market valuation is proxied by the firm's market capitalization. The findings of the quantile regression show that some of the results do not coincide with those of OLS. We found that the intensity of monitoring does not influence the intellectual capital of all firms. It is also evident that the intensity of monitoring does not influence market valuation. However, to some extent, it moderates the relationship between intellectual capital performance and market valuation. This paper contributes to the existing literature as it presents new empirical evidence on the moderating effect of the intensity of monitoring of the board committees on the relationship between performance and intellectual capital.
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there have been few studies of the confidence intervals that indicate the prediction accuracy of this distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of the quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods for estimating the confidence intervals in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are little differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.
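For concreteness, a hedged sketch of a generalized logistic quantile function in Hosking's three-parameter parametrisation (location ξ, scale α, shape κ); verify the sign conventions against your own reference before use:

```python
import numpy as np

# Quantile function of the generalized logistic (GLO) distribution,
# Hosking-style parametrisation (an assumption of this sketch):
#   x(F) = xi + (alpha/kappa) * (1 - ((1-F)/F)**kappa),  kappa != 0,
# reducing to the ordinary logistic as kappa -> 0.
def glo_quantile(F, xi, alpha, kappa):
    F = np.asarray(F, dtype=float)
    if kappa == 0.0:
        return xi - alpha * np.log((1.0 - F) / F)   # logistic limit
    return xi + (alpha / kappa) * (1.0 - ((1.0 - F) / F) ** kappa)

# T-year return level = quantile at non-exceedance probability 1 - 1/T
T = np.array([2.0, 10.0, 100.0])
levels = glo_quantile(1.0 - 1.0 / T, 100.0, 20.0, -0.1)
print(levels)
```

Return levels are increasing in the return period, and the median (T = 2) equals the location parameter ξ.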
PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA
Qian Weimin; Li Yumei
2005-01-01
The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when the response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.
Smooth conditional distribution function and quantiles under random censorship.
Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine
2002-09-01
We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
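For context, a minimal sketch of the plain (unconditional) Kaplan-Meier product-limit estimator, the building block that Beran's conditional estimator localises with kernel weights (ties in the event times are not handled in this sketch):

```python
import numpy as np

def kaplan_meier(time, event):
    # Product-limit survival estimate at the sorted observation times;
    # event = 1 for an observed failure, 0 for a right-censored point.
    order = np.argsort(time)
    t, d = time[order], event[order]
    at_risk = t.size - np.arange(t.size)      # risk-set size at each time
    surv = np.cumprod(1.0 - d / at_risk)      # censored points contribute 1
    return t, surv

time = np.array([2.0, 3.0, 4.0, 5.0, 8.0])
event = np.array([1, 0, 1, 1, 0])             # 0 = censored
t, s = kaplan_meier(time, event)
print(s)
```

For this tiny sample the survival curve drops only at the uncensored times (2, 4, 5), while censored observations shrink the risk set without creating a step.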
A Stochastic Restricted Principal Components Regression Estimator in the Linear Model
Daojiang He
2014-01-01
We propose a new estimator to combat multicollinearity in the linear model when there are stochastic linear restrictions on the regression coefficients. The new estimator is constructed by combining the ordinary mixed estimator (OME) and the principal components regression (PCR) estimator, and is called the stochastic restricted principal components (SRPC) regression estimator. Necessary and sufficient conditions for the superiority of the SRPC estimator over the OME and the PCR estimator are derived in the sense of the mean squared error matrix criterion. Finally, we give a numerical example and a Monte Carlo study to illustrate the performance of the proposed estimator.
Etchevers, Anne; Le Tertre, Alain; Lucas, Jean-Paul; Bretin, Philippe; Oulhote, Youssef; Le Bot, Barbara; Glorennec, Philippe
2015-01-01
Blood lead levels (BLLs) have substantially decreased in recent decades in children in France. However, further reducing exposure is a public health goal because there is no clear toxicological threshold. The identification of the environmental determinants of BLLs, as well as risk factors associated with high BLLs, is important to update prevention strategies. We aimed to estimate the contribution of environmental sources of lead to different BLLs in children in France. We enrolled 484 children aged from 6 months to 6 years in a nationwide cross-sectional survey in 2008-2009. We measured lead concentrations in blood and environmental samples (water, soils, household settled dusts, paints, cosmetics and traditional cookware). We fitted two models: a multivariate generalized additive model on the geometric mean (GM), and a quantile regression model on the 10th, 25th, 50th, 75th and 90th quantiles of BLLs. The GM of BLLs was 13.8 μg/L (=1.38 μg/dL) (95% confidence interval (CI): 12.7-14.9) and the 90th quantile was 25.7 μg/L (CI: 24.2-29.5). Household and common area dust, tap water, interior paint, ceramic cookware, traditional cosmetics, playground soil and dust, and environmental tobacco smoke were associated with the GM of BLLs. Household dust and tap water made the largest contributions to both the GM and the 90th quantile of BLLs. The concentration of lead in dust was positively correlated with all quantiles of BLLs, even at low concentrations. Lead concentrations in tap water above 5 μg/L were also positively correlated with the GM, 75th and 90th quantiles of BLLs in children drinking tap water. Preventive actions must target household settled dust and tap water to reduce the BLLs of children in France. The use of traditional cosmetics should be avoided, whereas ceramic cookware should be limited to decorative purposes.
Two biased estimation techniques in linear regression: Application to aircraft
Klein, Vladislav
1988-01-01
Several ways to detect and assess collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
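A minimal principal components regression sketch (one of the two biased techniques discussed), on synthetic nearly-collinear data; the truncation level and the data-generating model are illustrative assumptions, not the flight-test setup:

```python
import numpy as np

# Two nearly collinear regressors and y depending on their sum.
rng = np.random.default_rng(6)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 1e-3 * rng.standard_normal(n)        # almost identical to x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.standard_normal(n)

# PCR: regress on the leading principal component(s) only, then map
# the coefficient back to the original variables.
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                          # keep the dominant component
Z = Xc @ Vt[:k].T
gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
beta_pcr = Vt[:k].T @ gamma                    # back to original coordinates
print(beta_pcr)
```

Ordinary least squares is nearly singular here, while the truncated PCR solution recovers stable coefficients close to the true (1, 1).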
Empirical Quantile CLTs for Time Dependent Data
Kuelbs, James
2011-01-01
We establish empirical quantile process CLTs based on $n$ independent copies of a stochastic process $\{X_t: t \in E\}$ that are uniform in $t \in E$ and quantile levels $\alpha \in I$, where $I$ is a closed sub-interval of $(0,1)$. Typically $E=[0,T]$, or a finite product of such intervals. Also included are CLTs for the empirical process based on $\{I_{X_t \le y} - \Pr(X_t \le y): t \in E, y \in R\}$ that are uniform in $t \in E, y \in R$. The process $\{X_t: t \in E\}$ may be chosen from a broad collection of Gaussian processes, compound Poisson processes, stationary independent increment stable processes, and martingales.
Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems
Knudsen, Morten
...and estimation of physical parameters in particular. 2. To apply the new methods for modelling of specific objects, such as loudspeakers, AC and DC motors, wind turbines and heat exchangers. A reliable quality measure of an obtained parameter estimate is a prerequisite for any reasonable use of the result...
Tabatabaeipour, Seyed Mojtaba; Bak, Thomas
2012-01-01
In this paper we consider the problem of fault estimation and accommodation for discrete-time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and the H∞ performance of the fault estimation is minimized. Then, the estimate of the fault is used to compensate for its effect. Hence, using the estimate of the fault, a fault tolerant controller using piecewise linear static output feedback is designed such that it stabilizes the system and provides an upper bound on the H∞ performance of the faulty system. Sufficient conditions for the existence of the robust fault estimator and the fault tolerant controller are derived in terms of linear matrix inequalities. Upper bounds on the H∞ performance can be minimized by solving convex optimization problems with linear matrix inequality constraints. The efficiency...
Discontinuous Galerkin error estimation for linear symmetric hyperbolic systems
Adjerid, Slimane; Weinhart, Thomas
2009-01-01
In this manuscript we present an error analysis for the discontinuous Galerkin discretization error of multi-dimensional first-order linear symmetric hyperbolic systems of partial differential equations. We perform a local error analysis by writing the local error as a series and showing that its le
Limit theorems for functions of marginal quantiles
Babu, G Jogesh; Choi, Kwok Pui; Mangalam, Vasudevan; 10.3150/10-BEJ287
2011-01-01
Multivariate distributions are explored using the joint distributions of marginal sample quantiles. Limit theory for the mean of a function of order statistics is presented. The results include a multivariate central limit theorem and a strong law of large numbers. A result similar to Bahadur's representation of quantiles is established for the mean of a function of the marginal quantiles. In particular, it is shown that \[\sqrt{n}\Biggl(\frac{1}{n}\sum_{i=1}^n\phi\bigl(X_{n:i}^{(1)},\dots,X_{n:i}^{(d)}\bigr)-\bar{\gamma}\Biggr)=\frac{1}{\sqrt{n}}\sum_{i=1}^nZ_{n,i}+\mathrm{o}_P(1)\] as $n\rightarrow\infty$, where $\bar{\gamma}$ is a constant and the $Z_{n,i}$ are i.i.d. random variables for each $n$. This leads to the central limit theorem. Weak convergence to a Gaussian process using equicontinuity of functions is indicated. The results are established under very general conditions. These conditions are shown to be satisfied in many commonly occurring situations.
Determinants of Birthweight Outcomes: Quantile Regressions Based on Panel Data
Bache, Stefan Holst; Dahl, Christian Møller; Kristensen, Johannes Tang
...to the possibility that smoking habits can be influenced through policy conduct. It is widely believed that maternal smoking reduces birthweight; however, the crucial difficulty in estimating such effects is the unobserved heterogeneity among mothers. We consider extensions of three panel data models to a quantile regression framework in order to control for heterogeneity and to infer conclusions about causality across the entire birthweight distribution. We obtain estimation results for maternal smoking and other interesting determinants, applying these to data obtained from Aarhus University Hospital, Skejby... and significance of prenatal smoking. Controlling for unobserved effects does not change the fact that smoking reduces birthweight, but it shows that the effect is primarily a problem in the left tail of the distribution, on a slightly smaller scale.
Design of reduced-order state estimators for linear time-varying multivariable systems
Nguyen, Charles C.
1987-01-01
The design of reduced-order state estimators for linear time-varying multivariable systems is considered. Employing the concepts of matrix operators and the method of canonical transformations, this paper shows that there exists a reduced-order state estimator for linear time-varying systems that are 'lexicography-fixedly observable'. In addition, the eigenvalues of the estimator can be arbitrarily assigned. A simple algorithm is proposed for the design of the state estimator.
Asymptotic Parameter Estimation for a Class of Linear Stochastic Systems Using Kalman-Bucy Filtering
Xiu Kan
2012-01-01
The asymptotic parameter estimation is investigated for a class of linear stochastic systems with unknown parameter θ: dX_t = (θ α(t) + β(t) X_t) dt + σ(t) dW_t. Continuous-time Kalman-Bucy linear filtering theory is first used to estimate the unknown parameter θ based on Bayesian analysis. Then, some sufficient conditions on the coefficients are given to analyze the asymptotic convergence of the estimator. Finally, the strong consistency of the estimator is discussed via a comparison theorem.
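A hedged numerical sketch of the model class (this is a simple least-squares estimate on Euler-Maruyama increments with constant coefficients, an assumption of the sketch, not the paper's Kalman-Bucy/Bayesian construction):

```python
import numpy as np

# Simulate dX_t = (theta*a + b*X_t) dt + s dW_t with constant a, b, s,
# then estimate theta by least squares on the discretised increments:
#   theta_hat = sum_i a*(dX_i - b*X_i*dt) / sum_i a^2*dt.
rng = np.random.default_rng(3)
theta, a, b, s = 2.0, 1.0, -0.5, 0.2
dt, n = 1e-3, 200_000
x = np.empty(n + 1)
x[0] = 0.0
dw = rng.normal(0.0, np.sqrt(dt), n)
for i in range(n):
    x[i + 1] = x[i] + (theta * a + b * x[i]) * dt + s * dw[i]
dx = np.diff(x)
theta_hat = np.sum(a * (dx - b * x[:-1] * dt)) / (n * a**2 * dt)
print("theta_hat =", theta_hat)
```

The estimation error shrinks like s/sqrt(T) with the observation horizon T = n*dt, which is the asymptotic-convergence phenomenon the abstract studies.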
Mulyani, Sri; Andriyana, Yudhie; Sudartianto
2017-03-01
Mean regression is a statistical method that explains the relationship between a response variable and a predictor variable through the central tendency (mean) of the response variable. Parameter estimation in mean regression (with ordinary least squares, OLS) is problematic if we apply it to data that are asymmetric, fat-tailed, or contain outliers. Hence, an alternative method is needed for that kind of data, for example the quantile regression method. Quantile regression is robust to outliers. This model can explain the relationship between the response variable and the predictor variable not only at the central tendency of the data (the median) but also at various quantiles, in order to obtain complete information about that relationship. In this study, a quantile regression is developed with a nonparametric approach, namely smoothing splines. A nonparametric approach is used when the model is difficult to prespecify, i.e., the relation between the two variables follows an unknown function. We apply the proposed method to poverty data, estimating the Percentage of Poor People as the response variable with the Human Development Index (HDI) as the predictor variable.
Kumar, K Vasanth; Sivanesan, S
2005-08-31
A comparison of the linear least squares method and the non-linear method for estimating isotherm parameters was made using experimental equilibrium data for safranin adsorption onto activated carbon at two solution temperatures, 305 and 313 K. Equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fitted the experimental equilibrium data well. The results showed that the non-linear method may be a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
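A sketch of the linear-versus-nonlinear comparison on synthetic data (not the safranin measurements; parameter values are illustrative), using the Langmuir isotherm as the example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm q = qm*K*C / (1 + K*C), fitted (a) by nonlinear least
# squares and (b) via the linearised form C/q = 1/(qm*K) + C/qm.
def langmuir(C, qm, K):
    return qm * K * C / (1.0 + K * C)

rng = np.random.default_rng(4)
C = np.linspace(5.0, 100.0, 12)                        # equilibrium conc.
q = langmuir(C, 120.0, 0.05) * (1 + rng.normal(0, 0.02, C.size))

(qm_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=(100.0, 0.1))
slope, intercept = np.polyfit(C, C / q, 1)             # linearised fit
qm_lin, K_lin = 1.0 / slope, slope / intercept         # back-transform
print(qm_nl, K_nl, qm_lin, K_lin)
```

The linearised fit implicitly reweights the errors (which is the usual argument for preferring the nonlinear fit), but with mild noise both recover the true qm = 120 and K = 0.05 reasonably well.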
THE SUPERIORITY OF EMPIRICAL BAYES ESTIMATION OF PARAMETERS IN PARTITIONED NORMAL LINEAR MODEL
Zhang Weiping; Wei Laisheng
2008-01-01
In this article, the empirical Bayes (EB) estimators are constructed for the estimable functions of the parameters in partitioned normal linear model. The superiorities of the EB estimators over ordinary least-squares (LS) estimator are investigated under mean square error matrix (MSEM) criterion.
Remodeling and Estimation for Sparse Partially Linear Regression Models
Yunhui Zeng
2013-01-01
When the dimension of the covariates in a regression model is high, one usually uses a submodel containing the significant variables as a working model. But this may be highly biased, and the resulting estimator of the parameter of interest may be very poor when the coefficients of the removed variables are not exactly zero. In this paper, based on the selected submodel, we introduce a two-stage remodeling method to obtain a consistent estimator of the parameter of interest. More precisely, in the first stage, by a multistep adjustment, we reconstruct an unbiased model based on the correlation information between the covariates; in the second stage, we further reduce the adjusted model by a semiparametric variable selection method and simultaneously obtain a new estimator of the parameter of interest. Its convergence rate and asymptotic normality are also obtained. The simulation results further illustrate that the new estimator outperforms those obtained from the submodel and the full model in terms of the mean square error of point estimation and the mean square prediction error of model prediction.
Pladdy, Christopher; Nerayanuru, Sreenivasa M.; Fimoff, Mark; Özen, Serdar; Zoltowski, Michael
2004-01-01
We present a low complexity approximate method for semi-blind best linear unbiased estimation (BLUE) of a channel impulse response vector (CIR) for a communication system, which utilizes a periodically transmitted training sequence, within a continuous stream of information symbols. The algorithm achieves slightly degraded results at a much lower complexity than directly computing the BLUE CIR estimate. In addition, the inverse matrix required to invert the weighted normal equations to solve ...
Computational Issues in Linear Least-Squares Estimation and Control
1979-06-06
Direct Marketing and the Structure of Farm Sales: An Unconditional Quantile Regression Approach
Park, Timothy A.
2015-01-01
This paper examines the impact of participation in direct marketing on the entire distribution of farm sales using the unconditional quantile regression (UQR) estimator. Our analysis yields unbiased estimates of the unconditional impact of direct marketing on farm sales and reveals the heterogeneous effects that occur across the distribution of farm sales. The impacts of direct marketing efforts are uniformly negative across the UQR results, but declines in sales tend to grow smaller as sales...
Ramnath Vishal
2017-01-01
Traditionally, in the field of pressure metrology, uncertainty quantification was performed with the use of the Guide to the Expression of Uncertainty in Measurement (GUM); however, with the introduction of the GUM Supplement 1 (GS1), the use of Monte Carlo simulations has become an accepted practice for uncertainty analysis in metrology for mathematical models in which the underlying assumptions of the GUM are not valid. Consequently, the use of quantile functions was developed as a means to easily summarize and report numerical uncertainty results based on Monte Carlo simulations. In this paper, we consider the case of a piston-cylinder operated pressure balance where the effective area is modelled in terms of a combination of explicit/implicit and linear/non-linear models, and how quantile functions may be applied to analyse results and compare uncertainties from a mixture of GUM and GS1 methodologies.
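The GS1-style workflow described above — propagate input uncertainty by Monte Carlo, then summarize the output with quantiles — can be sketched generically as follows. This is an illustration only, not the paper's pressure-balance model; the toy model form and all parameter values are assumptions.

```python
import random
import statistics

def mc_quantile_summary(model, inputs, n=20000, seed=42, coverage=0.95):
    """Propagate input uncertainty through `model` by Monte Carlo and
    summarize the output distribution with quantiles (GS1-style
    coverage interval rather than a GUM standard uncertainty)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        sample = {name: rng.gauss(mu, sigma) for name, (mu, sigma) in inputs.items()}
        draws.append(model(sample))
    draws.sort()
    lo = draws[int((1 - coverage) / 2 * n)]
    hi = draws[int((1 + coverage) / 2 * n)]
    return statistics.median(draws), (lo, hi)

# Hypothetical effective-area-like model: A = A0 * (1 + alpha * t)
median, (lo, hi) = mc_quantile_summary(
    lambda s: s["A0"] * (1 + s["alpha"] * 20.0),
    {"A0": (1.0, 0.001), "alpha": (1e-5, 1e-6)},
)
```

The quantile summary makes no Gaussian assumption about the output, which is the point of reporting GS1 results this way.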
Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen
2014-01-01
It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916
Method and system for non-linear motion estimation
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including determining a first motion vector from a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image, using a non-linear model, and determining a position of the fourth pixel in a fourth image based upon the third motion vector.
dglars: An R Package to Estimate Sparse Generalized Linear Models
Luigi Augugliaro
2014-09-01
dglars is a publicly available R package that implements the method proposed in Augugliaro, Mineo, and Wit (2013), developed to study the sparse structure of a generalized linear model. This method, called dgLARS, is based on a differential geometrical extension of the least angle regression method proposed in Efron, Hastie, Johnstone, and Tibshirani (2004). The core of the dglars package consists of two algorithms implemented in Fortran 90 to efficiently compute the solution curve: a predictor-corrector algorithm, proposed in Augugliaro et al. (2013), and a cyclic coordinate descent algorithm, proposed in Augugliaro, Mineo, and Wit (2012). The latter algorithm, as shown here, is significantly faster than the predictor-corrector algorithm. For comparison purposes, we have implemented both algorithms.
Two-step variable selection in quantile regression models
FAN Yali
2015-06-01
We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimensional one to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
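The l1 (LASSO) screening in the first step rests on the soft-thresholding operator, which sets small coefficients exactly to zero. A minimal univariate sketch — valid for an orthonormal design, where the lasso solution has this closed form; the coefficient values are hypothetical:

```python
def soft_threshold(z, lam):
    """Closed-form lasso solution for an orthonormal design: shrinks z
    toward zero and sets coefficients with |z| <= lam exactly to zero."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Screening effect: small coefficients are removed, large ones survive (shrunk).
raw = [2.5, -0.3, 0.05, -1.7, 0.2]
screened = [soft_threshold(b, 0.5) for b in raw]
# screened ≈ [2.0, 0.0, 0.0, -1.2, 0.0]
```

The adaptive LASSO of the second step reweights `lam` per coefficient, penalizing small first-step estimates more heavily.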
ESTIMATE OF DISCRETE NONLINEARITIES IN A MAINLY LINEAR DYNAMIC SYSTEM
(no author listed)
2001-01-01
The class of systems considered is a single degree of freedom undamped vibrating system with a clearance, in which the dynamical behavior is described by a state-space representation in real time. The direct identification technique for the estimation of the clearance and other parameters in the system is presented in terms of the least squares method and a step-by-step iteration approach. For numerical simulation purposes, the simulated data are obtained by corrupting the modeled responses. The mathematical algorithm put forward has proven to be effective through a practical numerical example.
Optimal linear shrinkage corrections of sample LMMSE and MVDR estimators
2012-01-01
This master's thesis proposes optimal shrinkage estimators that counteract the performance degradation of the sample LMMSE and sample MVDR methods in the regime where the sample size is small compared to the observation dimension.
On asymptotics of t-type regression estimation in multiple linear model
(no author listed)
2004-01-01
We consider a robust estimator (t-type regression estimator) of the multiple linear regression model, obtained by maximizing the marginal likelihood of a scaled t-type error distribution. The marginal likelihood can also be applied to the de-correlated response when the within-subject correlation can be consistently estimated from an initial estimate of the model based on the independence working assumption. This paper shows that such a t-type estimator is consistent.
Linnet, K
1990-12-01
The linear relationship between the measurements of two methods is estimated on the basis of a weighted errors-in-variables regression model that takes into account a proportional relationship between standard deviations of error distributions and true variable levels. Weights are estimated by an iterative procedure. As shown by simulations, the regression procedure yields practically unbiased slope estimates in realistic situations. Standard errors of slope and location difference estimations are derived by the jackknife principle. For illustration, the linear relationship is estimated between the measurements of two albumin methods with proportional errors.
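Linnet's procedure is a weighted, iterative errors-in-variables fit; the flavor of errors-in-variables slope estimation can be seen from the simpler unweighted Deming regression, which has a closed form when the ratio of the two methods' error variances is assumed known. This is a simplified sketch, not the paper's weighted estimator:

```python
from math import sqrt

def deming(x, y, lam=1.0):
    """Errors-in-variables (Deming) regression: both x and y carry
    measurement error; `lam` is the assumed ratio of y- to x-error
    variances (lam=1 gives orthogonal regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    d = syy - lam * sxx
    slope = (d + sqrt(d * d + 4 * lam * sxy * sxy)) / (2 * sxy)
    return slope, my - slope * mx

# Noiseless check: points exactly on y = 2x + 1 are recovered exactly.
slope, intercept = deming([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

Unlike ordinary least squares, this slope is not attenuated toward zero when the x-method is itself noisy.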
Robust observer-based fault estimation and accommodation of discrete-time piecewise linear systems
Tabatabaeipour, Mojtaba; Bak, Thomas
2013-01-01
In this paper a new integrated observer-based fault estimation and accommodation strategy for discrete-time piecewise linear (PWL) systems subject to actuator faults is proposed. A robust estimator is designed to simultaneously estimate the state of the system and the actuator fault. Then, the es...
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Set-membership state estimation framework for uncertain linear differential-algebraic equations
Zhuk, Serhiy
2008-01-01
We investigate a problem of state estimation for a dynamical system described by a linear operator equation with unknown parameters in Hilbert space. We present explicit expressions for the linear minimax estimate and error, provided that any pair of uncertain parameters belongs to a quadratic bounding set. As an application of the introduced approach, we introduce the notions of minimax directional observability and an index of non-causality for linear noncausal DAEs. Applying these notions to the problem of state estimation for linear uncertain noncausal DAEs allows us to construct the state estimate in the form of a recursive minimax filter. A numerical example of state estimation for a 3D non-causal descriptor system is presented.
Input and state estimation for linear systems with a rank-deficient direct feedthrough matrix.
Wang, Haokun; Zhao, Jun; Xu, Zuhua; Shao, Zhijiang
2015-07-01
The problem of joint input and state estimation for linear stochastic systems with a rank-deficient direct feedthrough matrix is discussed in this paper. Results from previous studies only solve the state estimation problem; globally optimal estimation of the unknown input is not provided. Based on linear minimum-variance unbiased estimation, a five-step recursive filter with global optimality is proposed to estimate both the unknown input and the state. The relationship between the proposed filter and the existing results is addressed. We show that the unbiased input estimation does not require any new information or additional constraints. Both the state and the unknown input can be estimated under the same unbiasedness condition. Global optimalities of both the state estimator and the unknown input estimator are proven in the minimum-variance unbiased sense.
Cannon, Alex
2017-04-01
Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. However, it is fundamentally a
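The direct conditional-quantile estimation mentioned above rests on minimizing the check (pinball) loss. A minimal sketch — not the MQR implementation itself — showing that the tau = 0.5 minimizer over candidate values is a sample median:

```python
def pinball_loss(q, data, tau):
    """Average check-function loss rho_tau; its minimizer over q is the
    tau-quantile of the data. (x < q) is 0/1, so the weight on each
    residual is tau above q and (1 - tau) below it."""
    return sum((tau - (x < q)) * (x - q) for x in data) / len(data)

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
# Scan the data points as candidates: for tau = 0.5 the minimizer is a median.
best = min(data, key=lambda q: pinball_loss(q, data, 0.5))
```

Replacing the candidate scan by a linear program (or, for MQR, a monotonicity-constrained fit) gives quantile *regression*; the loss is the same.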
Modeling energy expenditure in children and adolescents using quantile regression
Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obes...
Testing for Stock Market Contagion: A Quantile Regression Approach
S.Y. Park (Sung); W. Wang (Wendun); N. Huang (Naijing)
2015-01-01
Regarding the asymmetric and leptokurtic behavior of financial data, we propose a new contagion test in the quantile regression framework that is robust to model misspecification. Unlike conventional correlation-based tests, the proposed quantile contagion test allows
Relationship between Urbanization and Cancer Incidence in Iran Using Quantile Regression.
Momenyan, Somayeh; Sadeghifar, Majid; Sarvi, Fatemeh; Khodadost, Mahmoud; Mosavi-Jarrahi, Alireza; Ghaffari, Mohammad Ebrahim; Sekhavati, Eghbal
2016-01-01
Quantile regression is an efficient method for predicting and estimating the relationship between explanatory variables and percentile points of the response distribution, particularly for extreme percentiles of the distribution. To study the relationship between urbanization and cancer morbidity, we here applied quantile regression. This cross-sectional study was conducted for 9 cancers in 345 cities in 2007 in Iran. Data were obtained from the Ministry of Health and Medical Education and the relationship between urbanization and cancer morbidity was investigated using quantile regression and least squares regression. Fitted models were compared using the AIC criterion. R (3.0.1) software and the quantreg package were used for statistical analysis. With the quantile regression model, all percentiles for breast, colorectal, prostate, lung and pancreas cancers demonstrated an increasing incidence rate with urbanization. The maximum increase for breast cancer was in the 90th percentile (β=0.13), for colorectal cancer in the 75th percentile (β=0.048), for prostate cancer in the 95th percentile (β=0.55), for lung cancer in the 95th percentile (β=0.52, p-value=0.006), and for pancreas cancer in the 10th percentile (β=0.011). For the remaining cancers, the incidence rate decreased with increasing urbanization. The maximum decrease for gastric cancer was in the 90th percentile (β=0.003), for another cancer in the 95th (β=0.04, p-value=0.4), and for skin cancer also in the 95th (β=0.145, p-value=0.071). The AIC showed that for upper percentiles, the fit of quantile regression was better than that of least squares regression. According to the results of this study, the significant impact of urbanization on cancer morbidity requires more effort and planning by policymakers and administrators in order to reduce risk factors such as pollution in urban areas and ensure proper nutrition recommendations are made.
Tao Hu; Heng-jian Cui; Xing-wei Tong
2009-01-01
This article considers a semiparametric varying-coefficient partially linear regression model with current status data. This model is a generalization of the partially linear regression model and the varying-coefficient regression model, and allows one to explore the possibly nonlinear effect of a certain covariate on the response variable. A sieve maximum likelihood estimation method is proposed and the asymptotic properties of the proposed estimators are discussed. Under some mild conditions, the estimators are shown to be strongly consistent. The convergence rate of the estimator for the unknown smooth function is obtained, and the estimator for the unknown parameter is shown to be asymptotically efficient and normally distributed. Simulation studies are conducted to examine the small-sample properties of the proposed estimates and a real dataset is used to illustrate our approach.
Kesavan.E
2013-04-01
This paper suggests an idea to design an adaptive PID controller for a non-linear liquid tank system, implemented in a PLC. Online estimation of the linear parameters (time constant and gain) brings an exact model of the process to take perfect control action. Based on these estimated values, the controller parameters are tuned by internal model control. Internal model control is a commonly used technique and provides a well tuned controller in order to have a good controlling process. The PLC, with its ability to provide both continuous control for the PID loop and digital control for fault diagnosis, ascertains faults in the system and provides alerts about the status of the entire process.
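The internal-model-control tuning step can be sketched for an assumed first-order tank model. This is a generic simulation, not the paper's PLC implementation; the gain, time constant, and IMC filter parameter are hypothetical:

```python
def simulate_pi_level_control(K=2.0, tau=10.0, setpoint=1.0,
                              lam=2.0, dt=0.05, steps=4000):
    """First-order tank  dy/dt = (K*u - y)/tau  under a PI controller
    tuned by the IMC rules for a first-order model:
    Kc = tau/(K*lam), Ti = tau, where lam sets the closed-loop speed."""
    Kc, Ti = tau / (K * lam), tau
    y, integral = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        integral += e * dt
        u = Kc * (e + integral / Ti)   # PI control law
        y += dt * (K * u - y) / tau    # Euler step of the tank model
    return y

final_level = simulate_pi_level_control()
```

With Ti = tau the controller zero cancels the plant pole, so the closed loop behaves like a first-order lag with time constant lam, and the integral action drives the level to the setpoint.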
truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models
Maria Karlsson
2014-05-01
Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.
XING Chunbing
2007-01-01
In this paper, quantile regression is used to estimate wage equations under different ownership types. Quantile regression gives us distributions rather than a single estimate of the returns to both education and experience in each ownership sector. For state-owned enterprises (SOEs), the returns to education tended to be larger at the bottom of the conditional distribution of wages in 1991 and 1993, and there was no such trend in 1997. For the private sector, however, the returns to education tended to be larger at the top positions in 1993 and 1997. It is also found that the growth rates of the wages at the bottom of the conditional distribution of wages are higher than those at the top in SOEs. No such pattern is found for the private sector. It is suggested that the wage mechanism in the private sector is more market-oriented.
Efficient Estimation of the Non-linear Volatility and Growth Model
2009-01-01
Ramey and Ramey (1995) introduced a non-linear model relating volatility to growth. The solution of this model by generalised computer algorithms for non-linear maximum likelihood estimation encounters the usual difficulties and is, at best, tedious. We propose an algebraic solution for the model that provides fully efficient estimators and is elementary to implement as a standard ordinary least squares procedure. This eliminates issues such as the ‘guesstimation’ of initial values and mul...
Linear regressive model structures for estimation and prediction of compartmental diffusive systems
Vries, D.; Keesman, K.J.; Zwart, H.
2006-01-01
In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given a LTI system in state
A probabilistic risk assessment for dengue fever by a threshold based-quantile regression
Chiu, Chuan-Hung; Tan, Yih-Chi; Wen, Tzai-Hung; Chien, Lung-Chang; Yu, Hwa-Lung
2014-05-01
This article introduces the concept of "return period" to analyze the potential incident rate of dengue fever by bringing together two models: the quantile regression model and the threshold-based method. The return period provides the frequency of incidence of dengue fever, and establishes risk maps for the potential incidence of dengue fever that point out the highest risk in certain areas. A threshold-based linear quantile regression model was constructed to find significant main effects and interactions based on a collinearity test and stepwise selection, and the performance of the model is shown via pseudo R2. Finally, the spatial risk maps of the specified return periods and average incident rates are given, and indicate that high population density places (e.g., residential areas), water conservancy facilities, and the corresponding interactions can have a positive influence on dengue fever. These factors would be the key point for disease protection in a given study area.
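The return-period concept used here is the reciprocal of an exceedance probability. A minimal empirical sketch with hypothetical weekly incidence counts, not the paper's quantile-regression-based estimate:

```python
def return_period(threshold, observations, per_period=52):
    """Empirical return period: reciprocal of the exceedance frequency,
    expressed in reporting periods (here: 52 weekly counts per year)."""
    n = len(observations)
    exceed = sum(1 for x in observations if x > threshold)
    if exceed == 0:
        raise ValueError("threshold never exceeded in the record")
    p = exceed / n                    # per-observation exceedance probability
    return 1.0 / (p * per_period)     # years between exceedances, on average

# Hypothetical weekly dengue incidence counts over four years (208 weeks):
weekly = [0] * 180 + [5] * 20 + [25] * 8
T = return_period(20, weekly, per_period=52)  # 8 of 208 weeks exceed 20
```

In the paper the exceedance probability comes from a fitted conditional quantile rather than raw counts, which is what lets the return period vary over space.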
DIFFERENCES IN DECLINE: QUANTILE REGRESSION OF MALE–FEMALE EARNINGS DIFFERENTIAL IN MALAYSIA
SIEW CHING GOY; GERAINT JOHNES
2015-01-01
Semiparametric estimation has gained significant attention in the study of wage inequality between men and women in recent years. By extending the wage gap at the mean towards the entire wage distribution using quantile regression, it enables researchers to ascertain the direction and the proportions of differences in characteristics and returns to these characteristics at different parts of the wage distribution. This line of research has been prominent in western society but has not yet bee...
Estimations of non-linearities in structural vibrations of string musical instruments
Ege, Kerem; Boutillon, Xavier
2012-01-01
Under the excitation of strings, the wooden structure of string instruments is generally assumed to undergo linear vibrations. As an alternative to the direct measurement of the distortion rate at several vibration levels and frequencies, we characterise weak non-linearities by a signal-model approach based on a cascade of Hammerstein models. In this approach, in a chain of two non-linear systems, two measurements are sufficient to estimate the non-linear contribution of the second (sub-)system, which cannot be directly linearly driven, as a function of the exciting frequency. The experiment consists of exciting the instrument acoustically. The linear and non-linear contributions to the responses of (a) the loudspeaker coupled to the room and (b) the instrument can be separated. Some methodological issues will be discussed. Findings pertaining to several instruments - one piano, two guitars, one violin - will be presented.
robustlmm: An R Package for Robust Estimation of Linear Mixed-Effects Models
Manuel Koller
2016-12-01
Like any real-life data, data modeled by linear mixed-effects models often contain outliers or other contamination. Even little contamination can drive the classic estimates far away from what they would be without the contamination. At the same time, datasets that require mixed-effects modeling are often complex and large. This makes it difficult to spot contamination. Robust estimation methods aim to solve both problems: to provide estimates where contamination has only little influence and to detect and flag contamination. We introduce an R package, robustlmm, to robustly fit linear mixed-effects models. The package's functions and methods are designed to closely mirror those offered by lme4, the R package that implements classic linear mixed-effects model estimation in R. The robust estimation method in robustlmm is based on the random effects contamination model and the central contamination model. Contamination can be detected at all levels of the data. The estimation method does not make any assumption on the data's grouping structure except that the model parameters are estimable. robustlmm supports hierarchical and non-hierarchical (e.g., crossed) grouping structures. The robustness of the estimates and their asymptotic efficiency is fully controlled through the function interface. Individual parts (e.g., fixed effects and variance components) can be tuned independently. In this tutorial, we show how to fit robust linear mixed-effects models using robustlmm, how to assess the model fit, how to detect outliers, and how to compare different fits.
A speed estimation unit for induction motors based on adaptive linear combiner
Marei, Mostafa I.; Shaaban, Mostafa F.; El-Sattar, Ahmed A. [Department of Electrical Power and Machines, Faculty of Engineering, Ain Shams University, Cairo 11517 (Egypt)
2009-07-15
This paper presents a new induction motor speed estimation technique, which can estimate the rotor resistance as well, from the measured voltage and current signals. Moreover, the paper utilizes a novel adaptive linear combiner (ADALINE) structure for speed and rotor resistance estimation. This structure can deal with multi-output systems and is called MO-ADALINE. The model of the induction motor is arranged in a linear form, in the stationary reference frame, to cope with the proposed speed estimator. The proposed unit has many advantages, such as wide speed range capability, immunity against harmonics of the measured waveforms, and precise estimation of the speed and the rotor resistance under different dynamic changes. Different types of induction motor drive systems are used to evaluate the dynamic performance and to examine the accuracy of the proposed unit for speed and rotor resistance estimation.
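An ADALINE adapts the weights of a linear combiner with the LMS (Widrow-Hoff) rule. A generic single-output sketch on a made-up two-parameter identification problem — the paper's MO-ADALINE handles multiple outputs and motor-specific signals, which are not reproduced here:

```python
def adaline_identify(inputs, targets, n_weights, mu=0.05, epochs=200):
    """LMS (Widrow-Hoff) update for an adaptive linear combiner: each
    sample moves the weights along the negative gradient of the
    instantaneous squared error e**2."""
    w = [0.0] * n_weights
    for _ in range(epochs):
        for x, d in zip(inputs, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))  # combiner output
            e = d - y                                  # instantaneous error
            w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# Identify a two-parameter linear model d = 1.5*x0 - 0.7*x1 from samples.
xs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, -0.5), (-1.0, 0.3)]
ds = [1.5 * x0 - 0.7 * x1 for x0, x1 in xs]
w = adaline_identify(xs, ds, n_weights=2)
```

With noiseless, persistently exciting data the weights converge to the true parameters; in the motor application the recovered parameters encode speed and rotor resistance.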
Design of Non-fragile Satisfactory Estimator for Linear Continuous Perturbed Stochastic Systems
ZANG Wen-li; WANG Yuan-gang; GUO Zhi
2006-01-01
The design problem of a non-fragile estimator is addressed for a class of perturbed linear continuous systems. The perturbations occur on the plant and estimator parameters. The estimator designed should force the error system to achieve the desired decay rate and force the steady error variance below the specified upper bound, irrespective of the admissible plant perturbations and estimator perturbations. The consistency problem of the decay rate with the variance upper bound is first considered via a linear matrix inequality (LMI) approach. The solution of the estimator parameters under consistent specifications is then discussed. The consistency condition of the specifications and the estimator parameter solution are transformed into feasibility or minimization problems subject to a set of LMIs, respectively. The method is illustrated by a numerical example.
Solutions to estimation problems for scalar hamilton-jacobi equations using linear programming
Claudel, Christian G.
2014-01-01
This brief presents new convex formulations for solving estimation problems in systems modeled by scalar Hamilton-Jacobi (HJ) equations. Using a semi-analytic formula, we show that the constraints resulting from a HJ equation are convex, and can be written as a set of linear inequalities. We use this fact to pose various (and seemingly unrelated) estimation problems related to traffic flow engineering as a set of linear programs. In particular, we solve data assimilation and data reconciliation problems for estimating the state of a system when the model and measurement constraints are incompatible. We also solve traffic estimation problems, such as travel time estimation or density estimation. For all these problems, a numerical implementation is performed using experimental data from the Mobile Century experiment. In the context of reproducible research, the code and data used to compute the results presented in this brief have been posted online and are accessible to regenerate the results. © 2013 IEEE.
An Adaptive Finite Element Method Based on Optimal Error Estimates for Linear Elliptic Problems
汤雁
2004-01-01
This work belongs to a series of papers on adaptive finite element methods based on optimal error control estimates. This paper is the third part of the series, treating linear elliptic problems on concave corner domains. In the preceding two papers (part 1: Adaptive finite element method based on optimal error estimate for linear elliptic problems on concave corner domain; part 2: Adaptive finite element method based on optimal error estimate for linear elliptic problems on nonconvex polygonal domains), we presented adaptive finite element methods based on the energy norm and the maximum norm. In this paper, an important result is presented and analyzed. The algorithm for error control in the energy norm and maximum norm in parts 1 and 2 of this series is based on this result.
KAYODE AYINDE
2012-11-01
Performances of estimators of linear regression models with autocorrelated error terms have been attributed to the nature and specification of the explanatory variables. Violation of the assumption of independence of the explanatory variables is not uncommon, especially in business, economics and the social sciences, leading to the development of many estimators. Moreover, prediction is one of the main purposes of regression analysis. This work therefore attempts to examine the parameter estimates of the ordinary least squares estimator (OLS), the Cochrane-Orcutt estimator (COR), the maximum likelihood estimator (ML) and estimators based on principal component analysis (PC) in prediction of linear regression models with autocorrelated error terms under violation of the assumption of independent regressors (multicollinearity), using a Monte Carlo experiment approach. With uniform variables as regressors, it further identifies the best estimator for prediction purposes by averaging the adjusted coefficient of determination of each estimator over the number of trials. Results reveal that the performances of the COR and ML estimators at each level of multicollinearity over the levels of autocorrelation are convex-like, while those of the OLS and PC estimators are concave; and that as the level of multicollinearity increases, the estimators perform much better at all levels of autocorrelation. Except when the sample size is small (n=10), the performances of the COR and ML estimators are generally best and asymptotically the same. When the sample size is small, the COR estimator is still best except when the autocorrelation level is low. At these instances, the PC estimator is either best or competes with the best estimator. Moreover, at low levels of autocorrelation in all sample sizes, the OLS estimator competes with the best estimator at all levels of multicollinearity.
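Monte Carlo studies of this kind start by generating regression errors with a prescribed autocorrelation level. A minimal sketch of an AR(1) error generator; the parameter values are illustrative only, not those of the study:

```python
import random

def ar1_errors(n, rho, sigma=1.0, seed=7):
    """Generate AR(1) errors e_t = rho*e_{t-1} + u_t with Gaussian
    innovations u_t — the standard error structure varied across
    autocorrelation levels in Monte Carlo regression experiments."""
    rng = random.Random(seed)
    e = [rng.gauss(0.0, sigma)]
    for _ in range(n - 1):
        e.append(rho * e[-1] + rng.gauss(0.0, sigma))
    return e

errs = ar1_errors(5000, rho=0.8)
```

Adding such errors to a fixed linear predictor and refitting with OLS, COR, ML, etc. over many replications is the experiment design the abstract describes.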
On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra
Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.
2007-01-01
This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer (LET) spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and single event effect rates to estimate radiation effects on electronics. The errors in the estimation of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy transfer spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in the interplanetary quiet-time environment regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor, especially if a spherical detector is used. For single event estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable, and the use of lineal energy spectra for single event rate estimation should be avoided.
Bourgeois, Brian S.; Elmore, Paul A.; Avera, William E.; Zambo, Samantha J.
2016-07-01
This paper examines and contrasts two estimation methods, Kalman filtering and linear smoothing, for creating interpolated data products from bathymetry measurements. Using targeted examples, we demonstrate previously obscured behavior showing the dependence of linear smoothers on the spatial arrangement of the measurements, yielding markedly different estimation results than the Kalman filter. For bathymetry data, we have modified the variance estimates from both the Kalman filter and linear smoothers to obtain comparable estimators for dense data. These comparable estimators produce uncertainty estimates that have statistically insignificant differences via hypothesis testing. Achieving comparable estimation is accomplished by applying the "propagated uncertainty" concept and a numerical realization of Tobler's principle to the measurement data prior to the computation of the estimate. We show new mathematical derivations for these modifications. In addition, we show test results with (a) synthetic data and (b) gridded bathymetry in the area of the Scripps and La Jolla Canyons. Our tenfold cross-validation for case (b) shows that the modified equations create comparable uncertainty for both gridding algorithms with null hypothesis acceptance rates of greater than 99.95% of the data points. In contrast, bilinear interpolation has 10 times the amount of rejection. We then discuss how the uncertainty estimators are, in principle, applicable to interpolate geophysical data other than bathymetry.
Giovanni Bonaccolto
2016-07-01
Several market and macro-level variables influence the evolution of equity risk in addition to the well-known volatility persistence. However, the impact of those covariates might change depending on the risk level, being different between low and high volatility states. By combining equity risk estimates, obtained from the Realized Range Volatility corrected for microstructure noise and jumps, with quantile regression methods, we evaluate the forecasting implications of the equity risk determinants in different volatility states and, without distributional assumptions on the realized range innovations, we recover both the point and the conditional distribution forecasts. In addition, we analyse how the relationships among the involved variables evolve over time through a rolling window procedure. The results show evidence of the selected variables' relevant impacts and, particularly during periods of market stress, highlight heterogeneous effects across quantiles.
Direction of Arrival Estimation Based on MUSIC Algorithm Using Uniform and Non-Uniform Linear Arrays
Eva Kwizera
2017-03-01
In signal processing, direction of arrival (DOA) estimation denotes estimating the direction from which a propagating wave arrives at a point where a set of antennas is located. Using an array antenna has an advantage over a single antenna in achieving improved performance by applying the Multiple Signal Classification (MUSIC) algorithm. This paper focuses on estimating the DOA using a uniform linear array (ULA) and a non-uniform linear array (NLA) of antennas to analyze the performance factors that affect the accuracy and resolution of the system based on the MUSIC algorithm. The direction of arrival estimation is simulated on a MATLAB platform with a set of input parameters such as array elements, signal to noise ratio, number of snapshots and number of signal sources. An extensive simulation has been conducted and the results show that the NLA with DOA estimation for a co-prime array can achieve accurate and efficient DOA estimation.
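A minimal numerical sketch of MUSIC on a half-wavelength ULA, in Python rather than MATLAB, illustrates the subspace idea behind the algorithm. The array size, SNR, snapshot count and source angles below are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def music_doa(n_elems=8, true_deg=(-20.0, 30.0), n_snap=200, snr_db=10.0):
    """MUSIC pseudospectrum DOA estimation on a half-wavelength ULA (sketch)."""
    d = 0.5                       # element spacing in wavelengths
    k = np.arange(n_elems)
    def steer(theta_deg):
        return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))
    A = np.column_stack([steer(t) for t in true_deg])      # array manifold
    S = (rng.normal(size=(len(true_deg), n_snap)) +
         1j * rng.normal(size=(len(true_deg), n_snap)))    # source signals
    noise_std = 10 ** (-snr_db / 20)
    N = noise_std * (rng.normal(size=(n_elems, n_snap)) +
                     1j * rng.normal(size=(n_elems, n_snap)))
    X = A @ S + N
    R = X @ X.conj().T / n_snap                            # sample covariance
    w, V = np.linalg.eigh(R)                               # ascending eigenvalues
    En = V[:, : n_elems - len(true_deg)]                   # noise subspace
    grid = np.arange(-90.0, 90.0, 0.1)
    spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(g)) ** 2
                     for g in grid])                       # MUSIC pseudospectrum
    # pick the largest well-separated peaks
    peaks = []
    for i in np.argsort(spec)[::-1]:
        if all(abs(grid[i] - p) > 5 for p in peaks):
            peaks.append(float(grid[i]))
        if len(peaks) == len(true_deg):
            break
    return sorted(peaks)

est = music_doa()
```

With these assumed settings the two pseudospectrum peaks fall close to the true arrival angles.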
Sieve M-estimation for semiparametric varying-coefficient partially linear regression model
无
2010-01-01
This article considers a semiparametric varying-coefficient partially linear regression model. This model is a generalization of the partially linear regression model and the varying-coefficient regression model, and allows one to explore a possibly nonlinear effect of a certain covariate on the response variable. A sieve M-estimation method is proposed and the asymptotic properties of the proposed estimators are discussed. Our main object is to estimate the nonparametric component and the unknown parameters simultaneously. The method is easier to compute, and the required computational burden is much less than that of the existing two-stage estimation method. Furthermore, the sieve M-estimation is robust in the presence of outliers if we choose an appropriate ρ(·). Under some mild conditions, the estimators are shown to be strongly consistent; the convergence rate of the estimator for the unknown nonparametric component is obtained and the estimator for the unknown parameter is shown to be asymptotically normally distributed. Numerical experiments are carried out to investigate the performance of the proposed method.
The Solution Structure and Error Estimation for The Generalized Linear Complementarity Problem
Tingfa Yan
2014-07-01
In this paper, we consider the generalized linear complementarity problem (GLCP). Firstly, we develop some equivalent reformulations of the problem under milder conditions and then characterize the solution of the GLCP. Secondly, we establish a global error estimation for the GLCP by weakening the assumptions. The results obtained in this paper can be taken as an extension of results for classical linear complementarity problems.
Yueyang Li
2014-01-01
This paper investigates the H∞ fixed-lag fault estimator design for linear discrete time-varying (LDTV) systems with intermittent measurements, described by a Bernoulli distributed random variable. Through constructing a novel partially equivalent dynamic system, the fault estimator design is converted into a deterministic quadratic minimization problem. By applying the innovation reorganization technique and the projection formula in Krein space, a necessary and sufficient condition is obtained for the existence of the estimator. The parameter matrices of the estimator are derived by recursively solving two standard Riccati equations. An illustrative example is provided to show the effectiveness and applicability of the proposed algorithm.
A process fault estimation strategy for non-linear dynamic systems
Pazera, Marcin; Korbicz, Józef
2017-01-01
The paper deals with the problem of simultaneous state and process fault estimation for non-linear dynamic systems. Instead of estimating the fault directly, its product with the state, together with the state itself, is estimated. To derive the fault from this product, a simple algebraic approach is proposed. The estimation strategy is based on the quadratic boundedness approach. The final part of the paper presents an illustrative example concerning a laboratory multi-tank system. The real-data experiments clearly exhibit the performance of the proposed approach.
Variance estimation for complex indicators of poverty and inequality using linearization techniques
Guillaume Osier
2009-12-01
The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which were set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tille, 2000) is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators, since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
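The Taylor linearization idea for a simple nonlinear statistic, the ratio of two means under simple random sampling, can be sketched as follows. The data and sample size are assumed for illustration, and the jackknife is included only as a cross-check on the linearized variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1.0, 3.0, size=n)
y = 2.0 * x + rng.normal(scale=0.3, size=n)

R = y.mean() / x.mean()                 # ratio estimator

# Taylor linearization: z_i is the first-order influence of unit i on R
z = (y - R * x) / x.mean()
var_lin = z.var(ddof=1) / n             # linearized variance (fpc ignored)

# delete-one jackknife variance of the ratio, for comparison
jack = np.array([np.delete(y, i).mean() / np.delete(x, i).mean()
                 for i in range(n)])
var_jack = (n - 1) / n * ((jack - jack.mean()) ** 2).sum()
```

For a smooth statistic like the ratio, the linearized and jackknife variances agree closely; the "Laeken" indicators require the more general influence-function machinery described in the abstract.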
A note on constrained M-estimation and its recursive analog in multivariate linear regression models
Calyampudi R. RAO
2009-01-01
In this paper, the constrained M-estimation of the regression coefficients and scatter parameters in a general multivariate linear regression model is considered. Since the constrained M-estimation is not easy to compute, an updating recursion procedure is proposed to simplify the computation of the estimators when a new observation is obtained. We show that, under mild conditions, the recursive estimates are strongly consistent. In addition, the asymptotic normality of the recursive constrained M-estimators of the regression coefficients is established. A Monte Carlo simulation study of the recursive estimates is also provided. Besides, the robustness and asymptotic behavior of the constrained M-estimators are briefly discussed.
Robust state estimation for uncertain linear systems with deterministic input signals
Huabo LIU; Tong ZHOU
2014-01-01
In this paper, we investigate state estimation for a dynamical system in which not only process and measurement noise, but also parameter uncertainties and deterministic input signals are involved. The sensitivity-penalization-based robust state estimation is extended to uncertain linear systems with deterministic input signals and parametric uncertainties which may nonlinearly affect a state-space plant model. The form of the derived robust estimator is similar to that of the well-known Kalman filter, with a comparable computational complexity. Under a few weak assumptions, it is proved that though the derived state estimator is biased, the bound of the estimation errors is finite and the covariance matrix of the estimation errors is bounded. Numerical simulations show that the obtained robust filter has good estimation performance.
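Since the derived estimator has the form of a Kalman filter, a minimal scalar Kalman filter shows the predict/update recursion being referred to. This is the standard textbook filter, not the paper's robust variant, and the noise variances are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)

# scalar random walk: x_t = x_{t-1} + w_t,  observation y_t = x_t + v_t
q, r, T = 0.01, 1.0, 300          # process / measurement noise variances
x = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))   # true state path
y = x + rng.normal(scale=np.sqrt(r), size=T)          # noisy observations

xhat, P = 0.0, 1.0                # initial state estimate and its variance
est = np.empty(T)
for t in range(T):
    P = P + q                     # predict: variance grows by process noise
    K = P / (P + r)               # Kalman gain
    xhat = xhat + K * (y[t] - xhat)   # update with the innovation
    P = (1 - K) * P               # posterior variance
    est[t] = xhat

mse_filter = np.mean((est - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
```

The filtered estimate has a much smaller mean-squared error than the raw measurements; the robust variants discussed in the abstract keep this structure while guarding against model uncertainty.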
A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers
Melboe, Hallgeir
2001-10-01
This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have received a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which, due to a finite number of iterations, introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
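The gap between a residual-based stopping criterion and the actual solution error can be seen on a small ill-conditioned system. The construction below is an assumed toy example: an 8x8 matrix with geometrically decaying singular values and an approximate solution perturbed along the weakest singular direction.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
# build an ill-conditioned matrix A = U diag(s) V^T with condition number 1e8
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)
A = U @ np.diag(s) @ V.T
x_true = rng.normal(size=n)
b = A @ x_true

# perturb the solution along the smallest singular direction of A
x_approx = x_true + 1e-6 * V[:, -1]

rel_residual = np.linalg.norm(b - A @ x_approx) / np.linalg.norm(b)
rel_error = np.linalg.norm(x_approx - x_true) / np.linalg.norm(x_true)
```

The residual is tiny because the perturbation is almost annihilated by A, while the error in the solution itself is orders of magnitude larger, which is exactly why residual-based stopping criteria can give poor control of the actual error.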
Chandra Nagasuma R
2009-02-01
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained) fitting problem and solved finally by formulating a Linear Program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known
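The l1-constrained-fitting-as-LP idea can be sketched with variable splitting. The formulation below is an assumed illustration in the spirit of the approach, not the paper's exact program: it minimizes an l1 residual subject to an l1 budget on the weights, using scipy's LP solver.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, p = 60, 10
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:2] = [3.0, -2.0]                    # sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=n)

t = 6.0                                     # assumed l1 budget, >= ||w_true||_1
# variables z = [w_plus (p), w_minus (p), u (n)], all >= 0, with w = w+ - w-
# minimize sum(u)  s.t.  -u <= y - Xw <= u  and  sum(w+ + w-) <= t
I = np.eye(n)
c = np.concatenate([np.zeros(2 * p), np.ones(n)])
A_ub = np.vstack([
    np.hstack([-X, X, -I]),                 # y - Xw <= u
    np.hstack([X, -X, -I]),                 # Xw - y <= u
    np.concatenate([np.ones(2 * p), np.zeros(n)])[None, :],  # l1 budget row
])
b_ub = np.concatenate([-y, y, [t]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
w_hat = res.x[:p] - res.x[p:2 * p]          # recover signed weights
```

With low noise and a budget covering the true l1 norm, the LP solution recovers the sparse weight vector closely; repeating one such regression per gene is the kind of building block a network-scale method stacks up.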
Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems
Feten Gannouni
2017-01-01
We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using a transformation of the original system, a new robust proportional integral filter (RPIF) having an error variance with an optimized guaranteed upper bound for any allowed uncertainty is proposed to improve robust estimation of unknown time-varying faults and to improve robustness against uncertainties. In this study, the minimization problem of the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMIs) for all admissible uncertainties. The proportional and the integral gains are optimally chosen by solving the convex optimization problem. Simulation results are given in order to illustrate the performance of the proposed filter, in particular to solve the problem of joint fault and state estimation.
A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays
Fenggang Sun; Bin Gao; Lizhen Chen; Peng Lan
2016-08-01
The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array, where the co-prime array consists of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and the candidate DOAs are recovered according to the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff as compared to other existing methods.
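A minimal ESPRIT sketch for a single half-wavelength ULA shows the rotational-invariance step the abstract refers to; the co-prime mapping itself is not reproduced here, and the array size, SNR and source angles are assumed values.

```python
import numpy as np

rng = np.random.default_rng(6)

def esprit_doa(n_elems=10, true_deg=(-15.0, 25.0), n_snap=300, snr_db=15.0):
    """ESPRIT DOA estimation on a half-wavelength ULA (illustrative sketch)."""
    d = 0.5                       # spacing in wavelengths
    k = np.arange(n_elems)
    A = np.exp(2j * np.pi * d * np.outer(k, np.sin(np.deg2rad(true_deg))))
    S = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap)))
    noise = 10 ** (-snr_db / 20)
    X = A @ S + noise * (rng.normal(size=(n_elems, n_snap)) +
                         1j * rng.normal(size=(n_elems, n_snap)))
    R = X @ X.conj().T / n_snap                  # sample covariance
    w, V = np.linalg.eigh(R)
    Es = V[:, -2:]                               # signal subspace (2 sources)
    # rotational invariance between the two overlapping subarrays:
    # Es[1:] ~= Es[:-1] @ Phi, eigenvalues of Phi are exp(j*2*pi*d*sin(theta))
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    eig = np.linalg.eigvals(Phi)
    est = np.rad2deg(np.arcsin(np.angle(eig) / (2 * np.pi * d)))
    return np.sort(est)

est = esprit_doa()
```

Unlike MUSIC, no angle-grid search is needed: the DOAs come directly from the eigenvalues of the rotation operator.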
Forecasting Uncertainty in Electricity Smart Meter Data by Boosting Additive Quantile Regression
Taieb, Souhaib Ben
2016-03-02
Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.
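A small sketch of quantile-based interval forecasting with boosted trees can be written with scikit-learn's gradient boosting under the pinball (quantile) loss. This uses synthetic heteroscedastic data with assumed parameters, not the paper's additive model or the smart meter data set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 2000
X = rng.uniform(0, 3, size=(n, 1))
# heteroscedastic data: the noise spread grows with x, so the conditional
# quantiles are not parallel shifts of the conditional mean
y = np.sin(X[:, 0]) + rng.normal(scale=0.2 + 0.2 * X[:, 0])

models = {}
for q in (0.05, 0.95):
    m = GradientBoostingRegressor(loss="quantile", alpha=q,
                                  n_estimators=200, max_depth=2)
    m.fit(X, y)                   # one boosted model per target quantile
    models[q] = m

lo = models[0.05].predict(X)
hi = models[0.95].predict(X)
coverage = np.mean((y >= lo) & (y <= hi))   # nominal band coverage is 0.90
```

Fitting one model per quantile gives a full predictive band whose width adapts to the covariates, which is the key advantage over a single normality-based forecast for volatile disaggregated demand.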
Ahn, Kuk-Hyun; Palmer, Richard
2016-09-01
Despite wide use of regression-based regional flood frequency analysis (RFFA) methods, the majority are based on either ordinary least squares (OLS) or generalized least squares (GLS). This paper proposes 'spatial proximity' based RFFA methods using the spatial lagged model (SLM) and spatial error model (SEM). The proposed methods are represented by two frameworks: the quantile regression technique (QRT) and parameter regression technique (PRT). The QRT develops prediction equations for flooding quantiles in average recurrence intervals (ARIs) of 2, 5, 10, 20, and 100 years whereas the PRT provides prediction of three parameters for the selected distribution. The proposed methods are tested using data incorporating 30 basin characteristics from 237 basins in Northeastern United States. Results show that generalized extreme value (GEV) distribution properly represents flood frequencies in the study gages. Also, basin area, stream network, and precipitation seasonality are found to be the most effective explanatory variables in prediction modeling by the QRT and PRT. 'Spatial proximity' based RFFA methods provide reliable flood quantile estimates compared to simpler methods. Compared to the QRT, the PRT may be recommended due to its accuracy and computational simplicity. The results presented in this paper may serve as one possible guidepost for hydrologists interested in flood analysis at ungaged sites.
A novel suboptimal algorithm for state estimation of Markov jump linear systems
无
2011-01-01
This paper is concerned with the state estimation problem for Markov jump linear systems where the disturbances involved in the system equations and measurement equations are assumed to be Gaussian noise sequences. Based on two properties of conditional expectation, the orthogonal projection theorem is applied to the state estimation problem of the considered systems so that a novel suboptimal algorithm is obtained. The novelty of the algorithm lies in using the orthogonal projection theorem instead of Kalman filters to ...
Mayr, Andreas; Hothorn, Torsten; Fenske, Nora
2012-01-25
The construction of prediction intervals (PIs) for future body mass index (BMI) values of individual children, based on a recent German birth cohort study with n = 2007 children, is problematic for standard parametric approaches, as the BMI distribution in childhood is typically skewed depending on age. We avoid distributional assumptions by directly modelling the borders of PIs by additive quantile regression, estimated by boosting. We rely on the concept of conditional coverage to assess the accuracy of PIs. As conditional coverage can hardly be evaluated in practical applications, we conduct a simulation study before fitting child- and covariate-specific PIs for future BMI values and BMI patterns for the present data. The results of our simulation study suggest that PIs fitted by quantile boosting cover future observations with the predefined coverage probability and outperform the benchmark approach. For the prediction of future BMI values, quantile boosting automatically selects informative covariates and adapts to the age-specific skewness of the BMI distribution. The lengths of the estimated PIs are child-specific and increase, as expected, with the age of the child. Quantile boosting is a promising approach to construct PIs with correct conditional coverage in a non-parametric way. It is particularly suitable for the prediction of BMI patterns depending on covariates, since it provides an interpretable predictor structure, inherent variable selection properties and can even account for longitudinal data structures.
Asymptotics of Huber-Dutter Estimators for Partial Linear Model with Nonstochastic Designs
Xing-wei Tong; Heng-jian Cui; Hui Zhao
2005-01-01
For the partial linear model Y = Xτβ0 + g0(T) + e with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, the scale σ for the errors and the function g0, respectively, in which the smoothing B-spline function is used. Under some regularity conditions, it is shown that the Huber-Dutter estimators of β0 and σ are asymptotically normal with convergence rate n-1/2 and the B-spline Huber-Dutter estimator of g0 achieves the optimal convergence rate in nonparametric regression. A simulation study demonstrates that the Huber-Dutter estimator of β0 is competitive with its M-estimator without scale parameter and the ordinary least squares estimator. An example is presented after the simulation study.
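The robustness that motivates Huber-type estimation can be illustrated with scikit-learn's HuberRegressor, which, like the Huber-Dutter proposal, estimates the scale jointly with the coefficients. The data, outlier pattern and fixed nonlinear term standing in for g0 below are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(8)
n = 300
x = rng.uniform(0, 10, size=(n, 1))
g = np.sin(x[:, 0])                 # fixed stand-in for the smooth component g0
y = 1.5 * x[:, 0] + g + rng.normal(scale=0.5, size=n)

# contaminate the 15 highest-leverage points with gross outliers
idx = np.argsort(x[:, 0])[-15:]
y[idx] += 20.0

huber = HuberRegressor().fit(x, y)  # Huber loss with joint scale estimation
ols_fit = LinearRegression().fit(x, y)
```

The Huber fit downweights the leverage outliers that pull the least-squares slope away from the true value, mirroring the simulation finding that the Huber-Dutter estimator is competitive with, and more robust than, ordinary least squares.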
Consistency and normality of Huber-Dutter estimators for partial linear model
TONG XingWei; CUI HengJian; YU Peng
2008-01-01
For the partial linear model Y = Xτβ0 + g0(T) + e with unknown β0 ∈ Rd and an unknown smooth function g0, this paper considers the Huber-Dutter estimators of β0, the scale σ for the errors and the function g0 approximated by smoothing B-spline functions, respectively. Under some regularity conditions, the Huber-Dutter estimators of β0 and σ are shown to be asymptotically normal with the rate of convergence n-1/2 and the B-spline Huber-Dutter estimator of g0 achieves the optimal rate of convergence in nonparametric regression. A simulation study and two examples demonstrate that the Huber-Dutter estimator of β0 is competitive with its M-estimator without scale parameter and the ordinary least squares estimator.
A Class of Biased Estimators Based on SVD in Linear Model
GUI Qing-ming; DUAN Qing-tang; GUO Jian-feng; ZHOU Qiao-yun
2003-01-01
In this paper, a class of new biased estimators for the linear model is proposed by modifying the singular values of the design matrix so as to directly overcome the difficulties caused by ill-conditioning in the design matrix. Some important properties of these new estimators are obtained. By appropriate choices of the biasing parameters, we construct many useful and important estimators. An application of these new estimators to three-dimensional position adjustment by distance in spatial coordinate surveys is given. The results show that the proposed biased estimators can effectively overcome ill-conditioning and that their numerical stability is preferable to ordinary least squares estimation.
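Modifying the singular values of an ill-conditioned design matrix can be sketched directly with the SVD. The filter factor below is one assumed ridge-type choice, not the authors' specific class of estimators, and the design matrix is a synthetic near-collinear example.

```python
import numpy as np

rng = np.random.default_rng(9)
n, p = 40, 6
# ill-conditioned design: all columns nearly proportional to one base column
base = rng.normal(size=(n, 1))
X = base + 1e-4 * rng.normal(size=(n, p))
beta = np.ones(p)
y = X @ beta + 0.1 * rng.normal(size=n)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
b_ols = Vt.T @ ((U.T @ y) / s)          # OLS via SVD: divides by tiny s values

lam = 1e-2                              # assumed biasing parameter
filt = s / (s ** 2 + lam)               # modified (shrunken) inverse singular values
b_biased = Vt.T @ (filt * (U.T @ y))    # biased estimator from modified spectrum
```

Dividing by the near-zero singular values makes the OLS coefficients blow up, while the modified spectrum keeps the biased estimate bounded, which is the numerical-stability gain the abstract describes.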
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte Carlo experiment. We find that estimation of the parameters in the transition function can be problematic but that there may be significant benefits in terms of forecast performance.
Jie Li DING; Xi Ru CHEN
2006-01-01
For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) β̂n of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and asymptotic normality of β̂n.
Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression
Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.
2007-01-01
The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…
Measurement Error in Income and Schooling and the Bias of Linear Estimators
Bingley, Paul; Martinello, Alessandro
2017-01-01
We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and R...
MA Qinghua; YANG Enhao
2000-01-01
An estimation method for solutions to the general linear system of Volterra-type integral inequalities containing several iterated integral functionals is obtained. This method is based on a result proved by the present second author in Journ. Math. Anal. Appl. (1984). A certain two-dimensional system of nonlinear ordinary differential equations is also discussed to demonstrate the usefulness of our method.
STRONG CONSISTENCY OF M ESTIMATOR IN LINEAR MODEL FOR NEGATIVELY ASSOCIATED SAMPLES
Qunying WU
2006-01-01
This paper discusses the strong consistency of M estimator of regression parameter in linear model for negatively associated samples. As a result, the author extends Theorem 1 and Theorem 2 of Shanchao YANG (2002) to the NA errors without necessarily imposing any extra condition.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai
2017-01-01
This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since the disturbance and the actuator fault are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state is recast in terms of the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed by Lyapunov theory. Finally, the proposed method is tested on a wind turbine system with disturbance and actuator fault. The simulations demonstrate the effectiveness and performance of the proposed method.
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes...
Effects of linear trends on estimation of noise in GNSS position time-series
Dmitrieva, K.; Segall, P.; Bradley, A. M.
2017-01-01
A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this paper, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.
Refining Our Understanding of Beta through Quantile Regressions
Allen B. Atkins
2014-05-01
The Capital Asset Pricing Model (CAPM) has been a key theory in financial economics since the 1960s. One of its main contributions is to attempt to identify how the risk of a particular stock is related to the risk of the overall stock market using the risk measure Beta. If the relationship between an individual stock's returns and the returns of the market exhibits heteroskedasticity, then the estimates of Beta for different quantiles of the relationship can be quite different. The behavioral ideas first proposed by Kahneman and Tversky (1979), which they called prospect theory, postulate that: (i) people exhibit "loss-aversion" in a gain frame; and (ii) people exhibit "risk-seeking" in a loss frame. If this is true, people could prefer lower Beta stocks after they have experienced a gain and higher Beta stocks after they have experienced a loss. Stocks that exhibit converging heteroskedasticity (22.2% of our sample) should be preferred by investors, and stocks that exhibit diverging heteroskedasticity (12.6% of our sample) should not be preferred. Investors may be able to benefit by choosing portfolios that are more closely aligned with their preferences.
Adaptive semiparametric wavelet estimator and goodness-of-fit test for long memory linear processes
Bardet, Jean-Marc
2010-01-01
This paper is first devoted to the study of an adaptive wavelet-based estimator of the long memory parameter for linear processes in a general semi-parametric frame. This is an extension of Bardet et al. (2008), which only concerned Gaussian processes. Moreover, the definition of the long memory parameter estimator is modified and the asymptotic results are improved, even in the Gaussian case. Finally, an adaptive goodness-of-fit test is also built and easy to employ: it is a chi-square type test. Simulations confirm the interesting consistency and robustness properties of the adaptive estimator and test.
Method for quantitative estimation of position perception using a joystick during linear movement.
Wada, Y; Tanaka, M; Mori, S; Chen, Y; Sumigama, S; Naito, H; Maeda, M; Yamamoto, M; Watanabe, S; Kajitani, N
1996-12-01
We designed a method for quantitatively estimating self-motion perception during passive body movement on a sled. The subjects were instructed to tilt a joystick in proportion to the perceived displacement from a given starting position during linear movement with varying displacements of 4 m, 10 m and 16 m, induced by constant accelerations of 0.02 g, 0.05 g and 0.08 g along the antero-posterior axis. With this method, we could monitor not only subjective position perception but also response latencies for the beginning (RLbgn) and end (RLend) of the linear movement. Perceived body position fitted Stevens' power law, R = kS^n (where R is the output of the joystick, k is a constant, S is the displacement of the linear movement and n is an exponent). RLbgn decreased as linear acceleration increased. We conclude that this method is useful in analyzing the features and sensitivities of self-motion perception during movement.
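As a sketch of the power-law fit described above: the relation R = kS^n becomes linear after a log transform (log R = log k + n log S), so k and n can be recovered by ordinary least squares. The displacement/response values below are invented for illustration, not data from the study.

```python
import numpy as np

# Hypothetical joystick readings R at displacements S (illustrative values only)
S = np.array([4.0, 10.0, 16.0])   # displacement in metres
R = np.array([1.9, 4.1, 6.2])     # joystick output (arbitrary units)

# Stevens' power law R = k * S**n is linear in log-log coordinates:
#   log R = log k + n * log S
n, log_k = np.polyfit(np.log(S), np.log(R), 1)
k = np.exp(log_k)
print(f"exponent n = {n:.2f}, constant k = {k:.2f}")
```

An exponent below 1 would indicate a compressive perception of displacement, as is typical for Stevens-type magnitude estimates.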
Varadarajan, Divya; Haldar, Justin P
2017-08-19
The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
Nora Fenske
BACKGROUND: Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. OBJECTIVE: We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. DESIGN: Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. RESULTS: At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. CONCLUSIONS: Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
Brabec, Jiri; Lin, Lin; Shao, Meiyue; Govind, Niranjan; Yang, Chao; Saad, Yousef; Ng, Esmond
2015-10-06
We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix; they are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly; they only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing its eigenvalues. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods based on the exact diagonalization of the linear response matrix, and that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor
2017-06-28
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input's amplitudes. Second, closed-form formulas for both the time location and the amplitude are provided in the particular case of a single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of the pointwise input and the fractional differentiation orders is also presented, and a discussion on the performance of the proposed algorithm is provided.
Stochastic error whitening algorithm for linear filter estimation with noisy data.
Rao, Yadunandana N; Erdogmus, Deniz; Rao, Geetha Y; Principe, Jose C
2003-01-01
Mean squared error (MSE) has been the most widely used tool to solve the linear filter estimation or system identification problem. However, MSE gives biased results when the input signals are noisy. This paper presents a novel stochastic gradient algorithm based on the recently proposed error whitening criterion (EWC) to tackle the problem of linear filter estimation in the presence of additive white disturbances. We briefly motivate the theory behind the new criterion and derive an online stochastic gradient algorithm. A convergence proof of the stochastic gradient algorithm is derived under mild assumptions. Further, we propose some extensions to the stochastic gradient algorithm to ensure faster, step-size-independent convergence. We perform extensive simulations and compare the results with MSE as well as total least squares in a parameter estimation problem. The stochastic EWC algorithm has many potential applications; we use it in designing robust inverse controllers with noisy data.
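A minimal sketch (not the EWC algorithm itself) of the bias the abstract refers to: with additive white noise on the input, the least-squares (MSE-optimal) filter weight is attenuated toward zero. The signal model and variances below are assumptions chosen so the expected attenuation is 50%.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
true_w = 2.0

x_clean = rng.normal(0.0, 1.0, n)             # noise-free input
y = true_w * x_clean                          # desired signal
x_noisy = x_clean + rng.normal(0.0, 1.0, n)   # input corrupted by unit-variance white noise

# MSE-optimal (least-squares) weight computed from the noisy input:
#   E[w_hat] = true_w * var(x) / (var(x) + var(noise)) = 2.0 * 1 / (1 + 1) = 1.0
w_hat = np.dot(x_noisy, y) / np.dot(x_noisy, x_noisy)
print(f"true weight = {true_w}, MSE estimate = {w_hat:.3f}")
```

The estimate converges to 1.0, not 2.0: this attenuation is the bias that error-whitening and total-least-squares approaches are designed to remove.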
A volume law for specification of linear channel storage for estimation of large floods
Zhang, Shangyou; Cordery, Ian; Sharma, Ashish
2000-02-01
A method of estimating large floods using a linear storage-routing approach is presented. The differences between the proposed approach and those traditionally used are (1) that the flood producing properties of basins are represented by a linear system, (2) the storage parameters of the distributed model are determined using a volume law which, unlike other storage-routing models, accounts for the distribution of storage in natural basins, and (3) the basin outflow hydrograph is determined analytically and expressed in a succinct mathematical form. The single model parameter is estimated from observed data without direct fitting, unlike most traditionally used methods. The model was tested by showing it could reproduce observed large floods on a number of basins. This paper compares the proposed approach with a traditionally used storage routing approach using observed flood data from the Hacking River basin in New South Wales, Australia. Results confirm the usefulness of the proposed approach for estimation of large floods.
Zollanvari, Amin
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
Zollanvari, Amin; Genton, Marc G
2013-08-01
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays.
Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin
2016-02-23
In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches.
Priyadarshi, Himanshu; Das, Rekha; Kumar, Shivendra; Kishore, Pankaj; Kumar, Sujit
2017-01-01
Identification of a reference gene unaffected by the experimental conditions is obligatory for accurate measurement of gene expression through relative quantification. Most existing methods directly analyze variability in crossing point (Cp) values of reference genes and fail to account for template-independent factors that affect Cp values in their estimates. We describe the use of three simple statistical methods, namely analysis of variance (ANOVA), normal quantile-quantile correlation (NQQC) and effective expression support (EES), on pooled expression ratios of reference genes in a panel to overcome this issue. The pooling of expression ratios across the genes in the panel nullifies the sample-specific effects that uniformly affect all genes and are falsely reflected as instability. Our methods also offer the flexibility to include sample-specific PCR efficiencies in the estimations, when available, for improved accuracy. Additionally, we describe a correction factor from the ANOVA method to correct the relative fold change of a target gene if no truly stable reference gene can be found in the analyzed panel. The analysis is described on a synthetic data set to simplify the explanation of the statistical treatment of data.
ESTIMATION OF LIVE BODY WEIGHT FROM LINEAR BODY MEASUREMENTS FOR FARTA SHEEP
MENGISTIE TAYE
2012-01-01
A study to develop regression models for the prediction of body weight from other linear body measurements was conducted in the Esite, Farta and Lai-Gaint districts of South Gondar, Amhara region. Records on body weight (BW) and other linear body measurements (body length (BL), wither height (WH), chest girth (CH), pelvic width (PW) and ear length (EL)) were taken from 941 sheep. Non-linear, simple linear and multiple linear regression models were developed using the Statistical Package for Social Sciences (SPSS) version 12.0. For the multiple linear regressions, step-wise regression procedures were used. Prediction models were developed for different age and sex groups and for the pool. Positive and significant (P < 0.01) correlations were observed between body weight and linear body measurements for all sex and age groups. Among the four linear body measurements, heart girth had the highest correlation coefficient (except ear length) in all age and sex groups, followed by body length, height at wither and pelvic width. Heart girth was the first variable to explain more variation than other variables in both sex and age groups. The models developed had a coefficient of determination of 0.26 to 0.89; the highest coefficient of determination was obtained for males while the lowest was for the dentition group having two permanent incisors. Regression models in general were poor in explaining weight for the dentition groups above one pair of permanent incisors. Heart girth alone was able to estimate weight with a coefficient of determination of 0.77 for both sexes and the pool. The coefficient of determination of the fitted equations in general decreased as the age of sheep advanced, indicating that the fitted equations can predict weight for younger sheep with better accuracy than for older ones. In general, much of the variation in weight was explained when many traits were included in the model. However, for ease of use and to avoid complexity at field conditions, it is
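The kind of single-predictor model described above (weight regressed on heart girth, with a reported R^2) can be sketched as follows. The measurement values are synthetic and purely illustrative, not the Farta sheep data.

```python
import numpy as np

# Illustrative (synthetic) records: heart girth (cm) and body weight (kg)
girth  = np.array([58.0, 62.0, 65.0, 70.0, 74.0, 78.0])
weight = np.array([15.0, 18.0, 20.0, 24.0, 27.0, 31.0])

# Simple linear prediction model: weight = a + b * girth
b, a = np.polyfit(girth, weight, 1)

# Coefficient of determination R^2 of the fitted equation
pred = a + b * girth
ss_res = np.sum((weight - pred) ** 2)
ss_tot = np.sum((weight - weight.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"weight ≈ {a:.2f} + {b:.2f} * girth, R^2 = {r2:.3f}")
```

A field-usable equation of this one-variable form is exactly what the study recommends over multi-trait models when simplicity matters.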
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M; Derocher, Andrew E; Lewis, Mark A; Jonsen, Ian D; Mills Flemming, Joanna
2016-05-25
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
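A minimal sketch of a linear Gaussian SSM of the kind discussed above: a local-level model (random-walk state plus observation noise), filtered with a standard Kalman filter. The parameters here are assumed known; the difficulty the authors highlight is estimating the process and measurement variances from the observations alone, which becomes ill-posed precisely when measurement error dominates biological stochasticity.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
q, r = 0.1, 1.0   # process ("biological") and measurement variances; note r >> q

x = np.cumsum(rng.normal(0.0, np.sqrt(q), T))   # latent random-walk state
y = x + rng.normal(0.0, np.sqrt(r), T)          # noisy observations

# Scalar Kalman filter for the local-level model (q and r assumed known)
m, P = 0.0, 10.0          # diffuse-ish initial state estimate and variance
est = np.empty(T)
for t in range(T):
    P = P + q             # predict: state variance grows by process noise
    K = P / (P + r)       # Kalman gain
    m = m + K * (y[t] - m)  # update with the new observation
    P = (1 - K) * P
    est[t] = m

rmse_filter = np.sqrt(np.mean((est - x) ** 2))
rmse_raw = np.sqrt(np.mean((y - x) ** 2))
print(f"RMSE raw obs = {rmse_raw:.3f}, RMSE filtered = {rmse_filter:.3f}")
```

With the true q and r the filter recovers the state well; replacing them with values estimated from y alone is where the parameter-identifiability problems described in the abstract appear.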
Il Young Song
2015-01-01
This paper focuses on the estimation of a nonlinear function of the state vector (NFS) in discrete-time linear systems with time-delays and model uncertainties. The NFS represents a multivariate nonlinear function of the state variables, which can carry useful information about a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) represents a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter represents the standard Kalman filter with time-delays and special initial horizon conditions described by Lyapunov-like equations. In the general case, to calculate an optimal estimator of an NFS we propose using the unscented transformation. The important class of polynomial NFS is considered in detail; in the case of a polynomial NFS, the optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time-delays and uncertainties demonstrates their effectiveness.
Estimating WAIS-IV indexes: proration versus linear scaling in a clinical sample.
Umfleet, Laura Glass; Ryan, Joseph J; Gontkovsky, Sam T; Morris, Jeri
2012-04-01
We compared the accuracy of proration and linear scaling for estimating Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Verbal Comprehension Index (VCI) and Perceptual Reasoning Index (PRI) composites from all possible two-subtest combinations. The purpose was to provide practice-relevant psychometric results in a clinical sample. The present investigation was an archival study that used mostly within-group comparisons. We analyzed WAIS-IV data of a clinical sample comprising 104 patients with brain damage and 37 with no known neurological impairment. In both clinical samples, actual VCI and PRI scores were highly correlated with estimated index scores based on proration and linear scaling (all rs ≥ .95). In the brain-impaired sample, significant mean score differences between the actual and estimated composites were found in two comparisons, but these differences were less than three points; no other significant differences emerged. Overall, the findings demonstrate that proration and linear scaling are feasible procedures for estimating the actual indexes, with no advantage of one computational method over the other. © 2012 Wiley Periodicals, Inc.
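The two estimation methods can be sketched with toy numbers. Proration scales the short-form subtest sum up by the ratio of subtest counts; linear scaling maps the short-form sum onto the full-form metric through a z-score. The normative means and SDs below are invented for illustration, not WAIS-IV norms.

```python
# Hypothetical scaled scores on two of the three subtests of an index
scores = [12, 9]
short_sum = sum(scores)

# Proration: scale the 2-subtest sum up to a 3-subtest sum
prorated_sum = short_sum * 3 / 2

# Linear scaling: standardize the short-form sum, then re-express it on the
# full-form metric using (assumed, illustrative) normative means and SDs
short_mean, short_sd = 20.0, 5.0
full_mean, full_sd = 30.0, 7.5
z = (short_sum - short_mean) / short_sd
scaled_sum = full_mean + z * full_sd

print(f"prorated sum = {prorated_sum}, linearly scaled sum = {scaled_sum}")
```

With norms this proportional the two methods agree exactly; in real data they diverge slightly, which is the comparison the study quantifies.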
Linear inverse source estimate of combined EEG and MEG data related to voluntary movements.
Babiloni, F; Carducci, F; Cincotti, F; Del Gratta, C; Pizzella, V; Romani, G L; Rossini, P M; Tecchio, F; Babiloni, C
2001-12-01
A method for the modeling of human movement-related cortical activity from combined electroencephalography (EEG) and magnetoencephalography (MEG) data is proposed. This method includes a subject's multi-compartment head model (scalp, skull, dura mater, cortex) constructed from magnetic resonance images, a multi-dipole source model, and a regularized linear inverse source estimate based on boundary element mathematics. Linear inverse source estimates of cortical activity were regularized by taking into account the covariance of background EEG and MEG sensor noise. EEG (121 sensors) and MEG (43 sensors) data were recorded in separate sessions while normal subjects executed voluntary right one-digit movements. Linear inverse source solutions of EEG, MEG, and combined EEG-MEG data were quantitatively evaluated using three performance indexes. The first two indexes (dipole localization error [DLE] and spatial dispersion [SDis]) were used to compute the localization power of the source solutions obtained. These indexes were based on the information provided by the columns of the resolution matrix (i.e., the impulse response). Ideal DLE values tend to zero (the source current is correctly retrieved by the procedure); in contrast, high DLE values indicate severe mislocalization in the source reconstruction. A high value of SDis at a source space point means that such a source will be retrieved over a large area by the linear inverse source estimation. The remaining performance index assessed the quality of the source solution based on the information provided by the rows of the resolution matrix R, i.e., the resolution kernels. The i-th resolution kernel of the matrix R describes how the estimation of the i-th source is distorted by the concomitant activity of all other sources. A statistically significantly lower dipole localization error and lower spatial dispersion were observed in source solutions produced by combined EEG-MEG data than by EEG and MEG data considered separately (P < 0
Quantile Regression for Right-Censored and Length-Biased Data
Xue-rong CHEN; Yong ZHOU
2012-01-01
Length-biased data arise in many important fields, including epidemiological cohort studies, cancer screening trials and labor economics. Analysis of such data has attracted much attention in the literature. In this paper we propose a quantile regression approach for analyzing right-censored and length-biased data. We derive an inverse probability weighted estimating equation corresponding to the quantile regression to correct the bias due to length-biased sampling and informative censoring. This method can easily handle informative censoring induced by length-biased sampling. This is an appealing feature of our proposed method, since it is generally difficult to obtain unbiased estimates of risk factors in the presence of length bias and informative censoring. We establish the consistency and asymptotic distribution of the proposed estimator using empirical process techniques. A resampling method is adopted to estimate the variance of the estimator. We conduct simulation studies to evaluate its finite sample performance and use a real data set to illustrate the application of the proposed method.
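A sketch of the check (quantile) loss that underlies quantile-regression estimating equations like the one above: minimizing the summed check loss over a constant recovers the sample quantile. This omits the censoring and length-bias weights of the paper; it only illustrates the loss function itself.

```python
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0}), the asymmetric "check" function
    return u * (tau - (u < 0))

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 5001)
tau = 0.25

# Minimize the summed check loss over a constant q by grid search;
# the minimizer is the tau-th sample quantile of y
grid = np.linspace(y.min(), y.max(), 2001)
losses = [check_loss(y - q, tau).sum() for q in grid]
q_hat = grid[int(np.argmin(losses))]
print(f"check-loss minimizer = {q_hat:.3f}, sample quantile = {np.quantile(y, tau):.3f}")
```

Quantile regression replaces the constant q with a linear predictor x'beta, and the paper's contribution is reweighting this objective to undo length-biased sampling and informative censoring.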
Sensor Fault Estimation Filter Design for Discrete-time Linear Time-varying Systems
WANG Zhen-Hua; RODRIGUES Mickael; THEILLIOL Didier; SHEN Yi
2014-01-01
This paper proposes a sensor fault diagnosis method for a class of discrete-time linear time-varying (LTV) systems. The considered system is first formulated as a descriptor system representation by considering the sensor faults as auxiliary state variables. Based on the descriptor system model, a fault estimation filter which can simultaneously estimate the state and the sensor fault magnitudes is designed via a minimum-variance principle. Then, a fault diagnosis scheme is presented using a bank of the proposed fault estimation filters. The novelty of this paper lies in developing a sensor fault diagnosis method for discrete LTV systems without any assumption on the dynamics of the fault. Another advantage of the proposed method is its ability to detect, isolate and estimate sensor faults in the presence of process noise and measurement noise. Simulation results are given to illustrate the effectiveness of the proposed method.
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
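A minimal one-dimensional sketch of the two-stage idea, with a Nadaraya-Watson smoother of squared OLS residuals standing in for the paper's multivariate local polynomial fit (function names and the bandwidth are illustrative):

```python
import numpy as np

def two_stage_wls(x, y, bandwidth=0.5):
    """Stage 1: ordinary least squares, then a kernel smooth of the squared
    residuals to estimate the variance function (no heteroscedasticity test
    needed). Stage 2: generalized least squares with the inverse variances."""
    X = np.column_stack([np.ones_like(x), x])
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    resid2 = (y - X @ beta_ols) ** 2
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    var_hat = (K @ resid2) / K.sum(axis=1)      # smoothed squared residuals
    w = 1.0 / np.maximum(var_hat, 1e-8)         # GLS weights
    XtW = X.T * w                                # == X.T @ diag(w)
    beta_wls = np.linalg.solve(XtW @ X, XtW @ y)
    return beta_ols, beta_wls

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, 400)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.2 + 0.5 * x)   # noise sd grows with x
b_ols, b_wls = two_stage_wls(x, y)
```

Both estimators are consistent here; the weighted fit simply has smaller variance because it downweights the noisy high-x observations.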
Losses estimation in transonic wet steam flow through linear blade cascade
Dykas, Sławomir; Majkut, Mirosław; Strozik, Michał; Smołka, Krystian
2015-04-01
Experimental investigations of non-equilibrium spontaneous condensation in transonic steam flow were carried out in a linear blade cascade. The linear cascade consists of the stator blades of the last stage of a low pressure steam turbine. The applied experimental test section is part of a small scale steam power plant located at the Silesian University of Technology in Gliwice. The steam parameters at the test section inlet correspond to the real conditions in the low pressure part of a 200 MWe steam turbine. The losses in the cascade were estimated using the measured static pressure and temperature behind the cascade and the total parameters at the inlet. The static pressure measurements on the blade surface as well as Schlieren pictures were used to assess the flow field in the linear cascade of steam turbine stator blades.
Numerical estimation of 3D mechanical forces exerted by cells on non-linear materials.
Palacio, J; Jorge-Peñas, A; Muñoz-Barrutia, A; Ortiz-de-Solorzano, C; de Juan-Pardo, E; García-Aznar, J M
2013-01-04
The exchange of physical forces in both cell-cell and cell-matrix interactions plays a significant role in a variety of physiological and pathological processes, such as cell migration, cancer metastasis, inflammation and wound healing. Therefore, great interest exists in accurately quantifying the forces that cells exert on their substrate during migration. Traction Force Microscopy (TFM) is the most widely used method for measuring cell traction forces. Several mathematical techniques have been developed to estimate forces from TFM experiments. However, certain simplifications are commonly assumed, such as linear elasticity of the materials and/or free geometries, which in some cases may lead to inaccurate results. Here, cellular forces are numerically estimated by solving a minimization problem that combines multiple non-linear FEM solutions. Our simulations, free from constraints on the geometrical and mechanical conditions, show that forces are predicted with higher accuracy than when using the standard approaches.
Strain estimation in 3D by fitting linear and planar data to the March model
Mulchrone, Kieran F.; Talbot, Christopher J.
2016-08-01
The probability density function associated with the March model is derived and used in a maximum likelihood method to estimate the best fit distribution and 3D strain parameters for a given set of linear or planar data. Typically it is assumed that in the initial state (pre-strain) linear or planar data are uniformly distributed on the sphere which means the number of strain parameters estimated needs to be reduced so that the numerical technique succeeds. Essentially this requires that the data are rotated into a suitable reference frame prior to analysis. The method has been applied to a suitable example from the Dalradian of SW Scotland and results obtained are consistent with those from an independent method of strain analysis. Despite March theory having been incorporated deep into the fabric of geological strain analysis, its full potential as a simple direct 3D strain analytical tool has not been achieved. The method developed here may help remedy this situation.
SECANT-FUZZY LINEAR REGRESSION METHOD FOR HARMONIC COMPONENTS ESTIMATION IN A POWER SYSTEM
Garba Inoussa; LUO An
2003-01-01
In order to avoid unnecessary damage to electrical equipment and installations, high quality power should be delivered to the end user and frequency should be strictly controlled. Therefore, it is important to estimate the power system's harmonic components with high accuracy. This paper presents a new approach for estimating harmonic components in a power system using a secant-fuzzy linear regression method. In this approach the non-sinusoidal voltage or current waveform is written as a linear function. The coefficients of this function are assumed to be fuzzy numbers with a membership function that has a center and a spread value. The time dependent quantity is written as a Taylor series with two different time dependent quantities. The objective is to use the samples obtained from the transmission line to find the power system harmonic components and frequencies. We used an experimental voltage signal from a sub power station as a numerical test.
Measurement error in income and schooling, and the bias for linear estimators
Bingley, Paul; Martinello, Alessandro
The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...
Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects
2017-02-22
AFRL-AFOSR-UK-TR-2017-0023. Final report by Marco Martorella, University of Pisa, Department of Engineering, 22 February 2017. Grant FA9550-14-1-0183. Distribution A: approved for public release.
Ge-mai Chen; Jin-hong You
2005-01-01
Consider a repeated measurement partially linear regression model with an unknown parameter vector β. Based on the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.
Asymptotic Normality of LS Estimate in Simple Linear EV Regression Model
Jixue LIU
2006-01-01
Though the EV model is theoretically more appropriate for applications in which measurement errors exist, people are still more inclined to use ordinary regression models and the traditional LS method owing to the difficulties of statistical inference and computation. So it is meaningful to study the performance of the LS estimate in the EV model. In this article we obtain general conditions guaranteeing the asymptotic normality of the estimates of regression coefficients in the linear EV model. It is noticeable that the result is in some way different from the corresponding result in the ordinary regression model.
Slavica M. Perovich
2011-06-01
Full Text Available The subject of the theoretical analysis presented in this paper is an analytical approach to temperature estimation, as an inverse problem, for different thermistor-linear resistance structures, both series and parallel, by the Special Trans Functions Theory (STFT) of S.M. Perovich. The genesis of the mathematical formulae for both cases is given. Some numerical and graphical simulations have been realized in the MATHEMATICA program. The estimated temperature intervals for strongly determined values of the equivalent resistances of the nonlinear structures are given as well.
Quantile Acoustic Vectors vs. MFCC Applied to Speaker Verification
Mayorga-Ortiz Pedro
2014-02-01
Full Text Available In this paper we describe speaker and command recognition experiments using quantile vectors and Gaussian Mixture Modelling (GMM). Over the past several years GMM and MFCC have become two of the dominant approaches for modelling speaker and speech recognition applications. However, memory and computational costs are important drawbacks, because autonomous systems suffer processing and power consumption constraints; thus, a good trade-off between accuracy and computational requirements is mandatory. We decided to explore another approach (quantile vectors) in several tasks, and a comparison with MFCC was made. Quantile acoustic vectors are proposed for speaker verification and command recognition tasks, and the results showed very good recognition efficiency. This method offered a good trade-off between computation times, characteristic vector complexity and overall achieved efficiency.
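The quantile-vector idea can be sketched in a few lines: a fixed-length descriptor built from empirical quantiles of the signal samples (a toy front-end, not the authors' exact feature extraction):

```python
import numpy as np

def quantile_vector(signal, n_quantiles=16):
    """Fixed-length quantile descriptor of a 1-D signal: the empirical
    quantiles at n_quantiles evenly spaced probability levels. A sketch
    of the quantile-vector idea, not the authors' acoustic front-end."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(signal, qs)

qv = quantile_vector(np.arange(101.0), n_quantiles=5)  # -> [0, 25, 50, 75, 100]
```

Compared with an MFCC pipeline (framing, FFT, filterbank, DCT), this needs only a sort per feature vector, which is the computational appeal noted in the abstract.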
YIN Changming; ZHAO Lincheng; WEI Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors Z_i and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^n Z_i Z_i', a moment condition on the responses as weak as possible and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates for the regression parameter vector are asymptotically normal and strongly consistent.
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in case the specification of the covariance matrix is correct.
Dufrenois, F; Noyer, J C
2013-02-01
Linear discriminant analysis, such as Fisher's criterion, is a statistical learning tool traditionally devoted to separating a training dataset into two or more classes by way of linear decision boundaries. In this paper, we show that this tool can formalize the robust linear regression problem as a robust estimator does. More precisely, we develop a one-class Fisher's criterion whose maximization provides both the regression parameters and the separation of the data into two classes: typical data and atypical data, or outliers. This new criterion is built on the statistical properties of the subspace decomposition of the hat matrix. From this angle, we improve the discriminative properties of the hat matrix, which is traditionally used as an outlier diagnostic measure in linear regression. Naturally, we call this new approach the discriminative hat matrix. The proposed algorithm is fully unsupervised and needs only the initialization of one parameter. Synthetic and real datasets are used to study the performance of the proposed approach in terms of both regression and classification. We also illustrate its potential application to image recognition and fundamental matrix estimation in computer vision.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
(no author listed)
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989. [2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447. [3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502. [4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368. [5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232. [6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45. [7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974. [8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
Estimate of influenza cases using generalized linear, additive and mixed models.
Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M
2015-01-01
We investigated the relationship between reported cases of influenza in Catalonia (Spain) and a set of covariates: population, age, date of report of influenza, and health region, during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can estimate data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
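The linear special case of this equivalence is easy to demonstrate: a single-layer network with identity activation reduces to least-squares AR estimation. A hypothetical minimal sketch (not the paper's simulation setup):

```python
import numpy as np

def fit_ar_least_squares(y, p):
    """Least-squares AR(p) fit: regress y_t on its p lags. This is the
    identity-activation special case of the polynomial-activation network /
    ARMA equivalence described above."""
    Y = y[p:]
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

rng = np.random.default_rng(1)
n = 5000
y = np.zeros(n)
for t in range(2, n):   # simulate y_t = 0.6*y_{t-1} - 0.3*y_{t-2} + e_t
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
a = fit_ar_least_squares(y, p=2)   # close to (0.6, -0.3)
```

A polynomial activation would add products and powers of the lags as regressors, giving the nonlinear ARMA terms the paper discusses.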
Brassey, Charlotte A; Maidment, Susannah C R; Barrett, Paul M
2015-03-01
Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082-2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses.
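The volumetric step can be sketched as follows; the density and soft-tissue expansion factor below are illustrative assumptions, not the values calibrated in the study:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_mass(points, density=1000.0, expansion=1.21):
    """Mass = convex-hull volume x soft-tissue expansion factor x density.
    The density (kg/m^3) and expansion factor here are placeholder
    assumptions; a real analysis would calibrate both."""
    return ConvexHull(points).volume * expansion * density

# toy check: the eight corners of a unit cube enclose exactly 1 m^3
cube = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], float)
mass = convex_hull_mass(cube, density=1000.0, expansion=1.0)   # -> 1000.0 kg
```

In practice the point cloud would be a digitized skeleton, and the hull volume is a strict lower bound on body volume, which is why an expansion factor is needed.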
Estimates of linearization discs in $p$-adic dynamics with application to ergodicity
Lindahl, Karl-Olof
2009-01-01
We give lower bounds for the size of linearization discs for power series over $\mathbb{C}_p$. For quadratic maps, and certain power series containing a 'sufficiently large' quadratic term, we find the exact linearization disc. For finite extensions of $\mathbb{Q}_p$, we give a sufficient condition on the multiplier under which the corresponding linearization disc is maximal (i.e. its radius coincides with that of the maximal disc in $\mathbb{C}_p$ on which $f$ is one-to-one). In particular, in unramified extensions of $\mathbb{Q}_p$, the linearization disc is maximal if the multiplier map has a maximal cycle on the unit sphere. Estimates of linearization discs in the remaining types of non-Archimedean fields of dimension one were obtained in [Lindahl 2004, 2009]. Moreover, it is shown that, for any complete non-Archimedean field, transitivity is preserved under analytic conjugation. Using results by Oxtoby [1952], we prove that transitivity, and hence minimality, is equ...
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both discrete modes and continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on the two-tank hybrid system and a switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based scheme.
Estimating differential quantities from point cloud based on a linear fitting of normal vectors
CHENG ZhangLin; ZHANG XiaoPeng
2009-01-01
Estimation of differential geometric properties on a discrete surface is a fundamental task in computer graphics and computer vision. In this paper, we present an accurate and robust method for estimating differential quantities from an unorganized point cloud. The principal curvatures and principal directions at each point are computed with the help of partial derivatives of the unit normal vector at that point, where the normal derivatives are estimated by fitting a linear function to each component of the normal vectors in a neighborhood. This method takes into account the normal information of all neighboring points and computes curvatures directly from the variation of unit normal vectors, which improves the accuracy and robustness of curvature estimation on irregularly sampled noisy data. The main advantage of our approach is that the estimation of curvatures at a point does not rely on the accuracy of the normal vector at that point, and the normal vectors can be refined in the process of curvature estimation. Compared with state-of-the-art methods for estimating curvatures and Darboux frames on both synthetic and real point clouds, the approach is shown to be more accurate and robust for noisy and unorganized point cloud data.
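A simplified sketch of the normal-fitting idea: fit an affine function to each component of the neighboring unit normals, then read principal curvatures from the fitted Jacobian restricted to the tangent plane (toy neighborhood handling, not the authors' full pipeline):

```python
import numpy as np

def curvatures_from_normals(nbr_pts, nbr_normals, normal):
    """Fit an affine function to each component of the neighboring unit
    normals; the fitted Jacobian approximates dn/dp, and its restriction
    to the tangent plane gives the shape operator."""
    A = np.column_stack([nbr_pts, np.ones(len(nbr_pts))])
    coef, *_ = np.linalg.lstsq(A, nbr_normals, rcond=None)
    J = coef[:3].T                                # approx. dn/dp (3x3)
    t1 = np.linalg.svd(normal[None, :])[2][1]     # tangent basis vector 1
    t2 = np.cross(normal, t1)                     # tangent basis vector 2
    T = np.column_stack([t1, t2])
    S = T.T @ J @ T                               # shape operator (2x2)
    return np.sort(np.linalg.eigvals(0.5 * (S + S.T)).real)

# toy check: points on a sphere of radius 2 -> both principal curvatures 1/2
rng = np.random.default_rng(2)
v = rng.normal(size=(200, 3))
pts = 2.0 * v / np.linalg.norm(v, axis=1, keepdims=True)
k1, k2 = curvatures_from_normals(pts, pts / 2.0, pts[0] / 2.0)
```

On the sphere the normal field is exactly linear in position (n = p/R), so the fit recovers the curvature 1/R without error; on noisy scans the least-squares fit is what supplies the robustness the abstract describes.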
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Linear Track Estimation Using Double Pulse Sources for Near-Field Underwater Moving Target
Zhifei Chen; Hong Hou; Jianhua Yang; Jincai Sun; Qian Wang
2013-01-01
The double pulse sources (DPS) method is presented for linear track estimation in this work. In the field of noise identification of an underwater moving target, the Doppler effect distorts the frequency and amplitude of the radiated noise. To eliminate this, track estimation is necessary. In the DPS method, we first estimate the bearings of two sinusoidal pulse sources installed in the moving target through a baseline positioning method. Meanwhile, the emitted and recorded time of each pulse are also acquired. Then the linear track parameters are obtained from the geometry pattern with the help of the double-source spacing. The simulated results confirm that DPS improves the performance of the previous double source spacing method. Simulated experiments were carried out using a moving battery car to further evaluate its performance. When the target is 40-60 m away, the experimental results show that the biases of track azimuth and abeam distance of DPS are under 0.6° and 3.4 m, respectively, and the average deviation of the estimated velocity is around 0.25 m/s.
Blast load estimation using Finite Volume Method and linear heat transfer
Lidner Michał
2016-01-01
Full Text Available From the point of view of the safety of people and buildings, one of the main destructive factors is blast load. Rational estimation of its effects should be preceded by knowledge of the complex wave field distribution in time and space. As a result one can estimate the blast load distribution in time. In the considered conditions, the values of blast load are estimated using empirical functions of overpressure distribution in time (Δp(t)). The Δp(t) functions are monotonic and are an approximation of reality. The distributions of these functions are often linearized to simplify estimating the blast response of elements. The article presents a method of numerical analysis of the phenomenon of air shock wave propagation. The main scope of this paper is the ability to make the Δp(t) functions more realistic. An explicit own solution using the Finite Volume Method was used. This method considers changes in energy due to heat transfer, with conservation of linear heat transfer. For validation, the results of numerical analysis were compared with literature reports. Values of impulse, pressure, and its duration were studied.
Van der Zee, K.G.; Van Brummelen, E.H.; De Borst, R.
2010-01-01
We develop duality-based a posteriori error estimates for functional outputs of solutions of free-boundary problems via shape-linearization principles. To derive an appropriate dual (linearized adjoint) problem, we linearize the domain dependence of the very weak form and of the goal functional of interest.
Idiart, Martín I.; Lahellec, Noel
2016-12-01
New estimates are derived for the overall properties of linear solids with pointwise heterogeneous local properties. The derivation relies on the use of 'comparison solids' which, unlike comparison solids considered previously, are themselves pointwise heterogeneous. The estimates are then exploited within an incremental homogenization scheme to determine the overall response of multiphase elasto-viscoplastic solids under arbitrary loading histories. By way of example, the scheme is applied to incompressible Maxwellian solids with power-law plastic dissipation; particularly simple estimates of the Hashin-Shtrikman type are obtained. Predictions are confronted with full-field simulations for particulate composites under cyclic and rotating loading conditions. Good agreement is found for all cases considered. In particular, elasto-plastic transitions, tension-compression asymmetries (Bauschinger effect) and stress-path distortions induced by material heterogeneity are all well-captured, thus improving significantly on commonly used elastic-plastic decoupled schemes.
Projection-Based Linear Constrained Estimation and Fusion over Long-Haul Links
Rao, Nageswara S [ORNL]
2016-01-01
In this work, we study estimation and fusion with linear dynamics in long-haul sensor networks, wherein a number of sensors are remotely deployed over a large geographical area for performing tasks such as target tracking, and a remote fusion center serves to combine the information provided by these sensors in order to improve the overall tracking accuracy. In reality, the motion of a dynamic target might be subject to certain constraints, for instance, those defined by a road network. We explore the accuracy performance of projection-based constrained estimation and fusion methods that is affected by information loss over the long-haul links. We use a tracking example to compare the tracking errors under various implementations of centralized and distributed projection-based estimation and fusion methods.
MUSIC 2D-DOA Estimation using Split Vertical Linear and Circular Arrays
Yasser Albagory
2013-06-01
Full Text Available In this paper, MUSIC 2D-DOA estimation is performed by splitting the angle into elevation and azimuth components. This technique is based on an array composed of a vertical uniform linear array located perpendicularly at the center of a uniform circular array. This array configuration is proposed to reduce the computational burden faced in MUSIC 2D-DOA estimation, where the vertical array is used to determine the elevation DOAs (θ), which are subsequently used to determine the azimuth DOAs (∅) by the circular array, instead of searching the whole space of the two angles as in the case of using a circular array only. The new split beamformer is investigated and the performance of MUSIC 2D-DOA under several signal conditions in the presence of noise is studied.
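The elevation stage of such a scheme is ordinary 1D MUSIC on the vertical uniform linear array. A minimal sketch with an ideal single-source covariance (array size and spacing are illustrative):

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """1D MUSIC pseudo-spectrum for an m-element uniform linear array with
    element spacing d (in wavelengths): project candidate steering vectors
    onto the noise subspace and invert the residual power."""
    m = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)                # eigenvalues ascending
    En = eigvecs[:, : m - n_sources]              # noise subspace
    k = np.arange(m)
    spec = []
    for ang in np.deg2rad(angles_deg):
        s = np.exp(-2j * np.pi * d * k * np.sin(ang))   # steering vector
        spec.append(1.0 / np.abs(s.conj() @ En @ En.conj().T @ s))
    return np.array(spec)

# toy check: one source at 20 deg, 8-element half-wavelength ULA
m, theta = 8, np.deg2rad(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(theta))
R = np.outer(a, a.conj()) + 0.01 * np.eye(m)      # ideal covariance + noise
grid = np.arange(-90, 91)
peak = grid[np.argmax(music_spectrum(R, 1, grid))]
```

The split scheme would run this 1D search once for elevation on the vertical array, then reuse the found elevations in a second 1D azimuth search on the circular array, instead of one joint 2D search.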
Non-linear shrinkage estimation of large-scale structure covariance
Joachimi, Benjamin
2017-03-01
In many astrophysical settings, covariance matrices of large data sets have to be determined empirically from a finite number of mock realizations. The resulting noise degrades inference and precludes it completely if there are fewer realizations than data points. This work applies a recently proposed non-linear shrinkage estimator of covariance to a realistic example from large-scale structure cosmology. After optimizing its performance for the usage in likelihood expressions, the shrinkage estimator yields subdominant bias and variance comparable to that of the standard estimator with a factor of ∼50 less realizations. This is achieved without any prior information on the properties of the data or the structure of the covariance matrix, at a negligible computational cost.
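For contrast with the non-linear estimator discussed above, the simplest member of the shrinkage family is linear shrinkage toward a diagonal target. A toy sketch (the fixed `alpha` here replaces the data-driven intensity a real estimator would choose):

```python
import numpy as np

def shrink_covariance(X, alpha):
    """Linear shrinkage of the sample covariance toward its diagonal:
    (1 - alpha) * S + alpha * diag(S), with alpha in [0, 1]. A non-linear
    shrinkage estimator instead adjusts each eigenvalue individually."""
    S = np.cov(X, rowvar=False)
    return (1.0 - alpha) * S + alpha * np.diag(np.diag(S))

rng = np.random.default_rng(3)
true_cov = np.array([[1.0, 0.3], [0.3, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], true_cov, size=30)
C = shrink_covariance(X, alpha=0.2)   # off-diagonals pulled toward zero
```

Shrinkage trades a small bias for a large variance reduction, which is exactly what makes such estimators usable with few mock realizations.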
Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications
Pande, S.
2013-01-01
Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of quantile model selection problem is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection
Multi-stage kernel-based conditional quantile prediction in time series
de Gooijer, J.G.; Gannoun, A.; Zerom Godefay, D.
2001-01-01
We present a multi-stage conditional quantile predictor for time series of Markovian structure. It is proved that at any quantile level p ∈ (0, 1), the asymptotic mean squared error (MSE) of the new predictor is smaller than that of the single-stage conditional quantile predictor. A simulation study
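A single-stage kernel conditional quantile predictor of the kind the multi-stage method improves upon can be sketched as follows: Nadaraya-Watson weights define an estimated conditional CDF, which is then inverted at level p. The Gaussian kernel, the bandwidth handling, and the function name are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def kernel_conditional_quantile(x_hist, y_next, x0, p, h):
    """Single-stage kernel estimate of the conditional p-quantile of y given x = x0.

    x_hist, y_next: paired observations (e.g., consecutive values of a Markov
    series); h is the kernel bandwidth. A sketch of the single-stage predictor
    only; the paper's multi-stage scheme iterates on this idea.
    """
    w = np.exp(-0.5 * ((x_hist - x0) / h) ** 2)  # Nadaraya-Watson weights
    w = w / w.sum()
    order = np.argsort(y_next)
    cdf = np.cumsum(w[order])                    # estimated conditional CDF
    return y_next[order][np.searchsorted(cdf, p)]
```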
A supervised machine learning estimator for the non-linear matter power spectrum - SEMPS
Mohammed, Irshad
2015-01-01
In this article, we argue that models based on machine learning (ML) can be very effective in estimating the non-linear matter power spectrum ($P(k)$). We employ the prediction ability of the supervised ML algorithms to build an estimator for the $P(k)$. The estimator is trained on a set of cosmological models, and redshifts for which the $P(k)$ is known, and it learns to predict $P(k)$ for any other set. We review three ML algorithms -- Random Forest, Gradient Boosting Machines, and K-Nearest Neighbours -- and investigate their prime parameters to optimize the prediction accuracy of the estimator. We also compute an optimal size of the training set, which is realistic enough, and still yields high accuracy. We find that, employing the optimal values of the internal parameters, a set of $50-100$ cosmological models is enough to train the estimator that can predict the $P(k)$ for a wide range of cosmological models, and redshifts. Using this configuration, we build a blackbox -- Supervised Estimator for Matter...
The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.
Nevill, Alan M; Cooke, Carlton B
2017-05-01
This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg·min) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg·min) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg·min) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values nor age. This will probably explain the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg·min) using plausible, biologically sound, and interpretable models.
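The allometric (multiplicative) alternative to an additive linear model can be fitted by ordinary least squares after a log transform. The sketch below uses a generic specification with body mass and a quadratic age term; the exact covariates and the function name are assumptions for illustration, not the authors' fitted models.

```python
import numpy as np

def fit_allometric(mass, age, vo2max):
    """Fit a simple allometric model VO2max = a * mass^b * exp(c*age + d*age^2)
    by ordinary least squares on the log scale.

    A generic illustration of the multiplicative form the study advocates,
    not the authors' exact model specification.
    """
    y = np.log(vo2max)
    X = np.column_stack([np.ones_like(mass), np.log(mass), age, age ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [log a, b, c, d]
```

The quadratic age term on the log scale is what lets the fitted curve follow the bell-shaped age decline the authors describe, something an additive linear model cannot capture without the residual problems they report.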
FUNDAMENTAL MATRIX OF LINEAR CONTINUOUS SYSTEM IN THE PROBLEM OF ESTIMATING ITS TRANSPORT DELAY
N. A. Dudarenko
2014-09-01
The paper deals with the problem of quantitative estimation of the transport delay of linear continuous systems. The main result is obtained by means of the fundamental matrix of linear differential equation solutions specified in the normal Cauchy form, for the cases of SISO and MIMO systems. The fundamental matrix has a dual property: the weight function of the system can be formed as a free motion of the system, generated by a vector of initial conditions that coincides with the input matrix of the system being researched. Thus, using this property of the fundamental matrix makes it possible to estimate the transport delay of a linear continuous system without a derivation procedure in the hardware environment and without forming an exogenous Dirac delta function. The paper is illustrated by examples. The obtained results make it possible to model pure delay links using a consecutive chain of first-order aperiodic links with equal time constants. Modeling results have proved the correctness of the obtained computations. Knowledge of the transport delay can be used when configuring multi-component technological complexes and in the diagnosis of their possible functional degeneration.
Estimating VDT Mental Fatigue Using Multichannel Linear Descriptors and KPCA-HMM
Yi Ouyang
2008-04-01
The impacts of prolonged visual display terminal (VDT) work on the central nervous system and autonomic nervous system are observed and analyzed based on electroencephalogram (EEG) and heart rate variability (HRV). Power spectral indices of HRV, the P300 components based on a visual oddball task, and multichannel linear descriptors of EEG are combined to estimate the change of mental fatigue. The results show that long-term VDT work induces mental fatigue. The power spectral indices of HRV, the P300 components, and multichannel linear descriptors of EEG are correlated with mental fatigue level. Cognitive information processing declines after long-term VDT work. Moreover, the multichannel linear descriptors of EEG can effectively reflect the changes of the θ, α, and β waves and may be used as indices of the mental fatigue level. The kernel principal component analysis (KPCA) and hidden Markov model (HMM) are combined to differentiate two mental fatigue states. The investigation suggests that the joint KPCA-HMM method can effectively reduce the dimensions of the feature vectors, accelerate the classification speed, and improve the accuracy of mental fatigue estimation to a maximum of 88%. Hence KPCA-HMM could be a promising model for the estimation of mental fatigue.
Estimating VDT Mental Fatigue Using Multichannel Linear Descriptors and KPCA-HMM
Zhang, Chong; Zheng, Chongxun; Yu, Xiaolin; Ouyang, Yi
2008-12-01
The impacts of prolonged visual display terminal (VDT) work on the central nervous system and autonomic nervous system are observed and analyzed based on electroencephalogram (EEG) and heart rate variability (HRV). Power spectral indices of HRV, the P300 components based on a visual oddball task, and multichannel linear descriptors of EEG are combined to estimate the change of mental fatigue. The results show that long-term VDT work induces mental fatigue. The power spectral indices of HRV, the P300 components, and multichannel linear descriptors of EEG are correlated with mental fatigue level. Cognitive information processing declines after long-term VDT work. Moreover, the multichannel linear descriptors of EEG can effectively reflect the changes of θ, α, and β waves and may be used as indices of the mental fatigue level. The kernel principal component analysis (KPCA) and hidden Markov model (HMM) are combined to differentiate two mental fatigue states. The investigation suggests that the joint KPCA-HMM method can effectively reduce the dimensions of the feature vectors, accelerate the classification speed, and improve the accuracy of mental fatigue estimation to a maximum of 88%. Hence KPCA-HMM could be a promising model for the estimation of mental fatigue.
Binder, Martin; Coad, Alex
2010-01-01
Standard regression techniques are only able to give an incomplete picture of the relationship between subjective well-being and its determinants since the very idea of conventional estimators such as OLS is the averaging out over the whole distribution: studies based on such regression techniques thus are implicitly only interested in Average Joe's happiness. Using cross-sectional data from the British Household Panel Survey (BHPS) for the year 2006, we apply quantile regressions to analyze ...
Yang, D.; Shiau, J.
2013-12-01
Surface water quality is an essential issue for human water supply and for sustaining healthy river ecosystems. However, river water quality is easily influenced by anthropogenic activities such as urban development and wastewater disposal. Long-term monitoring of water quality can assess whether the water quality of rivers deteriorates or not. Taiwan is a densely populated area that depends heavily on surface water for domestic, industrial, and agricultural uses. The Dong-gang River is one of the major resources in southern Taiwan for agricultural requirements. Water-quality data from four monitoring stations on the Dong-gang River for the period 2000-2012 are selected for trend analysis. The parameters used to characterize river water quality include biochemical oxygen demand (BOD), dissolved oxygen (DO), suspended solids (SS), and ammonia nitrogen (NH3-N). These four water-quality parameters are integrated into an index called the river pollution index (RPI) to indicate the pollution level of rivers. Although the widely used non-parametric Mann-Kendall test and linear regression are computationally efficient for identifying trends in water-quality indices, such approaches are sensitive to outliers and estimate only the conditional mean. Quantile regression, capable of identifying changes over time in any percentile value, is employed in this study to detect long-term trends in water-quality indices for the Dong-gang River in southern Taiwan. The results for the four stations over 2000-2012 show that NH3-N and BOD5 exhibit downward trends at the Long-dong bridge station, DO and SS are on the rise, and the river pollution index (RPI) trends downward; the Chau-Jhou station shows similar trends.
Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data
Lei Wang
2008-06-01
MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting on the regional scale.
Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data
2008-01-01
MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting on the regional scale.
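The core computation in a linear spectral mixture model is solving, per pixel, for class fractions that reproduce the observed spectrum from known pure-class (endmember) spectra. A minimal sketch, using a soft sum-to-one constraint via an augmented least-squares row (real unmixing would typically also enforce non-negativity of the fractions):

```python
import numpy as np

def unmix_pixel(endmembers, pixel, weight=1e3):
    """Estimate land-cover fractions of one mixed pixel by least squares with a
    soft sum-to-one constraint (augmented-row trick).

    endmembers: (n_bands, n_classes) pure-class spectra; pixel: (n_bands,).
    A sketch of the linear spectral mixture idea only.
    """
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)                 # fractions should sum to 1
    frac, *_ = np.linalg.lstsq(A, b, rcond=None)
    return frac
```

Applying this to each MODIS pixel yields the sub-pixel class proportions that are then aggregated into a planted-area estimate.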
Development of a web-based simulator for estimating motion errors in linear motion stages
Khim, G.; Oh, J.-S.; Park, C.-H.
2017-08-01
This paper presents a web-based simulator for estimating 5-DOF motion errors in linear motion stages. The main calculation modules of the simulator are stored on the server computer. Clients use the client software to send input parameters to the server and receive the computed results. By using the simulator, we can predict performance measures such as 5-DOF motion errors and bearing and table stiffness by entering the design parameters at the design step, before fabricating the stages. Motion errors are calculated using the transfer function method from the rail form errors, which are the most dominant factor in the motion errors. To verify the simulator, the predicted motion errors are compared to the actually measured motion errors in a linear motion stage.
Markov Jump Linear Systems-Based Position Estimation for Lower Limb Exoskeletons
Samuel L. Nogueira
2014-01-01
In this paper, we deal with Markov Jump Linear Systems-based filtering applied to robotic rehabilitation. The angular positions of an impedance-controlled exoskeleton, designed to help stroke and spinal cord injured patients during walking rehabilitation, are estimated. Standard position estimate approaches adopt Kalman filters (KF) to improve the performance of inertial measurement units (IMUs) based on individual link configurations. Consequently, for a multi-body system, like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link position estimation (e.g., the foot). In this paper, we propose a collective modeling of all inertial sensors attached to the exoskeleton, combining them in a Markovian estimation model in order to get the best information from each sensor. In order to demonstrate the effectiveness of our approach, simulation results regarding a set of human footsteps, with four IMUs and three encoders attached to the lower limb exoskeleton, are presented. A comparative study between the Markovian estimation system and the standard one is performed considering a wide range of parametric uncertainties.
Markov jump linear systems-based position estimation for lower limb exoskeletons.
Nogueira, Samuel L; Siqueira, Adriano A G; Inoue, Roberto S; Terra, Marco H
2014-01-22
In this paper, we deal with Markov Jump Linear Systems-based filtering applied to robotic rehabilitation. The angular positions of an impedance-controlled exoskeleton, designed to help stroke and spinal cord injured patients during walking rehabilitation, are estimated. Standard position estimate approaches adopt Kalman filters (KF) to improve the performance of inertial measurement units (IMUs) based on individual link configurations. Consequently, for a multi-body system, like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link position estimation (e.g., the foot). In this paper, we propose a collective modeling of all inertial sensors attached to the exoskeleton, combining them in a Markovian estimation model in order to get the best information from each sensor. In order to demonstrate the effectiveness of our approach, simulation results regarding a set of human footsteps, with four IMUs and three encoders attached to the lower limb exoskeleton, are presented. A comparative study between the Markovian estimation system and the standard one is performed considering a wide range of parametric uncertainties.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG
2008-01-01
In generalized linear models with fixed design, under the assumption λn → ∞ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator β̂n, which is the root of the quasi-likelihood equation with natural link function ∑_{i=1}^n Xi(yi − μ(Xi^T β)) = 0, is obtained, where λn denotes the minimum eigenvalue of ∑_{i=1}^n Xi Xi^T, the Xi are bounded p × q regressors, and the yi are q × 1 responses.
Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes
Hong Chang HU; Lei SONG
2014-01-01
The paper studies a generalized linear model (GLM) yt = h(xt^T β) + εt, t = 1, 2, . . . , n, where ε1 = η1, εt = ρε_{t−1} + ηt, t = 2, 3, . . . , n, h is a continuously differentiable function, and the ηt are independent and identically distributed random errors with zero mean and finite variance σ². Firstly, the quasi-maximum likelihood (QML) estimators of β, ρ and σ² are given. Secondly, under mild conditions, the asymptotic properties (including the existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Lastly, the validity of the method is illustrated by a simulation example.
Iwaoka, Nobuyuki; Hagita, Katsumi; Takano, Hiroshi
2014-03-01
On the basis of relaxation mode analysis (RMA), we present an efficient method to estimate the linear viscoelasticity of polymer melts in a molecular dynamics (MD) simulation. Slow relaxation phenomena appearing in polymer melts mean that calculating the stress relaxation function in MD simulations, especially in the terminal time region, requires large computational effort. Relaxation mode analysis is a method that systematically extracts slow relaxation modes and rates of the polymer chain from the time correlation of its conformations. We show that the computational cost may be drastically reduced by combining a direct calculation of the stress relaxation function based on the Green-Kubo formula with the relaxation rate spectra estimated by RMA. N. I. acknowledges the Graduate School Doctoral Student Aid Program from Keio University.
Yu, Jung-Lang; Chen, Chia-Hao
Orthogonal frequency-division multiplexing (OFDM) systems often use a cyclic prefix (CP) to simplify the equalization design at the cost of bandwidth efficiency. To increase the bandwidth efficiency, we study blind equalization with linear smoothing [1] for single-input multiple-output (SIMO) OFDM systems without CP insertion in this paper. Due to the block Toeplitz structure of the channel matrix, the block matrix scheme is applied to the linear smoothing channel estimation, which equivalently increases the number of sample vectors and thus reduces the perturbation of the sample autocorrelation matrix. Compared with the linear smoothing and subspace methods, the proposed block linear smoothing requires the lowest computational complexity. Computer simulations show that block linear smoothing yields a channel estimation error smaller than that from linear smoothing, and close to that of the subspace method. Evaluated with the minimum mean-square error (MMSE) equalizer, the block linear smoothing and subspace methods have nearly the same bit-error rates (BERs).
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
Yi, Feng; Sun, Chao; Bai, Xiao-Hui
2012-11-01
A new signal-subspace high-resolution bearing estimation method based on the orthogonal projections technique is proposed in this paper. Firstly, the received data are processed step by step to form a set of basis vectors for the signal subspace, using an orthogonal projections algorithm that does not construct or eigen-decompose the covariance matrix. This procedure retains linear computational complexity and guarantees maximum signal energy in the spanned signal subspace. The algorithm then exploits the singular value decomposition of the matrix comprised of the signal subspace and the modal subspace, which is also obtained from the received data, and the source bearings are estimated by detecting the intersection between the estimated signal subspace and the modal subspace. The computational complexity of the proposed method is compared to that of the subspace intersection method, and its performance is compared to that of conventional bearing estimation methods, including conventional beamforming (CBF) and minimum variance distortionless response (MVDR) beamforming. The performance of the proposed method under different conditions such as sensor number, inter-sensor spacing, received signal-to-noise ratio (SNR), and snapshot number is also investigated. Numerical simulation results in typical shallow water demonstrate the effectiveness of the proposed method.
Zhang Han
2009-01-01
We address the problem of superimposed-training- (ST-) based linearly time-varying (LTV) channel estimation and symbol detection for orthogonal frequency-division multiple access (OFDMA) systems at the uplink receiver. The LTV channel coefficients are modeled by truncated discrete Fourier bases (DFBs). By judiciously designing the superimposed pilot symbols, we estimate the LTV channel transfer functions over the whole frequency band by using a weighted average procedure, thereby providing validity for adaptive resource allocation. We also present a performance analysis of the channel estimation approach to derive a closed-form expression for the channel estimation variances. In addition, an iterative symbol detector is presented to mitigate the superimposed training effects on information sequence recovery. By the iterative mitigation procedure, the demodulator achieves a considerable gain in signal-interference ratio and exhibits a nearly indistinguishable symbol error rate (SER) performance from that of frequency-division multiplexed trainings. Compared to existing frequency-division multiplexed training schemes, the proposed algorithm does not entail any additional bandwidth while retaining the advantage of adaptive resource allocation.
Post-L1-Penalized Estimators in High-Dimensional Linear Regression Models
Belloni, Alexandre
2010-01-01
In this paper we study the post-penalized estimator which applies ordinary, unpenalized linear regression to the model selected by first-step penalized estimators, typically the LASSO. We show that post-LASSO can perform as well or nearly as well as the LASSO in terms of the rate of convergence. We show that this performance occurs even if the LASSO-based model selection "fails", in the sense of missing some components of the "true" regression model. Furthermore, post-LASSO can perform strictly better than LASSO, in the sense of a strictly faster rate of convergence, if the LASSO-based model selection correctly includes all components of the "true" model as a subset and enough sparsity is obtained. Of course, in the extreme case, when LASSO perfectly selects the true model, the post-LASSO estimator becomes the oracle estimator. We show that the results hold in both parametric and non-parametric models; and by the "true" model we mean the best $s$-dimensional approximation to the true regression model, whe...
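The two-step idea is straightforward to sketch: run the LASSO, take its support, and refit by OLS on that support only. Below is a minimal NumPy version using a basic ISTA solver for the LASSO step; the solver choice, the fixed penalty `lam`, and the iteration count are illustrative assumptions, not the paper's recommended penalty-level theory.

```python
import numpy as np

def post_lasso(X, y, lam, n_iter=5000):
    """Post-L1 estimation sketch: LASSO via ISTA, then OLS refit on the
    selected support. Illustrative only."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n            # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):                      # ISTA: gradient step + soft threshold
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    support = np.flatnonzero(beta)               # model selected by the LASSO
    if support.size == 0:
        return beta
    refit = np.zeros(p)
    refit[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return refit
```

The refit step removes the shrinkage bias that the L1 penalty imposes on the selected coefficients, which is the mechanism behind the improved rate discussed in the abstract.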
The LDA beamformer: Optimal estimation of ERP source time series using linear discriminant analysis.
Treder, Matthias S; Porbadnigk, Anne K; Shahbazi Avarvand, Forooz; Müller, Klaus-Robert; Blankertz, Benjamin
2016-04-01
We introduce a novel beamforming approach for estimating event-related potential (ERP) source time series based on regularized linear discriminant analysis (LDA). The optimization problems in LDA and linearly-constrained minimum-variance (LCMV) beamformers are formally equivalent. The approaches differ in that, in LCMV beamformers, the spatial patterns are derived from a source model, whereas in an LDA beamformer the spatial patterns are derived directly from the data (i.e., the ERP peak). Using a formal proof and MEG simulations, we show that the LDA beamformer is robust to correlated sources and offers a higher signal-to-noise ratio than the LCMV beamformer and PCA. As an application, we use EEG data from an oddball experiment to show how the LDA beamformer can be harnessed to detect single-trial ERP latencies and estimate connectivity between ERP sources. Concluding, the LDA beamformer optimally reconstructs ERP sources by maximizing the ERP signal-to-noise ratio. Hence, it is a highly suited tool for analyzing ERP source time series, particularly in EEG/MEG studies wherein a source model is not available.
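The construction can be sketched as computing a spatial filter w ∝ C⁻¹p from a channel covariance C and an ERP spatial pattern p, normalized to unit gain on the pattern. The version below omits the regularization and the LDA-based pattern extraction described in the paper (it takes the pattern as a plain mean topography in a peak window), so treat it as a simplified illustration.

```python
import numpy as np

def lda_spatial_filter(epochs, pattern_window):
    """Beamformer-style spatial filter w = C^{-1} p, normalized so w @ p = 1.

    epochs: (n_trials, n_channels, n_times); pattern_window: time slice of the
    ERP peak. Simplified sketch: no shrinkage regularization of C and no
    class-discriminative (LDA) pattern estimation.
    """
    p = epochs[:, :, pattern_window].mean(axis=(0, 2))   # ERP spatial pattern
    X = epochs.transpose(1, 0, 2).reshape(epochs.shape[1], -1)
    C = np.cov(X)                                        # channel covariance
    w = np.linalg.solve(C, p)
    return w / (w @ p)                                   # unit gain on the pattern
```

Applying `w` to each epoch yields a single source time series whose signal-to-noise ratio is maximized for activity matching the pattern.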
Ana Calabrese
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
Prediction of an outcome using trajectories estimated from a linear mixed model.
Maruyama, Nami; Takahashi, Fumiaki; Takeuchi, Masahiro
2009-09-01
In longitudinal data, interest is usually focused on the repeatedly measured variable itself. In some situations, however, the pattern of variation of the variable over time may contain information about a separate outcome variable. In such situations, longitudinal data provide an opportunity to develop predictive models for future observations of the separate outcome variable given the current data for an individual. In particular, longitudinally changing patterns of repeated measurements of a variable measured up to time t, or trajectories, can be used to predict an outcome measure or event that occurs after time t. In this article, we propose a method for predicting an outcome variable based on a generalized linear model, specifically, a logistic regression model, the covariates of which are variables that characterize the trajectory of an individual. Since the trajectory of an individual contains estimation error, the proposed logistic regression model constitutes a measurement error model. The model is fitted in two steps. First, a linear mixed model is fitted to the longitudinal data to estimate the random effect that characterizes the trajectory for each individual while adjusting for other covariates. In the second step, a conditional likelihood approach is applied to account for the estimation error in the trajectory. Prediction of an outcome variable is based on the logistic regression model in the second step. The receiver operating characteristic curve is used to compare the discrimination ability of a model with trajectories to one without trajectories as covariates. A simulation study is used to assess the performance of the proposed method, and the method is applied to clinical trial data.
Eliseu Verly-Jr
A reduction in homocysteine concentration due to the use of supplemental folic acid is well recognized, although evidence of the same effect for natural folate sources, such as fruits and vegetables (FV), is lacking. Traditional statistical analysis approaches do not provide further information. As an alternative, quantile regression allows for the exploration of the effects of covariates through percentiles of the conditional distribution of the dependent variable. Our objective was to investigate how the associations of FV intake with plasma total homocysteine (tHcy) differ through percentiles in the distribution using quantile regression. A cross-sectional population-based survey was conducted among 499 residents of Sao Paulo City, Brazil. The participants provided food intake and fasting blood samples. Fruit and vegetable intake was predicted by adjusting for day-to-day variation using a proper measurement error model. We performed a quantile regression to verify the association between tHcy and the predicted FV intake. The predicted values of tHcy for each percentile model were calculated considering an increase of 200 g in the FV intake for each percentile. The results showed that tHcy was inversely associated with FV intake when assessed by linear regression, whereas the association differed across percentiles when using quantile regression. The relationship with FV consumption was inverse and significant for almost all percentiles of tHcy, and the coefficients increased as the percentile of tHcy increased. A simulated increase of 200 g in the FV intake could decrease the tHcy levels across the percentiles, but the higher percentiles of tHcy benefited more. This study confirms, using an innovative statistical approach, that the effect of FV intake on lowering tHcy levels depends on the level of tHcy. From a public health point of view, encouraging people to increase FV intake would benefit people with high levels of tHcy.
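Quantile regression at level τ minimizes the pinball (check) loss rather than squared error, which is what lets each percentile of tHcy have its own coefficient on FV intake. A minimal sketch using subgradient descent with a decaying step size (statistical packages solve the equivalent linear program instead; the step-size schedule and iteration count here are ad hoc):

```python
import numpy as np

def quantile_regression(X, y, tau, lr=0.1, n_iter=50000):
    """Linear quantile regression at level tau by subgradient descent on the
    pinball loss rho_tau(r) = r * (tau - 1{r < 0}). Illustrative sketch only;
    production code solves this as a linear program."""
    Xb = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta = np.zeros(Xb.shape[1])
    for t in range(n_iter):
        r = y - Xb @ beta
        g = -Xb.T @ np.where(r > 0, tau, tau - 1.0) / len(y)  # subgradient
        beta -= lr / np.sqrt(t + 1.0) * g        # decaying step size
    return beta
```

Fitting this at tau = 0.1, 0.5, 0.9, etc., and comparing the FV coefficients across levels reproduces the kind of percentile-specific analysis the abstract describes.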
A Homogeneous Linear Estimation Method for System Error in Data Assimilation
WU Wei; WU Zengmao; GAO Shanhong; ZHENG Yi
2013-01-01
In this paper, a new bias estimation method is proposed and applied in a regional ensemble Kalman filter (EnKF) based on the Weather Research and Forecasting (WRF) Model. The method is based on a homogeneous linear bias model, and the model bias is estimated using statistics at each assimilation cycle, which differs from the state augmentation methods proposed in the previous literature. The new method provides a good estimate of the model bias of some specific variables, such as sea level pressure (SLP). A series of numerical experiments with EnKF is performed to examine the new method under a severe weather condition. Results show the positive effect of the method on the forecasting of circulation patterns and meso-scale systems, and on the reduction of analysis errors. The background error covariance structures of surface variables and the effects of model system bias on EnKF are also studied, and a new concept, 'correlation scale', is introduced. However, the new method needs further evaluation with more assimilation cases.
DOA and polarization estimation via signal reconstruction with linear polarization-sensitive arrays
Liu Zhangmeng
2015-12-01
Full Text Available This paper addresses the problem of direction-of-arrival (DOA) and polarization estimation with polarization-sensitive arrays (PSA), which has been a hot topic in the area of array signal processing during the past two or three decades. The sparse Bayesian learning (SBL) technique is introduced to exploit the spatial sparsity of the incident signals, and a new method is proposed that first reconstructs the signals from the array outputs and then exploits the reconstructed signals to realize parameter estimation. Only 1-D searching and numerical calculations are involved, which makes the proposed method computationally very efficient. Based on a linear array consisting of identically structured sensors, the proposed method can be used, with slight modifications, in PSA with different polarization structures. It also performs well in the presence of coherent signals or signals with different degrees of polarization. Simulation results are given to demonstrate the parameter estimation precision of the proposed method.
Simulating the Effect of Non-Linear Mode-Coupling in Cosmological Parameter Estimation
Kiessling, A; Heavens, A F
2011-01-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy with which cosmological parameters can be measured with a given experiment, and to optimise the design of experiments. However, the standard approach usually assumes that both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimisation it is usually assumed that the power-spectra covariance matrix is diagonal in Fourier space. But in the low-redshift Universe, non-linear mode-coupling will tend to correlate small-scale power, moving information from lower- to higher-order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naive Gaussian Fisher matrix forecasts with a Maximum Likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2-D and tomographic shear analysis of a Eucl...
Estimation of failure probabilities of linear dynamic systems by importance sampling
Anna Ivanova Olsen; Arvid Naess
2006-08-01
An iterative method for estimating the failure probability for certain time-variant reliability problems has been developed. In the paper, the focus is on the displacement response of a linear oscillator driven by white noise. Failure is then assumed to occur when the displacement response exceeds a critical threshold. The iteration procedure is a two-step method. In the first iteration, a simple control function promoting failure is constructed using the design point weighting principle. After time discretization, two points are chosen to construct a compound deterministic control function. It is based on the time point when the first maximum of the homogeneous solution occurs and on the point at the end of the considered time interval. An importance sampling technique is used to estimate the failure probability functional on a set of initial values of state space variables and time. In the second iteration, the concept of an optimal control function is implemented to construct a Markov control, which allows much better accuracy in the failure probability estimate than the simple control function. In both iterations, the concept of changing the probability measure by the Girsanov transformation is utilized. As a result the CPU time is substantially reduced compared with the crude Monte Carlo procedure.
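As a simplified illustration of the importance-sampling idea underlying this abstract (a static Gaussian tail problem rather than the paper's Girsanov-based control-function construction, and with arbitrary threshold, sample size and seed), the sketch below estimates a small exceedance probability by shifting the sampling density toward the failure region and reweighting by the likelihood ratio:

```python
import math
import random

def importance_sampling_tail(threshold, n, rng):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) by sampling
    Y ~ N(threshold, 1) and reweighting each failure sample by the
    likelihood ratio phi(y) / phi(y - threshold)."""
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)
        if y > threshold:
            # likelihood ratio exp(-t*y + t^2/2) of target over proposal
            total += math.exp(-threshold * y + threshold ** 2 / 2.0)
    return total / n

rng = random.Random(0)
# Closed-form tail probability P(Z > 3) for comparison.
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))
est = importance_sampling_tail(3.0, 50_000, rng)
```

Because nearly every proposal sample lands in the failure region, the estimator's variance is far below that of crude Monte Carlo, which would see only a handful of exceedances in 50,000 draws.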
Andreasen, Martin Møller; Christensen, Bent Jesper
This paper suggests a new and easy approach to estimate linear and non-linear dynamic term structure models with latent factors. We impose no distributional assumptions on the factors and they may therefore be non-Gaussian. The novelty of our approach is to use many observables (yields or bonds p...
Jimenez, M.J.; Madsen, Henrik; Bloem, J.J.
2008-01-01
(MAP) estimation is presented along with a software implementation. As a case study, the modelling of the thermal characteristics of a building integrated PV component is considered. The EC-JRC Ispra has made experimental data available. Both linear and non-linear models are identified. It is shown...
Margherita Velucchi
2014-09-01
Full Text Available Labor productivity is very complex to analyze across time, sectors and countries. In particular, in Italy, labor productivity has shown a prolonged slowdown but sector analyses highlight the presence of specific niches that have good levels of productivity and performance. This paper investigates how firms' characteristics might have affected the dynamics of the Italian service and manufacturing firms labor productivity in recent years (1998-2007, comparing them and focusing on some relevant sectors. We use a micro level original panel from the Italian National Institute of Statistics (ISTAT and a longitudinal quantile regression approach that allow us to show that labor productivity is highly heterogeneous across sectors and that the links between labor productivity and firms' characteristics are not constant across quantiles. We show that average estimates obtained via GLS do not capture the complex dynamics and heterogeneity of the service and manufacturing firms' labor productivity. Using this approach, we show that innovativeness and human capital, in particular, have a very strong impact on fostering labor productivity of lower productive firms. From the sector analysis on four service' sectors (restaurants & hotels, trade distributors, trade shops and legal & accountants we show that heterogeneity is more intense at a sector level and we derive some common features that may be useful in terms of policy implications.
Are estimated control charts in control?
Albers, W.; Kallenberg, W.C.M.
2001-01-01
Standard control chart practice assumes normality and uses estimated parameters. Because of the extreme quantiles involved, large relative errors result. Here simple corrections are derived to bring such estimated charts under control. As a criterion, suitable exceedance probabilities are used.
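The effect this abstract describes can be reproduced with a minimal simulation (my own sketch, assuming an in-control N(0, 1) process, standard two-sided 3-sigma Shewhart limits, and an arbitrary Phase I sample size and seed): because the limits are built from estimated parameters, the average true exceedance probability lands above the nominal 0.27%:

```python
import math
import random

def exceedance_prob(mu_hat, sigma_hat):
    """True two-sided exceedance probability of the estimated limits
    mu_hat +/- 3*sigma_hat when the process is actually N(0, 1)."""
    upper = 0.5 * math.erfc((mu_hat + 3.0 * sigma_hat) / math.sqrt(2.0))
    lower = 0.5 * math.erfc((3.0 * sigma_hat - mu_hat) / math.sqrt(2.0))
    return upper + lower

def mean_exceedance(n, reps, rng):
    """Average exceedance probability when the limits are estimated
    from a Phase I sample of size n drawn from the in-control process."""
    total = 0.0
    for _ in range(reps):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(sample) / n
        s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
        total += exceedance_prob(m, s)
    return total / reps

rng = random.Random(0)
nominal = math.erfc(3.0 / math.sqrt(2.0))        # two-sided, about 0.0027
estimated = mean_exceedance(10, 2000, rng)       # inflated by estimation error
```

The gap between `estimated` and `nominal` is the "large relative error" the abstract refers to, and it is what the proposed corrections are designed to remove.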
Monopole and dipole estimation for multi-frequency sky maps by linear regression
Wehus, I K; Eriksen, H K; Banday, A J; Dickinson, C; Ghosh, T; Gorski, K M; Lawrence, C R; Leahy, J P; Maino, D; Reich, P; Reich, W
2014-01-01
We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called "T-T plots". Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the 9-year WMAP, Planck 2013, SFD 100 um, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are...
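The computational core named here, linear regression between pairs of frequency maps, can be sketched in a few lines. This is a generic ordinary-least-squares fit on hypothetical pixel values (the patch data, scaling and offset below are invented for illustration, not taken from the paper):

```python
def tt_regression(map_a, map_b):
    """Ordinary least squares fit map_b ~ slope * map_a + offset.
    In a T-T plot the slope tracks the spectral scaling of the dominant
    foreground between the two frequencies, while the offset absorbs
    the monopole difference within the patch."""
    n = len(map_a)
    mean_a = sum(map_a) / n
    mean_b = sum(map_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(map_a, map_b))
    var = sum((a - mean_a) ** 2 for a in map_a)
    slope = cov / var
    offset = mean_b - slope * mean_a
    return slope, offset

# Hypothetical pixel temperatures in one sky patch at frequency A, and
# the same patch at frequency B with a 0.6 scaling and a +2.5 offset.
patch_a = [1.0, 3.0, 4.5, 7.0, 9.5, 12.0]
patch_b = [0.6 * t + 2.5 for t in patch_a]
slope, offset = tt_regression(patch_a, patch_b)
```

Repeating such fits patch by patch, as the abstract describes, yields a set of local offsets from which the global monopole and dipole terms can be solved.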
OMPRAKASH TEMBHURNE; DEEPTI SHRIMANKAR
2017-07-01
A study of abundance estimation is of vital importance in the spectral unmixing of hyperspectral images. Recently, various methods using an evolutionary approach have been proposed for spectral unmixing to achieve higher performance. However, these methods are based on unconstrained optimisation problems, and their performance also depends on properly tuned parameters. We have proposed a new non-parametric algorithm using a teaching-learning-based optimisation technique with an inbuilt constraint-maintenance mechanism based on the linear mixing model. In this approach, the unmixing problem is transformed into a combinatorial optimisation problem by introducing the abundance sum-to-one constraint and the abundance non-negativity constraint. A comparative analysis of the proposed algorithm is conducted with two other state-of-the-art algorithms. Experimental results in known and unknown environments with varying signal-to-noise ratios on simulated and real hyperspectral data demonstrate that the proposed method outperforms the other methods.
Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li
2014-05-01
This paper investigates the problem of modal parameter estimation of time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric, that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, an orthogonal polynomial and z-domain mapping hybrid basis function is presented, which has the advantageous numerical condition and with which it is convenient to calculate the modal parameters. A series of numerical examples have evaluated and illustrated the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments has further validated the proposed estimator.
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n X_i(y_i − μ(X_i′β)) = 0 for the univariate generalized linear model E(y|X) = μ(X′β). Given uncorrelated residuals {e_i = Y_i − μ(X_i′β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{−1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = ∑_{i=1}^n X_i X_i′. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^{−1} → 0 as the sample size n → ∞.
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n X_i(y_i − μ(X_i′β)) = 0 for the univariate generalized linear model E(y|X) = μ(X′β). Given uncorrelated residuals {e_i = Y_i − μ(X_i′β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{−1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = ∑_{i=1}^n X_i X_i′. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^{−1} → 0 as the sample size n → ∞.
On the error of estimating the sparsest solution of underdetermined linear systems
Babaie-Zadeh, Massoud; Mohimani, Hosein
2011-01-01
Let A be an n by m matrix with m>n, and suppose that the underdetermined linear system As=x admits a sparse solution s0 for which ||s0||_0 < 1/2 spark(A). Such a sparse solution is unique due to a well-known uniqueness theorem. Suppose now that we have somehow a solution s_hat as an estimation of s0, and suppose that s_hat is only `approximately sparse', that is, many of its components are very small and nearly zero, but not mathematically equal to zero. Is such a solution necessarily close to the true sparsest solution? More generally, is it possible to construct an upper bound on the estimation error ||s_hat-s0||_2 without knowing s0? The answer is positive, and in this paper we construct such a bound based on minimal singular values of submatrices of A. We will also state a tight bound, which is more complicated, but besides being tight, enables us to study the case of random dictionaries and obtain probabilistic upper bounds. We will also study the noisy case, that is, where x=As+n. Moreover, we will s...
Mohd. Azam, Sazuan Nazrah
2017-01-01
In this paper, we used the modified quadruple tank system, which represents a multi-input multi-output (MIMO) system, as an example to present the realization of a linear discrete-time state space model and to obtain the state estimate using a Kalman filter in a methodical manner. First, the existing dynamics of the system of stochastic differential equations is linearized to produce the deterministic-stochastic linear transfer function. Then the linear transfer function is discretized to produce a linear discrete-time state space model that has a deterministic and a stochastic component. The filtered part of the Kalman filter is used to estimate the current state, based on the model and the measurements. The static and dynamic Kalman filters are compared and all results are demonstrated through simulations.
Azam, Sazuan N. M.
2017-01-01
In this paper, we used the modified quadruple tank system, which represents a multi-input multi-output (MIMO) system, as an example to present the realization of a linear discrete-time state space model and to obtain the state estimate using a Kalman filter in a methodical manner. First, the existing dynamics of the system of stochastic differential equations is linearized to produce the deterministic-stochastic linear transfer function. Then the linear transfer function is discretized to produce a linear discrete-time state space model that has a deterministic and a stochastic component. The filtered part of the Kalman filter is used to estimate the current state, based on the model and the measurements. The static and dynamic Kalman filters are compared and all results are demonstrated through simulations.
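The predict/update cycle described in this abstract can be shown on a scalar state-space model (a minimal sketch of a standard Kalman filter, not the quadruple-tank model itself; the system parameters, noise levels and seed below are invented for illustration):

```python
import random

def kalman_1d(measurements, a, c, q, r, x0, p0):
    """Scalar Kalman filter for x_{k+1} = a*x_k + w, y_k = c*x_k + v,
    with process noise variance q and measurement noise variance r."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        # predict step: propagate state and error variance
        x, p = a * x, a * a * p + q
        # update step: blend prediction and measurement via the gain
        k = p * c / (c * c * p + r)
        x = x + k * (y - c * x)
        p = (1.0 - k * c) * p
        estimates.append(x)
    return estimates, p

# Simulate a hypothetical AR(1) state observed in noise.
rng = random.Random(42)
true_x, xs, ys = 0.0, [], []
for _ in range(200):
    true_x = 0.95 * true_x + rng.gauss(0.0, 0.1)
    xs.append(true_x)
    ys.append(true_x + rng.gauss(0.0, 0.5))

est, p_final = kalman_1d(ys, a=0.95, c=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0)
```

The filtered trajectory has a markedly lower mean squared error than the raw measurements, which is the point of the comparison the abstract reports.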
Nielsen, Henrik Aalborg; Madsen, Henrik; Nielsen, Torben Skov
2006-01-01
For operational planning it is important to provide information about the situation-dependent uncertainty of a wind power forecast. Factors which influence the uncertainty of a wind power forecast include the predictability of the actual meteorological situation, the level of the predicted wind speed (due to the non-linearity of the power curve) and the forecast horizon. With respect to the predictability of the actual meteorological situation a number of explanatory variables are considered, some inspired by the literature. The article contains an overview of related work within the field. An existing wind power forecasting system (Zephyr/WPPT) is considered and it is shown how analysis of the forecast error can be used to build a model of the quantiles of the forecast error. Only explanatory variables or indices which are predictable are considered, whereby the model obtained can be used
New Two-Parameter Estimation for the Linear Model with Linear Restrictions
郭淑妹; 顾勇为; 郭杰
2013-01-01
To overcome the weakness of the restricted least squares estimator in handling multicollinearity in parameter estimation, stochastic linear restrictions are introduced and a restricted new two-parameter estimator is proposed. In the mean squared error sense, the proposed estimator is shown to be superior to the restricted least squares estimator, the restricted ridge estimator, and the restricted Liu estimator.
YANG Xiao-Jun; WENG Zheng-Xin; TIAN Zuo-Hua; SHI Song-Jiao
2005-01-01
The H∞ hybrid estimation problem for linear continuous time-varying systems is investigated in this paper, where the estimated signals are a linear combination of state and input. The design objective requires that the worst-case energy gain from disturbance to estimation error be less than a prescribed level. The optimal solution of the hybrid estimation problem is the saddle point of a two-player zero-sum differential game. Based on the differential game approach, necessary and sufficient solvability conditions for the hybrid estimation problem are provided in terms of solutions to a Riccati differential equation. Moreover, one possible estimator is proposed if the solvability conditions are satisfied. The estimator is characterized by a gain matrix and an output mapping matrix that reflects the internal relations between the unknown input and the output estimation error. Both state and unknown input estimation are realized by the proposed estimator. Thus, the results in this paper are also capable of dealing with fault diagnosis problems for linear time-varying systems. Finally, a numerical example is provided to illustrate the proposed approach.
Hu, L; Zhang, Z G; Mouraux, A; Iannetti, G D
2015-05-01
Transient sensory, motor or cognitive events elicit not only phase-locked event-related potentials (ERPs) in the ongoing electroencephalogram (EEG), but also induce non-phase-locked modulations of ongoing EEG oscillations. These modulations can be detected when single-trial waveforms are analysed in the time-frequency domain, and consist of stimulus-induced decreases (event-related desynchronization, ERD) or increases (event-related synchronization, ERS) of synchrony in the activity of the underlying neuronal populations. ERD and ERS reflect changes in the parameters that control oscillations in neuronal networks and, depending on the frequency at which they occur, represent neuronal mechanisms involved in cortical activation, inhibition and binding. ERD and ERS are commonly estimated by averaging the time-frequency decomposition of single trials. However, their trial-to-trial variability, which can reflect physiologically important information, is lost by across-trial averaging. Here, we aim to (1) develop novel approaches to explore single-trial parameters (including latency, frequency and magnitude) of ERP/ERD/ERS; (2) disclose the relationship between estimated single-trial parameters and other experimental factors (e.g., perceived intensity). We found that (1) stimulus-elicited ERP/ERD/ERS can be correctly separated using principal component analysis (PCA) decomposition with Varimax rotation on the single-trial time-frequency distributions; (2) time-frequency multiple linear regression with dispersion term (TF-MLRd) enhances the signal-to-noise ratio of ERP/ERD/ERS in single trials, and provides an unbiased estimation of their latency, frequency, and magnitude at the single-trial level; (3) these estimates can be meaningfully correlated with each other and with other experimental factors at the single-trial level (e.g., perceived stimulus intensity and ERP magnitude). The methods described in this article allow exploring fully non-phase-locked stimulus-induced cortical
An Analysis of Bank Service Satisfaction Based on Quantile Regression and Grey Relational Analysis
Wen-Tsao Pan
2016-01-01
Full Text Available Bank service satisfaction is vital to the success of a bank. In this paper, we propose to use grey relational analysis to gauge the levels of service satisfaction of the banks. With the grey relational analysis, we compared the effects of different variables on service satisfaction and ranked the banks according to their levels of service satisfaction. We further used the quantile regression model to find the variables that affect the satisfaction of a customer at a specific quantile of the satisfaction level. The results of the quantile regression analysis provide a bank manager with information to formulate policies to further promote satisfaction of the customers at different quantiles of the satisfaction level. We also compared the prediction accuracies of the regression models at different quantiles. The experimental results showed that, among the seven quantile regression models, the median regression model has the best performance in terms of the RMSE, RTIC, and CE performance measures.
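The grey relational grade used for the ranking step can be computed directly from the standard Deng formula. The sketch below is generic (the normalised satisfaction scores and the ideal reference sequence are invented for illustration; zeta is the usual distinguishing coefficient of 0.5):

```python
def grey_relational_grades(reference, alternatives, zeta=0.5):
    """Grey relational grade of each alternative sequence against a
    reference (ideal) sequence, with distinguishing coefficient zeta."""
    deltas = [[abs(r - x) for r, x in zip(reference, alt)]
              for alt in alternatives]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    grades = []
    for row in deltas:
        # grey relational coefficient per criterion, then average
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical normalised satisfaction scores of three banks on three
# criteria; the reference is the ideal (best) score on each criterion.
ideal = [1.0, 1.0, 1.0]
banks = [[0.9, 0.8, 0.95], [0.6, 0.7, 0.5], [0.9, 0.8, 0.95]]
grades = grey_relational_grades(ideal, banks)
```

Sorting the banks by grade reproduces the kind of ranking the abstract describes; identical score profiles receive identical grades.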
2002-01-01
This paper presents recursive least-squares (RLS) estimation algorithms using the covariance information in linear discrete-time distributed parameter systems. The signal is estimated from observations containing some uncertain observations, in which there are cases where the observed value does not contain the signal and consists of observation noise only. The probability that the signal exists in the observed value is used in the estimation algorithms. The algorith...
Analysis of retirement income adequacy using quantile regression: A case study in Malaysia
Alaudin, Ros Idayuwati; Ismail, Noriszura; Isa, Zaidi
2015-09-01
Quantile regression is a statistical analysis that does not restrict attention to the conditional mean and therefore permits approximating the whole conditional distribution of a response variable. Quantile regression is also more robust to outliers than mean regression models. In this paper, we demonstrate how the quantile regression approach can be used to analyze the ratio of projected wealth to needs (wealth-needs ratio) during retirement.
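The mechanism that lets quantile regression target any part of the conditional distribution is the Koenker-Bassett check (pinball) loss: minimising it over a constant recovers the empirical tau-quantile, just as minimising squared error recovers the mean. A minimal sketch (grid search over a constant model, with made-up data; real fits use linear programming):

```python
def check_loss(tau, residuals):
    """Koenker-Bassett check loss: tau*r for r >= 0, (tau-1)*r for r < 0."""
    return sum(tau * r if r >= 0 else (tau - 1.0) * r for r in residuals)

def best_constant(tau, ys, grid):
    """Constant c minimising the check loss over a candidate grid; for a
    fine enough grid this is the empirical tau-quantile of ys."""
    return min(grid, key=lambda c: check_loss(tau, [y - c for y in ys]))

ys = [2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0]   # hypothetical responses
grid = [i / 10.0 for i in range(0, 401)]       # candidates 0.0 .. 40.0
median_fit = best_constant(0.5, ys, grid)      # tau = 0.5: the median
upper_fit = best_constant(0.9, ys, grid)       # tau = 0.9: upper quantile
```

Replacing the constant with a linear predictor in covariates gives quantile regression proper, one fit per tau, which is how a whole conditional distribution is traced out.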
J. Szilagyi
2009-05-01
Full Text Available Under simplifying conditions, catchment-scale vapor pressure at the drying land surface can be calculated as a function of its watershed-representative temperature ⟨T_s⟩ by the wet-surface equation (WSE, similar to the wet-bulb equation in meteorology for calculating the dry-bulb thermometer vapor pressure) of the Complementary Relationship of evaporation. The corresponding watershed ET rate,
Monopole and dipole estimation for multi-frequency sky maps by linear regression
Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.
2017-01-01
We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
Frommer, A; Lippert, Th; Rittich, H
2012-01-01
The Lanczos process constructs a sequence of orthonormal vectors v_m spanning a nested sequence of Krylov subspaces generated by a hermitian matrix A and some starting vector b. In this paper we show how to cheaply recover a secondary Lanczos process, starting at an arbitrary Lanczos vector v_m and how to use this secondary process to efficiently obtain computable error estimates and error bounds for the Lanczos approximations to a solution of a linear system Ax = b as well as, more generally, for the Lanczos approximations to the action of a rational matrix function on a vector. Our approach uses the relation between the Lanczos process and quadrature as developed by Golub and Meurant. It is different from methods known so far because of its use of the secondary Lanczos process. With our approach, it is now in particular possible to efficiently obtain upper bounds for the error in the 2-norm, provided a lower bound on the smallest eigenvalue of A is known. This holds for the error of the cg iterates as well ...
Do-Sik Yoo
2015-01-01
Full Text Available We propose a low-complexity subspace-based direction-of-arrival (DOA) estimation algorithm employing a direct signal space construction method (DSPCM) by subsampling the autocorrelation matrix of a uniform linear array (ULA). Three major contributions of this paper are as follows. First of all, we introduce the method of autocorrelation matrix subsampling which enables us to employ a low-complexity algorithm based on a ULA without computationally complex eigenvalue decomposition or singular-value decomposition. Secondly, we introduce a signal vector separation method to improve the distinguishability among signal vectors, which can greatly improve the performance, particularly in the low signal-to-noise ratio (SNR) regime. Thirdly, we provide a root finding (RF) method in addition to a spectral search (SS) method as the angle finding scheme. Through simulations, we illustrate that the performance of the proposed scheme is reasonably close to that of computationally much more expensive MUSIC (MUltiple SIgnal Classification)-based algorithms. Finally, we illustrate that the computational complexity of the proposed scheme is reduced, in comparison with those of MUSIC-based schemes, by a factor of O(N²/K), where K is the number of sources and N is the number of antenna elements.
Qiutong Jin
2016-06-01
Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
Non-linear parameter estimation for the LTP experiment: analysis of an operational exercise
Congedo, G; Ferraioli, L; Hueller, M; Vitale, S; Hewitson, M; Nofrarias, M; Monsky, A; Armano, M; Grynagier, A; Diaz-Aguilo, M; Plagnol, E; Rais, B
2011-01-01
The precursor ESA mission LISA-Pathfinder, to be flown in 2013, aims at demonstrating the feasibility of the free fall necessary for LISA, the upcoming space-borne gravitational wave observatory. The LISA Technology Package (LTP) is planned to carry out a number of experiments, whose main targets are to identify and measure the disturbances on each test mass, in order to reach an unprecedentedly low level of residual force noise. To fulfill this plan, it is necessary to correctly design, set up and optimize the experiments to be performed in flight and do a full system parameter estimation. Here we describe the progress on the non-linear analysis using the methods developed in the framework of the LTPDA Toolbox, an object-oriented MATLAB data analysis environment: the effort is to identify the critical parameters and remove the degeneracy by properly combining the results of different experiments coming from a closed-loop system like LTP.
Urrutia, Jackie D.; Tampis, Razzcelle L.; Mercado, Joseph; Baygan, Aaron Vito M.; Baccay, Edcon B.
2016-02-01
The objective of this research is to formulate a mathematical model for the Philippines' Real Gross Domestic Product (Real GDP). The following factors are considered as the independent variables that can influence the Real GDP of the Philippines (y): Consumers' Spending (x1), Government's Spending (x2), Capital Formation (x3) and Imports (x4). The researchers used a normal estimation equation in matrix form to create the model for Real GDP and used α = 0.01. The researchers analyzed quarterly data from 1990 to 2013, acquired from the National Statistical Coordination Board (NSCB), resulting in a total of 96 observations for each variable. The data underwent a logarithmic transformation, particularly the dependent variable (y), to satisfy all the assumptions of multiple linear regression analysis. The mathematical model for Real GDP was formulated using matrices through MATLAB. Based on the results, only three of the independent variables are significant to the dependent variable, namely Consumers' Spending (x1), Capital Formation (x3) and Imports (x4), and hence can predict Real GDP (y). The regression analysis shows a coefficient of determination of 98.7%, meaning the independent variables explain most of the variation in the dependent variable. With a 97.6% result in the paired t-test, the predicted values obtained from the model showed no significant difference from the actual values of Real GDP. This research will be essential in appraising forthcoming changes to aid the government in implementing policies for the development of the economy.
Silva, Juan P.; Lasso, Ana; Lubberding, Henk J.; Peña, Miguel R.; Gijzen, Hubert J.
2015-05-01
The closed static chamber technique is widely used to quantify greenhouse gases (GHG), i.e. CH4, CO2 and N2O, from aquatic and wastewater treatment systems. However, chamber-measured fluxes over air-water interfaces appear to be subject to considerable uncertainty, depending on the chamber design, lack of air mixing in the chamber, concentration gradient changes during the deployment, and irregular eruptions of gas accumulated in the sediment. In this study, the closed static chamber technique was tested in an anaerobic pond operating under tropical conditions. The closed static chambers were found to be reliable for measuring GHG, but an intrinsic limitation of closed static chambers is that not all the gas concentrations measured within a chamber headspace can be used to estimate the flux, owing to concentration gradient curves with no plausible physical explanation. Based on the total data set, the percentage of curves accepted was 93.6, 87.2, and 73% for CH4, CO2 and N2O, respectively. The statistical analyses demonstrated that considering only linear regression was inappropriate (i.e. approximately 40% of the data for CH4, CO2 and N2O were best fitted by non-linear regression) for the determination of GHG fluxes from stabilization ponds by the closed static chamber technique. In this work, it is clear that when R²adj,non-lin > R²adj,lin, the application of linear regression models is not recommended, as it leads to an underestimation of GHG fluxes by 10-50%. This suggests that adopting only or mostly linear regression models will affect the GHG inventories obtained using closed static chambers. According to our results, the misuse of the usual R² parameter and of only the linear regression model to estimate the fluxes will lead to erroneous information on the real contribution of GHG emissions from wastewater. Therefore, R²adj and non-linear regression model analysis should be used to reduce the biases in flux estimation by the
B. Thrasher
2012-09-01
When applying a quantile mapping-based bias correction to daily temperature extremes simulated by a global climate model (GCM), the transformed values of maximum and minimum temperatures are changed, and the diurnal temperature range (DTR) can become physically unrealistic. While the causes are not thoroughly explored, there is a strong relationship between GCM biases in snow albedo feedback during snowmelt and bias correction resulting in unrealistic DTR values. We propose a technique to bias correct DTR, based on comparing observations and GCM historic simulations, and combine it with either bias correcting daily maximum temperatures and calculating daily minimum temperatures, or vice versa. By basing the bias correction on a base period of 1961–1980 and validating it during a test period of 1981–1999, we show that bias correcting DTR and maximum daily temperature can produce more accurate estimates of daily temperature extremes while avoiding the pathological cases of unrealistic DTR values.
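The quantile-mapping idea underlying this kind of bias correction can be sketched in a few lines of Python. This is a generic empirical version, not the authors' exact procedure, and all data here are synthetic:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_values):
    """Empirical quantile mapping: replace each model value with the
    observed value at the same empirical quantile (rank)."""
    # Empirical CDF position of each value within the historic model run
    ranks = np.searchsorted(np.sort(model_hist), model_values) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Map those quantiles onto the observed distribution
    return np.quantile(obs_hist, ranks)

# Toy example: the "model" runs 2 degrees too warm with inflated variance
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, 5000)       # synthetic observed Tmax
model = rng.normal(17.0, 4.0, 5000)     # synthetic biased GCM output
corrected = quantile_map(model, obs, model)
print(corrected.mean())                 # close to the observed mean of ~15
```

After mapping, the corrected series matches the observed distribution in both mean and spread, which is exactly the property that can break DTR when maximum and minimum temperatures are corrected independently.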
无
2008-01-01
A class of estimators of the mean survival time with interval-censored data is studied by the unbiased transformation method. The estimators are constructed from the observations to ensure unbiasedness, in the sense that the estimators in a certain class have the same expectation as the mean survival time. The estimators have good properties such as strong consistency (with rate O(n^{-1/2}(log log n)^{1/2})) and asymptotic normality. The application to linear regression is considered and simulation reports are given.
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method can allow researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework can allow researchers to assimilate the site-specific secondary information where the observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Janson, Lucas; Rajaratnam, Bala
Great strides have been made in the field of reconstructing past temperatures based on models relating temperature to temperature-sensitive paleoclimate proxies. One of the goals of such reconstructions is to assess if current climate is anomalous in a millennial context. These regression based approaches model the conditional mean of the temperature distribution as a function of paleoclimate proxies (or vice versa). Some of the recent focus in the area has considered methods which help reduce the uncertainty inherent in such statistical paleoclimate reconstructions, with the ultimate goal of improving the confidence that can be attached to such endeavors. A second important scientific focus in the subject area is forward models for proxies, the goal of which is to understand the way paleoclimate proxies are driven by temperature and other environmental variables. One of the primary contributions of this paper is novel statistical methodology for (1) quantile regression with autoregressive residual structure, (2) estimation of corresponding model parameters, (3) development of a rigorous framework for specifying uncertainty estimates of quantities of interest, yielding (4) statistical byproducts that address the two scientific foci discussed above. We show that by using the above statistical methodology we can demonstrably produce a more robust reconstruction than is possible by using conditional-mean-fitting methods. Our reconstruction shares some of the common features of past reconstructions, but we also gain useful insights. More importantly, we are able to demonstrate a significantly smaller uncertainty than that from previous regression methods. In addition, the quantile regression component allows us to model, in a more complete and flexible way than least squares, the conditional distribution of temperature given proxies. This relationship can be used to inform forward models relating how proxies are driven by temperature.
Sign and Quantiles of the Realized Stock-Bond Correlation
Aslanidis, Nektarios; Christiansen, Charlotte
We scrutinize the monthly realized stock-bond correlation based upon high frequency returns. In particular, we use a probit model to track the dynamics of the sign of the correlation relative to its various economic forces. The sign is predictable to a large extent, with bond market liquidity being the most important variable. Moreover, stock market volatility, inflation uncertainty, short rate volatility, and bond volatility have significant effects upon the sign. In addition, we use quantile regressions to pin down the systematic variation of the extreme tails of the realized stock-bond correlation.
Evaluation of Spatio-temporal Drought using Water Resource Quantile Map
Moon, Soojin; Suh, Aesook; Kang, Boosik
2016-04-01
Unlike floods, drought is not defined by a single event, and the criteria for identifying and declaring drought remain vague, given the long-term water shortages that follow regional and seasonal rainfall disparities. Drought indices are the main tools for evaluating drought, but no single index is an absolute indicator. Because each index behaves differently under different regional and environmental conditions, users must assess its applicability and choose the index appropriate to their purpose. Although much research has addressed drought monitoring, objective methods that allow both experts and the general public to evaluate actual drought conditions are lacking. This study proposes the RSQM (Real-time Storage Quantile Map) and RRQM (Real-time Riverflow Quantile Map), which compute the quantile of the current river water level and multi-purpose dam storage rate relative to their usual annual values. Probability distributions were calculated for representative water-level stations and multi-purpose dams in each basin, and the RSQM and RRQM were compared with the SPI and PDSI indices. These schemes allow the degree of water shortage and drought conditions to be judged objectively in real time. The RSQM and RRQM represent the supply potential of water resources and the stress on the river environment, respectively. Because the RRQM mainly reflects regulated flows downstream of multi-purpose dams, it does not always match the drought tendency exactly; however, it more directly and visually represents drought conditions.
Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals
Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys
2016-04-01
The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
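A minimal parametric-bootstrap percentile interval for an upper GEV quantile can be sketched with SciPy. This illustrates only one of the several bootstrap/CI combinations the study compares, on synthetic data:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic "annual maximum precipitation" series from a stationary GEV
data = genextreme.rvs(-0.1, loc=50, scale=10, size=60, random_state=rng)

c, loc, scale = genextreme.fit(data)       # MLE fit of the GEV
q, B = 0.99, 200                           # target quantile, bootstrap reps
boot = np.empty(B)
for b in range(B):
    # Parametric bootstrap: simulate from the fitted model and refit
    sim = genextreme.rvs(c, loc=loc, scale=scale, size=data.size,
                         random_state=rng)
    cb, lb, sb = genextreme.fit(sim)
    boot[b] = genextreme.ppf(q, cb, loc=lb, scale=sb)

lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])   # percentile-method 95% CI
print(lo_ci, hi_ci)
```

As the abstract notes, intervals for extreme quantiles come out wide: the spread of `boot` grows quickly as `q` approaches 1, because the refitted shape parameter strongly leverages the tail.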
Estimating forest species abundance through linear unmixing of CHRIS/PROBA imagery
Stagakis, Stavros; Vanikiotis, Theofilos; Sykioti, Olga
2016-09-01
The advancing technology of hyperspectral remote sensing offers the opportunity of accurate land cover characterization of complex natural environments. In this study, a linear spectral unmixing algorithm that incorporates a novel hierarchical Bayesian approach (BI-ICE) was applied on two spatially and temporally adjacent CHRIS/PROBA images over a forest in North Pindos National Park (Epirus, Greece). The scope is to investigate the potential of this algorithm to discriminate two different forest species (i.e. beech - Fagus sylvatica, pine - Pinus nigra) and produce accurate species-specific abundance maps. The unmixing results were evaluated in uniformly distributed plots across the test site using measured fractions of each species derived by very high resolution aerial orthophotos. Landsat-8 images were also used to produce a conventional discrete-type classification map of the test site. This map was used to define the exact borders of the test site and compare the thematic information of the two mapping approaches (discrete vs abundance mapping). The required ground truth information, regarding training and validation of the applied mapping methodologies, was collected during a field campaign across the study site. Abundance estimates reached very good overall accuracy (R2 = 0.98, RMSE = 0.06). The most significant source of error in our results was due to the shadowing effects that were very intense in some areas of the test site due to the low solar elevation during CHRIS acquisitions. It is also demonstrated that the two mapping approaches are in accordance across pure and dense forest areas, but the conventional classification map fails to describe the natural spatial gradients of each species and the actual species mixture across the test site. Overall, the BI-ICE algorithm presented increased potential to unmix challenging objects with high spectral similarity, such as different vegetation species, under real and not optimum acquisition conditions. Its
El Allaki, Farouk; Christensen, Jette; Vallières, André; Paré, Julie
2014-10-01
The objective of this study was to estimate the population size of Canadian poultry farms in 3 subpopulations (British Columbia, Ontario, and Other) by poultry category. We used data for 2008 to 2011 from the Canadian Notifiable Avian Influenza (NAI) Surveillance System (CanNAISS). Log-linear capture-recapture models were applied to estimate the number of commercial chicken and turkey farms. The estimated size of farm populations was validated by comparing sizes to data provided by the Canadian poultry industry in 2007, which were assumed to be complete and exhaustive. Our results showed that the log-linear modelling approach was an appropriate tool to estimate the population size of Canadian commercial chicken and turkey farms. The 2007 farm population size for each poultry category was included in the 95% confidence intervals of the farm population size estimates. Log-linear capture-recapture modelling might be useful for estimating the number of farms using surveillance data when no comprehensive registry exists.
N.G. HOSSEIN-ZADEH
2008-12-01
Data on stillbirth from the Animal Breeding Center of Iran collected from January 1990 to December 2007 and comprising 668810 Holstein calving events from 2506 herds were analyzed. Linear and threshold animal and sire models were used to estimate genetic parameters and genetic trends for stillbirth in the first, second, and third parities. Mean incidence of stillbirth decreased from first to third parities: 23.7%, 22.1%, and 21.8%, respectively. Phenotypic rates of stillbirth decreased from 1993 to 1998 for first, second and third calvings, and then increased from 1998 to 2007 for the first three parities. Direct heritability estimates of stillbirth for parities 1, 2 and 3 ranged from 2.2 to 8.7%, 0.6 to 5.1% and 0.1 to 3.8%, respectively, and maternal heritability estimates of stillbirth for parities 1, 2 and 3 ranged from 1.4 to 6.3%, 0.5 to 4.2% and 0.08 to 2.0%, respectively, using linear and threshold animal models. The threshold sire model estimates of heritabilities for stillbirth in this study were 0.021 to 0.071, while the linear sire model estimates of heritabilities for stillbirth in the current study were from 0.003 to 0.021 over the parities. There was a slightly increasing genetic trend for stillbirth rate in parities 1 and 2 over time with the analysis of linear animal and linear sire models. There was a significant decreasing genetic trend for stillbirth rate in parities 1 and 3 over time with the analysis of threshold animal and threshold sire models, but the genetic trend for stillbirth rate in parity 2 with these models of analysis was significantly positive. The low estimates of heritability obtained in this study implied that much of the improvement in stillbirth could be attained by improvement of the production environment rather than genetic selection.
Chernozhukov, Victor
2009-01-01
Quantile regression is an increasingly important empirical tool in economics and other sciences for analyzing the impact of a set of regressors on the conditional distribution of an outcome. Extremal quantile regression, or quantile regression applied to the tails, is of interest in many economic and financial applications, such as conditional value-at-risk, production efficiency, and adjustment bands in (S,s) models. In this paper we provide feasible inference tools for extremal conditional quantile models that rely upon extreme value approximations to the distribution of self-normalized quantile regression statistics. The methods are simple to implement and can be of independent interest even in the non-regression case. We illustrate the results with two empirical examples analyzing extreme fluctuations of a stock return and extremely low percentiles of live infants' birthweights in the range between 250 and 1500 grams.
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user-defined ranges of possible headings and speeds. For linear responses standard...
Clark, G
2003-04-28
This report describes a feasibility study. We are interested in calculating the angular and linear velocities of a re-entry vehicle using six acceleration signals from a distributed accelerometer inertial measurement unit (DAIMU). Earlier work showed that angular and linear velocity calculation using classic nonlinear ordinary differential equation (ODE) solvers is not practically feasible, due to mathematical and numerical difficulties. This report demonstrates the theoretical feasibility of using model-based nonlinear state estimation techniques to obtain the angular and linear velocities in this problem. Practical numerical and calibration issues require additional work to resolve. We show that the six accelerometers in the DAIMU are not sufficient to provide observability, so additional measurements of the system states are required (e.g. from a Global Positioning System (GPS) unit). Given the constraint that our system cannot use GPS, we propose using the existing on-board 3-axis magnetometer to measure angular velocity. We further show that the six nonlinear ODEs for the vehicle kinematics can be decoupled into three ODEs in the angular velocity and three ODEs in the linear velocity. This allows us to formulate a three-state Gauss-Markov system model for the angular velocities, using the magnetometer signals in the measurement model. This re-formulated model is observable, allowing us to build an Extended Kalman Filter (EKF) for estimating the angular velocities. Given the angular velocity estimates from the EKF, the three ODEs for the linear velocity become algebraic, and the linear velocity can be calculated by numerical integration. Thus, we do not need direct measurements of the linear velocity to provide observability, and the technique is mathematically feasible. Using a simulation example, we show that the estimator adds value over the numerical ODE solver in the presence of measurement noise. Calculating the velocities in the
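A toy scalar analogue of the Gauss-Markov state-estimation approach described above looks like this; for a linear model the EKF reduces to the ordinary Kalman filter, and all values below are synthetic:

```python
import numpy as np

# Scalar Gauss-Markov "angular velocity": w_{k+1} = a*w_k + process noise,
# measured as z_k = w_k + sensor noise. A Kalman filter recovers w from z.
rng = np.random.default_rng(7)
a, Q, R = 0.95, 0.01, 0.25        # state transition, process var, sensor var
n = 500
w = np.zeros(n)
for k in range(1, n):
    w[k] = a * w[k - 1] + rng.normal(0, np.sqrt(Q))
z = w + rng.normal(0, np.sqrt(R), n)

west, P = 0.0, 1.0                # initial state estimate and covariance
est = np.empty(n)
for k in range(n):
    west, P = a * west, a * a * P + Q        # predict
    K = P / (P + R)                          # Kalman gain
    west, P = west + K * (z[k] - west), (1 - K) * P   # update
    est[k] = west

# The filtered estimate should beat the raw sensor in mean squared error
print(np.mean((est - w) ** 2) < np.mean((z - w) ** 2))
```

The report's actual problem is three-dimensional and nonlinear, hence the EKF; the structure (predict from the Gauss-Markov model, update from the magnetometer measurement) is the same.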
Scale and scope economies in nursing homes: a quantile regression approach.
Christensen, Eric W
2004-04-01
Nursing homes vary widely between facilities with very few beds and facilities with several hundred beds. Previous studies, which estimate nursing home scale and scope economies, do not account for this heterogeneity and implicitly assume that all nursing homes face the same cost structure. To account for heterogeneity, this paper uses quantile regression to estimate cost functions for skilled and intermediate care nursing homes. The results show that the parameters of nursing home cost functions vary significantly by output mix and across the cost distribution. Estimates show that product-specific scale economies systematically increase across the cost distribution for both skilled and intermediate care facilities, with diseconomies of scale in the lower deciles and no significant scale economies in the higher deciles. As for ray scale economies, estimates show economies of scale in the lower deciles and diseconomies of scale or no significant scale economies at higher deciles. The estimates also show that scope economies exist in the lower cost deciles and that no scope economies exist in the higher cost deciles. Additionally, the degree of scope economies monotonically decreases across the deciles.
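A quantile-regression fit of this kind can be sketched by minimizing the check (pinball) loss directly. This is a generic illustration with synthetic heteroscedastic cost data, not the paper's cost-function specification:

```python
import numpy as np
from scipy.optimize import minimize

def pinball(beta, X, y, q):
    """Check (pinball) loss whose minimizer is the q-th conditional quantile."""
    r = y - X @ beta
    return np.where(r >= 0, q * r, (q - 1) * r).sum()

rng = np.random.default_rng(1)
output = rng.uniform(1, 10, 400)                          # facility "output"
cost = 5 + 2 * output + output * rng.normal(0, 0.5, 400)  # heteroscedastic cost
X = np.column_stack([np.ones_like(output), output])

beta0 = np.linalg.lstsq(X, cost, rcond=None)[0]  # OLS starting values
slopes = {}
for q in (0.1, 0.5, 0.9):
    fit = minimize(pinball, beta0, args=(X, cost, q), method="Nelder-Mead")
    slopes[q] = fit.x[1]
# The marginal cost (slope) differs across the cost distribution,
# which is exactly the heterogeneity the paper exploits
print(slopes[0.1] < slopes[0.5] < slopes[0.9])
```

In practice a dedicated solver (e.g. the linear-programming formulation used by standard quantile-regression packages) is preferable, but the pinball-loss view makes clear why the estimated cost parameters can vary by decile.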
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
Non-parametric Estimation in Contaminated Linear Model
柴根象; 孙燕; 杨筱菡
2001-01-01
In this paper, the following contaminated linear model is considered: y_i = (1 - ε) x_i^τ β + z_i, 1 ≤ i ≤ n, where the r.v.'s {y_i} are contaminated with errors {z_i}. The errors are assumed to have finite moments of order 2 only. Non-parametric estimators of the contamination coefficient ε and the regression parameter β are established, and the strong consistency and almost-sure convergence rate of the estimators are obtained. A simulated example is also given to show the visual performance of the estimations.
Time-course window estimator for ordinary differential equations linear in the parameters
Vujacic, Ivan; Dattner, Itai; Gonzalez, Javier; Wit, Ernst
2015-01-01
In many applications, obtaining ordinary differential equation descriptions of dynamic processes is scientifically important. In both Bayesian and likelihood approaches for estimating parameters of ordinary differential equations, the speed and convergence of the estimation procedure may be crucial.
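For an ODE that is linear in its parameter, the integral-matching idea behind such window estimators can be sketched in a few lines: integrating x'(t) = θ·x(t) gives x(t) − x(0) = θ·∫₀ᵗ x(s) ds, so θ follows from linear least squares on the observed trajectory without repeatedly solving the ODE. This toy example is an illustration of the general idea, not the authors' estimator:

```python
import numpy as np

theta_true = 0.7
t = np.linspace(0, 2, 201)
x = np.exp(theta_true * t)                       # trajectory of x' = theta*x
x_obs = x + np.random.default_rng(5).normal(0, 0.01, t.size)  # noisy samples

# Cumulative trapezoid integral of the observed trajectory
I = np.concatenate([[0.0],
                    np.cumsum((x_obs[1:] + x_obs[:-1]) / 2 * np.diff(t))])
# Least squares of (x(t) - x(0)) on the integral recovers theta directly
theta_hat = np.sum(I * (x_obs - x_obs[0])) / np.sum(I * I)
print(theta_hat)                                 # close to 0.7
```

Because the estimator is a closed-form least-squares expression, it avoids the iterative numerical integration that makes likelihood and Bayesian ODE fitting slow, which is the speed advantage the abstract alludes to.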
Estimation of central shapes of error distributions in linear regression problems
Lai, P Y; Lee, Stephen M. S
2013-01-01
.... Both methods are motivated by the well-known Hill estimator, which has been extensively studied in the related problem of estimating tail indices, but substitute reciprocals of small L_p residuals...
Liu, Xiang; Saat, M Rapik; Qin, Xiao; Barkan, Christopher P L
2013-10-01
Derailments are the most common type of freight-train accidents in the United States. Derailments cause damage to infrastructure and rolling stock, disrupt services, and may cause casualties and harm the environment. Accordingly, derailment analysis and prevention has long been a high priority in the rail industry and government. Despite the low probability of a train derailment, the potential for severe consequences justifies the need to better understand the factors influencing train derailment severity. In this paper, a zero-truncated negative binomial (ZTNB) regression model is developed to estimate the conditional mean of train derailment severity. Recognizing that the mean is not the only statistic describing data distribution, a quantile regression (QR) model is also developed to estimate derailment severity at different quantiles. The two regression models together provide a better understanding of train derailment severity distribution. Results of this work can be used to estimate train derailment severity under various operational conditions and by different accident causes. This research is intended to provide insights regarding development of cost-efficient train safety policies. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sidi Ali Ould Abdi
2011-01-01
Given a stationary multidimensional spatial process ((X_i, Y_i), i ∈ ℤ^N), with X_i valued in ℝ^d and Y_i in ℝ, we investigate a kernel estimate of the spatial conditional quantile function of the response variable Y_i given the explicative variable X_i. Asymptotic normality of the kernel estimate is obtained when the sample considered is an α-mixing sequence.
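A minimal kernel conditional-quantile estimator of the kind studied can be written as a kernel-weighted empirical quantile; this toy version uses plain i.i.d. data and ignores the spatial-mixing structure that is the paper's actual contribution:

```python
import numpy as np

def kernel_cond_quantile(x0, X, Y, q, h):
    """Kernel estimate of the conditional q-quantile of Y given X = x0:
    the q-quantile of Y under Gaussian kernel weights K((X - x0)/h)."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    order = np.argsort(Y)
    cw = np.cumsum(w[order]) / w.sum()    # weighted empirical CDF of Y
    return Y[order][np.searchsorted(cw, q)]

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 2000)
Y = X + rng.normal(0, 0.1, 2000)          # conditional median of Y|X=x is x
est = kernel_cond_quantile(0.5, X, Y, q=0.5, h=0.05)
print(est)                                # close to 0.5
```

The asymptotic-normality result in the paper concerns exactly this kind of estimator, with the i.i.d. assumption replaced by α-mixing over a spatial index set.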
Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data
Lei Wang; Satoshi Uchida
2008-01-01
MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen to be the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with unsupervised classificat...
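The core computation — decomposing a mixed pixel into endmember fractions under non-negativity and sum-to-one constraints — can be sketched with non-negative least squares plus a heavily weighted sum-to-one row. The endmember spectra below are invented, not real MODIS values:

```python
import numpy as np
from scipy.optimize import nnls

# Endmember spectra (columns): hypothetical reflectance of rice and two
# background covers in 4 spectral bands
E = np.array([[0.10, 0.30, 0.05],
              [0.40, 0.25, 0.10],
              [0.55, 0.20, 0.30],
              [0.35, 0.15, 0.50]])

true_frac = np.array([0.6, 0.3, 0.1])
pixel = E @ true_frac                 # noise-free mixed-pixel spectrum

# Enforce sum-to-one by appending a heavily weighted row of ones
w = 100.0
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w * 1.0)
frac, _ = nnls(A, b)                  # non-negative least squares
print(np.round(frac, 2))              # recovers roughly [0.6, 0.3, 0.1]
```

With real imagery the pixel spectrum is noisy and the endmembers imperfect, so the recovered fractions are estimates; the paper's comparison against unsupervised classification assesses exactly how good those sub-pixel estimates are.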
吴启光; 杨国庆
2002-01-01
In this paper, we study the existence of uniformly minimum risk equivariant (UMRE) estimators of parameters in a class of normal linear models, which includes the normal variance components model, the growth curve model, the extended growth curve model, and the seemingly unrelated regression equations model, among others. Necessary and sufficient conditions are given for the existence of UMRE estimators of the estimable linear functions of regression coefficients, the covariance matrix V, and (tr V)^a, where a > 0 is known, in the models under an affine group of transformations for quadratic losses and matrix losses, respectively. Under the (extended) growth curve model and the seemingly unrelated regression equations model, the conclusions given in the literature for estimating regression coefficients can be derived by applying the general results in this paper, and the sufficient conditions for non-existence of UMRE estimators of V and tr(V) are expanded to necessary and sufficient conditions. In addition, necessary and sufficient conditions for the existence of UMRE estimators of parameters in the variance components model are obtained for the first time.
Brassey, CA; Maidment, SC; Barrett, PM
2015-01-01
© 2015 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited. Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outpu...
Quach, Minh; Brunel, Nicolas; d'Alché-Buc, Florence
2007-12-01
Statistical inference of biological networks such as gene regulatory networks, signaling pathways and metabolic networks can contribute to build a picture of complex interactions that take place in the cell. However, biological systems considered as dynamical, non-linear and generally partially observed processes may be difficult to estimate even if the structure of interactions is given. Using the same approach as Sitz et al. proposed in another context, we derive non-linear state-space models from ODEs describing biological networks. In this framework, we apply Unscented Kalman Filtering (UKF) to the estimation of both parameters and hidden variables of non-linear state-space models. We instantiate the method on a transcriptional regulatory model based on Hill kinetics and a signaling pathway model based on mass action kinetics. We successfully use synthetic data and experimental data to test our approach. This approach covers a large set of biological networks models and gives rise to simple and fast estimation algorithms. Moreover, the Bayesian tool used here directly provides uncertainty estimates on parameters and hidden states. Let us also emphasize that it can be coupled with structure inference methods used in Graphical Probabilistic Models. Matlab code available on demand.
Linear measurements of the leaf blade in xaraes and massai grasses for estimation of the leaf area
Wilton Ladeira da Silva
2013-09-01
Knowledge of the leaf area of forage grasses is essential, since it is one of the most important variables in the evaluation of plant growth. This study aimed to determine equations that allow the actual leaf area of Brachiaria brizantha cv. Xaraes and Panicum maximum cv. Massai to be estimated quickly and accurately from simple measurements of leaf length and of average and maximum width. The length along the main vein (L), the maximum width perpendicular to the main vein (Wmax), and the average width (Wave) of leaf blades were measured with millimeter rulers in both species. Actual leaf areas (ALA) were determined with a Li-Cor® LI 3000. Regression and correlation studies were performed between ALA and the leaf area estimated by linear or exponential equations in order to choose the best equations. For xaraes grass, the most accurate equation for estimating ALA was the linear 0.53 + 0.98·L·Wave; for massai grass, the best options were the linear 1.30 + 0.92·L·Wave and the exponentials 8.86e^(0.04·L·Wmax) and 10.30e^(0.03·L·Wave). Estimating the leaf area of xaraes and massai grasses from simple measurements of leaf length and width proved effective and accurate.
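The reported linear equations translate directly into code (units as in the abstract: lengths and widths in cm, area in cm²; the exponential forms are omitted because the printed exponents are ambiguous in the source):

```python
def xaraes_leaf_area(L, Wave):
    """Linear equation reported for Brachiaria brizantha cv. Xaraes."""
    return 0.53 + 0.98 * L * Wave

def massai_leaf_area(L, Wave):
    """Linear equation reported for Panicum maximum cv. Massai."""
    return 1.30 + 0.92 * L * Wave

# Example: a 30 cm long blade with 1.5 cm average width
print(xaraes_leaf_area(30, 1.5), massai_leaf_area(30, 1.5))
```

Both equations are simple products of length and average width with a small intercept, which is why ruler measurements alone suffice in the field.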
Determinants of the Slovak Enterprises' Profitability: Quantile Regression Approach
Štefan Kováč
2013-09-01
The goal of this paper is to analyze the profitability of Slovak enterprises by means of quantile regression. The analysis is based on individual data from the 2001, 2006 and 2011 financial statements of Slovak companies. Profitability is proxied by the ratio of profit/loss to total assets, and twelve covariates are used in the study, including two nominal variables: region and sector. According to the findings, size, short- and long-term indebtedness, the ratio of long-term assets to total assets, the ratio of sales revenue to cost of sales, region and sector are possible determinants of the profitability of companies in Slovakia. The results further suggest that changes over time have influenced the magnitude of the effects of the given variables.
Tsai, Cheng-Ying; Li, Rui; Tennant, Chris
2015-01-01
As is known, microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy recovery linac machines. To more accurately quantify MBI in a single-pass system and for more complete analyses, we further extend and continue to increase the capabilities of our previously developed linear Vlasov solver [1] to incorporate more relevant impedance models into the code, including transient and steady-state free-space and/or shielding coherent synchrotron radiation (CSR) impedances, the longitudinal space charge (LSC) impedances, and the linac geometric impedances with extension of the existing formulation to include beam acceleration [2]. Then, we directly solve the linearized Vlasov equation numerically for microbunching gain amplification factor. In this study we apply this code to a beamline lattice of transport arc [3] following an upstream linac...
A Bayesian Estimator for Linear Calibration Error Effects in Thermal Remote Sensing
Morgan, J A
2005-01-01
The Bayesian Land Surface Temperature estimator previously developed has been extended to include the effects of imperfectly known gain and offset calibration errors. It is possible to treat both gain and offset as nuisance parameters and, by integrating over an uninformative range for their magnitudes, eliminate the dependence of surface temperature and emissivity estimates upon the exact calibration error.
Selection of the Linear Regression Model According to the Parameter Estimation
无
2000-01-01
In this paper, based on the theory of parameter estimation, we give a selection method which, judged in terms of the quality of the resulting parameter estimates, we consider very reasonable. Moreover, we offer a calculation method for the selection statistic and an applied example.
Muhammed Çetin
2015-01-01
An approximation method based on Lucas polynomials is presented for the solution of systems of high-order linear differential equations with variable coefficients under mixed conditions. This method transforms the system of ordinary differential equations (ODEs) into a system of linear algebraic equations by expanding the approximate solutions in terms of Lucas polynomials with unknown coefficients and by using matrix operations and collocation points. In addition, an error analysis based on the residual function is developed for the present method. To demonstrate the efficiency and accuracy of the method, numerical examples are given with the help of computer programmes written in Maple and Matlab.
Belkhatir, Zehor
2017-05-31
This paper proposes a two-stage estimation algorithm to solve the problem of joint estimation of the parameters and the fractional differentiation orders of a linear continuous-time fractional system with non-commensurate orders. The proposed algorithm combines the modulating functions and the first-order Newton methods. Sufficient conditions ensuring the convergence of the method are provided. An error analysis in the discrete case is performed. Moreover, the method is extended to the joint estimation of smooth unknown input and fractional differentiation orders. The performance of the proposed approach is illustrated with different numerical examples. Furthermore, a potential application of the algorithm is proposed which consists in the estimation of the differentiation orders of a fractional neurovascular model along with the neural activity considered as input for this model.
Belkhatir, Zehor
2015-11-05
This paper deals with the joint estimation of the unknown input and the fractional differentiation orders of a linear fractional order system. A two-stage algorithm combining the modulating functions with a first-order Newton method is applied to solve this estimation problem. First, the modulating functions approach is used to estimate the unknown input for a given fractional differentiation orders. Then, the method is combined with a first-order Newton technique to identify the fractional orders jointly with the input. To show the efficiency of the proposed method, numerical examples illustrating the estimation of the neural activity, considered as input of a fractional model of the neurovascular coupling, along with the fractional differentiation orders are presented in both noise-free and noisy cases.
Abdallah, Saeed; Psaromiligkos, Ioannis N.
2012-03-01
We analyze the mean-squared error (MSE) performance of widely linear (WL) and conventional subspace-based channel estimation for single-input multiple-output (SIMO) flat-fading channels employing binary phase-shift-keying (BPSK) modulation when the covariance matrix is estimated using a finite number of samples. The conventional estimator suffers from a phase ambiguity that reduces to a sign ambiguity for the WL estimator. We derive closed-form expressions for the MSE of the two estimators under four different ambiguity resolution scenarios. The first scenario is optimal resolution, which minimizes the Euclidean distance between the channel estimate and the actual channel. The second scenario assumes that a randomly chosen coefficient of the actual channel is known and the third assumes that the one with the largest magnitude is known. The fourth scenario is the more realistic case where pilot symbols are used to resolve the ambiguities. Our work demonstrates that there is a strong relationship between the accuracy of ambiguity resolution and the relative performance of WL and conventional subspace-based estimators, and shows that the less information available about the actual channel for ambiguity resolution, or the lower the accuracy of this information, the higher the performance gap in favor of the WL estimator.
Nonparametric methods for drought severity estimation at ungauged sites
Sadri, S.; Burn, D. H.
2012-12-01
The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
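As an aside for readers unfamiliar with the jackknife validation used above, the scheme is a leave-one-out loop around a model fit, with the held-out site providing an out-of-sample error. The one-predictor least-squares model and the data below are invented stand-ins for the catchment severity quantiles and the three compared methods; this is an illustration, not the paper's code.

```python
# Illustrative leave-one-out ("jackknife") evaluation of a simple
# 1-D least-squares predictor, mimicking at-site validation.
# Data and the linear model are hypothetical stand-ins.

def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def jackknife_errors(xs, ys):
    """Refit with each site left out; return the held-out errors."""
    errs = []
    for i in range(len(xs)):
        xr = xs[:i] + xs[i + 1:]
        yr = ys[:i] + ys[i + 1:]
        a, b = fit_ols(xr, yr)
        errs.append(ys[i] - (a + b * xs[i]))
    return errs

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]   # roughly y = 2x with noise
errs = jackknife_errors(xs, ys)
rmse = (sum(e * e for e in errs) / len(errs)) ** 0.5
```

The root-mean-square of the held-out errors is then the score by which the competing methods (linear regression, RBF networks, LS-SVR) would be compared.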
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
Lo, Ching F.
1999-01-01
The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating the precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
Pal, Debdatta; Mitra, Subrata Kumar
2016-10-01
This study used a quantile autoregressive distributed lag (QARDL) model to capture the asymmetric impact of rainfall on food production in India. It was found that the coefficient corresponding to rainfall in the QARDL increased up to the 75th quantile and started decreasing thereafter, though it remained in positive territory. Another interesting finding is that at the 90th quantile and above, the coefficients of rainfall, though positive, were not statistically significant; therefore, the benefit of high rainfall on crop production is not conclusive. However, the impact of other determinants, such as fertilizer and pesticide consumption, is quite uniform over the whole range of the distribution of food grain production.
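The quantile-specific coefficients discussed above arise from minimizing the asymmetric "check" (pinball) loss rather than squared error. A minimal pure-Python sketch with made-up data shows the defining property: minimizing the check loss over a constant recovers the empirical tau-quantile.

```python
# Minimal sketch of the "check" (pinball) loss underlying quantile
# regression models such as the QARDL above. The data are invented.

def pinball(u, tau):
    """Check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau if u >= 0 else tau - 1.0)

def best_constant(ys, tau, grid):
    """Brute-force the constant c minimizing the summed pinball loss."""
    return min(grid, key=lambda c: sum(pinball(y - c, tau) for y in ys))

ys = [1.0, 2.0, 3.0, 4.0, 100.0]          # heavy upper tail
grid = [x / 10 for x in range(0, 1001)]   # candidate constants 0..100
median = best_constant(ys, 0.5, grid)     # 50th percentile: 3.0
upper = best_constant(ys, 0.9, grid)      # 90th percentile: pulled to the tail
```

Because the loss weights over- and under-prediction asymmetrically, the fitted "coefficient" (here just a constant) changes with tau, which is exactly why the rainfall effect above can differ across quantiles.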
Asymptotically exact Discontinuous Galerkin error estimates for linear symmetric hyperbolic systems
Adjerid, S.; Weinhart, T.
2014-01-01
We present an a posteriori error analysis for the discontinuous Galerkin discretization error of first-order linear symmetric hyperbolic systems of partial differential equations with smooth solutions. We perform a local error analysis by writing the local error as a series and showing that its lead
ASYMPTOTIC ESTIMATION FOR SOLUTION OF A CLASS OF SEMI-LINEAR ROBIN PROBLEMS
Cheng Ouyang
2005-01-01
A class of semi-linear Robin problem is considered. Under appropriate assumptions, the existence and asymptotic behavior of its solution are studied more carefully. Using stretched variables, the formal asymptotic expansion of solution for the problem is constructed and the uniform validity of the solution is obtained by using the method of upper and lower solution.
Estimation of saturation and coherence effects in the KGBJS equation - a non-linear CCFM equation
Deak, Michal
2012-01-01
We solve the KGBJS equation, the modified non-linear extension of the CCFM equation, numerically for certain initial conditions and compare the resulting gluon Green functions with those obtained from solving the original CCFM equation and the BFKL and BK equations for the same initial conditions. We improve the low transverse momentum behaviour of the KGBJS equation by a small modification.
N.G. HOSSEIN-ZADEH
2008-01-01
Data on stillbirth from the Animal Breeding Center of Iran collected from January 1990 to December 2007 and comprising 668810 Holstein calving events from 2506 herds were analyzed. Linear and threshold animal and sire models were used to estimate genetic parameters and genetic trends for stillbirth in the first, second, and third parities. Mean incidence of stillbirth decreased from first to third parities: 23.7%, 22.1%, and 21.8%, respectively. Phenotypic rates of stillbirth decreased from 1...
Yi, Nengjun; Liu, Nianjun; Zhi, Degui; Li, Jun
2011-01-01
Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:22144906
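The "genetic score" idea above, a weighted linear combination of variant predictors per group, scaled by a group effect inside a logistic link, can be sketched in a few lines. The weights, genotypes and effects below are invented for illustration; this is not the BhGLM implementation.

```python
# Illustrative per-group "genetic scores" feeding a logistic link,
# in the spirit of the hierarchical GLM described above.
# All numbers are hypothetical.

import math

def genetic_score(genotypes, weights):
    """Linear combination of variant codings for one group."""
    return sum(g * w for g, w in zip(genotypes, weights))

def risk(intercept, group_effects, group_scores):
    """Logistic model: P(disease) from group effects times group scores."""
    eta = intercept + sum(b * s for b, s in zip(group_effects, group_scores))
    return 1.0 / (1.0 + math.exp(-eta))

# Two hypothetical groups of variants: common (first) and rare (second).
scores = [genetic_score([1, 0, 2], [0.2, -0.1, 0.4]),
          genetic_score([0, 1], [1.5, 0.8])]
p = risk(-2.0, [0.5, 1.0], scores)
```

Jointly estimating the group effects and the score weights (rather than fixing them as here) is what lets the hierarchical model handle mixed risk and protective variants within a group.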
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
… and may lead to unreliable results. In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high resolution vertebra and cartilage models are reconstructed from incomplete and lower dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization.
Jinhong YOU; CHEN Min; Gemai CHEN
2004-01-01
Consider a semiparametric regression model with linear time series errors Y_k = x_k′β + g(t_k) + ε_k, 1 ≤ k ≤ n, where the Y_k are responses, x_k = (x_k1, x_k2, …, x_kp)′ and t_k ∈ T ⊂ R are fixed design points, β = (β_1, β_2, …, β_p)′ is an unknown parameter vector, g(·) is an unknown bounded real-valued function defined on a compact subset T of the real line R, and ε_k is a linear process given by ε_k = Σ_{j=0}^∞ ψ_j e_{k−j}, ψ_0 = 1, where Σ_{j=0}^∞ |ψ_j| < ∞ and the e_j, j = 0, ±1, ±2, …, are i.i.d. random variables. In this paper we establish the asymptotic normality of the least squares estimator of β, a smooth estimator of g(·), and estimators of the autocovariance and autocorrelation functions of the linear process ε_k.
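The linear-process error term in this model can be illustrated numerically. The snippet below builds ε_k = Σ_j ψ_j e_{k−j} with absolutely summable weights ψ_j = 0.5^j (so Σ|ψ_j| = 2 < ∞); the innovations here are a fixed toy impulse rather than i.i.d. draws, purely to show the mechanics.

```python
# Minimal sketch of the linear-process errors eps_k = sum_j psi_j e_{k-j}
# used in the model above, truncated at the start of the sample.
# The innovation sequence is a toy example, not i.i.d. noise.

def linear_process(innovations, psi):
    """eps_k = sum_{j=0}^{k} psi[j] * e_{k-j}."""
    eps = []
    for k in range(len(innovations)):
        eps.append(sum(psi[j] * innovations[k - j]
                       for j in range(min(k + 1, len(psi)))))
    return eps

psi = [0.5 ** j for j in range(20)]   # psi_0 = 1, geometric decay
e = [1.0, 0.0, 0.0, 0.0]              # a single unit innovation at k = 0
eps = linear_process(e, psi)          # impulse response: 1, 0.5, 0.25, 0.125
```

A single innovation propagates forward with geometrically decaying weight, which is the serial dependence the paper's asymptotics must accommodate.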
On the Estimation of Muscle Fiber Conduction Velocity Using a Co-Linear Electrodes Array
2007-11-02
… an important parameter of the myoelectric signal which describes muscle fatigue manifestation during voluntary or elicited contractions. It may … The optimal method to estimate the time delay (TD) is the Generalized Cross-Correlation (GCC) method, but this requires a priori knowledge about the signal and noise.
Villain, Jonathan; Minguez, Laetitia; Halm-Lemeille, Marie-Pierre; Durrieu, Gilles; Bureau, Ronan
2016-02-01
The acute toxicities of 36 pharmaceuticals towards green algae were estimated from a set of quantile regression models representing the first global quantitative structure-activity relationships. The selection of these pharmaceuticals was based on their predicted environmental concentrations. An agreement between the estimated values and the observed acute toxicity values was found for several families of pharmaceuticals, in particular, for antidepressants. A recent classification (BDDCS) of drugs based on ADME properties (Absorption, Distribution, Metabolism and Excretion) was clearly correlated with the acute ecotoxicities towards algae. Over-estimation of toxicity from our QSAR models was observed for classes 2, 3 and 4 whereas our model results were in agreement for the class 1 pharmaceuticals. Clarithromycin, a class 3 antibiotic characterized by weak metabolism and high solubility, was the most toxic to algae (molecular stability and presence in surface water).
Blackman, Karin; Perret, Laurent
2016-09-01
In the present work, a boundary layer developing over a rough wall consisting of staggered cubes with a plan area packing density λp = 25% is studied within a wind tunnel using combined particle image velocimetry and hot-wire anemometry to investigate the non-linear interactions between large-scale momentum regions and small-scale structures induced by the presence of the roughness. Due to the highly turbulent nature of the roughness sub-layer and measurement equipment limitations, temporally resolved flow measurements are not feasible, making the conventional filtering methods used for triple decomposition unsuitable for the present work. Thus, multi-time-delay linear stochastic estimation is used to decompose the flow into large scales and small scales. Analysis of the scale-decomposed skewness of the turbulent velocity (u′) shows a significant contribution of the averaged non-linear term u_L′u_S′², which represents the influence of the large scales (u_L′) onto the small scales (u_S′). It is shown that this non-linear influence of the large-scale momentum regions occurs with all three components of velocity in a similar manner. Finally, through two-point spatio-temporal correlation analysis, it is shown quantitatively that large-scale momentum regions influence small-scale structures throughout the boundary layer through a non-linear top-down mechanism.
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure
2016-01-01
Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, having found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of the survival function parameters, while being robust to the data aspects that usually render regression methods numerically unstable. PMID:27936048
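The model I vs. model II distinction drawn above can be illustrated with the most common model II line-fit, reduced major axis (geometric-mean) regression, which treats errors in both variables symmetrically. This generic sketch is not the authors' iterative two-step estimator, and the data are invented.

```python
# Ordinary (model I) least squares vs. a common model II line-fit
# (reduced major axis). Data are hypothetical, with error in x and y.

def ols_slope(xs, ys):
    """Model I slope: minimizes vertical residuals only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def rma_slope(xs, ys):
    """Reduced-major-axis slope: sign(cov) * sd_y / sd_x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sign = 1.0 if sxy >= 0 else -1.0
    return sign * (syy / sxx) ** 0.5

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.2, 1.9, 3.2, 3.7]   # both variables measured with error
b1 = ols_slope(xs, ys)       # attenuated by noise in x
b2 = rma_slope(xs, ys)       # symmetric in x and y, always >= |b1|
```

The systematic attenuation of the model I slope when x carries error is the bias the authors report; their oblique least squares addresses the same issue with an iterative scheme.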
Model reduction and parameter estimation of non-linear dynamical biochemical reaction networks.
Sun, Xiaodian; Medvedovic, Mario
2016-02-01
Parameter estimation for high-dimensional complex dynamic systems is a hot topic. However, current statistical models and inference approaches face the well-known "large p, small n" problem, so reducing the dimension of the dynamic model and improving the accuracy of estimation is important. To address this question, the authors take some known parameters and the structure of the system as prior knowledge and incorporate them into the dynamic model. At the same time, they decompose the whole dynamic model into subset network modules and, based on the different modules, apply different estimation approaches. This technique is called the Rao-Blackwellised particle filter decomposition method. To evaluate the performance of this method, the authors apply it to synthetic data generated from the repressilator model and to experimental data of the JAK-STAT pathway; the method can be easily extended to large-scale cases.
The Entire Quantile Path of a Risk-Agnostic SVM Classifier
Yu, Jin; Zhang, Jian
2012-01-01
A quantile binary classifier uses the rule: classify x as +1 if P(Y = 1|X = x) >= t, and as -1 otherwise, for a fixed quantile parameter t ∈ [0, 1]. It has been shown that Support Vector Machines (SVMs) in the limit are quantile classifiers with t = 1/2. In this paper, we show that by using asymmetric costs of misclassification, SVMs can be appropriately extended to recover, in the limit, the quantile binary classifier for any t. We then present a principled algorithm to solve the extended SVM classifier for all values of t simultaneously. This has two implications: First, one can recover the entire conditional distribution P(Y = 1|X = x) = t for t ∈ [0, 1]. Second, we can build a risk-agnostic SVM classifier where the cost of misclassification need not be known a priori. Preliminary numerical experiments show the effectiveness of the proposed algorithm.
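The quantile classification rule stated above can be sketched directly: sweeping the threshold t recovers the conditional probability as the largest t at which x is still labelled +1. The probability value below is a made-up stand-in for an SVM's limiting output, used only to show the mechanics.

```python
# Minimal sketch of the quantile classification rule:
# label +1 when P(Y=1|X=x) >= t, else -1. The probability is invented.

def quantile_classify(p, t):
    """Rule: +1 if P(Y=1|X=x) >= t, else -1."""
    return 1 if p >= t else -1

def recover_probability(p, grid):
    """Largest threshold t on the grid still classified +1."""
    return max((t for t in grid if quantile_classify(p, t) == 1), default=0.0)

grid = [k / 100 for k in range(101)]   # t in {0, 0.01, ..., 1}
p_hat = recover_probability(0.73, grid)  # sweeps t to recover p = 0.73
label = quantile_classify(0.73, 0.5)     # the usual t = 1/2 SVM rule: +1
```

Solving the extended SVM for all t at once, as the paper proposes, is what makes this sweep practical without retraining per threshold.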
Bellili, Faouzi; Meftehi, Rabii; Affes, Sofiene; Stephenne, Alex
2015-01-01
In this paper, we tackle for the first time the problem of maximum likelihood (ML) estimation of the signal-to-noise ratio (SNR) parameter over time-varying single-input multiple-output (SIMO) channels. Both the data-aided (DA) and the non-data-aided (NDA) schemes are investigated. Unlike classical techniques where the channel is assumed to be slowly time-varying and, therefore, considered as constant over the entire observation period, we address the more challenging problem of instantaneous (i.e., short-term or local) SNR estimation over fast time-varying channels. The channel variations are tracked locally using a polynomial-in-time expansion. First, we derive in closed form the DA ML estimator and its bias. The latter is subsequently subtracted in order to obtain a new unbiased DA estimator whose variance and the corresponding Cramér-Rao lower bound (CRLB) are also derived in closed form. Due to the extreme nonlinearity of the log-likelihood function (LLF) in the NDA case, we resort to the expectation-maximization (EM) technique to iteratively obtain the exact NDA ML SNR estimates within very few iterations. Most remarkably, the new EM-based NDA estimator is applicable to any linearly-modulated signal and provides sufficiently accurate soft estimates (i.e., soft detection) for each of the unknown transmitted symbols. Therefore, hard detection can be easily embedded in the iteration loop in order to improve its performance at low to moderate SNR levels. We show by extensive computer simulations that the new estimators are able to accurately estimate the instantaneous per-antenna SNRs as they coincide with the DA CRLB over a wide range of practical SNRs.
Estimating developmental states of tumors and normal tissues using a linear time-ordered model
Xuan Zhenyu
2011-02-01
Background: Tumor cells are considered to have an aberrant cell state, and some evidence indicates different developmental states appearing in tumorigenesis. Embryonic development and stem cell differentiation are ordered processes in which the sequence of events over time is highly conserved. The "cancer attractor" concept integrates normal developmental processes and tumorigenesis into a high-dimensional "cell state space", and provides a reasonable explanation of the relationship between these two biological processes from a theoretical viewpoint. However, it is hard to describe such a relationship using existing experimental data; moreover, the measurement of different developmental states is also difficult. Results: Here, by applying a novel time-ordered linear model based on a co-bisector, which represents the joint direction of a series of vectors, we described the trajectories of the development process by a line and showed different developmental states of tumor cells from a developmental-timescale perspective in a cell state space. This model was used to transform time-course developmental expression profiles of human ESCs and of normal mouse liver, ovary and lung tissue into "cell developmental state lines". These cell state lines were then applied to observe the developmental states of different tumors and their corresponding normal samples. Mouse liver and ovarian tumors showed different similarity to early development stages. Similarly, human glioma cells and ovarian tumors became developmentally "younger". Conclusions: The time-ordered linear model captured linear projected development trajectories in a cell state space. It also reflected the tendency of gene expression to change over time from the developmental-timescale perspective, and our findings indicated different developmental states during tumorigenesis in different tissues.
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer Moesgaard;
2016-01-01
… problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter …
O'Hara, Mackie
2017-05-01
Recently, studies have interpreted regular spacing and average number of perikymata between dental enamel defects in orangutans to reflect seasonal episodes of physiological stress. To estimate the amount of time between developmental defects (enamel hypoplasia), studies have relied on perikymata counts. Unfortunately, perikymata are frequently not continuously visible between defects, significantly reducing data sets. A method is presented here for estimating the number of perikymata between defects using standard perikymata profiles (SPP) that allow the number of perikymata between all pairs of defects across a tooth to be analyzed. The SPP method should allow the entire complement of defects to be analyzed within the context of an individual's crown formation time. The average number of perikymata was established per decile and charted to create male and female Pongo pygmaeus SPPs. The position of the beginning of each defect was recorded for lower canines from males (n = 6) and females (n = 17). The number of perikymata between defects estimated by the SPP was compared to the actual count (where perikymata were continuously visible). The number of perikymata between defects estimated by the SPPs was accurate within three perikymata and highly correlated with the actual counts, significantly increasing the number of analyzable defect pairs. SPPs allow all defect pairs to be included in studies of defect timing, not just those with continuously visible perikymata. Establishing an individual's entire complement of dental defects makes it possible to calculate the regularity (and potential seasonality) of defects. © 2017 Wiley Periodicals, Inc.
W. Holmes Finch
2016-05-01
Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size, i.e. high-dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates exhibit very high variance and can therefore not be trusted, or because the statistical algorithm cannot converge on parameter estimates at all. There exists an alternative set of model estimation procedures, known collectively as regularization methods, which can be used in such circumstances, and which have been shown through simulation research to yield accurate parameter estimates. The purpose of this paper is to describe, for those unfamiliar with them, the most popular of these regularization methods, the lasso, and to demonstrate its use on an actual high-dimensional dataset involving adults with autism, using the R software language. Results of analyses relating measures of executive functioning with a full scale intelligence test score are presented, and implications of using these models are discussed.
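For readers unfamiliar with the lasso mentioned above, its characteristic behaviour is the soft-thresholding of coefficients: large effects are shrunk and small ones are set exactly to zero, which is what keeps the estimator usable when predictors outnumber observations. This minimal sketch uses invented coefficients and assumes the orthonormal-design case, where soft-thresholding the OLS estimates gives the exact lasso solution.

```python
# Minimal sketch of the lasso's soft-thresholding rule, illustrated on
# hypothetical OLS coefficients under an orthonormal design.

def soft_threshold(b, lam):
    """sign(b) * max(|b| - lam, 0): the lasso shrinkage operator."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

ols = [2.5, -0.3, 0.8, 0.05]            # hypothetical OLS coefficients
lasso = [soft_threshold(b, 0.5) for b in ols]
# small coefficients are zeroed, large ones shrunk by lam
```

In the general (non-orthonormal) case this operator is applied coordinate-wise inside an iterative coordinate-descent loop, which is how packages such as the R implementations referenced above fit the model.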
PkANN I: Non-Linear Matter Power Spectrum Estimation through Artificial Neural Networks
Agarwal, Shankar; Feldman, Hume A; Lahav, Ofer; Thomas, Shaun A
2012-01-01
We investigate a new approach to confront small-scale non-linearities in the power spectrum of matter fluctuations. This ever-present and pernicious uncertainty is often the Achilles' heel in cosmological studies and must be reduced if we are to see the advent of precision cosmology in the late-time Universe. We show that an optimally trained Artificial Neural Network (ANN), when presented with a set of cosmological parameters ($\Omega_{\rm m} h^2$, $\Omega_{\rm b} h^2$, $n_s$, $w_0$, $\sigma_8$, $\sum m_\nu$), …
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, and some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the law of the iterated logarithm (LIL) for i.i.d. partial sums and thus cannot be improved.
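The quasi-likelihood equation whose large-sample solvability this result concerns can be illustrated in the simplest one-parameter logistic case, where the score equation Σ_i x_i (y_i − μ(βx_i)) = 0 is solved by a Newton iteration. The data below are a small synthetic example, not from the paper.

```python
# Illustrative Newton solve of a one-parameter quasi-likelihood (score)
# equation with a logistic mean function. Synthetic data.

import math

def mu(eta):
    """Logistic mean function."""
    return 1.0 / (1.0 + math.exp(-eta))

def score(beta, xs, ys):
    """Quasi-likelihood score: sum_i x_i * (y_i - mu(beta * x_i))."""
    return sum(x * (y - mu(beta * x)) for x, y in zip(xs, ys))

def score_deriv(beta, xs):
    """Derivative of the score in beta (always negative here)."""
    return -sum(x * x * mu(beta * x) * (1.0 - mu(beta * x)) for x in xs)

def solve(xs, ys, beta=0.0, iters=50):
    """Newton iteration for the root of the score equation."""
    for _ in range(iters):
        beta -= score(beta, xs, ys) / score_deriv(beta, xs)
    return beta

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 1, 0, 1]            # not separable, so a finite root exists
beta_hat = solve(xs, ys)     # score(beta_hat) is numerically zero
```

The theorem above is, informally, the guarantee that for large n this root exists with probability one and converges to the true parameter at the stated rate.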
Strong consistency of maximum quasi-likelihood estimates in generalized linear models
YIN Changming; ZHAO Lincheng
2005-01-01
In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors Z_i and a general link function, under the most general assumption on the minimum eigenvalue of Σ_{i=1}^n Z_i Z_i′, a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution β_n for all large sample sizes n, which converges to the true regression parameter β_0. This result is an essential improvement over the relevant results in the literature.
Carolino, Nuno; Gama, Luis T
2008-01-01
Records from up to 19 054 registered cows and 10 297 calves in 155 herds of the Alentejana cattle breed were used to study the effects of individual (Fi) and maternal (Fm) inbreeding on reproductive, growth and carcass traits, as well as assessing the importance of non-linear associations between inbreeding and performance, and evaluating the differences among sire-families in the effect of Fi and Fm on calf weight at 7 months of age (W7M). Overall, regression coefficients of performance traits on inbreeding were small, indicating a minor but still detrimental effect of both Fi and Fm on most traits. The traits with the highest percentage impact of Fi were total number of calvings through life and calf weight at 3 months of age (W3M), followed by longevity and number of calves produced up to 7 years, while the highest effect of Fm was on W3M. Inbreeding depression on feed efficiency and carcass traits was extremely small and not significant. No evidence was found of a non-linear association between inbreeding and performance for the traits analyzed. Large differences were detected among sire-families in inbreeding depression on W7M, for both Fi and Fm, encouraging the possibility of incorporating sire effects on inbreeding depression into selection decisions.
Bahita Mohamed
2011-01-01
In this work, we introduce an adaptive neural network controller for a class of nonlinear systems. The approach uses two radial basis function (RBF) networks. The first RBF network approximates the ideal control law, which cannot be implemented directly since the dynamics of the system are unknown. The second RBF network estimates on-line the control gain, which is a nonlinear and unknown function of the states. The update laws for the combined estimator and controller are derived through Lyapunov analysis. Asymptotic stability is established, with the tracking errors converging to a neighborhood of the origin. Finally, the proposed method is applied to control and stabilize the inverted pendulum system.
A New Entropy Formula and Gradient Estimates for the Linear Heat Equation on Static Manifold
Abimbola Abolarinwa
2014-08-01
In this paper we prove a new monotonicity formula for the heat equation via a generalized family of entropy functionals. This family of entropy formulas generalizes both Perelman's entropy for an evolving metric and Ni's entropy on a static manifold. We show that this entropy satisfies a pointwise differential inequality for the heat kernel, from which various gradient and Harnack estimates follow for all positive solutions to the heat equation on a compact manifold.
Parametric study of the cost estimate for radio frequency system of compact linear collider
Nummela, Antti; Österberg, Kenneth
In this thesis the cost of the so-called RF units of the CLIC particle collider was examined under several alternative scenarios in which the RF units' configuration is lengthened. According to current estimates these structures correspond to approximately 20% of the total cost of the CLIC collider, so savings achieved in their cost could be significant for the total cost of the CLIC project. The unit cost of longer RF units would be greater than in the baseline scenario, but since a smaller quantity would be required, cost savings might be achieved. The aim was to find out whether cost savings would accumulate and, if so, how significant they might be. The research material was mainly internal CERN resources, such as earlier cost estimates and tenders received from industry for the production of different components. Based on these, cost estimate models were created for three different configurations for lengthening the RF units. The research was limited to the cost of RF unit...
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-10-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data of the above-mentioned study is analyzed to illustrate the proposed method. © The Author 2016. Published by Oxford University Press.
Ristya Widi Endah Yani
2008-12-01
Background: The bootstrap is a computer-simulation-based method that provides estimation accuracy when estimating inferential statistical parameters. Purpose: This article describes a study using secondary data (n = 30) aimed at elucidating the bootstrap method as an estimator for the linear regression test, based on the computer programs MINITAB 13, SPSS 13, and MacroMINITAB. Methods: The bootstrap regression method determines β̂ and Ŷ by OLS (ordinary least squares); computes the residuals ε_i = Y_i − Ŷ_i; chooses the number of bootstrap repetitions B; draws a sample of size n with replacement from the ε_i to obtain ε*_i; forms Y*_i = Ŷ_i + ε*_i; and computes β̂ from the bootstrap sample at each iteration. If the number of repetitions is still less than B, the procedure returns to drawing a sample of size n with replacement from the ε_i; otherwise, the bootstrap estimate of β̂ is the average of the β̂ values over the B bootstrap samples. Result: The results were similar to those of the linear regression equation obtained by OLS (α = 5%). The resulting regression equation for caries was caries = 1.90 + 2.02 × (OHI-S), indicating that every one-unit increase in OHI-S results in a caries increase of 2.02 units. Conclusion: The study was conducted with B = 10,500 and 10 iterations.
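The residual-bootstrap procedure described in the Methods section can be sketched as follows. This is a minimal illustration with synthetic data, not the MINITAB/SPSS macros used in the article; all variable names and numbers are my own:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data standing in for the OHI-S / caries sample (n = 30).
n, B = 30, 1000
x = rng.uniform(0, 3, size=n)
y = 1.9 + 2.0 * x + rng.normal(0, 0.5, size=n)

# Step 1: OLS fit gives beta-hat and fitted values Y-hat.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta_ols
resid = y - y_hat                      # eps_i = Y_i - Yhat_i

# Steps 2-4: resample residuals with replacement B times,
# rebuild Y* = Yhat + eps*, and refit on each bootstrap sample.
betas = np.empty((B, 2))
for b in range(B):
    eps_star = rng.choice(resid, size=n, replace=True)
    y_star = y_hat + eps_star
    betas[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

# Step 5: the bootstrap estimate is the average over the B refits.
beta_boot = betas.mean(axis=0)
print(beta_ols, beta_boot)   # the two estimates should be close
```

As the abstract reports, the bootstrap estimate closely matches the plain OLS fit; the value of the bootstrap lies in the spread of the B refits, which gives an empirical sampling distribution for β̂.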
Fauziana, F.; Danoedoro, P.; Heru Murti, S.
2016-11-01
Remote sensing has been widely utilized for agricultural yield estimation. Tea yield is affected by biological characteristics, including crown density. A challenge in estimating tea yield from multispectral remote sensing data is the presence of objects other than tea: this mixed-pixel problem disturbs the spectral recognition of tea trees, so a sub-pixel approach is necessary. The aims of this research are (1) to determine the fractions of tea and non-tea; (2) to estimate the crown density percentage based on the tea Normalized Difference Vegetation Index (NDVI); and (3) to estimate tea yield based on crown density. SPOT-7 imagery was used for this application. Linear Spectral Mixture Analysis (LSMA) was applied to determine the fraction percentages in each pixel, and the NDVI value was read for each pure endmember. The NDVI of tea trees is sensitive to crown density, and the tea NDVI computation was applied to mixed pixels. Linear regression analysis was applied to estimate crown density and tea yield. SPOT-7 could recognize tea, shade trees, impervious surfaces and soil in each pixel with an accuracy of 99.84%, although it produced overestimates in certain tea estates because of the presence of impervious surfaces. Regression analysis of crown density against NDVI showed a coefficient of determination of 52%; this model yields crown density percentages of 4-100%, where crown densities of 4-55% were located beside tea trees or in pruned-tea blocks. Regression analysis of the relation between crown density and tea yield showed a coefficient of determination of 45%; this model produced yields of 161.34-1296.8 kg/ha. The two models gave root mean square errors (RMSE) of 14.27% and 551.52 kg/ha, respectively.
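The two regression steps above (band reflectances to NDVI, NDVI to crown density) can be sketched in a few lines. All reflectance and density values below are synthetic placeholders, not the SPOT-7 measurements from the study:

```python
import numpy as np

# Illustrative red and near-infrared reflectances for five "pure tea" pixels.
red = np.array([0.08, 0.07, 0.06, 0.05, 0.04])
nir = np.array([0.40, 0.45, 0.50, 0.55, 0.60])

# Normalized Difference Vegetation Index.
ndvi = (nir - red) / (nir + red)

# Hypothetical field observations of crown density (%) for those pixels.
crown_density = np.array([55.0, 65.0, 75.0, 85.0, 92.0])

# Fit crown density ~ a * NDVI + b by least squares, mirroring the
# paper's regression step (their reported R^2 was 52%).
a, b = np.polyfit(ndvi, crown_density, deg=1)
predicted = a * ndvi + b
print(a, b)
```

The same pattern, applied a second time with yield observations in place of crown density, gives the crown-density-to-yield model.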
Estimating the Eutectic Composition of Simple Binary Alloy System Using Linear Geometry
Muhammed Olawale Hakeem AMUDA
2008-06-01
A simple linear equation was developed and applied to a hypothetical binary equilibrium diagram to evaluate the eutectic composition of a binary alloy system. Solution of the equations revealed that the eutectic compositions of the case-study Pb-Sn, Bi-Cd and Al-Si alloys are 39.89% Pb / 60.11% Sn, 58.01% Bi / 41.99% Cd, and 90.94% Al / 9.06% Si, respectively. These values are very close to experimental values. The percent deviation of analytical from experimental values ranged between 2.87% and 5% for the three binary systems considered, except for the Al-Si alloy, in which the percent deviation for the silicon element was 22%. It is concluded that the equation of a straight line can be used to predict the eutectic composition of simple binary alloys within a tolerable experimental deviation range of about 2.5%.
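The straight-line construction amounts to intersecting two linearized liquidus boundaries. A minimal sketch, assuming each liquidus is approximated as a line T = m·x + c; the melting points and slopes below are illustrative placeholders, not the values used in the paper:

```python
# x is the composition (wt% of component B), T the temperature.
# The eutectic point is the intersection of the two liquidus lines.

def eutectic_point(m1, c1, m2, c2):
    """Intersection of T = m1*x + c1 and T = m2*x + c2."""
    x = (c2 - c1) / (m1 - m2)     # composition at the eutectic
    t = m1 * x + c1               # eutectic temperature
    return x, t

# Illustrative lines: a liquidus falling from pure A (T = 330 at x = 0)
# and a liquidus falling toward pure B (T = 300 at x = 100).
x_e, t_e = eutectic_point(-3.0, 330.0, 2.0, 100.0)
print(x_e, t_e)   # -> 46.0 192.0 for these illustrative lines
```

With real phase-diagram endpoints in place of the toy slopes, the same intersection gives the analytical eutectic compositions the abstract compares against experiment.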
Partial state estimation for linear systems with output and input time delays.
Ha, Q P; That, Nguyen D; Nam, Phan T; Trinh, H
2014-03-01
This paper deals with the problem of partial state observer design for linear systems that are subject to time delays in the measured output as well as the control input. By choosing a set of appropriate augmented Lyapunov-Krasovskii functionals with a triple-integral term and using the information of both the delayed output and input, a novel approach to design a minimal-order observer is proposed to guarantee that the observer error is ε-convergent with an exponential rate. Existence conditions of such an observer are derived in terms of matrix inequalities for the cases with time delays in both the output and input and with output delay only. Constructive design algorithms are introduced. Numerical examples are provided to illustrate the design procedure, practicality and effectiveness of the proposed observer.
RCS estimation of linear and planar dipole phased arrays: approximate model
Singh, Hema; Jha, Rakesh Mohan
2016-01-01
In this book, the RCS of parallel-fed linear and planar dipole arrays is derived using an approximate method. The signal propagation within the phased array system determines the radar cross section (RCS) of the phased array. The reflection and transmission coefficients for a signal at different levels of the phased array scattering system depend on the impedance mismatch and the design parameters; moreover, the mutual coupling between the antenna elements is an important factor. A phased array system comprises radiating elements followed by phase shifters, couplers, and terminating load impedances. These components present respective impedances to the incoming signal that travels through them before reaching the receive port of the array system. In this book, the RCS is approximated in terms of the array factor, neglecting the phase terms, while the mutual coupling effect is taken into account. The dependence of the RCS pattern on the design parameters is analyzed. The approximate model is established as a...
Measuring risk of crude oil at extreme quantiles
Saša Žiković
2011-06-01
The purpose of this paper is to investigate the performance of VaR models at measuring the risk of WTI oil one-month futures returns. Risk models, ranging from industry standards such as RiskMetrics and historical simulation to a conditional extreme value model, are used to calculate commodity market risk at extreme quantiles: 0.95, 0.99, 0.995 and 0.999, for both long and short trading positions. Our results show that, of the tested fat-tailed distributions, the generalised Pareto distribution provides the best fit to both tails of oil returns, although the tails differ significantly, with the right tail having a higher tail index, indicative of more extreme events. The main conclusion is that, in the analysed period, only models based on extreme value theory provide a reasonable degree of safety, while widespread VaR models do not provide adequate risk coverage; their performance is especially weak for short positions in oil.
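A peaks-over-threshold sketch of the kind of EVT-based VaR the paper favours might look as follows. The returns are synthetic, the threshold and quantiles are illustrative, and the formula is the standard POT tail estimator rather than the paper's exact conditional model:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
# Synthetic heavy-tailed daily returns standing in for WTI futures returns.
returns = rng.standard_t(df=4, size=2000) * 0.02
losses = -returns                        # long position: losses are negated returns

# Peaks-over-threshold: fit a generalised Pareto distribution (GPD)
# to exceedances over a high threshold u.
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exceedances, floc=0)

def var(q):
    """EVT VaR at confidence level q from the fitted GPD tail."""
    n, n_u = len(losses), len(exceedances)
    return u + sigma / xi * (((n / n_u) * (1 - q)) ** (-xi) - 1)

print(var(0.99), var(0.999))   # deeper quantiles give larger VaR
```

The same fit on the opposite tail (losses = returns) covers the short position, which is where the abstract reports standard VaR models performing worst.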
Pre-Trained Neural Networks used for Non-Linear State Estimation
Bayramoglu, Enis; Andersen, Nils Axel; Ravn, Ole
2011-01-01
The paper focuses on nonlinear state estimation assuming non-Gaussian distributions of the states and the disturbances. The a priori and a posteriori distributions are described by a chosen family of parametric distributions, so that the state transformation results in a transformation of the parameters of the distribution. This transformation is approximated by a neural network using offline training, which is based on Monte Carlo sampling. The paper also presents a method to construct flexible distributions well suited for covering the effect of the non...
A consistent local linear estimator of the covariate adjusted correlation coefficient.
Nguyen, Danh V; Sentürk, Damla
2009-08-01
Consider the correlation between two random variables (X, Y), neither directly observed. One only observes X̃ = φ₁(U)X + φ₂(U) and Ỹ = ψ₁(U)Y + ψ₂(U), where all four functions {φ_l(·), ψ_l(·), l = 1, 2} are unknown/unspecified smooth functions of an observable covariate U. We consider consistent estimation of the correlation between the unobserved variables X and Y, adjusted for the above general dual additive and multiplicative effects of U, based on the observed data (X̃, Ỹ, U).
Páez-Borrallo José M
2006-01-01
Location estimation is a recent and interesting research area that exploits the possibilities of modern communication technology. In this paper, we present a new location system for wireless networks that is especially suitable for indoor terminal-based architectures, as it improves both the speed and the memory requirements. The algorithm is based on the application of linear discriminant functions and Markovian models, and its performance has been compared with other systems presented in the literature. Simulation results show very good performance in reducing computing time and memory space, and adequate behavior under conditions of few a priori calibration points per position.
Knudsen, Jesper Viese; Bendtsen, Jan Dimon; Andersen, Palle
2016-01-01
In this paper, a self-tuning linear quadratic supervisory regulator using a large-signal state estimator for a diesel-driven generator set is proposed. The regulator improves operational efficiency, in comparison to current implementations, by (i) automating the initial tuning process and (ii) enabling automated retuning capabilities. Utilizing a first-principles-based nonlinear model detailed in [1], the procedure is demonstrated through simulations after real system measurements have been used for parameter identification. The regulator is able to suppress load-induced variations successfully throughout the operating range of the diesel generator.
Zuliang Lu
2014-01-01
The aim of this work is to investigate the discretization of general linear hyperbolic convex optimal control problems using mixed finite element methods. The state and costate are approximated by order-k (k ≥ 0) Raviart-Thomas mixed finite elements, and the control is approximated by piecewise polynomials of order k. By applying the elliptic projection operators and Gronwall's lemma, we derive a priori error estimates of optimal order for both the coupled state and the control approximation.
Shishir B Sahay; T Meghasyam; Rahul K Roy; Gaurav Pooniwala; Sasank Chilamkurthy; Vikram Gadre
2015-06-01
This paper is targeted towards a general readership in signal processing. It provides a brief tutorial exposure to the fractional Fourier transform, followed by a report on experiments performed by the authors on a Generalized Time Frequency Transform (GTFT) proposed by them in an earlier paper. The paper also discusses the extension of the uncertainty principle to the GTFT and presents some analytical results: the eigenfunctions and eigenvalues of the GTFT are identified, and its time-shift property is discussed. The paper then describes methods for estimating the parameters of individual chirp signals on receipt of a noisy mixture of chirps. A priori knowledge of the nature of the chirp signals in the mixture (linear or quadratic) is required, as the two proposed methods fall in the category of model-dependent methods for chirp parameter estimation.
Stability estimates for linearized near-field phase retrieval in X-ray phase contrast imaging
Maretzke, Simon
2016-01-01
Propagation-based X-ray phase contrast enables nanoscale imaging of biological tissue by probing not only the attenuation, but also the real part of the refractive index of the sample. Since only intensities of diffracted waves can be measured, the main mathematical challenge consists in a phase-retrieval problem in the near-field regime. We treat an often-used linearized version of this problem known as the contrast transfer function model. Surprisingly, this inverse problem turns out to be well-posed assuming only a compact support of the imaged object. Moreover, we establish bounds on the Lipschitz stability constant. In general this constant grows exponentially with the Fresnel number of the imaging setup. However, both for homogeneous objects, characterized by a fixed ratio of the induced refractive phase shifts and attenuation, and in the case of measurements at two distances, a much more favorable algebraic dependence on the Fresnel number can be shown. In some cases we establish order optimality of our es...
Investment determinants of young and old Portuguese SMEs: A quantile approach
Sílvia Mendes
2014-10-01
Considering two samples of Portuguese SMEs, 582 young SMEs and 1654 old SMEs, and using the two-step estimation method and quantile regressions, the empirical evidence allows us to conclude that the determinants of investment have a different impact on young and old SMEs, depending on a firm's level of investment. In the framework of the Acceleration Principle and Neoclassical theories, the determinants are relevant in explaining the investment of young and old SMEs with high levels of investment. Gross Domestic Product, as the investment determinant of the Acceleration Principle theory, has a greater impact on the investment of young SMEs with high levels of investment. Sales, as the investment determinant of Neoclassical theory, have a greater impact on the investment of old SMEs with high levels of investment. Cash flow, as the investment determinant of the Free Cash Flow theory, is important in explaining the investment of young and old SMEs with low levels of investment; however, cash flow has a greater impact on the investment of young SMEs with low levels of investment. The empirical evidence obtained allows us to make suggestions for policy-makers and the owners/managers of Portuguese SMEs.
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint.
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2016-04-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint.
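The defining property that the RKHS formulation above builds on is that minimizing the check (pinball) loss recovers the quantile. A minimal numpy illustration of that property, with my own toy data rather than anything from the paper:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile regression check (pinball) loss rho_tau(u) = u*(tau - 1{u<0})."""
    return u * (tau - (u < 0))

y = np.arange(1.0, 10.0)      # toy responses 1..9
tau = 0.25

# Fitting a constant c by minimizing the summed check loss recovers the
# empirical tau-quantile; here we search over the data points themselves.
losses = [check_loss(y - c, tau).sum() for c in y]
c_star = y[int(np.argmin(losses))]
print(c_star)                  # -> 3.0, the empirical 0.25-quantile
```

Replacing the constant c by a kernel expansion and adding a penalty (squared norm, or the paper's data sparsity constraint) gives kernel quantile regression.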
Dunn, Richard A; Tan, Andrew K G; Nayga, Rodolfo M
2012-01-01
Obesity prevalence is unequally distributed across gender and ethnic group in Malaysia. In this paper, we examine the role of socioeconomic inequality in explaining these disparities. The body mass index (BMI) distributions of Malays and Chinese, the two largest ethnic groups in Malaysia, are estimated through the use of quantile regression. The differences in the BMI distributions are then decomposed into two parts: attributable to differences in socioeconomic endowments and attributable to differences in responses to endowments. For both males and females, the BMI distribution of Malays is shifted toward the right of the distribution of Chinese, i.e., Malays exhibit higher obesity rates. In the lower 75% of the distribution, differences in socioeconomic endowments explain none of this difference. At the 90th percentile, differences in socioeconomic endowments account for no more than 30% of the difference in BMI between ethnic groups. Our results demonstrate that the higher levels of income and education that accrue with economic development will likely not eliminate obesity inequality. This leads us to conclude that reduction of obesity inequality, as well the overall level of obesity, requires increased efforts to alter the lifestyle behaviors of Malaysians.
Mellah HACEN
2012-08-01
The induction machine, because of its robustness and low cost, is commonly used in industry. Nevertheless, as with every type of electrical machine, it suffers from some limitations. The most important one is the working temperature, which is the dimensioning parameter for the definition of the nominal working point and the machine lifetime. Accordingly, a strong demand for thermal monitoring methods has appeared in the industry sector. In this context, adding temperature sensors is not acceptable, and the studied methods tend to use sensorless approaches such as observers or parameter estimators like the extended Kalman filter (EKF). The important criteria are then reliability, computational cost and real-time implementation.
Di Lello, Enrico; Trincavelli, Marco; Bruyninckx, Herman; De Laet, Tinne
2014-07-11
In this paper, we introduce a Bayesian time series model approach for gas concentration estimation using Metal Oxide (MOX) sensors in an Open Sampling System (OSS). Our approach focuses on compensating for the slow response of MOX sensors while concurrently solving the problem of estimating the gas concentration in the OSS. The proposed Augmented Switching Linear System model allows all the sources of uncertainty arising at each step of the problem to be included in a single coherent probabilistic formulation. In particular, the problem of detecting on-line the current sensor dynamical regime and estimating the underlying gas concentration under environmental disturbances and noisy measurements is formulated and solved as a statistical inference problem. Our model improves on the state of the art, where system modeling approaches had already been introduced but provided only an indirect relative measure proportional to the gas concentration and ignored the problem of modeling uncertainty. Our approach is validated experimentally, and its performance in terms of speed and quality of the gas concentration estimation is compared with that obtained using a photo-ionization detector.
Chesney, Dana L; Matthews, Percival G
2013-12-01
It has been suggested that differences in performance on number-line estimation tasks are indicative of fundamental differences in people's underlying representations of numerical magnitude. However, we were able to induce logarithmic-looking performance in adults for magnitude ranges over which they can typically perform linearly by manipulating their familiarity with the symbolic number formats that we used for the stimuli. This serves as an existence proof that individuals' performances on number-line estimation tasks do not necessarily reflect the functional form of their underlying numerical magnitude representations. Rather, performance differences may result from symbolic difficulties (i.e., number-to-symbol mappings), independently of the underlying functional form. We demonstrated that number-line estimates that are well fit by logarithmic functions need not be produced by logarithmic functions. These findings led us to question the validity of considering logarithmic-looking performance on number-line estimation tasks as being indicative that magnitudes are being represented logarithmically, particularly when symbolic understanding is in question.
Mullah, Muhammad Abu Shadeque; Benedetti, Andrea
2016-11-01
Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other simpler methods for estimating the nonlinear association such as fractional polynomials, and using a parametric nonlinear function. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for highly serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV infected men enrolled in the Multicenter AIDS Cohort Study.
Angelis, Georgios I; Matthews, Julian C; Kotasidis, Fotis A; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J
2014-11-01
Estimation of nonlinear micro-parameters is a computationally demanding and fairly challenging process, since it involves the use of rather slow iterative nonlinear fitting algorithms and it often results in very noisy voxel-wise parametric maps. Direct reconstruction algorithms can provide parametric maps with reduced variance, but usually the overall reconstruction is impractically time consuming with common nonlinear fitting algorithms. In this work we employed a recently proposed direct parametric image reconstruction algorithm to estimate the parametric maps of all micro-parameters of a two-tissue compartment model, used to describe the kinetics of [18F]FDG. The algorithm decouples the tomographic and the kinetic modelling problems, allowing the use of previously developed post-reconstruction methods, such as the generalised linear least squares (GLLS) algorithm. Results on both clinical and simulated data showed that the proposed direct reconstruction method provides considerable quantitative and qualitative improvements for all micro-parameters compared to the conventional post-reconstruction fitting method. Additionally, region-wise comparison of all parametric maps against the well-established filtered back projection followed by post-reconstruction non-linear fitting, as well as the direct Patlak method, showed substantial quantitative agreement in all regions. The proposed direct parametric reconstruction algorithm is a promising approach towards the estimation of all individual micro-parameters of any compartment model. In addition, due to the linearised nature of the GLLS algorithm, the fitting step can be very efficiently implemented and, therefore, it does not considerably affect the overall reconstruction time.
L. Gudmundsson
2012-05-01
The impact of climate change on water resources is usually assessed at the local scale. However, regional climate models (RCMs) are known to exhibit systematic biases in precipitation. Hence, RCM simulations need to be post-processed in order to produce reliable estimators of local-scale climate. A popular post-processing approach is quantile mapping (QM), which is designed to adjust the distribution of modeled data such that it matches observed climatologies. However, the diversity of suggested QM methods renders the selection of optimal techniques difficult, and hence there is a need for clarification. In this paper, QM methods are reviewed and classified into (1) distribution-derived transformations, (2) parametric transformations and (3) nonparametric transformations, each differing with respect to their underlying assumptions. A real-world application, using observations from 82 precipitation stations in Norway, showed that nonparametric transformations have the highest skill in systematically reducing biases in RCM precipitation.
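Of the three classes, the nonparametric empirical transformation can be sketched in a few lines. The data below are synthetic with an artificial factor-2 bias; a real application would fit the correction per station, and often per season:

```python
import numpy as np

# Synthetic "observed" and "modeled" historical precipitation, where the
# model systematically underestimates by a factor of 2 (purely illustrative).
obs_hist = np.linspace(1.0, 100.0, 200)
mod_hist = obs_hist / 2.0

# Nonparametric quantile mapping: build the empirical quantile-quantile
# relation on the historical period, then apply it to new model output.
probs = np.linspace(0.0, 1.0, 101)
mod_q = np.quantile(mod_hist, probs)
obs_q = np.quantile(obs_hist, probs)

def quantile_map(x):
    """Map model values onto the observed distribution via interpolation."""
    return np.interp(x, mod_q, obs_q)

corrected = quantile_map(np.array([10.0, 25.0, 40.0]))
print(corrected)   # -> [20. 50. 80.]: the factor-2 bias is removed
```

Distribution-derived and parametric variants replace the empirical quantile tables with fitted distributions or a parametric transfer function, which is exactly the axis along which the paper classifies QM methods.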
Bertrand-Krajewski, J L
2004-01-01
In order to replace traditional sampling and analysis techniques, turbidimeters can be used to estimate TSS concentration in sewers, by means of sensor- and site-specific empirical equations established by linear regression of on-site turbidity values T against TSS concentrations C measured in corresponding samples. As the ordinary least-squares method is not able to account for measurement uncertainties in both the T and C variables, an appropriate regression method is used to overcome this difficulty and to evaluate correctly the uncertainty in TSS concentrations estimated from measured turbidity. The regression method is described, including detailed calculations of the variances and covariance of the regression parameters. An example of application is given for a calibrated turbidimeter used in a combined sewer system, with data collected during three dry-weather days. To show how the established regression could be used, an independent 24-hour dry-weather turbidity series recorded at a 2 min time interval is transformed into estimated TSS concentrations and compared to TSS concentrations measured in samples. The comparison is satisfactory and suggests that turbidity measurements could replace traditional samples. Further developments, including wet-weather periods and other types of sensors, are suggested.
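An errors-in-both-variables fit of the kind the paper calls for can be approximated with orthogonal distance regression. A sketch using `scipy.odr`, with synthetic turbidity/TSS pairs and made-up uncertainties, not the paper's calibration data or its exact variance formulas:

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

# Synthetic calibration pairs: turbidity T (NTU) and sampled TSS C (mg/L),
# generated from C = 2*T + 1 so the fit can be checked against the truth.
T = np.linspace(10.0, 100.0, 10)
C = 2.0 * T + 1.0

def linear(beta, t):
    return beta[0] * t + beta[1]

# RealData carries measurement uncertainties in BOTH variables,
# which ordinary least squares cannot account for.
data = RealData(T, C, sx=np.full_like(T, 2.0), sy=np.full_like(C, 5.0))
fit = ODR(data, Model(linear), beta0=[1.0, 0.0]).run()

slope, intercept = fit.beta
print(slope, intercept)        # close to the true values 2 and 1
```

The covariance matrix of the fitted parameters (`fit.cov_beta`) is what propagates into the uncertainty of TSS concentrations estimated from new turbidity readings.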
de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente
2016-07-08
Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and .632+), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of studies it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we put special attention to situations when only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
G. Forget
2015-10-01
Full Text Available This paper presents the ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992–2011. Both components are publicly available and subjected to regular, automated regression tests. The modeling framework includes sets of global conformal grids, a global model setup, implementations of data constraints and control parameters, an interface to algorithmic differentiation, as well as a grid-independent, fully capable Matlab toolbox. The baseline ECCO v4 solution is a dynamically consistent ocean state estimate without unidentified sources of heat and buoyancy, which any interested user will be able to reproduce accurately. The solution is an acceptable fit to most data and has been found to be physically plausible in many respects, as documented here and in related publications. Users are being provided with capabilities to assess model–data misfits for themselves. The synergy between modeling and data synthesis is asserted through the joint presentation of the modeling framework and the state estimate. In particular, the inverse estimate of parameterized physics was instrumental in improving the fit to the observed hydrography, and becomes an integral part of the ocean model setup available for general use. More generally, a first assessment of the relative importance of external, parametric and structural model errors is presented. Parametric and external model uncertainties appear to be of comparable importance and dominate over structural model uncertainty. The results generally underline the importance of including turbulent transport parameters in the inverse problem.
Baskakov, A G [Voronezh State University (Russian Federation)
2015-08-31
By applying Lyapunov's equation, the method of similar operators, and the methods of harmonic analysis, we obtain estimates for the parameters of exponential dichotomy and for the Green's function constructed for a hyperbolic operator semigroup and a hyperbolic linear relation. Estimates are obtained using quantities which are determined by the resolvent of the infinitesimal operator of the operator semigroup and of the linear relation. Bibliography: 51 titles.
de Asis, Alejandro M.; Omasa, Kenji
Soil conservation planning often requires estimates of soil erosion at a catchment or regional scale. Predictive models such as Universal Soil Loss Equation (USLE) and its subsequent Revised Universal Soil Loss Equation (RUSLE) are useful tools to generate the quantitative estimates necessary for designing sound conservation measures. However, large-scale soil erosion model-factor parameterization and quantification is difficult due to the costs, labor and time involved. Among the soil erosion parameters, the vegetative cover or C factor has been one of the most difficult to estimate over broad geographic areas. The C factor represents the effects of vegetation canopy and ground covers in reducing soil loss. Traditional methods for the extraction of vegetation information from remote sensing data such as classification techniques and vegetation indices were found to be inaccurate. Thus, this study presents a new approach based on Spectral Mixture Analysis (SMA) of Landsat ETM data to map the C factor for use in the modeling of soil erosion. A desirable feature of SMA is that it estimates the fractional abundance of ground cover and bare soils simultaneously, which is appropriate for soil erosion analysis. Hence, we estimated the C factor by utilizing the results of SMA on a pixel-by-pixel basis. We specifically used a linear SMA (LSMA) model and performed a minimum noise fraction (MNF) transformation and pixel purity index (PPI) on Landsat ETM image to derive the proportion of ground cover (vegetation and non-photosynthetic materials) and bare soil within a pixel. The end-members were selected based on the purest pixels found using PPI with reference to very high-resolution QuickBird image and actual field data. Results showed that the C factor value estimated using LSMA correlated strongly with the values measured in the field. The correlation coefficient ( r) obtained was 0.94. A comparative analysis between NDVI- and LSMA-derived C factors also proved that the
Soares dos Santos, T.; Mendes, D.; Rodrigues Torres, R.
2016-01-01
Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANNs) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon; northeastern Brazil; and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used general circulation model (GCM) experiments for the 20th century (RCP historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANNs significantly outperform the MLR downscaling of monthly precipitation variability.
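A minimal sketch of the MLR-by-principal-components benchmark described above (the ANN side is omitted; all names are illustrative assumptions):

```python
import numpy as np

def pc_regression(X, y, k):
    """Multiple linear regression of y on the first k principal
    components of the (centered) predictor matrix X."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                                  # PC scores
    Z1 = np.column_stack([np.ones(len(Z)), Z])         # add intercept
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)

    def predict(Xnew):
        Zn = (Xnew - mean) @ Vt[:k].T
        return np.column_stack([np.ones(len(Zn)), Zn]) @ beta
    return predict
```

Here X would hold GCM output fields and y the observed monthly precipitation at a station or region.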
P Shivakumara; G Hemantha Kumar; D S Guru; P Nagabhushan
2005-02-01
When a document is scanned either mechanically or manually for digitization, it often suffers from some degree of skew or tilt. Skew-angle detection plays an important role in the field of document analysis systems and OCR in achieving the expected accuracy. In this paper, we consider skew estimation of Roman script. The method uses a boundary-growing approach to extract the lowermost and uppermost coordinates of the pixels of the characters of text lines present in the document, which are then subjected to linear regression analysis (LRA) to determine the skew angle of the skewed document. The proposed technique also works well for scaled binary text documents. The technique is based on the assumption that the space between text lines is greater than the space between words and characters. Finally, in order to evaluate the performance of the proposed methodology, we compare the experimental results with those of well-known existing methods.
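The final regression step can be sketched as follows, assuming the boundary-growing stage has already produced the (x, y) coordinates of the lowermost pixels of a text line (function name assumed):

```python
import numpy as np

def skew_angle_deg(xs, ys):
    """Estimate document skew: fit a least-squares line through the
    extracted boundary pixel coordinates of a text line; the slope
    of the line gives the skew angle in degrees."""
    slope, _intercept = np.polyfit(xs, ys, 1)
    return np.degrees(np.arctan(slope))
```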
Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)
Legleiter, Carl
2016-01-01
Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
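A minimal, aspatial sketch of the quantile-transformation idea: each pixel value is mapped to the depth that occupies the same quantile, using a calibration sample of pixel values and a (not necessarily paired) sample of depths. Function and variable names are assumptions:

```python
import numpy as np

def idqt(pixel_img, pixel_sample, depth_sample):
    """Image-to-Depth Quantile Transformation (sketch): send each
    image pixel value to the depth at the same empirical quantile."""
    ps = np.sort(pixel_sample)
    # empirical CDF position of each image pixel value
    q = np.searchsorted(ps, pixel_img, side="right") / len(ps)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(depth_sample, q)
```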
Pelayo, R; Solé, M; Sánchez, M J; Molina, A; Valera, M
2016-10-01
Docility is very important for cattle production, and many behavioural tests to measure this trait have been developed. However, very few objective behavioural tests measuring the opposite trait, aggressive behaviour, have been described. Therefore, the aim of this work was to validate, in the Lidia cattle breed, a behavioural linear standardized scoring system that measures aggressiveness and enables genetic analysis of behavioural traits expressing fearlessness and fighting ability. Reproducibility and repeatability measures were calculated for the 12 linear traits of this scoring system to assess its accuracy; they ranged from 85.3 to 94.2% and from 66.7 to 97.9%, respectively. Genetic parameters were estimated using an animal model with a Bayesian approach. A total of 1202 behavioural records were used. The pedigree matrix contained 5001 individuals. Heritability values (with standard deviations) ranged between 0.13 (0.04) (falls of the bull) and 0.41 (0.08) (speed of approach to the horse). Genetic correlations varied from 0.01 (0.07) to 0.90 (0.13). Finally, an exploratory factor analysis using the genetic correlation matrix was carried out. Three main factors were retained to describe the traditional genetic indexes: aggressiveness, strength and mobility.
Dobrislav Dobrev∗
2017-02-01
Full Text Available We provide an accurate closed-form expression for the expected shortfall of linear portfolios with elliptically distributed risk factors. Our results aim to correct inaccuracies that originate in Kamdem (2005) and are present also in at least thirty other papers referencing it, including the recent survey by Nadarajah et al. (2014) on estimation methods for expected shortfall. In particular, we show that the correction we provide in the popular multivariate Student t setting eliminates understatement of expected shortfall by a factor varying from at least four to more than 100 across different tail quantiles and degrees of freedom. As such, the resulting economic impact in financial risk management applications could be significant. We further correct such errors encountered also in closely related results in Kamdem (2007, 2009) for mixtures of elliptical distributions. More generally, our findings point to the extra scrutiny required when deploying new methods for expected shortfall estimation in practice.
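For reference, the standard closed-form expected shortfall of a standardized Student t variable — the quantity whose understatement the paper quantifies — can be written and evaluated as:

```python
import numpy as np
from scipy import stats

def es_student_t(alpha, nu):
    """Expected shortfall (left tail, reported as a positive loss)
    of a standardized Student t variable at level alpha:
        ES = f(q) / alpha * (nu + q**2) / (nu - 1),
    where q is the alpha-quantile and f the t density."""
    q = stats.t.ppf(alpha, nu)
    return stats.t.pdf(q, nu) * (nu + q ** 2) / ((nu - 1.0) * alpha)
```

As nu grows the expression converges to the Gaussian expected shortfall f(q)/alpha; for small nu it is strictly larger, reflecting the heavier tail.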
Semi-empirical Likelihood Confidence Intervals for the Differences of Quantiles with Missing Data
Yong Song QIN; Jun Chao ZHANG
2009-01-01
Detecting population (group) differences is useful in many applications, such as medical research. In this paper, we explore the probabilistic theory for identifying the quantile differences between two populations. Suppose that there are two populations x and y with missing data on both of them, where x is nonparametric and y is parametric. We are interested in constructing confidence intervals on the quantile differences of x and y. Random hot deck imputation is used to fill in missing data. Semi-empirical likelihood confidence intervals on the differences are constructed.
Quantiles of the Realized Stock-Bond Correlation and Links to the Macroeconomy
Aslanidis, Nektarios; Christiansen, Charlotte
2014-01-01
This paper adopts quantile regressions to scrutinize the realized stock–bond correlation based upon high-frequency returns. The paper provides in-sample and out-of-sample analysis and considers factors constructed from a large number of macro-finance predictors well-known from the return predictability literature.
Udink ten Cate, A.J.
1985-01-01
Discrete-time least-squares algorithms for recursive parameter estimation have continuous-time counterparts, which minimize a quadratic functional. The continuous-time algorithms can also include (in)equality constraints. Asymptotic convergence is demonstrated by means of Lyapunov methods. The constrained algorithms are applied in a stabilized output error configuration for parameter estimation in stochastic linear systems.
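A discrete-time recursive least-squares update of the kind the paper takes as its starting point might be sketched as follows (unconstrained version; names and the forgetting factor `lam` are illustrative assumptions):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step: update parameter estimate
    theta and covariance-like matrix P after observing regressor
    phi and scalar output y. `lam` is a forgetting factor."""
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * (y - phi @ theta)      # prediction-error correction
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update
    return theta, P
```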
G. Forget
2015-05-01
Full Text Available This paper presents the ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992–2011. Both components are publicly available and highly integrated with the MITgcm. They are both subjected to regular, automated regression tests. The modeling framework includes sets of global conformal grids, a global model setup, implementations of model-data constraints and adjustable control parameters, an interface to algorithmic differentiation, as well as a grid-independent, fully capable Matlab toolbox. The reference ECCO v4 solution is a dynamically consistent ocean state estimate (ECCO-Production, release 1 without un-identified sources of heat and buoyancy, which any interested user will be able to reproduce accurately. The solution is an acceptable fit to most data and has been found physically plausible in many respects, as documented here and in related publications. Users are being provided with capabilities to assess model-data misfits for themselves. The synergy between modeling and data synthesis is asserted through the joint presentation of the modeling framework and the state estimate. In particular, the inverse estimate of parameterized physics was instrumental in improving the fit to the observed hydrography, and becomes an integral part of the ocean model setup available for general use. More generally, a first assessment of the relative importance of external, parametric and structural model errors is presented. Parametric and external model uncertainties appear to be of comparable importance and dominate over structural model uncertainty. The results generally underline the importance of including turbulent transport parameters in the inverse problem.
N.P. Cardozo
2009-01-01
Full Text Available The aim of this study was to obtain a mathematical equation to estimate the leaf area of Momordica charantia and Pyrostegia venusta using linear leaf blade measurements. Between May and December 2007, correlation studies were conducted involving real leaf area (Sf) and leaf blade dimensions: length along the main vein (C), maximum width perpendicular to the main vein (L), and the product C x L. All equations, whether geometric, exponential, or simple linear, provided good leaf area estimates. From a practical viewpoint, the simple linear regression on the product C x L with the linear coefficient set to zero is suggested. Thus, the leaf area of Momordica charantia can be estimated as Sf = 0.4963 x (C x L), and that of Pyrostegia venusta as Sf = 0.6649 x (C x L).
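The reported equations are directly usable; a trivial helper with the coefficients taken from the abstract (function names are assumptions):

```python
def leaf_area(c, l, k):
    """Leaf area from blade length C and maximum width L using the
    zero-intercept linear model Sf = k * (C * L)."""
    return k * c * l

# species-specific coefficients reported in the abstract
def momordica(c, l):
    return leaf_area(c, l, 0.4963)

def pyrostegia(c, l):
    return leaf_area(c, l, 0.6649)
```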
Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory
Pande, S.
2013-01-01
A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies
Empirical Likelihood Confidence Intervals for the Differences of Quantiles with Missing Data
Yong-song Qin; Yong-jiang Qian
2009-01-01
Suppose that there are two nonparametric populations x and y with missing data on both of them.We are interested in constructing confidence intervals on the quantile differences of x and y.Random imputation is used.Empirical likelihood confidence intervals on the differences are constructed.
Calibrating regionally downscaled precipitation over Norway through quantile-based approaches
Bolin, David; Frigessi, Arnoldo; Guttorp, Peter; Haug, Ola; Orskaug, Elisabeth; Scheel, Ida; Wallin, Jonas
2016-06-01
Dynamical downscaling of earth system models is intended to produce high-resolution climate information at regional to local scales. Current models, while adequate for describing temperature distributions at relatively small scales, struggle when it comes to describing precipitation distributions. In order to better match the distribution of observed precipitation over Norway, we consider approaches to statistical adjustment of the output from a regional climate model when forced with ERA-40 reanalysis boundary conditions. As a second step, we try to correct downscalings of historical climate model runs using these transformations built from downscaled ERA-40 data. Unless such calibrations are successful, it is difficult to argue that scenario-based downscaled climate projections are realistic and useful for decision makers. We study both full quantile calibrations and several different methods that correct individual quantiles separately using random field models. Results based on cross-validation show that while a full quantile calibration is not very effective in this case, one can correct individual quantiles satisfactorily if the spatial structure in the data is accounted for. Interestingly, different methods are favoured depending on whether ERA-40 data or historical climate model runs are adjusted.
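The "full quantile calibration" baseline can be sketched as a plain empirical quantile mapping — replace each model value by the observed value at the same quantile of the reference period. This omits the paper's spatial random-field corrections; names are assumptions:

```python
import numpy as np

def quantile_map(model, model_ref, obs_ref):
    """Full empirical quantile calibration: map each model value to
    the observation occupying the same quantile in the reference
    period (no spatial correction)."""
    mr = np.sort(model_ref)
    q = np.searchsorted(mr, model, side="right") / len(mr)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))
```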
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-01-01
Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878
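A toy version of vent-density-based susceptibility, substituting an off-the-shelf Gaussian KDE for the paper's linear-diffusion kernel estimator (the vent coordinates below are made up for illustration):

```python
import numpy as np
from scipy.stats import gaussian_kde

# hypothetical vent coordinates (km), shape (2, n_vents):
# a tight cluster plus one outlying vent
vents = np.array([[0.0, 0.2, -0.1, 0.1, 5.0],
                  [0.1, -0.1, 0.0, 0.2, 5.0]])
kde = gaussian_kde(vents)

# susceptibility at a point ~ relative density of past vents there
near = kde(np.array([[0.05], [0.05]]))[0]    # inside the cluster
far = kde(np.array([[10.0], [-10.0]]))[0]    # away from any vent
```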
Augusto Hauber Gameiro
2016-04-01
Full Text Available ABSTRACT A linear programming mathematical model was applied to a representative dairy farm located in Brazil. The results showed that optimization models are relevant tools to assist in the planning and management of agricultural production, as well as to assist in estimating potential gains from the use of integrated systems. Diversification was a necessary condition for economic viability. A total cost reduction potential of about 30% was revealed when a scenario of lower levels of diversification was contrasted to one of higher levels. Technical complementarities proved to be important sources of economies. The possibility of reusing nitrogen, phosphorus, and potassium present in animal waste could be increased to 167%, while water reuse could be increased up to 150%. In addition to economic gains, integrated systems bring benefits to the environment, especially with reference to the reuse of resources. The cost dilution of fixed production factors can help economies of scope to be achieved. However, this does not seem to have been the main source of these benefits. Still, the percentage of land use could increase up to 30.7% when the lowest and the highest diversification scenarios were compared. The labor coefficient could have a 4.3 percent increase. Diversification also leads to drastic transaction cost reductions.
Bayes Estimation of the Parameter of the Linear Exponential Distribution
谭玲; 李金玉
2011-01-01
Given a sample X1, X2, …, Xn of size n from the linear exponential distribution, and under the Linex loss function, the conjugate prior distribution is used to derive the Bayes estimator, the hierarchical Bayes estimator, the E-Bayes estimator and the maximum likelihood estimator of the parameter θ.
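One concrete instance of the general Bayes rule under Linex loss, d* = -(1/a) ln E[exp(-a θ) | data]: assuming, hypothetically, that the posterior for θ is Gamma(alpha, rate beta), the expectation is the gamma moment generating function and the estimator is closed-form. This gamma-posterior choice is an illustrative assumption, not the paper's specific model:

```python
import numpy as np

def linex_bayes_gamma(alpha, beta, a):
    """Bayes estimator of theta under Linex loss
        L(d, theta) = exp(a*(d - theta)) - a*(d - theta) - 1
    when the posterior is Gamma(shape=alpha, rate=beta):
        d* = -(1/a) * log E[exp(-a*theta)]
           = (alpha / a) * log(1 + a / beta)."""
    return (alpha / a) * np.log(1.0 + a / beta)
```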
Tang, Robert Y., E-mail: rx-tang@laurentian.ca [Biomolecular Sciences Program, Laurentian University, 935 Ramsey Lake Road, Sudbury, Ontario P3E 2C6 (Canada); Laamanen, Curtis, E-mail: cx-laamanen@laurentian.ca; McDonald, Nancy, E-mail: mcdnancye@gmail.com [Department of Physics, Laurentian University, 935 Ramsey Lake Road, Sudbury, Ontario P3E 2C6 (Canada); LeClair, Robert J., E-mail: rleclair@laurentian.ca [Department of Physics, Laurentian University, 935 Ramsey Lake Road, Sudbury, Ontario P3E 2C6, Canada and Biomolecular Sciences Program, Laurentian University, 935 Ramsey Lake Road, Sudbury, Ontario P3E 2C6 (Canada)
2014-05-15
Purpose: Develop a method to subtract fat tissue contributions to wide-angle x-ray scatter (WAXS) signals of breast biopsies in order to estimate the differential linear scattering coefficients μ_s of fatless tissue. Cancerous and fibroglandular tissue can then be compared independent of fat content. In this work phantom materials with known compositions were used to test the efficacy of the WAXS subtraction model. Methods: Each sample, 5 mm in diameter and 5 mm thick, was interrogated by a 50 kV, 2.7 mm diameter beam for 3 min. A 25 mm², 1 mm thick CdTe detector allowed measurements of a portion of the θ = 6° scattered field. A scatter technique provided means to estimate the incident spectrum N_0(E) needed in the calculations of μ_s[x(E, θ)], where x is the momentum transfer argument. Values of μ̄_s for composite phantoms consisting of three plastic layers were estimated and compared to the values obtained via the sum μ̄_s^Σ(x) = ν_1 μ_s1(x) + ν_2 μ_s2(x) + ν_3 μ_s3(x), where ν_i is the fractional volume of the i-th plastic component. Water, polystyrene, and a volume mixture of 0.6 water + 0.4 polystyrene labelled as fibphan were chosen to mimic cancer, fat, and fibroglandular tissue, respectively. A WAXS subtraction model was used to remove the polystyrene signal from tissue composite phantoms so that the μ_s of water and fibphan could be estimated. Although the composite samples were layered, simulations were performed to test the models under nonlayered conditions. Results: The well known μ_s signal of water was reproduced effectively between 0.5 < x < 1.6 nm⁻¹. The μ̄_s obtained for the heterogeneous samples agreed with μ̄_s^Σ. Polystyrene signals were subtracted successfully from composite phantoms. The simulations validated the usefulness of the WAXS models for nonlayered biopsies. Conclusions: The methodology to
Borzykh, A. N.
2017-01-01
The Seidel method for solving a system of linear algebraic equations and an estimate of its convergence rate are considered. It is proposed to change the order of equations. It is shown that the method described in the Faddeevs' book Computational Methods of Linear Algebra can deteriorate the convergence rate estimate rather than improve it. An algorithm for establishing the optimal order of equations is proposed, and its validity is proved. It is shown that the computational complexity of the reordering is 2n² additions and (1/2)n² divisions. Numerical results for random matrices of order 100 are presented that confirm the proposed improvement.
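A plain Seidel (Gauss-Seidel) iteration, in which the equation ordering being optimized above is simply the row order of A (this sketch does not include the reordering algorithm itself):

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel (Seidel) iteration for A x = b. Each sweep
    updates x[i] in place using the latest values of the other
    components; convergence rate depends on the equation order."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x
```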
Estimating Conditional Distributions by Neural Networks
Kulczycki, P.; Schiøler, Henrik
1998-01-01
Neural Networks for estimating conditionaldistributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency property is considered from a mild set of assumptions. A number of applications...
Neural Network for Estimating Conditional Distribution
Schiøler, Henrik; Kulczycki, P.
Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given.
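The link between network training and quantile estimation is the pinball (quantile) loss: minimizing it over a function class makes the output estimate the tau-th conditional quantile. A minimal sketch of the criterion (the papers above use a kernel-based construction, not necessarily this loss):

```python
import numpy as np

def pinball_loss(y, yhat, tau):
    """Quantile ('pinball') loss: mean of tau*e for positive errors
    and (tau - 1)*e for negative errors, e = y - yhat. Minimizing
    it yields the tau-th quantile of y given the inputs."""
    e = y - yhat
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))
```

At tau = 0.5 the pinball loss reduces to half the mean absolute error, recovering conditional-median estimation.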
Ravichandran, Ramamoorthy; Binukumar, Johnson Pichy; Davis, Cheriyathmanjiyil Antony
2013-10-01
The measured dose in water at a reference point in a phantom is a primary parameter for planning treatment monitor units (MU), both in conventional and intensity-modulated/image-guided treatments. Traceability of dose accuracy therefore still depends mainly on the calibration factor of the ion chamber/dosimeter provided by accredited Secondary Standard Dosimetry Laboratories (SSDLs) under the International Atomic Energy Agency (IAEA) network of laboratories. The data related to Nd,water calibrations, thermoluminescent dosimetry (TLD) postal dose validation, inter-comparison of different dosimeters/electrometers, and validity of Nd,water calibrations obtained from different calibration laboratories were analyzed to find out the extent of accuracy achievable. Nd,w factors in Gray/Coulomb calibrated at IBA GmbH, Germany showed a mean variation of about a 0.2% increase per year in three Farmer chambers over three subsequent calibrations. Another ion chamber calibrated at a different accredited laboratory (PTW, Germany) showed consistent Nd,w over a 9-year period. The Strontium-90 beta check source response indicated long-term stability of the ion chambers within 1% for three chambers. Results of the IAEA postal TLD dose intercomparison for three photon beams, 6 MV (two) and 15 MV (one), agreed well with our reported doses, with a mean deviation of 0.03% (SD 0.87%) (n = 9). All the chambers/electrometers calibrated by a single SSDL realized absorbed doses in water within a 0.13% standard deviation. However, about 1-2% differences in absorbed dose estimates were observed when dosimeters calibrated at different calibration laboratories are compared in solid phantoms. Our data therefore imply that the dosimetry level maintained for clinical use of linear accelerator photon beams is within recommended levels of accuracy, and uncertainties are within reported values.
Recursive Optimal Linear Attitude Estimator
傅泽宁; 邵晓巍; 龚德仁; 段登平
2012-01-01
The optimal linear attitude estimator (OLAE) is a fast algorithm based on the Rodrigues vector, which is a minimum-element attitude parameterization. However, OLAE is a single-time-point batch algorithm for spacecraft attitude. A recursive algorithm is presented that accounts for all past measurements: the vectors measured at previous times are linked with those measured at the current time. The algorithm is built around the so-called z vector and the matrix M, which are the crucial elements of the OLAE algorithm, and processes the current data iteratively on that basis. Attitude simulation tests show that this recursive algorithm provides better precision than OLAE when the angular velocity of the spacecraft is constant and slow.