Designing a robust minimum variance controller using a discrete sliding mode controller approach.
Alipouri, Yousef; Poshtan, Javad
2013-03-01
Designing minimum variance controllers (MVC) for nonlinear systems is confronted with many difficulties: methods able to identify MIMO nonlinear systems are scarce, the MVC produces harsh control signals, and the MVC is not robust. In this article, the vector ARX (VARX) model is used to model the system and the disturbance simultaneously in order to tackle these disadvantages. To ensure the robustness of the control loop, the discrete sliding mode controller design approach is used in designing the MVC and the generalized MVC (GMVC). The proposed controller design method is tested on a nonlinear experimental Four-Tank benchmark process and is compared with nonlinear MVCs designed by neural networks. Despite the simplicity of designing GMVCs for VARX models with uncertainty, the results show that the proposed method is accurate and implementable.
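The minimum variance idea behind such controllers can be illustrated in the simplest setting. The sketch below regulates a first-order ARX plant (an illustrative toy model, not the paper's VARX four-tank process; the parameters a, b and setpoint r are assumptions) by cancelling the one-step-ahead predictable part of the output, leaving only the unpredictable noise:

```python
import numpy as np

# Toy minimum variance regulation of a first-order ARX plant
#   y(t+1) = a*y(t) + b*u(t) + e(t+1)
# (a, b and the setpoint r are illustrative, not the paper's VARX model)
rng = np.random.default_rng(1)
a, b, r = 0.8, 0.5, 0.0
noise = rng.normal(0.0, 0.1, 5000)
y = np.zeros(5001)
for t in range(5000):
    u = (r - a * y[t]) / b          # MV law: make the one-step prediction equal r
    y[t + 1] = a * y[t] + b * u + noise[t]
# the closed-loop output is r + e(t): its variance equals the noise
# variance, the theoretical minimum for a one-step-delay plant
```

The sketch also shows why a plain MVC can demand harsh control signals: the law inverts the plant directly, which is one of the drawbacks the GMVC and sliding-mode modifications are meant to temper.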
Linear Minimum variance estimation fusion
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending Gauss-Markov estimation to the random parameter case of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of local estimates, with the weights determined by a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix C_k. Third, if the a priori information (the expectation and covariance) of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
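For intuition, in the special case of uncorrelated local errors the LMV fusion formula reduces to inverse-covariance (information) weighting; this sketch shows only that special case, while the paper's general result handles coupled measurement noises through the full joint covariance C_k:

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Inverse-covariance weighting of unbiased local estimates:
    the LMV fusion formula when local errors are uncorrelated."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused

# two local estimates with equal covariance: fusion averages them
# and halves the covariance
x1, P1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
x2, P2 = np.array([0.0, 1.0]), 2.0 * np.eye(2)
x, P = lmv_fuse([x1, x2], [P1, P2])
```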
A Broadband Beamformer Using Controllable Constraints and Minimum Variance
Karimian-Azari, Sam; Benesty, Jacob; Jensen, Jesper Rindom
2014-01-01
The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints, at the expense of reducing the degrees of freedom...
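As a sketch of the MVDR part: the weights minimize the output power w^H R w subject to the distortionless constraint w^H a = 1, giving w = R^{-1}a / (a^H R^{-1}a). The 4-element array and the interferer below are illustrative assumptions, not from the paper:

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR: minimize w^H R w subject to w^H a = 1 (distortionless)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Illustrative 4-element array: white noise plus one strong interferer.
n = 4
a = np.ones(n, dtype=complex)                    # look direction (broadside)
ai = np.exp(1j * np.pi * 0.25 * np.arange(n))    # interferer steering vector
R = np.eye(n) + 10.0 * np.outer(ai, ai.conj())   # covariance: noise + interferer
w = mvdr_weights(R, a)
# the distortionless constraint w^H a = 1 holds, while the response
# toward the interferer direction is strongly attenuated
```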
Minimum Variance Portfolios in the Brazilian Equity Market
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio, and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
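For reference, the unconstrained (long/short) global minimum variance weights solve min w'Σw subject to w'1 = 1, giving w = Σ^{-1}1 / (1'Σ^{-1}1). The covariance matrix below is an illustrative stand-in for the sample or GARCH estimates compared in the study:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum variance portfolio: w = inv(S)·1 / (1'·inv(S)·1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# illustrative diagonal covariance: three uncorrelated assets
cov = np.diag([1.0, 2.0, 4.0])
w = min_variance_weights(cov)
# with uncorrelated assets the weights are proportional to 1/variance,
# and the portfolio variance 1/(1' inv(S) 1) is below every single-asset variance
```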
Broadband Minimum Variance Beamforming for Ultrasound Imaging
Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt
2009-01-01
to the ultrasound data. As the error increases, it is seen that the MV beamformer is not as robust compared with the DS beamformer with boxcar and Hamming weights. Nevertheless, it is noted that the DS does not outperform the MV beamformer. For errors of 2% and 4% of the correct value, the FWHM are {0.81, 1.25, 0...
Minimum variance estimation of yield parameters of rubber tree with ...
2013-03-01
STAMP, an OxMetrics modular software system for time series analysis, was used to estimate the yield ... underlying regression techniques. ... Kalman Filter Minimum Variance Estimation of Rubber Tree Yield Parameters.
WU Wentao; PU Jie; LU Yi
2012-01-01
In the medical ultrasound imaging field, in order to obtain high resolution and correct the phase errors induced by the velocity inhomogeneity of tissue, a high-resolution medical ultrasound imaging method combining minimum variance beamforming and the general coherence factor was presented. First, the data from the elements are delayed for focusing; then the multi-channel data are used for minimum variance beamforming; at the same time, the data are transformed from array space to beam space to calculate the general coherence factor; finally, the general coherence factor is used to weight the results of minimum variance beamforming. Medical images are then obtained by the imaging system. Experiments based on a point object and an anechoic cyst object are used to verify the proposed method. The results show that the proposed method is better than minimum variance beamforming and conventional beamforming in terms of resolution, contrast, and robustness.
A note on minimum-variance theory and beyond
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and to its implications for modelling firing patterns of single neurons.
A comparison between temporal and subband minimum variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.
2014-01-01
This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated...
PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS
Daniel Menezes Cavalcante
2016-07-01
Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
Generalized Minimum Variance Control for MDOF Structures under Earthquake Excitation
Lakhdar Guenfaf
2016-01-01
Control of a multi-degree-of-freedom structural system under earthquake excitation is investigated in this paper. A control approach based on the Generalized Minimum Variance (GMV) algorithm is developed and presented. Our approach generalizes to multivariable systems the GMV strategy designed initially for single-input single-output (SISO) systems. Kanai-Tajimi and Clough-Penzien models are used to generate the seismic excitations; these models are calculated using the specific soil parameters. Simulation tests using a 3-DOF structure are performed and show the effectiveness of the control method.
Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging
Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt
2007-01-01
This paper investigates the application of adaptive beamforming in medical ultrasound imaging. A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency sub-band. As opposed to the conventional Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations...
Interdependence of NAFTA capital markets: A minimum variance portfolio approach
López-Herrera Francisco
2014-01-01
We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a "time-varying minimum variance portfolio" that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical "risk-averse" agent during periods of high uncertainty and may be either considered irrational or attributed to a possible "home country bias". This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications for the design of international portfolio investment policies.
Panea, I.; Drijkoningen, G.G.
2008-01-01
Coherent noise generated by surface waves or ground roll within a heterogeneous near surface is a major problem in land seismic data. Array forming based on single-sensor recordings might reduce such noise more robustly than conventional hardwired arrays. We use the minimum-variance
QI Wen-Juan; ZHANG Peng; DENG Zi-Li
2014-01-01
This paper deals with the problem of designing a robust sequential covariance intersection (SCI) fusion Kalman filter for a clustering multi-agent sensor network system with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds of noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present a two-layer SCI fusion robust steady-state Kalman filter which can reduce communication and computation burdens and save energy, and guarantee that the actual filtering error variances have a less-conservative upper bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers, and the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and the robust accuracy relations.
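A minimal sketch of the covariance intersection step for two estimates, with the scalar weight ω chosen by grid search to minimize the trace of the fused covariance (a common practical choice); the paper's sequential, multi-sensor, delay-aware version generalizes this considerably:

```python
import numpy as np

def ci_fuse(x1, P1, x2, P2, n_grid=101):
    """Covariance intersection of two estimates whose cross-correlation
    is unknown; omega is picked by grid search minimizing trace(P)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# complementary local accuracies: each sensor is good on one component
xa, Pa = np.array([0.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.0, 1.0]), np.diag([4.0, 1.0])
xf, Pf = ci_fuse(xa, Pa, xb, Pb)
```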
Testing the Minimum Variance Method for Estimating Large Scale Velocity Moments
Agarwal, Shankar; Watkins, Richard
2012-01-01
The estimation and analysis of large-scale bulk flow moments of peculiar velocity surveys is complicated by non-spherical survey geometry, the non-uniform sampling of the matter velocity field by the survey objects, and the typically large measurement errors of the measured line-of-sight velocities. Previously we have developed an optimal "minimum variance" (MV) weighting scheme for using peculiar velocity data to estimate bulk flow moments for idealized dense and isotropic surveys with Gaussian radial distributions that avoids many of these complications. These moments are designed to be easy to interpret and are comparable between surveys. In this paper, we test the robustness of our MV estimators using numerical simulations. Using MV weights, we estimate the underlying bulk flow moments for DEEP, SFI++ and COMPOSITE mock catalogues extracted from the LasDamas and the Horizon Run numerical simulations and compare these estimates to the true moments calculated directly from the simulation boxes. We show that...
Automated Clutch of AMT Vehicle Based on Adaptive Generalized Minimum Variance Controller
Ze Li; Xinhao Yang
2014-01-01
... of the automated clutch of automatic mechanical transmission vehicle. In this paper, an adaptive generalized minimum variance controller is applied to the automated clutch, which is driven by a brushless DC motor...
Soodabeh Darzi
An experience-oriented, convergence-improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem, as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method in both reaching optimal solutions and robustness.
Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom
2017-01-01
Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides ...... the MVS beamformer is not suitable for imaging continuous targets, and significant resolution gains were obtained only for isolated targets....
SIMULATION STUDY OF GENERALIZED MINIMUM VARIANCE CONTROL FOR AN EXTRACTION TURBINE
Shi Xiaoping
2003-01-01
In an extraction turbine, the pressure of the extracted steam and the rotation speed of the rotor are two important controlled quantities. The traditional linear state feedback control method cannot control the two quantities accurately because of nonlinearity and coupling. A generalized minimum variance control method is studied for an extraction turbine. Firstly, a nonlinear mathematical model of the control system for the two quantities is transformed into a linear system with two white noises. Secondly, a generalized minimum variance control law is applied to the system. A comparative simulation is done. The simulation results indicate that the precision and dynamic quality of the regulating system under the new control law are both better than those under the state feedback control law.
Juan ZHAO; Yunmin ZHU
2009-01-01
The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors make a comprehensive discussion of the relationship between the two estimates. Firstly, the authors consider the classical linear model, in which the coefficient matrix of the linear model is deterministic, and the necessary and sufficient condition for equivalence of the two estimates is derived. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information about the parameter being estimated. Secondly, the authors consider the linear model with a random coefficient matrix, called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information about the parameter.
Kaneko, Kunihiko
2012-09-01
The characterization of plasticity, robustness, and evolvability, an important issue in biology, is studied in terms of phenotypic fluctuations. By numerically evolving gene regulatory networks, the proportionality between the phenotypic variances of epigenetic and genetic origins is confirmed. The former is given by the variance of the phenotypic fluctuation due to noise in the developmental process; and the latter, by the variance of the phenotypic fluctuation due to genetic mutation. The relationship suggests a link between robustness to noise and to mutation, since robustness can be defined by the sharpness of the distribution of the phenotype. Next, the proportionality between the variances is demonstrated to also hold over expressions of different genes (phenotypic traits) when the system acquires robustness through the evolution. Then, evolution under environmental variation is numerically investigated and it is found that both the adaptability to a novel environment and the robustness are made compatible when a certain degree of phenotypic fluctuations exists due to noise. The highest adaptability is achieved at a certain noise level at which the gene expression dynamics are near the critical state to lose the robustness. Based on our results, we revisit Waddington's canalization and genetic assimilation with regard to the two types of phenotypic fluctuations.
A mean–variance objective for robust production optimization in uncertain geological scenarios
Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne
2014-01-01
In this paper, we introduce a mean–variance criterion for production optimization of oil reservoirs and suggest the Sharpe ratio as a systematic procedure to optimally trade off risk and return. In the mean–variance bi-criterion objective function risk appears directly; it also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio... We demonstrate by open-loop simulations of a two-phase synthetic oil field that the mean–variance criterion is able to mitigate the significant inherent geological uncertainties better than the alternative certainty equivalence and robust optimization strategies that have been suggested for production optimization. In production optimization, the optimal water injection profiles and the production...
Image fractal coding algorithm based on complex exponent moments and minimum variance
Yang, Feixia; Ping, Ziliang; Zhou, Suhua
2017-02-01
Image fractal coding achieves a very high compression ratio; its main problem is low coding speed. An algorithm based on Complex Exponent Moments (CEM) and minimum variance is proposed to speed up fractal coding compression. The definition of CEM and its FFT algorithm are presented, and the multi-distortion invariance of CEM is discussed; this invariance fits the fractal property of an image. The optimal matching pair of range blocks and domain blocks in an image is determined by minimizing the variance of their CEM. Theoretical analysis and experimental results have proved that the algorithm can dramatically reduce the iteration time and speed up the image encoding and decoding process.
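The matching step can be sketched as follows. The helper `simple_features` (mean and standard deviation) is a labeled stand-in for the paper's CEM descriptor; the selection rule, minimizing the squared feature distance between range and domain blocks, follows the same variance-minimizing idea:

```python
import numpy as np

def simple_features(block):
    """Stand-in descriptor (mean, std); the paper uses CEM instead."""
    return np.array([block.mean(), block.std()])

def best_match(range_block, domain_blocks, features):
    """Pick the domain block minimizing the squared distance between
    feature vectors, i.e. the variance-minimizing match."""
    fr = features(range_block)
    dists = [np.sum((features(b) - fr) ** 2) for b in domain_blocks]
    return int(np.argmin(dists))

# illustrative pool of candidate 8x8 domain blocks
rng = np.random.default_rng(0)
domains = [rng.random((8, 8)) for _ in range(10)]
```

Because the descriptor is cheap to compare, the expensive per-pair affine fit of classical fractal coding is run only for the selected candidate, which is where the speed-up comes from.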
An improved minimum variance beamforming applied to plane-wave imaging in medical ultrasound
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh; Jensen, Jørgen Arendt
2016-01-01
The minimum variance beamformer (MVB) is an adaptive beamformer which provides images with higher resolution and contrast in comparison with non-adaptive beamformers like delay and sum (DAS). It finds the beamformer weight vector by minimizing the output power while keeping the desired signal unchanged. We used the eigen-based MVB and the generalized coherence factor (GCF) to further improve the quality of MVB beamformed images. The eigen-based MVB projects the weight vector with a transformation matrix constructed by eigen-decomposing the array covariance matrix, which increases resolution and contrast...
Minimum variance system identification with application to digital adaptive flight control
Kotob, S.; Kaufman, H.
1975-01-01
A new on-line minimum variance filter for the identification of systems with additive and multiplicative noise is described which embodies both accuracy and computational efficiency. The resulting filter is shown to use both the covariance of the parameter vector itself and the covariance of the error in identification. A bias reduction scheme can be used to yield asymptotically unbiased estimates. Experimental results for simulated linearized lateral aircraft motion in a digital closed loop mode are presented, showing the utility of the identification schemes.
Kaneko Kunihiko
2011-01-01
Background: Characterization of robustness and plasticity of phenotypes is a basic issue in evolutionary and developmental biology. Robustness and plasticity are concerned with the changeability of a biological system against external perturbations. The perturbations are either genetic, i.e., due to mutations in genes in the population, or epigenetic, i.e., due to noise during development or environmental variations. Thus, the variances of phenotypes due to genetic and epigenetic perturbations provide quantitative measures for such changeability during evolution and development, respectively. Results: Using numerical models simulating the evolutionary changes in the gene regulation network required to achieve a particular expression pattern, we first confirmed that gene expression dynamics robust to mutation evolved in the presence of a sufficient level of transcriptional noise. Under such conditions, the two types of variances in the gene expression levels, i.e., those due to mutations to the gene regulation network and those due to noise in gene expression dynamics, were found to be proportional over a number of genes. The fraction of such genes with a common proportionality coefficient increased with an increase in the robustness of the evolved network. This proportionality was generally confirmed, also under the presence of environmental fluctuations and sexual recombination in diploids, and was explained by an evolutionary robustness hypothesis, in which an evolved robust system suppresses the so-called error catastrophe, the destabilization of the single-peaked distribution in gene expression levels. Experimental evidence for the proportionality of the variances over genes is also discussed. Conclusions: The proportionality between the genetic and epigenetic variances of phenotypes implies a correlation between the robustness (or plasticity) against genetic changes and against noise in development, and also suggests that
Effect of variance ratio on ANOVA robustness: Might 1.5 be the limit?
Blanca, María J; Alarcón, Rafael; Arnau, Jaume; Bono, Roser; Bendayan, Rebecca
2017-06-22
Inconsistencies in the research findings on the F-test's robustness to variance heterogeneity could be related to the lack of a standard criterion to assess robustness or to the different measures used to quantify heterogeneity. In the present paper we use Monte Carlo simulation to systematically examine the Type I error rate of the F-test under heterogeneity. One-way, balanced, and unbalanced designs with monotonic patterns of variance were considered. The variance ratio (VR) was used as a measure of heterogeneity (1.5, 1.6, 1.7, 1.8, 2, 3, 5, and 9), the coefficient of sample size variation as a measure of inequality between group sizes (0.16, 0.33, and 0.50), and the correlation between variance and group size as an indicator of the pairing between them (1, .50, 0, -.50, and -1). Overall, the results suggest that in terms of Type I error a VR above 1.5 may be established as a rule of thumb for a potential threat to F-test robustness under heterogeneity with unequal sample sizes.
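The simulation design can be reproduced in outline: draw groups with equal means but unequal variances and count how often the F-test rejects. The group sizes and standard deviations below are illustrative single cells, not the paper's full factorial grid:

```python
import numpy as np
from scipy import stats

def f_test_type1(ns, sds, n_sim=1000, alpha=0.05, seed=0):
    """Monte Carlo Type I error rate of the one-way ANOVA F-test when
    all group means are equal but group variances differ."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        groups = [rng.normal(0.0, sd, n) for n, sd in zip(ns, sds)]
        _, p = stats.f_oneway(*groups)
        rejections += p < alpha
    return rejections / n_sim

rate_null = f_test_type1([20, 20, 20], [1.0, 1.0, 1.0])  # homogeneous: near alpha
rate_het = f_test_type1([10, 20, 30], [3.0, 1.0, 1.0])   # VR = 9, negative pairing
```

With negative pairing (the smallest group having the largest variance) the empirical rate rises above the nominal alpha, which is exactly the liberal behavior the paper's VR > 1.5 rule of thumb guards against.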
Impulse Noise Filtering Using Robust Pixel-Wise S-Estimate of Variance
Nemanja I. Petrović
2010-01-01
A novel method for impulse noise suppression in images, based on the pixel-wise S-estimator, is introduced. The S-estimator is an alternative to the well-known robust estimate of variance, MAD, that does not require a location estimate and hence is more appropriate for asymmetric distributions, frequently encountered in transient regions of an image. The proposed computationally efficient modification of the robust S-estimator of variance is successfully utilized in an iterative scheme for impulse noise filtering. Another novelty is that the proposed iterative algorithm has an automatic stopping criterion, also based on the pixel-wise S-estimator. The performance of the proposed filter is independent of the image content or noise concentration. The proposed filter outperforms all state-of-the-art filters included in a large comparison, both objectively (in terms of PSNR and MSSIM) and subjectively.
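The contrast between MAD and the location-free S-estimator can be sketched as follows; the normal-consistency constants and the pixel-wise windowing of the paper are omitted:

```python
import numpy as np

def mad(x):
    """Median absolute deviation: needs the median as a location estimate."""
    return np.median(np.abs(x - np.median(x)))

def s_estimator(x):
    """Core of the Rousseeuw-Croux S_n: med_i med_{j != i} |x_i - x_j|.
    Location-free, hence better suited to asymmetric samples; the
    normal-consistency constant (about 1.1926) is omitted here."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])
    inner = [np.median(np.delete(diffs[i], i)) for i in range(n)]
    return np.median(inner)

# one impulse among five pixel values: both scale estimates stay at 0,
# i.e. a single outlier does not inflate the local noise estimate
x = np.array([0.0, 0.0, 0.0, 0.0, 100.0])
```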
Minimum variance imaging based on correlation analysis of Lamb wave signals.
Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi
2016-08-01
In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with a sparse transducer network. Previous studies of MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, the scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand, and the evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, the LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of the LSCC is its independence of damage parameters. Therefore, the LSCC model in the transducer network can be accurately evaluated, and the imaging performance improves subsequently. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance.
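A minimal sketch of a local correlation-coefficient feature: slide a window over a baseline and a current signal and record the correlation in each window. The window length and the synthetic perturbation below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def local_corr(ref, cur, win):
    """Sliding-window correlation coefficient between a baseline and a
    current signal; values below 1 flag windows containing scattered
    energy regardless of its (unknown) amplitude."""
    out = np.ones(len(ref) - win + 1)
    for i in range(len(out)):
        r, c = ref[i:i + win], cur[i:i + win]
        if r.std() > 0 and c.std() > 0:
            out[i] = np.corrcoef(r, c)[0, 1]
    return out

ref = np.sin(np.linspace(0.0, 20.0, 200))
cur = ref.copy()
cur[100:110] += 0.5        # synthetic damage-scattered contribution
lc_same = local_corr(ref, ref, 16)
lc_dam = local_corr(ref, cur, 16)
# the correlation dips only in windows overlapping the perturbation,
# and the dip location does not depend on the perturbation amplitude
```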
Tiong Sieh Kiong
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and to produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered an optimization problem, such that the optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming by controlling the null steering of interference and increasing the signal to interference noise ratio (SINR) for wanted signals.
Early fault detection in automotive ball bearings using the minimum variance cepstrum
Park, Choon-Su; Choi, Young-Chul; Kim, Yang-Hann
2013-07-01
Ball bearings in automotive wheels play an important role in a vehicle: they enable an automobile to run while simultaneously supporting it. Once faults are generated, even small ones, they often grow fast even under normal driving conditions and cause vibration and noise. It is therefore critical to detect faults as early as possible to prevent bearings from generating harsh noise and vibration. How early faults can be detected depends on how well a detection method extracts the information of early faults from the measured signal. Incipient faults are so small that the fault signal is inherently buried in noise. The minimum variance cepstrum (MVC) has been introduced for the observation of periodic impulse signals in noisy environments. We focus in particular on the definition of the MVC, which goes back to the original definition by Bogert et al., in contrast with the recently prevalent definition of cepstral analysis. In this work, the MVC is therefore obtained by liftering a logarithmic power spectrum, with the lifter bank designed by the minimum variance algorithm. Furthermore, it is shown how effective the method is for detecting the periodic fault signal produced by early faults, using automotive ball bearings fitted to an automobile under running conditions. We were able to detect incipient faults in 4 out of 12 nominally normal bearings that had passed the acceptance test, as well as in bearings that were recalled due to noise and vibration. In addition, we compared the results of the proposed method with those of other well-established early fault detection methods, chosen from 4 groups of methods classified by the domain of observation. The results demonstrated that the MVC determined bearing fault periods more clearly than the other methods under the given conditions.
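The baseline on which the MVC builds is the classical power cepstrum of Bogert et al., the inverse transform of the log power spectrum, which turns a periodic impulse train into a peak at its period (quefrency). The paper's minimum variance lifter design is not reproduced here; the signal below is a synthetic stand-in for a repeating bearing fault:

```python
import numpy as np

def power_cepstrum(x):
    """Classical power cepstrum (Bogert et al.): inverse transform of the
    log power spectrum. The MVC additionally designs the lifter bank by
    the minimum variance algorithm, which this sketch omits."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return np.fft.irfft(np.log(spec + 1e-12))  # small floor avoids log(0)

# periodic impulses every 50 samples, as from a repeating bearing fault
x = np.zeros(1000)
x[::50] = 1.0
c = power_cepstrum(x)
# the cepstrum concentrates at quefrencies that are multiples of the
# impulse period (50 samples), exposing the fault period directly
```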
Tanner-Smith, Emily E.; Tipton, Elizabeth
2014-01-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…
New algorithm for robust H2/H∞ filtering with error variance assignment
刘立恒; 邓正隆; 王广雄
2004-01-01
We consider the robust H2/H∞ filtering problem for linear perturbed systems with steady-state error variance assignment. The generalized inverse technique of matrix is introduced, and a new algorithm is developed. After two Riccati equations are solved, the filter can be obtained directly, and the following three performance requirements are simultaneously satisfied: The filtering process is asymptotically stable; the steady-state variance of the estimation error of each state is not more than the individual prespecified upper bound; the transfer function from exogenous noise inputs to error state outputs meets the prespecified H∞ norm upper bound constraint. A numerical example is provided to demonstrate the flexibility of the proposed design approach.
Thermography based breast cancer detection using texture features and minimum variance quantization
Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar
2014-01-01
In this paper, we present a system based on feature extraction and image segmentation techniques for detecting and diagnosing abnormal patterns in breast thermograms. The proposed system consists of three major steps: feature extraction, classification into normal and abnormal patterns, and segmentation of the abnormal pattern. Computed features based on gray-level co-occurrence matrices (GLCM) are used to evaluate the effectiveness of the textural information possessed by mass regions. A total of 20 GLCM features are extracted from thermograms. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a Support Vector Machine classifier, a Naive Bayes classifier, and a K-Nearest Neighbor classifier. To evaluate classification performance, five-fold cross-validation and receiver operating characteristic (ROC) analysis were performed. The verification results show that the proposed algorithm gives the best classification results with the K-Nearest Neighbor classifier, at an accuracy of 92.5%. Image segmentation techniques can play an important role in segmenting and extracting suspected hot regions of interest in breast infrared images. Three image segmentation techniques are discussed: minimum variance quantization, dilation of the image, and erosion of the image. The hottest regions of thermal breast images are extracted and compared to the original images. According to the results, the proposed method has the potential to extract almost the exact shape of tumors. PMID:26417334
Tanner-Smith, Emily E; Tipton, Elizabeth
2014-03-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates.
Chen, Yang; Zou, Ling; Zhou, Bin
2017-07-01
The high mounting precision of a fiber underwater acoustic array yields an array manifold without perturbation. In addition, targets in underwater acoustic array signal processing are either static or slowly moving in azimuth, so the covariance matrix can be estimated accurately by prolonging the observation time. However, performance is still limited by poor bearing resolution due to the small aperture, low SNR, and strong interferences. In this paper, diagonal rejection (DR) technology for Minimum Variance Distortionless Response (MVDR) beamforming is developed to enhance resolution performance. The core idea of DR is to reject the main diagonal elements of the covariance matrix in order to improve the output signal to interference and noise ratio (SINR); the definition of SINR here implicitly assumes independence between the spatial filter and the received observations at which the SINR is measured. The power of the noise concentrates on the diagonal of the covariance matrix and is then integrated into the output beams. With the diagonal noise rejected by a factor smaller than 1, the array weights of MVDR concentrate on interference suppression, leading to better resolution capability. The algorithm is proved theoretically, with the optimal rejection coefficient derived under both infinite- and finite-snapshot scenarios. Numerical simulations were conducted for a linear array of eight half-wavelength-spaced elements. Both the resolution and Direction-of-Arrival (DOA) performance of MVDR and DR-based MVDR (DR-MVDR) were compared under different SNRs and snapshot numbers. The conclusion is that, with the covariance matrix accurately estimated, DR-MVDR provides a lower sidelobe output level and better bearing resolution than MVDR without harming DOA performance.
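A minimal numerical sketch of the DR idea, assuming standard MVDR weights with the main diagonal of the covariance matrix shrunk by a rejection coefficient rho < 1; the eight-element array matches the paper's simulation, but the powers and rho below are illustrative choices, not the paper's derived optimum:

```python
import numpy as np

def steering(m, theta):
    """Steering vector for a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

def mvdr_weights(R, a):
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def dr_mvdr_weights(R, a, rho=0.5):
    """Shrink the main diagonal (noise power) by rho < 1 before MVDR."""
    R_dr = R - (1.0 - rho) * np.diag(np.diag(R))
    return mvdr_weights(R_dr, a)

m = 8
a0 = steering(m, 0.0)               # look direction (broadside)
ai = steering(m, np.deg2rad(20.0))  # strong interferer at 20 degrees
R = 100.0 * np.outer(ai, ai.conj()) + np.eye(m)  # ideal covariance

w = dr_mvdr_weights(R, a0, rho=0.5)
```

The distortionless constraint toward the look direction is preserved while the interferer is strongly attenuated.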
Bereteu, L; Drăgănescu, G E; Stănescu, D; Sinescu, C
2011-12-01
In this paper, we search an adequate quantitative method based on minimum variance spectral analysis in order to reflect the dependence of the speech quality on the correct positioning of the dental prostheses. We also search some quantitative parameters, which reflect the correct position of dental prostheses in a sensitive manner.
Georgy Shevlyakov; Kiseon Kim
2005-01-01
A brief survey of former and recent results on Huber's minimax approach in robust statistics is given. The least informative distributions minimizing Fisher information for location over several distribution classes with upper-bounded variances and subranges are written down. These least informative distributions are qualitatively different from the classical Huber solution and share the following structure: (i) with relatively small variances they are short-tailed, in particular normal; (ii) with relatively large variances they are heavy-tailed, in particular the Laplace; (iii) with relatively moderate variances they are a compromise between the two. These results make it possible to raise the efficiency of minimax robust procedures while retaining high stability, as compared to the classical Huber procedure for contaminated normal populations. In application to signal detection problems, the proposed minimax detection rule has proved to be robust and close to Huber's for heavy-tailed distributions, and more efficient than Huber's for short-tailed ones, both asymptotically and on finite samples.
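For context, Huber's classical M-estimator of location, which the minimax results above refine, can be sketched via iteratively reweighted least squares; the tuning constant and the contamination pattern below are illustrative assumptions:

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    mu = np.median(x)
    s = np.median(np.abs(x - mu)) / 0.6745   # MAD scale estimate
    for _ in range(max_iter):
        r = (x - mu) / s
        # Huber weights: 1 inside [-k, k], k/|r| outside
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(6)
clean = rng.normal(0.0, 1.0, 95)
data = np.concatenate([clean, np.full(5, 20.0)])  # 5% gross outliers
est = huber_location(data)   # stays near 0, unlike the sample mean
```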
Rogan, Joanne C.; Keselman, H. J.
1977-01-01
The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
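The kind of Monte Carlo study behind such results can be sketched as follows: a hand-rolled one-way ANOVA F statistic, an approximate 5% critical value for df = (2, 27), and illustrative heterogeneous variances with equal group sizes (the paper's exact simulation conditions are not reproduced here):

```python
import numpy as np

def anova_f(groups):
    """Classic one-way ANOVA F statistic."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ssb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ssw = sum(np.sum((g - np.mean(g)) ** 2) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

rng = np.random.default_rng(2)
crit = 3.354           # approximate F_{0.05} critical value for df = (2, 27)
sds = (1.0, 2.0, 4.0)  # heterogeneous SDs, equal means: H0 is true
trials = 2000
reject = sum(
    anova_f([rng.normal(0.0, s, 10) for s in sds]) > crit
    for _ in range(trials)
)
rate = reject / trials  # empirical Type I error vs. nominal 0.05
```

Comparing `rate` against the nominal 0.05 under different variance patterns and sample-size configurations is exactly the kind of robustness check the abstract describes.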
Kleijnen, J.P.C.; Beers, van W.C.M.
2005-01-01
This paper investigates the use of Kriging in random simulation when the simulation output variances are not constant. Kriging gives a response surface or metamodel that can be used for interpolation. Because Ordinary Kriging assumes constant variances, this paper also applies Detrended Kriging to e
Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.
Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash
2017-01-01
Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
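A toy sketch of the rooting criterion, restricted for simplicity to rooting at internal nodes (the method itself optimizes over points along edges, in linear time); the tree topology and branch lengths are hypothetical:

```python
import numpy as np
from collections import deque

# Hypothetical unrooted tree: tips A-D, internal nodes u, v, with branch lengths
edges = {
    ("v", "A"): 1.0, ("v", "B"): 1.2, ("v", "u"): 0.5,
    ("u", "C"): 2.0, ("u", "D"): 1.9,
}
adj = {}
for (a, b), w in edges.items():
    adj.setdefault(a, []).append((b, w))
    adj.setdefault(b, []).append((a, w))
tips = {"A", "B", "C", "D"}

def tip_distance_variance(root):
    """Variance of root-to-tip path lengths when rooting at a given node."""
    dist = {root: 0.0}
    queue = deque([root])
    while queue:
        x = queue.popleft()
        for y, w in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + w
                queue.append(y)
    return float(np.var([dist[t] for t in tips]))

# Minimum variance root among internal nodes
best = min((node for node in adj if node not in tips), key=tip_distance_variance)
```

Here node "u" balances the long branches to C and D against the shorter paths to A and B, so it yields the smaller root-to-tip variance.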
Feldman, Hume A; Hudson, Michael J
2009-01-01
The low order moments of the large scale peculiar velocity field are sensitive probes of the matter density fluctuations on very large scales. However, peculiar velocity surveys have varying spatial distributions of tracers, and so the moments estimated are hard to model and thus are not directly comparable between surveys. In addition, the sparseness of typical proper distance surveys can lead to aliasing of small scale power into what is meant to be a probe of the largest scales. Here we extend our previous optimization analysis of the bulk flow to include the shear and octupole moments where velocities are weighted to give an optimal estimate of the moments of an idealized survey, with the variance of the difference between the estimate and the actual flow being minimized. These "minimum variance" (MV) estimates can be designed to calculate the moments on a particular scale with minimal sensitivity to small scale power, and thus different surveys can be directly compared. The MV moments were also designed ...
Chang, Wen-Jer; Huang, Bo-Jyun
2014-11-01
The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered in this paper is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered in this paper include stability, passivity and individual state variance constraints. The Lyapunov stability theory is employed to derive sufficient conditions to achieve the above performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach to satisfy multiple performance constraints for perturbed nonlinear systems with multiplicative noises. At last, a numerical example for the control of perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method.
Yang Kailiang [Department of Automation, Shanghai Jiaotong University, 800 Dong Chuan Road, Shanghai 200240 (China); Lu Junguo [Department of Automation, Shanghai Jiaotong University, 800 Dong Chuan Road, Shanghai 200240 (China)], E-mail: jglu@sjtu.edu.cn
2009-03-15
In this paper, we consider the robust variance-constrained control problem for uncertain linear continuous time-delay systems subjected to parameter uncertainties. The purpose of this multi-objective control problem is to design a static state feedback controller that does not depend on the parameter uncertainties such that the resulting closed-loop system is asymptotically stable and the steady-state variance of each state is not more than the individual pre-specified value simultaneously. Using the linear matrix inequality approach, the existence conditions of such controllers are derived. A parameterized representation of the desired controllers is presented in terms of the feasible solutions to a certain linear matrix inequality system. An illustrative numerical example is provided to demonstrate the effectiveness of the proposed results.
Variance-Constrained Robust Estimation for Discrete-Time Systems with Communication Constraints
Baofeng Wang
2014-01-01
This paper is concerned with a new filtering problem in networked control systems (NCSs) subject to limited communication capacity, which includes measurement quantization, random transmission delay, and packet loss. The measurements are first quantized via a logarithmic quantizer and then transmitted through a digital communication network with random delay and packet loss. These three communication-constraint phenomena, which can be seen as a class of uncertainties, are formulated as a stochastic parameter uncertainty system. The purpose of the paper is to design a linear filter such that, for all the communication constraints, the error state of the filtering process is mean square bounded and the steady-state variance of the estimation error for each state is not more than the individual prescribed upper bound. It is shown that the desired filter can effectively be obtained if there are positive definite solutions to a couple of algebraic Riccati-like inequalities or linear matrix inequalities. Finally, an illustrative numerical example is presented to demonstrate the effectiveness and flexibility of the proposed design approach.
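A common form of the logarithmic quantizer mentioned above (the Fu-Xie style with geometric quantization levels; the parameters u0 and rho are illustrative assumptions, as the paper's exact parameterization is not reproduced here) can be sketched as:

```python
import math

def log_quantize(v, u0=1.0, rho=0.5):
    """Fu-Xie-style logarithmic quantizer with levels {±rho^i * u0}.

    Satisfies the sector bound |q(v) - v| <= delta * |v|,
    with delta = (1 - rho) / (1 + rho).
    """
    if v == 0.0:
        return 0.0
    # Cell boundaries lie at rho^(i-1) * u0 * (1 + rho) / 2
    t = 2.0 * abs(v) / ((1 + rho) * u0)
    i = math.floor(math.log(t, rho)) + 1
    return math.copysign(rho ** i * u0, v)

delta = (1 - 0.5) / (1 + 0.5)   # = 1/3 for rho = 0.5
q = log_quantize(0.8)           # snaps 0.8 to the level 1.0
```

The sector bound is what lets the quantization error be absorbed into the stochastic parameter uncertainty description used in the analysis.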
A Weakly Robust PTAS for Minimum Clique Partition in Unit Disk Graphs
Pirwani, Imran A.; Salavatipour, Mohammad R.
We consider the problem of partitioning the set of vertices of a given unit disk graph (UDG) into a minimum number of cliques. The problem is NP-hard, and various constant factor approximations are known, with the best known ratio being 3. Our main result is a weakly robust polynomial time approximation scheme (PTAS) for UDGs expressed with edge-lengths: for any ɛ > 0, it either (i) computes a clique partition, or (ii) produces a certificate proving that the graph is not a UDG. If the graph is a UDG, the partition is guaranteed to be within a (1 + ɛ) factor of the optimum; if it is not, the algorithm either still computes a clique partition or detects that the graph is not a UDG. Noting that recognition of UDGs is NP-hard even with edge lengths, this is a significant weakening of the input model.
A fast and Robust Algorithm for general inequality/equality constrained minimum time problems
Briessen, B. [Sandia National Labs., Albuquerque, NM (United States); Sadegh, N. [Georgia Inst. of Tech., Atlanta, GA (United States). School of Mechanical Engineering
1995-12-01
This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm's solution time is linear in the number of Runge-Kutta steps and in the number of parameters used to discretize the control input history. The method is being applied to a three-link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution for a large trajectory execution time t_f, then reduces t_f by a small amount and re-solves. The fixed-time re-solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convergence to a feasible solution of the full nonlinear problem.
Mohammad Ali Barati
2016-04-01
Multi-period models of portfolio selection have been developed in the literature under various assumptions. In this study, for the first time, the portfolio selection problem is modeled based on mean-semivariance with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included, and the return parameters of the assets were considered as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, results were analyzed using numerical instances, and sensitivity analyses were executed. In the numerical study, the problem was solved with and without each class of constraints, including transaction costs and minimum transaction lots. In addition, using sensitivity analysis, the results of the model were presented under variations of the minimum expected rate over the programming periods.
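The downside-risk objective can be sketched as follows: a plain semivariance evaluation with a proportional transaction cost term. All rates, weights, and scenario data are illustrative, and the GA search, minimum-lot constraints, and fuzzy return parameters are omitted:

```python
import numpy as np

def semivariance(returns, target=0.0):
    """Downside semivariance: mean squared shortfall below the target."""
    shortfall = np.minimum(returns - target, 0.0)
    return np.mean(shortfall ** 2)

def portfolio_objective(weights, scenario_returns, prev_weights,
                        cost_rate=0.001, lam=3.0):
    """Mean return minus lam * semivariance, net of proportional costs."""
    port = scenario_returns @ weights
    cost = cost_rate * np.sum(np.abs(weights - prev_weights))
    return np.mean(port) - lam * semivariance(port) - cost

rng = np.random.default_rng(3)
scen = rng.normal(0.01, 0.05, size=(500, 4))  # 500 scenarios, 4 assets
w_prev = np.full(4, 0.25)                     # current holdings
w_new = np.array([0.4, 0.3, 0.2, 0.1])       # candidate rebalance
score = portfolio_objective(w_new, scen, w_prev)
```

A GA of the kind described would evaluate this objective as the fitness of each candidate weight vector, with the lot-size rules encoded in the chromosome.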
Stenroos, Matti; Hauk, Olaf
2013-11-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only.
Uchoa, Eduardo; Fukasawa, Ricardo; Lysgaard, Jens
2008-01-01
This paper presents a robust branch-cut-and-price algorithm for the Capacitated Minimum Spanning Tree Problem (CMST). The variables are associated to q-arbs, a structure that arises from a relaxation of the capacitated prize-collecting arborescence problem in order to make it solvable in pseudo-polynomial time. […] or the size of the LPs that are actually solved. Computational results on benchmark instances from the OR-Library show very significant improvements over previous algorithms. Several open instances could be solved to optimality.
Brahma, Sanjoy; Datta, Biswa
2009-07-01
The partial quadratic eigenvalue assignment problem (PQEVAP) concerns the reassignment of a small number of undesirable eigenvalues of a quadratic matrix pencil, while leaving the remaining large number of eigenvalues and the corresponding eigenvectors unchanged. The problem arises in controlling undesirable resonance in vibrating structures and in stabilizing control systems. The solution of this problem requires computations of a pair of feedback matrices. For practical effectiveness, these feedback matrices must be computed in such a way that their norms and the condition number of the closed-loop eigenvector matrix are as small as possible. These considerations give rise to the minimum norm partial quadratic eigenvalue assignment problem (MNPQEVAP) and the robust partial quadratic eigenvalue assignment problem (RPQEVAP), respectively. In this paper we propose new optimization based algorithms for solving these problems. The problems are solved directly in a second-order setting without resorting to a standard first-order formulation so as to avoid the inversion of a possibly ill-conditioned matrix and the loss of exploitable structures of the original model. The algorithms require the knowledge of only the open-loop eigenvalues to be replaced and their corresponding eigenvectors. The remaining open-loop eigenvalues and their corresponding eigenvectors are kept unchanged. The invariance of the large number of eigenvalues and eigenvectors under feedback is guaranteed by a proven mathematical result. Furthermore, the gradient formulas needed to solve the problems by using the quasi-Newton optimization technique employed are computed in terms of the known quantities only. Above all, the proposed methods do not require the reduction of the model order or the order of the controller, even when the underlying finite element model has a very large degree of freedom. These attractive features, coupled with minimal computational requirements, such as solutions of small
Wei, Xile; Lu, Meili; Wang, Jiang; Tsang, K. M.; Deng, Bin; Che, Yanqiu
2010-05-01
We consider the assumption of the existence of a general nonlinear internal model, introduced in the design of robust output regulators for a class of minimum-phase nonlinear systems of relative degree r (r ≥ 2). The robust output regulation problem can be converted into a robust stabilisation problem for an augmented system consisting of the given plant and a high-gain nonlinear internal model that perfectly reproduces the bounded exogenous signal (not only periodic but also nonperiodic) generated by a nonlinear system satisfying a general immersion assumption. The state feedback controller is designed to guarantee asymptotic convergence of the system errors to the zero-error manifold. Furthermore, the proposed scheme makes use of an output feedback dynamic controller that processes only the regulated output error, using a high-gain observer to robustly estimate the derivatives of the regulated output error. The stabilisation analysis of the resulting closed-loop systems leads to regional as well as semi-global robust output regulation, achieved for appointed initial conditions in the state space, for all possible values of the uncertain parameter vector and for the exogenous signal ranging over an arbitrary compact set.
Bai, Zheng-Jian; Datta, Biswa Nath; Wang, Jinwei
2010-04-01
The partial quadratic eigenvalue assignment problem (PQEVAP) concerns reassigning a few undesired eigenvalues of a quadratic matrix pencil to suitably chosen locations and keeping the other large number of eigenvalues and eigenvectors unchanged (no spill-over). The problem naturally arises in controlling dangerous vibrations in structures by means of active feedback control design. For practical viability, the design must be robust, which requires that the norms of the feedback matrices and the condition number of the closed-loop eigenvectors are as small as possible. The problem of computing feedback matrices that satisfy the above two practical requirements is known as the Robust Partial Quadratic Eigenvalue Assignment Problem (RPQEVAP). In this paper, we formulate the RPQEVAP as an unconstrained minimization problem with the cost function involving the condition number of the closed-loop eigenvector matrix and two feedback norms. Since only a small number of eigenvalues of the open-loop quadratic pencil are computable using the state-of-the-art matrix computational techniques and/or measurable in a vibration laboratory, it is imperative that the problem is solved using these small number of eigenvalues and the corresponding eigenvectors. To this end, a class of the feedback matrices are obtained in parametric form, parameterized by a single parametric matrix, and the cost function and the required gradient formulas for the optimization problem are developed in terms of the small number of eigenvalues that are reassigned and their corresponding eigenvectors. The problem is solved directly in quadratic setting without transforming it to a standard first-order control problem and most importantly, the significant "no spill-over property" of the closed-loop eigenvalues and eigenvectors is established by means of a mathematical result. These features make the proposed method practically applicable even for very large structures. Results on numerical experiments show
Performance assessment of excitation system based on minimum variance benchmark
张虹; 徐滨; 高健; 庞健
2014-01-01
Step response tests are generally used to evaluate synchronous generator excitation system performance, but they cannot be performed online. A method for evaluating excitation system performance against the minimum variance control benchmark is therefore proposed. The output variance achieved under a minimum variance controller is taken as the upper bound on performance, and the ratio of this benchmark to the actual output variance of the system is defined as the performance index. To avoid expanding the Diophantine equation, the filtering and correlation analysis (FCOR) algorithm is introduced. The analysis shows that this method requires only the synchronous generator output voltage data and a priori knowledge of the system dead time d. Simulation results show that the method simplifies the calculation process and evaluates the performance of the excitation control system online, in a timely and accurate manner.
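A simplified sketch of a minimum variance benchmark index estimated from routine output data (a Harris-style AR-model variant rather than the paper's FCOR algorithm; the model order and the test signals below are illustrative):

```python
import numpy as np

def harris_index(y, d, ar_order=10):
    """Minimum variance performance index from routine closed-loop output data.

    Fits an AR model to y, takes the first d impulse response coefficients
    (the feedback-invariant part under a dead time of d samples) and returns
    sigma^2_mv / sigma^2_y, so 1 means minimum variance performance.
    """
    y = np.asarray(y, float) - np.mean(y)
    p = ar_order
    # Least-squares AR fit: y[t] = a1*y[t-1] + ... + ap*y[t-p] + e[t]
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    e = y[p:] - X @ a
    # Impulse response of 1 / (1 - a1 q^-1 - ... - ap q^-p), lags 0..d-1
    psi = np.zeros(d)
    psi[0] = 1.0
    for j in range(1, d):
        psi[j] = sum(a[k] * psi[j - 1 - k] for k in range(min(j, p)))
    return np.var(e) * np.sum(psi ** 2) / np.var(y)

rng = np.random.default_rng(7)
e = rng.normal(size=3000)
idx_white = harris_index(e, d=3)     # white output: near-optimal loop
y = np.zeros(3000)
for t in range(1, 3000):             # sluggish loop: strong autocorrelation
    y[t] = 0.9 * y[t - 1] + e[t]
idx_slow = harris_index(y, d=3)
```

An index near 1 indicates the loop is already close to the minimum variance bound; a low index flags potential for retuning.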
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linearly constrained minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and to steer a strong beam toward the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam toward the target user precisely, and are not good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the LCMV weights. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by integrating PSO, DM-AIS, and GSA into LCMV through suppression of interference in undesired directions. Furthermore, GSA proves to be a more effective technique for LCMV beamforming optimization than PSO. The algorithms were implemented in MATLAB.
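The baseline LCMV weight computation that the metaheuristics above start from can be sketched as follows, using the standard closed form w = R^{-1}C (C^H R^{-1}C)^{-1} f; the array size, directions, and powers are illustrative assumptions:

```python
import numpy as np

def steering(m, theta):
    """Steering vector for a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

def lcmv_weights(R, C, f):
    """LCMV (Frost) weights: w = R^{-1} C (C^H R^{-1} C)^{-1} f."""
    RiC = np.linalg.solve(R, C)
    return RiC @ np.linalg.solve(C.conj().T @ RiC, f)

m = 10
a_d = steering(m, 0.0)               # desired user at broadside
a_i = steering(m, np.deg2rad(30.0))  # interferer at 30 degrees
R = 50.0 * np.outer(a_i, a_i.conj()) + np.eye(m)
# Constraints: unit gain toward the target, a null on the interferer
C = np.column_stack([a_d, a_i])
f = np.array([1.0, 0.0])
w = lcmv_weights(R, C, f)
```

The constraints are met exactly by construction; the optimization methods in the paper search for weights with better SINR than this empirical baseline under imperfect conditions.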
Davendralingam, Navindran
Conceptual design of aircraft and of the airline network (routes) on which aircraft fly is inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through airline revenue management systems. Competition between airlines then translates into latent passenger observations of the routes served between O-D pairs and of ticket pricing, which in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that treats aircraft design, airline network design and passenger demand as a unified framework, providing better integrated design solutions that maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage the risk of serving future projected demand using a yet-to-be-introduced aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address the estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is
Sørensen, John Dalsgaard; Rizzuto, Enrico; Narasimhan, Harikrishna
2012-01-01
More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements for efficiency in design and execution and an increased risk of human error, has made requirements for the robustness of structures increasingly necessary. […] In this paper, a theoretical and risk-based framework is presented which facilitates the quantification of robustness and thus supports the formulation of pre-normative guidelines.
Robustness Beamforming Algorithms
Sajad Dehghani
2014-04-01
Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach is presented that makes adaptive minimum variance distortionless response (MVDR) beamforming robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem with a quadratic objective function and a quadratic constraint. The problem is nonconvex but is converted into a convex optimization problem, which is solved by the interior-point method, yielding the optimum weight vector for robust beamforming.
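For comparison, a widely used simple robustification of MVDR, diagonal loading, can be sketched as below; this is not the paper's interior-point QCQP solver, and the loading level, mismatch, and snapshot model are illustrative assumptions:

```python
import numpy as np

def robust_mvdr(R_hat, a_nominal, loading=1.0):
    """Diagonal-loading surrogate for robust MVDR beamforming."""
    R = R_hat + loading * np.eye(len(a_nominal))
    Ri_a = np.linalg.solve(R, a_nominal)
    return Ri_a / (a_nominal.conj() @ Ri_a)

m = 8
rng = np.random.default_rng(4)
k = np.arange(m)
a_true = np.exp(1j * np.pi * k * np.sin(0.05))  # actual arrival direction
a_nom = np.exp(1j * np.pi * k * np.sin(0.0))    # assumed (mismatched) steering
# Sample covariance from 200 snapshots: signal (power 5) plus unit noise
s = np.sqrt(5) * rng.normal(size=200)
noise = (rng.normal(size=(m, 200)) + 1j * rng.normal(size=(m, 200))) / np.sqrt(2)
snaps = np.outer(a_true, s) + noise
R_hat = snaps @ snaps.conj().T / 200
w = robust_mvdr(R_hat, a_nom, loading=5.0)
```

Loading inflates the assumed noise floor, which limits the signal self-nulling that plain MVDR exhibits under steering mismatch; the convex formulation in the paper chooses this trade-off from an explicit uncertainty set instead of a fixed loading level.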
Tun, F A Hla Myo; Naing, T C Zaw Min
2010-01-01
In this paper, the minimum channel gain flow with uncertainty in the demand vector is examined. The approach is based on a transformation of uncertainty in the demand vector into uncertainty in the gain vector. OFDM systems are known to overcome the impairment of the wireless channel by splitting the given system bandwidth into parallel sub-carriers, on which data symbols can be transmitted simultaneously. This enables the possibility of enhancing the system's performance by deploying adaptive mechanisms, namely power distribution and dynamic sub-carrier assignment. The performance of maximizing the minimum throughput has been analyzed using MATLAB code.
Ahmadi, A M; Bahrampour, A R; Ravanbod, H
2013-01-01
In this work, the residual complexity (RC) similarity measure is employed for time delay estimation (TDE) in gas pipe leak localization. The result of TDE by RC is compared with those of the Cross Correlation (CC) and Mutual Information (MI) similarity measures based on our experimental data. The comparison confirms the advantages of RC relative to CC and MI in robustness against both correlated noises and a reduced number of samples. These advantages originate not only from the mathematical nature of the RC similarity measure, which considers interdependency, but also from the broadband frequency content of acoustic waves propagating through gas pipes.
Gil-Cacho, Jose M.; Van Waterschoot, Toon; Moonen, Marc
2014-01-01
...-GSVS-FDAF-PEM-AFROW algorithm obtains outstanding robustness and smooth adaptation in highly adverse scenarios, such as bursting DT at high levels and a change of acoustic path during continuous DT. Similarly, in AFC simulations, the algorithm outperforms state-of-the-art algorithms when using a low-order near-end speech model and in colored non-stationary noise. ...
Rossi, Stefano; Petrelli, Maurizio; Morgavi, Daniele; González-García, Diego; Fischer, Lennart A.; Vetere, Francesco; Perugini, Diego
2017-08-01
The mixing of magmas is a fundamental process in the Earth system, causing extreme compositional variations in igneous rocks. This process can develop with different intensities in both space and time, making the interpretation of compositional patterns in igneous rocks a petrological challenge. As a time-dependent process, magma mixing has been suggested to preserve information about the time elapsed between the injection of a new magma into sub-volcanic magma chambers and eruption. This has allowed the use of magma mixing as an additional volcanological tool to infer mixing-to-eruption timescales. In spite of the potential of magma mixing processes to provide information about the timing of volcanic eruptions, their statistical robustness is not yet established, although this is a prerequisite for applying the conceptual model reliably. Here, new chaotic magma mixing experiments were performed at different times using natural melts. The degree of reproducibility of the experimental results was tested by repeating one experiment under the same starting conditions and comparing the compositional variability. We further tested the robustness of the statistical analysis by randomly removing a progressively increasing number of samples from the analysed dataset. Results highlight the robustness of the method for deriving empirical relationships linking the efficiency of chemical exchanges and mixing time. These empirical relationships remain valid after removing up to 80% of the analytical determinations. The experimental results were applied to constrain the homogenization time of chemical heterogeneities in a natural magmatic system during mixing. The calculations show that, when the mixing dynamics generate millimetre-thick filaments, homogenization timescales of the order of a few minutes are to be expected.
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad R.; Okou, Cédric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition across the stocks differs. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
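The minimum-risk end of the mean-variance frontier has a simple closed form when only the full-investment constraint is imposed. A minimal sketch (the covariance numbers are illustrative, not the FBMKLCI data used in the study):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio under the full-investment constraint
    (shorting allowed): w = Sigma^{-1} 1 / (1^T Sigma^{-1} 1)."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

# Toy covariance matrix for three assets (illustrative numbers only)
cov = np.array([[0.040, 0.006, 0.012],
                [0.006, 0.025, 0.004],
                [0.012, 0.004, 0.030]])
w = min_variance_weights(cov)
print(w)                # portfolio weights, summing to 1
print(w @ cov @ w)      # minimized portfolio variance
```

Adding a target-return constraint, as in the mean-variance model of the abstract, turns this into an equality-constrained quadratic program with an analogous closed-form solution.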
Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In many scenarios, a periodic signal of interest is often contaminated by different types of noise that may render many existing pitch estimation methods suboptimal, e.g., due to an incorrect white Gaussian noise assumption. In this paper, a method is established to estimate the pitch of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of the UFEs subject to the constraint of integer harmonics. The MVDR filter is designed based on noise statistics, making it robust against different noise situations. The simulation results confirm that the proposed MVDR method outperforms the state-of-the-art weighted least squares (WLS) pitch estimator in colored noise and yields robust pitch estimates against missing harmonics in some time-frames.
Simultaneous optimal estimates of fixed effects and variance components in the mixed model
WU Mixia; WANG Songgui
2004-01-01
For a general linear mixed model with two variance components, a set of simple conditions is obtained under which (i) the least squares estimates of the fixed effects and the analysis of variance (ANOVA) estimates of the variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) exact confidence intervals for the fixed effects and uniformly optimal unbiased tests on the variance components are given; and (iii) the exact probability expression for the ANOVA estimates of variance components taking negative values is obtained.
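For intuition, the ANOVA (method-of-moments) estimates referred to above can be computed directly in the balanced one-way special case; this sketch is illustrative only and does not reproduce the paper's general two-variance-component mixed model:

```python
import numpy as np

def anova_variance_components(y):
    """ANOVA (method-of-moments) estimates for a balanced one-way random
    effects model y_ij = mu + a_i + e_ij:
        sigma_e^2_hat = MSW,   sigma_a^2_hat = (MSB - MSW) / n,
    where MSB/MSW are the between/within mean squares and n is the group size.
    Note the between-groups estimate can come out negative, as the abstract notes."""
    y = np.asarray(y, dtype=float)          # shape: (k groups, n obs per group)
    k, n = y.shape
    group_means = y.mean(axis=1)
    grand_mean = y.mean()
    msb = n * ((group_means - grand_mean) ** 2).sum() / (k - 1)
    msw = ((y - group_means[:, None]) ** 2).sum() / (k * (n - 1))
    return msw, (msb - msw) / n

# Simulated balanced data: 5 groups of 10, true variances 1.0 (residual) and 4.0 (group)
rng = np.random.default_rng(1)
a = rng.normal(0.0, 2.0, size=(5, 1))
y = 3.0 + a + rng.normal(0.0, 1.0, size=(5, 10))
print(anova_variance_components(y))
```

On a tiny hand-checkable dataset such as [[0, 2], [10, 12]], the estimates are exactly MSW = 2 and (MSB - MSW)/n = 49.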
Conversations across Meaning Variance
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
张宁; 赵亮
2015-01-01
As for the principal component analysis frequently used in assessing the quality of information disclosure, this paper points out that it relies on the sample covariance matrix, the estimation of which is easily influenced by outliers. Targeting this problem, this paper suggests that robust covariance estimation be introduced into the quality assessment of information disclosure, and carries out a comprehensive analysis and comparison of the information disclosure quality of some listed companies in 2015 to show the stability of the new method.
Statistical inference of Minimum Rank Factor Analysis
Shapiro, A; Ten Berge, JMF
2002-01-01
For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
李敏; 王飞雪; 李峥嵘; 曾祥华
2012-01-01
To mitigate multipath at monitoring (reference) stations of satellite navigation systems, a weighting criterion for antenna arrays called the Down-up-ratio Constrained Minimum Variance (DCMV) criterion is proposed in this paper. The proposed criterion aims at minimizing the array output power under the constraint that the down-up-ratio in the desired signal direction is not greater than some threshold r. Therefore, this criterion is able to mitigate both interference and multipath. Simulation results show that it outperforms other criteria used in satellite navigation systems, such as Power Inversion, Beam Steering, and the Maximum Signal-to-Interference-plus-Noise Ratio criterion. The DCMV criterion is able to quantitatively control the incoming multipath energy; however, it loses some array gain as a cost.
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Estimation of genetic variation in residual variance in female and male broiler chickens
Mulder, H.A.; Hill, W.G.; Vereijken, A.; Veerkamp, R.F.
2009-01-01
In breeding programs, robustness of animals and uniformity of end product can be improved by exploiting genetic variation in residual variance. Residual variance can be defined as environmental variance after accounting for all identifiable effects. The aims of this study were to estimate genetic va
Fixed effects analysis of variance
Fisher, Lloyd; Birnbaum, Z W; Lukacs, E
1978-01-01
Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis...
Michel, Loïc
2012-01-01
This preliminary work presents a simple derivation of the standard model-free control in order to control switching minimum phase, non-minimum phase and time-delay systems. The robustness of the proposed method is studied in simulation.
Statistical inference on variance components
Verdooren, L.R.
1988-01-01
In several sciences but especially in animal and plant breeding, the general mixed model with fixed and random effects plays a great role. Statistical inference on variance components means tests of hypotheses about variance components, constructing confidence intervals for them, estimating them,
Hedging with stock index futures: downside risk versus the variance
Brouwer, F.; Nat, van der M.
1995-01-01
In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to Fis
Performance Analysis of Intelligent Robust Facility Layout Design
Moslemipour, G.; Lee, T. S.; Loong, Y. T.
2017-03-01
Design of a robust production facility layout with minimum handling cost (MHC) presents an appropriate approach to tackle facility layout problems in a dynamic volatile environment, in which product demands randomly change in each planning period. The objective of the design is to find the robust facility layout with minimum total material handling cost over the entire multi-period planning horizon. This paper proposes a new mathematical model for designing robust machine layout in the stochastic dynamic environment of manufacturing systems using quadratic assignment problem (QAP) formulation. In this investigation, product demands are assumed to be normally distributed random variables with known expected value, variance, and covariance that randomly change from period to period. The proposed model was verified and validated using randomly generated numerical data and benchmark examples. The effect of dependent product demands and varying interest rate on the total cost function of the proposed model has also been investigated. Sensitivity analysis on the proposed model has been performed. Dynamic programming and simulated annealing optimization algorithms were used in solving the modeled example problems.
Methodology in robust and nonparametric statistics
Jurecková, Jana; Picek, Jan
2012-01-01
Contents: Introduction and Synopsis (Introduction; Synopsis). Preliminaries (Introduction; Inference in Linear Models; Robustness Concepts; Robust and Minimax Estimation of Location; Clippings from Probability and Asymptotic Theory; Problems). Robust Estimation of Location and Regression (Introduction; M-Estimators; L-Estimators; R-Estimators; Minimum Distance and Pitman Estimators; Differentiable Statistical Functions; Problems). Asymptotic Representations for L-Estimators ...
Robust dual-response optimization
Yanikoglu, Ihsan; den Hertog, Dick; Kleijnen, J.P.C.
2016-01-01
This article presents a robust optimization reformulation of the dual-response problem developed in response surface methodology. The dual-response approach fits separate models for the mean and the variance and analyzes these two models in a mathematical optimization setting. We use metamodels esti
Variance estimation in neutron coincidence counting using the bootstrap method
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The outline of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
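The resampling procedure described in the abstract can be sketched generically; this is an illustrative implementation of a bootstrap variance estimate, not the NMC analysis code (function and parameter names are assumptions):

```python
import numpy as np

def bootstrap_variance(sample, statistic, n_boot=2000, seed=0):
    """Estimate the variance of a statistic by resampling: draw n_boot
    pseudo-samples with replacement, evaluate the statistic on each, and
    take the variance of the resulting bootstrap distribution."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    stats = np.array([statistic(rng.choice(sample, size=sample.size, replace=True))
                      for _ in range(n_boot)])
    return stats.var(ddof=1)

# Sanity check: for the sample mean, the bootstrap variance should be close
# to the classical estimate s^2 / n.
rng = np.random.default_rng(42)
x = rng.normal(size=200)
print(bootstrap_variance(x, np.mean))
print(x.var(ddof=1) / x.size)
```

In the counting application, the statistic would be the multiplicity-counting quantity of interest evaluated on resampled measurement cycles rather than the mean of Gaussian draws.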
Gorm Hansen, Birgitte
2012-01-01
... as the analytical framework for describing the complex relationship between academic science and its so-called "external" habitat. Although relational skills and adaptability do seem to be at the heart of successful research management, the key to success does not lie with the ability to assimilate to industrial ... knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from ... and industrial interests. The paper concludes by stressing the potential danger of policy habitats that have promoted the evolution of robust scientists based on a competitive system where only the fittest survive. Robust scientists, it is argued, have the potential to become a new "invasive species ...
Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
... estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on the phase estimates to obtain a DOA estimate with minimum variance. Using a linear array and harmonic constraints, we design optimal filters based on estimated noise statistics. Therefore, the proposed method is robust against different noise scenarios. In colored noise, simulation results confirm that the proposed method outperforms an optimal state-of-the-art weighted least ...
Modelling volatility by variance decomposition
Amado, Cristina; Teräsvirta, Timo
... on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice. The results show that the long-memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance.
Revision: Variance Inflation in Regression
D. R. Jensen
2013-01-01
... the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
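For reference, the conventional VIFs that the paper generalizes are computed from auxiliary regressions, VIF_j = 1/(1 - R_j²). A minimal sketch (illustrative data; the function name is an assumption):

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 is the
    R-squared of regressing column j on the remaining columns (each auxiliary
    regression includes an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Orthogonal-ish columns give VIFs near 1; a near-duplicate column inflates them.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)
x2 = rng.standard_normal(100)
x3 = x1 + 0.01 * rng.standard_normal(100)   # nearly collinear with x1
print(vif(np.column_stack([x1, x2, x3])))
```

The "variance deflation" phenomenon in the abstract concerns cases this conventional recipe cannot represent, since it always yields VIF ≥ 1.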
Analysis of variance: Comfortless questions
L.V. Nedorezov
2017-01-01
In this paper the simplest variant of analysis of variance is considered. Three examples from textbooks by Lakin (1990) and Rokitsky (1973) were re-examined. It was found that traditional one-way ANOVA and the Kruskal-Wallis criterion can lead to unrealistic results about a factor's influence on the value of a characteristic. An alternative way of solving the same problem is also considered.
Analysis of Variance: Variably Complex
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
Vandenplas, J.; Bastin, C.; Gengler, N.; Mulder, H.A.
2013-01-01
Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogene
Variance based OFDM frame synchronization
Z. Fedra
2012-04-01
The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified early-late loop is used for frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without a strong influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Robust Portfolio Optimization Using Pseudodistances.
Toma, Aida; Leoni-Aubin, Samuela
2015-01-01
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
Shaping Robust System through Evolution
Kaneko, Kunihiko
2008-01-01
Biological functions are generated as a result of developmental dynamics that form phenotypes governed by genotypes. The dynamical system for development is shaped through genetic evolution following natural selection based on the fitness of the phenotype. Here we study how this dynamical system is robust to noise during development and to genetic change by mutation. We adopt a simplified transcription regulation network model to govern gene expression, which gives a fitness function. Through simulations of the network that undergoes mutation and selection, we show that a certain level of noise in gene expression is required for the network to acquire both types of robustness. The results reveal how the noise that cells encounter during development shapes any network's robustness, not only to noise but also to mutations. We also establish a relationship between developmental and mutational robustness through phenotypic variances caused by genetic variation and epigenetic noise. A universal relationship betwee...
Variance Swaps in BM&F: Pricing and Viability of Hedge
Richard John Brostowicz Junior
2010-07-01
A variance swap can theoretically be priced with an infinite set of vanilla call and put options, assuming that the realized variance follows a purely diffusive process with continuous monitoring. In this article we analyze the possible differences in pricing under discrete monitoring of the realized variance. We analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that can potentially serve as a hedge for the variance swaps traded at BM&F. Additionally, the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices, as is the case of FX options traded at BM&F, is tested. Thus, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (DEMETERFI et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
Banerjee, S; Grebogi, C; Banerjee, Soumitro; Yorke, James A.; Grebogi, Celso
1998-01-01
It has been proposed to make practical use of chaos in communication, in enhancing mixing in chemical processes and in spreading the spectrum of switch-mode power supplies to avoid electromagnetic interference. It is however known that for most smooth chaotic systems, there is a dense set of periodic windows for any range of parameter values. Therefore, in practical systems working in chaotic mode, a slight inadvertent fluctuation of a parameter may take the system out of chaos. We say a chaotic attractor is robust if, for its parameter values, there exists a neighborhood in the parameter space with no periodic attractor and the chaotic attractor is unique in that neighborhood. In this paper we show that robust chaos can occur in piecewise smooth systems and obtain the conditions of its occurrence. We illustrate this phenomenon with a practical example from electrical engineering.
Čížek, Pavel; Härdle, Wolfgang Karl
2006-01-01
Econometrics often deals with data under, from the statistical point of view, non-standard conditions such as heteroscedasticity or measurement errors, and the estimation methods thus need to be either adapted to such conditions or at least be insensitive to them. The methods insensitive to violation of certain assumptions, for example insensitive to the presence of heteroscedasticity, are in a broad sense referred to as robust (e.g., to heteroscedasticity). On the other hand, there is also a mor...
Variance-based uncertainty relations
Huang, Yichen
2010-01-01
It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces. We recover the Heisenberg uncertainty principle as a special case. By studying examples, we find that the lower bounds provided by our new uncertainty relations are optimal or near-optimal. I illustrate the uses of our new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models, which completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which in turn calls for improving the stochastic models of the measurement noises. Therefore, the issue of determining the stochastic model of the observables in the combined adjustment of heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least-squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables model as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating geoid error models.
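The reweighting idea behind variance component estimation can be sketched in a toy form: two observation groups measure the same quantity with unknown, different noise levels; the variance component of each group is re-estimated from its residuals and fed back as a weight. This is a deliberately simplified stand-in for the paper's iterative MINQUE algorithm, with made-up data rather than real height observations.

```python
# Toy sketch of iterative variance-component estimation for two
# heterogeneous observation groups (a simplified stand-in for the
# GPS-levelling / gravimetric-geoid combination; data are synthetic).
import random

random.seed(0)
truth = 10.0
group_a = [truth + random.gauss(0, 0.5) for _ in range(200)]  # noisy group
group_b = [truth + random.gauss(0, 0.1) for _ in range(200)]  # precise group

var_a, var_b = 1.0, 1.0  # initial variance components
for _ in range(20):
    wa, wb = 1.0 / var_a, 1.0 / var_b
    # weighted least-squares estimate of the common quantity
    x = (wa * sum(group_a) + wb * sum(group_b)) / (wa * len(group_a) + wb * len(group_b))
    # re-estimate each group's variance component from its residuals
    var_a = sum((y - x) ** 2 for y in group_a) / len(group_a)
    var_b = sum((y - x) ** 2 for y in group_b) / len(group_b)
```

After a few iterations the estimated components separate by group quality, so the precise group dominates the adjusted estimate, which is the calibration effect the paper exploits.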
Neutrino mass without cosmic variance
LoVerde, Marilena
2016-01-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological datasets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias $b(k)$ and the linear growth parameter $f(k)$ inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on $b(k)$ and $f(k)$ continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via $b(k)$ and $f(k)$. The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high density limit, using multiple tracers allows cosmic-variance to be beaten and the forecasted errors on neutrino mass shrink dramatically. In...
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
The balanced minimum evolution problem under uncertain data
Catanzaro, Daniele; Labbe, Martine; Pesenti, Raffaele
2013-01-01
We investigate the Robust Deviation Balanced Minimum Evolution Problem (RDBMEP), a combinatorial optimization problem that arises in computational biology when the evolutionary distances from taxa are uncertain and varying inside intervals. By exploiting some fundamental properties of the objective
Gorm Hansen, Birgitte
The concepts of “socially robust knowledge” and “mode 2 knowledge production” (Nowotny 2003, Gibbons et al. 1994) have migrated from STS into research policy practices. Both STS scholars and policy makers have been known to promote the idea that the way forward for today’s scientist is to jump...... from the ivory tower and learn how to create high-flying synergies with citizens, corporations and governments. In STS as well as in Danish research policy it has thus been argued that scientists will gain more support and enjoy greater success in their work by “externalizing” their research...... and adapting their interests to the needs of outside actors. However, when studying the concrete strategies of such successful scientists, matters seem a bit more complicated. Based on interviews with a plant biologist working in GMO the paper uses the biological concepts of field participants...
Aanæs, Henrik; Fisker, Rune; Åström, Kalle;
2002-01-01
Factorization algorithms for recovering structure and motion from an image stream have many advantages, but they usually require a set of well-tracked features. Such a set is in general not available in practical applications. There is thus a need for making factorization algorithms deal...... effectively with errors in the tracked features. We propose a new and computationally efficient algorithm for applying an arbitrary error function in the factorization scheme. This algorithm enables the use of robust statistical techniques and arbitrary noise models for the individual features....... These techniques and models enable the factorization scheme to deal effectively with mismatched features, missing features, and noise on the individual features. The proposed approach further includes a new method for Euclidean reconstruction that significantly improves convergence of the factorization algorithms...
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Variance optimal stopping for geometric Levy processes
Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund
2015-01-01
The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...
Dynamic Programming Using Polar Variance for Image Segmentation.
Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J
2016-10-06
When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because if size is left unconstrained, the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active contour segmentation techniques in a series of tests. The tests assessed robustness to additive Gaussian noise, segmentation accuracy on different grayscale images and, finally, robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
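The polar-variance feature can be sketched in a simplified form: sample intensities between the user-selected origin and a candidate pixel and take their variance. The straight-line sampling and the synthetic disk image below are illustrative simplifications, not the paper's exact polar-region definition.

```python
# Simplified sketch of the "polar variance" feature: variance of the
# intensities sampled between a user-selected origin and a candidate
# pixel (sampling scheme and test image are illustrative).

def polar_variance(image, origin, pixel, samples=50):
    (r0, c0), (r1, c1) = origin, pixel
    vals = []
    for i in range(samples + 1):
        t = i / samples
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        vals.append(image[r][c])
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# uniform bright disk on a dark background
img = [[1.0 if (r - 8) ** 2 + (c - 8) ** 2 <= 16 else 0.0
        for c in range(17)] for r in range(17)]
inside = polar_variance(img, (8, 8), (8, 10))  # path stays inside the disk
across = polar_variance(img, (8, 8), (8, 16))  # path crosses the boundary
```

The variance is low while the path stays inside the homogeneous object and jumps once the path crosses the object boundary, which is what lets the feature substitute for a size constraint.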
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Generalized analysis of molecular variance.
Caroline M Nievergelt
2007-04-01
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of biological and statistical meaning to the resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by
Robust Adaptive LCMV Beamformer Based On An Iterative Suboptimal Solution
Xiansheng Guo
2015-06-01
The main drawback of the closed-form solution of the linearly constrained minimum variance (CF-LCMV) beamformer is the dilemma of needing a long observation time for stable covariance matrix estimates versus a short observation time to track the dynamic behavior of targets, leading to poor performance at low signal-to-noise ratio (SNR), low jammer-to-noise ratios (JNRs) and small numbers of snapshots. Additionally, CF-LCMV suffers from a heavy computational burden, which mainly comes from the two matrix inverse operations needed to compute the optimal weight vector. In this paper, we derive a low-complexity Robust Adaptive LCMV beamformer based on an Iterative Suboptimal solution (RAIS-LCMV) using the conjugate gradient (CG) optimization method. The merit of our proposed method is threefold. Firstly, the RAIS-LCMV beamformer reduces the complexity of CF-LCMV remarkably. Secondly, the RAIS-LCMV beamformer can adjust its output adaptively based on the measurements, with comparable convergence speed. Finally, the RAIS-LCMV algorithm has robust performance against low SNR, low JNRs, and small numbers of snapshots. Simulation results demonstrate the superiority of our proposed algorithm.
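The closed-form baseline the paper improves on can be sketched directly: the LCMV weight vector is w = R⁻¹C (Cᴴ R⁻¹ C)⁻¹ f, whose two linear solves are the cost the iterative CG variant avoids. The array geometry, sample covariance and diagonal loading below are illustrative assumptions, not the paper's setup.

```python
# Sketch of the closed-form LCMV weights w = R^{-1} C (C^H R^{-1} C)^{-1} f,
# the CF-LCMV baseline that the iterative RAIS-LCMV approximates.
# Geometry, snapshot count and loading factor are illustrative.
import numpy as np

rng = np.random.default_rng(0)
M = 8                                    # number of sensors
theta = 0.3                              # look direction (radians)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))  # ULA steering vector

# sample covariance from complex noise snapshots, with diagonal loading
snapshots = rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200))
R = np.cov(snapshots) + 0.1 * np.eye(M)

C = a.reshape(-1, 1)                     # single distortionless constraint
f = np.array([1.0])
Ri_C = np.linalg.solve(R, C)             # first "matrix inverse" operation
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)  # second one
```

By construction the distortionless constraint Cᴴw = f holds exactly; the two `solve` calls are the O(M³) operations whose removal motivates the CG-based iteration.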
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.
Inferred changes in El Niño–Southern Oscillation variance over the past six centuries
S. McGregor
2013-10-01
It is vital to understand how the El Niño–Southern Oscillation (ENSO) has responded to past changes in natural and anthropogenic forcings, in order to better understand and predict its response to future greenhouse warming. To date, however, the instrumental record is too brief to fully characterize natural ENSO variability, while large discrepancies exist amongst paleo-proxy reconstructions of ENSO. These paleo-proxy reconstructions have typically attempted to reconstruct ENSO's temporal evolution, rather than the variance of these temporal changes. Here a new approach is developed that synthesizes the variance changes from various proxy data sets to provide a unified and updated estimate of past ENSO variance. The method is tested using surrogate data from two coupled general circulation model (CGCM) simulations. It is shown that in the presence of dating uncertainties, synthesizing variance information provides a more robust estimate of ENSO variance than synthesizing the raw data and then identifying its running variance. We also examine whether good temporal correspondence between proxy data and instrumental ENSO records implies a good representation of ENSO variance. In the climate modeling framework we show that a significant improvement in reconstructing ENSO variance changes is found when combining information from diverse ENSO-teleconnected source regions, rather than by relying on a single well-correlated location. This suggests that ENSO variance estimates derived from a single site should be viewed with caution. Finally, synthesizing existing ENSO reconstructions to arrive at a better estimate of past ENSO variance changes, we find robust evidence that the ENSO variance for any 30 yr period during the interval 1590–1880 was considerably lower than that observed during 1979–2009.
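The basic quantity being synthesized is a running variance over 30-yr windows of an ENSO-like index. The sketch below computes it for a synthetic series whose amplitude steps up halfway, a made-up stand-in for a proxy record with quiet and active epochs.

```python
# Sketch of a 30-sample running variance of an ENSO-like index
# (series is synthetic: a quiet epoch followed by a high-variance epoch).
import random

random.seed(1)
series = [random.gauss(0, 0.5) for _ in range(300)] + \
         [random.gauss(0, 1.5) for _ in range(300)]

def running_variance(x, window=30):
    out = []
    for i in range(len(x) - window + 1):
        seg = x[i:i + window]
        m = sum(seg) / window
        out.append(sum((v - m) ** 2 for v in seg) / window)
    return out

rv = running_variance(series)
```

The paper's point is that, with dating uncertainty, it is more robust to compute such variance series per proxy and then combine them than to average the raw proxies first and take the running variance of the result.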
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
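The proposed plot of log(variance) against log(mean) can be sketched numerically: when the coefficient of variation is constant across the environmental gradient, the slope of this plot is 2, so deviations from 2 signal genuine changes in developmental instability rather than the scaling of variance with the mean. The trait values below are made up for illustration.

```python
# Sketch of the "phenotypic variance gradient" plot: log(variance) vs
# log(mean) across environments. With a constant coefficient of
# variation the slope is ~2 (trait samples below are synthetic).
import math
import random

random.seed(5)
environments = [10.0, 20.0, 40.0]        # environmental gradient of trait means
points = []
for m in environments:
    trait = [random.gauss(m, 0.1 * m) for _ in range(500)]  # fixed 10% CV
    mu = sum(trait) / len(trait)
    var = sum((t - mu) ** 2 for t in trait) / len(trait)
    points.append((math.log(mu), math.log(var)))

# slope between the extreme environments; ~2 when CV is constant
slope = (points[-1][1] - points[0][1]) / (points[-1][0] - points[0][0])
```

A reference line of slope 2 on this plot therefore separates pure mean-variance scaling from real gradients in canalization, which is the reading the authors advocate.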
SUBSPACE-BASED NOISE VARIANCE AND SNR ESTIMATION FOR MIMO OFDM SYSTEMS
(no author listed)
2006-01-01
This paper proposes a subspace-based noise variance and Signal-to-Noise Ratio (SNR) estimation algorithm for Multi-Input Multi-Output (MIMO) wireless Orthogonal Frequency Division Multiplexing (OFDM) systems. The special training sequences with the property of orthogonality and phase shift orthogonality are used in pilot tones to obtain the estimated channel correlation matrix. Partitioning the observation space into a delay subspace and a noise subspace, we achieve the measurement of noise variance and SNR. Simulation results show that the proposed estimator can obtain accurate and real-time measurements of the noise variance and SNR for various multipath fading channels, demonstrating its strong robustness against different channels.
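The subspace idea itself is compact: if the signal (delay) subspace has dimension P inside an M-dimensional observation space, the noise variance is the average of the M−P smallest eigenvalues of the correlation matrix, and the SNR follows from the trace. The dimensions and the synthetic channel below are illustrative assumptions, not the paper's OFDM pilot structure.

```python
# Sketch of subspace-based noise variance / SNR estimation: average the
# M-P smallest eigenvalues of the sample correlation matrix.
# Subspace dimensions and synthetic data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
M, P, N = 16, 4, 5000       # observation dim, signal-subspace dim, snapshots
sigma2 = 0.1                # true noise variance

S = rng.standard_normal((M, P))                      # signal-subspace basis
x = S @ rng.standard_normal((P, N)) \
    + np.sqrt(sigma2) * rng.standard_normal((M, N))  # signal + noise
Rhat = x @ x.T / N                                   # sample correlation matrix

eigvals = np.sort(np.linalg.eigvalsh(Rhat))          # ascending order
noise_var_est = eigvals[:M - P].mean()               # noise-subspace average
snr_est = (np.trace(Rhat) - M * noise_var_est) / (M * noise_var_est)
```

With enough snapshots the noise-subspace eigenvalues cluster around the true noise power, which is what makes the estimator accurate across different channels.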
Influence of Family Structure on Variance Decomposition
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for... shall modify the tag, label, or other certification required by § 1010.2 to state: (1) That the...
Analysis of variance for model output
Jansen, M.J.W.
1999-01-01
A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va
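The decomposition described above can be illustrated with a brute-force sketch: for a toy deterministic model with independent stochastic inputs, the first-order variance component of input Xi is Var(E[Y | Xi]), estimated here by a nested Monte Carlo loop. The linear model and sample sizes are made up for illustration.

```python
# Sketch of variance decomposition for model output: first-order
# components Var_{Xi}(E[Y | Xi]) by nested Monte Carlo (toy model).
import random

random.seed(2)

def model(x1, x2):
    return 3.0 * x1 + 1.0 * x2   # linear toy model, independent N(0,1) inputs

def first_order_variance(which, n_outer=200, n_inner=200):
    """Estimate Var(E[Y | X_which]) by fixing one input per outer draw."""
    cond_means = []
    for _ in range(n_outer):
        fixed = random.gauss(0, 1)
        total = 0.0
        for _ in range(n_inner):
            other = random.gauss(0, 1)
            total += model(fixed, other) if which == 1 else model(other, fixed)
        cond_means.append(total / n_inner)
    m = sum(cond_means) / n_outer
    return sum((c - m) ** 2 for c in cond_means) / n_outer

v1 = first_order_variance(1)   # analytically 9 for this model
v2 = first_order_variance(2)   # analytically 1
```

For this additive model the two components sum to the total variance, so the ratio v1/v2 directly identifies X1 as the input of particular relevance for uncertainty analysis.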
The Correct Kriging Variance Estimated by Bootstrapping
den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.
2004-01-01
The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrappi
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Variance Risk Premia on Stocks and Bonds
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
is different from the equity variance risk premium. Third, the conditional correlation between stock and bond market variance risk premium switches sign often and ranges between -60% and +90%. We then show that these stylized facts pose a challenge to standard consumption-based asset pricing models....
Reduced K-best sphere decoding algorithm based on minimum route distance and noise variance
Xinyu Mao; Jianjun Wu; Haige Xiang
2014-01-01
This paper focuses on reducing the complexity of the K-best sphere decoding (SD) algorithm for the detection of uncoded multiple input multiple output (MIMO) systems. The proposed algorithm utilizes the threshold-pruning method to cut nodes with partial Euclidean distances (PEDs) larger than the threshold. Both the known noise value and the unknown noise value are considered to generate the threshold, which is the sum of the two values. The known noise value is the smallest PED of signals in the detected layers. The unknown noise value is generated from the noise power, the quality of service (QoS) and the signal-to-noise ratio (SNR) bound. Simulation results show that, by considering both noise values, the proposed algorithm achieves an efficient complexity reduction with little performance loss.
A phantom study on temporal and subband Minimum Variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.
2014-01-01
BK8804 linear transducer was used to scan a wire phantom in which wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single emission responses are also compared with those...
A minimum variance benchmark to measure the performance of pension funds in Mexico
Oscar V. De la Torre Torres
2015-01-01
In this article we propose the minimum variance portfolio as the weighting method for a benchmark that measures the performance of pension funds in Mexico. This portfolio was contrasted with those obtained either with the maximum Sharpe ratio or with a linear combination of both methods. This was done with three discrete-event simulations using daily data from January 2002 to May 2013. Using the Sharpe ratio, the significance test of Jensen's alpha, and the spanning test of Huberman and Kandel (1987), we found that the simulated portfolios have similar performance. Using the criteria of risk exposure, representativeness of the investable markets, and the level of rebalancing proposed by Bailey (1992), we find that the minimum variance method is preferable for measuring the performance of pension funds in Mexico.
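The minimum-variance weighting the benchmark is built on has a closed form: w = Σ⁻¹1 / (1ᵀΣ⁻¹1), where Σ is the asset covariance matrix. The 3-asset covariance below is made up for illustration, not estimated from the Mexican pension-fund data.

```python
# Sketch of the minimum-variance portfolio weights w = S^{-1}1 / (1' S^{-1} 1)
# for an illustrative 3-asset covariance matrix.
import numpy as np

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
ones = np.ones(3)
w = np.linalg.solve(cov, ones)   # unnormalized S^{-1} 1
w /= w.sum()                     # weights sum to one

port_var = w @ cov @ w           # portfolio variance at the minimum
```

Since any single asset is itself a feasible portfolio, the resulting variance is never above the smallest asset variance, which is the risk-exposure property that favors this weighting for a pension-fund benchmark.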
Spatio-angular Minimum-variance Tomographic Controller for Multi-Object Adaptive Optics systems
Correia, Carlos M; Veran, Jean-Pierre; Andersen, David; Lardiere, Olivier; Bradley, Colin
2015-01-01
Multi-object astronomical adaptive-optics (MOAO) is now a mature wide-field observation mode to enlarge the adaptive-optics-corrected field in a few specific locations over tens of arc-minutes. The work-scope provided by open-loop tomography and pupil conjugation is amenable to a spatio-angular Linear-Quadratic Gaussian (SA-LQG) formulation aiming to provide enhanced correction across the field with improved performance over static reconstruction methods and less stringent computational complexity scaling laws. Starting from our previous work [1], we use stochastic time-progression models coupled to approximate sparse measurement operators to outline a suitable SA-LQG formulation capable of delivering near optimal correction. Under the spatio-angular framework the wave-fronts are never explicitly estimated in the volume, providing considerable computational savings on 10m-class telescopes and beyond. We find that for Raven, a 10m-class MOAO system with two science channels, the SA-LQG improves the limiting mag...
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Zhou, Hao
We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia...... predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance...... risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
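The paper's key predictor can be sketched in a few lines: the variance risk premium is a model-free implied variance minus a realized variance built from high-frequency returns. The intraday returns below are synthetic and the implied-variance input is a placeholder, not an actual option-implied value.

```python
# Sketch of the variance risk premium: implied variance minus realized
# variance from intraday returns (all inputs below are synthetic).
import random

random.seed(3)
# ~1 trading month of 5-minute returns (78 per day x 22 days)
intraday_returns = [random.gauss(0, 0.001) for _ in range(78 * 22)]

realized_var = sum(r * r for r in intraday_returns)  # monthly realized variance
implied_var = 1.3 * realized_var                     # placeholder implied variance
variance_risk_premium = implied_var - realized_var
```

In the paper's data the premium is on average positive, and it is this difference, not either variance alone, that carries the return predictability.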
Proceedings of the First International Symposium on Robust Design 2014
The symposium concerns the topic of robust design from a practical and industry orientated perspective. During the 2 day symposium we will share our understanding of the need of industry with respect to the control of variance, reliability issues and approaches to robust design. The target audience...
Robust Portfolio Optimization Using Pseudodistances
2015-01-01
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of the data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
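The substitution the median-variance approach makes can be seen on a toy sample: the expected-return input of the Markowitz problem is replaced by the median return, which is far less sensitive to skewed, non-normal return data. The return sample below is synthetic and right-skewed for illustration.

```python
# Sketch of why the median replaces the mean under non-normal returns:
# a few large outliers drag the mean but barely move the median
# (return sample below is synthetic).
import random

random.seed(4)
# mostly small returns plus occasional large positive outliers
returns = [random.gauss(0.001, 0.01) for _ in range(990)] + [0.5] * 10

mean_ret = sum(returns) / len(returns)
median_ret = sorted(returns)[len(returns) // 2]
```

Feeding the (inflated) mean into the optimizer overstates expected return and shifts weights toward outlier-driven stocks; the median input avoids this, consistent with the lower risk per unit return reported for the 30 Bursa Malaysia stocks.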
Sørensen, John Dalsgaard
2011-01-01
An important aspect of the COST Action TU0601 “Robustness of structures” concerns the development of a theoretically sound basis for the assessment of robustness and acceptance criteria for structural robustness, which can form the basis for development of practically relevant methods for ensuring...... robust design as well as strategies for maintaining the robustness of existing structures throughout their service life. This paper describes an overall theoretical framework for assessing robustness of structures developed within WG1 “Robustness of structures”. Robustness can be defined in different......
Direct selection on genetic robustness revealed in the yeast transcriptome.
Stephen R Proulx
congruence hypothesis. However, this correlation alone cannot explain the co-variance of genetic robustness with position in the protein interaction network. We therefore conclude that direct selection on robustness has played a role in the evolution of genetic robustness in the transcriptome.
Robust inference in sample selection models
Zhelonkin, Mikhail
2015-11-20
The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...
Inferred changes in El Niño-Southern Oscillation variance over the past six centuries
S. McGregor
2013-05-01
It is vital to understand how the El Niño-Southern Oscillation (ENSO) has responded to past changes in natural and anthropogenic forcings, in order to better understand and predict its response to future greenhouse warming. To date, however, the instrumental record is too brief to fully characterize natural ENSO variability, while large discrepancies exist amongst paleo-proxy reconstructions of ENSO. These paleo-proxy reconstructions have typically attempted to reconstruct the full temporal variability of ENSO, rather than focusing simply on its variance. Here a new approach is developed that synthesizes the information on common low-frequency variance changes from various proxy datasets to obtain estimates of ENSO variance. The method is tested using surrogate data from two coupled general circulation model (CGCM) simulations. It is shown that in the presence of dating uncertainties, synthesizing variance information provides a more robust estimate of ENSO variance than synthesizing the raw data and then identifying its running variance. We also examine whether good temporal correspondence between proxy data and instrumental ENSO records implies a good representation of ENSO variance. A significant improvement in reconstructing ENSO variance changes is found when combining several proxies from diverse ENSO-teleconnected source regions, rather than by relying on a single well-correlated location, suggesting that ENSO variance estimates derived from a single site should be viewed with caution. Finally, identifying the common variance signal in a series of existing proxy-based reconstructions of ENSO variability over the last 600 yr, we find that the common ENSO variance over the period 1600-1900 was considerably lower than during 1979-2009.
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K_d values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty or how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
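The half-partitioned design rule can be checked with a small Monte Carlo sketch. All numbers below are hypothetical: a true K_d of 10 L/kg, a unit initial concentration, and a fixed absolute noise on the aqueous concentration measurement (the abstract does not specify its noise model).

```python
import numpy as np

rng = np.random.default_rng(1)

def kd_rel_std(f_sorbed, kd_true=10.0, abs_noise=0.005, n=20000):
    """Relative spread of a batch Kd estimate when a fraction f_sorbed
    of the sorbate partitions to the sorbent (hypothetical setup).
    Concentration measurements carry a fixed absolute noise."""
    # Pick the sorbent:water ratio m/V that yields the target sorbed
    # fraction: f = Kd*(m/V) / (1 + Kd*(m/V)).
    mv = f_sorbed / (kd_true * (1.0 - f_sorbed))
    c0 = 1.0
    cw = c0 * (1.0 - f_sorbed)                    # true aqueous conc.
    cw_meas = cw + abs_noise * rng.standard_normal(n)
    kd_est = (c0 - cw_meas) / cw_meas / mv
    return kd_est.std() / kd_true

r05, r50, r95 = kd_rel_std(0.05), kd_rel_std(0.5), kd_rel_std(0.95)
print(r05, r50, r95)
```

With additive concentration noise, the error factor is proportional to c0/(cw(c0-cw)), which is minimized when cw = c0/2, i.e. when half the sorbate partitions, matching the abstract's design advice.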
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
78 FR 14122 - Revocation of Permanent Variances
2013-03-04
... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...
Importance Sampling Variance Reduction in GRESS ATMOSIM
Wakeford, Daniel Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-26
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....
Střelec, Luboš; Stehlík, Milan
2017-01-01
Normality of the error terms in regression models is one of the basic assumptions in the applied regression analysis. Therefore, testing for normality of the error terms constitutes one of the most important steps of regression model verification and validation. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as t-test or F-test. Within the applied regression analysis there is a frequent problem of the presence of autocorrelation and conditional heteroscedasticity of the error terms. Under both autocorrelation and heteroscedasticity, the usual OLS estimators are still unbiased, linear and asymptotically normally distributed, however, no longer have the minimum variance property among all linear unbiased estimators. Therefore, the aim of this paper is to present and discuss normality testing of the error terms with presence of autocorrelation and conditional heteroscedasticity. To explore the power of selected classical tests and robust tests for normality, we perform simulation study.
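A sketch of the kind of simulation the paper describes: normality testing applied to autocorrelated error terms. The AR(1) setup, sample size, and the plain Jarque-Bera test (implemented directly from its textbook definition) are stand-ins for the paper's own battery of classical and robust tests.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_errors(n, phi, df=None):
    """AR(1) error series; innovations are standard normal, or
    heavy-tailed Student-t when df is given (hypothetical setup)."""
    innov = rng.standard_t(df, n) if df else rng.standard_normal(n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + innov[t]
    return e

def jarque_bera(x):
    """Jarque-Bera statistic and p-value (chi-square with 2 df)."""
    n = x.size
    z = (x - x.mean()) / x.std()
    s = (z ** 3).mean()                        # sample skewness
    k = (z ** 4).mean()                        # sample kurtosis
    stat = n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)
    return stat, np.exp(-stat / 2.0)           # chi2(2) survival fn

stat_n, p_n = jarque_bera(ar1_errors(2000, 0.5))          # normal errors
stat_t, p_t = jarque_bera(ar1_errors(2000, 0.5, df=3))    # heavy tails
print(p_n, p_t)
```

The heavy-tailed series is rejected decisively; note, as the abstract warns, that autocorrelation alone already distorts the size of such classical tests even when the innovations are truly normal.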
ROBUST DESIGN MODELS FOR CUSTOMER-SPECIFIED BOUNDS ON PROCESS PARAMETERS
Sangmun SHIN; Byung Rae CHO
2006-01-01
Robust design (RD) has received much attention from researchers and practitioners for years, and a number of methodologies have been studied in the research community. The majority of existing RD models focus on the minimum variability with a zero bias. However, it is often the case that the customer may specify upper bounds on one of the two process parameters (i.e., the process mean and variance). In this situation, the existing RD models may not work efficiently in incorporating the customer's needs. To this end, we propose two simple RD models using the ε-constraint feasible region method - one with an upper bound on process bias specified and the other with an upper bound on process variability specified. We then conduct a case study to analyze the effects of upper bounds on each of the process parameters in terms of optimal operating conditions and mean squared error.
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Variance components in discrete force production tasks.
Varadhan, S K M; Zatsiorsky, Vladimir M; Latash, Mark L
2010-09-01
The study addresses the relationships between task parameters and two components of variance, "good" and "bad", during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does ("bad") and does not ("good") affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the "bad" variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. "Good" variance scaled linearly with force magnitude and did not depend on force rate. "Bad" variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic tasks was associated with
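The "good"/"bad" decomposition above has a simple geometric reading: deviations in the 4-finger command space are split into the component along the total-force direction ("bad", it changes total force) and the component in its null space ("good", it does not). The sketch below uses simulated trial data, not the study's recordings, and assumes unit mode-to-force gains so the total-force direction is (1,1,1,1)/2.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated finger-force trials: 200 trials x 4 fingers (hypothetical).
# Most trial-to-trial noise is built to sum to zero across fingers,
# i.e. to leave the total force unchanged ("good" variance).
n = 200
noise = rng.normal(0.0, 1.0, size=(n, 4))
zero_sum = noise - noise.mean(axis=1, keepdims=True)  # null space of total force
drift = 0.3 * rng.normal(size=(n, 1))                 # shared drift: changes total
trials = 5.0 + zero_sum + drift

u = np.ones(4) / 2.0                  # unit vector along the total-force axis
dev = trials - trials.mean(axis=0)

v_bad = ((dev @ u) ** 2).mean()       # variance that affects total force
v_total = (dev ** 2).sum(axis=1).mean()
v_good = v_total - v_bad              # variance that does not affect total force
synergy_index = (v_good - v_bad) / v_total
print(v_good, v_bad, synergy_index)
```

A positive normalized difference (the synergy index) indicates a total-force-stabilizing synergy, as in the study.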
Discrimination of frequency variance for tonal sequences
Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.
2014-01-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σSTA...
Boolean networks with robust and reliable trajectories
Schmal, Christoph; Peixoto, Tiago P; Drossel, Barbara, E-mail: schmal@physik.uni-bielefeld.d, E-mail: tiago@fkp.tu-darmstadt.d, E-mail: drossel@fkp.tu-darmstadt.d [Institut fuer Festkoerperphysik, TU Darmstadt, Hochschulstrasse 6, 64289 Darmstadt (Germany)
2010-11-15
We construct and investigate Boolean networks that follow a given reliable trajectory in state space, which is insensitive to fluctuations in the updating schedule and which is also robust against noise. Robustness is quantified as the probability that the dynamics return to the reliable trajectory after a perturbation of the state of a single node. In order to achieve high robustness, we navigate through the space of possible update functions by using an evolutionary algorithm. We constrain the networks to those having the minimum number of connections required to obtain the reliable trajectory. Surprisingly, we find that robustness always reaches values close to 100% during the evolutionary optimization process. The set of update functions can be evolved such that it differs only slightly from that of networks that were not optimized with respect to robustness. The state space of the optimized networks is dominated by the basin of attraction of the reliable trajectory.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
The Variance Composition of Firm Growth Rates
Luiz Artur Ledur Brito
2009-04-01
Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
Robust control design with MATLAB
Gu, Da-Wei; Konstantinov, Mihail M
2013-01-01
Robust Control Design with MATLAB® (second edition) helps the student to learn how to use well-developed advanced robust control design methods in practical cases. To this end, several realistic control design examples, from teaching-laboratory experiments such as a two-wheeled, self-balancing robot, to complex systems like a flexible-link manipulator, are given a detailed presentation. All of these exercises are conducted using MATLAB® Robust Control Toolbox 3, Control System Toolbox and Simulink®. By sharing their experiences in industrial cases with minimum recourse to complicated theories and formulae, the authors convey essential ideas and useful insights into robust industrial control systems design using major H-infinity optimization and related methods, allowing readers to quickly move on with their own challenges. The hands-on tutorial style of this text rests on an abundance of examples and features for the second edition: · rewritten and simplified presentation of theoretical and meth...
Cardinal, Jean; Joret, Gwenaël
2008-01-01
We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).
Methods for robustness programming
Olieman, N.J.
2008-01-01
Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product, is finding the best composition
DILEMATIKA PENETAPAN UPAH MINIMUM
. Pitaya
2015-02-01
In the effort of creating an appropriate wage for employees, it is necessary to determine wages by considering the increase of poverty without ignoring the increase of productivity, the progressivity of companies and the growth of the economy. The new minimum wages at the provincial level and the regional/municipality level have been implemented per 1st January in Indonesia since 2001. The determination of the minimum wage at the provincial level should be done 30 days before 1st January, whereas the determination of the minimum wage at the regional/municipality level should be done 40 days before 1st January. Moreover, there is an article which governs that the minimum wage will be revised annually. By considering the time of determination and the time of revision above, it can be predicted that the periods before and after the determination date will be crucial times. This is because controversy among parties in industrial relationships will arise. The determination of the minimum wage will always be a dilemmatic step which has to be taken by the Government. Through this policy, on one side the government attempts to attract many investors; however, on the other side the government also has to protect employees so that they receive an appropriate wage in accordance with the standard of living.
Minimum quality standards and exports
2015-01-01
This paper studies the interaction of a minimum quality standard and exports in a vertical product differentiation model when firms sell global products. If ex ante quality of foreign firms is lower (higher) than the quality of exporting firms, a mild minimum quality standard in the home market hinders (supports) exports. The minimum quality standard increases quality in both markets. A welfare maximizing minimum quality standard is always lower under trade than under autarky. A minimum quali...
Introduction to Robust Estimation and Hypothesis Testing
Wilcox, Rand R
2012-01-01
This revised book provides a thorough explanation of the foundation of robust methods, incorporating the latest updates on R and S-Plus, robust ANOVA (Analysis of Variance) and regression. It guides advanced students and other professionals through the basic strategies used for developing practical solutions to problems, and provides a brief background on the foundations of modern methods, placing the new methods in historical context. Author Rand Wilcox includes chapter exercises and many real-world examples that illustrate how various methods perform in different situations.Introduction to R
Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter
Luo, Xiaodong
2011-12-01
A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
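The link between inflation and robustness can be illustrated with a minimal scalar EnKF analysis step. This is a sketch only: a perturbed-observation update with multiplicative inflation, not the EnTLHF itself, and all numbers (ensemble size, observation variance, inflation factor) are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(ens, y, obs_var, inflation=1.0):
    """Scalar EnKF analysis step with multiplicative covariance
    inflation applied to the forecast ensemble (a minimal sketch)."""
    mean = ens.mean()
    ens = mean + inflation * (ens - mean)    # inflate forecast spread
    p = ens.var(ddof=1)                      # forecast error variance
    k = p / (p + obs_var)                    # Kalman gain
    y_pert = y + np.sqrt(obs_var) * rng.standard_normal(ens.size)
    return ens + k * (y_pert - ens)

ens = rng.normal(0.0, 1.0, size=500)
plain = enkf_update(ens.copy(), y=1.0, obs_var=0.5)
inflated = enkf_update(ens.copy(), y=1.0, obs_var=0.5, inflation=1.5)
print(plain.var(ddof=1), inflated.var(ddof=1))
```

Inflation leaves the analysis ensemble with a larger spread, i.e. the filter trusts its forecast less and remains more receptive to future observations, which is the robustness mechanism the EnTLHF formalizes.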
The robust regulation problem with robust stability
Cevik, M.K.K.; Schumacher, J.M.
1999-01-01
Among the most common purposes of control are the tracking of reference signals and the rejection of disturbance signals in the face of uncertainties. The related design problem is called the `robust regulation problem'. Here we investigate the trade-off between the robust regulation constraint and
Discrimination of frequency variance for tonal sequences.
Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A
2014-12-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although the performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG - σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
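The ideal observer for this task simply picks the interval with the larger 5-sample variance, and its Weber-like behavior is easy to reproduce in simulation. The sketch below omits the mean-frequency rove, on the assumption that it does not affect a variance comparison; the baseline variances and trial counts are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def percent_correct(var_ratio, var_stan=0.01, n_pulses=5, n_trials=20000):
    """Ideal observer for 2IFC variance discrimination: choose the
    interval whose sample variance over 5 pulses is larger (sketch)."""
    stan = rng.normal(0.0, np.sqrt(var_stan), size=(n_trials, n_pulses))
    sig = rng.normal(0.0, np.sqrt(var_stan * var_ratio),
                     size=(n_trials, n_pulses))
    correct = sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)
    return correct.mean()

# Weber-like behavior: performance depends on the variance ratio,
# not on the baseline variance itself.
pc_low = percent_correct(2.0, var_stan=0.01)
pc_high = percent_correct(2.0, var_stan=1.0)
pc_big = percent_correct(4.0, var_stan=0.01)
print(pc_low, pc_high, pc_big)
```

Equal variance ratios give essentially equal percent correct regardless of the baseline, while a larger ratio improves performance, mirroring the Weber's Law result in the abstract.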
A proxy for variance in dense matching over homogeneous terrain
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study, a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches, complement the descriptions.
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, where we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error one. Some examples illustrate the results.
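The rare-event difficulty and the payoff of approximating the zero-variance change of measure can be shown on a textbook example: estimating a small Gaussian tail probability. The shifted-exponential proposal and its rate (set to the threshold, a common heuristic) are illustrative choices, not the talk's actual method.

```python
import math
import numpy as np

rng = np.random.default_rng(6)

# Rare event: p = P(X > 4) for X ~ N(0, 1), about 3.17e-5.
t = 4.0
p_true = 0.5 * math.erfc(t / math.sqrt(2.0))

n = 10000
# Crude Monte Carlo: almost every sample misses the event.
crude = (rng.standard_normal(n) > t).mean()

# Importance sampling with a shifted-exponential proposal on (t, inf),
# a simple approximation to the zero-variance change of measure.
lam = t                                   # proposal rate (heuristic)
x = t + rng.exponential(1.0 / lam, size=n)
phi = np.exp(-x ** 2 / 2.0) / math.sqrt(2.0 * math.pi)  # N(0,1) density
q = lam * np.exp(-lam * (x - t))                         # proposal density
is_est = (phi / q).mean()
print(crude, is_est, p_true)
```

With the same budget, the crude estimate is typically zero or wildly off, while the tilted estimator lands within a few percent of the true probability; the closer the proposal is to the zero-variance measure, the smaller the relative error.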
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
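A much-simplified stand-in for the objective above: project centered data onto the top principal direction and take signs as 1-bit hash codes. This is not the paper's column-generation learner; it only illustrates "maximize the variance captured by the code" on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic data with one dominant high-variance direction.
data = rng.normal(size=(500, 16))
data[:, 0] *= 5.0                        # direction of largest variance
centered = data - data.mean(axis=0)

# Top principal direction of the sample covariance.
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, -1]                     # eigenvector of largest eigenvalue

# 1-bit hash: the sign of the projection onto the top direction.
codes = (centered @ top > 0).astype(int)
print(codes[:10], codes.mean())
```

Because the projection is centered, the resulting bit is roughly balanced, which is the usual sanity check for variance-maximizing hash functions.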
Estimating quadratic variation using realized variance
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a semimartingale, we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
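Realized variance itself is straightforward to sketch: simulate a driftless log-price with constant volatility (an assumption; the paper works with general semimartingales) and sum the M squared returns.

```python
import numpy as np

rng = np.random.default_rng(7)

def realized_variance(sigma=0.2, T=1.0, M=10000):
    """Simulate a driftless constant-volatility log-price on [0, T]
    and return the realized variance: the sum of M squared returns."""
    dt = T / M
    returns = sigma * np.sqrt(dt) * rng.standard_normal(M)
    return np.sum(returns ** 2)

iv = 0.2 ** 2                    # integrated variance sigma^2 * T
rv_fine = realized_variance(M=10000)
rv_coarse = realized_variance(M=50)
print(rv_fine, rv_coarse, iv)
```

The sampling standard deviation of RV scales like sqrt(2/M) times the integrated variance, so the coarse estimate is visibly noisier, echoing the paper's point that RV can be a noisy estimator even for large M.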
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
Integrating Variances into an Analytical Database
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
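The repeated-entry analysis described above amounts to counting how often each requirement is targeted across variance records. A minimal sketch with the standard library; the export format and the requirement identifiers are hypothetical.

```python
import csv
import io
from collections import Counter

# Hypothetical spreadsheet export of variance records
# (requirement targeted, date filed).
export = io.StringIO("""requirement,date
NPR-8715.3,2009-01-12
NPR-8715.3,2009-03-02
STD-8719.9,2009-04-18
NPR-8715.3,2009-06-30
""")

counts = Counter(row["requirement"] for row in csv.DictReader(export))

# Requirements bypassed most often are candidates for permanent change.
for req, n in counts.most_common():
    print(req, n)
```

Sorting by count surfaces the requirements whose variances recur, which is exactly the signal the project used to flag rules that might be revised rather than repeatedly waived.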
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-Validation (R-RMSECV), a robust R2 value, and the Robust Component Selection (RCS) statistic.
Orme, John S.; Nobbs, Steven G.
1995-01-01
The minimum fuel mode of the NASA F-15 research aircraft is designed to minimize fuel flow while maintaining constant net propulsive force (FNP), effectively reducing thrust specific fuel consumption (TSFC), during cruise flight conditions. The test maneuvers were at stabilized flight conditions. The aircraft test engine was allowed to stabilize at the cruise conditions before data collection was initiated; data were first recorded with performance seeking control (PSC) not engaged, and then recorded with the PSC system engaged. The maneuvers were flown back-to-back to allow for direct comparisons by minimizing the effects of variations in the test day conditions. The minimum fuel mode was evaluated at subsonic and supersonic Mach numbers and focused on three altitudes: 15,000; 30,000; and 45,000 feet. Flight data were collected at part, military, partial afterburning, and maximum afterburning power settings. The TSFC savings at supersonic Mach numbers, ranging from approximately 4% to nearly 10%, are in general much larger than at subsonic Mach numbers because of PSC trims to the afterburner.
The Risk Management of Minimum Return Guarantees
Antje Mahayni
2008-05-01
Contracts paying a guaranteed minimum rate of return and a fraction of a positive excess rate, which is specified relative to a benchmark portfolio, are closely related to unit-linked life-insurance products and can be considered as alternatives to direct investment in the underlying benchmark. They contain an embedded power option, and the key issue is the tractable and realistic hedging of this option, in order to rigorously justify valuation by arbitrage arguments and prevent the guarantees from becoming uncontrollable liabilities to the issuer. We show how to determine the contract parameters conservatively and implement robust risk-management strategies.
Sources of variance in ocular microtremor.
Sheahan, N F; Coakley, D; Bolger, C; O'Neill, D; Fry, G; Phillips, J; Malone, J F
1994-02-01
This study presents a preliminary investigation of the sources of variance in the measurement of ocular microtremor frequency in a normal population. When the results from both experienced and relatively inexperienced operators are pooled, factors that contribute significantly to the total variance include the measurement procedure (p < 0.001), day-to-day variations within subjects (p < 0.001), and inter-subject differences (p < 0.01). Operator experience plays a role in determining the measurement precision: the intra-subject coefficient of variation is about 5% for a very experienced operator, and about 14% for a relatively inexperienced operator.
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)
2013-12-15
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
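In the static, single-period setting, the minimum variance portfolio mentioned above has the textbook closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal numerical sketch of that baseline formula (the covariance matrix is illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative 3-asset covariance matrix (made up for this sketch)
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

ones = np.ones(3)
w = np.linalg.solve(sigma, ones)
w /= ones @ w                                # minimum variance weights, summing to 1

port_var = w @ sigma @ w                     # variance of the minimum variance portfolio
eq_var = (ones / 3) @ sigma @ (ones / 3)     # equal-weight variance, for comparison
```

By construction the resulting portfolio variance is 1 / (1ᵀΣ⁻¹1), which can never exceed the variance of any other fully invested portfolio, such as the equal-weight one.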
Minimum wages, earnings, and migration
Boffy-Ramirez, Ernest
2013-01-01
Does increasing a state’s minimum wage induce migration into the state? Previous literature has shown mobility in response to welfare benefit differentials across states, yet few have examined the minimum wage as a cause of mobility...
Managing product inherent variance during treatment
Verdenius, F.
1996-01-01
The natural variance of agricultural product parameters complicates recipe planning for product treatment, i.e. the process of transforming a product batch from its initial state to a prespecified final state. For a specific product P, recipes are currently composed by human experts on the basis of
The Variance of Language in Different Contexts
申一宁
2012-01-01
Language can be quite different in meaning in different contexts. There are three categories of context: the culture, the situation, and the co-text. In this article, we analyze the variance of language in each of these three aspects. This article is written to help people better understand the meaning of language in a specific context.
Regression calibration with heteroscedastic error variance.
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
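Regression calibration in the simple linear, homoscedastic case amounts to dividing the naive slope by a calibration factor estimated from the validation data. A simulated sketch of that baseline idea (not the authors' heteroscedastic estimator; all numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Main study: outcome y depends on true covariate x, but we only
# observe the error-prone surrogate w.
n = 5000
x = rng.normal(0, 1, n)
w = x + rng.normal(0, 0.5, n)              # classical measurement error
y = 2.0 * x + rng.normal(0, 1, n)

# Naive slope from regressing y on w is attenuated toward zero
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Validation data: both the gold standard x and w observed
nv = 1000
xv = rng.normal(0, 1, nv)
wv = xv + rng.normal(0, 0.5, nv)
lam = np.cov(wv, xv)[0, 1] / np.var(wv)    # calibration slope E[x | w]

# Regression calibration: correct the naive slope
beta_rc = beta_naive / lam                  # recovers ~2.0
```

Here the attenuation factor is var(x)/var(w) = 0.8, so the naive slope of about 1.6 is corrected back to the true value 2.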
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
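As a concrete example of a classic VRT, antithetic variates pair each uniform draw u with 1 - u, so the negative correlation within each pair cancels much of the sampling noise. A small sketch (the integrand is illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Estimate E[exp(U)] for U ~ Uniform(0, 1); the true value is e - 1
u = rng.uniform(size=n)
crude = np.exp(u)

# Antithetic variates: pair each draw u with 1 - u and average the pair
u2 = rng.uniform(size=n // 2)
anti = 0.5 * (np.exp(u2) + np.exp(1 - u2))

# Both estimators are unbiased, but the antithetic per-sample
# variance is dramatically smaller than the crude one.
est_crude, est_anti = crude.mean(), anti.mean()
```

For this integrand the pairwise covariance is e - (e-1)^2 ≈ -0.23, reducing the per-pair variance by roughly a factor of fifty relative to crude sampling.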
Formative Use of Intuitive Analysis of Variance
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course.
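The normative logic referred to above is the F ratio of between-group to within-group variance. A minimal sketch of that computation on simulated data (group sizes and effect sizes are made up):

```python
import numpy as np

def one_way_anova_f(groups):
    """Classic one-way ANOVA F: between-group over within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ms_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    return ms_between / ms_within

rng = np.random.default_rng(2)
same = [rng.normal(0.0, 1.0, 30) for _ in range(3)]
shifted = [rng.normal(0.0, 1.0, 30), rng.normal(0.0, 1.0, 30),
           rng.normal(1.5, 1.0, 30)]

f_same = one_way_anova_f(same)        # near 1 when group means agree
f_shifted = one_way_anova_f(shifted)  # large when one group mean differs
```

The intuition the task probes is exactly this ratio: group differences only "count" relative to the spread within the groups.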
Decomposition of variance for spatial Cox processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
On the robustness of two-stage estimators
Zhelonkin, Mikhail
2012-04-01
The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.
Variance of gene expression identifies altered network constraints in neurological disease.
Jessica C Mar
2011-08-01
Gene expression analysis has become a ubiquitous tool for studying a wide range of human diseases. In a typical analysis we compare distinct phenotypic groups and attempt to identify genes that are, on average, significantly different between them. Here we describe an innovative approach to the analysis of gene expression data, one that identifies differences in expression variance between groups as an informative metric of the group phenotype. We find that genes with different expression variance profiles are not randomly distributed across cell signaling networks. Genes with low expression variance, or higher constraint, are significantly more connected to other network members and tend to function as core members of signal transduction pathways. Genes with higher expression variance have fewer network connections and also tend to sit on the periphery of the cell. Using neural stem cells derived from patients suffering from schizophrenia (SZ), Parkinson's disease (PD), and a healthy control group, we find marked differences in expression variance in cell signaling pathways that shed new light on potential mechanisms associated with these diverse neurological disorders. In particular, we find that expression variance of core networks in the SZ patient group was considerably constrained, while in contrast the PD patient group demonstrated much greater variance than expected. One hypothesis is that diminished variance in SZ patients corresponds to an increased degree of constraint in these pathways and a corresponding reduction in robustness of the stem cell networks. These results underscore the role that variation plays in biological systems and suggest that analysis of expression variance is far more important in disease than previously recognized. Furthermore, modeling patterns of variability in gene expression could fundamentally alter the way in which we think about how cellular networks are affected by disease processes.
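The basic metric described above, per-gene expression variance compared between groups, can be sketched as follows (simulated data; the "constrained" genes are planted with deliberately reduced variance, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative expression matrix: genes x samples, two groups of 20 samples
genes = 200
control = rng.normal(0.0, 1.0, size=(genes, 20))
patient = rng.normal(0.0, 1.0, size=(genes, 20))
patient[:50] *= 0.3   # first 50 genes: constrained (low variance) in patients

# Per-gene sample variance in each group, compared on a log scale
var_c = control.var(axis=1, ddof=1)
var_p = patient.var(axis=1, ddof=1)
log_ratio = np.log2(var_p / var_c)

# Flag genes whose patient-group variance is more than 2x lower
constrained = np.where(log_ratio < -1)[0]
```

In practice one would add a formal variance-equality test (e.g., an F test per gene) and multiple-testing correction; the sketch only shows the variance-ratio idea.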
Dziekan, Thomas; Weissbach, Carmen; Voigt, Jan; Ebert, Bernd; MacDonald, Rainer; Bahner, Malte L.; Mahler, Marianne; Schirner, Michael; Berliner, Michael; Berliner, Birgitt; Osel, Jens; Osel, Ilka
2011-07-01
Fluorescence imaging using the dye indocyanine green as a contrast agent was investigated in a prospective clinical study for the detection of rheumatoid arthritis. Normalized variances of correlated time series of fluorescence intensities describing the bolus kinetics of the contrast agent in certain regions of interest were analyzed to differentiate healthy from inflamed finger joints. These values are determined using a robust, parameter-free algorithm. We found that the normalized variance of correlation functions improves the differentiation between healthy joints of volunteers and joints with rheumatoid arthritis of patients by about 10% compared to, e.g., ratios of areas under the curves of raw data.
Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.
2013-10-01
Based on rainfall intensity-duration-frequency (IDF) curves, fitted in several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World Meteorological
Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A
2013-09-01
Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by running iteratively expectation maximization-REML algorithm by the implementation of double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10(-3) and 4.17×10(-3) for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also
40 CFR 142.43 - Disposition of a variance request.
2010-07-01
... during the period of variance shall specify interim treatment techniques, methods and equipment, and... the specified treatment technique for which the variance was granted is necessary to protect...
Adewunmi, Adrian; Byrne, Mike
2008-01-01
This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method while applying minimum simulation replications, for a class of JIT (Just in Time) warehousing system called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half width of plus/minus 0.5 for our chosen performance measure (Total usage cost, given the mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half width of plus/minus 2.8 for our chosen performance measure (Total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a huge number of simulation replications to reduce variance for our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.
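The sequential sampling idea above, adding replications until the confidence-interval half-width reaches a target, can be sketched generically (the simulated replication output is a stand-in, not the crossdocking model):

```python
import numpy as np

def run_until_half_width(sim, target_hw, z=1.96, min_reps=10, max_reps=100_000):
    """Add replications until the 95% CI half-width meets the target."""
    out = [sim() for _ in range(min_reps)]
    hw = z * np.std(out, ddof=1) / np.sqrt(len(out))
    while hw > target_hw and len(out) < max_reps:
        out.append(sim())
        hw = z * np.std(out, ddof=1) / np.sqrt(len(out))
    return np.mean(out), hw, len(out)

rng = np.random.default_rng(4)
cost = lambda: rng.normal(100.0, 5.0)   # stand-in for one replication's output
mean, hw, n = run_until_half_width(cost, target_hw=0.5)
# n grows roughly like (z * sigma / target_hw)**2, here about 384 replications
```

This quadratic growth in required replications is exactly why the abstract notes that a huge number of runs is needed when the target half-width is tight relative to the output's standard deviation.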
A Robust Optimization Approach Considering the Robustness of Design Objectives and Constraints
LIU Chun-tao; LIN Zhi-hang; ZHOU Chun-jing
2005-01-01
The problem of robust design is treated as a multi-objective optimization issue in which the performance mean and variation are optimized and minimized respectively, while maintaining the feasibility of design constraints under uncertainty. To effectively address this issue in robust design, this paper presents a novel robust optimization approach which integrates multi-objective optimization concepts with Taguchi's crossed array techniques. In this approach, Pareto-optimal robust design solution sets are obtained with the aid of design of experiment set-ups, which utilize the results of Analysis of Variance to quantify relative dominance and significance of design variables. A beam design problem is used to illustrate the effectiveness of the proposed approach.
Robust geostatistical analysis of spatial data
Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.
2013-04-01
Most of the geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter.
Kavitha, Telikepalli; Nimbhorkar, Prajakta
2010-01-01
We consider an extension of the {\\em popular matching} problem in this paper. The input to the popular matching problem is a bipartite graph G = (A U B,E), where A is a set of people, B is a set of items, and each person a belonging to A ranks a subset of items in an order of preference, with ties allowed. The popular matching problem seeks to compute a matching M* between people and items such that there is no matching M where more people are happier with M than with M*. Such a matching M* is called a popular matching. However, there are simple instances where no popular matching exists. Here we consider the following natural extension to the above problem: associated with each item b belonging to B is a non-negative price cost(b), that is, for any item b, new copies of b can be added to the input graph by paying an amount of cost(b) per copy. When G does not admit a popular matching, the problem is to "augment" G at minimum cost such that the new graph admits a popular matching. We show that this problem is...
Bias-variance decomposition in Genetic Programming
Kowaliw Taras
2016-01-01
We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the variance between runs is primarily due to initialization rather than the selection of training samples, (b) parameters can be reasonably optimized to obtain gains in efficacy, and (c) functions detrimental to evolvability are easily eliminated, while functions well-suited to the problem can greatly improve performance; therefore, larger and more diverse function sets are always preferable.
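The bias/variance decomposition used above is the standard pointwise identity E[(f̂ - f)²] = bias² + variance over repeated training runs. A minimal sketch using polynomial regression in place of LGP (all settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
true_f = lambda x: np.sin(2 * np.pi * x)

def fit_predict(degree, x_test, n_train=30, noise=0.3):
    """Fit a polynomial to a fresh noisy training sample, predict on x_test."""
    x = rng.uniform(0, 1, n_train)
    y = true_f(x) + rng.normal(0, noise, n_train)
    return np.polyval(np.polyfit(x, y, degree), x_test)

x_test = np.linspace(0.1, 0.9, 50)
runs = np.stack([fit_predict(1, x_test) for _ in range(200)])

# Pointwise decomposition over the 200 repeated fits
bias2 = (runs.mean(axis=0) - true_f(x_test)) ** 2
var = runs.var(axis=0)
mse = ((runs - true_f(x_test)) ** 2).mean(axis=0)
# mse equals bias2 + var, up to floating point, by the decomposition identity
```

A degree-1 model badly underfits the sine, so here the bias term dominates; raising the degree shifts error mass from bias into variance, which is the trade-off the paper measures for LGP parameters.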
Realized Variance and Market Microstructure Noise
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.
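The core effect described above, i.i.d. microstructure noise inflating the realized variance by roughly 2nω² per day, is easy to reproduce in simulation (all parameter values here are made up):

```python
import numpy as np

rng = np.random.default_rng(6)

# Efficient log-price as a random walk, plus i.i.d. microstructure noise
n = 23_400                              # ~1 tick per second over 6.5 hours
iv = 1e-4                               # daily integrated variance
efficient = np.cumsum(rng.normal(0, np.sqrt(iv / n), n))
observed = efficient + rng.normal(0, 5e-4, n)   # noise std omega = 5e-4

rv_efficient = np.sum(np.diff(efficient) ** 2)  # close to the true IV
rv_observed = np.sum(np.diff(observed) ** 2)    # inflated by ~2 * n * omega**2
```

With these numbers the noise term contributes about 0.0117 versus a true IV of 0.0001, which is why naive RV at the highest sampling frequency is dominated by noise and kernel-based corrections are needed.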
Linear transformations of variance/covariance matrices.
Parois, Pascal; Lutz, Martin
2011-07-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix. For the transformation of second-rank tensors it is suggested that the 3 × 3 matrix is re-written into a 9 × 1 vector. The transformation of the corresponding variance/covariance matrix is then straightforward and easily implemented into computer software. This method is applied in the transformation of anisotropic displacement parameters, the calculation of equivalent isotropic displacement parameters, the comparison of refinements in different space-group settings and the calculation of standard uncertainties of eigenvalues.
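The transformation rule itself is the standard propagation Σ_y = T Σ_x Tᵀ for y = T x. A minimal sketch (the matrices are illustrative, not crystallographic data):

```python
import numpy as np

def transform_cov(T, cov_x):
    """Covariance matrix of y = T @ x, given the covariance of x."""
    return T @ cov_x @ T.T

# Illustrative parameter covariance (variances 0.01, 0.04, 0.09; no correlation)
cov_x = np.diag([0.01, 0.04, 0.09])
T = np.array([[1.0, 1.0,  0.0],    # y1 = x1 + x2
              [0.0, 1.0, -1.0],    # y2 = x2 - x3
              [0.0, 0.0,  1.0]])   # y3 = x3

cov_y = transform_cov(T, cov_x)
su = np.sqrt(np.diag(cov_y))       # standard uncertainties of the new parameters
```

Note that even though the input parameters are uncorrelated, the transformed ones are not (cov(y1, y2) = var(x2) here), which is exactly why the full variance/covariance matrix, not just the diagonal, is needed.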
Variance and covariance of accumulated displacement estimates.
Bayer, Matthew; Hall, Timothy J
2013-04-01
Tracking large deformations in tissue using ultrasound can enable the reconstruction of nonlinear elastic parameters, but poses a challenge to displacement estimation algorithms. Such large deformations have to be broken up into steps, each of which contributes an estimation error to the final accumulated displacement map. The work reported here measured the error variance for single-step and accumulated displacement estimates using one-dimensional numerical simulations of ultrasound echo signals, subjected to tissue strain and electronic noise. The covariance between accumulation steps was also computed. These simulations show that errors due to electronic noise are negatively correlated between steps, and therefore accumulate slowly, whereas errors due to tissue deformation are positively correlated and accumulate quickly. For reasonably low electronic noise levels, the error variance in the accumulated displacement estimates is remarkably constant as a function of step size, but increases with the length of the tracking kernel.
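The accumulation behaviour described above follows from Var(Σ eₜ) = Σ Var(eₜ) + 2 Σ Cov(eₜ, eₜ₊₁): negatively correlated step errors nearly cancel, positively correlated ones compound. A small simulation sketch (MA(1) error construction with made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(7)
steps, trials = 20, 20_000

def accumulated_variance(a):
    """Variance of a 20-step accumulated sum of unit-variance step errors
    with lag-1 correlation a / (1 + a**2), built via an MA(1) construction."""
    z = rng.normal(0, 1, (trials, steps + 1))
    err = (z[:, 1:] + a * z[:, :-1]) / np.sqrt(1 + a**2)
    return err.sum(axis=1).var()

v_neg = accumulated_variance(-1.0)  # lag-1 correlation -0.5 (noise-like errors)
v_ind = accumulated_variance(0.0)   # independent step errors
v_pos = accumulated_variance(1.0)   # lag-1 correlation +0.5 (strain-like errors)
# Expected values: 20 - 19 = 1, 20, and 20 + 19 = 39 respectively
```

The three cases mirror the paper's finding: electronic-noise errors (negative covariance) accumulate slowly, while deformation errors (positive covariance) accumulate quickly.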
Social Security's special minimum benefit.
Olsen, K A; Hoffmeyer, D
Social Security's special minimum primary insurance amount (PIA) provision was enacted in 1972 to increase the adequacy of benefits for regular long-term, low-earning covered workers and their dependents or survivors. At the time, Social Security also had a regular minimum benefit provision for persons with low lifetime average earnings and their families. Concerns were rising that the low lifetime average earnings of many regular minimum beneficiaries resulted from sporadic attachment to the covered workforce rather than from low wages. The special minimum benefit was seen as a way to reward regular, low-earning workers without providing the windfalls that would have resulted from raising the regular minimum benefit to a much higher level. The regular minimum benefit was subsequently eliminated for workers reaching age 62, becoming disabled, or dying after 1981. Under current law, the special minimum benefit will phase out over time, although it is not clear from the legislative history that this was Congress's explicit intent. The phaseout results from two factors: (1) special minimum benefits are paid only if they are higher than benefits payable under the regular PIA formula, and (2) the value of the regular PIA formula, which is indexed to wages before benefit eligibility, has increased faster than that of the special minimum PIA, which is indexed to inflation. Under the Social Security Trustees' 2000 intermediate assumptions, the special minimum benefit will cease to be payable to retired workers attaining eligibility in 2013 and later. Their benefits will always be larger under the regular benefit formula. As policymakers consider Social Security solvency initiatives--particularly proposals that would reduce benefits or introduce investment risk--interest may increase in restoring some type of special minimum benefit as a targeted protection for long-term low earners. Two of the three reform proposals offered by the President's Commission to Strengthen
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Eigenvalue variance bounds for covariance matrices
Dallaporta, Sandrine
2013-01-01
This work is concerned with finite range bounds on the variance of individual eigenvalues of random covariance matrices, both in the bulk and at the edge of the spectrum. In a preceding paper, the author established analogous results for Wigner matrices and stated the results for covariance matrices. They are proved in the present paper. Relying on the LUE example, which needs to be investigated first, the main bounds are extended to complex covariance matrices by means of the Tao, Vu and Wan...
High-dimensional regression with unknown variance
Giraud, Christophe; Verzelen, Nicolas
2011-01-01
We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for more general models, including multivariate regression and nonparametric regression.
Fractional constant elasticity of variance model
Ngai Hang Chan; Chi Tim Ng
2007-01-01
This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained and a volatility skew pattern is revealed.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.
Discussion on variance reduction technique for shielding
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316-type stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the Weight Window parameters, and limitations and complications arose when variance reduction by the Weight Window method of the MCNP code was attempted. For the purpose of avoiding this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As a result, the distribution of the fractional standard deviation (FSD) of neutron and gamma-ray flux in the direction of shield depth is reported. There is an optimal importance change: when importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)
Applications of non-parametric statistics and analysis of variance on sample variances
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied; this procedure is followed in several NASA simulation projects. On the surface it would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis-of-variance problems. These difficulties are discussed and guidelines are given for using the methods.
Robust design of a 2-DOF GMV controller: a direct self-tuning and fuzzy scheduling approach.
Silveira, Antonio S; Rodríguez, Jaime E N; Coelho, Antonio A R
2012-01-01
This paper presents a study on self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure-or simply GMV2DOF-within two adaptive perspectives. One, from the process model point of view, uses a recursive least squares estimator algorithm for direct self-tuning design; the other uses a Mamdani fuzzy GMV2DOF parameter scheduling technique based on analytical and physical interpretations from robustness analysis of the system. Both strategies are assessed in simulation and in real-plant experimentation environments composed of a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina.
Estimation of bias and variance of measurements made from tomography scans
Bradley, Robert S.
2016-09-01
Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
The Parabolic variance (PVAR), a wavelet variance based on least-square fit
Vernotte, F; Bourgeois, P -Y; Rubiola, E
2015-01-01
The Allan variance (AVAR) is one option among the wavelet variances. However, although a milestone in the analysis of frequency fluctuations and in the long-term stability of clocks, and certainly the most widely used option, AVAR is not suitable when fast noise processes show up, chiefly because of its poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because the wavelet spans over 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans over 2 tau, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...
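The abstract contrasts AVAR, MVAR and PVAR without giving formulas. As background, the overlapping Allan variance can be estimated from phase data by second differences, AVAR(tau) = <(x[k+2m] - 2·x[k+m] + x[k])^2> / (2·tau^2) with tau = m·tau0. A minimal sketch (the function name and test signal are ours, not the paper's):

```python
import numpy as np

def overlapping_avar(phase, tau0, m):
    """Overlapping Allan variance from phase data (seconds),
    sample interval tau0, averaging factor m (tau = m * tau0)."""
    x = np.asarray(phase, dtype=float)
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences at lag m
    return np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)

# White FM noise: AVAR should fall as 1/tau.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-9, 100_000)   # fractional frequency samples
x = np.cumsum(y)                     # phase = integrated frequency, tau0 = 1 s
a1 = overlapping_avar(x, 1.0, 1)     # ~ 1e-18
a10 = overlapping_avar(x, 1.0, 10)   # ~ 1e-19, i.e. one decade down
```

For white frequency noise of variance sigma_y^2, AVAR(tau) = sigma_y^2 · tau0 / tau, which the two estimates above reproduce.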
Xu, Huan; Mannor, Shie
2008-01-01
Lasso, or $\\ell^1$ regularized least squares, has been explored extensively for its remarkable sparsity properties. It is shown in this paper that the solution to Lasso, in addition to its sparsity, has robustness properties: it is the solution to a robust optimization problem. This has two important consequences. First, robustness provides a connection of the regularizer to a physical property, namely, protection from noise. This allows a principled selection of the regularizer, and in particular, generalizations of Lasso that also yield convex optimization problems are obtained by considering different uncertainty sets. Secondly, robustness can itself be used as an avenue to exploring different properties of the solution. In particular, it is shown that robustness of the solution explains why the solution is sparse. The analysis as well as the specific results obtained differ from standard sparsity results, providing different geometric intuition. Furthermore, it is shown that the robust optimization formul...
Daniel Bartz; Kerr Hatrick; Hesse, Christian W.; Klaus-Robert M\\"uller; Steven Lemm
2011-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on Factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, w...
Bose, Prosenjit; Morin, Pat; Smid, Michiel
2012-01-01
Highly connected and yet sparse graphs (such as expanders or graphs of high treewidth) are fundamental, widely applicable and extensively studied combinatorial objects. We initiate the study of such highly connected graphs that are, in addition, geometric spanners. We define a property of spanners called robustness. Informally, when one removes a few vertices from a robust spanner, this harms only a small number of other vertices. We show that robust spanners must have a superlinear number of edges, even in one dimension. On the positive side, we give constructions, for any dimension, of robust spanners with a near-linear number of edges.
Robustness of Structural Systems
Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.
2007-01-01
The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have...... systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...
Visual SLAM Using Variance Grid Maps
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
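As a sketch of the elevation-variance map idea only (not the full Gamma-SLAM Rao-Blackwellized particle filter; the class and method names below are invented for illustration), each grid cell can maintain a running elevation variance with Welford's online update:

```python
import numpy as np

class VarianceGrid:
    """Per-cell running mean and variance of terrain elevation,
    updated incrementally with Welford's algorithm."""
    def __init__(self, shape):
        self.n = np.zeros(shape)       # samples per cell
        self.mean = np.zeros(shape)    # running elevation mean
        self.m2 = np.zeros(shape)      # running sum of squared deviations

    def add(self, i, j, z):
        """Fold one elevation observation z into cell (i, j)."""
        self.n[i, j] += 1
        d = z - self.mean[i, j]
        self.mean[i, j] += d / self.n[i, j]
        self.m2[i, j] += d * (z - self.mean[i, j])

    def variance(self, i, j):
        """Population elevation variance of cell (i, j)."""
        return self.m2[i, j] / self.n[i, j] if self.n[i, j] > 1 else 0.0

g = VarianceGrid((4, 4))
for z in [1.0, 2.0, 3.0]:
    g.add(0, 0, z)
```

This keeps the per-cell update O(1) per observation, which is what makes a variance map practical inside a real-time particle filter.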
A relation between information entropy and variance
Pandey, Biswajit
2016-01-01
We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.
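The abstract does not reproduce the relation itself. A standard small-fluctuation expansion (our assumption about the intended regime, not a quote from the paper), writing p_i = (1 + delta_i)/N with sum(delta_i) = 0 and natural-log entropy, gives H_max - H ≈ sigma^2/2 to leading order; a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
delta = rng.normal(0.0, 0.05, N)
delta -= delta.mean()            # enforce sum(p) == 1
p = (1.0 + delta) / N            # small fluctuations about the uniform distribution
H = -np.sum(p * np.log(p))       # information entropy (natural log)
deficit = np.log(N) - H          # entropy deficit H_max - H
sigma2 = np.var(delta)           # variance of the fluctuation field
```

Here the deficit tracks sigma2/2 closely; higher-order moments enter only at O(sigma^4), consistent with the relation holding in the regime of small fluctuations.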
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2010-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...
Markov bridges, bisection and variance reduction
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints....... In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Second, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...... where the methods of stratification, importance sampling and quasi Monte Carlo are investigated....
Least squares with non-normal data: estimating experimental variance functions.
Tellinghuisen, Joel
2008-02-01
Contrary to popular belief, the method of least squares (LS) does not require that the data have normally distributed (Gaussian) error for its validity. One practically important application of LS fitting that does not involve normal data is the estimation of data variance functions (VFE) from replicate statistics. If the raw data are normal, sampling estimates s(2) of the variance sigma(2) are chi(2) distributed. For small degrees of freedom, the chi(2) distribution is strongly asymmetrical -- exponential in the case of three replicates, for example. Monte Carlo computations for linear variance functions demonstrate that with proper weighting, the LS variance-function parameters remain unbiased, minimum-variance estimates of the true quantities. However, the parameters are strongly non-normal -- almost exponential for some parameters estimated from s(2) values derived from three replicates, for example. Similar LS estimates of standard deviation functions from estimated s values have a predictable and correctable bias stemming from the bias inherent in s as an estimator of sigma. Because s(2) and s have uncertainties proportional to their magnitudes, the VFE and SDFE fits require weighting as s(-4) and s(-2), respectively. However, these weights must be evaluated on the calculated functions rather than directly from the sampling estimates. The computation is thus iterative but usually converges in a few cycles, with remaining 'weighting' bias sufficiently small as to be of no practical consequence.
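The iterative scheme described above can be sketched on synthetic data: fit a linear variance function to replicate sample variances s^2, weighting as s^-4 but with the weights evaluated on the calculated function rather than directly on the sampling estimates (the generating model and all names below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic example: true linear variance function sigma^2(x) = a + b*x
a_true, b_true = 0.5, 2.0
x = np.linspace(1.0, 10.0, 200)
var_true = a_true + b_true * x
# Three replicates per point: each s^2 has only 2 degrees of freedom,
# so its chi-squared sampling distribution is strongly skewed.
s2 = np.array([rng.normal(0.0, np.sqrt(v), 3).var(ddof=1) for v in var_true])

X = np.column_stack([np.ones_like(x), x])
coef = np.linalg.lstsq(X, s2, rcond=None)[0]        # unweighted starting fit
for _ in range(10):                                 # usually converges in a few cycles
    # weights 1/sigma^4, evaluated on the *fitted* function, not on s^2
    w = 1.0 / np.maximum(X @ coef, 1e-12) ** 2
    coef = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * s2))
```

Even though the individual s^2 values are far from normal, the properly weighted least squares fit recovers the intercept and slope essentially without bias, as the abstract describes.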
Robust Crossfeed Design for Hovering Rotorcraft
Catapang, David R.
1993-01-01
Control law design for rotorcraft fly-by-wire systems normally attempts to decouple angular responses using fixed-gain crossfeeds. This approach can lead to poor decoupling over the frequency range of pilot inputs and increase the load on the feedback loops. In order to improve the decoupling performance, dynamic crossfeeds may be adopted. Moreover, because of the large changes that occur in rotorcraft dynamics due to small changes about the nominal design condition, especially for near-hovering flight, the crossfeed design must be 'robust'. A new low-order matching method is presented here to design robust crossfeed compensators for multi-input, multi-output (MIMO) systems. The technique identifies degrees-of-freedom that can be decoupled using crossfeeds, given an anticipated set of parameter variations for the range of flight conditions of concern. Cross-coupling is then reduced for degrees-of-freedom that can use crossfeed compensation by minimizing off-axis response magnitude average and variance. Results are presented for the analysis of pitch, roll, yaw and heave coupling of the UH-60 Black Hawk helicopter in near-hovering flight. Robust crossfeeds are designed that show significant improvement in decoupling performance and robustness over nominal, single design point, compensators. The design method and results are presented in an easily used graphical format that lends significant physical insight to the design procedure. This plant pre-compensation technique is an appropriate preliminary step to the design of robust feedback control laws for rotorcraft.
Mechanisms for Robust Cognition
Walsh, Matthew M.; Gluck, Kevin A.
2015-01-01
To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…
Wang, H.
2009-01-01
Our society depends more strongly than ever on large networks such as transportation networks, the Internet and power grids. Engineers are confronted with fundamental questions such as “how to evaluate the robustness of networks for a given service?”, “how to design a robust network?”, because netwo
Minimum signals in classical physics
邓文基; 许基桓; 刘平
2003-01-01
The bandwidth theorem for Fourier analysis on any time-dependent classical signal is shown using the operator approach to quantum mechanics. Following discussions about squeezed states in quantum optics, the problem of minimum signals presented by a single quantity and its squeezing is proposed. It is generally proved that all such minimum signals, squeezed or not, must be real Gaussian functions of time.
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Jacalyn J. Robert McComb
2006-06-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females, ages 18-24 years) underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (% BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (% BF).
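The reported regression is directly computable (the function name is ours):

```python
def predict_vo2max(percent_body_fat):
    """Estimated VO2max in ml·kg^-1·min^-1 from the study's reported equation."""
    return 56.14 - 0.92 * percent_body_fat

print(predict_vo2max(20.0))   # ≈ 37.74 for 20% body fat
```

Note the SEE of 3.27 ml·kg^-1·min^-1 reported above: individual estimates carry that typical error.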
Dimension reduction based on weighted variance estimate
ZHAO JunLong; XU XingZhong
2009-01-01
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes Sliced Average Variance Estimate (SAVE) as a special case. Bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension. And this selected best estimate usually performs better than the existing methods such as Sliced Inverse Regression (SIR), SAVE, etc. Many methods such as SIR, SAVE, etc. usually put the same weight on each observation to estimate central subspace (CS). By introducing a weight function, WVE puts different weights on different observations according to distance of observations from CS. The weight function makes WVE have very good performance in general and complicated situations, for example, the distribution of regressor deviating severely from elliptical distribution which is the base of many methods, such as SIR, etc. And compared with many existing methods, WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established. Simulations to compare the performances of WVE with other existing methods confirm the advantage of WVE.
Robust Designs for Three Commonly Used Nonlinear Models
Xu, Xiaojian; Chen, Arnold
2011-11-01
In this paper, we study robust designs for a few nonlinear models, including an exponential model with an intercept, a compartmental model, and a Michaelis-Menten model, when these models are possibly misspecified. The minimax robust designs considered in this paper aim not only at minimizing the variances but also at reducing the possible biases in estimation. Both prediction and extrapolation cases are discussed. The robust designs are found for approximations of these models under several situations, such as homoscedasticity and heteroscedasticity. Both ordinary and weighted nonlinear least squares methods are utilized.
A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model
Hou Ying-li; Liu Guo-xin; Jiang Chun-lan
2015-01-01
In this paper, we focus on a constant elasticity of variance (CEV) model and find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.
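The Lagrange-multiplier step mentioned above is the standard reduction (a generic sketch, not the paper's exact derivation): for a multiplier lambda, the constrained mean-variance problem becomes an unconstrained quadratic one,

```latex
\min_{\pi}\ \operatorname{Var}(X_T)\ \ \text{s.t.}\ \ \mathbb{E}[X_T]=L
\quad\Longrightarrow\quad
g(\lambda)=\min_{\pi}\ \mathbb{E}\!\left[(X_T-\lambda)^2\right]-(\lambda-L)^2,
```

whose inner quadratic problem is the one solved via the HJB equation; the efficient frontier is then recovered as the maximum of g(lambda) over lambda by the Lagrange duality theorem.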
On Eliminating The Scrambling Variance In Scrambled Response Models
Zawar Hussain
2012-06-01
To circumvent response bias in sensitive surveys, randomized response models are used. Adding to this literature, we propose an improved response model utilizing both additive and multiplicative scrambling methods. The proposed model provides greater flexibility in terms of fixing the constant K depending upon the guessed distribution of the sensitive variable and the nature of the population. The proposed model yields an unbiased estimator and is anticipated to be more protective of the privacy of the respondents. The relative efficiency of the proposed estimator is assessed against the Hussain and Shabbir (2007) RRM. Furthermore, the proposed model itself is improved by taking two responses from each respondent and suggesting a weighted estimator, yielding an unbiased estimator having the minimum possible sampling variance. The suggested weighted estimator is unconditionally more efficient than all of the estimators suggested until now. Future research may be focused on the privacy protection provided by scrambling models. More scrambling models may be identified and improved by taking two responses from each respondent in such a way that the scrambling effect is balanced out.
Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability
Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco
We develop a joint framework linking the physical variance and its risk-neutral expectation, implying variance risk premia that are persistent, appropriately reacting to changes in the level and variability of the variance, and naturally satisfying the sign constraint. Using option market data...... and realized variances, our model allows us to infer the occurrence and size of extreme variance events, and to construct indicators signalling agents' sentiment towards future market conditions. Our results show that excess returns are to a large extent explained by fear or optimism towards future extreme variance...
Evolution of robustness to noise and mutation in gene expression dynamics.
Kaneko, Kunihiko
2007-05-09
Phenotype of biological systems needs to be robust against mutation in order to sustain themselves between generations. On the other hand, phenotype of an individual also needs to be robust against fluctuations of both internal and external origins that are encountered during growth and development. Is there a relationship between these two types of robustness, one during a single generation and the other during evolution? Could stochasticity in gene expression have any relevance to the evolution of these types of robustness? Robustness can be defined by the sharpness of the distribution of phenotype; the variance of phenotype distribution due to genetic variation gives a measure of 'genetic robustness', while that of isogenic individuals gives a measure of 'developmental robustness'. Through simulations of a simple stochastic gene expression network that undergoes mutation and selection, we show that in order for the network to acquire both types of robustness, the phenotypic variance induced by mutations must be smaller than that observed in an isogenic population. As the latter originates from noise in gene expression, this signifies that the genetic robustness evolves only when the noise strength in gene expression is larger than some threshold. In such a case, the two variances decrease throughout the evolutionary time course, indicating increase in robustness. The results reveal how noise that cells encounter during growth and development shapes networks' robustness to stochasticity in gene expression, which in turn shapes networks' robustness to mutation. The necessary condition for evolution of robustness, as well as the relationship between genetic and developmental robustness, is derived quantitatively through the variance of phenotypic fluctuations, which are directly measurable experimentally.
Faber, M.H.; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard
2011-01-01
In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled 'COST TU0601: Robustness of Structures' was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...
MULTIDISCIPLINARY ROBUST OPTIMIZATION DESIGN
Chen Jianjiang; Xiao Renbin; Zhong Yifang; Dou Gang
2005-01-01
Because uncertainty factors inevitably exist in a multidisciplinary design environment, a hierarchical multidisciplinary robust optimization design based on response surfaces is proposed. The method constructs optimization models at the subsystem level and the system level to coordinate the coupling among subsystems, and a response surface based on an artificial neural network is introduced to provide information to the system-level optimization tool so as to maintain the independence of the subsystems, i.e. to realize multidisciplinary parallel design. An application case in electrical packaging demonstrates that a reasonable robust optimum solution can be yielded and that this is a promising and efficient multidisciplinary robust optimization approach.
Minimum length-maximum velocity
Panes, Boris
2012-03-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2011-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking, and the value of travel time may increase or decrease in the mean travel time.
Power Estimation in Multivariate Analysis of Variance
Jean François Allaire
2007-09-01
Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
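The three-step procedure described in this abstract (critical F value, noncentrality parameter, noncentral F tail) can be sketched as follows. This is an illustrative sketch only: the noncentrality form `lam = effect_size * n_total` and the parameter names are assumptions, not the paper's exact formulation for any particular MANOVA statistic.

```python
from scipy.stats import f as f_dist, ncf

def manova_power_approx(effect_size, df1, df2, n_total, alpha=0.05):
    """Approximate power via the F-approximation route:
    1) critical F under H0, 2) noncentrality parameter from the
    effect size, 3) upper tail probability of a noncentral F."""
    lam = effect_size * n_total                  # assumed noncentrality form
    f_crit = f_dist.ppf(1 - alpha, df1, df2)     # step 1: critical value
    return 1.0 - ncf.cdf(f_crit, df1, df2, lam)  # step 3: power
```

As expected, power grows with the noncentrality parameter, so larger samples or larger effect sizes yield higher power under this approximation.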
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Tauchen, George; Zhou, Hao
Motivated by the implications of a stylized self-contained general equilibrium model incorporating the effects of time-varying economic uncertainty, we show that the difference between implied and realized variation, or the variance risk premium, is able to explain a non-trivial fraction of the time-series variation in post-1990 aggregate stock market returns, with high (low) premia predicting high (low) future returns. Our empirical results depend crucially on the use of "model-free," as opposed to Black-Scholes, options implied volatilities, along with accurate realized variation measures constructed from high-frequency intraday, as opposed to daily, data. The magnitude of the predictability is particularly strong at the intermediate quarterly return horizon, where it dominates that afforded by other popular predictor variables, like the P/E ratio, the default spread, and the consumption...
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to distinguish between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
Sørensen, John Dalsgaard
2008-01-01
This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made requirements to the robustness of new structures essential. According to Danish design rules, robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) review of loads and possible failure modes/scenarios and determination of acceptable collapse extent; 2) review...
Robustness - theoretical framework
Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.
2010-01-01
More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made requirements to the robustness of new structures essential. The purpose of this fact sheet is to describe a theoretical and risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines.
Robust Nonstationary Regression
1993-01-01
This paper provides a robust statistical approach to nonstationary time series regression and inference. Fully modified extensions of traditional robust statistical procedures are developed which allow for endogeneities in the nonstationary regressors and serial dependence in the shocks that drive the regressors and the errors that appear in the equation being estimated. The suggested estimators involve semiparametric corrections to accommodate these possibilities and they belong to the same ...
Qualitative Robustness in Estimation
Mohammed Nasser
2012-07-01
Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, along with its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite version and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.
Cook, Philip
2013-01-01
A minimum voting age is defended as the most effective and least disrespectful means of ensuring that all members of an electorate are sufficiently competent to vote. Whilst it may be reasonable to require competency from voters, a minimum voting age should be rejected because its view of competence is unreasonably controversial, it is incapable of defining a clear threshold of sufficiency, and an alternative test is available which treats children more respectfully. This alternative is a procedural...
Evolution of mutational robustness in an RNA virus.
Rebecca Montville
2005-11-01
Mutational (genetic) robustness is phenotypic constancy in the face of mutational changes to the genome. Robustness is critical to the understanding of evolution because phenotypically expressed genetic variation is the fuel of natural selection. Nonetheless, the evidence for adaptive evolution of mutational robustness in biological populations is controversial. Robustness should be selectively favored when mutation rates are high, a common feature of RNA viruses. However, selection for robustness may be relaxed under virus co-infection because complementation between virus genotypes can buffer mutational effects. We therefore hypothesized that selection for genetic robustness in viruses will be weakened with increasing frequency of co-infection. To test this idea, we used populations of RNA phage phi6 that were experimentally evolved at low and high levels of co-infection and subjected lineages of these viruses to mutation accumulation through population bottlenecking. The data demonstrate that viruses evolved under high co-infection show relatively greater mean magnitude and variance in the fitness changes generated by the addition of random mutations, confirming our hypothesis that they experience weakened selection for robustness. Our study further suggests that co-infection of host cells may be advantageous to RNA viruses only in the short term. In addition, we observed higher mutation frequencies in the more robust viruses, indicating that evolution of robustness might foster less-accurate genome replication in RNA viruses.
Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River
Whitcher, B.; Byers, S. D.; Guttorp, P.; Percival, D. B.
2002-05-01
We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic on five time series derived from the historical record of Nile River yearly minimum water levels covering 622-1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622-1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.
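The normalized cumulative sum of squares statistic at the heart of this test can be sketched as below. This is an illustrative sketch under assumptions: the function name is my own, and the scale-by-scale application to wavelet coefficients and the white-noise critical levels from the paper are omitted.

```python
import numpy as np

def cumsum_of_squares_stat(w):
    """Normalized cumulative sum of squares: returns the maximum
    deviation D of the running energy fraction from the uniform line,
    and the index of that maximum, which locates the candidate
    variance-change point."""
    w = np.asarray(w, dtype=float)
    s = np.cumsum(w ** 2) / np.sum(w ** 2)  # running energy fraction
    k = np.arange(1, len(w) + 1)
    dev = np.abs(s - k / len(w))            # deviation from uniformity
    j = int(np.argmax(dev))
    return dev[j], j
```

For a homogeneous series the energy fraction hugs the uniform line and D stays small; a variance change bends the curve, and the argmax falls near the change point.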
Robust Instrumentation[Water treatment for power plant]; Robust Instrumentering
Wik, Anders [Vattenfall Utveckling AB, Stockholm (Sweden)
2003-08-01
Cementa Slite Power Station is a heat recovery steam generator (HRSG) with moderate steam data; 3.0 MPa and 420 deg C. The heat is recovered from Cementa, a cement industry, without any usage of auxiliary fuel. The Power station commenced operation in 2001. The layout of the plant is unusual, there are no similar in Sweden and very few world-wide, so the operational experiences are limited. In connection with the commissioning of the power plant a R and D project was identified with the objective to minimise the manpower needed for chemistry management of the plant. The lean chemistry management is based on robust instrumentation and chemical-free water treatment plant. The concept with robust instrumentation consists of the following components; choice of on-line instrumentation with a minimum of O and M and a chemical-free water treatment. The parameters are specific conductivity, cation conductivity, oxygen and pH. In addition to that, two fairly new on-line instruments were included; corrosion monitors and differential pH calculated from specific and cation conductivity. The chemical-free water treatment plant consists of softening, reverse osmosis and electro-deionisation. The operational experience shows that the cycle chemistry is not within the guidelines due to major problems with the operation of the power plant. These problems have made it impossible to reach steady state and thereby not viable to fully verify and validate the concept with robust instrumentation. From readings on the panel of the online analysers some conclusions may be drawn, e.g. the differential pH measurements have fulfilled the expectations. The other on-line analysers have been working satisfactorily apart from contamination with turbine oil, which has been noticed at least twice. The corrosion monitors seem to be working but the lack of trend curves from the mainframe computer system makes it hard to draw any clear conclusions. The chemical-free water treatment has met all
Robust Passivity and Feedback Design for Nonlinear Stochastic Systems with Structural Uncertainty
Zhongwei Lin
2013-01-01
This paper discusses the robust passivity and global stabilization problems for a class of uncertain nonlinear stochastic systems with structural uncertainties. A robust version of the stochastic Kalman-Yakubovich-Popov (KYP) lemma is established, which sustains the robust passivity of the system. Moreover, a robust strongly minimum phase system is defined, based on which the uncertain nonlinear stochastic system can be rendered feedback equivalent to a robust passive system. Following the robust passivity theory, a globally stabilizing control is designed, which guarantees that the closed-loop system is globally asymptotically stable in probability (GASP). A numerical example is presented to illustrate the effectiveness of our results.
Genomic variance estimates: With or without disequilibrium covariances?
Lehermeier, C; de Los Campos, G; Wimmer, V; Schön, C-C
2017-06-01
Whole-genome regression methods are often used for estimating genomic heritability: the proportion of phenotypic variance that can be explained by regression on marker genotypes. Recently, there has been an intensive debate on whether and how to account for the contribution of linkage disequilibrium (LD) to genomic variance. Here, we investigate two different methods for genomic variance estimation that differ in their ability to account for LD. By analysing flowering time in a data set on 1,057 fully sequenced Arabidopsis lines with strong evidence for diversifying selection, we observed a large contribution of covariances between quantitative trait loci (QTL) to the genomic variance. The classical estimate of genomic variance that ignores covariances underestimated the genomic variance in the data. The second method accounts for LD explicitly and leads to genomic variance estimates that when added to error variance estimates match the sample variance of phenotypes. This method also allows estimating the covariance between sets of markers when partitioning the genome into subunits. Large covariance estimates between the five Arabidopsis chromosomes indicated that the population structure in the data led to strong LD also between physically unlinked QTL. By consecutively removing population structure from the phenotypic variance using principal component analysis, we show how population structure affects the magnitude of LD contribution and the genomic variance estimates obtained with the two methods. © 2017 Blackwell Verlag GmbH.
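The distinction between the two estimators can be illustrated directly on marker data. The function below is a hypothetical sketch, not the authors' code: it contrasts a per-marker sum of variances (which ignores LD covariances) with the variance of the total genetic values (which includes all covariance terms between loci).

```python
import numpy as np

def genomic_variance(X, beta):
    """X: (n_lines, n_markers) genotypes; beta: marker effects.
    'classical' sums the variances of individual marker contributions,
    ignoring LD covariances; 'full' is the variance of the total genetic
    values, which includes covariances between loci."""
    contrib = X * beta                             # per-marker contributions
    classical = contrib.var(axis=0, ddof=1).sum()  # no covariance terms
    full = (X @ beta).var(ddof=1)                  # with LD covariances
    return classical, full
```

With two markers in complete positive LD and positive effects, the full estimate is exactly twice the classical one, since the cross-covariance term equals the sum of the two per-marker variances.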
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
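A minimal sketch of the weighted-regression idea under the power model of variance, var = a·signal^b, recommended above. The specific values of a and b, and the plain normal-equations solver, are illustrative assumptions; in practice a and b would be fitted from replicate measurements.

```python
import numpy as np

def power_model_weights(signal, a, b):
    """Inverse-variance weights under the power model var = a * signal**b."""
    return 1.0 / (a * np.asarray(signal, dtype=float) ** b)

def weighted_linear_fit(x, y, w):
    """Weighted least-squares straight-line calibration; returns
    (slope, intercept) from the weighted normal equations."""
    X = np.column_stack([x, np.ones_like(x)])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0], beta[1]
```

Points with small signal (and hence small variance under the power model) receive large weights, so the calibration is anchored where the instrument is most precise.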
Geometry of magnetosonic shocks and plane-polarized waves: Coplanarity Variance Analysis (CVA)
Scudder, J. D.
2005-02-01
Minimum Variance Analysis (MVA) is frequently used for the geometrical organization of a time series of vectors. The Coplanarity Variance Analysis (CVA) developed in this paper reproduces the layer geometry involving coplanar magnetosonic shocks or plane-polarized wave trains (including normals and coplanarity directions) some 300 times more precisely. The CVA technique exploits the eigenvalue degeneracy of the covariance matrix present at planar structures to find a consistent normal to the coplanarity plane of the fluctuations. Although Tangential Discontinuities (TDs) have a coplanarity plane, the eigenvalues of their covariance matrix are usually not degenerate; accordingly, CVA does not misdiagnose TDs as shocks or plane-polarized waves. Together CVA and MVA may be used to sort between the hypotheses that the time series is caused by a one-dimensional current layer whose magnetic disturbances are (1) coplanar and linearly polarized (shocks/plane waves), (2) intrinsically helical (rotational/tangential discontinuities), or (3) neither 1 nor 2.
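Classic MVA, the baseline against which CVA is compared, reduces to an eigendecomposition of the 3×3 covariance matrix of the field vectors. A minimal sketch, not the CVA algorithm itself (which additionally exploits eigenvalue degeneracy):

```python
import numpy as np

def minimum_variance_analysis(B):
    """B: (N, 3) array of field vectors. Returns the eigenvalues
    (ascending) and eigenvectors of the covariance matrix; in classic
    MVA the eigenvector of the smallest eigenvalue estimates the
    layer normal."""
    M = np.cov(B, rowvar=False)  # 3x3 covariance of the components
    w, v = np.linalg.eigh(M)     # eigh returns eigenvalues in ascending order
    return w, v                  # v[:, 0] is the minimum-variance direction
```

When the field fluctuates mostly within a plane, the smallest eigenvalue is well separated and its eigenvector recovers the plane's normal; near-degenerate eigenvalues are exactly the regime where CVA is designed to help.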
Estimating the Allan variance in the presence of long periods of missing data and outliers
Sesia, Ilaria; Tavella, Patrizia
2008-12-01
The ability of the Allan variance (AVAR) to identify and estimate the typical clock noise is widely accepted, and its use is recommended by international standards. Recently, a time-varying version called Dynamic Allan variance (DAVAR) was suggested and exploited. Currently, the AVAR is commonly used in applications to space and satellite systems, in particular in monitoring the clocks of the Global Positioning System, and also in the framework of the European project Galileo. In these applications stability estimation, either AVAR or DAVAR (or other similar variances), presents some peculiar aspects which are not commonly encountered when the clock data are measured in a laboratory. In particular, data from space clocks may typically present outliers and missing values. Hence, special attention has to be paid when dealing with such experimental measurements. In this work we propose an estimation algorithm and its implementation in a robust software code (in MATLAB® language) able to estimate the AVAR in the case of missing data, unequally spaced data, outliers, and with long periods of missing observation, so that the Allan variance estimates turn out unbiased and with the maximum use of all the available data.
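The gap-tolerant idea can be illustrated with a simplified overlapping Allan variance that discards any average touching a missing (NaN) sample. This is a sketch of the general approach, not the authors' MATLAB implementation, and it assumes equally spaced data with gaps marked as NaN.

```python
import numpy as np

def avar_with_gaps(y, m):
    """Overlapping Allan variance of fractional-frequency data y at
    averaging factor m, discarding any average that touches a missing
    (NaN) sample, so gaps reduce the sample count instead of biasing
    the estimate."""
    y = np.asarray(y, dtype=float)
    n = len(y) - m + 1
    ybar = np.array([y[i:i + m].mean() for i in range(n)])  # NaN propagates
    d = ybar[m:] - ybar[:-m]   # differences of adjacent averages
    d = d[~np.isnan(d)]        # drop pairs affected by a gap
    return 0.5 * np.mean(d ** 2)
```

For white frequency noise the estimate scales as σ²/m, and a long run of missing samples simply removes the affected pairs rather than corrupting the whole estimate.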
Epistemically Robust Strategy Subsets
Geir B. Asheim
2016-11-01
We define a concept of epistemic robustness in the context of an epistemic model of a finite normal-form game where a player type corresponds to a belief over the profiles of opponent strategies and types. A Cartesian product X of pure-strategy subsets is epistemically robust if there is a Cartesian product Y of player type subsets with X as the associated set of best reply profiles such that the set Y_i contains all player types that believe with sufficient probability that the others are of types in Y_{-i} and play best replies. This robustness concept provides epistemic foundations for set-valued generalizations of strict Nash equilibrium, applicable also to games without strict Nash equilibria. We relate our concept to closedness under rational behavior and thus to strategic stability and to the best reply property and thus to rationalizability.
Sriboonchitta, Songsak; Huynh, Van-Nam
2017-01-01
This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.
Robustness of Spatial Micronetworks
McAndrew, Thomas C; Bagrow, James P
2015-01-01
Power lines, roadways, pipelines and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that, when failures depend on spatial distances, networks are more fragile than expected. Accounting...
2013-01-01
This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...
Robustness - acceptance criteria
Rizzuto, Enrico; Sørensen, John Dalsgaard; Kroon, Inger B.
2010-01-01
This factsheet describes the general framework on the basis of which acceptance criteria for requirements on the robustness of structures can be set. Such a framework is based on the more general concept of risk-based assessment of engineering systems. The present factsheet is to be seen in conjunction with the one on the theoretical framework for robustness (Sørensen et al. 2009). In the present factsheet, the focus is on normative implications.
Natarajan Sripriya
2004-02-01
Background: Gene microarray technology provides the ability to study the regulation of thousands of genes simultaneously, but its potential is limited without an estimate of the statistical significance of the observed changes in gene expression. Due to the large number of genes being tested and the comparatively small number of array replicates (e.g., N = 3), standard statistical methods such as the Student's t-test fail to produce reliable results. Two other statistical approaches commonly used to improve significance estimates are a penalized t-test and a Z-test using intensity-dependent variance estimates. Results: The performance of these approaches is compared using a dataset of 23 replicates, and a new implementation of the Z-test is introduced that pools together variance estimates of genes with similar minimum intensity. Significance estimates based on 3 replicate arrays are calculated using each statistical technique, and their accuracy is evaluated by comparing them to a reliable estimate based on the remaining 20 replicates. The reproducibility of each test statistic is evaluated by applying it to multiple, independent sets of 3 replicate arrays. Two implementations of a Z-test using intensity-dependent variance produce more reproducible results than two implementations of a penalized t-test. Furthermore, the minimum intensity-based Z-statistic demonstrates higher accuracy and higher or equal precision than all other statistical techniques tested. Conclusion: An intensity-based variance estimation technique provides one simple, effective approach that can improve p-value estimates for differentially regulated genes derived from replicated microarray datasets. Implementations of the Z-test algorithms are available at http://vessels.bwh.harvard.edu/software/papers/bmcg2004.
Robust global motion estimation
(anonymous)
2007-01-01
A global motion estimation method based on robust statistics is presented in this paper. By using tracked feature points instead of all image pixels to estimate parameters, the process is sped up. To further speed up the process and avoid numerical instability, an alternative formulation of the problem is given, and three types of solution to the problem are compared. By using a two-step process, the robustness of the estimator is also improved. Automatic initial value selection is an advantage of this method. The proposed approach is illustrated by a set of examples, which show good results with high speed.
Minimum Q Electrically Small Antennas
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions for the stored energies obtained through the vector spherical wave theory, it is shown that a magnetic-coated metal core reduces the internal stored energy of both TM1m and TE1m modes simultaneously, so that a self-resonant antenna with the Q approaching the fundamental minimum is created. Numerical results for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits a Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.
Gene set analysis using variance component tests
2013-01-01
Background: Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed to jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. Results: We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). Conclusion: We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and a diabetes microarray dataset. PMID:23806107
Functional analysis of variance for association studies.
Olga A Vsevolozhskaya
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT, and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity.
Savaux, Vincent
2014-01-01
This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr
The determinants of the bias in Minimum Rank Factor Analysis (MRFA)
Socan, G; ten Berge, JMF; Yanai, H; Okada, A; Shigemasu, K; Kano, Y; Meulman, JJ
2003-01-01
Minimum Rank Factor Analysis (MRFA), see Ten Berge (1998) and Ten Berge and Kiers (1991), is a method of common factor analysis which yields, for any given covariance matrix Σ, a diagonal matrix Ψ of nonnegative unique variances entailing a reduced covariance matrix Σ − Ψ
Robustness of mission plans for unmanned aircraft
Niendorf, Moritz
This thesis studies the robustness of optimal mission plans for unmanned aircraft. Mission planning typically involves tactical planning and path planning. Tactical planning refers to task scheduling and, in multi-aircraft scenarios, also includes establishing a communication topology. Path planning refers to computing a feasible and collision-free trajectory. For a prototypical mission planning problem, the traveling salesman problem on a weighted graph, the robustness of an optimal tour is analyzed with respect to changes to the edge costs. Specifically, the stability region of an optimal tour is obtained, i.e., the set of all edge cost perturbations for which that tour is optimal. The exact stability region of solutions to variants of the traveling salesman problem is obtained from a linear programming relaxation of an auxiliary problem. Edge cost tolerances and edge criticalities are derived from the stability region. For Euclidean traveling salesman problems, robustness with respect to perturbations of vertex locations is considered, and safe radii and vertex criticalities are introduced. For weighted-sum multi-objective problems, stability regions with respect to changes in the objectives, the weights, and simultaneous changes are given. Most critical weight perturbations are derived. Computing exact stability regions is intractable for large instances, so tractable approximations are desirable. The stability regions of solutions to relaxations of the traveling salesman problem give under-approximations, and sets of tours give over-approximations. The application of these results to the two-neighborhood and the minimum 1-tree relaxation is discussed. Bounds on edge cost tolerances and approximate criticalities are obtainable likewise. A minimum spanning tree is an optimal communication topology for minimizing the cumulative transmission power in multi-aircraft missions. The stability region of a minimum spanning tree is given, and tolerances, stability balls
Robust Helicopter Stabilization in the Face of Wind Disturbance
A. Danapalasingam, Kumeresan; Leth, John-Josef; la Cour-Harbo, Anders
2010-01-01
When a helicopter is required to hover with minimum deviations from a desired position without measurements of an affecting persistent wind disturbance, a robustly stabilizing control action is vital. In this paper, the stabilization of the position and translational velocity of a nonlinear...
Complexity, Robustness, and Performance
B. Visser (Bauke)
2002-01-01
This paper analyses the relationship between organizational complexity (the degree of detail of information necessary to correctly assign agents to positions), robustness (the relative loss of performance due to mis-allocated agents), and performance. More complex structures are not
Robustness via Diffractal Architectures
Moocarme, Matthew
2015-01-01
When plane waves diffract through fractal-patterned apertures, the resulting far-field profiles or diffractals also exhibit iterated, self-similar features. Here we show that this specific architecture enables robust signal processing and spatial multiplexing: arbitrary parts of a diffractal contain sufficient information to recreate the entire original sparse signal.
Vrouwenvelder, T.; Sørensen, John Dalsgaard
2009-01-01
Robustness is still an issue of controversy and poses difficulties in regard to interpretation as well as regulation. Typically, modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, despite the importance...
Robustness Envelopes of Networks
Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.
2013-01-01
We study the robustness of networks under node removal, considering random node failure as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case behavior.
Minimum Thermal Conductivity of Superlattices
Simkin, M. V.; Mahan, G. D.
2000-01-31
The phonon thermal conductivity of a multilayer is calculated for transport perpendicular to the layers. There is a crossover between particle transport for thick layers and wave transport for thin layers. The calculations show that the conductivity has a minimum value for a layer thickness somewhat smaller than the mean free path of the phonons. (c) 2000 The American Physical Society.
Minimum landing size for bream (Abramis brama)
Hal, van R.; Miller, D.C.M.
2016-01-01
To support a decision on a minimum landing size for bream, primarily for the IJsselmeer and Markermeer, the Dutch Ministry of Economic Affairs asked IMARES to provide an overview of landing sizes for bream in other countries and, where possible, the motivation behind them
Coupling between minimum scattering antennas
Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
Anatomic variance of the iliopsoas tendon.
Philippon, Marc J; Devitt, Brian M; Campbell, Kevin J; Michalski, Max P; Espinoza, Chris; Wijdicks, Coen A; Laprade, Robert F
2014-04-01
The iliopsoas tendon has been implicated as a generator of hip pain and a cause of labral injury due to impingement. Arthroscopic release of the iliopsoas tendon has become a preferred treatment for internal snapping hips. Traditionally, the iliopsoas tendon has been considered the conjoint tendon of the psoas major and iliacus muscles, although anatomic variance has been reported. The iliopsoas tendon consists of 2 discrete tendons in the majority of cases, arising from both the psoas major and iliacus muscles. Descriptive laboratory study. Fifty-three nonmatched, fresh-frozen, cadaveric hemipelvis specimens (average age, 62 years; range, 47-70 years; 29 male and 24 female) were used in this study. The iliopsoas muscle was exposed via a Smith-Petersen approach. A transverse incision across the entire iliopsoas musculotendinous unit was made at the level of the hip joint. Each distinctly identifiable tendon was recorded, and the distance from the lesser trochanter was recorded. The prevalence of a single-, double-, and triple-banded iliopsoas tendon was 28.3%, 64.2%, and 7.5%, respectively. The psoas major tendon was consistently the most medial tendinous structure, and the primary iliacus tendon was found immediately lateral to the psoas major tendon within the belly of the iliacus muscle. When present, an accessory iliacus tendon was located adjacent to the primary iliacus tendon, lateral to the primary iliacus tendon. Once considered a rare anatomic variant, the finding of ≥2 distinct tendinous components to the iliacus and psoas major muscle groups is an important discovery. It is essential to be cognizant of the possibility that more than 1 tendon may exist to ensure complete release during endoscopy. Arthroscopic release of the iliopsoas tendon is a well-accepted surgical treatment for iliopsoas impingement. The most widely used site for tendon release is at the level of the anterior hip joint. The findings of this novel cadaveric anatomy study suggest that
Generalized Minimum-Variance-Portfolio Weights
N.L. Kennedy; 朱允民
2004-01-01
Portfolio weight optimization has been studied extensively in the portfolio-management literature. The commonly used method is the Lagrange multiplier; however, this approach has a limitation: its fundamental assumption is that the covariance matrix of returns is positive definite, which renders the method inapplicable in general. In this paper, the authors use quadratic optimization theory to obtain optimal weights in the general case, whereby the positive-definiteness restriction on the covariance matrix becomes merely a special case.
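A rough numerical sketch of the idea of dropping the positive-definiteness assumption (using the Moore-Penrose pseudo-inverse, not necessarily the authors' exact construction; the covariance numbers are invented):

```python
import numpy as np

def min_variance_weights(cov):
    """Weights w minimizing w @ cov @ w subject to sum(w) == 1.
    The Moore-Penrose pseudo-inverse handles a singular (merely
    positive-semidefinite) covariance matrix; for a nonsingular
    matrix it coincides with the ordinary inverse."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.pinv(cov) @ ones
    return w / (ones @ w)

# Two-asset example with invented numbers: recovers the classical
# closed form w1 = (s2^2 - s12) / (s1^2 + s2^2 - 2*s12) = 8/11.
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_variance_weights(cov)   # -> approximately [0.727, 0.273]
```

The same call also returns well-defined weights for a singular covariance matrix, where the Lagrange-multiplier formula with an ordinary inverse would fail.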
2012-09-01
electroencephalography (EEG) recording systems. The four systems examined, Emotiv's EPOC, Biosemi's ActiveTwo, Advanced Brain Monitoring's B-Alert X10, and Quasar's prototype, represent different approaches to the problem of recording brain activity in human subjects. We found that timing drift with the EPOC system is very large: the error between the trigger being logged by the DAQ and when it was sent is on the order of hundreds of
An Adaptive Antenna Utilizing Minimum Norm and LCMV Algorithms
M.E.Ahmed; TANZhanzhong
2005-01-01
This paper introduces a new structure, based on the minimum norm and linearly constrained minimum variance (LCMV) algorithms, to strongly suppress jammers at a Global Positioning System (GPS) receiver. Minimum norm can blindly place a deep null in the jammer direction and, unlike other algorithms, it does not introduce false nulls. Combining it with the LCMV algorithm therefore gives a structure capable of adjusting the weights of the antenna array in real time to respond to signals coming from the desired directions while strongly suppressing jammers coming from other directions. Simulations were performed for fixed and moving jammers. Two jammers are used: one of power −100 dBW, the other of −120 dBW. The null depths attained by minimum norm alone are 88.4 dB for the strong jammer and 45 dB for the weak one. The simulations indicate that the new structure gives deeper nulls in the jammer directions: more than 114 dB null depth for both jammers when they come from fixed directions, and about 103 dB when they come from moving directions. The new structure not only improves the null depths but can also control them. In addition, it can control the antenna gain in the directions of the useful GPS signals.
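A minimal numerical sketch of the LCMV step described above (the 8-element uniform linear array, the look and jammer directions, and the jammer power are invented for illustration; this is not the paper's combined minimum-norm structure):

```python
import numpy as np

def steering(theta_deg, n=8, d=0.5):
    """Steering vector of an n-element uniform linear array
    with element spacing d (in wavelengths)."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

def lcmv_weights(R, C, f):
    """LCMV weights: minimize w^H R w subject to C^H w = f."""
    Ri_C = np.linalg.solve(R, C)                    # R^{-1} C
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

n = 8
s_des = steering(0.0, n)    # desired GPS signal from broadside (invented)
s_jam = steering(40.0, n)   # jammer direction (invented)
# Jammer-plus-noise covariance: jammer 40 dB above the noise floor.
R = 1e4 * np.outer(s_jam, s_jam.conj()) + np.eye(n)
w = lcmv_weights(R, s_des[:, None], np.array([1.0 + 0j]))
# Array gain toward the jammer relative to the protected look direction (dB).
gain_jam = 20 * np.log10(abs(w.conj() @ s_jam) / abs(w.conj() @ s_des))
```

The unit-gain constraint keeps the desired direction distortionless while minimizing output power, which drives a deep null onto the strong jammer.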
Integrating robust timetabling in line plan optimization for railway systems
Burggraeve, Sofie; Bull, Simon Henry; Vansteenwegen, Pieter
2017-01-01
We propose a heuristic algorithm to build a railway line plan from scratch that minimizes passenger travel time and operator cost and for which a feasible and robust timetable exists. A line planning module and a timetabling module work iteratively and interactively. The line planning module creates an initial line plan. The timetabling module evaluates the line plan and identifies a critical line based on minimum buffer times between train pairs. The line planning module proposes a new line plan in which the time length of the critical line is modified in order to provide more flexibility, but is constrained by limited shunt capacity. While the operator and passenger cost remain close to those of the initially and (for these costs) optimally built line plan, the timetable corresponding to the finally developed robust line plan significantly improves the minimum buffer time, and thus the robustness...
40 CFR 190.11 - Variances for unusual operations.
2010-07-01
... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and the Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
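The drift insensitivity noted above follows directly from the textbook non-overlapping estimators (a sketch of the standard first- and second-difference forms, not the paper's spectral simulation machinery; the drift rate is an invented number):

```python
import numpy as np

def allan_var(y):
    """Non-overlapping Allan variance of fractional-frequency samples y:
    built on first differences, so a linear frequency drift inflates it."""
    d = np.diff(y)
    return 0.5 * np.mean(d ** 2)

def hadamard_var(y):
    """Non-overlapping Hadamard variance: built on second differences,
    so a linear frequency drift contributes nothing."""
    d2 = np.diff(y, n=2)
    return np.mean(d2 ** 2) / 6.0

drift = 1e-3 * np.arange(100)                                  # pure linear drift
noisy = drift + np.random.default_rng(1).normal(0, 1e-6, 100)  # drift plus noise
# allan_var(drift) is about 5e-7, while hadamard_var(drift) is
# numerically zero: the drift drops out of the second differences.
```

On the noisy series the Hadamard variance responds only to the noise, which is why it is preferred when frequency drift is present.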
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Ashton M Verdery
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and on the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments, that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
RR-Interval variance of electrocardiogram for atrial fibrillation detection
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented using the variance, or spread, of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
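The detection idea reduces to a sliding-window variance threshold; a toy sketch (the window length, threshold, and RR sequences are invented for illustration, not the study's clinical values):

```python
import numpy as np

def af_flags(rr_ms, window=8, threshold=2000.0):
    """Flag sliding windows of RR intervals (in ms) whose variance
    exceeds a threshold. Window length and threshold are illustrative
    choices, not values from the study."""
    return [bool(np.var(rr_ms[i:i + window]) > threshold)
            for i in range(len(rr_ms) - window + 1)]

regular = [800, 810, 795, 805, 800, 798, 802, 806]     # sinus-like rhythm
irregular = [620, 910, 540, 1020, 700, 880, 610, 990]  # AF-like irregularity
# af_flags(regular) -> [False]; af_flags(irregular) -> [True]
```

In practice the threshold would be tuned against annotated recordings, trading sensitivity against false alarms.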
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using the continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
Comparing dependent robust correlations.
Wilcox, Rand R
2016-11-01
Let r1 and r2 be two dependent estimates of Pearson's correlation. There is a substantial literature on testing H0: ρ1 = ρ2, the hypothesis that the population correlation coefficients are equal. However, it is well known that Pearson's correlation is not robust. Even a single outlier can have a substantial impact on Pearson's correlation, resulting in a misleading understanding of the strength of the association among the bulk of the points. A way of mitigating this concern is to use a correlation coefficient that guards against outliers, many of which have been proposed. But apparently there are no results on how to compare dependent robust correlation coefficients when there is heteroscedasticity. Extant results suggest that a basic percentile bootstrap will perform reasonably well. This paper reports simulation results indicating the extent to which this is true when using Spearman's rho, a Winsorized correlation, or a skipped correlation.
Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus
2011-01-01
In the present research, we argue for the robustness of illusory correlations (ICs; Hamilton & Gifford, 1976) with regard to two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs, which can be explained as a result of unbiased but noisy learning.
1985-09-19
Robust Adaptive Control. Final report (AD-A161349, AFOSR-TR-798), prepared by Robert L...
Rider, William, E-mail: wjrider@sandia.gov [Sandia National Laboratories, Center for Computing Research, Albuquerque, NM 87185 (United States); Witkowski, Walt [Sandia National Laboratories, Verification and Validation, Uncertainty Quantification, Credibility Processes Department, Engineering Sciences Center, Albuquerque, NM 87185 (United States); Kamm, James R. [Los Alamos National Laboratory, Methods and Algorithms Group, Computational Physics Division, Los Alamos, NM 87545 (United States); Wildey, Tim [Sandia National Laboratories, Center for Computing Research, Albuquerque, NM 87185 (United States)
2016-02-15
We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
Robust Self Tuning Controllers
Poulsen, Niels Kjølstad
1985-01-01
The present thesis concerns robustness properties of adaptive controllers. It addresses methods for robustifying self-tuning controllers with respect to abrupt changes in the plant parameters. In the thesis, an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self-tuning controller has been developed to regulate plants with changing time delays.
Robustness of Interdependent Networks
Havlin, Shlomo
2011-03-01
In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.
Marek Hicar
2004-01-01
The article is about a control design for the complete structure of a crane: crab, bridge, and crane uplift. The most important unknown parameters for the simulations are the burden weight and the length of the hanging rope. We use robust control for the crab and bridge to ensure adaptivity to burden weight and rope length. Robust control is designed for current control of the crab and bridge; it is necessary to know the range of the unknown parameters. The whole robust range is split into subintervals, and after correct identification of the unknown parameters the most suitable robust controllers are chosen. The most important condition for crab and bridge motion is avoiding burden swinging in the final position. The crab and bridge drives use asynchronous motors fed from frequency converters. We use the crane uplift with a burden-weight observer in combination with the uplift, crab, and bridge drives, with cooperation of their parameters: burden weight, rope length, and crab and bridge position. Controllers are designed by the state-control method. We preferably use a disturbance observer, which identifies the burden weight as a disturbance. The system works in both modes, with an empty hook as well as at maximum load: burden uplifting and dropping down.
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Kwee, R E; The ATLAS collaboration
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements of charged-particle densities. These measurements help constrain various models that phenomenologically describe soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.09 < |eta| < 3.8 has proven to select pp collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and the central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies of possible bias sources will be presen...
Robust estimation of nonstationary, fractionally integrated, autoregressive, stochastic volatility
Mark J. Jensen
2015-01-01
Empirical volatility studies have discovered nonstationary, long-memory dynamics in the volatility of the stock market and foreign exchange rates. This highly persistent, infinite variance - but still mean reverting - behavior is commonly found with nonparametric estimates of the fractional differencing parameter d for financial volatility. In this paper, a fully parametric Bayesian estimator, robust to nonstationarity, is designed for the fractionally integrated, autoregressive, stochastic ...
Minimum thickness anterior porcelain restorations.
Radz, Gary M
2011-04-01
Porcelain laminate veneers (PLVs) provide the dentist and the patient with an opportunity to enhance the patient's smile in a minimally to virtually noninvasive manner. Today's PLV demonstrates excellent clinical performance and as materials and techniques have evolved, the PLV has become one of the most predictable, most esthetic, and least invasive modalities of treatment. This article explores the latest porcelain materials and their use in minimum thickness restoration.
Fingerprinting with Minimum Distance Decoding
Lin, Shih-Chun; Gamal, Hesham El
2007-01-01
This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t = 2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t = 2 pirates, we show that the rate 1 - H(0.25) ≈ 0.188 is achievable using an ...
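A toy sketch of the minimum distance decision rule under the marking assumption (the random codebook, its size, and the pirate pair are illustrative assumptions; practical fingerprinting codes are structured rather than random):

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, code_len = 16, 256
codebook = rng.integers(0, 2, size=(n_users, code_len))

def min_distance_decode(y, codebook):
    """Accuse the user whose codeword is closest to y in Hamming distance."""
    return int(np.argmin((codebook != y).sum(axis=1)))

# Collusion by pirates 3 and 7 obeying the marking assumption: where their
# codewords agree the bit is undetectable and must be kept; where they
# differ, the forgery takes the bit from either pirate at random.
i, j = 3, 7
pick = rng.integers(0, 2, size=code_len).astype(bool)
forgery = np.where(pick, codebook[i], codebook[j])
accused = min_distance_decode(forgery, codebook)
```

With a long random code the forgery stays much closer to the two colluders (roughly code_len/4 away) than to any innocent user (roughly code_len/2), so minimum distance decoding accuses one of the pirates.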
Minimum feature size preserving decompositions
Aloupis, Greg; Demaine, Martin L; Dujmovic, Vida; Iacono, John
2009-01-01
The minimum feature size of a crossing-free straight line drawing is the minimum distance between a vertex and a non-incident edge. This quantity measures the resolution needed to display a figure or the tool size needed to mill the figure. The spread is the ratio of the diameter to the minimum feature size. While many algorithms (particularly in meshing) depend on the spread of the input, none explicitly consider finding a mesh whose spread is similar to the input. When a polygon is partitioned into smaller regions, such as triangles or quadrangles, the degradation is the ratio of original to final spread (the final spread is always greater). Here we present an algorithm to quadrangulate a simple n-gon, while achieving constant degradation. Note that although all faces have a quadrangular shape, the number of edges bounding each face may be larger. This method uses Theta(n) Steiner points and produces Theta(n) quadrangles. In fact to obtain constant degradation, Omega(n) Steiner points are required by any al...
An Analysis of Variance Framework for Matrix Sampling.
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Gender Variance and Educational Psychology: Implications for Practice
Yavuz, Carrie
2016-01-01
The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…
Error Variance of Rasch Measurement with Logistic Ability Distributions.
Dimitrov, Dimiter M.
Exact formulas for classical error variance are provided for Rasch measurement with logistic distributions. An approximation formula with the normal ability distribution is also provided. With the proposed formulas, the additive contribution of individual items to the population error variance can be determined without knowledge of the other test…
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
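For reference, the unconstrained mean-variance frontier that this endogeneity argument concerns has a standard closed form; the three assets and their expected returns mu and covariance Sigma below are invented for illustration:

```python
import numpy as np

# Minimum-variance frontier: min w' Sigma w  s.t.  w' mu = m, w' 1 = 1
# (short sales unrestricted, the textbook setting).
mu = np.array([0.05, 0.08, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

inv = np.linalg.inv(Sigma)
ones = np.ones(3)
A = ones @ inv @ mu
B = mu @ inv @ mu
C = ones @ inv @ ones
D = B * C - A * A

def frontier_weights(m):
    """Weights of the minimum-variance portfolio with expected return m."""
    lam = (C * m - A) / D
    gam = (B - A * m) / D
    return inv @ (lam * mu + gam * ones)

def frontier_variance(m):
    """Variance of the frontier portfolio with expected return m."""
    return (C * m * m - 2 * A * m + B) / D

w = frontier_weights(0.09)
```

Because the frontier is a function of mu and Sigma, any change in an asset's parameters moves the whole frontier, which is the endogeneity the article emphasizes.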
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
Delivery Time Variance Reduction in the Military Supply Chain. Thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering..., March 2010. Approved for public release; distribution unlimited. AFIT-OR-MS-ENS-10-02. Preston...
The asymptotic variance of departures in critically loaded queues
A. Al Hanbali; M.R.H. Mandjes (Michel); Y. Nazarathy (Yoni); W. Whitt
2010-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load rho equals 1, and prove that the asymptotic variance rate satisfies lim_t Var D(t)/t = lambda ...
76 FR 78698 - Proposed Revocation of Permanent Variances
2011-12-19
Occupational Safety and Health Administration, Proposed Revocation of Permanent Variances. AGENCY: Occupational... a short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...
Adjustment for heterogeneous variances due to days in milk and ...
ARC-IRENE
Adjustment of heterogeneous variances and a calving year effect in test-day ... Regression Test-Day Model (FRTDM), which assumes equal variances of the response variable at different ... random residual error ... records were included in the selection, while in the unadjusted data set, lactations consisting of six and more ...
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can then be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.
Productive Failure in Learning the Concept of Variance
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
Time variance effects and measurement error indications for MLS measurements
Liu, Jiyuan
1999-01-01
Mathematical characteristics of maximum-length sequences (MLS) are discussed, and the effects of measuring slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.
Robust Partial Inverse Network Flow Problems
杨晓光
2001-01-01
In this paper, a new model for inverse network flow problems, the robust partial inverse problem, is presented. For a given partial solution, the robust partial inverse problem is to modify the coefficients optimally such that all full solutions containing the partial solution become optimal under the new coefficients. It is shown that the robust partial inverse spanning tree problem can be formulated as a combinatorial linear program, while the robust partial inverse minimum cut problem and the robust partial inverse assignment problem can be solved by combinatorial strongly polynomial algorithms.
Two-level Robust Measurement Fusion Kalman Filter for Clustering Sensor Networks
ZHANG Peng; QI Wen-Juan; DENG Zi-Li
2014-01-01
This paper investigates distributed fusion Kalman filtering over clustering sensor networks. The sensor network is partitioned into clusters by the nearest neighbor rule, and each cluster consists of sensing nodes and a cluster-head. Using the minimax robust estimation principle, based on the worst-case conservative system with conservative upper bounds on the noise variances, a two-level robust measurement fusion Kalman filter is presented for clustering sensor network systems with uncertain noise variances. It can significantly reduce the communication load and save energy when the number of sensors is very large. A Lyapunov equation approach for the robustness analysis is presented, by which the robustness of the local and fused Kalman filters is proved. The concept of robust accuracy is presented, and the robust accuracy relations among the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the two-level weighted measurement fuser is equal to that of the global centralized robust fuser and is higher than those of each local robust filter and each local weighted measurement fuser. A simulation example shows the correctness and effectiveness of the proposed results.
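In the simplest static, scalar case, weighted measurement fusion reduces to inverse-variance (LMV) weighting, sketched below; this only illustrates the fusion idea, not the paper's clustered Kalman setting with uncertain noise variances:

```python
import numpy as np

def inverse_variance_fusion(z, r):
    """Fuse unbiased scalar measurements z_i of one quantity with known
    noise variances r_i. Returns the LMV (inverse-variance weighted)
    estimate and its variance 1 / sum(1/r_i), which equals the variance of
    the centralized estimator that uses all measurements at once."""
    w = 1.0 / np.asarray(r, dtype=float)
    est = float(np.sum(w * np.asarray(z, dtype=float)) / np.sum(w))
    return est, float(1.0 / np.sum(w))

est, var = inverse_variance_fusion([1.0, 3.0], [1.0, 1.0])
```

Noisier sensors get proportionally smaller weights, and the fused variance is never larger than that of the best single sensor.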
Performance of medical students admitted via regular and admission-variance routes.
Simon, H J; Covell, J W
1975-03-01
Twenty-three medical students from socioeconomically disadvantaged backgrounds, drawn chiefly from Chicano and black racial minority groups, were granted admission variances to the University of California, San Diego, School of Medicine in 1970 and 1971. This group was compared with 21 regularly admitted junior and senior medical students with respect to: specific admissions criteria (Medical College Admission Test scores, grade-point average, and college rating score); scores on Part I of the examinations of the National Board of Medical Examiners (NBME); and performance in at least two of the medicine, surgery, and pediatrics clerkships. The two populations differed markedly on admission. The usual screen would have precluded admission of all but one of the students granted variances. At the end of the second year, average NBME Part I scores again identified two distinct populations, but the average scores of both groups were clearly above the minimum passing level. The groups still differ on analysis of their aggregate performances on the clinical services, but the difference following completion of two of three major clinical clerkships has become the distinction between a "slightly above average" level of performance for the regularly admitted students and an "average" level for students admitted on variances.
Confidence Intervals of Variance Functions in Generalized Linear Model
Yong Zhou; Dao-ji Li
2006-01-01
In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs) when the designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers. Nonparametric techniques are developed for deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and it is shown that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed in which the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.
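A minimal residual-based conditional variance estimate using a local linear smoother, in the spirit of the abstract (bandwidth, data, and the pilot mean fit are illustrative assumptions; the paper's bias correction and quasi-likelihood machinery are omitted):

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0 with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]  # fitted value at x0

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 2000)
y = 2 * x + (0.5 + x) * rng.standard_normal(2000)  # true var(y|x) = (0.5 + x)^2

# Step 1: pilot local linear fit of the mean; step 2: smooth the squared
# residuals to estimate the conditional variance at a couple of points.
grid = np.array([0.25, 0.75])
mean_fit = np.array([local_linear(xi, x, y, 0.1) for xi in x])
var_hat = np.array([local_linear(g, x, (y - mean_fit) ** 2, 0.1) for g in grid])
```

The estimated conditional variance should increase with x, tracking the true values (0.5 + x)^2 at the two grid points.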
Research on variance of subnets in network sampling
Qi Gao; Xiaoting Li; Feng Pan
2014-01-01
In recent research on network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and the sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network and a small-world network to explore the variance in network sampling. As proved by the results, snowball sampling obtains the most variance of subnets, but does well in capturing the network structure. The variance of networks sampled by the hub and random strategies is much smaller. The hub strategy performs well in reflecting the property of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.
Utility functions predict variance and skewness risk preferences in monkeys.
Genest, Wilfried; Stauffer, William R; Schultz, Wolfram
2016-07-26
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
Robustness analysis of an air heating plant and control law by using polynomial chaos
Colón, Diego [University of São Paulo, Polytechnic School, LAC -PTC, São Paulo (Brazil); Ferreira, Murillo A. S.; Bueno, Átila M. [São Paulo State University - Sorocaba Campus, Sorocaba (Brazil); Balthazar, José M. [São Paulo State University - Rio Claro Campus, Rio Claro (Brazil); Rosa, Suélia S. R. F. de [University of Brasilia, Brasilia (Brazil)
2014-12-10
This paper presents a robustness analysis of an air heating plant with a multivariable closed-loop control law by using the polynomial chaos methodology (MPC). The plant consists of a PVC tube with a fan at the air input (which forces the air through the tube) and a mass flux sensor at the output. A heating resistance warms the air as it flows inside the tube, and a thermocouple sensor measures the air temperature. The plant thus has two inputs (the fan's rotation intensity and the heat generated by the resistance, both measured in percent of the maximum value) and two outputs (air temperature and air mass flux, also in percent of the maximum value). The mathematical model is obtained by system identification techniques. The mass flux sensor, which is nonlinear, is linearized, and the delays in the transfer functions are properly approximated by non-minimum phase transfer functions. The resulting model is transformed to a state-space model, which is used for control design purposes. The multivariable robust control design technique used is LQG/LTR, and the controllers are validated in simulation software and on the real plant. Finally, the MPC is applied by considering some of the system's parameters as random variables (one at a time), and the system's stochastic differential equations are solved by expanding the solution (a stochastic process) in an orthogonal basis of polynomial functions of the basic random variables. This method transforms the stochastic equations into a set of deterministic differential equations, which can be solved by traditional numerical methods (this is the MPC). Statistical data for the system (such as expected values and variances) are then calculated. The effects of randomness in the parameters are evaluated in the open-loop and closed-loop poles' positions.
Robustness of Cantor diffractals.
Verma, Rupesh; Sharma, Manoj Kumar; Banerjee, Varsha; Senthilkumaran, Paramasivam
2013-04-08
Diffractals are electromagnetic waves diffracted by a fractal aperture. In an earlier paper, we reported an important property of Cantor diffractals, that of redundancy [R. Verma et al., Opt. Express 20, 8250 (2012)]. In this paper, we report another important property, that of robustness. The question we address is: how much disorder in the Cantor grating can be accommodated by diffractals while continuing to yield faithfully its fractal dimension and generator? The answer is of consequence in a number of physical problems involving fractal architecture.
Robust Kriged Kalman Filtering
Baingana, Brian; Dall' Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.
2015-11-11
Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations, while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
Variance-Constrained Multiobjective Control and Filtering for Nonlinear Stochastic Systems: A Survey
Lifeng Ma
2013-01-01
The multiobjective control and filtering problems for nonlinear stochastic systems with variance constraints are surveyed. First, the concepts of nonlinear stochastic systems are recalled along with the introduction of some recent advances. Then, the covariance control theory, which serves as a practical method for multi-objective control design as well as a foundation for linear system theory, is reviewed comprehensively. The multiple design requirements frequently applied in engineering practice for evaluating system performance are introduced, including robustness, reliability, and dissipativity. Several design techniques suitable for the multi-objective variance-constrained control and filtering problems for nonlinear stochastic systems are discussed. In particular, as a special case of the multi-objective design problems, the mixed H2/H∞ control and filtering problems are reviewed in great detail. Subsequently, some of the latest results on variance-constrained multi-objective control and filtering problems for nonlinear stochastic systems are summarized. Finally, conclusions are drawn, and several possible future research directions are pointed out.
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
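The core Allan variance computation can be sketched as follows; the overlapping estimator, sampling rate, and white-noise check are standard but illustrative choices, not the paper's full end-to-end methodology:

```python
import numpy as np

def allan_variance(rate, fs, max_clusters=100):
    """Overlapping Allan variance of a rate signal sampled at fs Hz.

    Returns (taus, avar): averaging times and Allan variance values.
    """
    n = len(rate)
    tau0 = 1.0 / fs
    # Cluster sizes m, log-spaced from 1 up to n // 9 so enough clusters remain.
    max_m = n // 9
    ms = np.unique(np.logspace(0, np.log10(max_m), max_clusters).astype(int))
    theta = np.cumsum(rate) * tau0  # integrated signal (e.g. angle from a gyro rate)
    taus, avar = [], []
    for m in ms:
        tau = m * tau0
        # AVAR(tau) = <(theta[k+2m] - 2*theta[k+m] + theta[k])^2> / (2 tau^2)
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar.append(np.mean(d ** 2) / (2.0 * tau ** 2))
        taus.append(tau)
    return np.array(taus), np.array(avar)

# White sensor noise shows up as a -1/2 slope of the Allan deviation on a
# log-log plot; the fit below recovers that signature.
rng = np.random.default_rng(0)
taus, avar = allan_variance(rng.standard_normal(100_000), fs=100.0)
slope = np.polyfit(np.log10(taus), np.log10(np.sqrt(avar)), 1)[0]
```

Different noise types (bias instability, rate random walk) appear as other characteristic slopes, which is how the Allan variance separates error sources for the sensor-fusion models the paper describes.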
Variance Entropy: A Method for Characterizing Perceptual Awareness of Visual Stimulus
Meng Hu
2012-01-01
Entropy, as a complexity measure, is a fundamental concept for time series analysis. Among many methods, sample entropy (SampEn) has emerged as a robust, powerful measure for quantifying the complexity of time series due to its insensitivity to data length and its immunity to noise. Despite its popular use, SampEn is based on standardized data where the variance is routinely discarded, which may nonetheless provide additional information for discriminant analysis. Here we designed a simple, yet efficient, complexity measure, namely variance entropy (VarEn), to integrate SampEn with variance to achieve effective discriminant analysis. We applied VarEn to analyze local field potential (LFP) data collected from the visual cortex of a macaque monkey performing a generalized flash suppression task, in which a visual stimulus was dissociated from perceptual experience, to study the neural complexity of perceptual awareness. We evaluated the performance of VarEn in comparison with SampEn on LFP, at both single and multiple scales, in discriminating different perceptual conditions. Our results showed that perceptual visibility could be differentiated by VarEn, with significantly better discriminative performance than SampEn. Our findings demonstrate that VarEn is a sensitive measure of perceptual visibility, and thus can be used to probe perceptual awareness of a stimulus.
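Sample entropy, the building block that VarEn extends, can be sketched as follows (a textbook SampEn with Chebyshev distance; how the paper combines it with the otherwise discarded variance to form VarEn is not reproduced here, and the test signals are synthetic):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series.

    r is a tolerance as a fraction of the series' standard deviation (the
    usual convention). SampEn = -ln(A/B), where B counts pairs of m-length
    templates within tolerance and A counts (m+1)-length pairs, with
    self-matches excluded and Chebyshev (max-abs) distance throughout.
    """
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templ)):
            dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    return -np.log(count_matches(m + 1) / count_matches(m))

rng = np.random.default_rng(1)
noise = rng.standard_normal(1000)
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))
se_noise = sample_entropy(noise)
se_regular = sample_entropy(regular)
```

A regular signal has many repeating templates and hence much lower SampEn than white noise, which is the discriminative behavior VarEn builds on.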
Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.
2009-01-01
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo
Robustness Metrics: Consolidating the multiple approaches to quantify Robustness
Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.
2016-01-01
... determined to be conceptually different from one another. The metrics were classified by their meaning and interpretation based on the types of information necessary to calculate the metrics. Four different classes were identified: 1) Sensitivity robustness metrics; 2) Size of feasible design space ... and to remove the ambiguities of the term robustness. By applying an exemplar metric from each class to a case study, the differences between the classes were further highlighted. These classes form the basis for the definition of four specific sub-definitions of robustness, namely the 'robust concept', 'robust ...
Ceramic veneers with minimum preparation.
da Cunha, Leonardo Fernandes; Reis, Rachelle; Santana, Lino; Romanini, Jose Carlos; Carvalho, Ricardo Marins; Furuse, Adilson Yoshio
2013-10-01
The aim of this article is to describe the possibility of improving dental esthetics with low-thickness glass ceramics without major tooth preparation for patients with small to moderate anterior dental wear and little discoloration. For this purpose, a carefully defined treatment planning and a good communication between the clinician and the dental technician helped to maximize enamel preservation, and offered a good treatment option. Moreover, besides restoring esthetics, the restorative treatment also improved the function of the anterior guidance. It can be concluded that the conservative use of minimum thickness ceramic laminate veneers may provide satisfactory esthetic outcomes while preserving the dental structure.
Adaptive Robust Variable Selection
Fan, Jianqing; Barut, Emre
2012-01-01
Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized least absolute deviation (LAD) method with weighted $L_1$-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the $L_1$-penalty. In the ultra-high dimensional setting, where the dimensionality can grow exponentially with the sample size, we investigate the model selection oracle property and establish the asymptotic normality of the WR-Lasso. We show that only mild conditions on the model error distribution are needed. Our theoretical results also reveal that adaptive choice of the weight vector is essential for the WR-Lasso to enjoy these nice asymptotic properties. To make the WR-Lasso practically feasible, we propose a two-step procedure, called adaptive robust Lasso (AR-Lasso), in which the weight vector in the second step is c...
Richard A. Shweder
2013-11-01
In this wide-ranging interview, Professor Richard A. Shweder from the Department of Comparative Human Development at the University of Chicago discusses whether it is or is not possible to be a robust cultural pluralist and a dedicated political liberal at the same time. In this discussion, Professor Shweder offers his insights - based on over 40 years of research - on issues related to the history and re-emergence of cultural psychology; moral anthropology and psychology; the experimental method in psychological investigation and its philosophical basis; contemporary and historical cultural collisions, most notably conflicting representations of female genital surgeries; cultural diversity and inequality; and the dissemination of ideas through open access publishing and Twitter. Professor Shweder ends by offering valuable advice to young researchers in the field of cultural psychology as well as a glimpse into the larger themes of his forthcoming book, which seeks to provide answers to the question of what forms of political liberalism are most compatible with robust cultural pluralism and which are not.
Doubly Robust Policy Evaluation and Learning
Dudik, Miroslav; Li, Lihong
2011-01-01
We study decision making in environments where the reward is only partially observed, but can be modeled as a function of an action and an observed context. This setting, known as contextual bandits, encompasses a wide variety of applications including health-care policy and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. We leverage the strength and overcome the weaknesses of the two approaches by adapting doubly robust estimation techniques to the problems of policy evaluation and optimization. We prove that this approach yields unbiased (and often lower variance) value estimates when we have either a good model of rewards or a goo...
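The doubly robust construction described above can be sketched on a toy logged-bandit dataset; the policies, reward table, and deliberately biased reward model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Logged data from a uniform-random behavior policy over two actions.
context = rng.integers(0, 2, size=n)
logged_action = rng.integers(0, 2, size=n)
true_mean = np.array([[0.2, 0.8],   # E[reward | context=0, action]
                      [0.6, 0.4]])  # E[reward | context=1, action]
reward = rng.binomial(1, true_mean[context, logged_action]).astype(float)

# Target policy to evaluate: pick the opposite of the context bit.
# Its true value is 0.5 * 0.8 + 0.5 * 0.6 = 0.7.
target_action = 1 - context
prop = 0.5                        # behavior propensity of each action
rhat = true_mean + 0.1            # deliberately biased reward model

match = (logged_action == target_action).astype(float)
dm = rhat[context, target_action]                    # direct method (model only)
ips = match / prop * reward                          # inverse propensity scoring
dr = rhat[context, target_action] + match / prop * (reward - rhat[context, logged_action])

value_dm, value_ips, value_dr = dm.mean(), ips.mean(), dr.mean()
```

Because the logging propensities here are correct, the DR and IPS estimates recover the target policy's true value (0.7) even though the direct method inherits the model's +0.1 bias; with a good model but wrong propensities, DR would instead lean on the model term, which is the "doubly robust" property.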
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
Meta-analysis of ratios of sample variances.
Prendergast, Luke A; Staudte, Robert G
2016-05-20
When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, which require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equal-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances, or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement, and the validity of the approaches is reinforced by simulation studies and an application to a real data set.
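One way to set up such a meta-analysis is to work with the log of each study's sample-variance ratio, whose sampling variance is approximately 2/(n1-1) + 2/(n2-1) under normality, and to pool the per-study logs by inverse-variance weighting. A hedged sketch of fixed-effect pooling only; the function names are illustrative, not the paper's:

```python
import math
import random
from statistics import variance

def log_variance_ratio(group1, group2):
    """ln of the sample-variance ratio and its approximate sampling variance.
    Uses the large-sample approximation var(ln s^2) ~ 2/(n-1)."""
    n1, n2 = len(group1), len(group2)
    lnr = math.log(variance(group1) / variance(group2))
    se2 = 2.0 / (n1 - 1) + 2.0 / (n2 - 1)
    return lnr, se2

def meta_estimate(studies):
    """Fixed-effect inverse-variance pooled ln variance ratio."""
    num = den = 0.0
    for g1, g2 in studies:
        lnr, se2 = log_variance_ratio(g1, g2)
        w = 1.0 / se2
        num += w * lnr
        den += w
    return num / den

# Two hypothetical studies where arm 2 truly has twice the SD of arm 1,
# so the pooled log variance ratio should sit near ln(1/4).
random.seed(1)
studies = [([random.gauss(0, 1) for _ in range(200)],
            [random.gauss(0, 2) for _ in range(200)]) for _ in range(2)]
pooled = meta_estimate(studies)
```

A pooled estimate far from zero, relative to its standard error, is the kind of evidence the abstract suggests should steer the analyst away from equal-variance SMD methods.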
Global Approach for Calculation of Minimum Miscibility Pressure
Jessen, Kristian; Michelsen, Michael Locht; Stenby, Erling Halfdan
1998-01-01
An algorithm has been developed for calculation of minimum miscibility pressure (MMP) for the displacement of oil by multicomponent gas injection. The algorithm is based on the key tie line identification approach initially addressed by Wang and Orr [Y. Wang and F.M. Orr Jr., Analytical calculation of minimum miscibility pressure, Fluid Phase Equilibria, 139 (1997) 101-124]. In this work a new global approach is introduced. A number of deficiencies of the sequential approach have been eliminated, resulting in a robust and highly efficient algorithm. The time consumption for calculation of the MMP ... results from the key tie line identification approach are shown to be in excellent agreement with slimtube data and with other multicell/slimtube simulators presented in the literature.
Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation
Namyong Kim
2016-06-01
The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and of the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of the input entropy, which is estimated recursively to reduce computational complexity. The proposed algorithm yields a lower minimum MSE (mean squared error) and faster convergence speed simultaneously than the original MEE algorithm does in the equalization simulation. At the same convergence speed, its steady-state MSE improves by more than 3 dB.
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λ_z ≳ 5 km and horizontal along-track wavelengths λ_y ≈ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Robust guaranteed-cost adaptive quantum phase estimation
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
Variance decomposition of apolipoproteins and lipids in Danish twins
Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A
2007-01-01
OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent, twin studies have been used in bivariate or multivariate analysis to elucidate common genetic factors of two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect...
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
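As a point of comparison for the influence-function approach, the bootstrap variance estimate mentioned above can be sketched as follows, using the event proportion in a cohort as a stand-in for an absolute-risk statistic; the setup is illustrative, not the paper's breast cancer model:

```python
import random
from statistics import mean, variance

def bootstrap_variance(data, statistic, n_boot=500, seed=0):
    """Nonparametric bootstrap estimate of the variance of a statistic."""
    rng = random.Random(seed)
    n = len(data)
    reps = [statistic([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return variance(reps)

# Hypothetical 'absolute risk': the proportion of events in a cohort of 400.
random.seed(42)
cohort = [1 if random.random() < 0.2 else 0 for _ in range(400)]
v_boot = bootstrap_variance(cohort, mean)
# For a simple proportion the analytic (binomial) variance is available,
# which lets us check the bootstrap against a known answer.
v_analytic = mean(cohort) * (1 - mean(cohort)) / len(cohort)
```

For a statistic this simple the two agree closely; the abstract's point is that the influence-function route gives comparable variance estimates at far lower cost than resampling when the statistic is an entire risk model.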
Rates of Convergence of Minimum Distance Estimators and Kolmogorov's Entropy
Yatracos, Yannis G.
1985-01-01
Let $(\mathscr{X}, \mathscr{A})$ be a space with a $\sigma$-field, $M = \{P_s; s \in \Theta\}$ be a family of probability measures on $\mathscr{A}$ with $\Theta$ arbitrary, and $X_1, \cdots, X_n$ i.i.d. observations on $P_\theta$. Define $\mu_n(A) = (1/n) \sum^n_{i=1} I_A(X_i)$, the empirical measure indexed by $A \in \mathscr{A}$. Assume $\Theta$ is totally bounded when metrized by the $L_1$ distance between measures. Robust minimum distance estimators $\hat{\theta}_n$ are constructed for $\theta$ and t...
International Conference on Robust Statistics
Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter
2003-01-01
Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.
Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model
Leunglung Chan; Eckhard Platen
2015-01-01
This paper studies volatility derivatives, such as variance and volatility swaps and options on variance, in the modified constant elasticity of variance model using the benchmark approach. Analytical expressions of the pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.
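The floating leg of a variance swap is the annualized realized variance of log-returns, and the payoff at expiry is its difference from the strike. A minimal sketch of that payoff computation with discrete daily sampling and simplified conventions; this is not the paper's pricing formula, which works in the modified CEV model:

```python
import math

def realized_variance(prices, periods_per_year=252):
    """Annualized realized variance from a price path (standard variance-swap leg)."""
    logret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return periods_per_year * sum(r * r for r in logret) / len(logret)

def variance_swap_payoff(prices, strike_var, notional=1.0):
    """Payoff per unit of variance notional at expiry."""
    return notional * (realized_variance(prices) - strike_var)

# A path with a constant 1% daily log-return: realized variance = 252 * 0.0001.
prices = [100 * math.exp(0.01 * t) for t in range(22)]
payoff = variance_swap_payoff(prices, strike_var=0.02)
```

Pricing the swap then amounts to computing the risk-neutral expectation of the realized-variance leg, which is where the model-specific analytics of the paper come in.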
Surface-preserving robust watermarking of 3-D shapes.
Luo, Ming; Bors, Adrian G
2011-10-01
This paper describes a new statistical approach for watermarking mesh representations of 3-D graphical objects. A robust digital watermarking method has to mitigate among the requirements of watermark invisibility, robustness, embedding capacity and key security. The proposed method employs a mesh propagation distance metric procedure called the fast marching method (FMM), which defines regions of equal geodesic distance width calculated with respect to a reference location on the mesh. Each of these regions is used for embedding a single bit. The embedding is performed by changing the normalized distribution of local geodesic distances from within each region. Two different embedding methods are used by changing the mean or the variance of geodesic distance distributions. Geodesic distances are slightly modified statistically by displacing the vertices in their existing triangle planes. The vertex displacements, performed according to the FMM, ensure a minimal surface distortion while embedding the watermark code. Robustness to a variety of attacks is shown according to experimental results.
The EWMA control chart based on robust scale estimators
Nadia Saeed
2016-12-01
The exponentially weighted moving average (EWMA) chart is very popular in statistical process control for detecting small shifts in the process mean and variance. This chart performs well under the assumption of normality, but when data violate this assumption, robust approaches are needed. We have developed EWMA charts under different robust scale estimators available in the literature and compared the performance of these charts by calculating expected out-of-control points and expected widths under non-symmetric distributions (i.e., gamma and exponential). Simulation studies were carried out for this purpose, and the results showed that among the six robust estimators, the chart based on the estimator Q_n performed relatively well for non-normal processes, in terms of its shorter expected width and larger number of expected out-of-control points, which reflects its sensitivity in detecting out-of-control signals.
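The chart construction can be sketched as follows. The paper estimates the process scale with robust estimators such as Q_n; here the MAD is used as a simple stand-in, with the usual time-varying EWMA limits. A sketch under those assumptions, not the paper's exact implementation:

```python
import random
from statistics import median

def mad_scale(x):
    """Median absolute deviation, scaled for consistency at the normal."""
    m = median(x)
    return 1.4826 * median(abs(v - m) for v in x)

def ewma_chart(phase1, phase2, lam=0.2, L=3.0):
    """EWMA out-of-control flags for Phase-II data, with centre and scale
    taken from robust Phase-I estimates (median and MAD here; the paper
    compares several robust scale estimators, e.g. Q_n)."""
    mu, sigma = median(phase1), mad_scale(phase1)
    z, signals = mu, []
    for i, x in enumerate(phase2, start=1):
        z = lam * x + (1 - lam) * z
        # time-varying control-limit half-width of the EWMA statistic
        half = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))) ** 0.5
        signals.append(not (mu - half <= z <= mu + half))
    return signals

random.seed(3)
phase1 = [random.gauss(10, 1) for _ in range(100)]     # clean reference data
in_control = [random.gauss(10, 1) for _ in range(20)]
shifted = [random.gauss(12, 1) for _ in range(20)]     # 2-sigma mean shift
signals = ewma_chart(phase1, in_control + shifted)
```

The robust centre and scale keep the limits from being inflated by outliers in the reference sample, which is the motivation for the estimators studied in the paper.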
Robust automated knowledge capture.
Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt
2011-10-01
This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task- and experience-related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.
Passion, Robustness and Perseverance
Lim, Miguel Antonio; Lund, Rebecca
2016-01-01
Evaluation and merit in the measured university are increasingly based on taken-for-granted assumptions about the "ideal academic". We suggest that the scholar now needs to show that she is passionate about her work and that she gains pleasure from pursuing her craft. We suggest that passion and pleasure achieve an exalted status as something compulsory. The scholar ought to feel passionate about her work and signal that she takes pleasure also in the difficult moments. Passion has become a signal of robustness and perseverance in a job market characterised by funding shortages, increased pressure to produce measurable output, and precariousness. For young academics in particular it is increasingly important to demonstrate the "right attitude", "feelings", and "personality traits" because they have yet to accumulate a record of past achievements that are used as the basis of merit. In hiring decisions...
Robust procedures in chemometrics
Kotwa, Ewelina
The work had two aims: 1. applying a multivariate and multi-way data-analytical framework in fields where less sophisticated data analysis methods are currently used, and 2. developing new, more robust alternatives to already existing multivariate tools. The first part of the study was realised by applying two- and three-way chemometrical methods, such as PCA and PARAFAC models, for analysing spatial and depth profiles of sea water samples defined by three data modes: depth, variables and geographical location. Emphasis was also put on predicting fluorescence values, as a natural measure of biological activity, by applying and comparing the Partial Least Squares (PLS) regression technique with its multi-way alternative, N-PLS. Results of the analysis indicated superiority of the three-way framework, potentially constituting a novel assessment of the sea water measurements, particularly in the case of regression models...
Robust Optical Flow Estimation
Javier Sánchez Pérez
2013-10-01
In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.
Validation of community robustness
Carissimo, Annamaria; Defeis, Italia
2016-01-01
The large amount of work on community detection and its applications leaves unaddressed one important question: the statistical validation of the results. In this paper we present a methodology able to clearly detect if the community structure found by some algorithms is statistically significant or is a result of chance, merely due to edge positions in the network. Given a community detection method and a network of interest, our proposal examines the stability of the partition recovered against random perturbations of the original graph structure. To address this issue, we specify a perturbation strategy and a null model to build a set of procedures based on a special measure of clustering distance, namely Variation of Information, using tools set up for functional data analysis. The procedures determine whether the obtained clustering departs significantly from the null model. This strongly supports the robustness against perturbation of the algorithm used to identify the community structure. We show the r...
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, which were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive, and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.
Provably robust digital watermarking
Chen, Brian; Wornell, Gregory W.
1999-11-01
Copyright notification and enforcement, authentication, covert communication, and hybrid transmission are examples of emerging multimedia applications for digital watermarking methods, methods for embedding one signal (e.g., the digital watermark) within another 'host' signal to form a third, 'composite' signal. The embedding is designed to achieve efficient trade-offs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. Quantization index modulation (QIM) methods are a class of watermarking methods that achieve provably good rate-distortion-robustness performance. Indeed, QIM methods exist that achieve performance within a few dB of capacity in the case of a (possibly colored) Gaussian host signal and an additive (possibly colored) Gaussian noise channel. Also, QIM methods can achieve capacity with a type of postprocessing called distortion compensation. This capacity is independent of host signal statistics, and thus, contrary to popular belief, the information-embedding capacity when the host signal is not available at the decoder is the same as the case when the host signal is available at the decoder. A low-complexity realization of QIM called dither modulation has previously been proven to be better than both linear methods of spread spectrum and nonlinear methods of low-bit(s) modulation against square-error distortion-constrained intentional attacks. We introduce a new form of dither modulation called spread-transform dither modulation that retains these favorable performance characteristics while achieving better performance against other attacks such as JPEG compression.
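Scalar QIM in its simplest form embeds a bit by quantizing a host sample with one of two interleaved quantizers, and decodes by nearest-lattice detection. A minimal sketch; the step size and framing are illustrative, and real systems add dithering and the distortion compensation the abstract describes:

```python
import random

def qim_embed(x, bit, delta=1.0):
    """Quantization index modulation: embed one bit by quantizing the host
    sample onto one of two uniform lattices offset by delta/2."""
    offset = 0.0 if bit == 0 else delta / 2
    return round((x - offset) / delta) * delta + offset

def qim_decode(y, delta=1.0):
    """Minimum-distance decoding: which lattice is the received sample closest to?"""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1

# Embed a random bit string into random host samples, perturb, and decode.
rng = random.Random(0)
bits = [rng.randint(0, 1) for _ in range(64)]
hosts = [rng.uniform(-100, 100) for _ in range(64)]
marked = [qim_embed(x, b) for x, b in zip(hosts, bits)]
noisy = [y + rng.uniform(-0.2, 0.2) for y in marked]   # noise below delta/4
recovered = [qim_decode(y) for y in noisy]
```

The rate-distortion-robustness trade-off is visible directly: embedding distortion is at most delta/2 per sample, while any perturbation smaller than delta/4 is guaranteed to decode correctly.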
Detecting Pulsars with Interstellar Scintillation in Variance Images
Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E
2016-01-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show th...
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
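One of the classical variance reduction devices alluded to above is antithetic variates: pair each uniform draw u with its reflection 1 - u, so that errors in monotone integrands partially cancel. A self-contained sketch in a generic Monte Carlo setting, not the homogenization corrector problem itself:

```python
import random
from statistics import mean, variance

def mc_estimates(f, n, rng, antithetic=False):
    """Per-sample Monte Carlo estimates of E[f(U)], U ~ Uniform(0,1),
    plain or with antithetic pairing."""
    if antithetic:
        # pair each draw u with 1 - u; their average has reduced
        # variance whenever f is monotone
        return [0.5 * (f(u) + f(1 - u)) for u in (rng.random() for _ in range(n))]
    return [f(rng.random()) for _ in range(n)]

f = lambda u: u * u            # E[f(U)] = 1/3
rng = random.Random(7)
plain = mc_estimates(f, 2000, rng)
rng = random.Random(7)
anti = mc_estimates(f, 2000, rng, antithetic=True)
# Both are unbiased; the antithetic per-sample variance is much smaller,
# so far fewer samples reach a given accuracy.
```

The same principle, in which correlated replicas of the random medium are used so that fluctuations cancel, underlies several of the techniques surveyed in the paper.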
40 CFR 141.4 - Variances and exemptions.
2010-07-01
... Section 141.4, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), WATER PROGRAMS (CONTINUED), NATIONAL PRIMARY DRINKING WATER REGULATIONS, General, § 141.4 Variances and exemptions. (a) ... maintenance of the distribution system. ...
Fundamental Indexes As Proxies For Mean-Variance Efficient Portfolios
Kathleen Hodnett; Gearé Botes; Khumbudzo Daswa; Kimberly Davids; Emmanuel Che Fongwa; Candice Fortuin
2014-01-01
Mean-variance efficiency was first explained by Markowitz (1952), who derived an efficient frontier comprising portfolios with the highest expected returns for a given level of risk borne by the investor...
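For two assets, the minimum-variance point of the Markowitz frontier has a closed form: w1 = (var2 - cov12) / (var1 + var2 - 2*cov12). A small sketch with illustrative numbers, not data from the study:

```python
def min_variance_weights(var1, var2, cov12):
    """Global minimum-variance weights for a two-asset portfolio (closed form)."""
    w1 = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    return w1, 1 - w1

def portfolio_variance(w1, w2, var1, var2, cov12):
    """Variance of a two-asset portfolio with weights (w1, w2)."""
    return w1 * w1 * var1 + w2 * w2 * var2 + 2 * w1 * w2 * cov12

# Illustrative inputs: asset variances 0.04 and 0.09, covariance 0.006.
w1, w2 = min_variance_weights(0.04, 0.09, 0.006)
pv = portfolio_variance(w1, w2, 0.04, 0.09, 0.006)
```

The resulting portfolio variance is below that of either asset alone, which is the diversification effect the frontier formalizes; fundamental-index studies such as this one ask how close rule-based portfolios come to that frontier.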
TESTS FOR VARIANCE COMPONENTS IN VARYING COEFFICIENT MIXED MODELS
Zaixing Li; Yuedong Wang; Ping Wu; Wangli Xu; Lixing Zhu
2012-01-01
.... To address the question of whether a varying coefficient mixed model can be reduced to a simpler varying coefficient model, we develop one-sided tests for the null hypothesis that all the variance components are zero...
Estimating the generalized concordance correlation coefficient through variance components.
Carrasco, Josep L; Jover, Lluís
2003-12-01
The intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC) are two of the most popular measures of agreement for variables measured on a continuous scale. Here, we demonstrate that ICC and CCC are the same measure of agreement estimated in two ways: by the variance components procedure and by the moment method. We propose estimating the CCC using variance components of a mixed effects model, instead of the common method of moments. With the variance components approach, the CCC can easily be extended to more than two observers, and adjusted using confounding covariates, by incorporating them in the mixed model. A simulation study is carried out to compare the variance components approach with the moment method. The importance of adjusting by confounding covariates is illustrated with a case example.
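The variance-components route to the ICC can be sketched with a one-way ANOVA decomposition: estimate the between-subject and within-subject mean squares, back out the variance components, and take their ratio. A minimal example for the simplest ICC(1) case with illustrative data; the paper's mixed-model version also handles covariates and more than two observers:

```python
from statistics import mean

def icc_oneway(ratings):
    """ICC(1) from one-way ANOVA variance components.
    ratings: list of per-subject lists, each with k replicate measurements."""
    k = len(ratings[0])
    n = len(ratings)
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between subjects
    msw = sum((v - m) ** 2 for row, m in zip(ratings, row_means)
              for v in row) / (n * (k - 1))                        # within subjects
    var_between = (msb - msw) / k
    return var_between / (var_between + msw)

# Two hypothetical observers measuring 5 subjects with high agreement.
data = [[10.0, 10.2], [12.1, 12.0], [8.0, 8.3], [15.2, 15.0], [11.0, 11.1]]
icc = icc_oneway(data)
```

Because the agreement index is expressed through variance components of a model rather than through moments, extending it to extra observers or adjusting for covariates amounts to adding terms to the model, which is the paper's argument for this formulation.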
Adaptive robust control of robot manipulators -- Theory and experiment
Imura, Junichi; Sugie, Toshiharu; Yoshikawa, Tsuneo (Kyoto Univ. (Japan))
1994-10-01
In this paper, a new adaptive robust control scheme for manipulators is proposed that overcomes the drawbacks of conventional adaptive robust control methods. The proposed controller has a simple structure by exploiting the special structure of the manipulator dynamics, and achieves the specified tracking precision without any a priori information on uncertainty. Furthermore, the feedback gain of the proposed method is almost necessary and minimum for the specified precision. To verify the advantages of the method, experimental results are shown for the trajectory control of a 2 DOF direct-drive arm.
Robust level set method for computer vision
Si, Jia-rui; Li, Xiao-pei; Zhang, Hong-wei
2005-12-01
Level set methods provide powerful numerical techniques for analyzing and solving interface evolution problems based on partial differential equations. They are particularly appropriate for image segmentation and other computer vision tasks. However, noise exists in every image, and noise is the main obstacle to image segmentation. In level set methods, the propagation fronts are apt to leak through gaps at locations of missing or fuzzy boundaries caused by noise. The robust level set method proposed in this paper is based on an adaptive Gaussian filter. The fast marching method provides a fast implementation of the level set method, and the adaptive Gaussian filter can adapt itself to the local characteristics of an image by adjusting its variance. Thus, different parts of an image can be smoothed in different ways according to the degree of noisiness and the type of edges. Experimental results demonstrate that the adaptive Gaussian filter can greatly reduce noise without distorting the image, making the level set method more robust and accurate.
Asymmetric k-Center with Minimum Coverage
Gørtz, Inge Li
2008-01-01
In this paper we give approximation algorithms and inapproximability results for various asymmetric k-center with minimum coverage problems. In the k-center with minimum coverage problem, each center is required to serve a minimum number of clients. These problems have been studied by Lim et al. [A. Lim, B. Rodrigues, F. Wang, Z. Xu, k-center problems with minimum coverage, Theoret. Comput. Sci. 332 (1-3) (2005) 1-17] in the symmetric setting.
Dimension free and infinite variance tail estimates on Poisson space
Breton, J. C.; Houdré, C.; Privault, N.
2004-01-01
Concentration inequalities are obtained on Poisson space, for random functionals with finite or infinite variance. In particular, dimension-free tail estimates and exponential integrability results are given for the Euclidean norm of vectors of independent functionals. In the finite variance case these results are applied to infinitely divisible random variables such as quadratic Wiener functionals, including Lévy's stochastic area and the square norm of Brownian paths. In the infinite vari...
The asymptotic variance of departures in critically loaded queues
Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.
2011-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t) / t = λ(1 − 2/π)(c_a² + ...
Wavelet Variance Analysis of EEG Based on Window Function
ZHENG Yuan-zhuang; YOU Rong-yi
2014-01-01
A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
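The variance risk premium defined in this abstract reduces to simple arithmetic: the risk-neutral expected variance minus the statistical expected variance. A worked sketch with hypothetical index levels (the 20.0 and 16% figures are illustrative, not from the paper):

```python
def variance_risk_premium(vix_annualized_pct, realized_var_annualized):
    """VRP = risk-neutral expected variance minus statistical expected variance.

    vix_annualized_pct: a VIX-style index quoted in annualized vol points (e.g. 20.0).
    realized_var_annualized: annualized statistical return variance, in decimal^2.
    """
    risk_neutral_var = (vix_annualized_pct / 100.0) ** 2  # VIX^2 in decimal units
    return risk_neutral_var - realized_var_annualized

# Hypothetical numbers: a VIX level of 20 implies risk-neutral variance 0.04;
# realized variance of 0.0256 (16% vol) leaves a positive premium of 0.0144.
vrp = variance_risk_premium(20.0, 0.16 ** 2)
```

A positive VRP, as in this example, is the typical empirical finding for equity indices, which is what makes it a candidate forward-looking predictor.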
Multiperiod mean-variance efficient portfolios with endogenous liabilities
Markus LEIPPOLD; Trojani, Fabio; Vanini, Paolo
2011-01-01
We study the optimal policies and mean-variance frontiers (MVF) of a multiperiod mean-variance optimization of assets and liabilities (AL). This makes the analysis more challenging than for a setting based on purely exogenous liabilities, in which the optimization is only performed on the assets while keeping liabilities fixed. We show that, under general conditions for the joint AL dynamics, the optimal policies and the MVF can be decomposed into an orthogonal set of basis returns using exte...
Estimating Income Variances by Probability Sampling: A Case Study
Akbar Ali Shah
2010-08-01
The main focus of the study is to estimate variability in the income distribution of households by conducting a survey. The variances in income distribution have been calculated by probability sampling techniques. The variances are compared and relative gains are also obtained. It is concluded that the income distribution has improved compared with the first Household Income and Expenditure Survey (HIES) conducted in Pakistan in 1993-94.
Testing for Causality in Variance Using Multivariate GARCH Models
Christian M. Hafner; Herwartz, Helmut
2008-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...
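The residual-based approach that this paper argues against can be sketched as a Cheung-Ng style portmanteau statistic on squared residuals (a hedged illustration; the lag window and normalization are assumptions, and the paper's own proposal is the multivariate Wald test on a GARCH/BEKK model, not this):

```python
import numpy as np

def causality_in_variance_stat(e1, e2, max_lag=5):
    """Portmanteau statistic from cross-correlations of squared (residual)
    series at lags 1..max_lag. Under the null of no causality in variance
    it is asymptotically chi-squared with max_lag degrees of freedom."""
    u = e1 ** 2 - np.mean(e1 ** 2)   # centered squared residuals
    v = e2 ** 2 - np.mean(e2 ** 2)
    n = len(u)
    stat = 0.0
    for k in range(1, max_lag + 1):
        # sample cross-correlation of u_t with v_{t-k}
        r = np.sum(u[k:] * v[: n - k]) / np.sqrt(np.sum(u * u) * np.sum(v * v))
        stat += n * r ** 2
    return stat

rng = np.random.default_rng(0)
stat = causality_in_variance_stat(rng.standard_normal(500), rng.standard_normal(500))
```

The statistic is a sum of scaled squared correlations, so it is nonnegative by construction; large values reject noncausality in variance.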
Testing for causality in variance using multivariate GARCH models
Hafner, Christian; Herwartz, H.
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...
Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems
Feten Gannouni
2017-01-01
We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using a transformation of the original system, a new robust proportional integral filter (RPIF), having an error variance with an optimized guaranteed upper bound for any allowed uncertainty, is proposed to improve robust estimation of unknown time-varying faults and to improve robustness against uncertainties. In this study, the minimization problem for the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMIs) for all admissible uncertainties. The proportional and integral gains are optimally chosen by solving the convex optimization problem. Simulation results are given in order to illustrate the performance of the proposed filter, in particular for the problem of joint fault and state estimation.
Minimum Delay Moving Object Detection
Lao, Dong
2017-01-08
We present a general framework and method for detecting an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, subject to a constraint on false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than the existing state of the art.
Minimum Competency Testing and the Handicapped.
Wildemuth, Barbara M.
This brief overview of minimum competency testing and disabled high school students discusses: the inclusion or exclusion of handicapped students in minimum competency testing programs; approaches to accommodating the individual needs of handicapped students; and legal issues. Surveys of states that have mandated minimum competency tests indicate…
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its effect
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted-for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
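The naive plug-in fit that this abstract identifies as biased can be written down directly (a sketch using assumed simulated data; the paper's corrected estimators, SIMEX-style and semiparametric, are not reproduced here):

```python
import numpy as np

def naive_quadratic_variance_fit(X):
    """Naive least-squares fit of the quadratic variance-mean model
    var = a + b*mean + c*mean^2 across rows (genes) of X. It plugs the
    sample mean in for the true mean, so with few replicates it inherits
    the errors-in-variables bias discussed in the abstract."""
    m = X.mean(axis=1)
    v = X.var(axis=1, ddof=1)
    A = np.column_stack([np.ones_like(m), m, m ** 2])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coef  # (a, b, c)

# Simulated genes with constant coefficient of variation (var = (0.3*mean)^2),
# i.e. a purely quadratic variance-mean relationship with 8 replicates.
rng = np.random.default_rng(0)
means = rng.uniform(1, 10, size=(2000, 1))
X = means + rng.standard_normal((2000, 8)) * (0.3 * means)
coef = naive_quadratic_variance_fit(X)
```

With only 8 replicates per gene the plug-in means are noisy, which is exactly the small-degrees-of-freedom regime in which the abstract reports the naive estimator is biased.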
The phenome-wide distribution of genetic variance.
Blows, Mark W; Allen, Scott L; Collet, Julie M; Chenoweth, Stephen F; McGuigan, Katrina
2015-07-01
A general observation emerging from estimates of additive genetic variance in sets of functionally or developmentally related traits is that much of the genetic variance is restricted to few trait combinations as a consequence of genetic covariance among traits. While this biased distribution of genetic variance among functionally related traits is now well documented, how it translates to the broader phenome and therefore any trait combination under selection in a given environment is unknown. We show that 8,750 gene expression traits measured in adult male Drosophila serrata exhibit widespread genetic covariance among random sets of five traits, implying that pleiotropy is common. Ultimately, to understand the phenome-wide distribution of genetic variance, very large additive genetic variance-covariance matrices (G) are required to be estimated. We draw upon recent advances in matrix theory for completing high-dimensional matrices to estimate the 8,750-trait G and show that large numbers of gene expression traits genetically covary as a consequence of a single genetic factor. Using gene ontology term enrichment analysis, we show that the major axis of genetic variance among expression traits successfully identified genetic covariance among genes involved in multiple modes of transcriptional regulation. Our approach provides a practical empirical framework for the genetic analysis of high-dimensional phenome-wide trait sets and for the investigation of the extent of high-dimensional genetic constraint.
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. © 2011, The International Biometric Society.
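The motivating claim above, that systematic designs yield lower variance than random designs for trended populations, is easy to check by simulation (an illustrative sketch, not the striplet estimator itself; the population and sample sizes are assumptions):

```python
import numpy as np

def density_estimates(pop, k, n_reps, rng, systematic=True):
    """Mean-density estimates of `pop` (counts per cell) from k sampled cells,
    using either a systematic grid with random start or simple random sampling."""
    n = len(pop)
    step = n // k
    ests = []
    for _ in range(n_reps):
        if systematic:
            start = rng.integers(step)            # random start, fixed spacing
            idx = np.arange(start, n, step)[:k]
        else:
            idx = rng.choice(n, size=k, replace=False)
        ests.append(pop[idx].mean())
    return np.array(ests)

rng = np.random.default_rng(1)
# A strongly trended population: linear gradient plus Poisson counts.
pop = np.linspace(0, 10, 200) + rng.poisson(2, 200)
sys_var = density_estimates(pop, 20, 500, rng, systematic=True).var()
ran_var = density_estimates(pop, 20, 500, rng, systematic=False).var()
```

The systematic design spreads the sample evenly across the trend, so its between-sample variance is far below that of random sampling; the hard part, which the striplet estimator addresses, is estimating that lower variance from a single realized sample.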
Analytic variance estimates of Swank and Fano factors.
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-01
Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
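Both metrics are simple functions of sample moments of the detector output distribution, which is what makes accumulating them during a Monte Carlo run cheap. A minimal sketch (the moment-ratio form of the Swank factor is standard; the example data are hypothetical):

```python
import numpy as np

def swank_factor(x):
    """Swank factor I = m1^2 / (m0 * m2) of sampled detector outputs
    (m0 = 1 for a normalized distribution). Equals 1 for a noiseless,
    constant-gain detector and decreases with gain fluctuations."""
    m1, m2 = np.mean(x), np.mean(x ** 2)
    return m1 ** 2 / m2

def fano_factor(x):
    """Fano factor = variance / mean of sampled outputs; 0 for a
    deterministic output."""
    return np.var(x) / np.mean(x)

pulses = np.array([100.0, 100.0, 100.0, 100.0])  # ideal detector: identical pulses
```

The variance estimators derived in the paper quantify the uncertainty of exactly these moment-based statistics as the simulation accumulates samples.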
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
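The note's central point can be reproduced with one worked example, assuming a linear mean-variance trade-off U = mean − k·variance (the trade-off form and the numbers are illustrative, not from the note):

```python
def mv_score(p, gain, k=2.0):
    """Mean-variance score of a prospect paying `gain` with probability p
    (and 0 otherwise): U = mean - k * variance."""
    mean = p * gain
    var = p * (1 - p) * gain ** 2   # Bernoulli-payoff variance
    return mean - k * var

# Raising the win probability from 0.05 to 0.10 (same gain, zero chance of
# loss) *lowers* the mean-variance score: for small p, variance grows faster
# than the mean, so the "better" prospect is ranked worse.
low, high = mv_score(0.05, 1.0), mv_score(0.10, 1.0)
```

This is the violation the note describes: a rational decision-maker should prefer the higher probability of a fixed gain, but the mean-variance criterion ranks it lower.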
Analytic variance estimates of Swank and Fano factors
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov [US Food and Drug Administration, Silver Spring, Maryland 20993 (United States)
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
Genetic heterogeneity of residual variance in broiler chickens
Hill William G
2006-11-01
The aims were to estimate the extent of genetic heterogeneity in environmental variance. Data comprised 99,535 records of 35-day body weights from broiler chickens reared in a controlled environment. Residual variance within dam families was estimated using ASREML, after fitting fixed effects such as genetic groups and hatches, for each of 377 genetically contemporary sires with a large number of progeny (> 100 males or females each). Residual variance was computed separately for male and female offspring, and after correction for sampling, strong evidence for heterogeneity was found, the standard deviation between sires in within-family variance amounting to 15–18% of its mean. Reanalysis using log-transformed data gave similar results, and elimination of 2–3% of outlier data reduced the heterogeneity, but it was still over 10%. The correlation between estimates for males and females was low, however. The correlation between sire effects on progeny mean and residual variance for body weight was small and negative (-0.1). Using a data set bigger than any yet presented and a trait measurable in both sexes, this study has shown evidence for heterogeneity in the residual variance, which could not be explained by segregation of major genes unless very few determined the trait.
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...
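For a toy location problem the minimum density power divergence criterion can be sketched directly (an illustration with the normal scale fixed at 1 and a grid search instead of a proper optimizer; not the authors' extended-Pareto fit):

```python
import numpy as np

def mdpd_normal_mean(x, alpha=0.5, grid=None):
    """Minimum density power divergence estimate of a normal location with
    scale fixed at 1. For fixed scale the integral term of the criterion is
    constant in mu, so minimizing it reduces to maximizing
    sum_i exp(-alpha * (x_i - mu)^2 / 2). alpha > 0 downweights outliers;
    alpha -> 0 recovers maximum likelihood (the sample mean)."""
    if grid is None:
        grid = np.linspace(x.min(), x.max(), 4001)
    obj = np.array([np.exp(-alpha * (x - mu) ** 2 / 2).sum() for mu in grid])
    return grid[np.argmax(obj)]

data = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 10.0])  # one gross outlier
est = mdpd_normal_mean(data)
```

The exponential downweighting makes the contribution of the outlier negligible, so the estimate stays near the bulk of the data while the sample mean is dragged toward the contaminant; this robustness is the property the paper carries over to tail-index estimation.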
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
Dierckx, G.; Goegebeur, Y.; Guillou, A.
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency and as...... by a small simulation experiment involving both uncontaminated and contaminated samples. (C) 2013 Elsevier Inc. All rights reserved....
Dynamic preconditioning of the September sea-ice extent minimum
Williams, James; Tremblay, Bruno; Newton, Robert; Allard, Richard
2016-04-01
There has been an increased interest in seasonal forecasting of the sea-ice extent in recent years, in particular the minimum sea-ice extent. We propose a dynamical mechanism, based on winter preconditioning through first-year ice formation, that explains a significant fraction of the variance in the anomaly of the September sea-ice extent from the long-term linear trend. To this end, we use a Lagrangian trajectory model to backtrack the September sea-ice edge to any time during the previous winter and quantify the amount of sea-ice divergence along the Eurasian and Alaskan coastlines as well as the Fram Strait sea-ice export. We find that coastal divergence that occurs later in the winter (March, April and May) is highly correlated with the following September sea-ice extent minimum (r = -0.73). This is because the newly formed first-year ice will melt earlier, allowing other feedbacks (e.g. the ice-albedo feedback) to start amplifying the signal early in the melt season when the solar input is large. We find that the winter mean Fram Strait sea-ice export anomaly is also correlated with the minimum sea-ice extent the following summer. Next we backtrack a synthetic ice edge initialized at the beginning of the melt season (June 1st) in order to develop hindcast models of the September sea-ice extent that do not rely on a priori knowledge of the minimum sea-ice extent. We find that using a multivariate regression model of the September sea-ice extent anomaly based on coastal divergence and Fram Strait ice export as predictors reduces the error by 41%. A hindcast model based on the mean DJFMA Arctic Oscillation index alone reduces the error by 24%.
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Robust Principal Component Analysis?
Candes, Emmanuel J; Ma, Yi; Wright, John
2009-01-01
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for th...
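Principal Component Pursuit as stated in the abstract can be sketched with a basic ADMM loop: singular-value thresholding for the low-rank part, soft thresholding for the sparse part (a hedged sketch; the fixed penalty mu, the iteration count, and the test matrix are assumptions, and production solvers use adaptive penalties and stopping rules):

```python
import numpy as np

def pcp(M, lam=None, mu=1.0, iters=300):
    """Sketch of Principal Component Pursuit:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))  # weighting suggested by the theory
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable
    for _ in range(iters):
        # L-update: singular-value thresholding at level 1/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft thresholding at level lam/mu.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual ascent on the constraint L + S = M.
        Y = Y + mu * (M - L - S)
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
sparse = np.zeros((30, 30))
sparse[rng.random((30, 30)) < 0.05] = 5.0  # ~5% gross corruptions
L_hat, S_hat = pcp(low_rank + sparse)
```

On this easy synthetic instance the two components separate cleanly, which is the recovery phenomenon the paper proves under incoherence and sparsity assumptions.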
Robust relativistic bit commitment
Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony
2016-12-01
Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.
M Sankar Kishore; K Veerabhadra Rao
2001-06-01
Correlation tracking plays an important role in the automation of weapon systems. Area correlation is an effective technique for tracking targets that have neither prominent features nor high contrast with the background, and the ‘target’ can even be an area or a scene of interest. Even though this technique is robust under varying target-background and lighting conditions, it has some problems, such as target drift and false registration. When the tracker or target is moving, the registration point drifts due to the discrete pixel size and aspect-angle change. In this research work, an attempt has been made to improve the performance of a correlation tracker for tracking ground targets with very poor contrast. In the present work, only CCD visible images with very poor target-to-background contrast are considered. Applying novel linear and nonlinear filters, the problems present in the correlation tracker are overcome. Confidence and redundancy measures have been proposed to improve the performance by detecting misregistration. The proposed algorithm is tested on different sequences of images and its performance is satisfactory.
Robust and efficient designs for the Michaelis-Menten model
Dette, Holger; Biedermann, Stefanie
2002-01-01
For the Michaelis-Menten model, we determine designs that maximize the minimum of the D-efficiencies over a certain interval for the nonlinear parameter. The best two point designs can be found explicitly, and a characterization is given when these designs are optimal within the class of all designs. In most cases of practical interest, the determined designs are highly efficient and robust with respect to misspecification of the nonlinear parameter. The results are illustrated and applied in...
CMB-S4 and the hemispherical variance anomaly
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
A Robust Crowdsourcing-Based Indoor Localization System.
Zhou, Baoding; Li, Qingquan; Mao, Qingzhou; Tu, Wei
2017-04-14
WiFi fingerprinting-based indoor localization has been widely used due to its simplicity and because it can be implemented on smartphones. The major drawback of WiFi fingerprinting is that radio map construction is very labor-intensive and time-consuming. Another drawback of WiFi fingerprinting is the Received Signal Strength (RSS) variance problem, caused by environmental changes and device diversity. RSS variance severely degrades localization accuracy. In this paper, we propose a robust crowdsourcing-based indoor localization system (RCILS). RCILS can automatically construct the radio map using crowdsourcing data collected by smartphones. RCILS abstracts the indoor map as a semantics graph in which the edges are the possible user paths and the vertexes are the locations where users may take special activities. RCILS extracts the activity sequence contained in the trajectories by activity detection and pedestrian dead-reckoning. Based on the semantics graph and activity sequence, crowdsourcing trajectories can be located and a radio map is constructed from the localization results. For the RSS variance problem, RCILS uses the trajectory fingerprint model for indoor localization. During online localization, RCILS obtains an RSS sequence and realizes localization by matching the RSS sequence with the radio map. To evaluate RCILS, we apply RCILS in an office building. Experiment results demonstrate the efficiency and robustness of RCILS.
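The fingerprint-matching step at the core of any such system can be sketched as nearest-neighbor search over stored RSS vectors (a minimal illustration; RCILS itself matches whole RSS sequences against a crowdsourced radio map, which this sketch does not reproduce, and the access-point counts and dBm values are hypothetical):

```python
import numpy as np

def locate(rss, radio_map):
    """Nearest-fingerprint localization: return the location whose stored
    RSS vector (one dBm reading per access point) is closest in Euclidean
    distance to the observed reading."""
    best, best_d = None, float("inf")
    for loc, fp in radio_map.items():
        d = np.linalg.norm(np.asarray(rss, float) - np.asarray(fp, float))
        if d < best_d:
            best, best_d = loc, d
    return best

# Hypothetical 3-access-point radio map (dBm per location).
radio_map = {"lobby": [-40, -70, -80], "office": [-75, -45, -60]}
```

The RSS variance problem the abstract describes is precisely why a single-reading match like this is fragile: device diversity shifts every entry of the observed vector, which sequence-based matching is designed to absorb.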
Robustness Evaluation of Timber Structures
Kirkegaard, Poul Henning; Sørensen, John Dalsgaard
2009-01-01
The present paper considers robustness evaluation of a Norwegian sports arena with a structural system of glulam frames. The robustness evaluation is based on the framework for robustness analysis introduced in the Danish Code of Practice for the Safety of Structures and a probabilistic modelling of the timber material proposed in the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS). The results show that the requirements for robustness of the structure are highly related to the modelling of the snow load used on the structures when ‘removal of a limited part...
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Mohammad Manir Hossain Mollah
Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases.
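To illustrate how a weight function can unify robustness and efficiency, here is a minimal sketch of a β-weight. The Gaussian-kernel form and all parameter values below are assumptions chosen for illustration, not the paper's exact definition.

```python
import math

def beta_weight(x, mu, sigma2, beta=0.2):
    # Hypothetical beta-weight (assumed Gaussian-kernel form): values lie
    # in (0, 1]; expressions far from the bulk get weights near 0.
    return math.exp(-beta * (x - mu) ** 2 / (2.0 * sigma2))

# A typical expression keeps nearly full weight; an outlier is heavily
# down-weighted, so it barely influences the ANOVA-type estimates.
w_typical = beta_weight(10.2, mu=10.0, sigma2=1.0)  # close to 1
w_outlier = beta_weight(25.0, mu=10.0, sigma2=1.0)  # close to 0
```

As β → 0 every weight tends to 1 and the estimator reduces to ordinary (efficient but non-robust) ANOVA estimation, which is the sense in which β trades efficiency for robustness.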
Minimum Delay Moving Object Detection
Lao, Dong
2017-05-14
This thesis presents a general framework and method for detecting an object in a video based on apparent motion. The object moves, at some unknown time, differently than the "background" motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue; however, this incurs greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the number of frames after the object moves, subject to a constraint on false alarms, defined as declarations of detection before the object moves, or incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than the existing state of the art.
Dose variation during solar minimum
Gussenhoven, M.S.; Mullen, E.G.; Brautigam, D.H. (Phillips Lab., Geophysics Directorate, Hanscom Air Force Base, MA (US)); Holeman, E. (Boston Univ., MA (United States). Dept. of Physics)
1991-12-01
In this paper, the authors use direct measurement of dose to show the variation in inner and outer radiation belt populations at low altitude from 1984 to 1987. This period includes the recent solar minimum that occurred in September 1986. The dose is measured behind four thicknesses of aluminum shielding and for two thresholds of energy deposition, designated HILET and LOLET. The authors calculate an average dose per day for each month of satellite operation. The authors find that the average proton (HILET) dose per day (obtained primarily in the inner belt) increased systematically from 1984 to 1987, and has a high anticorrelation with sunspot number when offset by 13 months. The average LOLET dose per day behind the thinnest shielding is produced almost entirely by outer zone electrons and varies greatly over the period of interest. If any trend can be discerned over the 4-year period it is a decreasing one. For shielding of 1.55 g/cm² (227 mil) Al or more, the LOLET dose is complicated by contributions from > 100 MeV protons and bremsstrahlung.
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
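The Poisson bound described above can be made concrete: under a Poisson process the variance equals the mean, so only the variance in excess of the mean can reflect stable individual differences. A hedged sketch of that accounting (function names and example numbers are illustrative, not taken from the surveys):

```python
def dispersion_ratio(mean_fertility, var_fertility):
    # Variance-to-mean ratio; approximately 1 under a pure Poisson process.
    return var_fertility / mean_fertility

def max_individual_share(mean_fertility, var_fertility):
    # Upper bound on the proportion of variance attributable to individual
    # differences: the excess over the Poisson floor (clipped at 0).
    return max(0.0, (var_fertility - mean_fertility) / var_fertility)

# Illustrative mid-transition sample: mean 4.0 children, variance 6.0.
share = max_individual_share(4.0, 6.0)      # at most 1/3 of the variance
# Underdispersed low-fertility sample: variance below the mean.
share_low = max_individual_share(1.9, 1.2)  # bound collapses to 0
```

This is the sense in which the bound is "systematic and often severe": even substantial overdispersion leaves most of the variance attributable to the rate process itself.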
Variance-based fingerprint distance adjustment algorithm for indoor localization
Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang
2015-01-01
The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases as the RSSI mean increases, VFDA estimates the RSSI variance from the mean value of the received RSSIs and derives a correction weight from it. VFDA then adjusts the fingerprint distances with this correction weight. Besides, a threshold value is applied in VFDA to further improve its performance. VFDA and VFDA with the threshold value are applied in two typical real indoor environments deployed with several Wi-Fi access points: one is a quadrate lab room, and the other is a long and narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold achieve better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
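A minimal sketch of the adjustment idea: predicted RSSI variance falls as the RSSI mean rises, and the resulting weight scales the fingerprint distance. The variance model, the weight form, and the threshold value below are all assumptions for illustration; the paper's actual formulas are not reproduced here.

```python
def predicted_variance(rssi_mean, base=2.0, slope=0.1):
    # Assumed monotone model: weaker signals (more negative dBm means)
    # get a larger predicted variance.
    return base + slope * max(0.0, -rssi_mean)

def adjusted_distance(distance, rssi_mean, threshold=-90.0):
    # Down-weight fingerprint distances computed from noisy (high-variance)
    # readings; below the threshold, skip the adjustment entirely.
    if rssi_mean < threshold:
        return distance
    weight = 1.0 / (1.0 + predicted_variance(rssi_mean))
    return distance * weight
```

The design point is that a noisy fingerprint should contribute less to the nearest-neighbor distance than a steady one, rather than being trusted equally.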
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
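The scalar/vector distinction can be made concrete: the scalar mean averages speeds directly, while the vector mean averages the velocity components first; in weak, meandering winds the two differ substantially. A small sketch using standard meteorological conventions (the sample values are illustrative):

```python
import math

def scalar_and_vector_speed(speeds, directions_deg):
    # Scalar average: mean of the measured speeds.
    # Vector average: magnitude of the mean velocity vector.
    u = [s * math.sin(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    v = [s * math.cos(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    n = len(speeds)
    scalar = sum(speeds) / n
    vector = math.hypot(sum(u) / n, sum(v) / n)
    return scalar, vector

# A weak wind meandering through all four quadrants at 1 m/s:
s, v = scalar_and_vector_speed([1.0, 1.0, 1.0, 1.0], [0.0, 90.0, 180.0, 270.0])
# the scalar mean stays at 1 m/s while the vector mean is nearly 0
```

For a steady wind the two averages coincide; the gap between them is exactly the effect of direction fluctuations that the abstract's variance estimates must account for.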
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.
Robust tumor morphometry in multispectral fluorescence microscopy
Tabesh, Ali; Vengrenyuk, Yevgen; Teverovskiy, Mikhail; Khan, Faisal M.; Sapir, Marina; Powell, Douglas; Mesa-Tejada, Ricardo; Donovan, Michael J.; Fernandez, Gerardo
2009-02-01
Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of 1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for predicting cancer recurrence (p <= 0.0001). In multivariate analysis, an MST feature was selected for a model incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set, which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
Sensitivity to Estimation Errors in Mean-variance Models
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The rate of change of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
Expectation Values and Variance Based on Lp-Norms
George Livadiotis
2012-11-01
This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting their fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme for characterizing means. With the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible enough to analyze new phenomena that cannot be described under the notions of classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and Statistical Physics. Several illuminating examples are examined.
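The defining property — the Lp mean is the location minimizing the sum of p-th-power deviations — can be sketched numerically. This is an illustrative implementation (a ternary search on the convex objective for p ≥ 1), not code from the paper:

```python
def lp_mean(xs, p):
    # Location m minimizing sum |x - m|^p over the data: p = 2 gives the
    # arithmetic mean, p = 1 gives a median (any point of the median interval).
    cost = lambda m: sum(abs(x - m) ** p for x in xs)
    lo, hi = min(xs), max(xs)
    for _ in range(200):  # ternary search; the objective is convex for p >= 1
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)
```

For example, `lp_mean([1, 2, 3, 10], 2)` recovers the ordinary mean 4.0, while with p = 1 the result lands in the median interval [2, 3], showing how varying p moves the expectation value between familiar special cases.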
CMB-S4 and the Hemispherical Variance Anomaly
O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D
2016-01-01
Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, $\Lambda$CDM). While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau.
Variance inflation in high dimensional Support Vector Machines
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high-dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors is not the full input space. Hence, when applying the model to future data the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set, and when the model is applied to test data we may find that the hidden variables follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...
Variance swap payoffs, risk premia and extreme market conditions
Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco
This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic constraint. Our approach, requiring only option-implied volatilities and daily returns for the underlying, provides measurement-error-free estimates of the part of the VRP related to normal market conditions, and allows constructing variables indicating agents' expectations under extreme market conditions. The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP, which turns out to be priced when considering Fama and French portfolios.
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensemble of two noninteracting particle systems, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
The positioning algorithm based on feature variance of billet character
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. Since there are three rows of characters on each steel billet, we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
Measuring Robustness of Timetables in Stations using a Probability Distribution
Jensen, Lars Wittrup; Landex, Alex
Currently, three methods to calculate the complexity of a station exist: 1. Complexity of a station based on the track layout 2. Complexity of a station based on the probability of a conflict using a plan of operation 3. Complexity of a station based on the plan of operation and the minimum headway times. However, none of the above methods take a given timetable into account when the complexity of the station is calculated. E.g. two timetable candidates are given following the same plan of operation in a station; one will be more vulnerable to delays (less robust) while the other will be less vulnerable (more robust), but this cannot be measured by the above methods. In the light of this, the article will describe a new method where the complexity of a given station with a given timetable can be calculated based on a probability distribution, which can reduce delays caused by interdependencies and result in a more robust operation.
Robust methods and asymptotic theory in nonlinear econometrics
Bierens, Herman J
1981-01-01
This Lecture Note deals with asymptotic properties, i.e. weak and strong consistency and asymptotic normality, of parameter estimators of nonlinear regression models and nonlinear structural equations under various assumptions on the distribution of the data. The estimation methods involved are nonlinear least squares estimation (NLLSE), nonlinear robust M-estimation (NLRME) and nonlinear weighted robust M-estimation (NLWRME) for the regression case, and nonlinear two-stage least squares estimation (NL2SLSE) and a new method called minimum information estimation (MIE) for the case of structural equations. The asymptotic properties of the NLLSE and the two robust M-estimation methods are derived from further elaborations of results of Jennrich. Special attention is paid to the comparison of the asymptotic efficiency of NLLSE and NLRME. It is shown that if the tails of the error distribution are fatter than those of the normal distribution, NLRME is more efficient than NLLSE. The NLWRME method is appropriate ...
A Robust Design Applicability Model
Ebro, Martin; Lars, Krogstie; Howard, Thomas J.
2015-01-01
This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities. The applic...
Robust and distributed hypothesis testing
Gül, Gökhan
2017-01-01
This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general class of models, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...
A Robust Enough Virtue Epistemology
Broncano-Berrocal, Fernando
2016-01-01
What is the nature of knowledge? A popular answer to that long-standing question comes from robust virtue epistemology, whose key idea is that knowing is just a matter of succeeding cognitively—i.e., coming to believe a proposition truly—due to an exercise of cognitive ability. Versions of robust...
Robust Understanding of Statistical Variation
Peters, Susan A.
2011-01-01
This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…
Prasitmeeboon, Pitcha
repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of the repetitive controller at +1, creating behavior similar to pole-zero cancellation. This near pole-zero cancellation causes slow learning at DC and low frequencies. A Min-Max cost function over the learning rate is presented. The Min-Max problem can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems: obtaining a reasonable learning rate at DC. Although it was illustrated that using the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have errors at high frequencies. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to some frequency interval and applying an FIR zero-phase low-pass filter to cut off the learning for frequencies above that interval.
A computationally simple and robust method to detect determinism in a time series
Lu, Sheng; Ju, Ki Hwan; Kanters, Jørgen K.;
2006-01-01
We present a new, simple, and fast computational technique, termed the incremental slope (IS), that can accurately distinguish deterministic from stochastic systems even when the variance of the noise is as large as or greater than the signal, and that remains robust for time-varying signals. The IS ...
Variance squeezing and entanglement of the XX central spin model
El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)
2011-01-21
In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.
Recursive identification for multidimensional ARMA processes with increasing variances
CHEN Hanfu
2005-01-01
In time series analysis, almost all existing results are derived for the case where the driving noise {w_n} in the MA part has bounded variance (or conditional variance). In contrast to this, the paper discusses how to identify coefficients in a multidimensional ARMA process with fixed orders, but in whose MA part the conditional moment E(‖w_n‖^β | F_{n-1}), β > 2, is allowed to grow at a rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimating the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimate is strongly consistent.
Levine's guide to SPSS for analysis of variance
Braver, Sanford L; Page, Melanie
2003-01-01
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).
Variance components for body weight in Japanese quails (Coturnix japonica)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 (BW28) days of age of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15, 4.18, 14.62, 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23, 1.29, 2.76, 4.12 and 5.16; and the posterior means of residual variance components were 0.084, 6.43, 22.66, 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days of age, respectively. The posterior means of heritability were 0.33, 0.35, 0.36, 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days of age, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment proportion of the phenotypic variance, whose estimates were 0.50, 0.11, 0.07, 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
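The reported heritabilities can be reproduced from the posterior means of the variance components: heritability is the additive genetic share of the phenotypic variance (additive + maternal environment + residual). A quick check against the abstract's numbers:

```python
def heritability(v_additive, v_maternal, v_residual):
    # h^2 = Va / (Va + Vm + Ve): the additive share of phenotypic variance
    # under the three-component animal model used in the abstract.
    return v_additive / (v_additive + v_maternal + v_residual)

# Posterior means from the abstract (hatch and 28 days of age):
h2_hatch = heritability(0.15, 0.23, 0.084)    # ~0.33, as reported
h2_day28 = heritability(32.68, 5.16, 30.85)   # ~0.47, as reported
# Maternal-environment proportion at hatch: 0.23 / (0.15 + 0.23 + 0.084) ~ 0.50
```

The same arithmetic applied at each age confirms the rising heritability and the sharp post-hatch drop in the maternal-environment proportion.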
Asymptotic variance of grey-scale surface area estimators
Svane, Anne Marie
Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.
Precise Asymptotics of Error Variance Estimator in Partially Linear Models
Shao-jun Guo; Min Chen; Feng Liu
2008-01-01
In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7, 8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known principle of least-squares. With this method the estimation of the (co)variance components is based on a linear model of observation equations. The method is flexible since it works with a user-defined we...
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
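The dynamic Allan variance repeats a stability estimate over a sliding window; a minimal sketch using the plain non-overlapping Allan variance on a synthetic frequency-jump anomaly (all numbers here are illustrative, not from the paper):

```python
import numpy as np

def avar(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m (averaging time tau = m * tau0)."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * float(np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1e-11, 20000)    # white frequency noise only
jumped = clean.copy()
jumped[10000:] += 5e-11                  # sudden frequency jump (anomaly)

# A DAVAR-style analysis would slide a window over the data and repeat
# this computation; here we simply compare with and without the anomaly.
print(avar(clean, 100), avar(jumped, 100))
```

The jump inflates the variance at long averaging times, which is the kind of signature the DAVAR localizes in time.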
On Variance and Covariance for Bounded Linear Operators
Chia Shiang LIN
2001-01-01
In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we give uniform proofs of the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.
Robust statistical methods with R
Jureckova, Jana
2005-01-01
Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application. The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...
Minimax Robust Quickest Change Detection
Unnikrishnan, Jayakrishnan; Meyn, Sean
2009-01-01
The two popular criteria of optimality for quickest change detection procedures are Lorden's criterion and the Bayesian criterion. In this paper a robust version of these quickest change detection problems is considered when the pre-change and post-change distributions are not known exactly but belong to known uncertainty classes of distributions. For uncertainty classes that satisfy a specific condition, it is shown that one can identify least favorable distributions (LFDs) from the uncertainty classes, such that the detection rule designed for the LFDs is optimal for the robust problem in a minimax sense. The condition is similar to that required for the identification of LFDs for the robust hypothesis testing problem studied by Huber. An upper bound on the delay incurred by the robust test is also obtained in the asymptotic setting under Lorden's criterion of optimality, which quantifies the delay penalty incurred to guarantee robustness. When the LFDs can be identified, the proposed test is easier to impl...
Graph measures and network robustness
Ellens, W
2013-01-01
Network robustness research aims at finding a measure to quantify network robustness. Once such a measure has been established, we will be able to compare networks, to improve existing networks, and to design new networks that are able to continue to perform well when they are subject to failures or attacks. In this paper we survey a large number of robustness measures on simple, undirected and unweighted graphs, in order to offer a tool for network administrators to evaluate and improve the robustness of their networks. The measures discussed in this paper are based on the concepts of connectivity (including reliability polynomials), distance, betweenness and clustering. Some other measures are notions from spectral graph theory; more precisely, they are functions of the Laplacian eigenvalues. In addition to surveying these graph measures, the paper also contains a discussion of their functionality as measures for topological network robustness.
Robust optimization based upon statistical theory.
Sobotta, B; Söhn, M; Alber, M
2010-08-01
Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
A random effects variance shift model for detecting and accommodating outliers in meta-analysis.
Gumedze, Freedom N; Jackson, Dan
2011-02-16
Meta-analysis typically involves combining the estimates from independent studies in order to estimate a parameter of interest across a population of studies. However, outliers often occur even under the random effects model. The presence of such outliers could substantially alter the conclusions in a meta-analysis. This paper proposes a methodology for identifying and, if desired, downweighting studies that do not appear representative of the population they are thought to represent under the random effects model. An outlier is taken as an observation (study result) with an inflated random effect variance. We used the likelihood ratio test statistic as an objective measure for determining whether observations have inflated variance and are therefore considered outliers. A parametric bootstrap procedure was used to obtain the sampling distribution of the likelihood ratio test statistics and to account for multiple testing. Our methods were applied to three illustrative and contrasting meta-analytic data sets. For the three meta-analytic data sets our methods gave robust inferences when the identified outliers were downweighted. The proposed methodology provides a means to identify and, if desired, downweight outliers in meta-analysis. It does not eliminate them from the analysis however and we consider the proposed approach preferable to simply removing any or all apparently outlying results. We do not however propose that our methods in any way replace or diminish the standard random effects methodology that has proved so useful, rather they are helpful when used in conjunction with the random effects model.
Mriganka Gogoi
2013-07-01
Digital watermarking plays a very important role in copyright protection. It is one of the techniques used for safeguarding the origins of images, audio and video by protecting them against piracy. This paper proposes a low-variance-based spread-spectrum watermarking for image and video in which the watermark is obtained twice in the receiver. The watermark to be added is a binary image of comparatively smaller size than the cover image. The cover image is divided into a number of 8x8 blocks and transformed into the frequency domain using the Discrete Cosine Transform. A gold sequence is added as well as subtracted in each block for each watermark bit. In most cases, researchers have used algorithms for extracting a single watermark, and finding the location of a watermark bit distorted by an attack is one of the most challenging tasks. In this paper, however, the same watermark is embedded as well as extracted twice with a gold code without much distortion of the image, and comparing these two watermarks helps in finding the distorted bit. Another feature is that, because this algorithm embeds the watermark in low-variance regions, proper extraction of the watermark is obtained at a smaller modulating factor. The proposed algorithm is very useful in applications like real-time broadcasting, image and video authentication, and secure camera systems. The experimental results show that the watermarking technique is robust against various attacks.
RepExplore: addressing technical replicate variance in proteomics and metabolomics data analysis.
Glaab, Enrico; Schneider, Reinhard
2015-07-01
High-throughput omics datasets often contain technical replicates included to account for technical sources of noise in the measurement process. Although summarizing these replicate measurements by using robust averages may help to reduce the influence of noise on downstream data analysis, the information on the variance across the replicate measurements is lost in the averaging process and therefore typically disregarded in subsequent statistical analyses. We introduce RepExplore, a web-service dedicated to exploiting the information captured in the technical replicate variance to provide more reliable and informative differential expression and abundance statistics for omics datasets. The software builds on previously published statistical methods, which have been applied successfully to biomedical omics data but are difficult to use without prior experience in programming or scripting. RepExplore facilitates the analysis by providing fully automated data processing and interactive ranking tables, whisker plot, heat map and principal component analysis visualizations to interpret omics data and derived statistics. Availability: freely available at http://www.repexplore.tk. Contact: enrico.glaab@uni.lu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
The density variance - Mach number relation in isothermal and non-isothermal adiabatic turbulence
Nolan, Chris A; Sutherland, Ralph S
2015-01-01
The density variance - Mach number relation of the turbulent interstellar medium is relevant for theoretical models of the star formation rate, efficiency, and the initial mass function of stars. Here we use high-resolution hydrodynamical simulations with grid resolutions of up to 1024^3 cells to model compressible turbulence in a regime similar to the observed interstellar medium. We use Fyris Alpha, a shock-capturing code employing a high-order Godunov scheme to track large density variations induced by shocks. We investigate the robustness of the standard relation between the logarithmic density variance (sigma_s^2) and the sonic Mach number (M) of isothermal interstellar turbulence, in the non-isothermal regime. Specifically, we test ideal gases with diatomic molecular (gamma = 7/5) and monatomic (gamma = 5/3) adiabatic indices. A periodic cube of gas is stirred with purely solenoidal forcing at low wavenumbers, leading to a fully-developed turbulent medium. We find that as the gas heats in adiabatic comp...
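The standard isothermal relation the paper tests is usually written sigma_s^2 = ln(1 + b^2 M^2); a minimal sketch, assuming the commonly quoted forcing parameter b (roughly 1/3 for purely solenoidal driving, as used above, and roughly 1 for purely compressive driving):

```python
import math

def sigma_s_sq(mach, b=1.0 / 3.0):
    """Logarithmic density variance from the standard isothermal relation
    sigma_s^2 = ln(1 + b^2 M^2). b ~ 1/3 for solenoidal forcing."""
    return math.log(1.0 + (b * mach) ** 2)

for m in (1.0, 5.0, 10.0):
    print(m, round(sigma_s_sq(m), 3))
```

The paper's non-isothermal runs (gamma = 7/5, 5/3) probe how far this isothermal form can be pushed.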
Purves, R D
1994-02-01
Noncompartmental investigation of the distribution of residence times from concentration-time data requires estimation of the second noncentral moment (AUM2C) as well as the area under the curve (AUC) and the area under the moment curve (AUMC). The accuracy and precision of 12 numerical integration methods for AUM2C were tested on simulated noisy data sets representing bolus, oral, and infusion concentration-time profiles. The root-mean-squared errors given by the best methods were only slightly larger than the corresponding errors in the estimation of AUC and AUMC. AUM2C extrapolated "tail" areas as estimated from a log-linear fit are biased, but the bias is minimized by application of a simple correction factor. The precision of estimates of variance of residence times (VRT) can be severely impaired by the variance of the extrapolated tails. VRT is therefore not a useful parameter unless the tail areas are small or can be shown to be estimated with little error. Estimates of the coefficient of variation of residence times (CVRT) and its square (CV2) are robust in the sense of being little affected by errors in the concentration values. The accuracy of estimates of CVRT obtained by optimum numerical methods is equal to or better than that of AUC and mean residence time estimates, even in data sets with large tail areas.
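The quantities above (AUC, AUMC, AUM2C, and the derived MRT, VRT, CVRT) can be sketched with plain trapezoidal integration on a synthetic bolus mono-exponential profile, for which the analytic values are MRT = 1/k, VRT = 1/k^2 and CVRT = 1 (a noise-free sketch, not one of the paper's 12 tested methods):

```python
import numpy as np

def trap(y, x):
    """Plain trapezoidal area (avoids NumPy-version-specific trapz names)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

k, c0 = 0.5, 100.0
t = np.linspace(0.0, 40.0, 8001)      # dense grid with a long tail
c = c0 * np.exp(-k * t)               # bolus profile C(t) = C0 exp(-k t)

auc = trap(c, t)                      # area under the curve
aumc = trap(t * c, t)                 # area under the moment curve
aum2c = trap(t**2 * c, t)             # second noncentral moment area

mrt = aumc / auc                      # mean residence time = 1/k
vrt = aum2c / auc - mrt**2            # variance of residence times = 1/k^2
cvrt = vrt**0.5 / mrt                 # coefficient of variation = 1
print(round(mrt, 3), round(vrt, 3), round(cvrt, 3))
```

With noisy data and truncated tails, the paper's point is precisely that VRT degrades much faster than CVRT.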
How Do Alternative Minimum Wage Variables Compare?
Sara Lemos
2005-01-01
Several minimum wage variables have been suggested in the literature. Such a variety of variables makes it difficult to compare the associated estimates across studies. One problem is that these estimates are not always calibrated to represent the effect of a 10% increase in the minimum wage. Another problem is that these estimates measure the effect of the minimum wage on the employment of different groups of workers. In this paper we critically compare employment effect estimates using five...
Minimum wages, globalization and poverty in Honduras
Gindling, T. H.; Terrell, Katherine
2008-01-01
To be competitive in the global economy, some argue that Latin American countries need to reduce or eliminate labour market regulations such as minimum wage legislation because they constrain job creation and hence increase poverty. On the other hand, minimum wage increases can have a direct positive impact on family income and may therefore help to reduce poverty. We take advantage of a complex minimum wage system in a poor country that has been exposed to the forces of globalization to test...
Tracking error with minimum guarantee constraints
Diana Barro; Elio Canestrelli
2008-01-01
In recent years the popularity of indexing has greatly increased in financial markets and many different families of products have been introduced. Often these products also have a minimum guarantee in the form of a minimum rate of return at specified dates or a minimum level of wealth at the end of the horizon. Periods of declining stock market returns together with low interest rates on Treasury bonds make it more difficult to meet these liabilities. We formulate a dynamic asset alloca...
Effect of Pressure on Minimum Fluidization Velocity
Zhu Zhiping; Na Yongjie; Lu Qinggang
2007-01-01
Minimum fluidization velocities of quartz sand and glass beads under pressures of 0.5, 1.0, 1.5 and 2.0 MPa were investigated. The minimum fluidization velocity decreases with increasing pressure, and the influence of pressure on the minimum fluidization velocity is stronger for larger particles than for smaller ones. Based on the test results and the Ergun equation, an empirical equation for the minimum fluidization velocity is proposed, and its calculation results are comparable to other researchers' results.
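The pressure trend can be sketched with the Wen & Yu simplification of the Ergun equation (this correlation, the air properties, and the particle values below are illustrative assumptions, not the paper's fitted equation):

```python
import math

def u_mf(dp, rho_p, p_mpa, t_k=293.15, mu=1.8e-5):
    """Minimum fluidization velocity [m/s] via the Wen & Yu form of the
    Ergun equation: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7.
    Gas density from the ideal-gas law for air; viscosity assumed
    pressure-independent. dp [m], rho_p [kg/m^3], pressure [MPa]."""
    rho_g = p_mpa * 1e6 * 0.029 / (8.314 * t_k)          # air density
    ar = rho_g * (rho_p - rho_g) * 9.81 * dp**3 / mu**2  # Archimedes number
    re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * mu / (rho_g * dp)

# Quartz-sand-like particles (~2650 kg/m^3), 0.5 mm, at 0.5 and 2.0 MPa:
print(u_mf(5e-4, 2650.0, 0.5), u_mf(5e-4, 2650.0, 2.0))
```

Raising the pressure raises the gas density, which lowers the computed u_mf, consistent with the trend reported above.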
7 CFR 35.11 - Minimum requirements.
2010-01-01
..., Denmark, East Germany, England, Finland, France, Greece, Hungary, Iceland, Ireland, Italy, Liechtenstein..., Switzerland, Wales, West Germany, Yugoslavia), or Greenland shall meet each applicable minimum requirement...
HOTELLING'S T2 CONTROL CHARTS BASED ON ROBUST ESTIMATORS
SERGIO YÁÑEZ
2010-01-01
Under the presence of multivariate outliers in a Phase I analysis of a historical data set, the T2 control chart based on the usual sample mean vector and sample variance-covariance matrix performs poorly. Several alternative estimators have been proposed. Among them, estimators based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) are powerful in detecting a reasonable number of outliers. In this paper we propose a T2 control chart using the biweight S estimators for the location and dispersion parameters when monitoring multivariate individual observations. Simulation studies show that this method outperforms the T2 control chart based on MVE estimators for a small number of observations.
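A minimal sketch of the classical T2 statistic for individual observations on synthetic data; the robust charts discussed above swap the sample mean and covariance for MVE, MCD or biweight S estimates to avoid masking when several outliers are present:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=50)
x[0] = [6.0, -6.0]                           # planted multivariate outlier

xbar = x.mean(axis=0)                        # classical location estimate
s_inv = np.linalg.inv(np.cov(x, rowvar=False))
d = x - xbar
t2 = np.einsum("ij,jk,ik->i", d, s_inv, d)   # T^2_i = d_i' S^{-1} d_i
print(int(np.argmax(t2)))                    # index of the flagged point
```

A single gross outlier is still flagged here; the robust estimators matter when clustered outliers inflate the classical covariance enough to hide each other.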
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.
Cadena, Brian C
2014-03-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.
An entropy approach to size and variance heterogeneity
Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.
2012-01-01
In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity si
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Gender variance in Asia: discursive contestations and legal implications
Wieringa, S.E.
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the
Permutation tests for multi-factorial analysis of variance
Anderson, M.J.; Braak, ter C.J.F.
2003-01-01
Several permutation strategies are often possible for tests of individual terms in analysis-of-variance (ANOVA) designs. These include restricted permutations, permutation of whole groups of units, permutation of some form of residuals or some combination of these. It is unclear, especially for
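The simplest of the strategies above, unrestricted permutation of raw observations, can be sketched for a one-way layout (example data are made up; multi-factor designs are exactly where this naive scheme becomes questionable and restricted or residual-based schemes are needed):

```python
import numpy as np

def f_stat(groups):
    """One-way ANOVA F statistic."""
    allx = np.concatenate(groups)
    grand = allx.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(float(((g - g.mean()) ** 2).sum()) for g in groups)
    k, n = len(groups), len(allx)
    return (ssb / (k - 1)) / (ssw / (n - k))

def perm_pvalue(groups, n_perm=2000, seed=0):
    """P-value from unrestricted permutation of the pooled observations."""
    rng = np.random.default_rng(seed)
    cuts = np.cumsum([len(g) for g in groups])[:-1]
    pooled = np.concatenate(groups)
    f_obs = f_stat(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if f_stat(np.split(pooled, cuts)) >= f_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction

a = np.array([5.1, 4.8, 5.5, 5.0, 5.2])
b = np.array([6.0, 6.3, 5.9, 6.4, 6.1])
print(f_stat([a, b]), perm_pvalue([a, b]))
```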
A Hold-out method to correct PCA variance inflation
Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai
2012-01-01
In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure was int...
Similarities Derived from 3-D Nonlinear Psychophysics: Variance Distributions.
Gregson, Robert A. M.
1994-01-01
The derivation of the variance of similarity judgments is made from the 3-D process in nonlinear psychophysics. The idea of separability of dimensions in metric space theories of similarity is replaced by one parameter that represents the degree of a form of interdimensional cross-sampling. (SLD)
Infinite variance in fermion quantum Monte Carlo calculations.
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
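The statistical phenomenon, divorced from any QMC machinery, can be shown with a toy estimator that has a finite mean but infinite variance (this is only an illustration of why the naive error bar fails, not the paper's setting):

```python
import numpy as np

# f(x) = 1/sqrt(x) on U(0,1) has mean 2 but E[f^2] diverges, so the
# sample variance never settles and the naive error bar sigma_hat/sqrt(n)
# is unreliable no matter how many samples are drawn.
rng = np.random.default_rng(7)
x = rng.random(10**6)
f = 1.0 / np.sqrt(x)
mean = float(f.mean())
stderr = float(f.std(ddof=1)) / np.sqrt(len(f))   # a misleading error bar
print(mean, stderr)
```

The mean still converges (slowly), but successive runs give wildly different "error bars", which is the failure mode the paper's bridge-link construction removes.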
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation
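The discrete-time anchor of any such frontier is the global minimum-variance portfolio, w = Sigma^{-1} 1 / (1' Sigma^{-1} 1); a minimal sketch with an illustrative (made-up) 3-asset covariance matrix, not the paper's continuous-time construction:

```python
import numpy as np

sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

w = np.linalg.solve(sigma, np.ones(3))  # Sigma^{-1} 1
w /= w.sum()                            # normalize to a fully invested portfolio
gmv_var = float(w @ sigma @ w)          # equals 1 / (1' Sigma^{-1} 1)
print(w, gmv_var)
```

The resulting variance is below that of every individual asset, which is the diversification effect the frontier quantifies.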
Properties of realized variance under alternative sampling schemes
Oomen, R.C.A.
2006-01-01
This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
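The bias studied above can be sketched by simulation: with i.i.d. additive noise of variance omega^2, realized variance behaves like E[RV] ~ IV + 2 n omega^2, exploding at the highest sampling frequencies (a diffusion is simulated here for simplicity, whereas the paper works with a pure jump price process; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 23400                                  # one "day" of 1-second prices
iv = 0.02**2                               # integrated variance (IV)
p = np.cumsum(rng.normal(0.0, np.sqrt(iv / n), n))   # efficient log-price
obs = p + rng.normal(0.0, 0.0005, n)       # observed = price + noise

def rv(x, step):
    """Realized variance from prices sampled every `step` observations."""
    r = np.diff(x[::step])
    return float(np.sum(r ** 2))

print(rv(obs, 1), rv(obs, 300), iv)        # sparse sampling tames the bias
```

Sampling every 300 observations (5-minute returns) trades a little efficiency for a large reduction in noise-induced bias, the classic motivation for sparse sampling schemes.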
20 CFR 901.40 - Proof; variance; amendment of pleadings.
2010-04-01
20 Employees' Benefits, Section 901.40: Proof; variance; amendment of pleadings. JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES, REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...
Multivariate Variance Targeting in the BEKK-GARCH Model
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Vertical velocity variances and Reynolds stresses at Brookhaven
Busch, Niels E.; Brown, R.M.; Frizzola, J.A.
1970-01-01
Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity component.
Estimation of dominance variance in purebred Yorkshire swine.
Culbertson, M S; Mabry, J W; Misztal, I; Gengler, N; Bertrand, J K; Varona, L
1998-02-01
We used 179,485 Yorkshire reproductive and 239,354 Yorkshire growth records to estimate additive and dominance variances by Method ℜ. Estimates were obtained for number born alive (NBA), 21-d litter weight (LWT), days to 104.5 kg (DAYS), and backfat at 104.5 kg (BF). The single-trait models for NBA and LWT included the fixed effects of contemporary group and regression on inbreeding percentage and the random effects mate within contemporary group, animal permanent environment, animal additive, and parental dominance. The single-trait models for DAYS and BF included the fixed effects of contemporary group, sex, and regression on inbreeding percentage and the random effects litter of birth, dam permanent environment, animal additive, and parental dominance. Final estimates were obtained from six samples for each trait. Regression coefficients for 10% inbreeding were found to be -.23 for NBA, -.52 kg for LWT, 2.1 d for DAYS, and 0 mm for BF. Estimates of additive and dominance variances expressed as a percentage of phenotypic variance were, respectively, 8.8 +/- .5 and 2.2 +/- .7 for NBA, 8.1 +/- 1.1 and 6.3 +/- .9 for LWT, 33.2 +/- .4 and 10.3 +/- 1.5 for DAYS, and 43.6 +/- .9 and 4.8 +/- .7 for BF. The ratio of dominance to additive variance ranged from .78 to .11.
Common Persistence and Error-Correction Model in Conditional Variance
LI Han-dong; ZHANG Shi-ying
2001-01-01
We first define the persistence and common persistence of a vector GARCH process from the point of view of integration, and then discuss the sufficient and necessary condition for co-persistence in variance. At the end of this paper, we give the properties and the error-correction model of a vector GARCH process under the condition of co-persistence.
Bounds for Tail Probabilities of the Sample Variance
V. Bentkus
2009-01-01
We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in auditing, as well as in processing environment-related data, in mind.
Variance Ranklets : Orientation-selective rank features for contrast modulations
Azzopardi, George; Smeraldi, Fabrizio
2009-01-01
We introduce a novel type of orientation-selective rank features that are sensitive to contrast modulations (second-order stimuli). Variance Ranklets are designed in close analogy with the standard Ranklets, but use the Siegel-Tukey statistics for dispersion instead of the Wilcoxon statistics. Their
Average local values and local variances in quantum mechanics
Muga, J G; Sala, P R
1998-01-01
Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.
Multivariate Variance Targeting in the BEKK-GARCH Model
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Multivariate variance targeting in the BEKK-GARCH model
Pedersen, Rasmus S.; Rahbek, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
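The two-step logic of variance targeting can be sketched in the simplest univariate GARCH(1,1) analogue of the BEKK setup: step one replaces the intercept with the sample (co-)variance via the targeting identity, and step two maximises the likelihood over the remaining dynamic parameters. The sketch below is our own toy version, with a crude grid search standing in for a proper optimiser and all parameter values purely illustrative.

```python
import math
import random

def simulate_garch(n, omega, alpha, beta, seed=1):
    # Simulate returns from a GARCH(1,1) data-generating process.
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    out = []
    for _ in range(n):
        x = math.sqrt(h) * rng.gauss(0.0, 1.0)
        out.append(x)
        h = omega + alpha * x * x + beta * h
    return out

def vt_neg_loglik(returns, alpha, beta, svar):
    # Step 2 of variance targeting: omega is pinned down by the targeting
    # identity omega = svar * (1 - alpha - beta), not estimated freely.
    omega = svar * (1.0 - alpha - beta)
    h, nll = svar, 0.0
    for x in returns:
        nll += 0.5 * (math.log(h) + x * x / h)
        h = omega + alpha * x * x + beta * h
    return nll

returns = simulate_garch(2000, omega=0.05, alpha=0.08, beta=0.90)
mean = sum(returns) / len(returns)
svar = sum((x - mean) ** 2 for x in returns) / len(returns)  # step 1

grid = [(a / 100.0, b / 100.0)
        for a in range(2, 16, 2) for b in range(70, 97, 2) if a + b < 100]
alpha_hat, beta_hat = min(grid, key=lambda ab: vt_neg_loglik(returns, *ab, svar))
print(alpha_hat, beta_hat)
```

The estimated model automatically matches the sample variance in its implied unconditional variance, which is the defining property of the VT estimator; the two-step structure is also what complicates its asymptotic theory, as the abstract notes.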
CAIXA. II. AGNs from excess variance analysis (Ponti+, 2012) [Dataset]
Ponti, G.; Papadakis, I.E.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.
2012-01-01
We report on the results of the first XMM-Newton systematic "excess variance" study of all radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray var
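The "excess variance" referred to here is conventionally computed as the normalized excess variance of a light curve: the sample variance minus the mean squared measurement error, divided by the squared mean count rate. The sketch below uses that standard estimator with a made-up toy light curve; it is the generic definition from the X-ray variability literature, not necessarily the exact pipeline of this dataset.

```python
def normalized_excess_variance(rates, errors):
    # sigma_NXS^2 = (1 / (N * mu^2)) * sum_i [ (x_i - mu)^2 - err_i^2 ]
    # i.e. variance in excess of measurement noise, normalised by the squared mean.
    n = len(rates)
    mu = sum(rates) / n
    s2 = sum((x - mu) ** 2 for x in rates) / n
    mean_err2 = sum(e * e for e in errors) / n
    return (s2 - mean_err2) / (mu * mu)

# Toy light curve: count rates with a known per-bin measurement error.
rates = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 0.7, 1.0]
errors = [0.05] * len(rates)
nxs = normalized_excess_variance(rates, errors)
print(nxs)
```

A value consistent with zero means the observed scatter is explained by measurement noise alone; significantly positive values indicate intrinsic source variability.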
Gender variance in Asia: discursive contestations and legal implications
Wieringa, S.E.
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the impli
Infinite variance in fermion quantum Monte Carlo calculations
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
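The failure mode described here, a finite mean with a diverging variance so that the quoted Monte Carlo error bar is meaningless, can be reproduced outside QMC with a one-line toy estimator. The sketch below, entirely our own illustration, estimates the integral of x^(-1/2) over (0, 1) (true value 2) by plain Monte Carlo; the per-sample variance E[1/U] - 4 is infinite, so the naive standard error is not a reliable uncertainty estimate even though the mean converges.

```python
import math
import random

rng = random.Random(42)

def estimate(n):
    # Plain Monte Carlo for the integral of x**-0.5 over (0, 1), true value 2.
    # The estimator's mean exists, but its per-sample variance diverges, so the
    # naive standard error computed below cannot be trusted.
    vals = [rng.random() ** -0.5 for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

results = {n: estimate(n) for n in (10_000, 40_000, 160_000)}
for n, (mean, stderr) in results.items():
    print(n, round(mean, 4), round(stderr, 4))
```

The estimated standard error fluctuates erratically across sample sizes because it is dominated by rare huge samples, the same symptom the paper's "bridge link" construction is designed to remove in fermion QMC.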
Testing for causality in variance using multivariate GARCH models
C.M. Hafner (Christian); H. Herwartz
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual
Variance Components for NLS: Partitioning the Design Effect.
Folsom, Ralph E., Jr.
This memorandum demonstrates a variance components methodology for partitioning the overall design effect (D) for a ratio mean into stratification (S), unequal weighting (W), and clustering (C) effects, so that D = WSC. In section 2, a sample selection scheme modeled after the National Longitudinal Study of the High School Class of 1972 (NLS)…
Perspective projection for variance pose face recognition from camera calibration
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be carried out using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on the frontal images and the remaining poses of the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative-free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a significantly greater likelihood for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic
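The model-comparison step this study relies on (a likelihood ratio test of homogeneous versus group-specific variances) can be sketched for the simplest case of two groups with normal errors. This is a generic illustration with made-up data, not the REML animal-model machinery of the paper; with one extra variance parameter, the statistic is asymptotically chi-square with 1 degree of freedom.

```python
import math
import random

def normal_loglik(xs, mu, var):
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
               for x in xs)

def fit(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)  # ML variance estimate
    return mu, var

rng = random.Random(7)
low = [rng.gauss(0.0, 1.0) for _ in range(200)]   # low-variance group
high = [rng.gauss(0.0, 3.0) for _ in range(200)]  # high-variance group

# H0: one pooled variance; H1: a separate variance per group (means fitted per group).
mu_l, var_l = fit(low)
mu_h, var_h = fit(high)
pooled_var = (sum((x - mu_l) ** 2 for x in low)
              + sum((x - mu_h) ** 2 for x in high)) / (len(low) + len(high))

ll_h1 = normal_loglik(low, mu_l, var_l) + normal_loglik(high, mu_h, var_h)
ll_h0 = normal_loglik(low, mu_l, pooled_var) + normal_loglik(high, mu_h, pooled_var)
lr_stat = 2.0 * (ll_h1 - ll_h0)                 # asymptotically chi-square, 1 df
p_value = math.erfc(math.sqrt(lr_stat / 2.0))   # chi-square(1) survival function
print(lr_stat, p_value)
```

A small p-value, as here, favours the heterogeneous-variance model, which is the same conclusion the study reaches for several carcass traits across Brahman inheritance groups.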
Robust Control Methods for On-Line Statistical Learning
Capobianco Enrico
2001-01-01
Ensuring that the results of data processing in an experiment are not affected by the presence of outliers is relevant for statistical control and learning studies. Learning schemes should thus be tested for their capacity to handle outliers in the observed training set, so as to achieve reliable estimates with respect to the crucial bias and variance aspects. We describe possible ways of endowing neural networks with statistically robust properties by defining feasible error criteria. It is convenient to cast neural nets in state-space representations and apply both Kalman filter and stochastic approximation procedures in order to suggest statistically robustified solutions for on-line learning.
Theoretical Framework for Robustness Evaluation
Sørensen, John Dalsgaard
2011-01-01
This paper presents a theoretical framework for evaluation of the robustness of structural systems, including bridges and buildings. Typically, modern structural design codes require that 'the consequence of damages to structures should not be disproportional to the causes of the damages'. However, although the importance of robustness for structural design is widely recognized, the code requirements are not specified in detail, which makes practical use difficult. This paper describes a theoretical and risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines...
Robustness Analysis of Kinetic Structures
Kirkegaard, Poul Henning; Sørensen, John Dalsgaard
2009-01-01
The present paper considers the robustness of kinetic structures. Robustness of structures has obtained renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. Especially for these types of structural systems, it is of interest to investigate how robust the structures are, or what happens if a structural element is added to or removed from the original structure. The present paper discusses this issue for kinetic structures in architecture.
Population genetics of translational robustness.
Wilke, Claus O; Drummond, D Allan
2006-05-01
Recent work has shown that expression level is the main predictor of a gene's evolutionary rate and that more highly expressed genes evolve slower. A possible explanation for this observation is selection for proteins that fold properly despite mistranslation, in short selection for translational robustness. Translational robustness leads to the somewhat paradoxical prediction that highly expressed genes are extremely tolerant to missense substitutions but nevertheless evolve very slowly. Here, we study a simple theoretical model of translational robustness that allows us to gain analytic insight into how this paradoxical behavior arises.
Robustness of airline route networks
Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David
2016-03-01
Airlines shape their route network by defining their routes through supply and demand considerations, paying little attention to network performance indicators, such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and its entire geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.
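The hub-centrality argument can be made concrete with a toy comparison: remove the most connected airport from a hub-and-spoke network (FSC-like) and from a decentralised ring-with-chords network (LCC-like), and compare the size of the largest surviving connected component. The two networks and all labels below are our own invented examples, not data from the study.

```python
from collections import deque

def largest_component(nodes, edges):
    # Size of the largest connected component, via breadth-first search.
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def drop_max_degree(nodes, edges):
    # Targeted attack: delete the highest-degree node, then measure what survives.
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    worst = max(nodes, key=deg.get)
    keep = [n for n in nodes if n != worst]
    return largest_component(keep, [(u, v) for u, v in edges if worst not in (u, v)])

hub_nodes = ["HUB"] + [f"A{i}" for i in range(10)]
hub_edges = [("HUB", f"A{i}") for i in range(10)]                 # hub-and-spoke
ring_nodes = [f"B{i}" for i in range(10)]
ring_edges = ([(f"B{i}", f"B{(i + 1) % 10}") for i in range(10)]  # ring plus chords
              + [(f"B{i}", f"B{(i + 3) % 10}") for i in range(10)])

hub_after = drop_max_degree(hub_nodes, hub_edges)
ring_after = drop_max_degree(ring_nodes, ring_edges)
print(hub_after, ring_after)  # → 1 9: the hub net shatters, the ring stays connected
```

Losing the hub reduces the hub-and-spoke network to isolated spokes, while the decentralised network keeps all remaining airports connected, mirroring the study's finding that LCC-style networks are more robust to the loss of their most central nodes.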
A COMPARISON BETWEEN CLASSICAL AND ROBUST METHOD IN A FACTORIAL DESIGN IN THE PRESENCE OF OUTLIER
Anwar Fitrianto
2013-01-01
Analysis of Variance (ANOVA) techniques based on the classical least squares (LS) method require several assumptions, such as normality, constant variance, and independence. Those assumptions can be violated by several causes, such as the presence of an outlying observation. There is much evidence in the literature that the LS estimate is easily affected by outliers. To remedy this problem, a robust procedure that provides estimation, inference, and testing not influenced by outlying observations is put forward. A well-known approach to handling datasets with outliers is M-estimation. In this study, both classical and robust procedures are applied to data from a factorial experiment. The results show that the classical least-squares method, unlike the robust method, leads to misleading conclusions in the analysis of factorial designs.
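The contrast between LS and M-estimation is easiest to see in the simplest setting, estimating a location parameter. The sketch below, a minimal illustration with made-up data rather than the study's factorial analysis, fits a Huber M-estimate by iteratively reweighted least squares and compares it with the sample mean (the LS estimate) in the presence of one gross outlier; the tuning constant k = 1.345 is the conventional choice for 95% efficiency at the normal.

```python
import random
import statistics

def huber_location(xs, k=1.345, iters=50):
    # Iteratively reweighted least squares for the Huber M-estimate of location.
    mu = statistics.median(xs)
    for _ in range(iters):
        s = statistics.median(abs(x - mu) for x in xs) / 0.6745 or 1.0  # MAD scale
        w = [1.0 if abs(x - mu) <= k * s else k * s / abs(x - mu) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

rng = random.Random(3)
data = [rng.gauss(10.0, 1.0) for _ in range(20)] + [100.0]  # one gross outlier
ls_estimate = sum(data) / len(data)  # least squares = the sample mean
robust_estimate = huber_location(data)
print(round(ls_estimate, 2), round(robust_estimate, 2))
```

The single outlier drags the LS estimate several units away from the true center of 10, while the Huber estimate stays close to it, which is the behaviour the study exploits in the factorial design.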
Stochastic variational approach to minimum uncertainty states
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)]
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
5 CFR 630.206 - Minimum charge.
2010-01-01
Title 5, Administrative Personnel (2010-01-01) — Minimum charge. Section 630.206, OFFICE OF PERSONNEL MANAGEMENT, CIVIL SERVICE REGULATIONS, ABSENCE AND LEAVE, Definitions and General Provisions for Annual and Sick Leave. § 630.206 Minimum charge. (a) Unless an agency...
Stochastic variational approach to minimum uncertainty states
Illuminati, F.; Viola, L.
1995-01-01
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schrödinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials.