Feldman, Hume A; Hudson, Michael J
2009-01-01
The low order moments of the large scale peculiar velocity field are sensitive probes of the matter density fluctuations on very large scales. However, peculiar velocity surveys have varying spatial distributions of tracers, and so the moments estimated are hard to model and thus are not directly comparable between surveys. In addition, the sparseness of typical proper distance surveys can lead to aliasing of small scale power into what is meant to be a probe of the largest scales. Here we extend our previous optimization analysis of the bulk flow to include the shear and octupole moments where velocities are weighted to give an optimal estimate of the moments of an idealized survey, with the variance of the difference between the estimate and the actual flow being minimized. These "minimum variance" (MV) estimates can be designed to calculate the moments on a particular scale with minimal sensitivity to small scale power, and thus different surveys can be directly compared. The MV moments were also designed ...
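The survey-specific weighting scheme is beyond a short sketch, but the core minimum variance idea reduces, for independent scalar measurements of a single quantity, to inverse-variance weighting. The following illustration is not the paper's estimator, just the textbook special case:

```python
import numpy as np

def inverse_variance_combine(values, variances):
    """Minimum variance unbiased combination of independent measurements
    of the same quantity: weights proportional to 1/variance."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    estimate = float(w @ np.asarray(values, dtype=float))
    combined_variance = 1.0 / (1.0 / variances).sum()
    return estimate, combined_variance
```

Down-weighting noisy measurements this way is what suppresses the aliasing of small-scale power that the abstract describes, generalized there to full velocity-moment estimators.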
Linear Minimum variance estimation fusion
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that general multisensor unbiased linearly weighted estimation fusion is essentially linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending Gauss-Markov estimation to the random parameter case of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of local estimates, obtained from a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to this optimization problem, which depends only on the covariance matrix CK. Third, if the a priori information (expectation and covariance) of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of CK for a class of multisensor linear systems with coupled measurement noises.
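A minimal sketch of the weighted-sum fusion idea for the special case of uncorrelated local estimation errors; the paper's general formulas also cover coupled noises and prior information, and the function name and interface here are illustrative:

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Fuse unbiased local estimates by linear minimum variance.

    The fused estimate is x = sum_i W_i x_i with the weight matrices
    constrained to sum to the identity (unbiasedness); for uncorrelated
    local errors the optimal weights are W_i = (sum_j P_j^-1)^-1 P_i^-1.
    """
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))           # fused covariance
    x_fused = P_fused @ sum(Pi @ x for Pi, x in zip(infos, estimates))
    return x_fused, P_fused
```

With two equally reliable sensors the fused estimate is their average and the fused covariance is halved, which is the expected minimum variance behavior.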
A Broadband Beamformer Using Controllable Constraints and Minimum Variance
Karimian-Azari, Sam; Benesty, Jacob; Jensen, Jesper Rindom
2014-01-01
The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints at the expense of reducing the degrees of freedom...
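The two classical weight formulas the abstract refers to can be sketched compactly, assuming a known covariance matrix R, steering vector a, constraint matrix C and response vector f:

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR: minimize w^H R w subject to the distortionless
    constraint w^H a = 1."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def lcmv_weights(R, C, f):
    """LCMV: minimize w^H R w subject to C^H w = f; each extra column
    of C (e.g. a null on an interferer) spends one degree of freedom."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)
```

With a single distortionless constraint (C = a, f = 1), LCMV reduces exactly to MVDR.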
Minimum Variance Portfolios in the Brazilian Equity Market
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
Minimum variance estimation of yield parameters of rubber tree with ...
2013-03-01
Mar 1, 2013 ... STAMP, an OxMetrics modular software system for time series analysis, was used to estimate the yield ... underlying regression techniques ... Kalman Filter Minimum Variance Estimation of Rubber Tree Yield Parameters.
A note on minimum-variance theory and beyond
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, the interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and its implications for modelling the firing patterns of single neurons.
A comparison between temporal and subband minimum variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.
2014-01-01
This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated...
PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS
Daniel Menezes Cavalcante
2016-07-01
Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are in fact able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
Generalized Minimum Variance Control for MDOF Structures under Earthquake Excitation
Lakhdar Guenfaf
2016-01-01
Control of a multi-degree-of-freedom structural system under earthquake excitation is investigated in this paper. A control approach based on the Generalized Minimum Variance (GMV) algorithm is developed and presented. Our approach is a generalization to multivariable systems of the GMV strategy designed initially for single-input single-output (SISO) systems. Kanai-Tajimi and Clough-Penzien models are used to generate the seismic excitations; these models are calculated using the site-specific soil parameters. Simulation tests on a 3DOF structure are performed and show the effectiveness of the control method.
Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging
Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt
2007-01-01
This paper investigates the application of adaptive beamforming in medical ultrasound imaging. A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency sub-band. As opposed to the conventional Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations...
Interdependence of NAFTA capital markets: A minimum variance portfolio approach
López-Herrera Francisco
2014-01-01
We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a "time-varying minimum variance portfolio" that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical "risk-averse" agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible "home country bias". This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
Testing the Minimum Variance Method for Estimating Large Scale Velocity Moments
Agarwal, Shankar; Watkins, Richard
2012-01-01
The estimation and analysis of large-scale bulk flow moments of peculiar velocity surveys is complicated by non-spherical survey geometry, the non-uniform sampling of the matter velocity field by the survey objects, and the typically large measurement errors of the measured line-of-sight velocities. Previously we have developed an optimal "minimum variance" (MV) weighting scheme for using peculiar velocity data to estimate bulk flow moments for idealized dense and isotropic surveys with Gaussian radial distributions that avoids many of these complications. These moments are designed to be easy to interpret and are comparable between surveys. In this paper, we test the robustness of our MV estimators using numerical simulations. Using MV weights, we estimate the underlying bulk flow moments for DEEP, SFI++ and COMPOSITE mock catalogues extracted from the LasDamas and the Horizon Run numerical simulations and compare these estimates to the true moments calculated directly from the simulation boxes. We show that...
Automated Clutch of AMT Vehicle Based on Adaptive Generalized Minimum Variance Controller
Ze Li; Xinhao Yang
2014-01-01
... of the automated clutch of automatic mechanical transmission vehicle. In this paper, an adaptive generalized minimum variance controller is applied to the automated clutch, which is driven by a brushless DC motor...
WU Wentao; PU Jie; LU Yi
2012-01-01
In the medical ultrasound imaging field, in order to obtain high resolution and correct the phase errors induced by velocity inhomogeneity of the tissue, a high-resolution medical ultrasound imaging method combining minimum variance beamforming and the generalized coherence factor is presented. First, the data from the elements are delayed for focusing; then the multi-channel data are used for minimum variance beamforming; at the same time, the data are transformed from array space to beam space to calculate the generalized coherence factor; finally, the generalized coherence factor is used to weight the result of minimum variance beamforming, and the medical images are produced by the imaging system. Experiments on a point target and an anechoic cyst are used to verify the proposed method. The results show that the proposed method is better than minimum variance beamforming and conventional beamforming in terms of resolution, contrast and robustness.
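The coherence-factor weighting step can be sketched as below. This is the basic coherence factor computed directly on per-channel data, not the beam-space generalized version used in the paper, and the MV beamforming stage it would multiply is omitted:

```python
import numpy as np

def coherence_factor(channel_data):
    """Coherence factor per imaging sample: ratio of coherent to
    incoherent energy across N channels; 1 for perfectly focused data,
    near 0 for incoherent (defocused or noisy) data."""
    N = channel_data.shape[0]
    coherent = np.abs(channel_data.sum(axis=0)) ** 2
    incoherent = (np.abs(channel_data) ** 2).sum(axis=0)
    return coherent / (N * incoherent + 1e-12)   # eps avoids 0/0
```

The final image sample would then be the MV beamformer output multiplied by this factor, which suppresses regions where the channels disagree.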
Diamantis, Konstantinos; Greenaway, Alan H.; Anderson, Tom
2017-01-01
Recent progress in adaptive beamforming techniques for medical ultrasound has shown that current resolution limits can be surpassed. One method of obtaining improved lateral resolution is the Minimum Variance (MV) beamformer. The frequency domain implementation of this method effectively divides ... the MVS beamformer is not suitable for imaging continuous targets, and significant resolution gains were obtained only for isolated targets.
SIMULATION STUDY OF GENERALIZED MINIMUM VARIANCE CONTROL FOR AN EXTRACTION TURBINE
Shi Xiaoping
2003-01-01
In an extraction turbine, the pressure of the extracted steam and the rotational speed of the rotor are two important controlled quantities. The traditional linear state feedback control method is not accurate enough to control the two quantities because of the nonlinearity and coupling in the system. A generalized minimum variance control method is studied for an extraction turbine. Firstly, a nonlinear mathematical model of the control system for the two quantities is transformed into a linear system with two white noises. Secondly, a generalized minimum variance control law is applied to the system, and a comparative simulation is performed. The simulation results indicate that the precision and dynamic quality of the regulating system under the new control law are both better than those under the state feedback control law.
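For intuition, a minimal SISO sketch of minimum variance control with an assumed first-order plant (not the turbine model of the paper): choosing the input to cancel the predictable part of the next output leaves only the unpredictable noise in the output:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.9, 0.5               # assumed plant: y[t+1] = a*y[t] + b*u[t] + e[t+1]
noise_std = 0.1               # white noise e with variance 0.01
y, u = 0.0, 0.0
outputs = []
for _ in range(1000):
    e = noise_std * rng.standard_normal()
    y = a * y + b * u + e     # plant update (unit delay)
    u = -a * y / b            # MV law: cancels the predictable part of y[t+1]
    outputs.append(y)
# under MV control the closed-loop output is just the noise, so
# var(y) approaches var(e) = 0.01, the minimum achievable variance
```

The generalized (GMV) variant of the paper additionally penalizes control effort and reference error, which tames the aggressive inputs a pure MV law can produce.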
Juan ZHAO; Yunmin ZHU
2009-01-01
The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors make a comprehensive discussion of the relationship between the two estimates. Firstly, the authors consider the classical linear model, in which the coefficient matrix of the linear model is deterministic, and derive the necessary and sufficient condition for equivalence of the two estimates. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information about the parameter being estimated. Secondly, the authors consider the linear model with random coefficient matrix, called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information about the parameter.
Image fractal coding algorithm based on complex exponent moments and minimum variance
Yang, Feixia; Ping, Ziliang; Zhou, Suhua
2017-02-01
Image fractal coding achieves a very high compression ratio, but its main problem is the low speed of coding. An algorithm based on Complex Exponent Moments (CEM) and minimum variance is proposed to speed up fractal coding. The definition of CEM and its FFT algorithm are presented, and the multi-distortion invariance of CEM is discussed; this invariance is well suited to the fractal property of an image. The optimal matching pairs of range blocks and domain blocks in an image are determined by minimizing the variance of their CEM. Theoretical analysis and experimental results show that the algorithm can dramatically reduce the iteration time and speed up the image encoding and decoding process.
An improved minimum variance beamforming applied to plane-wave imaging in medical ultrasound
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh; Jensen, Jørgen Arendt
2016-01-01
The minimum variance beamformer (MVB) is an adaptive beamformer which provides images with higher resolution and contrast in comparison with non-adaptive beamformers like delay and sum (DAS). It finds the weight vector of the beamformer by minimizing the output power while keeping the desired signal unchanged. We used the eigen-based MVB and generalized coherence factor (GCF) to further improve the quality of MVB beamformed images. The eigen-based MVB projects the weight vector with a transformation matrix constructed by eigen-decomposing the array covariance matrix, which increases resolution and contrast...
Minimum variance system identification with application to digital adaptive flight control
Kotob, S.; Kaufman, H.
1975-01-01
A new on-line minimum variance filter for the identification of systems with additive and multiplicative noise is described which embodies both accuracy and computational efficiency. The resulting filter is shown to use both the covariance of the parameter vector itself and the covariance of the error in identification. A bias reduction scheme can be used to yield asymptotically unbiased estimates. Experimental results for simulated linearized lateral aircraft motion in a digital closed loop mode are presented, showing the utility of the identification schemes.
Minimum variance imaging based on correlation analysis of Lamb wave signals.
Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi
2016-08-01
In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with a sparse transducer network. Previous MVDR studies use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, the scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand, and the evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, the LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive property of the LSCC is its independence of damage parameters. Therefore, the LSCC model in the transducer network can be accurately evaluated, and the imaging performance is improved as a result. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance.
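At its core the damage feature is a normalized correlation between local signal windows, which is what makes it amplitude-independent; a sketch (the paper's window selection and the MVDR imaging step are omitted):

```python
import numpy as np

def lscc(window_x, window_y):
    """Correlation coefficient of two local signal windows; invariant to
    scaling and offset of either window, hence independent of the
    scattering amplitude."""
    x = window_x - window_x.mean()
    y = window_y - window_y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))
```

Because scaling a window does not change the coefficient, a feature model built on it does not require knowing the damage type, orientation or size in advance.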
Designing a robust minimum variance controller using discrete slide mode controller approach.
Alipouri, Yousef; Poshtan, Javad
2013-03-01
Designing minimum variance controllers (MVC) for nonlinear systems is confronted with many difficulties. The methods able to identify MIMO nonlinear systems are scarce. Harsh control signals produced by MVC are among other disadvantages of this controller. Besides, MVC is not a robust controller. In this article, the Vector ARX (VARX) model is used for simultaneously modeling the system and disturbance in order to tackle these disadvantages. For ensuring the robustness of the control loop, the discrete slide mode controller design approach is used in designing MVC and generalized MVC (GMVC). The proposed method for controller design is tested on a nonlinear experimental Four-Tank benchmark process and is compared with nonlinear MVCs designed by neural networks. In spite of the simplicity of designing GMVCs for the VARX models with uncertainty, the results show that the proposed method is accurate and implementable.
Tiong Sieh Kiong
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered an optimization problem, such that the optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and to increase the signal to interference noise ratio (SINR) for wanted signals.
Early fault detection in automotive ball bearings using the minimum variance cepstrum
Park, Choon-Su; Choi, Young-Chul; Kim, Yang-Hann
2013-07-01
Ball bearings in automotive wheels play an important role in a vehicle: they enable an automobile to run while simultaneously supporting the vehicle. Once faults are generated, even small ones, they often grow fast even under normal driving conditions and cause vibration and noise. Therefore, it is critical to detect faults as early as possible to prevent bearings from generating harsh noise and vibration. How early faults can be detected depends on how well a detection method extracts the information of early faults from the measured signal. Incipient faults are so small that the fault signal is inherently buried in noise. The minimum variance cepstrum (MVC) has been introduced for the observation of periodic impulse signals in noisy environments. We particularly focus on the definition of MVC that goes back to the original definition by Bogert et al., in comparison with the recently prevalent definition of cepstral analysis. In this work, the MVC is therefore obtained by liftering a logarithmic power spectrum, and the lifter bank is designed by the minimum variance algorithm. Furthermore, it is also shown how efficient the method is for detecting the periodic fault signal produced by early faults, using automotive ball bearings fitted to an automobile under running conditions. We were able to detect incipient faults in 4 out of 12 normal bearings which passed the acceptance test, as well as in bearings that were recalled due to noise and vibration. In addition, we compared the results of the proposed method with results obtained using other well-established early fault detection methods, chosen from 4 groups of methods classified by the domain of observation. The results demonstrated that MVC determined bearing fault periods more clearly than the other methods under the given conditions.
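Following the Bogert et al. definition the abstract refers to, the power cepstrum is the inverse transform of the log power spectrum, and a fault with period T samples produces a peak at quefrency T. A sketch without the paper's minimum variance lifter bank:

```python
import numpy as np

def power_cepstrum(x):
    """Power cepstrum: inverse FFT of the log power spectrum
    (Bogert et al.); periodic impulses map to peaks at their period."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return np.fft.irfft(np.log(spec + 1e-12), n=len(x))  # eps guards log(0)

# an impulse train with period T samples (an idealized fault signature)
n, T = 1000, 50
x = np.zeros(n)
x[::T] = 1.0
c = power_cepstrum(x)
peak = int(np.argmax(c[10:n // 2])) + 10   # skip near-zero quefrencies
```

The fault period is read off as the quefrency of the dominant peak; the MV lifter design in the paper sharpens exactly this peak under heavy noise.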
Yakup Hundur; Rainer Hippler; Ziya B. Güvenç
2006-01-01
The linear thermal expansion coefficient (TEC) of bulk Ti is investigated by means of molecular dynamics simulation. The elastic minimum image convention of periodic boundary conditions is introduced to allow the bulk to adjust its size according to the new fixed temperature. The TEC and the specific heat of Ti are compared to the available theoretical and experimental data.
Thermography based breast cancer detection using texture features and minimum variance quantization
Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar
2014-01-01
In this paper, we present a system based on feature extraction and image segmentation techniques for detecting and diagnosing abnormal patterns in breast thermograms. The proposed system consists of three major steps: feature extraction, classification into normal and abnormal patterns, and segmentation of abnormal patterns. Computed features based on gray-level co-occurrence matrices (GLCM) are used to evaluate the effectiveness of textural information possessed by mass regions. A total of 20 GLCM features are extracted from thermograms. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a Support Vector Machine classifier, a Naive Bayes classifier and a K-Nearest Neighbor classifier. To evaluate the classification performance, five-fold cross validation and Receiver operating characteristic analysis were performed. The verification results show that the proposed algorithm gives the best classification results using the K-Nearest Neighbor classifier, with an accuracy of 92.5%. Image segmentation techniques can play an important role in segmenting and extracting suspected hot regions of interest in breast infrared images. Three image segmentation techniques are discussed: minimum variance quantization, dilation of the image and erosion of the image. The hottest regions of thermal breast images are extracted and compared to the original images. According to the results, the proposed method has the potential to extract almost the exact shape of tumors. PMID:26417334
Soodabeh Darzi
An experience-oriented, convergence-improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method both in reaching optimal solutions and in robustness.
Chen, Yang; Zou, Ling; Zhou, Bin
2017-07-01
The high mounting precision of a fiber underwater acoustic array leads to an array manifold without perturbation. Besides, targets are either static or slowly moving in azimuth in underwater acoustic array signal processing. Therefore, the covariance matrix can be estimated accurately by prolonging the observation time. However, the achievable bearing resolution is still poor due to the small aperture, low SNR and strong interference. In this paper, diagonal rejection (DR) technology for Minimum Variance Distortionless Response (MVDR) beamforming was developed to enhance the resolution performance. The core idea of DR is rejecting the main diagonal elements of the covariance matrix to improve the output signal to interference and noise ratio (SINR). The definition of SINR here implicitly assumes independence between the spatial filter and the received observations at which the SINR is measured. The power of the noise converges on the diagonal of the covariance matrix and is then integrated into the output beams. With the diagonal noise rejected by a factor smaller than 1, the array weights of MVDR concentrate on interference suppression, leading to a better resolution capability. The algorithm was theoretically proved, with the optimal rejection coefficient derived under both infinite and finite snapshot scenarios. Numerical simulations were conducted with an example of a linear array of eight elements at half-wavelength spacing. Both the resolution and Direction-of-Arrival (DOA) performance of MVDR and DR-based MVDR (DR-MVDR) were compared under different SNRs and snapshot numbers. A conclusion can be drawn that, with the covariance matrix accurately estimated, DR-MVDR can provide a lower sidelobe output level and a better bearing resolution capacity than MVDR without harming the DOA performance.
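The diagonal rejection step can be sketched as scaling the main diagonal of the covariance matrix by a coefficient smaller than 1 before forming the MVDR weights; the paper derives the optimal coefficient, whereas here it is left as a free parameter:

```python
import numpy as np

def dr_mvdr_weights(R, a, beta=0.5):
    """MVDR with diagonal rejection: keep only a fraction beta (< 1) of
    the main diagonal of R, de-emphasizing the accumulated noise power so
    the weights concentrate on interference suppression."""
    Rd = R - (1.0 - beta) * np.diag(np.diag(R))   # diagonal scaled by beta
    w = np.linalg.solve(Rd, a)
    return w / (a.conj() @ w)                     # distortionless: w^H a = 1
```

The distortionless constraint is preserved by the final normalization, so the look direction response is unchanged while diagonal (noise) power is down-weighted.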
Panea, I.; Drijkoningen, G.G.
2008-01-01
Coherent noise generated by surface waves or ground roll within a heterogeneous near surface is a major problem in land seismic data. Array forming based on single-sensor recordings might reduce such noise more robustly than conventional hardwired arrays. We use the minimum-variance
Bereteu, L; Drăgănescu, G E; Stănescu, D; Sinescu, C
2011-12-01
In this paper, we seek an adequate quantitative method based on minimum variance spectral analysis to reflect the dependence of speech quality on the correct positioning of dental prostheses. We also seek quantitative parameters that reflect the correct position of dental prostheses in a sensitive manner.
Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.
Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash
2017-01-01
Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
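The objective can be sketched for a single candidate edge: with tip distances measured from the edge's two endpoints, slide the root along the edge and keep the position minimizing the variance of root-to-tip distances. The paper gives a linear-time algorithm over all edges; this grid search is only illustrative:

```python
import numpy as np

def best_root_on_edge(dist_a, dist_b, edge_len, steps=1001):
    """Place a candidate root on an edge of length edge_len so as to
    minimize the variance of root-to-tip distances; dist_a / dist_b are
    tip distances from the edge's two endpoints."""
    xs = np.linspace(0.0, edge_len, steps)
    variances = np.array([
        np.var(np.concatenate([dist_a + x, dist_b + (edge_len - x)]))
        for x in xs
    ])
    i = int(np.argmin(variances))
    return xs[i], variances[i]
```

Under a strict molecular clock all root-to-tip distances are equal, so the variance at the true root is zero; random deviations from the clock move the minimum only slightly, which is the justification the abstract gives.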
Mohammad Ali Barati
2016-04-01
Multi-period models of portfolio selection have been developed in the literature under various assumptions. In this study, for the first time, the portfolio selection problem is modeled based on mean-semivariance with transaction costs and minimum transaction lots, considering functional constraints and fuzzy parameters. Functional constraints such as transaction costs and minimum transaction lots were included, and the asset return parameters were considered as trapezoidal fuzzy numbers. An efficient genetic algorithm (GA) was designed, results were analyzed using numerical instances, and sensitivity analyses were executed. In the numerical study, the problem was solved with and without each type of constraint, including transaction costs and minimum transaction lots. In addition, using sensitivity analysis, the results of the model were presented under variations of the minimum expected rate over the programming periods.
Performance assessment of excitation system based on minimum variance benchmark
ZHANG Hong; XU Bin; GAO Jian; PANG Jian
2014-01-01
Step response tests are generally used to evaluate synchronous generator excitation system performance, but this method cannot be implemented online. A method for evaluating excitation system performance against a minimum variance control benchmark is proposed. The output performance of the system under the action of the minimum variance controller is taken as the upper bound of performance, and the ratio of this output performance to the actual output performance of the system is defined as the performance index. To avoid expanding the Diophantine equation, the filtering and correlation analysis (FCOR) algorithm is introduced. The analysis shows that this method only requires the synchronous generator output voltage data and a priori knowledge of the system dead time. Simulation results show that this method simplifies the calculation process and evaluates the performance of the excitation control system online, timely and accurately.
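A sketch of the FCOR idea under an assumed long-AR whitening step (variable names are illustrative; the paper applies this to generator terminal voltage data with known dead time d):

```python
import numpy as np

def harris_index(y, d, ar_order=20):
    """FCOR estimate of the minimum variance performance index.

    Whiten the output with a long AR model to recover the innovations e,
    then eta = sum_{i=0}^{d-1} corr(y_t, e_{t-i})^2.  eta near 1 means the
    loop is already close to minimum variance; eta << 1 means poor tuning.
    d is the (assumed known) process dead time in samples."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    # least-squares AR fit y[t] ~ sum_k a_k * y[t-k] to whiten the output
    X = np.column_stack([y[ar_order - k: n - k] for k in range(1, ar_order + 1)])
    coefs, *_ = np.linalg.lstsq(X, y[ar_order:], rcond=None)
    e = y[ar_order:] - X @ coefs          # innovation estimates
    eta = 0.0
    for i in range(d):
        yi = y[ar_order + i:]             # y_t
        ei = e[: len(e) - i] if i else e  # e_{t-i}
        eta += np.corrcoef(yi, ei)[0, 1] ** 2
    return eta
```

This avoids the Diophantine expansion entirely: only routine output data and the dead time are needed, which is what makes the assessment feasible online.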
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linearly constrained minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and steer a strong beam toward the desired signal through its computed weight vectors. However, the weights computed by LCMV often neither form the radiation beam toward the target user precisely nor reduce interference sufficiently by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance LCMV beamforming. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve its weights. Simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by integrating PSO, DM-AIS, and GSA into LCMV through suppression of interference in undesired directions. Furthermore, GSA proves more effective than PSO for LCMV beamforming optimization. The algorithms were implemented in Matlab.
范永辉; 杨华龙; 刘金霞
2012-01-01
In view of the interactions among the handysize, panamax and capesize dry bulk shipping markets, the dry bulk freight indexes for the different vessel types issued by the Baltic Exchange were employed, and the volatility spillover effects among the three markets were studied with the BEKK variance model of multivariate GARCH. The capesize market is found to have a volatility spillover effect on the handysize and panamax markets, while the handysize and panamax markets have no volatility spillover effect on the capesize market; between the handysize and panamax markets there is a two-way volatility spillover effect. A Wald test verified these inferences. The results can help dry bulk shipping operators hedge the risk of market volatility.
Hu, Y.F., E-mail: Yongfeng.hu@lightsource.ca [Canadian Light Source, Saskatoon, SK (Canada); Xiao, Q.; Wang, D.; Cui, X. [Canadian Light Source, Saskatoon, SK (Canada); Nesbitt, H.W. [Department of Earth Sciences, University of Western Ontario, London, ONT (Canada); Bancroft, G.M. [Department of Chemistry, University of Western Ontario, London, ONT (Canada)
2015-07-15
Highlights: • Electronic structure of non-conducting glass studied by hard X-ray photoelectron spectroscopy. • A thin film of Cr was deposited on the vitreous SiO{sub 2} glass to overcome the sample charging. • Excellent O 1s and Si 1s linewidths were obtained, matching those reported using the laboratory based Kratos Axis Ultra spectrometer equipped with a magnetic compensation system. • The bulk and interface states of non-conducting samples are studied as a function of photon energy. - Abstract: Hard X-ray photoelectron spectra (2200 eV to 5000 eV photon energies) have been obtained for the first time on a bulk non-conductor, vitreous SiO{sub 2}, on a high resolution (E/ΔE of 10,000) synchrotron beamline at the Canadian Light Source (CLS). To minimize charging and differential charging, the SiO{sub 2} was coated with very thin layers (0.5 to 1.5 nm) of Cr metal. The O 1s linewidth obtained at 2500 eV photon energy was 1.26 eV—the minimum linewidth for SiO{sub 2}—and in good agreement with that obtained at 1486 eV on a Kratos Axis Ultra spectrometer equipped with a magnetic charge compensation system. The Si 1s linewidth of 1.5 eV, somewhat broader than that previously obtained at 1486 eV on the Si 2p{sub 3/2} line of 1.16 eV, is mainly due to the much larger inherent Si 1s linewidth (0.5 eV) compared to the inherent Si 2p linewidth (<0.1 eV). Both linewidths are dominated by the large final state vibrational broadening previously described. The Cr coating produces surface monolayers of interfacial Cr “suboxide” (Cr-subox), Cr metal, and a surface Cr oxide (Cr-surfox). Cr-subox (Si−O−Cr) gives rise to the weak near-surface Si 1s peak, while both oxides give rise to both the weak surface O 1s peak and the Cr 2p oxide peak. Both the O 1s and Si 1s surface peaks are shifted by ∼2 eV relative to the large bulk Si 1s and O 1s peaks. The weak Si 1s and O 1s surface peaks along with the Cr 2p oxide peak decrease in intensity greatly as the photon
Broadband Minimum Variance Beamforming for Ultrasound Imaging
Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt
2009-01-01
to the ultrasound data. As the error increases, it is seen that the MV beamformer is not as robust compared with the DS beamformer with boxcar and Hanning weights. Nevertheless, it is noted that the DS does not outperform the MV beamformer. For errors of 2% and 4% of the correct value, the FWHM are {0.81, 1.25, 0...
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad R.; Okou, Cédric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
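The realized counterpart of the decomposition above splits realized variance into upside and downside parts, RV = RV_up + RV_down. A minimal numpy sketch on simulated returns (the data and scale are illustrative assumptions, not the paper's dataset):

```python
import numpy as np

def realized_semivariances(returns):
    """Split realized variance into upside and downside components,
    RV = RV_up + RV_down, the building blocks of upside and downside
    variance risk premia."""
    r = np.asarray(returns, float)
    rv_up = float(np.sum(np.where(r > 0, r, 0.0) ** 2))
    rv_down = float(np.sum(np.where(r < 0, r, 0.0) ** 2))
    return rv_up, rv_down

rng = np.random.default_rng(4)
r = 0.01 * rng.standard_normal(10000)   # hypothetical daily returns
rv_up, rv_down = realized_semivariances(r)
```

The two pieces sum exactly to the realized variance, and their difference is the realized-skewness analogue the abstract builds premia on.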
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that minimizes portfolio risk while achieving the target rate of return. The mean-variance model, an optimization model that minimizes the portfolio risk as measured by the portfolio variance, has been proposed for this purpose. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results show that the optimal portfolio weights differ across stocks, and that investors can attain the target return at the minimum level of risk with the constructed mean-variance portfolio.
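The mean-variance problem described above has a closed-form solution when only the two equality constraints are imposed. A textbook numpy sketch via the KKT linear system (the three-asset numbers are hypothetical, and short selling is allowed; this is not the study's code or data):

```python
import numpy as np

def mean_variance_weights(mu, Sigma, target):
    """Solve min w'Σw subject to w'μ = target and w'1 = 1 via the
    KKT linear system (short selling allowed)."""
    n = len(mu)
    K = np.zeros((n + 2, n + 2))
    K[:n, :n] = 2 * Sigma            # gradient of the quadratic objective
    K[:n, n], K[:n, n + 1] = mu, 1.0  # Lagrange-multiplier columns
    K[n, :n], K[n + 1, :n] = mu, 1.0  # the two equality constraints
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(K, rhs)[:n]

# Hypothetical three-asset example on a weekly-return scale.
mu = np.array([0.002, 0.004, 0.006])
Sigma = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.020, 0.004],
                  [0.001, 0.004, 0.030]])
w = mean_variance_weights(mu, Sigma, target=0.004)
```

The returned weights hit the target return exactly and carry no more variance than any other fully invested portfolio with the same expected return.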
Measuring Bulk Flows in Large Scale Surveys
Feldman, H A; Feldman, Hume A.; Watkins, Richard
1993-01-01
We follow a formalism presented by Kaiser to calculate the variance of bulk flows in large scale surveys. We apply the formalism to a mock survey of Abell clusters à la Lauer & Postman and find the variance in the expected bulk velocities in a universe with CDM, MDM and IRAS-QDOT power spectra. We calculate the velocity variance as a function of the 1-D velocity dispersion of the clusters and the size of the survey.
Simultaneous optimal estimates of fixed effects and variance components in the mixed model
WU Mixia; WANG Songgui
2004-01-01
For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.
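The ANOVA estimates of variance components mentioned above are easy to state for the balanced one-way random-effects model, including their possibly negative values noted in point (iii). A minimal numpy sketch under that simplifying model choice (the simulation numbers are illustrative, not from the paper):

```python
import numpy as np

def anova_variance_components(groups):
    """ANOVA estimators for the balanced one-way random-effects model
    y_ij = mu + a_i + e_ij: sigma_e^2 = MSE and sigma_a^2 = (MSA - MSE)/n.
    The second estimator can take negative values, as the abstract notes."""
    k = len(groups)                    # number of groups
    n = len(groups[0])                 # replicates per group (balanced)
    means = np.array([np.mean(g) for g in groups])
    grand = np.mean(means)
    msa = n * np.sum((means - grand) ** 2) / (k - 1)
    mse = sum(np.sum((np.asarray(g) - m) ** 2)
              for g, m in zip(groups, means)) / (k * (n - 1))
    return mse, (msa - mse) / n

# Simulate k groups with true sigma_a^2 = 4 and sigma_e^2 = 1.
rng = np.random.default_rng(1)
k, n = 60, 10
groups = [2.0 * rng.standard_normal() + rng.standard_normal(n) for _ in range(k)]
sigma_e2, sigma_a2 = anova_variance_components(groups)
```

With many groups the estimates land near the true values; with few groups and a small between-group component, (MSA - MSE)/n going negative is a real possibility.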
Conversations across Meaning Variance
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Eigenvalue variance bounds for covariance matrices
Dallaporta, Sandrine
2013-01-01
This work is concerned with finite range bounds on the variance of individual eigenvalues of random covariance matrices, both in the bulk and at the edge of the spectrum. In a preceding paper, the author established analogous results for Wigner matrices and stated the results for covariance matrices. They are proved in the present paper. Relying on the LUE example, which needs to be investigated first, the main bounds are extended to complex covariance matrices by means of the Tao, Vu and Wan...
Statistical inference of Minimum Rank Factor Analysis
Shapiro, A; Ten Berge, JMF
2002-01-01
For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
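The match/mismatch idea and the resampling step can be sketched in a few lines. This is my own simplified illustration of the matching-proportion statistic with a permutation null, not the full NANOVA table (function names and the example data are assumptions):

```python
import numpy as np

def match_proportion(a, b):
    """Proportion of matching nominal responses across paired trials."""
    return float(np.mean(np.asarray(a) == np.asarray(b)))

def nanova_pvalue(a, b, n_perm=2000, seed=0):
    """Resampling significance for the match proportion, echoing the
    NANOVA idea that matches play the role of sums of squares: shuffle
    one response vector to build the null distribution of matching."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a), np.asarray(b)
    observed = match_proportion(a, b)
    hits = sum(match_proportion(a, rng.permutation(b)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Identical response vectors should match far more often than chance.
a = ["yes", "no", "yes", "maybe"] * 10
p_match = nanova_pvalue(a, list(a))
```

In the full procedure the proportions of matches are attributed to factorial sources and each N ratio gets its own resampling distribution; the sketch shows only the core comparison.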
李敏; 王飞雪; 李峥嵘; 曾祥华
2012-01-01
To mitigate multipath at the monitoring (reference) stations of satellite navigation systems, a weighting criterion for antenna arrays called the Down-up-ratio Constrained Minimum Variance (DCMV) criterion is proposed in this paper. The criterion minimizes the array output power under the constraint that the down-up-ratio in the direction of the desired signal does not exceed a threshold r, so it is able to mitigate both interference and multipath. Simulation results show that it outperforms other criteria common in satellite navigation systems, such as Power Inversion, Beam Steering, and the Maximum Signal-to-Interference-plus-Noise Ratio criterion. The DCMV criterion can quantitatively control the incident multipath energy, at the cost of some loss of array gain.
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Fixed effects analysis of variance
Fisher, Lloyd; Birnbaum, Z W; Lukacs, E
1978-01-01
Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. It also covers confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis.
Statistical inference on variance components
Verdooren, L.R.
1988-01-01
In several sciences but especially in animal and plant breeding, the general mixed model with fixed and random effects plays a great role. Statistical inference on variance components means tests of hypotheses about variance components, constructing confidence intervals for them, estimating them,
On methods of estimating cosmological bulk flows
Nusser, Adi
2015-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, $\bf B$, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of $\bf B$ as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer $\bf B$ for either of these definitions, which coincide only for a constant velocity field. We focus on the Wiener Filtering (WF, Hoffman et al. 2015) and the Constrained Minimum Variance (CMV, Feldman et al. 2010) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute $\bf B$ in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer $\bf B$ directly from the observed velocities for the second definition of $\bf B$. The WF ...
Hedging with stock index futures: downside risk versus the variance
Brouwer, F.; Nat, van der M.
1995-01-01
In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to Fis...
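The two hedge ratios being contrasted above are straightforward to compute. A numpy sketch on simulated returns (the grid-search downside estimator and the return model are my own simplifications, not the paper's method):

```python
import numpy as np

def min_variance_hedge_ratio(spot, fut):
    """Classical minimum-variance hedge ratio h* = Cov(dS, dF) / Var(dF)."""
    return np.cov(spot, fut)[0, 1] / np.var(fut, ddof=1)

def semivariance(x, target=0.0):
    """Markowitz-style semivariance: mean squared shortfall below target."""
    d = np.minimum(np.asarray(x) - target, 0.0)
    return float(np.mean(d ** 2))

def min_semivariance_hedge_ratio(spot, fut):
    """Downside-risk hedge ratio by grid search over h: minimize the
    semivariance of the hedged position spot - h * fut."""
    grid = np.linspace(0.0, 2.0, 401)
    sv = [semivariance(spot - h * fut) for h in grid]
    return float(grid[int(np.argmin(sv))])

rng = np.random.default_rng(2)
fut = rng.standard_normal(50000)
spot = 0.8 * fut + 0.3 * rng.standard_normal(50000)
h_mv = min_variance_hedge_ratio(spot, fut)       # near the true beta 0.8
h_sv = min_semivariance_hedge_ratio(spot, fut)
```

For symmetric return distributions the two ratios nearly coincide; the interesting divergence the paper studies arises under skewed returns, where penalizing only the downside shifts the optimal hedge.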
Modelling volatility by variance decomposition
Amado, Cristina; Teräsvirta, Timo
on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice. The results show that the long-memory-type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance.
Revision: Variance Inflation in Regression
D. R. Jensen
2013-01-01
the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
Analysis of variance: Comfortless questions
L.V. Nedorezov
2017-01-01
In this paper the simplest variant of analysis of variance is considered. Three examples from the textbooks of Lakin (1990) and Rokitsky (1973) are re-examined. It is shown that traditional one-way ANOVA and the Kruskal-Wallis criterion can lead to unrealistic conclusions about a factor's influence on the value of a characteristic. An alternative approach to the same problem is also considered.
Analysis of Variance: Variably Complex
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
Variance based OFDM frame synchronization
Z. Fedra
2012-04-01
The paper presents a new frame synchronization scheme for OFDM systems and calculates its complexity. The scheme is based on computing the variance within a detection window. The variance is computed at two delayed times, so a modified early-late loop is used to detect the frame position. The proposed algorithm handles different OFDM parameter variants, including the guard interval and cyclic prefix, and is robust to the choice of its own parameters, which may be selected within a wide range without much influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
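The core measurement, a variance computed over a sliding detection window, can be sketched as follows. This is a simplified single-window illustration assuming a low-power guard region marks the frame start; the actual scheme additionally tracks two delayed windows with a modified early-late loop:

```python
import numpy as np

def frame_start_by_variance(rx, win):
    """Locate a frame start as the offset minimizing the variance of the
    instantaneous power inside a sliding detection window, computed in
    O(N) with cumulative sums."""
    power = np.abs(rx) ** 2
    csum = np.concatenate([[0.0], np.cumsum(power)])
    csum2 = np.concatenate([[0.0], np.cumsum(power ** 2)])
    m1 = (csum[win:] - csum[:-win]) / win        # windowed mean power
    m2 = (csum2[win:] - csum2[:-win]) / win      # windowed mean square
    return int(np.argmin(m2 - m1 ** 2))          # windowed power variance

# Synthetic burst: data, a 64-sample low-power guard region, more data.
rng = np.random.default_rng(3)
noise = lambda n: rng.standard_normal(n) + 1j * rng.standard_normal(n)
rx = np.concatenate([noise(300), 0.01 * noise(64), noise(300)])
idx = frame_start_by_variance(rx, win=64)        # near sample 300
```

The cumulative-sum form keeps the per-sample cost constant, which matters for the real-time USRP implementation the abstract mentions.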
Variance decomposition in stochastic simulators
Le Maître, O. P.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
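The orthogonal variance decomposition invoked above is the Sobol-Hoeffding decomposition; its first-order indices can be estimated with the standard pick-freeze (Saltelli-type) estimator. A minimal numpy sketch on a toy additive function, not the chemical-network setting of the paper:

```python
import numpy as np

def sobol_first_order(f, d, n=100000, seed=0):
    """First-order Sobol' indices by the pick-freeze estimator:
    S_i = Cov(f(A), f(AB_i)) / Var(f(A)), where AB_i takes column i
    from A and the rest from B. Inputs are i.i.d. uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(fA)
    S = np.empty(d)
    for i in range(d):
        AB = B.copy()
        AB[:, i] = A[:, i]              # freeze input i, resample the rest
        S[i] = np.mean(fA * (f(AB) - fB)) / var
    return S

# Additive test function: the variances add, so S_i = c_i^2 / sum(c_j^2).
c = np.array([1.0, 2.0, 3.0])
S = sobol_first_order(lambda X: X @ c, 3)
```

In the paper's setting the role of the independent inputs is played by the standardized Poisson processes driving each reaction channel, so the same estimator attributes solution variance to channels and their interactions.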
Variance-based uncertainty relations
Huang, Yichen
2010-01-01
It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces. We recover the Heisenberg uncertainty principle as a special case. By studying examples, we find that the lower bounds provided by our new uncertainty relations are optimal or near-optimal. I illustrate the uses of our new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
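The baseline every variance-based uncertainty relation must reproduce is the Robertson bound, Var(A)Var(B) ≥ |⟨[A,B]⟩|²/4. A minimal numpy check on a generic qubit state (the state and observables are my own illustrative choices):

```python
import numpy as np

def variance(op, psi):
    """Variance <A^2> - <A>^2 of observable op in normalized state psi."""
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ op @ psi).real - mean ** 2

# Pauli matrices and a generic (normalized) qubit state.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
psi = np.array([np.cos(0.3), np.exp(0.4j) * np.sin(0.3)])

# Robertson bound: Var(A) Var(B) >= |<[A, B]>|^2 / 4.
lhs = variance(sx, psi) * variance(sy, psi)
comm = sx @ sy - sy @ sx
rhs = abs(np.vdot(psi, comm @ psi)) ** 2 / 4
```

The state-independent relations proposed in the abstract tighten this kind of bound by removing the state dependence on the right-hand side; the sketch only verifies the classic inequality they generalize.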
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The use of Global Positioning System-levelling data to obtain orthometric heights has been well studied, and a simple formulation of the weighted least squares problem was presented in earlier work. This formulation directly employs errors-in-variables models that fully describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons is the incorrectness of the stochastic models used in the adjustment, which in turn leaves room for improving the stochastic models of the measurement noises. The determination of the stochastic model of the observables in a combined adjustment of heterogeneous height types is therefore the main focus of this paper. First, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights; specifically, iterative minimum norm quadratic unbiased estimation algorithms are used to estimate the variance components for each type of heterogeneous observation. Second, two different statistical models are presented to illustrate the theory: the first directly uses the errors-in-variables as a priori covariance matrices, while the second analyzes the biases of the variance components and proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in calibrating the geoid error model within a combined adjustment.
Neutrino mass without cosmic variance
LoVerde, Marilena
2016-01-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological datasets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias $b(k)$ and the linear growth parameter $f(k)$ inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on $b(k)$ and $f(k)$ continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via $b(k)$ and $f(k)$. The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high density limit, using multiple tracers allows cosmic-variance to be beaten and the forecasted errors on neutrino mass shrink dramatically. In...
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an analysis of variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can easily be generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Variance optimal stopping for geometric Levy processes
Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund
2015-01-01
The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...
Bulk Superconductors in Mobile Application
Werfel, F. N.; Floegel-Delor, U.; Rothfeld, R.; Riedel, T.; Wippich, D.; Goebel, B.; Schirrmeister, P.
We investigate and review concepts of multi-seeded REBCO bulk superconductors in mobile applications. ATZ's compact HTS bulk magnets can routinely trap 1 T at 77 K. Beyond magnetization, flux creep and hysteresis, industrial-grade properties such as compactness, power density and robustness are of major device interest when mobility and light-weight construction are in focus. For mobile applications in levitated trains or demonstrator magnets we examine the performance of on-board cryogenics based on either LN2 or a cryo-cooler. The mechanical, electrical and thermodynamic requirements of compact vacuum cryostats for Maglev train operation were studied systematically. More than 30 units have been manufactured and tested. The attractive load-to-weight ratio of more than 10 favours group module constructions carrying loads of up to 5 t on a permanent magnet (PM) track. A transportable, compact YBCO bulk magnet cooled by an in-situ 4 W Stirling cryo-cooler for 50-80 K operation is investigated. Low cooling power and an effective HTS cold mass drive the system design toward minimum thermal loss and light weight.
Vincenza Di Stefano
2009-11-01
The Multicomb variance reduction technique has been introduced into direct Monte Carlo simulation of submicrometric semiconductor devices. The method has been implemented for bulk silicon. The simulations show that the statistical variance of hot electrons is reduced at some computational cost. The method is efficient and easy to implement in existing device simulators.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We quantitatively compare the results of this model with experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effect of the number of repetitions and of the CoFeB and Pd layer thicknesses on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the magnetic domain size. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum Pd layer thickness is necessary to achieve an out-of-plane magnetization.
Generalized analysis of molecular variance.
Caroline M Nievergelt
2007-04-01
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of biological and statistical meaning to the resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
Koch, C. C.; Langdon, T. G.; Lavernia, E. J.
2017-09-01
This paper will address three topics of importance to bulk nanostructured materials. Bulk nanostructured materials are defined as bulk solids with nanoscale or partly nanoscale microstructures. This category of nanostructured materials has historical roots going back many decades but has relatively recent focus due to new discoveries of unique properties of some nanoscale materials. Bulk nanostructured materials are prepared by a variety of severe plastic deformation methods, and these will be reviewed. Powder processing to prepare bulk nanostructured materials requires that the powders be consolidated by typical combinations of pressure and temperature, the latter leading to coarsening of the microstructure. The thermal stability of nanostructured materials will also be discussed. An example of bringing nanostructured materials to applications as structural materials will be described in terms of the cryomilling of powders and their consolidation.
Influence of Family Structure on Variance Decomposition
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for... shall modify the tag, label, or other certification required by § 1010.2 to state: (1) That the...
Analysis of variance for model output
Jansen, M.J.W.
1999-01-01
A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va
The Correct Kriging Variance Estimated by Bootstrapping
den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.
2004-01-01
The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrappi
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Variance Risk Premia on Stocks and Bonds
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
is different from the equity variance risk premium. Third, the conditional correlation between stock and bond market variance risk premium switches sign often and ranges between -60% and +90%. We then show that these stylized facts pose a challenge to standard consumption-based asset pricing models....
Large area bulk superconductors
Miller, Dean J.; Field, Michael B.
2002-01-01
A bulk superconductor having a thickness of not less than about 100 microns is carried by a polycrystalline textured substrate having misorientation angles at the surface thereof not greater than about 15°; the bulk superconductor may have a thickness of not less than about 100 microns and a surface area of not less than about 50 cm². The textured substrate may have a thickness not less than about 10 microns and misorientation angles at the surface thereof not greater than about 15°. Also disclosed is a process of manufacturing the bulk superconductor and the polycrystalline biaxially textured substrate material.
Reduced K-best sphere decoding algorithm based on minimum route distance and noise variance
Xinyu Mao; Jianjun Wu; Haige Xiang
2014-01-01
This paper focuses on reducing the complexity of the K-best sphere decoding (SD) algorithm for the detection of uncoded multiple input multiple output (MIMO) systems. The proposed algorithm uses a threshold-pruning method to cut nodes whose partial Euclidean distances (PEDs) are larger than the threshold. Both the known noise value and the unknown noise value are considered to generate the threshold, which is the sum of the two values. The known noise value is the smallest PED of signals in the detected layers. The unknown noise value is generated from the noise power, the quality of service (QoS) and the signal-to-noise ratio (SNR) bound. Simulation results show that by considering both noise values, the proposed algorithm achieves an efficient complexity reduction while the performance drops little.
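The breadth-first K-best search with threshold pruning described in this abstract can be sketched for a toy real-valued MIMO system. This is an illustration only: the threshold below combines the best ("known") PED with an assumed noise-power margin term, standing in for the paper's exact QoS/SNR-based rule, and all parameters are invented.

```python
import numpy as np

def k_best_sd(y, H, alphabet, K, noise_var, margin=2.0):
    """Breadth-first K-best sphere decoding with threshold pruning.

    Illustrative sketch: the threshold adds an assumed noise-power term
    (margin * noise_var * remaining layers) to the smallest PED so far.
    """
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    survivors = [([], 0.0)]          # (symbols in detection order, PED)
    for layer in range(n - 1, -1, -1):
        candidates = []
        for syms, ped in survivors:
            for s in alphabet:
                # interference from the already-detected (deeper) layers
                interf = sum(R[layer, n - 1 - i] * syms[i]
                             for i in range(len(syms)))
                e = z[layer] - interf - R[layer, layer] * s
                candidates.append((syms + [s], ped + e * e))
        candidates.sort(key=lambda c: c[1])
        # known noise value: smallest PED among detected layers;
        # unknown noise value: assumed noise-power term for layers left
        threshold = candidates[0][1] + margin * noise_var * (layer + 1)
        survivors = [c for c in candidates if c[1] <= threshold][:K]
    best_syms, _ = survivors[0]
    return np.array(best_syms[::-1])  # undo detection order

# toy check: 2x2 identity channel, BPSK alphabet
H = np.eye(2)
x_hat = k_best_sd(np.array([1.0, -1.0]), H, [-1.0, 1.0], K=2, noise_var=0.01)
```

The pruning keeps the node count per layer at most K, and often far below K when the threshold is tight, which is where the complexity saving comes from.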
A phantom study on temporal and subband Minimum Variance adaptive beamforming
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.
2014-01-01
A BK8804 linear transducer was used to scan a wire phantom in which the wires are separated by 10 mm. Performance is then evaluated by the lateral Full-Width-at-Half-Maximum (FWHM), the Peak Sidelobe Level (PSL), and the computational load. Beamformed single-emission responses are also compared with those...
A minimum variance benchmark to measure the performance of pension funds in Mexico
Oscar V. De la Torre Torres
2015-01-01
In this article we propose the minimum variance portfolio as the weighting method for a benchmark that measures the performance of pension funds in Mexico. This portfolio was contrasted against those obtained either with the maximum Sharpe ratio or with a linear combination of both methods. This was done with three discrete-event simulations using daily data from January 2002 to May 2013. Using the Sharpe ratio, the significance test of Jensen's alpha, and the spanning test of Huberman and Kandel (1987), we found that the simulated portfolios have similar performance. Using the criteria of risk exposure, representativeness of the markets targeted for investment, and the level of rebalancing proposed by Bailey (1992), we find that the minimum variance method is preferable for measuring the performance of pension funds in Mexico.
Spatio-angular Minimum-variance Tomographic Controller for Multi-Object Adaptive Optics systems
Correia, Carlos M; Veran, Jean-Pierre; Andersen, David; Lardiere, Olivier; Bradley, Colin
2015-01-01
Multi-object astronomical adaptive-optics (MOAO) is now a mature wide-field observation mode to enlarge the adaptive-optics-corrected field in a few specific locations over tens of arc-minutes. The work-scope provided by open-loop tomography and pupil conjugation is amenable to a spatio-angular Linear-Quadratic Gaussian (SA-LQG) formulation aiming to provide enhanced correction across the field with improved performance over static reconstruction methods and less stringent computational complexity scaling laws. Starting from our previous work [1], we use stochastic time-progression models coupled to approximate sparse measurement operators to outline a suitable SA-LQG formulation capable of delivering near optimal correction. Under the spatio-angular framework the wave-fronts are never explicitly estimated in the volume, providing considerable computational savings on 10m-class telescopes and beyond. We find that for Raven, a 10m-class MOAO system with two science channels, the SA-LQG improves the limiting mag...
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Zhou, Hao
We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia...... predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance...... risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper we employ the median-variance approach to improve the portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.
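The contrast between mean-based and median-based location/risk pairs can be illustrated directly. This is a sketch of the idea only, with synthetic skewed data; the paper's full portfolio optimization over 30 Bursa Malaysia stocks is not reproduced here.

```python
import numpy as np

def mean_variance(returns):
    """Classic pair: mean return and variance about the mean."""
    mu = returns.mean()
    return mu, ((returns - mu) ** 2).mean()

def median_variance(returns):
    """Median-based pair, intended to be robust when returns are
    non-normal (illustrates the idea, not the paper's exact model)."""
    med = np.median(returns)
    return med, ((returns - med) ** 2).mean()

rng = np.random.default_rng(0)
# strongly right-skewed synthetic "returns": normality fails here
r = rng.lognormal(mean=0.0, sigma=0.8, size=10_000) - 1.0
mu, v_mu = mean_variance(r)
med, v_med = median_variance(r)
```

For skewed data the median sits well below the mean, so the two approaches can rank portfolios differently, even though dispersion about the mean is, by construction, the smallest squared-deviation measure.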
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different...... models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K_d values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
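The core claim, that the relative error of K_d is smallest when a mid-range fraction of the sorbate partitions to the sorbent, can be checked with a small Monte Carlo. The error model below (5% relative noise plus a 2%-of-C0 absolute floor on each concentration measurement) is an assumption for illustration, not the ASTM/EPA error model.

```python
import numpy as np

def kd_cv(frac_sorbed, rel_err=0.05, abs_err=0.02, n=100_000, seed=0):
    """Coefficient of variation of a batch K_d estimate (V/m = 1 assumed).

    Each measured concentration gets relative noise plus an absolute
    noise floor expressed as a fraction of C0 (assumed error model).
    """
    rng = np.random.default_rng(seed)
    c0_true, c_true = 1.0, 1.0 - frac_sorbed
    c0 = c0_true + rng.normal(0, rel_err * c0_true + abs_err, n)
    c = c_true + rng.normal(0, rel_err * c_true + abs_err, n)
    c = np.maximum(c, 1e-3)            # keep the denominator positive
    kd = (c0 - c) / c                  # K_d = (C0 - C)/C * (V/m), V/m = 1
    return kd.std() / abs(kd.mean())

cvs = {f: kd_cv(f) for f in (0.05, 0.5, 0.95)}
```

With this model, very low sorption makes K_d a small difference of two noisy numbers, and very high sorption makes the aqueous concentration too small to measure reliably; the mid-range ratio gives a markedly smaller CV than either extreme, matching the paper's design advice.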
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
78 FR 14122 - Revocation of Permanent Variances
2013-03-04
... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...
Importance Sampling Variance Reduction in GRESS ATMOSIM
Wakeford, Daniel Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-26
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....
Variance components in discrete force production tasks.
Varadhan, S K M; Zatsiorsky, Vladimir M; Latash, Mark L
2010-09-01
The study addresses the relationships between task parameters and two components of variance, "good" and "bad", during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does ("bad") and does not ("good") affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the "bad" variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. "Good" variance scaled linearly with force magnitude and did not depend on force rate. "Bad" variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic tasks was associated with
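The "good"/"bad" split described above is, geometrically, a projection of across-trial variance onto the direction that changes total force versus its null space. Below is a minimal sketch; as a simplifying assumption it works directly on finger forces, whereas the study defines the components in finger-mode (command) space.

```python
import numpy as np

def good_bad_variance(forces):
    """Partition across-trial variance of an n-finger force vector.

    'Bad' variance lies along u = (1,...,1)/sqrt(n) and changes total
    force; 'good' variance lies in the null space of the total force.
    forces: array of shape (trials, n_fingers).
    """
    n = forces.shape[1]
    u = np.ones(n) / np.sqrt(n)
    centered = forces - forces.mean(axis=0)
    along = centered @ u                          # component changing the sum
    v_bad = (along ** 2).mean()
    v_good = (centered ** 2).sum(axis=1).mean() - v_bad
    return v_good, v_bad

# synthetic trials where fingers 1 and 2 trade force: total stays constant
rng = np.random.default_rng(0)
d = rng.normal(0, 1.0, 200)
forces = np.array([2.0, 2.0, 2.0, 2.0]) + np.outer(d, [1.0, -1.0, 0.0, 0.0])
v_good, v_bad = good_bad_variance(forces)
```

Since the variance of the total force equals n times v_bad, a synergy stabilizing total force shows up as v_good much larger than v_bad, which is what the synergy index in the study quantifies.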
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
The Variance Composition of Firm Growth Rates
Luiz Artur Ledur Brito
2009-04-01
Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage, and it links this research with the resource-based view of strategy. Country was the second source of variation, with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
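The nested variance-components logic behind such a decomposition can be sketched on a balanced synthetic panel. The component shares below are invented to roughly echo the study's findings (about 40% firm, about 10% country); the study itself uses Compustat data and more careful estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
n_countries, firms_per_country, years = 10, 50, 9
# assumed true shares: firm 0.40, country 0.10, year-to-year noise 0.50
country_eff = rng.normal(0, np.sqrt(0.10), n_countries)
firm_eff = rng.normal(0, np.sqrt(0.40), (n_countries, firms_per_country))
noise = rng.normal(0, np.sqrt(0.50), (n_countries, firms_per_country, years))
growth = country_eff[:, None, None] + firm_eff[:, :, None] + noise

# exact ANOVA identity for a balanced panel: the three mean squared
# deviations below sum to the total variance of the observations
grand = growth.mean()
country_means = growth.mean(axis=(1, 2))
firm_means = growth.mean(axis=2)
v_country = ((country_means - grand) ** 2).mean()
v_firm = ((firm_means - country_means[:, None]) ** 2).mean()
v_resid = ((growth - firm_means[:, :, None]) ** 2).mean()
shares = np.array([v_country, v_firm, v_resid]) / growth.var()
```

The firm-level component dominates the country-level one by construction here; the point of the sketch is that nested group means decompose the total variance exactly in the balanced case.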
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
Cardinal, Jean; Joret, Gwenaël
2008-01-01
We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).
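The objective in this abstract can be stated in a few lines of code: orient each edge, then measure the entropy of the resulting in-degree distribution. The brute-force solver below is exponential in the number of edges and is only a ground-truth sketch for tiny graphs, not the paper's linear-time approximation algorithm.

```python
import math

def indegree_entropy(n, oriented_edges):
    """Entropy (bits) of the in-degree distribution p_i = indeg(i)/m."""
    indeg = [0] * n
    for _, v in oriented_edges:
        indeg[v] += 1
    m = len(oriented_edges)
    return -sum((d / m) * math.log2(d / m) for d in indeg if d)

def min_entropy_orientation(n, edges):
    """Brute force over all 2^|E| orientations (tiny graphs only)."""
    best = None
    for mask in range(2 ** len(edges)):
        oriented = [(e if not (mask >> i) & 1 else e[::-1])
                    for i, e in enumerate(edges)]
        h = indegree_entropy(n, oriented)
        if best is None or h < best[0]:
            best = (h, oriented)
    return best

# star K_{1,3}: orienting every edge toward the center gives entropy 0
h, orientation = min_entropy_orientation(4, [(1, 0), (2, 0), (3, 0)])
```

For the star, concentrating all in-degree on the center vertex yields a degenerate distribution and hence zero entropy, the global minimum.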
DILEMATIKA PENETAPAN UPAH MINIMUM
Pitaya
2015-02-01
In the effort to create appropriate wages for employees, it is necessary to set wages by considering the reduction of poverty without ignoring gains in productivity, the progressivity of companies, and economic growth. Since 2001, the new minimum wages at the provincial and regional/municipality levels in Indonesia have taken effect on 1 January each year. The minimum wage for the provincial level should be determined 30 days before 1 January, whereas the minimum wage for the regional/municipality level should be determined 40 days before 1 January. Moreover, there is an article which stipulates that the minimum wage will be revised annually. Considering the timing of determination and revision, it can be predicted that the periods just before and after the determination date will be critical, because controversy among the parties in industrial relations will arise. Setting the minimum wage will always be a dilemmatic step that the Government has to take: through this policy, on one side the government attempts to attract investors, while on the other side it also has to protect employees so that they receive wages appropriate to the standard of living.
Minimum quality standards and exports
2015-01-01
This paper studies the interaction of a minimum quality standard and exports in a vertical product differentiation model when firms sell global products. If the ex ante quality of foreign firms is lower (higher) than that of exporting firms, a mild minimum quality standard in the home market hinders (supports) exports. The minimum quality standard increases quality in both markets. A welfare-maximizing minimum quality standard is always lower under trade than under autarky. A minimum quali...
Haveren, van J.; Scott, E.L.; Sanders, J.P.M.
2008-01-01
Given the current robust forces driving sustainable production, and available biomass conversion technologies, biomass-based routes are expected to make a significant impact on the production of bulk chemicals within 10 years, and a huge impact within 20-30 years. In the Port of Rotterdam there is a
Auctioning Bulk Mobile Messages
S. Meij (Simon); L-F. Pau (Louis-François); H.W.G.M. van Heck (Eric)
2003-01-01
The search for enablers of continued growth of SMS traffic, as well as the take-off of the more diversified MMS message contents, opens up for enterprises the potential of bulk use of mobile messaging, instead of essentially one-by-one use. In parallel, such enterprises or value added
Discrimination of frequency variance for tonal sequences.
Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A
2014-12-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG − σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
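The ideal observer and the Weber's-Law behavior noted above can be demonstrated in simulation. This is a sketch: the same seeded draws are reused across conditions, which makes the scale invariance of the variance-comparison statistic exact here, and the stimulus parameters are invented.

```python
import numpy as np

def io_percent_correct(var_stan, ratio, n_tones=5, trials=20_000, rng=None):
    """Ideal observer for 2I-FC variance discrimination: picks the
    interval whose five-tone sample variance is larger. The roved mean
    frequency is irrelevant to this statistic, so it is omitted."""
    if rng is None:
        rng = np.random.default_rng(0)
    var_sig = var_stan * (1.0 + ratio)    # ratio = (sig - stan) / stan
    stan = rng.normal(0, np.sqrt(var_stan), (trials, n_tones))
    sig = rng.normal(0, np.sqrt(var_sig), (trials, n_tones))
    correct = sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)
    return correct.mean()

# Weber's-Law behavior: equal variance *ratios* give equal IO performance
pc_low = io_percent_correct(0.01, ratio=2.0)
pc_high = io_percent_correct(1.00, ratio=2.0)
```

Because the decision statistic depends only on the ratio of sample variances, scaling both intervals' variances by the same factor leaves the ideal observer's percent correct unchanged, which is exactly the Weber's-Law pattern in the data.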
Bulk flow of halos in ΛCDM simulation
Li, Ming; Gao, Liang; Jing, Yipeng; Yang, Xiaohu; Chi, Xuebin; Feng, Longlong; Kang, Xi; Lin, Weipeng; Shang, Guihua; Wang, Long; Zhao, Donghai; Zhang, Pengjie
2012-01-01
Analysis of the Pangu N-body simulation validates that the bulk flow of halos follows a Maxwellian distribution whose variance is consistent with the prediction of linear perturbation theory of structure formation. We propose that consistency between observed bulk velocity and theory should be examined at the effective scale, defined as the radius of the spherical top-hat window function yielding the same smoothed velocity variance in linear theory as the sample window does. We then compared some recently estimated bulk flows from observational samples with the prediction of the ΛCDM model we used; some results deviate from the expectation at the ~3σ level, but the tension is not as severe as previously claimed. We find that the bulk flow is weakly correlated with the dipole of the internal mass distribution; the alignment angle between the mass dipole and the bulk flow has a broad distribution but peaks at ~30-50°, while the bulk flow shows little dependence on the mass of the halos used for estimation. In the simulation of box size $1h^...
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
Estimating quadratic variation using realized variance
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a semimar... ... have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
Integrating Variances into an Analytical Database
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Orme, John S.; Nobbs, Steven G.
1995-01-01
The minimum fuel mode of the NASA F-15 research aircraft is designed to minimize fuel flow while maintaining constant net propulsive force (FNP), effectively reducing thrust specific fuel consumption (TSFC), during cruise flight conditions. The test maneuvers were at stabilized flight conditions. The aircraft test engine was allowed to stabilize at the cruise conditions before data collection initiated; data were then recorded with performance seeking control (PSC) not-engaged, then data were recorded with the PSC system engaged. The maneuvers were flown back-to-back to allow for direct comparisons by minimizing the effects of variations in the test day conditions. The minimum fuel mode was evaluated at subsonic and supersonic Mach numbers and focused on three altitudes: 15,000; 30,000; and 45,000 feet. Flight data were collected for part, military, partial, and maximum afterburning power conditions. The TSFC savings at supersonic Mach numbers, ranging from approximately 4% to nearly 10%, are in general much larger than at subsonic Mach numbers because of PSC trims to the afterburner.
Sources of variance in ocular microtremor.
Sheahan, N F; Coakley, D; Bolger, C; O'Neill, D; Fry, G; Phillips, J; Malone, J F
1994-02-01
This study presents a preliminary investigation of the sources of variance in the measurement of ocular microtremor frequency in a normal population. When the results from both experienced and relatively inexperienced operators are pooled, factors that contribute significantly to the total variance include the measurement procedure (p < 0.001), day-to-day variations within subjects (p < 0.001), and inter-subject differences (p < 0.01). Operator experience plays a role in determining the measurement precision: the intra-subject coefficient of variation is about 5% for a very experienced operator, and about 14% for a relatively inexperienced operator.
Bulk locality and boundary creating operators
Nakayama, Yu; Ooguri, Hirosi
2015-10-01
We formulate a minimum requirement for CFT operators to be localized in the dual AdS. In any spacetime dimensions, we show that a general solution to the requirement is a linear superposition of operators creating spherical boundaries in CFT, with the dilatation by the imaginary unit from their centers. This generalizes the recent proposal by Miyaji et al. for bulk local operators in the three dimensional AdS. We show that Ishibashi states for the global conformal symmetry in any dimensions and with the imaginary dilatation obey free field equations in AdS and that incorporating bulk interactions require their superpositions. We also comment on the recent proposals by Kabat et al., and by H. Verlinde.
Bulk Locality and Boundary Creating Operators
Nakayama, Yu
2015-01-01
We formulate a minimum requirement for CFT operators to be localized in the dual AdS. In any spacetime dimensions, we show that a general solution to the requirement is a linear superposition of operators creating spherical boundaries in CFT, with the dilatation by the imaginary unit from their centers. This generalizes the recent proposal by Miyaji et al. for bulk local operators in the three dimensional AdS. We show that Ishibashi states for the global conformal symmetry in any dimensions and with the imaginary dilatation obey free field equations in AdS and that incorporating bulk interactions require their superpositions. We also comment on the recent proposals by Kabat et al., and by H. Verlinde.
Bulk locality and boundary creating operators
Nakayama, Yu [Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, California 91125 (United States); Ooguri, Hirosi [Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, California 91125 (United States); Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583 (Japan)
2015-10-19
We formulate a minimum requirement for CFT operators to be localized in the dual AdS. In any spacetime dimensions, we show that a general solution to the requirement is a linear superposition of operators creating spherical boundaries in CFT, with the dilatation by the imaginary unit from their centers. This generalizes the recent proposal by Miyaji et al. for bulk local operators in the three dimensional AdS. We show that Ishibashi states for the global conformal symmetry in any dimensions and with the imaginary dilatation obey free field equations in AdS and that incorporating bulk interactions require their superpositions. We also comment on the recent proposals by Kabat et al., and by H. Verlinde.
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)
2013-12-15
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
The minimum jet power and equipartition
Zdziarski, Andrzej A
2014-01-01
We derive the minimum power of jets and their magnetic field strength based on their observed non-thermal synchrotron emission. The correct form of this method takes into account both the internal energy in the jet and the ion rest-mass energy associated with the bulk motion. The latter was neglected in a number of papers, which instead adopted the well-known energy-content minimization method. That method was developed for static sources, for which there is no bulk-motion component of the energy. In the case of electron power-law spectra with index >2 in ion-electron jets, the rest-mass component dominates. The minimization method for the jet power taking it into account was considered in some other work, but only based on either an assumption of a constant total synchrotron flux or a fixed range of the Lorentz factors. Instead, we base our method on an observed optically-thin synchrotron spectrum. We find the minimum jet power is independent of its radius when the rest-mass power dominates, which becomes th...
Chen, X
2001-01-01
Viscous resistance to changes in the volume of a gas arises when different degrees of freedom have different relaxation times. Collisions tend to oppose the resulting departures from equilibrium and, in so doing, generate entropy. Even for a classical gas of hard spheres, when the mean free paths or mean flight times of constituent particles are long, we find a nonvanishing bulk viscosity. Here we apply a method recently used to uncover this result for a classical rarefied gas to radiative transfer theory and derive an expression for the radiative stress tensor for a gray medium with absorption and Thomson scattering. We determine the transport coefficients through the calculation of the comoving entropy generation. When scattering dominates absorption, the bulk viscosity becomes much larger than either the shear viscosity or the thermal conductivity.
Minimum wages, earnings, and migration
Boffy-Ramirez, Ernest
2013-01-01
Does increasing a state’s minimum wage induce migration into the state? Previous literature has shown mobility in response to welfare benefit differentials across states, yet few have examined the minimum wage as a cause of mobility...
Managing product inherent variance during treatment
Verdenius, F.
1996-01-01
The natural variance of agricultural product parameters complicates recipe planning for product treatment, i.e. the process of transforming a product batch from its initial state to a prespecified final state. For a specific product P, recipes are currently composed by human experts on the basis of
The Variance of Language in Different Contexts
申一宁
2012-01-01
Language can be quite different (here referring to the meaning) in different contexts, and there are three categories of context: the culture, the situation, and the co-text. In this article, we analyze the variance of language in each of these three aspects. This article is written for the purpose of helping people better understand the meaning of a language in a specific context.
Regression calibration with heteroscedastic error variance.
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
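As an illustration of what a variance reduction technique does, the following sketch (our own, not from the chapter) compares plain Monte Carlo with antithetic variates on the toy expectation E[exp(Z)] for Z ~ N(0, 1), whose true value is exp(1/2).

```python
import math
import random
import statistics

random.seed(0)
n = 20000

# Plain Monte Carlo: n independent draws of exp(Z).
plain = [math.exp(random.gauss(0, 1)) for _ in range(n)]

# Antithetic variates: pair each draw z with -z and average the two
# function values, so the same n function evaluations come from n/2 draws.
random.seed(0)
antithetic = []
for _ in range(n // 2):
    z = random.gauss(0, 1)
    antithetic.append(0.5 * (math.exp(z) + math.exp(-z)))

# Both estimators target exp(0.5) ~ 1.6487, but the paired samples
# have a substantially smaller per-sample variance.
print(statistics.mean(plain), statistics.mean(antithetic))
print(statistics.variance(plain), statistics.variance(antithetic))
```

Antithetic variates work here because exp is monotone, making the paired function values negatively correlated; other VRTs in the chapter's family (control variates, importance sampling) exploit different structure.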
Formative Use of Intuitive Analysis of Variance
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
Linear transformations of variance/covariance matrices
Parois, P.J.A.; Lutz, M.
2011-01-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance
Decomposition of variance for spatial Cox processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
Decomposition of variance for spatial Cox processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...
40 CFR 142.43 - Disposition of a variance request.
2010-07-01
... during the period of variance shall specify interim treatment techniques, methods and equipment, and... the specified treatment technique for which the variance was granted is necessary to protect...
Adewunmi, Adrian; Byrne, Mike
2008-01-01
This paper investigates the reduction of variance associated with a simulation output performance measure, using the Sequential Sampling method with a minimum number of simulation replications, for a class of JIT (Just in Time) warehousing system called crossdocking. We initially used the Sequential Sampling method to attain a desired 95% confidence interval half width of plus/minus 0.5 for our chosen performance measure (Total usage cost, given a mean maximum level of 157,000 pounds and a mean minimum level of 149,000 pounds). From our results, we achieved a 95% confidence interval half width of plus/minus 2.8 for our chosen performance measure (Total usage cost, with an average mean value of 115,000 pounds). However, the Sequential Sampling method requires a very large number of simulation replications to reduce the variance of our simulation output value to the target level. Arena (version 11) simulation software was used to conduct this study.
Kavitha, Telikepalli; Nimbhorkar, Prajakta
2010-01-01
We consider an extension of the popular matching problem in this paper. The input to the popular matching problem is a bipartite graph G = (A ∪ B, E), where A is a set of people, B is a set of items, and each person a in A ranks a subset of items in order of preference, with ties allowed. The popular matching problem seeks to compute a matching M* between people and items such that there is no matching M where more people are happier with M than with M*. Such a matching M* is called a popular matching. However, there are simple instances where no popular matching exists. Here we consider the following natural extension to the above problem: associated with each item b in B is a non-negative price cost(b); that is, for any item b, new copies of b can be added to the input graph by paying an amount of cost(b) per copy. When G does not admit a popular matching, the problem is to "augment" G at minimum cost such that the new graph admits a popular matching. We show that this problem is...
Bias-variance decomposition in Genetic Programming
Kowaliw Taras
2016-01-01
We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the variance between runs is primarily due to initialization rather than the selection of training samples, (b) parameters can be reasonably optimized to obtain gains in efficacy, and (c) functions detrimental to evolvability are easily eliminated, while functions well-suited to the problem can greatly improve performance; therefore, larger and more diverse function sets are always preferable.
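The bias-variance decomposition used in this kind of study can be illustrated generically. The sketch below is our own toy setup, not LGP: it estimates squared bias and variance at a single test point by resampling training sets, for a high-bias constant regressor and a high-variance 1-nearest-neighbour regressor on a quadratic target.

```python
import random
import statistics

def true_f(x):
    return x * x

def sample_training_set(n, noise=0.1):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [true_f(x) + random.gauss(0, noise) for x in xs]
    return xs, ys

def decompose(predict, x0, runs=2000, n=20):
    """Monte Carlo estimate of (bias^2, variance) of predict at x0."""
    preds = []
    for _ in range(runs):
        xs, ys = sample_training_set(n)
        preds.append(predict(xs, ys, x0))
    bias_sq = (statistics.mean(preds) - true_f(x0)) ** 2
    variance = statistics.pvariance(preds)
    return bias_sq, variance

def mean_model(xs, ys, x0):
    # Ignores x0 entirely: high bias, low variance.
    return statistics.mean(ys)

def nn_model(xs, ys, x0):
    # Prediction of the single nearest neighbour: low bias, higher variance.
    return ys[min(range(len(xs)), key=lambda i: abs(xs[i] - x0))]

random.seed(0)
b1, v1 = decompose(mean_model, x0=0.9)
b2, v2 = decompose(nn_model, x0=0.9)
print(b1, v1)  # constant model: large squared bias, small variance
print(b2, v2)  # 1-NN: small squared bias, larger variance
```

The same resampling recipe applies to GP runs: re-train on fresh samples (or fresh initializations), then split the spread of predictions into a systematic part and a fluctuating part.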
Realized Variance and Market Microstructure Noise
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.
Linear transformations of variance/covariance matrices.
Parois, Pascal; Lutz, Martin
2011-07-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix. For the transformation of second-rank tensors it is suggested that the 3 × 3 matrix is re-written into a 9 × 1 vector. The transformation of the corresponding variance/covariance matrix is then straightforward and easily implemented into computer software. This method is applied in the transformation of anisotropic displacement parameters, the calculation of equivalent isotropic displacement parameters, the comparison of refinements in different space-group settings and the calculation of standard uncertainties of eigenvalues.
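The propagation rule underlying this approach is C' = J C Jᵀ for a linear parameter transformation with matrix J. The 2-parameter example below is our own minimal sketch; the paper's 9 × 1 tensor vectorization follows the same rule with a 9 × 9 transformation matrix.

```python
# Propagate a variance/covariance matrix through a linear parameter
# transformation p' = J p, using C' = J C J^T.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

# Example: parameters (x, y) with covariance C, transformed to (x + y, x - y).
J = [[1.0, 1.0],
     [1.0, -1.0]]
C = [[0.04, 0.01],
     [0.01, 0.09]]

C_new = matmul(matmul(J, C), transpose(J))
print(C_new)
# The variance of x + y is C_new[0][0], which correctly includes the
# covariance term: 0.04 + 0.09 + 2 * 0.01 = 0.15.
```

This is why the full variance/covariance matrix is needed: transforming the standard uncertainties alone would miss the off-diagonal contribution.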
Variance and covariance of accumulated displacement estimates.
Bayer, Matthew; Hall, Timothy J
2013-04-01
Tracking large deformations in tissue using ultrasound can enable the reconstruction of nonlinear elastic parameters, but poses a challenge to displacement estimation algorithms. Such large deformations have to be broken up into steps, each of which contributes an estimation error to the final accumulated displacement map. The work reported here measured the error variance for single-step and accumulated displacement estimates using one-dimensional numerical simulations of ultrasound echo signals, subjected to tissue strain and electronic noise. The covariance between accumulation steps was also computed. These simulations show that errors due to electronic noise are negatively correlated between steps, and therefore accumulate slowly, whereas errors due to tissue deformation are positively correlated and accumulate quickly. For reasonably low electronic noise levels, the error variance in the accumulated displacement estimates is remarkably constant as a function of step size, but increases with the length of the tracking kernel.
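The accumulation behaviour described above follows from Var(Σdᵢ) = ΣVar(dᵢ) + 2Σᵢ<ⱼ Cov(dᵢ, dⱼ). The sketch below is our own simplification, assuming correlation only between adjacent steps, and shows how the sign of that correlation controls how fast error accumulates.

```python
# Variance of an accumulated estimate as a function of inter-step covariance.
# Negatively correlated step errors (like the electronic-noise errors here)
# partially cancel; positively correlated ones (deformation errors) add up fast.

def accumulated_variance(n_steps, step_var, rho):
    """Assumes equal step variance and adjacent-step correlation rho only."""
    cov = rho * step_var
    return n_steps * step_var + 2 * (n_steps - 1) * cov

print(accumulated_variance(10, 1.0, -0.4))  # negative correlation: slow growth
print(accumulated_variance(10, 1.0, 0.0))   # independent errors: linear growth
print(accumulated_variance(10, 1.0, 0.4))   # positive correlation: fast growth
```

With ρ = -0.4 the ten-step variance is 2.8 instead of 10, which mirrors the paper's finding that electronic-noise errors accumulate slowly.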
Social Security's special minimum benefit.
Olsen, K A; Hoffmeyer, D
Social Security's special minimum primary insurance amount (PIA) provision was enacted in 1972 to increase the adequacy of benefits for regular long-term, low-earning covered workers and their dependents or survivors. At the time, Social Security also had a regular minimum benefit provision for persons with low lifetime average earnings and their families. Concerns were rising that the low lifetime average earnings of many regular minimum beneficiaries resulted from sporadic attachment to the covered workforce rather than from low wages. The special minimum benefit was seen as a way to reward regular, low-earning workers without providing the windfalls that would have resulted from raising the regular minimum benefit to a much higher level. The regular minimum benefit was subsequently eliminated for workers reaching age 62, becoming disabled, or dying after 1981. Under current law, the special minimum benefit will phase out over time, although it is not clear from the legislative history that this was Congress's explicit intent. The phaseout results from two factors: (1) special minimum benefits are paid only if they are higher than benefits payable under the regular PIA formula, and (2) the value of the regular PIA formula, which is indexed to wages before benefit eligibility, has increased faster than that of the special minimum PIA, which is indexed to inflation. Under the Social Security Trustees' 2000 intermediate assumptions, the special minimum benefit will cease to be payable to retired workers attaining eligibility in 2013 and later. Their benefits will always be larger under the regular benefit formula. As policymakers consider Social Security solvency initiatives--particularly proposals that would reduce benefits or introduce investment risk--interest may increase in restoring some type of special minimum benefit as a targeted protection for long-term low earners. Two of the three reform proposals offered by the President's Commission to Strengthen
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
High-dimensional regression with unknown variance
Giraud, Christophe; Verzelen, Nicolas
2011-01-01
We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for more general models, including multivariate regression and nonparametric regression.
Fractional constant elasticity of variance model
Ngai Hang Chan; Chi Tim Ng
2007-01-01
This paper develops a European option pricing formula for fractional market models. Although there exist option pricing results for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained, and a volatility skew pattern is revealed.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.
Discussion on variance reduction technique for shielding
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316 type stainless steel (SS316) and on the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and variance reduction by the Weight Window method of the MCNP code proved limiting and complicated. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of fractional standard deviation (FSD) of the neutron and gamma-ray flux along the shield depth is reported. There is an optimal importance change: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)
Applications of non-parametric statistics and analysis of variance on sample variances
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to make recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, difficulties center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
Hierarchical Bulk Synchronous Parallel Model and Performance Optimization
HUANG Linpeng; SUN Yongqiang; YUAN Wei
1999-01-01
Based on the framework of BSP, a Hierarchical Bulk Synchronous Parallel (HBSP) performance model is introduced in this paper to capture the performance optimization problem for various stages in parallel program development and to accurately predict the performance of a parallel program by considering factors causing variance at local computation and global communication. The related methodology has been applied to several real applications and the results show that HBSP is a suitable model for optimizing parallel programs.
The Parabolic variance (PVAR), a wavelet variance based on least-square fit
Vernotte, F; Bourgeois, P -Y; Rubiola, E
2015-01-01
The Allan variance (AVAR) is one option among the wavelet variances. However, although a milestone in the analysis of frequency fluctuations and in the long-term stability of clocks, and certainly the most widely used one, AVAR is not suitable when fast noise processes show up, chiefly because of its poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because the wavelet spans over 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans over 2 tau, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...
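For orientation, the plain (non-overlapped) Allan variance that PVAR is compared against can be computed directly from fractional-frequency samples. The sketch below is our own and does not implement PVAR's parabolic weighting; for white frequency noise, AVAR should scale as 1/m with the averaging factor m.

```python
import random
import statistics

def allan_variance(y, m):
    """Non-overlapped AVAR at averaging factor m: 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    averages = [statistics.mean(y[i:i + m]) for i in range(0, len(y) - m + 1, m)]
    diffs = [averages[k + 1] - averages[k] for k in range(len(averages) - 1)]
    return 0.5 * statistics.mean(d * d for d in diffs)

random.seed(3)
y = [random.gauss(0.0, 1.0) for _ in range(50000)]  # white frequency noise
a1 = allan_variance(y, 1)
a10 = allan_variance(y, 10)
print(a1, a10)  # roughly 1.0 and 0.1: AVAR ~ sigma^2 / m for white FM
```

PVAR replaces the two-sample difference of averages with a least-squares (linear-regression) phase estimate over each window, which is what buys the better rejection of white phase noise discussed in the abstract.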
Benetti, Ana Raquel; Havndrup-Pedersen, Cæcilie; Honoré, Daniel;
2015-01-01
the restorative procedure. The aim of this study, therefore, was to compare the depth of cure, polymerization contraction, and gap formation in bulk-fill resin composites with those of a conventional resin composite. To achieve this, the depth of cure was assessed in accordance with the International Organization for Standardization 4049 standard, and the polymerization contraction was determined using the bonded-disc method. The gap formation was measured at the dentin margin of Class II cavities. Five bulk-fill resin composites were investigated: two high-viscosity (Tetric EvoCeram Bulk Fill, SonicFill) and three low-viscosity (x-tra base, Venus Bulk Fill, SDR) materials. Compared with the conventional resin composite, the high-viscosity bulk-fill materials exhibited only a small increase (but significant for Tetric EvoCeram Bulk Fill) in depth of cure and polymerization contraction, whereas the low-viscosity bulk...
Miller, Jacob Lee
2015-04-21
An explosive bulk charge, including: a first contact surface configured to be selectively disposed substantially adjacent to a structure or material; a second end surface configured to selectively receive a detonator; and a curvilinear side surface joining the first contact surface and the second end surface. The first contact surface, the second end surface, and the curvilinear side surface form a bi-truncated hemispherical structure. The first contact surface, the second end surface, and the curvilinear side surface are formed from an explosive material. Optionally, the first contact surface and the second end surface each have a substantially circular shape. Optionally, the first contact surface and the second end surface consist of planar structures that are aligned substantially parallel or slightly tilted with respect to one another. The curvilinear side surface has one of a smooth curved geometry, an elliptical geometry, and a parabolic geometry.
Fukushima, Keita; Kumar, Jason; Sandick, Pearl; Yamamoto, Takahiro
2014-01-01
Recent experimental results from the LHC have placed strong constraints on the masses of colored superpartners. The MSSM parameter space is also constrained by the measurement of the Higgs boson mass, and the requirement that the relic density of lightest neutralinos be consistent with observations. Although large regions of the MSSM parameter space can be excluded by these combined bounds, leptophilic versions of the MSSM can survive these constraints. In this paper we consider a scenario in which the requirements of minimal flavor violation, vanishing $CP$-violation, and mass universality are relaxed, specifically focusing on scenarios with light sleptons. We find a large region of parameter space, analogous to the original bulk region, for which the lightest neutralino is a thermal relic with an abundance consistent with that of dark matter. We find that these leptophilic models are constrained by measurements of the magnetic and electric dipole moments of the electron and muon, and that these models have ...
Creating bulk nanocrystalline metal.
Fredenburg, D. Anthony (Georgia Institute of Technology, Atlanta, GA); Saldana, Christopher J. (Purdue University, West Lafayette, IN); Gill, David D.; Hall, Aaron Christopher; Roemer, Timothy John (Ktech Corporation, Albuquerque, NM); Vogler, Tracy John; Yang, Pin
2008-10-01
Nanocrystalline and nanostructured materials offer unique microstructure-dependent properties that are superior to coarse-grained materials. These materials have been shown to have very high hardness, strength, and wear resistance. However, most current methods of producing nanostructured materials in weapons-relevant materials create powdered metal that must be consolidated into bulk form to be useful. Conventional consolidation methods are not appropriate due to the need to maintain the nanocrystalline structure. This research investigated new ways of creating nanocrystalline material, new methods of consolidating nanocrystalline material, and an analysis of these different methods of creation and consolidation to evaluate their applicability to mesoscale weapons applications where part features are often under 100 µm wide and the material's microstructure must be very small to give homogeneous properties across the feature.
Anisotropy of transport in bulk Rashba metals
Brosco, Valentina; Grimaldi, Claudio
2017-05-01
The recent experimental discovery of three-dimensional (3D) materials hosting a strong Rashba spin-orbit coupling calls for the theoretical investigation of their transport properties. Here we study the zero-temperature dc conductivity of a 3D Rashba metal in the presence of static diluted impurities. We show that, at variance with the two-dimensional case, in 3D systems, spin-orbit coupling affects dc charge transport in all density regimes. We find in particular that the effect of spin-orbit interaction strongly depends on the direction of the current, and we show that this yields strongly anisotropic transport characteristics. In the dominant spin-orbit coupling regime where only the lowest band is occupied, the conductivity anisotropy is governed entirely by the anomalous component of the renormalized current. We propose that measurements of the conductivity anisotropy in bulk Rashba metals may give a direct experimental assessment of the spin-orbit strength.
Visual SLAM Using Variance Grid Maps
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors, operating in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance...
A relation between information entropy and variance
Pandey, Biswajit
2016-01-01
We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2010-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...
Markov bridges, bisection and variance reduction
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Second, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented where the methods of stratification, importance sampling and quasi Monte Carlo are investigated.
Least squares with non-normal data: estimating experimental variance functions.
Tellinghuisen, Joel
2008-02-01
Contrary to popular belief, the method of least squares (LS) does not require that the data have normally distributed (Gaussian) error for its validity. One practically important application of LS fitting that does not involve normal data is the estimation of data variance functions (VFE) from replicate statistics. If the raw data are normal, sampling estimates s² of the variance σ² are χ² distributed. For small degrees of freedom, the χ² distribution is strongly asymmetrical, exponential in the case of three replicates, for example. Monte Carlo computations for linear variance functions demonstrate that with proper weighting, the LS variance-function parameters remain unbiased, minimum-variance estimates of the true quantities. However, the parameters are strongly non-normal, almost exponential for some parameters estimated from s² values derived from three replicates, for example. Similar LS estimates of standard deviation functions from estimated s values have a predictable and correctable bias stemming from the bias inherent in s as an estimator of σ. Because s² and s have uncertainties proportional to their magnitudes, the VFE and SDFE fits require weighting as s⁻⁴ and s⁻², respectively. However, these weights must be evaluated on the calculated functions rather than directly from the sampling estimates. The computation is thus iterative but usually converges in a few cycles, with the remaining 'weighting' bias sufficiently small as to be of no practical consequence.
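A minimal numerical sketch of the iterative scheme described in this abstract: fit a linear variance function to replicate s² values, with weights 1/v̂² refreshed from the *fitted* function each cycle. The data and model here are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def fit_variance_function(x, s2, n_iter=5):
    """Iteratively weighted LS fit of a linear variance function
    v(x) = a + b*x to replicate variance estimates s2, with the
    s^-4-style weights evaluated on the fitted function."""
    A = np.column_stack([np.ones_like(x), x])
    a, b = np.polyfit(x, s2, 1)[::-1]          # unweighted start: [intercept, slope]
    for _ in range(n_iter):
        v = A @ np.array([a, b])               # current fitted variances
        w = 1.0 / np.clip(v, 1e-12, None) ** 2 # weights ~ 1/v^2, i.e. s^-4 on the fit
        W = np.diag(w)
        a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ s2)
    return a, b

# hypothetical replicate variances following v(x) = 1 + 2x exactly
a, b = fit_variance_function(np.array([1., 2., 3., 4., 5.]),
                             np.array([3., 5., 7., 9., 11.]))
```

With noiseless input the iteration is stationary at the true parameters; with real replicate statistics it typically converges in the "few cycles" the abstract mentions.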
Minimum signals in classical physics
邓文基; 许基桓; 刘平
2003-01-01
The bandwidth theorem for Fourier analysis on any time-dependent classical signal is shown using the operator approach to quantum mechanics. Following discussions about squeezed states in quantum optics, the problem of minimum signals presented by a single quantity and its squeezing is proposed. It is generally proved that all such minimum signals, squeezed or not, must be real Gaussian functions of time.
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Jacalyn J. Robert McComb
2006-06-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females, ages 18-24 years) underwent the following testing procedures: (a) a 7-site skinfold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg⁻¹·min⁻¹) = 56.14 - 0.92 (%BF).
Dimension reduction based on weighted variance estimate
ZHAO JunLong; XU XingZhong
2009-01-01
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes Sliced Average Variance Estimate (SAVE) as a special case. Bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension. And this selected best estimate usually performs better than the existing methods such as Sliced Inverse Regression (SIR), SAVE, etc. Many methods such as SIR, SAVE, etc. usually put the same weight on each observation to estimate central subspace (CS). By introducing a weight function, WVE puts different weights on different observations according to distance of observations from CS. The weight function makes WVE have very good performance in general and complicated situations, for example, the distribution of regressor deviating severely from elliptical distribution which is the base of many methods, such as SIR, etc. And compared with many existing methods, WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established. Simulations to compare the performances of WVE with other existing methods confirm the advantage of WVE.
Developing bulk exchange spring magnets
Mccall, Scott K.; Kuntz, Joshua D.
2017-06-27
A method of making a bulk exchange spring magnet by providing a magnetically soft material, providing a hard magnetic material, and producing a composite of said magnetically soft material and said hard magnetic material to make the bulk exchange spring magnet. The step of producing a composite of magnetically soft material and hard magnetic material is accomplished by electrophoretic deposition of the magnetically soft material and the hard magnetic material to make the bulk exchange spring magnet.
A Mean-Variance Problem in the Constant Elasticity of Variance (CEV) Model
Hou Ying-li; Liu Guo-xin; Jiang Chun-lan
2015-01-01
In this paper, we focus on a constant elasticity of variance (CEV) model and want to find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.
On Eliminating The Scrambling Variance In Scrambled Response Models
Zawar Hussain
2012-06-01
To circumvent response bias in sensitive surveys, randomized response models are being used. Adding to this line of work, we propose an improved response model utilizing both the additive and multiplicative scrambling methods. The proposed model provides greater flexibility in terms of fixing the constant K depending upon the guessed distribution of the sensitive variable and the nature of the population. The proposed model yields an unbiased estimator and is anticipated to be more protective of the privacy of the respondents. The relative efficiency of the proposed estimator is compared with that of the Hussain and Shabbir (2007) RRM. Furthermore, the proposed model itself is improved by taking two responses from each respondent and suggesting a weighted estimator, yielding an unbiased estimator having the minimum possible sampling variance. The suggested weighted estimator is unconditionally more efficient than all of the estimators suggested until now. Future research may be focused on the privacy protection provided by scrambling models. More scrambling models may be identified and improved by taking two responses from each respondent in such a way that the scrambling effect is balanced out.
Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability
Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco
We develop a joint framework linking the physical variance and its risk-neutral expectation, implying variance risk premia that are persistent, appropriately reacting to changes in the level and variability of the variance, and naturally satisfying the sign constraint. Using option market data and realized variances, our model allows us to infer the occurrence and size of extreme variance events, and to construct indicators signalling agents' sentiment towards future market conditions. Our results show that excess returns are to a large extent explained by fear or optimism towards future extreme variance...
Minimum length-maximum velocity
Panes, Boris
2012-03-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2011-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.
Power Estimation in Multivariate Analysis of Variance
Jean François Allaire
2007-09-01
Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
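The three-step procedure in this abstract (critical F, noncentrality, noncentral-F tail) can be sketched directly with SciPy. The degrees of freedom and noncentrality below are illustrative placeholders for a concrete design, not values from the paper.

```python
from scipy.stats import f, ncf

def power_from_f_approx(alpha, df1, df2, noncentrality):
    """Approximate MANOVA power: critical value from the central F,
    power from the corresponding noncentral F distribution."""
    f_crit = f.ppf(1 - alpha, df1, df2)          # step 1: critical F value
    return 1 - ncf.cdf(f_crit, df1, df2, noncentrality)  # step 3: power

# hypothetical design: df1 = 3, df2 = 40
p_null = power_from_f_approx(0.05, 3, 40, 0.0)   # zero effect: power = alpha
p_small = power_from_f_approx(0.05, 3, 40, 1.0)
p_large = power_from_f_approx(0.05, 3, 40, 10.0)
```

With zero noncentrality the noncentral F reduces to the central F, so the "power" equals the significance level, which is a useful sanity check on the implementation.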
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Tauchen, George; Zhou, Hao
Motivated by the implications from a stylized self-contained general equilibrium model incorporating the effects of time-varying economic uncertainty, we show that the difference between implied and realized variation, or the variance risk premium, is able to explain a non-trivial fraction of the time-series variation in post-1990 aggregate stock market returns, with high (low) premia predicting high (low) future returns. Our empirical results depend crucially on the use of "model-free," as opposed to Black-Scholes, options implied volatilities, along with accurate realized variation measures constructed from high-frequency intraday, as opposed to daily, data. The magnitude of the predictability is particularly strong at the intermediate quarterly return horizon, where it dominates that afforded by other popular predictor variables, like the P/E ratio, the default spread, and the consumption...
Alkhudhairy, Fahad; Vohra, Fahim
2016-01-01
To assess the compressive strength of different dual-cure bulk-fill composites and the effect of time elapsed after photoactivation. Seventy-two disc-shaped (4 × 10 mm) specimens were prepared from three dual-cure bulk-fill materials: ZirconCore (ZC) (n=24), MultiCore Flow (MC) (n=24) and LuxaCore Dual (LC) (n=24). Half of the specimens of each material were tested for failure loads one hour after photopolymerization [MC1 (n=12), LC1 (n=12) and ZC1 (n=12)] and the other half after 7 days [MC7 (n=12), LC7 (n=12), ZC7 (n=12)], using a universal testing machine at a cross-head speed of 0.5 cm/minute. Compressive strength was calculated using the formula UCS = 4F/(πd²). Compressive strengths among different groups were compared using analysis of variance (ANOVA) and Tukey's multiple comparisons test. Maximum and minimum compressive strengths were observed in the ZC7 (344.14±19.22) and LC1 (202.80±15.52) groups. Specimens in LC1 [202.80 (15.52)] showed significantly lower compressive strength than MC1 [287.06 (15.03)], and ZC7 showed significantly higher compressive strengths than LC7 [324.56 (19.47)] and MC7 [315.26 (12.36)]. ZC showed higher compressive strength than MC and LC. Increasing the post-photoactivation duration (from one hour to 7 days) significantly improved the compressive strengths of the dual-cure bulk-fill materials.
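The strength formula quoted in this abstract, UCS = 4F/(πd²), is a one-liner in code; the unit convention (load in newtons, diameter in millimetres, result in MPa) is an assumption added here for concreteness.

```python
import math

def compressive_strength(failure_load_n, diameter_mm):
    """UCS = 4F / (pi * d**2): failure load divided by the
    cross-sectional area of a cylindrical/disc specimen.
    With N and mm the result is in MPa."""
    return 4.0 * failure_load_n / (math.pi * diameter_mm ** 2)
```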
Brane Couplings from Bulk Loops
Georgi, Howard; Grant, Aaron K.; Hailu, Girma
2000-01-01
We compute loop corrections to the effective action of a field theory on a five-dimensional $S^1/Z_2$ orbifold. We find that the quantum loop effects of interactions in the bulk produce infinite contributions that require renormalization by four-dimensional couplings on the orbifold fixed planes. Thus bulk couplings give rise to renormalization group running of brane couplings.
Can bulk viscosity drive inflation
Pacher, T.; Stein-Schabes, J.A.; Turner, M.S.
1987-09-15
Contrary to other claims, we argue that bulk viscosity associated with the interactions of nonrelativistic particles with relativistic particles around the time of the grand unified theory (GUT) phase transition cannot lead to inflation. Simply put, the key ingredient for inflation, negative pressure, cannot arise due to the bulk-viscosity effects of a weakly interacting mixture of relativistic and nonrelativistic particles.
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to discriminate between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
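For context on the family of variances this abstract compares, here is the standard overlapping Allan variance computed from phase data (the PVAR estimator itself involves the linear-regression weighting detailed in the companion article, and is not reproduced here):

```python
import numpy as np

def avar_phase(x, tau0, m):
    """Overlapping Allan variance from phase data x (seconds),
    sampling interval tau0, averaging factor m (tau = m * tau0):
    AVAR(tau) = < (x[k+2m] - 2 x[k+m] + x[k])^2 > / (2 tau^2)."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second differences at lag m
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)
```

A constant frequency offset (linear phase) gives zero AVAR, while a linear frequency drift (quadratic phase) gives a constant second difference, which makes the estimator easy to sanity-check.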
Cook, Philip
2013-01-01
A minimum voting age is defended as the most effective and least disrespectful means of ensuring that all members of an electorate are sufficiently competent to vote. Whilst it may be reasonable to require competency from voters, a minimum voting age should be rejected because its view of competence is unreasonably controversial, because it is incapable of defining a clear threshold of sufficiency, and because an alternative test is available which treats children more respectfully. This alternative is a procedura...
Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River
Whitcher, B.; Byers, S. D.; Guttorp, P.; Percival, D. B.
2002-05-01
We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic on five time series derived from the historical record of Nile River yearly minimum water levels covering 622-1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622-1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.
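A minimal sketch of the normalized cumulative sum of squares statistic used in this abstract, applied directly to a series rather than to wavelet coefficients; the names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def css_change_point(x):
    """Normalized cumulative sum of squares: under homogeneous
    variance, P_k = sum(x_1..k ^2) / sum(x^2) grows like k/n.
    Returns the maximum deviation D and the index where it occurs,
    which estimates the time of the variance change."""
    x = np.asarray(x, dtype=float)
    n = x.size
    P = np.cumsum(x ** 2) / np.sum(x ** 2)
    dev = np.abs(P - np.arange(1, n + 1) / n)
    k = int(np.argmax(dev))
    return dev[k], k
```

In practice (as in the Nile analysis) this statistic is evaluated scale by scale on the discrete wavelet transform so that long memory does not masquerade as a variance change.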
Genomic variance estimates: With or without disequilibrium covariances?
Lehermeier, C; de Los Campos, G; Wimmer, V; Schön, C-C
2017-06-01
Whole-genome regression methods are often used for estimating genomic heritability: the proportion of phenotypic variance that can be explained by regression on marker genotypes. Recently, there has been an intensive debate on whether and how to account for the contribution of linkage disequilibrium (LD) to genomic variance. Here, we investigate two different methods for genomic variance estimation that differ in their ability to account for LD. By analysing flowering time in a data set on 1,057 fully sequenced Arabidopsis lines with strong evidence for diversifying selection, we observed a large contribution of covariances between quantitative trait loci (QTL) to the genomic variance. The classical estimate of genomic variance that ignores covariances underestimated the genomic variance in the data. The second method accounts for LD explicitly and leads to genomic variance estimates that when added to error variance estimates match the sample variance of phenotypes. This method also allows estimating the covariance between sets of markers when partitioning the genome into subunits. Large covariance estimates between the five Arabidopsis chromosomes indicated that the population structure in the data led to strong LD also between physically unlinked QTL. By consecutively removing population structure from the phenotypic variance using principal component analysis, we show how population structure affects the magnitude of LD contribution and the genomic variance estimates obtained with the two methods. © 2017 Blackwell Verlag GmbH.
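A deliberately tiny, hypothetical illustration of the distinction this abstract draws: summing per-marker variances ignores LD, while the variance of the genomic values includes the disequilibrium covariances. Two markers in perfect positive LD with unit effects make the gap exact.

```python
import numpy as np

# two identical genotype columns = perfect linkage disequilibrium
x0 = np.array([0., 1., 2., 0., 1., 2.])
X = np.column_stack([x0, x0])
b = np.array([1.0, 1.0])                 # assumed marker effects

g = X @ b                                # genomic values
var_with_ld = g.var()                    # includes the 2*b0*b1*Var(x0) covariance term
var_no_ld = (b ** 2 * X.var(axis=0)).sum()  # classical sum, ignores covariances
```

Here the covariance term equals the sum of the marginal terms, so the LD-aware estimate is exactly twice the classical one; with negative LD or opposite-signed effects the classical sum would instead overestimate.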
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact-naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
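A minimal sketch of the recommendation in this abstract: a weighted linear calibration under the power model of variance, Var(y) = a·xᵇ, with weights equal to the reciprocal variance. All numbers (the true line and the variance-function parameters) are hypothetical.

```python
import numpy as np

# calibration signals and noiseless "measurements" for the sketch
x = np.array([1., 2., 5., 10., 20.])
y = 2.0 * x + 0.5

# assumed power-model variance function: Var(y) = a * x**b
a_var, b_var = 0.01, 2.0
w = 1.0 / (a_var * x ** b_var)           # weights = 1 / variance

# weighted least squares via the normal equations
A = np.column_stack([x, np.ones_like(x)])
W = np.diag(w)
slope, intercept = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

With heteroskedastic noise the weighting down-weights the high-signal (high-variance) points, which is where the unweighted fit would otherwise distort the intercept and the detection limit derived from it.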
Geometry of magnetosonic shocks and plane-polarized waves: Coplanarity Variance Analysis (CVA)
Scudder, J. D.
2005-02-01
Minimum Variance Analysis (MVA) is frequently used for the geometrical organization of a time series of vectors. The Coplanarity Variance Analysis (CVA) developed in this paper reproduces the layer geometry involving coplanar magnetosonic shocks or plane-polarized wave trains (including normals and coplanarity directions) about 300 times more precisely. The CVA technique exploits the eigenvalue degeneracy of the covariance matrix present at planar structures to find a consistent normal to the coplanarity plane of the fluctuations. Although Tangential Discontinuities (TDs) have a coplanarity plane, the eigenvalues of their covariance matrix are usually not degenerate; accordingly, CVA does not misdiagnose TDs as shocks or plane-polarized waves. Together, CVA and MVA may be used to sort between the hypotheses that the time series is caused by a one-dimensional current layer whose magnetic disturbances are (1) coplanar and linearly polarized (shocks/plane waves), (2) intrinsically helical (rotational/tangential discontinuities), or (3) neither 1 nor 2.
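A minimal sketch of the classical MVA step that CVA refines: eigendecompose the covariance matrix of the field time series and take the eigenvector of the smallest eigenvalue as the layer normal (this is standard MVA only, not the CVA degeneracy handling).

```python
import numpy as np

def mva_normal(B):
    """Minimum Variance Analysis: estimate the layer normal as the
    eigenvector of the field covariance matrix with the smallest
    eigenvalue (minimum-variance direction)."""
    C = np.cov(np.asarray(B, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, 0]                   # sign of the normal is arbitrary
```

On a synthetic field that rotates in the x-y plane with a constant z-component, the minimum-variance direction recovers ±z exactly, which is the expected behavior for a one-dimensional layer.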
Çiğdem Atalayın
2017-05-01
Full Text Available Objective: The aim was to evaluate the effects on intrapulpal temperature change of two different LED light-source modes used during the polymerization of bulk-fill composite resins placed in deep cavities. Materials and Method: Human extracted mandibular molar teeth (n=5) were used to create a single-tooth model with an occlusal dentin thickness of 0.5 mm. Filtek Bulk Fill Posterior (3M ESPE) and SDR (Dentsply) were applied according to manufacturers' instructions. A conventional composite resin, Filtek Z250 (3M ESPE), was used as control. The soft and turbo modes of an LED unit (Bluephase 20i, Ivoclar Vivadent) were used for polymerization. Intrapulpal temperature changes were determined by using a device simulating pulpal blood microcirculation. For each material, the initial and maximum temperature was determined during curing. The difference between the initial and the highest temperature value was considered the maximum temperature change (Δt). The data were analyzed with two-way analysis of variance and the post-hoc Tukey test (p<0.05). Results: The turbo mode was found to cause significantly greater temperature rise than the soft mode (p<0.001; Tukey test). When the filling material was taken as the variable, the greatest temperature change was observed in the SDR, whereas the least temperature change was observed in the control (p<0.05; Tukey test). Conclusion: The polymerization of bulk-fill composite resins in the turbo mode of the LED light-source led to greater pulpal temperature rise. The materials' content and structure also affected the temperature increase. Using the soft mode of an LED light-source for the polymerization of bulk-fill composite resins in deep cavities is preferable to keep the intrapulpal temperature rise to a minimum.
Natarajan Sripriya
2004-02-01
Full Text Available Abstract Background Gene microarray technology provides the ability to study the regulation of thousands of genes simultaneously, but its potential is limited without an estimate of the statistical significance of the observed changes in gene expression. Due to the large number of genes being tested and the comparatively small number of array replicates (e.g., N = 3), standard statistical methods such as the Student's t-test fail to produce reliable results. Two other statistical approaches commonly used to improve significance estimates are a penalized t-test and a Z-test using intensity-dependent variance estimates. Results The performance of these approaches is compared using a dataset of 23 replicates, and a new implementation of the Z-test is introduced that pools together variance estimates of genes with similar minimum intensity. Significance estimates based on 3 replicate arrays are calculated using each statistical technique, and their accuracy is evaluated by comparing them to a reliable estimate based on the remaining 20 replicates. The reproducibility of each test statistic is evaluated by applying it to multiple, independent sets of 3 replicate arrays. Two implementations of a Z-test using intensity-dependent variance produce more reproducible results than two implementations of a penalized t-test. Furthermore, the minimum intensity-based Z-statistic demonstrates higher accuracy and higher or equal precision than all other statistical techniques tested. Conclusion An intensity-based variance estimation technique provides one simple, effective approach that can improve p-value estimates for differentially regulated genes derived from replicated microarray datasets. Implementations of the Z-test algorithms are available at http://vessels.bwh.harvard.edu/software/papers/bmcg2004.
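The pooling idea can be sketched generically. This is a hedged illustration, not the authors' implementation: each gene's variance is replaced by the average sample variance of the genes nearest to it in intensity, and a Z-statistic is formed from the pooled estimate. All parameter values (gene count, window size, noise model) are illustrative.

```python
import numpy as np

# Hedged sketch of an intensity-dependent variance Z-test: pool the variance
# estimates of genes with similar intensity to stabilize per-gene variances
# that are unreliable with only 3 replicates.
rng = np.random.default_rng(2)
n_genes, n_reps = 500, 3
intensity = rng.uniform(1.0, 10.0, n_genes)
sigma = 0.1 * intensity                     # noise grows with intensity
diff = rng.normal(0.0, sigma[:, None], (n_genes, n_reps))  # null log-ratios

per_gene_var = diff.var(axis=1, ddof=1)
order = np.argsort(intensity)
window = 50                                 # genes pooled per estimate
pooled_var = np.empty(n_genes)
for rank, g in enumerate(order):
    lo = max(0, min(rank - window // 2, n_genes - window))
    neighbors = order[lo:lo + window]       # genes closest in intensity
    pooled_var[g] = per_gene_var[neighbors].mean()

z = diff.mean(axis=1) / np.sqrt(pooled_var / n_reps)
frac = float(np.mean(np.abs(z) > 2.0))
print("fraction |z| > 2 under the null:", frac)
```

Because the pooled variance is averaged over many genes, the statistic behaves approximately like a standard normal even with only three replicates per gene.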
Minimum Q Electrically Small Antennas
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions for the stored energies obtained through the vector spherical wave theory, it is shown that a magnetic-coated metal core reduces the internal stored energy of both TM1m and TE1m modes simultaneously, so that a self-resonant antenna with the Q approaching the fundamental minimum is created. Numerical results for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits the Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.
Can bulk viscosity drive inflation
Pacher, T.; Stein-Schabes, J.A.; Turner, M.S.
1987-04-01
Contrary to other claims, we argue that bulk viscosity associated with the interactions of nonrelativistic particles with relativistic particles around the time of the grand unified theory (GUT) phase transition cannot lead to inflation. Simply put, the key ingredient for inflation, negative pressure, cannot arise due to the bulk viscosity effects of a weakly interacting mixture of relativistic and nonrelativistic particles. 13 refs., 1 fig.
Gene set analysis using variance component tests
2013-01-01
Background Gene set analyses have become increasingly important in genomic research, as many complex diseases are driven jointly by alterations of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. Results We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). Conclusion We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset. PMID:23806107
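The permutation step that TEGS and similar gene-set tests rely on can be sketched generically. This is a hedged illustration, not the TEGS implementation: exposure labels are permuted and a simple multivariate statistic is compared to its permutation distribution. The statistic, sample sizes, and effect sizes are all illustrative.

```python
import numpy as np

# Hedged sketch of the permutation idea behind gene-set tests: compare an
# observed multivariate statistic to its distribution under label shuffling.
rng = np.random.default_rng(6)
n, p = 40, 10                               # samples, genes in the set
x = np.repeat([0, 1], n // 2)               # exposure status
Y = rng.normal(size=(n, p))                 # expression matrix (null)
Y[x == 1, :3] += 1.5                        # first 3 genes truly shifted

def stat(labels, Y):
    # Sum of squared mean differences across the gene set (illustrative).
    return float(((Y[labels == 1].mean(0) - Y[labels == 0].mean(0))**2).sum())

obs = stat(x, Y)
perm = np.array([stat(rng.permutation(x), Y) for _ in range(999)])
p_value = (1 + np.sum(perm >= obs)) / (1 + len(perm))
print("permutation p-value:", p_value)
```

TEGS additionally models the gene-gene correlation through a working covariance matrix, which this bare sketch ignores.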
Functional analysis of variance for association studies.
Olga A Vsevolozhskaya
Full Text Available While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants only account for a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes associated with obesity.
Savaux, Vincent
2014-01-01
This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr
The determinants of the bias in Minimum Rank Factor Analysis (MRFA)
Socan, G; ten Berge, JMF; Yanai, H; Okada, A; Shigemasu, K; Kano, Y; Meulman, JJ
2003-01-01
Minimum Rank Factor Analysis (MRFA), see Ten Berge (1998) and Ten Berge and Kiers (1991), is a method of common factor analysis which yields, for any given covariance matrix Sigma, a diagonal matrix Psi of unique variances which are nonnegative and which entail a reduced covariance matrix Sigma - Psi.
Minimum Thermal Conductivity of Superlattices
Simkin, M. V.; Mahan, G. D.
2000-01-31
The phonon thermal conductivity of a multilayer is calculated for transport perpendicular to the layers. There is a crossover between particle transport for thick layers to wave transport for thin layers. The calculations show that the conductivity has a minimum value for a layer thickness somewhat smaller then the mean free path of the phonons. (c) 2000 The American Physical Society.
Minimum landing size for bream (Abramis brama)
Hal, van R.; Miller, D.C.M.
2016-01-01
In support of a decision on a minimum landing size for bream, primarily for the IJsselmeer and Markermeer, the Dutch Ministry of Economic Affairs asked IMARES to provide an overview of landing sizes for bream in other countries and, where possible, the motivation behind these
Coupling between minimum scattering antennas
Andersen, J.; Lessow, H.; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
Anatomic variance of the iliopsoas tendon.
Philippon, Marc J; Devitt, Brian M; Campbell, Kevin J; Michalski, Max P; Espinoza, Chris; Wijdicks, Coen A; Laprade, Robert F
2014-04-01
The iliopsoas tendon has been implicated as a generator of hip pain and a cause of labral injury due to impingement. Arthroscopic release of the iliopsoas tendon has become a preferred treatment for internal snapping hips. Traditionally, the iliopsoas tendon has been considered the conjoint tendon of the psoas major and iliacus muscles, although anatomic variance has been reported. The iliopsoas tendon consists of 2 discrete tendons in the majority of cases, arising from both the psoas major and iliacus muscles. Descriptive laboratory study. Fifty-three nonmatched, fresh-frozen, cadaveric hemipelvis specimens (average age, 62 years; range, 47-70 years; 29 male and 24 female) were used in this study. The iliopsoas muscle was exposed via a Smith-Petersen approach. A transverse incision across the entire iliopsoas musculotendinous unit was made at the level of the hip joint. Each distinctly identifiable tendon was recorded, and the distance from the lesser trochanter was recorded. The prevalence of a single-, double-, and triple-banded iliopsoas tendon was 28.3%, 64.2%, and 7.5%, respectively. The psoas major tendon was consistently the most medial tendinous structure, and the primary iliacus tendon was found immediately lateral to the psoas major tendon within the belly of the iliacus muscle. When present, an accessory iliacus tendon was located adjacent to the primary iliacus tendon, lateral to the primary iliacus tendon. Once considered a rare anatomic variant, the finding of ≥2 distinct tendinous components to the iliacus and psoas major muscle groups is an important discovery. It is essential to be cognizant of the possibility that more than 1 tendon may exist to ensure complete release during endoscopy. Arthroscopic release of the iliopsoas tendon is a well-accepted surgical treatment for iliopsoas impingement. The most widely used site for tendon release is at the level of the anterior hip joint. The findings of this novel cadaveric anatomy study suggest that
Gopinath, Vellore Kannan
2017-01-01
The aim of the study was to assess the microleakage of one high-viscosity conventional glass ionomer cement (GIC) and a bulk-fill composite resin, in comparison to a resin-modified GIC in Class II restorations in primary molars. Standardized Class II slot cavity preparations were prepared in exfoliating primary molars. Teeth were restored using one of the three materials tested (n = 10): SonicFill bulk-fill composite resin (SF), EQUIA Fil conventional reinforced GIC (EQF), and Vitremer resin-reinforced GIC (VT). The restorations were then subjected to thermocycling procedure (×2000 5°C-55°C 10 s/min) and soaked in 1% neutralized fuchsin solution (pH: 7.4) for 24 h at 37°C. Teeth were sectioned longitudinally in a mesiodistal direction under continuous cooling into three slabs of 1 mm thickness and studied under a stereomicroscope for dye penetration. Data were evaluated by one-way analysis of variance and the Tukey's multiple comparison test employing 95% (α = 0.05). EQF and SF showed significantly lower microleakage scores and percentage of dye penetration (%RL) when compared to VT resin-reinforced GIC (P < 0.001). SF and EQF produced the minimum microleakage when compared to VT in Class II restorations on primary molars. Fewer application procedures and reduction in treatment time in SF and EQF systems proved advantageous in pediatric dentistry.
Generalized Minimum-Variance-Portfolio Weights
N.L. Kennedy; 朱允民
2004-01-01
Portfolio weights optimization has been extensively studied in the literature of portfolio management. The commonly used method is the Lagrange multiplier; however, this approach has some limitations: the fundamental assumption in this approach is that the covariance matrix of returns is positive definite, which renders the method not applicable in general. In this paper, the authors aim to use quadratic optimization theory in obtaining generalized optimal weights, whereby the restriction on the covariance matrix is just a mere special case.
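The positive-definite special case that the Lagrange-multiplier approach handles has a closed form, which the paper's quadratic-optimization treatment generalizes to singular covariance matrices. A hedged sketch of that classical case, with an illustrative covariance matrix:

```python
import numpy as np

# Hedged sketch: classical global-minimum-variance weights for the
# positive-definite special case, w = Sigma^{-1} 1 / (1' Sigma^{-1} 1),
# which minimizes w' Sigma w subject to w' 1 = 1.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # illustrative return covariance
ones = np.ones(3)

inv_term = np.linalg.solve(Sigma, ones)  # Sigma^{-1} 1
w = inv_term / (ones @ inv_term)         # normalize so weights sum to 1

port_var = float(w @ Sigma @ w)
print("weights:", w, "portfolio variance:", port_var)
```

When Sigma is singular this formula breaks down, which is exactly the gap the paper's quadratic-optimization formulation closes.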
2012-09-01
electroencephalography (EEG) recording systems. The four systems examined, Emotiv's EPOC, Biosemi's ActiveTwo, Advanced Brain Monitoring's B-Alert X10...and Quasar's prototype represent different approaches to the problem of recording brain activity in human subjects. We found that the EPOC introduces...drift with the EPOC system is very large. A) The error between the trigger being logged by the DAQ and when it was sent is on the order of hundreds of
An Adaptive Antenna Utilizing Minimum Norm and LCMV Algorithms
M. E. Ahmed; TAN Zhanzhong
2005-01-01
This paper introduces a new structure based on the minimum norm and linearly constrained minimum variance (LCMV) algorithms to strongly suppress jammers at the Global Positioning System (GPS) receiver. The minimum norm algorithm can blindly place a deep null in the jammer direction, and it does not introduce false nulls as some other algorithms do. Combining it with the LCMV algorithm therefore gives a structure capable of adjusting the weights of the antenna array in real time to respond to signals coming from the desired directions while strongly suppressing jammers coming from other directions. Simulations were performed for fixed and moving jammers: two jammers were used, one of power -100 dBW and the other of -120 dBW. The null depths attained by minimum norm alone are 88.4 dB for the strong jammer and 45 dB for the weak one. The simulations indicate that the new structure gives deeper nulls in the jammer directions: more than 114 dB null depth for both jammers when they come from fixed directions and about 103 dB when they come from moving directions. The new structure not only improves the null depths but can also control them. In addition, it can control the antenna gain in the directions of the useful GPS signals.
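The LCMV weight computation the paper builds on can be sketched in its textbook form. This is a hedged illustration, not the paper's combined minimum-norm/LCMV structure: with a single unit-gain constraint toward the desired direction, minimizing output power automatically places a deep null on a strong jammer. Array geometry, powers, and angles are illustrative.

```python
import numpy as np

# Hedged sketch of an LCMV beamformer: w = R^{-1} C (C^H R^{-1} C)^{-1} f
# keeps unit gain toward the desired direction while minimizing output power.
rng = np.random.default_rng(3)
n_el = 8                                    # half-wavelength uniform line array

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(n_el) * np.sin(theta))

a_sig, a_jam = steering(0.0), steering(40.0)

# Sample covariance: one strong jammer plus weak white noise.
n_snap = 2000
jam = 10.0 * (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap))
noise = 0.1 * (rng.normal(size=(n_el, n_snap)) + 1j * rng.normal(size=(n_el, n_snap)))
X = np.outer(a_jam, jam) + noise
R = X @ X.conj().T / n_snap

C = a_sig[:, None]                          # one constraint: unit gain at 0 deg
f = np.array([1.0])
Rinv_C = np.linalg.solve(R, C)
w = Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)

gain_sig = float(abs(w.conj() @ a_sig))     # held at 1 by the constraint
gain_jam = float(abs(w.conj() @ a_jam))     # driven toward a deep null
print("signal gain:", gain_sig, "jammer gain:", gain_jam)
```

The paper's structure adds a blind minimum-norm stage on top of this, so the jammer direction need not be known in advance.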
40 CFR 190.11 - Variances for unusual operations.
2010-07-01
... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
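The drift insensitivity the abstract mentions is easy to demonstrate with the standard non-overlapping estimators (a hedged sketch, not the paper's matrix method): the Allan variance is built from first differences of fractional frequency, the Hadamard variance from second differences, so a linear frequency drift inflates the former and cancels exactly in the latter. Noise and drift magnitudes are illustrative.

```python
import numpy as np

# Hedged sketch: standard (non-overlapping, unit-tau) estimators showing that
# linear frequency drift inflates the Allan variance but not the Hadamard one.
def allan_var(y):
    d = np.diff(y)                 # first differences of fractional frequency
    return 0.5 * np.mean(d**2)

def hadamard_var(y):
    d = np.diff(y, n=2)            # second differences cancel linear drift
    return np.mean(d**2) / 6.0

rng = np.random.default_rng(4)
n = 10000
white = rng.normal(0.0, 1e-12, n)          # white frequency noise
drift = 1e-11 * np.arange(n)               # linear frequency drift

avar_clean = allan_var(white)
avar_drift = allan_var(white + drift)
hvar_clean = hadamard_var(white)
hvar_drift = hadamard_var(white + drift)
print("AVAR clean/drifting:", avar_clean, avar_drift)
print("HVAR clean/drifting:", hvar_clean, hvar_drift)
```

The second difference of a linear ramp is identically zero, which is why the drifting and clean Hadamard variances coincide while the Allan variance grows with the drift rate.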
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Ashton M Verdery
Full Text Available This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
RR-Interval variance of electrocardiogram for atrial fibrillation detection
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented using the variance, or spread, of the RR interval. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR interval is higher during atrial fibrillation than in the normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
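The variance-based detection idea can be sketched on synthetic data. This is a hedged illustration, not the authors' system or their clinical data: a sliding-window variance of RR intervals with an illustrative threshold separates a regular rhythm from an irregular one. All interval statistics and the threshold are assumed values.

```python
import numpy as np

# Hedged sketch: flag a rhythm as irregular when the sliding-window variance
# of its RR intervals exceeds a threshold (synthetic intervals, in seconds).
rng = np.random.default_rng(5)
rr_normal = rng.normal(0.80, 0.02, 60)     # regular rhythm
rr_af = rng.normal(0.70, 0.15, 60)         # irregular (AF-like) rhythm

def window_variance(rr, width=20):
    return np.array([np.var(rr[i:i + width])
                     for i in range(len(rr) - width + 1)])

threshold = 0.005                          # illustrative cut-off in s^2
flag_normal = bool(window_variance(rr_normal).max() > threshold)
flag_af = bool(window_variance(rr_af).min() > threshold)
print("normal flagged:", flag_normal, "| AF flagged:", flag_af)
```

In a real detector the threshold would be tuned on annotated electrocardiogram data rather than chosen by hand.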
Maldacena, Juan; Zhiboedov, Alexander
2015-01-01
We consider Lorentzian correlators of local operators. In perturbation theory, singularities occur when we can draw a position-space Landau diagram with null lines. In theories with gravity duals, we can also draw Landau diagrams in the bulk. We argue that certain singularities can arise only from bulk diagrams, not from boundary diagrams. As has been previously observed, these singularities are a clear diagnostic of bulk locality. We analyze some properties of these perturbative singularities and discuss their relation to the OPE and the dimensions of double-trace operators. In the exact nonperturbative theory, we expect no singularity at these locations. We prove this statement in 1+1 dimensions by CFT methods.
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
T.-S. Chin; Lin, C. Y.; Lee, M.C.; R.T. Huang; S. M. Huang
2009-01-01
Bulk metallic glasses (BMGs) Fe–B–Y–Nb–Cu, 2 mm in diameter, were successfully annealed to become bulk nano-crystalline alloys (BNCAs) with α-Fe crystallite 11–13 nm in size. A ‘crystallization-and-stop’ model was proposed to explain this behavior. Following this model, alloy-design criteria were elucidated and confirmed successful on another Fe-based BMG Fe–B–Si–Nb–Cu, 1 mm in diameter, with crystallite sizes 10–40 nm. It was concluded that BNCAs can be designed in general by the proposed cr...
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Kwee, R E; The ATLAS collaboration
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp-collisions to perform first measurements of charged particle densities. These measurements will help to constrain various models describing phenomenologically soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.09 < |eta| < 3.8 has been proven to select pp-collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presen...
Minimum thickness anterior porcelain restorations.
Radz, Gary M
2011-04-01
Porcelain laminate veneers (PLVs) provide the dentist and the patient with an opportunity to enhance the patient's smile in a minimally to virtually noninvasive manner. Today's PLV demonstrates excellent clinical performance and as materials and techniques have evolved, the PLV has become one of the most predictable, most esthetic, and least invasive modalities of treatment. This article explores the latest porcelain materials and their use in minimum thickness restoration.
Fingerprinting with Minimum Distance Decoding
Lin, Shih-Chun; Gamal, Hesham El
2007-01-01
This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate $1-H(0.25)\approx 0.188$ is achievable using an ...
Minimum feature size preserving decompositions
Aloupis, Greg; Demaine, Martin L; Dujmovic, Vida; Iacono, John
2009-01-01
The minimum feature size of a crossing-free straight line drawing is the minimum distance between a vertex and a non-incident edge. This quantity measures the resolution needed to display a figure or the tool size needed to mill the figure. The spread is the ratio of the diameter to the minimum feature size. While many algorithms (particularly in meshing) depend on the spread of the input, none explicitly consider finding a mesh whose spread is similar to the input. When a polygon is partitioned into smaller regions, such as triangles or quadrangles, the degradation is the ratio of original to final spread (the final spread is always greater). Here we present an algorithm to quadrangulate a simple n-gon, while achieving constant degradation. Note that although all faces have a quadrangular shape, the number of edges bounding each face may be larger. This method uses Theta(n) Steiner points and produces Theta(n) quadrangles. In fact to obtain constant degradation, Omega(n) Steiner points are required by any al...
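The minimum feature size itself is straightforward to compute by brute force for small inputs. A hedged sketch from the definition (not the paper's quadrangulation algorithm): the smallest distance between a vertex and a non-incident edge, checked over all pairs.

```python
import numpy as np

# Hedged sketch: minimum feature size of a small crossing-free drawing,
# computed directly from its definition by brute force (O(V * E) pairs).
def point_segment_dist(p, a, b):
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))   # nearest point on segment

def min_feature_size(vertices, edges):
    best = float("inf")
    for vi, p in enumerate(vertices):
        for i, j in edges:
            if vi in (i, j):               # skip edges incident to the vertex
                continue
            best = min(best, point_segment_dist(p, vertices[i], vertices[j]))
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
mfs = min_feature_size(square, edges)
print(mfs)
```

For the unit square every vertex is at distance 1 from both non-incident sides, so the minimum feature size is 1; the spread is then the diameter divided by this value.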
An Analysis of Variance Framework for Matrix Sampling.
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Gender Variance and Educational Psychology: Implications for Practice
Yavuz, Carrie
2016-01-01
The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…
Error Variance of Rasch Measurement with Logistic Ability Distributions.
Dimitrov, Dimiter M.
Exact formulas for classical error variance are provided for Rasch measurement with logistic distributions. An approximation formula with the normal ability distribution is also provided. With the proposed formulas, the additive contribution of individual items to the population error variance can be determined without knowledge of the other test…
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
Delivery Time Variance Reduction in the Military Supply Chain. Thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering, March 2010. AFIT-OR-MS-ENS-10-02. Approved for public release; distribution unlimited.
The asymptotic variance of departures in critically loaded queues
A. Al Hanbali; M.R.H. Mandjes (Michel); Y. Nazarathy (Yoni); W. Whitt
2010-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case that the system load ρ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t)/t = λ...
76 FR 78698 - Proposed Revocation of Permanent Variances
2011-12-19
... Occupational Safety and Health Administration, Proposed Revocation of Permanent Variances. AGENCY: Occupational... a short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...
Adjustment for heterogeneous variances due to days in milk and ...
ARC-IRENE
Adjustment of heterogeneous variances and a calving year effect in test-day ... Regression Test-Day Model (FRTDM), which assumes equal variances of the response variable at different .... random residual error .... records were included in the selection, while in the unadjusted data set, lactations consisting of six and more.
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can then be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.
Productive Failure in Learning the Concept of Variance
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
Time variance effects and measurement error indications for MLS measurements
Liu, Jiyuan
1999-01-01
Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.
Performance of medical students admitted via regular and admission-variance routes.
Simon, H J; Covell, J W
1975-03-01
Twenty-three medical students from socioeconomically disadvantaged backgrounds and drawn chiefly from Chicano and black racial minority groups were granted admission variances to the University of California, San Diego, School of Medicine in 1970 and 1971. This group was compared with 21 regularly admitted junior and senior medical students with respect to: specific admissions criteria (Medical College Admission Test scores, grade-point average, and college rating score); scores on Part I of the examinations of the National Board of Medical Examiners (NBME); and performance in at least two of the medicine, surgery, and pediatrics clerkships. The two populations differed markedly on admission. The usual screen would have precluded admission of all but one of the students granted variances. At the end of the second year, average NBME Part I scores again identified two distinct populations, but the average scores of both groups were clearly above the minimum passing level. The groups still differed on analysis of their aggregate performances on the clinical services, but the difference following completion of two of three major clinical clerkships has become the distinction between a "slightly above average" level of performance for the regularly admitted students and an "average" level for students admitted on variances.
Longitudinal bulk acoustic mass sensor
Hales, Jan Harry; Teva, Jordi; Boisen, Anja
2009-01-01
A polycrystalline silicon longitudinal bulk acoustic cantilever is fabricated and operated in air at 51 MHz. A mass sensitivity of 100 Hz/fg (1 fg=10(-15) g) is obtained from the preliminary experiments where a minute mass is deposited on the device by means of focused ion beam. The total noise...
Bulk viscosity and deflationary universes
Lima, J A S; Waga, I
2007-01-01
We analyze the conditions that make possible the description of entropy generation in the new inflationary model by means of a nearequilibrium process. We show that there are situations in which the bulk viscosity cannot describe particle production during the coherent field oscillations phase.
The Universe With Bulk Viscosity
[Anonymous]
2003-01-01
Exact solutions for a model with variable G, Λ and bulk viscosity are obtained. Inflationary solutions with constant (de Sitter-type) and variable energy density are found. An expanding anisotropic universe is found to isotropize during its expansion, but a static universe cannot isotropize. The gravitational constant is found to increase with time and the cosmological constant decreases with time as Λ ∝ t⁻².
Confidence Intervals of Variance Functions in Generalized Linear Model
Yong Zhou; Dao-ji Li
2006-01-01
In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when designs are fixed points and random variables respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers. Nonparametric techniques are developed in deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and we show that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed when the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to the nonparametric autoregressive time series model with heteroscedastic conditional variance.
Research on variance of subnets in network sampling
Qi Gao; Xiaoting Li; Feng Pan
2014-01-01
In recent research on network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbour (CNN) model, random network and small-world network to explore the variance in network sampling. As proved by the results, snowball sampling obtains the most variance of subnets, but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the property of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.
Utility functions predict variance and skewness risk preferences in monkeys.
Genest, Wilfried; Stauffer, William R; Schultz, Wolfram
2016-07-26
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
Ceramic veneers with minimum preparation.
da Cunha, Leonardo Fernandes; Reis, Rachelle; Santana, Lino; Romanini, Jose Carlos; Carvalho, Ricardo Marins; Furuse, Adilson Yoshio
2013-10-01
The aim of this article is to describe the possibility of improving dental esthetics with low-thickness glass ceramics without major tooth preparation for patients with small to moderate anterior dental wear and little discoloration. For this purpose, a carefully defined treatment plan and good communication between the clinician and the dental technician helped to maximize enamel preservation and offered a good treatment option. Moreover, besides restoring esthetics, the restorative treatment also improved the function of the anterior guidance. It can be concluded that the conservative use of minimum-thickness ceramic laminate veneers may provide satisfactory esthetic outcomes while preserving the dental structure.
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
Meta-analysis of ratios of sample variances.
Prendergast, Luke A; Staudte, Robert G
2016-05-20
When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equal-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement, and the validity of the approaches is reinforced by simulation studies and an application to a real data set.
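The overall meta-estimate mentioned above can be sketched as an inverse-variance-weighted average of per-study log variance ratios. The study summaries below are invented for illustration, and the large-sample approximation Var(log(s₁²/s₂²)) ≈ 2/(n₁−1) + 2/(n₂−1) is an assumption of this sketch, not necessarily the paper's exact estimator.

```python
import math

# Hypothetical per-study summaries: (sample variance arm 1, n1, sample variance arm 2, n2)
studies = [
    (4.0, 25, 3.6, 30),
    (5.1, 40, 4.8, 38),
    (2.9, 18, 3.3, 20),
]

# Per-study log ratio of sample variances and its approximate sampling variance
log_ratios = [math.log(v1 / v2) for v1, _, v2, _ in studies]
approx_vars = [2.0 / (n1 - 1) + 2.0 / (n2 - 1) for _, n1, _, n2 in studies]

# Fixed-effect (inverse-variance) pooling on the log scale
weights = [1.0 / v for v in approx_vars]
pooled_log = sum(w * lr for w, lr in zip(weights, log_ratios)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

ratio_estimate = math.exp(pooled_log)  # overall estimate of sigma1^2 / sigma2^2
```

A 95% interval on the ratio scale follows as exp(pooled_log ± 1.96·pooled_se); an interval straddling 1 suggests the equal-variances assumption is tenable.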
Solid-State Explosive Reaction for Nanoporous Bulk Thermoelectric Materials.
Zhao, Kunpeng; Duan, Haozhi; Raghavendra, Nunna; Qiu, Pengfei; Zeng, Yi; Zhang, Wenqing; Yang, Jihui; Shi, Xun; Chen, Lidong
2017-09-29
High-performance thermoelectric materials require ultralow lattice thermal conductivity, typically through either shortening the phonon mean free path or reducing the specific heat. Beyond these two approaches, a new unique, simple, yet ultrafast solid-state explosive reaction is proposed to fabricate nanoporous bulk thermoelectric materials with well-controlled pore sizes and distributions to suppress thermal conductivity. By investigating a wide variety of functional materials, general criteria for solid-state explosive reactions are built upon both thermodynamics and kinetics, and then successfully used to tailor the material's microstructure and porosity. A drastic decrease in lattice thermal conductivity, down below the minimum value of the fully densified materials, and an enhancement in the thermoelectric figure of merit are achieved in porous bulk materials. This work demonstrates that controlling materials' porosity is a very effective strategy that is easy to combine with other approaches for optimizing thermoelectric performance. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λ_z ≳ 5 km and horizontal along-track wavelengths λ_y ≈ 100–200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5–8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21–28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Variance decomposition of apolipoproteins and lipids in Danish twins
Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A
2007-01-01
OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent, twin studies have been used in bivariate or multivariate analysis to elucidate common genetic factors for two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect...
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Cosmic bulk viscosity through backreaction
Barbosa, Rodrigo M; Zimdahl, Winfried; Piattella, Oliver F
2015-01-01
We consider an effective viscous pressure as the result of a backreaction of inhomogeneities within Buchert's formalism. The use of an effective metric with a time-dependent curvature radius allows us to calculate the luminosity distance of the backreaction model. This quantity is different from its counterpart for a "conventional" spatially flat bulk viscous fluid universe. Both expressions are tested against the SNIa data of the Union2.1 sample with only marginally different results.
Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model
Leunglung Chan; Eckhard Platen
2015-01-01
This paper studies volatility derivatives such as variance swaps, volatility swaps and options on variance in the modified constant elasticity of variance model, using the benchmark approach. Analytical expressions of the pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples which are computationally feasible is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive, and these made use of information either on the variance of the estimated breeding value and the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
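The idea of competing Monte Carlo formulations can be illustrated with a toy shrinkage predictor (an invented stand-in for BLUP with heritability 0.5; none of this is the authors' actual REML machinery). Both Var(u − û) and Var(u) − Var(û) target the same prediction error variance, but they are computed from different sampled quantities:

```python
import random
import statistics

rng = random.Random(42)
h2 = 0.5      # assumed "heritability": share of phenotypic variance due to u
n = 20_000    # number of Monte Carlo samples

u = [rng.gauss(0.0, 1.0) for _ in range(n)]    # true breeding values, Var(u) = 1
y = [ui + rng.gauss(0.0, 1.0) for ui in u]     # phenotypes y = u + e, Var(e) = 1
u_hat = [h2 * yi for yi in y]                  # shrinkage predictor of u from y

# Two formulations of the prediction error variance (true value 0.5 in this setup):
pev_direct = statistics.pvariance([ui - uhi for ui, uhi in zip(u, u_hat)])  # Var(u - u_hat)
pev_implied = statistics.pvariance(u) - statistics.pvariance(u_hat)         # Var(u) - Var(u_hat)
```

The identity Var(u − û) = Var(u) − Var(û) holds here because the shrinkage predictor satisfies Cov(u, û) = Var(û), mirroring the BLUP property; the two Monte Carlo estimates converge to the same value at different rates, which is exactly the kind of comparison the study makes.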
Detecting Pulsars with Interstellar Scintillation in Variance Images
Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E
2016-01-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show th...
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
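Among the variance reduction techniques borrowed from other contexts, antithetic sampling is one of the simplest; a minimal sketch on a one-dimensional toy integrand (not an actual corrector problem) shows the per-sample variance dropping:

```python
import random
import statistics

def plain_samples(f, n, rng):
    """Independent Monte Carlo draws of f(U), U ~ Uniform(0, 1)."""
    return [f(rng.random()) for _ in range(n)]

def antithetic_samples(f, n, rng):
    """Pair each draw u with 1 - u; the pair average cancels monotone fluctuations."""
    return [0.5 * (f(u) + f(1.0 - u)) for u in (rng.random() for _ in range(n // 2))]

f = lambda u: u * u   # toy integrand on [0, 1]; the true integral is 1/3
rng = random.Random(1)
var_plain = statistics.variance(plain_samples(f, 10_000, rng))
var_antithetic = statistics.variance(antithetic_samples(f, 10_000, rng))
# For this monotone integrand the antithetic per-sample variance is far below
# the plain one (about 0.0056 vs 0.089 in theory), so far fewer configurations
# of the medium are needed for the same accuracy.
```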
40 CFR 141.4 - Variances and exemptions.
2010-07-01
... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... maintenance of the distribution system. ...
Fundamental Indexes As Proxies For Mean-Variance Efficient Portfolios
Kathleen Hodnett; Gearé Botes; Khumbudzo Daswa; Kimberly Davids; Emmanuel Che Fongwa; Candice Fortuin
2014-01-01
Mean-variance efficiency was first explained by Markowitz (1952) who derived an efficient frontier comprised of portfolios with the highest expected returns for a given level of risk borne by the investor...
TESTS FOR VARIANCE COMPONENTS IN VARYING COEFFICIENT MIXED MODELS
Zaixing Li; Yuedong Wang; Ping Wu; Wangli Xu; Lixing Zhu
2012-01-01
.... To address the question of whether a varying coefficient mixed model can be reduced to a simpler varying coefficient model, we develop one-sided tests for the null hypothesis that all the variance components are zero...
Estimating the generalized concordance correlation coefficient through variance components.
Carrasco, Josep L; Jover, Lluís
2003-12-01
The intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC) are two of the most popular measures of agreement for variables measured on a continuous scale. Here, we demonstrate that ICC and CCC are the same measure of agreement estimated in two ways: by the variance components procedure and by the moment method. We propose estimating the CCC using variance components of a mixed effects model, instead of the common method of moments. With the variance components approach, the CCC can easily be extended to more than two observers, and adjusted using confounding covariates, by incorporating them in the mixed model. A simulation study is carried out to compare the variance components approach with the moment method. The importance of adjusting by confounding covariates is illustrated with a case example.
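The moment-method estimator contrasted above is Lin's classical formula CCC = 2s_xy / (s_x² + s_y² + (x̄ − ȳ)²); a small sketch with invented rater data:

```python
import statistics

def ccc_moments(x, y):
    """Lin's concordance correlation coefficient via the method of moments."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx2 = sum((v - mx) ** 2 for v in x) / n   # biased (population) variances,
    sy2 = sum((v - my) ** 2 for v in y) / n   # as in the original moment estimator
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical paired measurements from two observers
rater1 = [10.2, 11.5, 9.8, 12.1, 10.9]
rater2 = [10.0, 11.9, 9.5, 12.4, 11.1]
ccc = ccc_moments(rater1, rater2)   # close to 1: near-perfect agreement
```

The variance components route instead fits a mixed model with subject and observer effects and assembles the same quantity from the estimated components, which is what lets the CCC extend to more than two observers and adjust for covariates.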
Variance estimation in neutron coincidence counting using the bootstrap method
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The outline of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
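The resampling loop described above is straightforward to sketch. The counts below are synthetic Poisson draws standing in for multiplicity counts, not the paper's plutonium measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_variance(sample, statistic, n_boot=2000, rng=rng):
    """Estimate the variance of `statistic` by resampling the data with
    replacement and taking the variance of the bootstrap distribution."""
    sample = np.asarray(sample)
    stats = np.array([statistic(rng.choice(sample, size=sample.size, replace=True))
                      for _ in range(n_boot)])
    return stats.var(ddof=1)

# Synthetic count data (illustrative, not real NMC measurements).
counts = rng.poisson(lam=4.0, size=500)
var_of_mean = bootstrap_variance(counts, np.mean)
print(var_of_mean)  # close to the analytic value s^2/n for the sample mean
```

For the sample mean the bootstrap answer can be checked against the analytic `s²/n`; the method's value is that the same loop works unchanged for statistics with no analytic variance formula.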
Asymmetric k-Center with Minimum Coverage
Gørtz, Inge Li
2008-01-01
In this paper we give approximation algorithms and inapproximability results for various asymmetric k-center with minimum coverage problems. In the k-center with minimum coverage problem, each center is required to serve a minimum number of clients. These problems have been studied by Lim et al. [A. Lim, B. Rodrigues, F. Wang, Z. Xu, k-center problems with minimum coverage, Theoret. Comput. Sci. 332 (1–3) (2005) 1–17] in the symmetric setting.
Dimension free and infinite variance tail estimates on Poisson space
Breton, J. C.; Houdré, C.; Privault, N.
2004-01-01
Concentration inequalities are obtained on Poisson space, for random functionals with finite or infinite variance. In particular, dimension free tail estimates and exponential integrability results are given for the Euclidean norm of vectors of independent functionals. In the finite variance case these results are applied to infinitely divisible random variables such as quadratic Wiener functionals, including L\\'evy's stochastic area and the square norm of Brownian paths. In the infinite vari...
The asymptotic variance of departures in critically loaded queues
Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.
2011-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t)/t = λ(1 − 2/π)(c_a² + …
Wavelet Variance Analysis of EEG Based on Window Function
ZHENG Yuan-zhuang; YOU Rong-yi
2014-01-01
A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, the wavelet subband entropy (WSE) of epileptic EEGs is found to be lower than that of normal EEGs.
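A minimal stand-in for per-subband wavelet variance, using a plain Haar decomposition in numpy (the paper's windowed variant and its EEG data are not reproduced; the white-noise input is purely illustrative):

```python
import numpy as np

def haar_detail_variances(signal, levels=4):
    """Variance of Haar wavelet detail coefficients at each decomposition
    level, a simple proxy for subband wavelet variance."""
    a = np.asarray(signal, float)
    variances = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation for next level
        variances.append(d.var())
    return variances

rng = np.random.default_rng(1)
x = rng.normal(size=1024)
print(haar_detail_variances(x))  # for white noise, each level is near 1
```

Because the Haar transform is orthonormal, white noise spreads its variance evenly across levels; structured signals such as EEG concentrate variance in particular subbands, which is what the paper's statistic exploits.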
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
Multiperiod mean-variance efficient portfolios with endogenous liabilities
Markus LEIPPOLD; Trojani, Fabio; Vanini, Paolo
2011-01-01
We study the optimal policies and mean-variance frontiers (MVF) of a multiperiod mean-variance optimization of assets and liabilities (AL). This makes the analysis more challenging than for a setting based on purely exogenous liabilities, in which the optimization is only performed on the assets while keeping liabilities fixed. We show that, under general conditions for the joint AL dynamics, the optimal policies and the MVF can be decomposed into an orthogonal set of basis returns using exte...
Estimating Income Variances by Probability Sampling: A Case Study
Akbar Ali Shah
2010-08-01
The main focus of the study is to estimate the variability in the income distribution of households by conducting a survey. The variances in income distribution have been calculated by probability sampling techniques; the variances are compared and the relative gains obtained. It is concluded that the income distribution has improved compared with the first Household Income and Expenditure Survey (HIES) conducted in Pakistan in 1993-94.
Testing for Causality in Variance Using Multivariate GARCH Models
Christian M. Hafner; Herwartz, Helmut
2008-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...
Testing for causality in variance using multivariate GARCH models
Hafner, Christian; Herwartz, H.
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual-based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...
Minimum Delay Moving Object Detection
Lao, Dong
2017-01-08
We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to the background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves, in an online manner. Due to the unreliability of motion estimates between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, while constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Minimum Competency Testing and the Handicapped.
Wildemuth, Barbara M.
This brief overview of minimum competency testing and disabled high school students discusses: the inclusion or exclusion of handicapped students in minimum competency testing programs; approaches to accommodating the individual needs of handicapped students; and legal issues. Surveys of states that have mandated minimum competency tests indicate…
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its effect
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted-for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
The phenome-wide distribution of genetic variance.
Blows, Mark W; Allen, Scott L; Collet, Julie M; Chenoweth, Stephen F; McGuigan, Katrina
2015-07-01
A general observation emerging from estimates of additive genetic variance in sets of functionally or developmentally related traits is that much of the genetic variance is restricted to few trait combinations as a consequence of genetic covariance among traits. While this biased distribution of genetic variance among functionally related traits is now well documented, how it translates to the broader phenome and therefore any trait combination under selection in a given environment is unknown. We show that 8,750 gene expression traits measured in adult male Drosophila serrata exhibit widespread genetic covariance among random sets of five traits, implying that pleiotropy is common. Ultimately, to understand the phenome-wide distribution of genetic variance, very large additive genetic variance-covariance matrices (G) are required to be estimated. We draw upon recent advances in matrix theory for completing high-dimensional matrices to estimate the 8,750-trait G and show that large numbers of gene expression traits genetically covary as a consequence of a single genetic factor. Using gene ontology term enrichment analysis, we show that the major axis of genetic variance among expression traits successfully identified genetic covariance among genes involved in multiple modes of transcriptional regulation. Our approach provides a practical empirical framework for the genetic analysis of high-dimensional phenome-wide trait sets and for the investigation of the extent of high-dimensional genetic constraint.
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. © 2011, The International Biometric Society.
Analytic variance estimates of Swank and Fano factors.
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-01
Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
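Both factors are simple functions of the moments of the detector-output distribution; a sketch on synthetic Poisson outputs (the paper's actual contribution, analytic variance estimates for these factors, is not re-derived here):

```python
import numpy as np

def swank_factor(outputs):
    """Swank factor I = M1^2 / (M0 * M2); for a normalized output
    distribution this is mean(x)^2 / E[x^2] = 1 / (1 + CV^2)."""
    x = np.asarray(outputs, float)
    return x.mean() ** 2 / np.mean(x ** 2)

def fano_factor(outputs):
    """Fano factor: variance of the outputs divided by their mean."""
    x = np.asarray(outputs, float)
    return x.var() / x.mean()

rng = np.random.default_rng(2)
x = rng.poisson(lam=100.0, size=20000).astype(float)
print(swank_factor(x), fano_factor(x))  # Poisson: Fano ~ 1, Swank ~ 1/(1 + 1/lam)
```

In a Monte Carlo detector simulation these two estimators would be re-evaluated as histories accumulate, and the paper's variance estimates for them serve as the stopping criterion.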
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
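The note's point can be made concrete with one arithmetic example: a prospect with zero probability of loss that any mean-variance score with a fixed trade-off weight ranks below doing nothing (all numbers are illustrative):

```python
# A prospect that pays 100 with probability 1% and 0 otherwise: no loss
# is possible, yet mean-variance scoring can reject it.
p, gain = 0.01, 100.0
mean = p * gain                      # expected value: 1.0
var = p * (1 - p) * gain ** 2        # variance: 99.0
risk_aversion = 0.05                 # hypothetical mean-variance trade-off weight
score = mean - risk_aversion * var   # mean-variance score of the prospect
print(mean, var, score)  # score = -3.95, below the sure 0's score of 0
```

The prospect stochastically dominates the status quo (it can only gain), yet its large variance drives the mean-variance score negative, which is exactly the irrationality the note proves.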
Genetic heterogeneity of residual variance in broiler chickens
Hill William G
2006-11-01
Aims were to estimate the extent of genetic heterogeneity in environmental variance. Data comprised 99 535 records of 35-day body weights from broiler chickens reared in a controlled environment. Residual variance within dam families was estimated using ASREML, after fitting fixed effects such as genetic groups and hatches, for each of 377 genetically contemporary sires with a large number of progeny (> 100 males or females each). Residual variance was computed separately for male and female offspring and, after correction for sampling, strong evidence for heterogeneity was found, the standard deviation between sires in within-family variance amounting to 15–18% of its mean. Reanalysis using log-transformed data gave similar results, and elimination of 2–3% of outlier data reduced the heterogeneity, but it was still over 10%. The correlation between estimates for males and females was low, however. The correlation between sire effects on progeny mean and residual variance for body weight was small and negative (-0.1). Using a data set bigger than any yet presented and on a trait measurable in both sexes, this study has shown evidence for heterogeneity in the residual variance, which could not be explained by segregation of major genes unless very few determined the trait.
Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard
2016-09-01
One of the major challenges in controlling the transfer of the electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify variations in bulk properties when nanotubes are used as the reinforcement material. In this study, we conducted a one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing data collected from both experienced and inexperienced operators, we found operational details that users might overlook and that resulted in variations, since the conductivity measurements of CNT thin films are very sensitive to the thickness measurements. In addition, we demonstrated how measurement issues damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity results. Based on this study, we propose a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurements less dependent on operator skill.
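One-way ANOVA as used here reduces to comparing between-operator and within-operator mean squares; a self-contained numpy sketch with hypothetical thickness readings (not the study's data):

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic of one-way ANOVA: between-group mean square divided
    by within-group mean square."""
    groups = [np.asarray(g, float) for g in groups]
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = all_data.size - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical thickness readings (um) of one CNT film by three operators.
F = one_way_anova_F([10.1, 10.3, 9.9], [10.6, 10.8, 10.5], [10.2, 10.1, 10.4])
print(F)  # well above 1: operator-to-operator differences dominate the noise
```

A large F relative to the F(2, 6) reference distribution is the formal version of the study's finding that operator practice, not the film itself, drives much of the measured variation.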
Bulk Moisture and Salinity Sensor
Nurge, Mark; Monje, Oscar; Prenger, Jessica; Catechis, John
2013-01-01
Measurement and feedback control of nutrient solutions in plant root zones is critical to the development of healthy plants in both terrestrial and reduced-gravity environments. In addition to the water content, the amount of fertilizer in the nutrient solution is important to plant health, and measuring it typically requires a separate set of sensors. A combination bulk moisture and salinity sensor has been designed, built, and tested with different nutrient solutions in several substrates. The substrates include glass beads, a clay-like substrate, and a nutrient-enriched substrate with the presence of plant roots. By measuring two key parameters, the sensor is able to monitor both the volumetric water content and salinity of the nutrient solution in bulk media. Many commercially available moisture sensors are point sensors, making localized measurements over a small volume at the point of insertion. Consequently, they are more prone to suffer from interferences with air bubbles, contact area of media, and root growth. This makes it difficult to get an accurate representation of true moisture content and distribution in the bulk media. Additionally, a network of point sensors is required, increasing the cabling, data acquisition, and calibration requirements. The sensor described here instead measures the dielectric properties of a material in the annular space of the vessel. Because the pore water in the media often has high salinity, a method to measure the media moisture content and salinity simultaneously was devised. Characterization of the frequency response for capacitance and conductance across the electrodes was completed for 2-mm glass bead media, 1- to 2-mm Turface (a clay-like medium), and 1- to 2-mm fertilized Turface with the presence of root mass. These measurements were then used to find empirical relationships among capacitance (C), the dissipation factor (D), the volumetric water content, and the pore water salinity.
Dynamic preconditioning of the September sea-ice extent minimum
Williams, James; Tremblay, Bruno; Newton, Robert; Allard, Richard
2016-04-01
There has been increased interest in seasonal forecasting of the sea-ice extent in recent years, in particular the minimum sea-ice extent. We propose a dynamical mechanism, based on winter preconditioning through first-year ice formation, that explains a significant fraction of the variance in the anomaly of the September sea-ice extent from the long-term linear trend. To this end, we use a Lagrangian trajectory model to backtrack the September sea-ice edge to any time during the previous winter and quantify the amount of sea-ice divergence along the Eurasian and Alaskan coastlines as well as the Fram Strait sea-ice export. We find that coastal divergence that occurs later in the winter (March, April and May) is highly correlated with the following September sea-ice extent minimum (r = -0.73). This is because the newly formed first-year ice will melt earlier, allowing other feedbacks (e.g. the ice-albedo feedback) to start amplifying the signal early in the melt season, when the solar input is large. We find that the winter mean Fram Strait sea-ice export anomaly is also correlated with the minimum sea-ice extent the following summer. Next we backtrack a synthetic ice edge initialized at the beginning of the melt season (June 1st) in order to develop hindcast models of the September sea-ice extent that do not rely on a priori knowledge of the minimum sea-ice extent. We find that using a multivariate regression model of the September sea-ice extent anomaly, based on coastal divergence and Fram Strait ice export as predictors, reduces the error by 41%. A hindcast model based on the mean DJFMA Arctic Oscillation index alone reduces the error by 24%.
Toughness of Bulk Metallic Glasses
Shantanu V. Madge
2015-07-01
Bulk metallic glasses (BMGs) have desirable properties such as high strength and low modulus, but their toughness can vary widely, depending on the kind of test as well as on alloy chemistry. This article reviews the types of toughness test commonly performed and the factors influencing the data obtained. It appears that even the less-tough metallic glasses are tougher than oxide glasses. Current theories describing the links between toughness and material parameters, including elastic constants and alloy chemistry (ordering in the glass), are discussed. Based on the current literature, a few important issues for further work are identified.
CMB-S4 and the hemispherical variance anomaly
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization, considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
Huang, Huan; Zheng, Jun; Zheng, Botian; Qian, Nan; Li, Haitao; Li, Jipeng; Deng, Zigang
2017-06-01
In order to clarify the correlations between the magnetic flux and the levitation force of a high-temperature superconducting (HTS) bulk, we measured the magnetic flux density on the bottom and top surfaces of a bulk superconductor while it moved vertically above a permanent magnet guideway (PMG). The levitation force of the bulk superconductor was measured simultaneously. In this study, the HTS bulk was moved down and up three times between the field-cooling position and the working position above the PMG, followed by a relaxation measurement of 300 s at the minimum-height position. During these processes, the magnetic flux density and levitation force of the bulk superconductor were recorded by a multipoint magnetic field measurement platform and a self-developed maglev measurement system, respectively. The magnetic flux density on the bottom surface reflects the induced field in the superconductor bulk, while that on the top reveals the penetrated magnetic flux. The results show that the magnetic flux density and the levitation force of the bulk superconductor are directly correlated through the inner supercurrent. In general, this work is instructive for understanding the connection among the magnetic flux density, the inner current density, and the levitation behavior of HTS bulks employed in a maglev system. Meanwhile, this magnetic flux density measurement method enriches the present experimental evaluation methods for maglev systems.
Minimum Delay Moving Object Detection
Lao, Dong
2017-05-14
This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently from the "background" motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this also leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, while constraining the false alarms, defined as declarations of detection before the object moves or incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Dose variation during solar minimum
Gussenhoven, M.S.; Mullen, E.G.; Brautigam, D.H. (Phillips Lab., Geophysics Directorate, Hanscom Air Force Base, MA (US)); Holeman, E. (Boston Univ., MA (United States). Dept. of Physics)
1991-12-01
In this paper, the authors use direct measurement of dose to show the variation in inner and outer radiation belt populations at low altitude from 1984 to 1987. This period includes the recent solar minimum that occurred in September 1986. The dose is measured behind four thicknesses of aluminum shielding and for two thresholds of energy deposition, designated HILET and LOLET. The authors calculate an average dose per day for each month of satellite operation. The authors find that the average proton (HILET) dose per day (obtained primarily in the inner belt) increased systematically from 1984 to 1987, and has a high anticorrelation with sunspot number when offset by 13 months. The average LOLET dose per day behind the thinnest shielding is produced almost entirely by outer zone electrons and varies greatly over the period of interest. If any trend can be discerned over the 4-year period, it is a decreasing one. For shielding of 1.55 g/cm² (227 mil) Al or more, the LOLET dose is complicated by contributions from >100 MeV protons and bremsstrahlung.
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
Variance-based fingerprint distance adjustment algorithm for indoor localization
Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang
2015-01-01
The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases as the RSSI mean increases, VFDA estimates the RSSI variance from the mean value of the received RSSIs and derives a correction weight from it. VFDA then adjusts the fingerprint distances with this correction weight. Besides, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value were applied in two typical real indoor environments deployed with several Wi-Fi access points: a quadrate lab room, and a long, narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
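As a rough illustration of the idea, variance-derived weights can damp the contribution of unreliable (high-variance, low-RSSI) links to the fingerprint distance. The weight form below is a hypothetical choice for illustration only; the paper derives its correction weight from the empirical RSSI mean-variance rule.

```python
import math

def vfda_distance(observed, fingerprint, eps=1e-6):
    """Variance-weighted fingerprint distance (illustrative sketch).

    observed    : list of (rssi_mean, rssi_var) per access point, measured online
    fingerprint : list of (rssi_mean, rssi_var) per access point, from the radio map

    Down-weighting high-variance (i.e. weak-signal) links follows the paper's
    stated rule; this particular weight form is an assumption.
    """
    d = 0.0
    for (m_o, v_o), (m_f, v_f) in zip(observed, fingerprint):
        w = 1.0 / (1.0 + v_o + v_f + eps)  # hypothetical correction weight
        d += w * (m_o - m_f) ** 2
    return math.sqrt(d)
```

A (weighted) k-nearest-neighbour localiser would then average the k reference points with the smallest adjusted distances.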
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
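A minimal sketch of the distinction the paper draws, assuming the standard decomposition of wind into components; the paper's own estimators, built from separately measured speed and direction statistics, differ in detail.

```python
import math

def wind_stats(speeds, dirs_deg):
    """Scalar vs. vector mean wind speed, plus longitudinal/lateral variances.

    speeds   : instantaneous wind speeds (e.g. from a cup anemometer)
    dirs_deg : wind directions in degrees (meteorological convention,
               the direction the wind blows FROM, clockwise from north)
    """
    n = len(speeds)
    u = [-s * math.sin(math.radians(d)) for s, d in zip(speeds, dirs_deg)]
    v = [-s * math.cos(math.radians(d)) for s, d in zip(speeds, dirs_deg)]
    scalar_mean = sum(speeds) / n                 # mean of |wind|
    ub, vb = sum(u) / n, sum(v) / n
    vector_mean = math.hypot(ub, vb)              # |mean wind|, always <= scalar_mean
    # Rotate into the mean-wind frame: along-wind = longitudinal,
    # cross-wind = lateral.
    theta = math.atan2(vb, ub)
    along = [ui * math.cos(theta) + vi * math.sin(theta) for ui, vi in zip(u, v)]
    cross = [-ui * math.sin(theta) + vi * math.cos(theta) for ui, vi in zip(u, v)]
    var_long = sum((a - vector_mean) ** 2 for a in along) / n
    var_lat = sum(c ** 2 for c in cross) / n      # mean cross-wind is zero
    return scalar_mean, vector_mean, var_long, var_lat
```

In weak, meandering winds the direction spread is large, so the vector mean falls well below the scalar mean and the lateral variance becomes comparable to the longitudinal one, which is the regime the paper examines.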
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least-squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects can be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model can be more beneficial than the GLM for detecting allelic interactions.
Calus, M.P.L.; Janss, L.L.G.; Veerkamp, R.F.
2006-01-01
The objective of this paper was to investigate the importance of a genotype x environment interaction (G x E) for somatic cell score (SCS) across levels of bulk milk somatic cell count (BMSCC), number of days in milk (DIM), and their interaction. Variance components were estimated with a model inclu
Dohm, Volker
2014-09-01
Thermodynamic Casimir forces of film systems in the O(n) universality classes with Dirichlet boundary conditions are studied below bulk criticality. Substantial progress is achieved in resolving the long-standing problem of describing analytically the pronounced minimum of the scaling function observed experimentally in ^{4}He films (n=2) by Garcia and Chan [Phys. Rev. Lett. 83, 1187 (1999)] and in Monte Carlo simulations for the three-dimensional Ising model (n=1) by O. Vasilyev et al. [Europhys. Lett. 80, 60009 (2007)]. Our finite-size renormalization-group approach describes the film systems as the limit of finite-slab systems with vanishing aspect ratio. This yields excellent agreement with the depth and the position of the minimum for n=1 and semiquantitative agreement with the minimum for n=2. Our theory also predicts a pronounced minimum for the n=3 Heisenberg universality class.
Handling of bulk solids theory and practice
Shamlou, P A
1990-01-01
Handling of Bulk Solids provides a comprehensive discussion of the field of solids flow and handling in the process industries. Presentation of the subject follows classical lines of separate discussions for each topic, so each chapter is self-contained and can be read on its own. Topics discussed include bulk solids flow and handling properties; pressure profiles in bulk solids storage vessels; the design of storage silos for reliable discharge of bulk materials; gravity flow of particulate materials from storage vessels; pneumatic transportation of bulk solids; and the hazards of solid-mater
QI Wen-Juan; ZHANG Peng; DENG Zi-Li
2014-01-01
This paper deals with the problem of designing robust sequential covariance intersection (SCI) fusion Kalman filter for the clustering multi-agent sensor network system with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds of noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present the two-layer SCI fusion robust steady-state Kalman filter which can reduce communication and computation burdens and save energy sources, and guarantee that the actual filtering error variances have a less-conservative upper-bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of the robust accuracy is presented and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers and the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and robust accuracy relations.
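The pairwise covariance intersection rule underlying an SCI fuser can be sketched as follows. The trace-minimising choice of the weight by grid search is an illustrative simplification; the paper's sequential multi-sensor scheme and robustness analysis build on this basic rule.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Pairwise covariance intersection (CI) fusion of two unbiased estimates
    whose cross-correlation is unknown:

        P_f^-1 = w * P1^-1 + (1 - w) * P2^-1,   0 <= w <= 1,

    with w chosen here by a simple grid search minimising trace(P_f).
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

The fused covariance produced by CI is consistent (not over-confident) for any actual cross-correlation between the inputs, which is what makes it attractive when exact cross-covariances between local estimates are unavailable.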
Lee, Kenneth K. C.; Mariampillai, Adrian; Yu, Joe X. Z.; Cadotte, David W.; Wilson, Brian C.; Standish, Beau A.; Yang, Victor X. D.
2012-01-01
Abstract: Advances in swept source laser technology continues to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second. PMID:22808428
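The SV computation itself is simple; the engineering challenge the paper addresses is performing it, together with registration, on a GPU at the full line rate. A minimal CPU sketch, assuming SV is the per-pixel interframe intensity variance:

```python
import numpy as np

def speckle_variance(frames):
    """Per-pixel interframe speckle variance of an (N, H, W) stack of
    structural OCT frames acquired at the same location.  In the paper,
    subpixel registration precedes this step to suppress bulk-motion
    decorrelation from stationary tissue.
    """
    frames = np.asarray(frames, dtype=float)
    # SV[h, w] = (1/N) * sum_i (I[i, h, w] - mean[h, w])^2
    return frames.var(axis=0)
```

With the paper's gate length of N = 4 frames per location, moving scatterers (blood) decorrelate between frames and show high variance, while static tissue shows low variance, so thresholding or direct display of the SV map highlights vasculature.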
Sensitivity to Estimation Errors in Mean-variance Models
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out the extreme cases that might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
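To make the sensitivity question concrete, one can perturb the estimated means and watch the weights move. This sketch assumes the simple fully-invested tangency-style portfolio w ∝ Σ⁻¹μ, not the paper's general efficient-frontier setting; the inputs are made-up numbers.

```python
import numpy as np

def tangency_weights(mu, Sigma):
    """Fully-invested mean-variance weights, proportional to Sigma^-1 mu and
    normalised to sum to one.  A minimal sketch for probing sensitivity; the
    paper's Lipschitz analysis covers general efficient portfolios.
    """
    w = np.linalg.solve(Sigma, mu)
    return w / w.sum()

mu = np.array([0.08, 0.10])                       # hypothetical expected returns
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])    # hypothetical covariance matrix
w0 = tangency_weights(mu, Sigma)
w1 = tangency_weights(mu + np.array([0.0, 0.005]), Sigma)  # small estimation error
```

The finite-difference ratio ||w1 - w0|| / 0.005 gives an empirical feel for the local Lipschitz constant that the paper bounds analytically.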
Expectation Values and Variance Based on Lp-Norms
George Livadiotis
2012-11-01
This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting the fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme of means characterization. Having the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible to analyze new phenomena that cannot be described under the notions of classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and Statistical Physics. Several illuminating examples are examined.
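The defining property, that the Lp mean minimises the total Lp deviation, can be checked numerically. This sketch recovers the arithmetic mean at p = 2 and tracks the median at p = 1; the ternary search relies on the objective being convex for p ≥ 1.

```python
def lp_mean(x, p, tol=1e-9):
    """Location m minimising the total Lp deviation sum_i |x_i - m|^p.
    The objective is convex for p >= 1, so a ternary search suffices."""
    lo, hi = min(x), max(x)
    f = lambda m: sum(abs(xi - m) ** p for xi in x)
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def lp_variance(x, p):
    """Mean p-th power deviation about the Lp mean (the Lp variance)."""
    m = lp_mean(x, p)
    return sum(abs(xi - m) ** p for xi in x) / len(x)
```

Varying p interpolates between outlier-sensitive (large p) and robust (p near 1) notions of location and spread, which is the flexibility the paper exploits.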
CMB-S4 and the Hemispherical Variance Anomaly
O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D
2016-01-01
Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, ΛCDM). While this is a well established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground ba...
Variance inflation in high dimensional Support Vector Machines
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors is not the full input space. Hence, when applying the model to future data the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set, and when the model is applied to test data we may find that the hidden variables follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...
Variance swap payoffs, risk premia and extreme market conditions
Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco
This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic constraint. Our approach, requiring only option-implied volatilities and daily returns for the underlying, provides measurement-error-free estimates of the part of the VRP related to normal market conditions, and allows constructing variables indicating agents' expectations under extreme market conditions. The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP, which turns out to be priced when considering Fama and French portfolios.
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For an ensemble of systems of two noninteracting particles, we find that, unlike the spectra of classical random matrices, the correlation functions are nonstationary. In the locally stationary region of the spectra, we study the number variance and the spacing distributions. The spacing distributions follow Poisson statistics, a key signature of uncorrelated spectra. The number variance varies linearly, as in the Poisson case, for short correlation lengths, but a kind of regularization occurs for large correlation lengths and the number variance approaches saturation values. These results are known in the study of integrable systems but are demonstrated here for the first time in random matrix theory. We conjecture that the interacting-particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
The positioning algorithm based on feature variance of billet character
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of the billet characters. Using a largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scene. There are three rows of characters on each steel billet, so we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
de Brito, K P S
2016-01-01
Spinor fields on 5-dimensional Lorentzian manifolds are classified, according to the geometric Fierz identities that involve their bilinear covariants. Based upon this classification that generalises the celebrated 4-dimensional Lounesto classification of spinor fields, new non-trivial classes of 5-dimensional spinor fields are, hence, found, with important potential applications regarding bulk fermions and their subsequent localisation on brane-worlds. In addition, quaternionic bilinear covariants are used to derive the quaternionic spin density, through the truncated exterior bundle. In order to accomplish a realisation of these new spinors, a Killing vector field is constructed on the horizon of 5-dimensional Kerr black holes. This Killing vector field is shown to reach the time-like Killing vector field at the spatial infinity, through a current 1-form density, constructed with the derived new spinor fields. The current density is, moreover, expressed as the fünfbein components, assuming a condensed for...
de Brito, K. P. S.; da Rocha, Roldão
2016-10-01
The spinor fields on 5-dimensional Lorentzian manifolds are classified according to the geometric Fierz identities, which involve their bilinear covariants. Based upon this classification, which generalises the celebrated 4-dimensional Lounesto classification of spinor fields, new non-trivial classes of 5-dimensional spinor fields are hence found, with important potential applications regarding bulk fermions and their subsequent localisation on brane-worlds. In addition, quaternionic bilinear covariants are used to derive the quaternionic spin density through the truncated exterior bundle. In order to accomplish the realisation of these new spinors, a Killing vector field is constructed on the horizon of a 5-dimensional Kerr black hole. This Killing vector field is shown to reach the time-like Killing vector field at spatial infinity through a current 1-form density, constructed with the new derived spinor fields. The current density is, moreover, expressed as the fünfbein component, assuming a condensed form.
Nanofluidics, from bulk to interfaces.
Bocquet, Lydéric; Charlaix, Elisabeth
2010-03-01
Nanofluidics has emerged recently in the footsteps of microfluidics, following the quest for scale reduction inherent to nanotechnologies. By definition, nanofluidics explores transport phenomena of fluids at nanometer scales. Why is the nanometer scale specific? What fluid properties are probed at nanometric scales? In other words, why does 'nanofluidics' deserve its own brand name? In this critical review, we will explore the vast manifold of length scales emerging for fluid behavior at the nanoscale, as well as the associated mechanisms and corresponding applications. We will in particular explore the interplay between bulk and interface phenomena. The limit of validity of the continuum approaches will be discussed, as well as the numerous surface induced effects occurring at these scales, from hydrodynamic slippage to the various electro-kinetic phenomena originating from the couplings between hydrodynamics and electrostatics. An enlightening analogy between ion transport in nanochannels and transport in doped semi-conductors will be discussed (156 references).
Variance squeezing and entanglement of the XX central spin model
El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)
2011-01-21
In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon depending on the value of the detuning parameter.
Recursive identification for multidimensional ARMA processes with increasing variances
CHEN Hanfu
2005-01-01
In time series analysis, almost all existing results are derived for the case where the driving noise {w_n} in the MA part has bounded variance (or conditional variance). In contrast, this paper discusses how to identify the coefficients of a multidimensional ARMA process with fixed orders in which the conditional moment E(‖w_n‖^β | F_{n-1}), β > 2, of the MA-part noise may grow at a rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimate the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimate is strongly consistent.
Levine's guide to SPSS for analysis of variance
Braver, Sanford L; Page, Melanie
2003-01-01
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi
Variance components for body weight in Japanese quails (Coturnix japonica)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by a Gibbs sampling methodology. A single Gibbs sampling chain of 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples left after elimination of 30,000 rounds as the burn-in period and 100 rounds of each thinning interval. The posterior means of the additive genetic variance components were 0.15, 4.18, 14.62, 27.18 and 32.68; the posterior means of the maternal environment variance components were 0.23, 1.29, 2.76, 4.12 and 5.16; and the posterior means of the residual variance components were 0.084, 6.43, 22.66, 31.21 and 30.85, at hatch and at 7, 14, 21 and 28 days of age, respectively. The posterior means of heritability were 0.33, 0.35, 0.36, 0.43 and 0.47 at hatch and at 7, 14, 21 and 28 days of age, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment proportion of the phenotypic variance, whose estimates were 0.50, 0.11, 0.07, 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can therefore be efficiently achieved by selection.
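The proportions reported in the abstract can be reproduced, to within rounding, directly from the quoted posterior means of the variance components, since the phenotypic variance is their sum:

```python
def variance_proportions(va, vm, ve):
    """Heritability (va/vp) and maternal-environment proportion (vm/vp),
    where the phenotypic variance vp is the sum of the additive genetic (va),
    maternal environment (vm) and residual (ve) components.
    """
    vp = va + vm + ve
    return va / vp, vm / vp

# Posterior means quoted in the abstract, at hatch and at 28 days:
h2_hatch, m2_hatch = variance_proportions(0.15, 0.23, 0.084)  # abstract reports 0.33 and 0.50
h2_28d, m2_28d = variance_proportions(32.68, 5.16, 30.85)     # abstract reports 0.47 and 0.08
```

Small differences from the published values are expected, since the paper computes posterior means of the ratios rather than ratios of posterior means.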
Asymptotic variance of grey-scale surface area estimators
Svane, Anne Marie
Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.
Precise Asymptotics of Error Variance Estimator in Partially Linear Models
Shao-jun Guo; Min Chen; Feng Liu
2008-01-01
In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ^2. Following the ideas of Allan Gut and Aurel Spataru [7, 8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known principle of least-squares. With this method the estimation of the (co)variance components is based on a linear model of observation equations. The method is flexible since it works with a user-defined we...
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
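For reference, the Allan variance at the basic averaging time, and its sliding-window "dynamic" extension, can be sketched as follows; the paper's DAVAR analysis of clock anomalies builds on this idea, though its exact estimator and normalisation may differ.

```python
def allan_variance(y):
    """Allan variance of fractional-frequency data y at the basic
    averaging time:  AVAR = (1/2) * mean((y_{k+1} - y_k)^2).
    """
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return 0.5 * sum(diffs) / len(diffs)

def davar(y, window, step=1):
    """Sliding-window ('dynamic') Allan variance: one value per window
    position, so a change in clock behaviour appears as a transient."""
    return [allan_variance(y[i:i + window])
            for i in range(0, len(y) - window + 1, step)]
```

A frequency jump, for instance, inflates the AVAR only in the windows that straddle the jump, producing a temporary rise in the DAVAR; this is exactly the kind of time-localised signature used to characterise clock anomalies.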
On Variance and Covariance for Bounded Linear Operators
Chia Shiang LIN
2001-01-01
In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we prove, in a uniform way, Bernstein-type inequalities and equalities, and show generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.
How Do Alternative Minimum Wage Variables Compare?
Sara Lemos
2005-01-01
Several minimum wage variables have been suggested in the literature. Such a variety of variables makes it difficult to compare the associated estimates across studies. One problem is that these estimates are not always calibrated to represent the effect of a 10% increase in the minimum wage. Another problem is that these estimates measure the effect of the minimum wage on the employment of different groups of workers. In this paper we critically compare employment effect estimates using five...
Minimum wages, globalization and poverty in Honduras
Gindling, T. H.; Terrell, Katherine
2008-01-01
To be competitive in the global economy, some argue that Latin American countries need to reduce or eliminate labour market regulations such as minimum wage legislation because they constrain job creation and hence increase poverty. On the other hand, minimum wage increases can have a direct positive impact on family income and may therefore help to reduce poverty. We take advantage of a complex minimum wage system in a poor country that has been exposed to the forces of globalization to test...
Tracking error with minimum guarantee constraints
Diana Barro; Elio Canestrelli
2008-01-01
In recent years the popularity of indexing has greatly increased in financial markets and many different families of products have been introduced. Often these products also have a minimum guarantee in the form of a minimum rate of return at specified dates or a minimum level of wealth at the end of the horizon. Periods of declining stock market returns together with low interest rate levels on Treasury bonds make it more difficult to meet these liabilities. We formulate a dynamic asset alloca...
Farahat F
2016-09-01
Statement of Problem: For many years, applying composite restorations in increments of less than 2 mm thickness to minimize polymerization contraction and stress has been accepted as a principle. Recent developments in dental materials, however, have introduced a group of resin-based composites (RBCs) called bulk fill, whose producers claim that a good restoration can be achieved in bulks with depths of 4 or even 5 mm. Objectives: To evaluate the effect of irradiation times and bulk depths on the degree of cure (DC) of a bulk fill composite and compare it with the universal type. Materials and Methods: This study was conducted on two groups of dental RBCs, Tetric N Ceram Bulk Fill and Tetric N Ceram Universal. The composite samples were prepared in Teflon moulds with a diameter of 5 mm and heights of 2, 4 and 6 mm. Half of the samples at each depth were cured from the upper side of the mould for 20 s by an LED light curing unit; the irradiation time for the other specimens was 40 s. After 24 hours of storage in distilled water, the microhardness of the top and bottom of the samples was measured using a Future Tech (Japan, Model FM 700) Vickers hardness testing machine. Data were analyzed statistically using one- and multi-way ANOVA and Tukey's test (p = 0.050). Results: The DC of Tetric N Ceram Bulk Fill at a given irradiation time and bulk depth was significantly higher than that of the universal type (p < 0.001). The DC of both composites was also significantly (p < 0.001) reduced with increasing bulk depth. Increasing the curing time from 20 to 40 seconds had a marginally significant effect (p ≤ 0.040) on the DC of both the bulk fill and universal RBC samples. Conclusions: The DC of the investigated bulk fill composite was better than that of the universal type at all irradiation times and bulk depths. The studied universal and bulk fill RBCs had an appropriate DC at the 2 and 4 mm bulk depths respectively and
Effect of Pressure on Minimum Fluidization Velocity
Zhu Zhiping; Na Yongjie; Lu Qinggang
2007-01-01
The minimum fluidization velocities of quartz sand and glass beads under pressures of 0.5, 1.0, 1.5 and 2.0 MPa were investigated. The minimum fluidization velocity decreases with increasing pressure, and the influence of pressure on the minimum fluidization velocity is stronger for larger particles than for smaller ones. Based on the test results and the Ergun equation, an empirical equation for the minimum fluidization velocity is proposed, and its calculated results are comparable to other researchers' results.
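The pressure trend reported above can be reproduced with a standard correlation rather than the authors' own empirical equation. This is a minimal sketch, assuming the Wen & Yu (1966) form of the Ergun balance at minimum fluidization, ideal-gas air density, and illustrative particle parameters (d_p = 500 µm, ρ_p = 2650 kg/m³, roughly quartz sand); none of these values are taken from the paper.

```python
import math

def u_mf_wen_yu(d_p, rho_p, P, T=293.0, mu=1.8e-5, M=0.029, g=9.81):
    """Minimum fluidization velocity via the Wen & Yu (1966) correlation.

    d_p   : particle diameter [m]
    rho_p : particle density [kg/m^3]
    P     : gas pressure [Pa]; gas density from the ideal-gas law
    T, mu, M are illustrative defaults for room-temperature air.
    """
    R = 8.314
    rho_g = P * M / (R * T)                               # ideal-gas density
    Ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu**2     # Archimedes number
    Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7       # Wen & Yu correlation
    return Re_mf * mu / (rho_g * d_p)                     # u_mf from Re_mf

# u_mf falls as pressure rises, consistent with the abstract's finding
for P in (0.5e6, 1.0e6, 1.5e6, 2.0e6):
    print(f"P = {P/1e6:.1f} MPa  u_mf = {u_mf_wen_yu(500e-6, 2650.0, P):.3f} m/s")
```

In the laminar (small-particle) limit the correlation gives almost no pressure dependence, while for large particles the inertial term dominates and u_mf scales roughly as the inverse square root of the gas density, matching the observation that the pressure effect is stronger for larger particles.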
7 CFR 35.11 - Minimum requirements.
2010-01-01
..., Denmark, East Germany, England, Finland, France, Greece, Hungary, Iceland, Ireland, Italy, Liechtenstein..., Switzerland, Wales, West Germany, Yugoslavia), or Greenland shall meet each applicable minimum requirement...
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
An entropy approach to size and variance heterogeneity
Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.
2012-01-01
In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity si
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Gender variance in Asia: discursive contestations and legal implications
Wieringa, S.E.
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the
Permutation tests for multi-factorial analysis of variance
Anderson, M.J.; Braak, ter C.J.F.
2003-01-01
Several permutation strategies are often possible for tests of individual terms in analysis-of-variance (ANOVA) designs. These include restricted permutations, permutation of whole groups of units, permutation of some form of residuals or some combination of these. It is unclear, especially for
A Hold-out method to correct PCA variance inflation
Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai
2012-01-01
In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure was int...
Similarities Derived from 3-D Nonlinear Psychophysics: Variance Distributions.
Gregson, Robert A. M.
1994-01-01
The derivation of the variance of similarity judgments is made from the 3-D process in nonlinear psychophysics. The idea of separability of dimensions in metric space theories of similarity is replaced by one parameter that represents the degree of a form of interdimensional cross-sampling. (SLD)
Infinite variance in fermion quantum Monte Carlo calculations.
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
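A generic illustration (not the paper's QMC algorithm) of why a diverging variance makes the naive Monte Carlo error bar untrustworthy: the estimator below targets E[U^(-1/2)] = 2 for U ~ Uniform(0,1), whose sampling distribution has infinite variance, so the reported standard error has no formal meaning even though it prints as a small finite number.

```python
import random
import statistics

def sample_mean_and_stderr(n, seed=0):
    """Estimate E[U**-0.5] for U ~ Uniform(0,1). The true mean is 2,
    but Var(U**-0.5) = E[1/U] - 4 diverges, so the naive error bar
    (sample std / sqrt(n)) does not honestly reflect the uncertainty."""
    rng = random.Random(seed)
    xs = [rng.random() ** -0.5 for _ in range(n)]
    m = statistics.fmean(xs)
    se = statistics.stdev(xs) / n ** 0.5   # formally meaningless here
    return m, se

m, se = sample_mean_and_stderr(100_000)
print(m, se)
```

Rerunning with different seeds shows occasional large excursions of the mean driven by single heavy-tailed draws, the same symptom the abstract warns about in the fermion QMC setting.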
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation
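As a toy companion to the frontier result, the global minimum-variance portfolio for two risky assets has a simple closed form. This sketch uses hypothetical volatilities and a hypothetical correlation, not the paper's continuous-time Brownian setting.

```python
def min_variance_weights(s1, s2, rho):
    """Global minimum-variance weights for two risky assets with
    volatilities s1, s2 and correlation rho (weights sum to 1)."""
    cov = rho * s1 * s2
    w1 = (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, s1, s2, rho):
    """Variance of the two-asset portfolio w1*A1 + w2*A2."""
    return (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2

# illustrative inputs: 20% and 10% volatility, correlation 0.3
w1, w2 = min_variance_weights(0.20, 0.10, 0.3)
print(w1, w2, portfolio_variance(w1, w2, 0.20, 0.10, 0.3))
```

Because the correlation is below s2/s1, the minimum-variance portfolio achieves a variance strictly below that of holding the low-volatility asset alone.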
Properties of realized variance under alternative sampling schemes
Oomen, R.C.A.
2006-01-01
This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
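The bias mechanism studied here can be sketched in a few lines: with i.i.d. microstructure noise of variance ω², the realized variance over n intraday returns has expectation roughly equal to the integrated variance plus 2nω², so it grows with the sampling frequency. The simulation below uses a Brownian efficient price and Gaussian noise purely for illustration; the paper itself works with a pure jump price process, and all numbers here are made up.

```python
import random

def realized_variance(log_prices):
    """Realized variance: sum of squared log-returns."""
    return sum((b - a) ** 2 for a, b in zip(log_prices, log_prices[1:]))

def simulate_log_prices(n, sigma2=0.04, noise_sd=0.001, seed=1):
    """Efficient log price is a Brownian motion with integrated variance
    sigma2 over the day; each observation adds i.i.d. microstructure noise."""
    rng = random.Random(seed)
    step_sd = (sigma2 / n) ** 0.5
    p, out = 0.0, []
    for _ in range(n + 1):
        out.append(p + rng.gauss(0.0, noise_sd))  # observed = efficient + noise
        p += rng.gauss(0.0, step_sd)
    return out

# E[RV] ~ sigma2 + 2*n*noise_sd**2: the noise bias grows with frequency
rv_coarse = realized_variance(simulate_log_prices(78))     # ~5-minute sampling
rv_fine = realized_variance(simulate_log_prices(23400))    # ~1-second sampling
print(rv_coarse, rv_fine)
```

On the coarse grid the estimate should stay near the integrated variance 0.04, while at one-second sampling the 2nω² noise term comes to dominate, which is why sampling-scheme choice matters.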
20 CFR 901.40 - Proof; variance; amendment of pleadings.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...
Multivariate Variance Targeting in the BEKK-GARCH Model
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Vertical velocity variances and Reynold stresses at Brookhaven
Busch, Niels E.; Brown, R.M.; Frizzola, J.A.
1970-01-01
Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity component.
Estimation of dominance variance in purebred Yorkshire swine.
Culbertson, M S; Mabry, J W; Misztal, I; Gengler, N; Bertrand, J K; Varona, L
1998-02-01
We used 179,485 Yorkshire reproductive and 239,354 Yorkshire growth records to estimate additive and dominance variances by Method R. Estimates were obtained for number born alive (NBA), 21-d litter weight (LWT), days to 104.5 kg (DAYS), and backfat at 104.5 kg (BF). The single-trait models for NBA and LWT included the fixed effects of contemporary group and regression on inbreeding percentage and the random effects mate within contemporary group, animal permanent environment, animal additive, and parental dominance. The single-trait models for DAYS and BF included the fixed effects of contemporary group, sex, and regression on inbreeding percentage and the random effects litter of birth, dam permanent environment, animal additive, and parental dominance. Final estimates were obtained from six samples for each trait. Regression coefficients for 10% inbreeding were found to be -.23 for NBA, -.52 kg for LWT, 2.1 d for DAYS, and 0 mm for BF. Estimates of additive and dominance variances expressed as a percentage of phenotypic variances were, respectively, 8.8 +/- .5 and 2.2 +/- .7 for NBA, 8.1 +/- 1.1 and 6.3 +/- .9 for LWT, 33.2 +/- .4 and 10.3 +/- 1.5 for DAYS, and 43.6 +/- .9 and 4.8 +/- .7 for BF. The ratio of dominance to additive variances ranged from .11 to .78.
Common Persistence and Error-Correction Model in Conditional Variance
LI Han-dong; ZHANG Shi-ying
2001-01-01
We first define the persistence and common persistence of the vector GARCH process from the point of view of integration, and then discuss the sufficient and necessary condition for co-persistence in variance. At the end of the paper, we give the properties and the error-correction model of the vector GARCH process under the condition of co-persistence.
Bounds for Tail Probabilities of the Sample Variance
V. Bentkus
2009-01-01
We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind in auditing, as well as in processing data related to the environment.
Variance Ranklets : Orientation-selective rank features for contrast modulations
Azzopardi, George; Smeraldi, Fabrizio
2009-01-01
We introduce a novel type of orientation–selective rank features that are sensitive to contrast modulations (second–order stimuli). Variance Ranklets are designed in close analogy with the standard Ranklets, but use the Siegel–Tukey statistics for dispersion instead of the Wilcoxon statistics. Their
Average local values and local variances in quantum mechanics
Muga, J G; Sala, P R
1998-01-01
Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known
CAIXA. II. AGNs from excess variance analysis (Ponti+, 2012) [Dataset
Ponti, G.; Papadakis, I.E.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.
2012-01-01
We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray var
Testing for causality in variance using multivariate GARCH models
C.M. Hafner (Christian); H. Herwartz
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual
Variance Components for NLS: Partitioning the Design Effect.
Folsom, Ralph E., Jr.
This memorandum demonstrates a variance components methodology for partitioning the overall design effect (D) for a ratio mean into stratification (S), unequal weighting (W), and clustering (C) effects, so that D = WSC. In section 2, a sample selection scheme modeled after the National Longitudinal Study of the High School Class of 1972 (NLS)…
Perspective projection for variance pose face recognition from camera calibration
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses in the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
QSAR modeling of estrogenic alkylphenols using bulk and electronic parameters
Mukherjee S
2007-01-01
A broad range of structurally diverse alkylphenols has been found to be potentially estrogenic agents in combating estrogen-linked pathologies, but their mechanism of action in mimicking responses of endogenous hormones is still to be understood. The present work explores pharmacophore signals of some varied alkylphenols and predicts estrogenic activities through generated linear relations implementing theoretical molecular modeling techniques. The binding affinity of alkylphenols to the estrogen receptor has been modeled by investigating a large data set of whole-molecular and atomic descriptors. Univariate and multivariate relationships were estimated using correlation analysis, and the statistical significance of the generated relations was assessed. The predictive ability of the generated models was further verified using 'Leave-One-Out' cross validation. Relationships with molecular properties could be developed with a maximum correlation exceeding 94%, with explained variance as high as 87% and cross-validated variances > 0.8. It was inferred that increased molecular bulk, enhanced molecular ionization potential, the presence of electron-donating groups in the para position, and branched-chain terminal atoms might influence binding affinity to the receptor.
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance considered as a source of heterogeneity of variance. Genetic
Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type
Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.
2016-01-01
Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted due to the cost in both laboratory and labour, and in cases where the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international data sets across many studies, with their own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk densities for different horizon types using local known bulk density data sets. The best-performing pedotransfer functions were then selected, recalibrated and validated again using the known data. The coefficient of determination of the predictions was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This in turn allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known data sets had the best predictions using the recalibrated and validated pedotransfer functions.
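A pedotransfer function of the simplest kind is a per-horizon linear regression of bulk density on a predictor such as soil organic carbon. The sketch below fits one by ordinary least squares on made-up illustrative values; the Irish study's actual predictors and coefficients are not reproduced here.

```python
def fit_pedotransfer(soc, bd):
    """Least-squares fit of bulk density (g/cm^3) on soil organic
    carbon (%) for one horizon type: bd ~ a + b*soc.
    Returns (intercept a, slope b, coefficient of determination r2)."""
    n = len(soc)
    mx = sum(soc) / n
    my = sum(bd) / n
    sxx = sum((x - mx) ** 2 for x in soc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(soc, bd))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(soc, bd))
    ss_tot = sum((y - my) ** 2 for y in bd)
    return a, b, 1.0 - ss_res / ss_tot

# illustrative (made-up) topsoil samples: bulk density falls as carbon rises
soc = [1.0, 2.0, 3.5, 5.0, 8.0, 12.0]
bd = [1.45, 1.38, 1.30, 1.18, 1.02, 0.85]
a, b, r2 = fit_pedotransfer(soc, bd)
print(a, b, r2)
```

In practice one such equation would be fitted per horizon type, and only those whose coefficient of determination clears a threshold (0.5 in the study above) would be used for gap filling.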
Carrier Bulk-Lifetime Measurements
M. Solcansky
2012-01-01
For the measurement of the minority-carrier bulk lifetime, the MW-PCD characterization method is used, where the result of the measurement is the effective carrier lifetime, which depends strongly on the surface recombination velocity and therefore on the quality of the silicon surface passivation. This work deals with an examination of different solution types for the chemical passivation of a silicon surface. Various solutions are tested on silicon wafers for subsequent comparison. The main purpose is to find the optimal solution that meets the requirements of time stability and start-up velocity of passivation, reproducibility of the measurements, and the possibility of perfectly cleaning the passivating-solution residues from the silicon surface, so that the parameters of a measured silicon wafer will not worsen and there will be no contamination of other wafer series in production after a repeated return of the measured wafer into the production process. The cleaning process itself is also a subject of development.
Stochastic variational approach to minimum uncertainty states
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
5 CFR 630.206 - Minimum charge.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum charge. 630.206 Section 630.206 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS ABSENCE AND LEAVE Definitions and General Provisions for Annual and Sick Leave § 630.206 Minimum charge. (a) Unless an agency...
Monotonic Stable Solutions for Minimum Coloring Games
Hamers, H.J.M.; Miquel, S.; Norde, H.W.
2011-01-01
For the class of minimum coloring games (introduced by Deng et al. (1999)) we investigate the existence of population monotonic allocation schemes (introduced by Sprumont (1990)). We show that a minimum coloring game on a graph G has a population monotonic allocation scheme if and only if G is (P4,
Coupling brane fields to bulk supergravity
Parameswaran, Susha L. [Uppsala Univ. (Sweden). Theoretical Physics; Schmidt, Jonas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2010-12-15
In this note we present a simple, general prescription for coupling brane localized fields to bulk supergravity. We illustrate the procedure by considering 6D N=2 bulk supergravity on a 2D orbifold, with brane fields localized at the fixed points. The resulting action enjoys the full 6D N=2 symmetries in the bulk, and those of 4D N=1 supergravity at the brane positions. (orig.)
Relative entropy equals bulk relative entropy
Jafferis, Daniel L; Maldacena, Juan; Suh, S Josephine
2015-01-01
We consider the gravity dual of the modular Hamiltonian associated to a general subregion of a boundary theory. We use it to argue that the relative entropy of nearby states is given by the relative entropy in the bulk, to leading order in the bulk gravitational coupling. We also argue that the boundary modular flow is dual to the bulk modular flow in the entanglement wedge, with implications for entanglement wedge reconstruction.
33 CFR 127.313 - Bulk storage.
2010-07-01
...) WATERFRONT FACILITIES WATERFRONT FACILITIES HANDLING LIQUEFIED NATURAL GAS AND LIQUEFIED HAZARDOUS GAS Waterfront Facilities Handling Liquefied Natural Gas Operations § 127.313 Bulk storage. (a) The...
Applications of bulk high-temperature superconductors
Hull, J. R.
The development of high-temperature superconductors (HTS's) can be broadly generalized into thin-film electronics, wire applications, and bulk applications. We consider bulk HTS's to include sintered or crystallized forms that do not take the geometry of filaments or tapes, and we discuss major applications for these materials. For the most part applications may be realized with the HTS's cooled to 77 K, and the properties of the bulk HTS's are often already sufficient for commercial use. A non-exhaustive list of applications for bulk HTS's includes trapped field magnets, hysteresis motors, magnetic shielding, current leads, and magnetic bearings. These applications are briefly discussed in this paper.
Hyperon bulk viscosity in strong magnetic fields
Sinha, Monika
2008-01-01
We study bulk viscosity in neutron star matter including Λ hyperons in the presence of quantizing magnetic fields. The relaxation time and bulk viscosity due to both the non-leptonic weak process involving Λ hyperons and the direct Urca (dUrca) process are calculated here. In the presence of a strong magnetic field, bulk viscosity coefficients are enhanced when protons, electrons and muons populate their respective zeroth Landau levels, compared with the field-free case. The enhancement of the bulk viscosity coefficient is larger for the dUrca case.
Hydrotropy: monomer-micelle equilibrium and minimum hydrotrope concentration.
Shimizu, Seishi; Matubayasi, Nobuyuki
2014-09-01
Drug molecules with low aqueous solubility can be solubilized by a class of cosolvents known as hydrotropes. Their action has often been explained by an analogy with micelle formation, which exhibits a critical micelle concentration (CMC). Indeed, hydrotropes also exhibit a "minimum hydrotrope concentration" (MHC), a threshold concentration for solubilization. However, MHC is observed even for nonaggregating monomeric hydrotropes (such as urea); this raises questions over the validity of this analogy. Here we clarify the effect of micellization on hydrotropy, as well as the origin of MHC in the absence of micellization. On the basis of the rigorous Kirkwood-Buff (KB) theory of solutions, we show that (i) micellar hydrotropy is also explained by preferential drug-hydrotrope interaction; (ii) yet micelle formation reduces solubilization efficiency per hydrotrope molecule; (iii) MHC is caused by hydrotrope-hydrotrope self-association induced by the solute (drug) molecule; and (iv) MHC is prevented by hydrotrope self-aggregation in the bulk solution. We thus need a departure from the traditional view: the structure of the hydrotrope-water mixture around the drug molecule, not the structure of the aqueous hydrotrope solution in the bulk phase, is the true key toward understanding the origin of MHC.
Convergence of Recursive Identification for ARMAX Process with Increasing Variances
JIN Ya; LUO Guiming
2007-01-01
The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model combines an ARMA component with external inputs. In this paper we focus on parameter estimation for the ARMAX model. Classical modeling methods are usually based on the assumption that the driving noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of the noise may increase by a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results from martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to illustrate the theoretical results.
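The recursion in question can be sketched for a simplified case. Below is a minimal illustration of recursive stochastic-gradient identification on an ARX(1,1) plant with bounded noise; the paper's ARMAX setting with growing noise variances and diminishing excitation is substantially more involved, and all constants here are illustrative.

```python
import numpy as np

# Sketch: recursive stochastic-gradient identification of an ARX(1,1)
# plant y[t] = a*y[t-1] + b*u[t-1] + e[t]. The gain is normalized by the
# accumulated regressor energy r, as in Ljung's stochastic gradient scheme.
rng = np.random.default_rng(0)
a_true, b_true = 0.5, 1.0
theta = np.zeros(2)            # estimates of [a, b]
y_prev, r = 0.0, 1e-3          # r accumulates regressor energy (normalizer)
for t in range(20000):
    u = rng.normal()           # persistently exciting input
    e = 0.1 * rng.normal()     # driving noise
    y = a_true * y_prev + b_true * u + e
    phi = np.array([y_prev, u])             # regressor
    r += phi @ phi
    theta += (phi / r) * (y - phi @ theta)  # stochastic-gradient step
    y_prev = y
print(theta)  # should approach [0.5, 1.0]
```

With a persistently exciting input the normalized gain decays like 1/t and the estimates converge; the paper's contribution is proving this kind of convergence when the noise variance is allowed to grow.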
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
Multivariate Variance Targeting in the BEKK-GARCH Model
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator, and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth-order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are indeed necessary.
Validation technique using mean and variance of kriging model
Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)
2007-07-01
Rigorous validation of metamodel accuracy is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of the metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even while the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique, because it integrates the kriging model explicitly to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
Explaining the Prevalence, Scaling and Variance of Urban Phenomena
Gomez-Lievano, Andres; Hausmann, Ricardo
2016-01-01
The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Sample variance and Lyman-alpha forest transmission statistics
Rollinde, Emmanuel; Schaye, Joop; Pâris, Isabelle; Petitjean, Patrick
2012-01-01
We compare the observed probability distribution function of the transmission in the H I Lyman-alpha forest, measured from the UVES 'Large Programme' sample at redshifts z=[2,2.5,3], to results from the GIMIC cosmological simulations. Our measured values for the mean transmission and its PDF are in good agreement with published results. Errors on statistics measured from high-resolution data are typically estimated using bootstrap or jack-knife resampling techniques after splitting the spectra into chunks. We demonstrate that these methods tend to underestimate the sample variance unless the chunk size is much larger than is commonly the case. We therefore estimate the sample variance from the simulations. We conclude that observed and simulated transmission statistics are in good agreement; in particular, we do not require the temperature-density relation to be 'inverted'.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
Automated Extraction of Archaeological Traces by a Modified Variance Analysis
Tiziana D'Orazio
2015-03-01
This paper considers the problem of detecting archaeological traces in digital aerial images by analyzing the pixel variance over regions around selected points. In order to decide whether a point belongs to an archaeological trace or not, its surrounding regions are considered. The one-way ANalysis Of VAriance (ANOVA) is applied several times to detect the differences among these regions; in particular, the expected shape of the mark to be detected is used in each region. Furthermore, an effect size parameter is defined by comparing the statistics of these regions with the statistics of the entire population in order to measure how strongly the trace stands out. Experiments on synthetic and real images demonstrate the effectiveness of the proposed approach with respect to some state-of-the-art methodologies.
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.
The return of the variance: intraspecific variability in community ecology.
Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie
2012-04-01
Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
The minimum work requirement for distillation processes
Yunus, Cerci; Yunus, A. Cengel; Byard, Wood [Nevada Univ., Las Vegas, NV (United States). Dept. of Mechanical Engineering
2000-07-01
A typical ideal distillation process is proposed and analyzed using the first and second laws of thermodynamics, with particular attention to the minimum work requirement for individual processes. The distillation process consists of an evaporator, a condenser, a heat exchanger, and a number of heaters and coolers. Several Carnot engines are also employed to perform the heat interactions of the distillation process with the surroundings and to determine the minimum work requirement for the processes. The Carnot engines give the maximum possible work output or the minimum work input associated with the processes, and therefore the net result of these inputs and outputs leads to the minimum work requirement for the entire distillation process. It is shown that the minimum work relation for the distillation process is the same as the minimum work input relation found by Cerci et al [1] for an incomplete separation of incoming saline water, and depends only on the properties of the incoming saline water and the outgoing pure water and brine. Certain aspects of the minimum work relation found are also discussed briefly. (authors)
EXPERIMENTAL STUDY OF MINIMUM IGNITION TEMPERATURE
Igor WACHTER
2015-12-01
The aim of this scientific paper is an analysis of the minimum ignition temperature of a dust layer and the minimum ignition temperature of a dust cloud. It can be used to identify threats in industrial production and civil engineering wherever a layer of combustible dust may occur. Research was performed on spent coffee grounds. Tests were performed according to EN 50281-2-1:2002 Methods for determining the minimum ignition temperatures of dust (Method A). The objective of method A is to determine the minimum temperature at which ignition or decomposition of dust occurs during thermal straining on a hot plate at a constant temperature. The highest minimum smouldering and carbonating temperature of spent coffee grounds for a 5 mm high layer was determined to lie in the interval from 280 °C to 310 °C over 600 seconds. Method B is used to determine the minimum ignition temperature of a dust cloud. The minimum ignition temperature of the studied dust was determined to be 470 °C (air pressure 50 kPa, sample weight 0.3 g).
Analysis of Variance in the Modern Design of Experiments
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
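As a concrete illustration of the one-way fixed-effects case, the F statistic can be computed directly from the between-group and within-group sums of squares; the data below are made up for illustration.

```python
import numpy as np

# One-way fixed-effects ANOVA from first principles: partition total
# variation into between-group and within-group sums of squares and
# form the F statistic. Data are illustrative, not from the paper.
groups = [np.array([4.1, 3.9, 4.3, 4.0]),
          np.array([5.0, 5.2, 4.8, 5.1]),
          np.array([4.5, 4.4, 4.6, 4.7])]
k = len(groups)                         # number of treatment levels
n = sum(len(g) for g in groups)         # total number of observations
grand = np.concatenate(groups).mean()   # grand mean
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_between = ss_between / (k - 1)       # mean square, df = k - 1
ms_within = ss_within / (n - k)         # mean square, df = n - k
F = ms_between / ms_within
print(F)  # a large F suggests the group means differ
```

The F value is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain a p-value.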
Seasonal variance in P system models for metapopulations
Daniela Besozzi; Paolo Cazzaniga; Dario Pescini; Giancarlo Mauri
2007-01-01
Metapopulations are ecological models describing the interactions and the behavior of populations living in fragmented habitats. In this paper, metapopulations are modelled by means of dynamical probabilistic P systems, where additional structural features have been defined (e. g., a weighted graph associated with the membrane structure and the reduction of maximal parallelism). In particular, we investigate the influence of stochastic and periodic resource feeding processes, owing to seasonal variance, on emergent metapopulation dynamics.
Estimating High-Frequency Based (Co-) Variances: A Unified Approach
Voev, Valeri; Nolte, Ingmar
We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent … frequency derived in Bandi & Russell (2005a) and Bandi & Russell (2005b). For a realistic trading scenario, the efficiency gains resulting from our approach are in the range of 35% to 50%.
VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM
RANJU KANWAR; SAMEKSHA BHASKAR
2013-01-01
In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, resulting in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through th...
Recombining binomial tree for constant elasticity of variance process
Hi Jun Choe; Jeong Ho Chu; So Jeong Shin
2014-01-01
The theme of this paper is a recombining binomial tree to price American put options when the underlying stock follows a constant elasticity of variance (CEV) process. The recombining nodes of the binomial tree are decided from a finite difference scheme to emulate the CEV process, and the tree has linear complexity. The asymptotic envelope of the boundary of the tree is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou...
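For readers unfamiliar with recombining trees, the sketch below prices an American put by backward induction on a standard Cox-Ross-Rubinstein lattice with constant volatility. The paper's contribution is choosing node placement from a finite difference scheme so that the recombining property holds under CEV; this constant-volatility stand-in does not attempt that.

```python
import math

def american_put_crr(S0, K, r, sigma, T, n):
    """Price an American put on a recombining CRR binomial tree.
    Constant-volatility stand-in; the paper adapts node placement to CEV."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at maturity, indexed by number of up moves j
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction with early-exercise check at every node
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            S = S0 * u**j * d**(i - j)
            values[j] = max(cont, K - S)   # American feature
    return values[0]

price = american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
print(round(price, 3))
```

Because the tree recombines, step i has only i+1 nodes, giving the linear (per-step) complexity the abstract refers to.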
PARAMETER-ESTIMATION FOR ARMA MODELS WITH INFINITE VARIANCE INNOVATIONS
MIKOSCH, T; GADRICH, T; KLUPPELBERG, C; ADLER, RJ
We consider a standard ARMA process of the form phi(B)X(t) = theta(B)Z(t), where the innovations Z(t) belong to the domain of attraction of a stable law, so that neither the Z(t) nor the X(t) have a finite variance. Our aim is to estimate the coefficients of phi and theta. Since maximum likelihood
Atalay, C; Yazici, A R; Horuztepe, A; Nagas, E; Ertan, A; Ozgunaltay, G
2016-01-01
The aim of this in vitro study was to evaluate the fracture resistance of endodontically treated teeth restored with different types of restorative resins. Seventy-two sound maxillary premolar teeth were randomly divided into six groups (n=12). The teeth in the first group were left intact and tested as an unprepared negative control (group I). The teeth in the remaining five groups were prepared with MOD cavities and endodontically treated. The teeth in one of the five groups (positive control, group II) were unrestored. The rest of the prepared cavities were restored as follows: group III: bulk fill resin composite/Filtek Bulk Fill (3M ESPE); group IV: bulk fill flowable resin composite + nanohybrid/SureFil SDR Flow + Ceram.X Mono (Dentsply); group V: fiber-reinforced composite + posterior resin composite/GC everX posterior + G-aenial posterior (GC Corp.); and group VI: nanohybrid resin composite/Tetric N-Ceram (Ivoclar/Vivadent). Each restorative material was used with its respective adhesive system. The restored teeth were stored in distilled water for 24 hours at 37°C and were then thermocycled (5-55°C, 1000×). Specimens were subjected to a compressive load until fracture at a crosshead speed of 0.5 mm/min. The data were analyzed using one-way analysis of variance followed by the post hoc Tukey honestly significant difference test (p<0.05). The lowest values were obtained in the positive control group (group II); these values were significantly lower than those of the other groups (p<0.05). Teeth restored with bulk fill resin composite were not different from those restored with conventional nanohybrid resin composite.
Relationship between Allan variances and Kalman Filter parameters
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (h_2, h_1, h_0, h_{-1} and h_{-2}) and a Kalman filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start, the meaning of those Allan variance parameters, and how they are arrived at for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
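As background, the (overlapping) Allan variance itself can be computed directly from equally spaced phase samples; the function below is a generic sketch, not the paper's covariance model.

```python
import numpy as np

def allan_variance(phase, tau0, m):
    """Overlapping Allan variance at averaging time m*tau0, computed
    from phase samples x[k] taken every tau0 seconds."""
    x = np.asarray(phase, dtype=float)
    tau = m * tau0
    # second differences of phase at lag m
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return (d2 ** 2).mean() / (2.0 * tau ** 2)

# White frequency noise: phase is a random walk, so AVAR scales as 1/tau.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=100000))
a1 = allan_variance(x, tau0=1.0, m=1)
a10 = allan_variance(x, tau0=1.0, m=10)
print(a1 / a10)  # roughly 10 for white FM noise
```

Fitting the measured Allan variance across several averaging times against the known power-law slopes is how the h coefficients mentioned in the abstract are typically extracted.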
Dynamic Programming Using Polar Variance for Image Segmentation.
Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J
2016-10-06
When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used, because if size is left unconstrained the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active-contour segmentation techniques in a series of tests: robustness to additive Gaussian noise, segmentation accuracy on different grayscale images, and robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
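A minimal sketch of the idea: compute the intensity variance over samples between a user-selected origin and a target pixel. The paper defines the polar region more carefully; here a simple line of samples stands in for it, and all names are illustrative.

```python
import numpy as np

def polar_variance(image, origin, pixel, n_samples=50):
    """Variance of image intensities sampled along the segment from a
    user-selected origin to the target pixel (illustrative version of
    the polar-region feature; the paper defines the region differently)."""
    r0, c0 = origin
    r1, c1 = pixel
    t = np.linspace(0.0, 1.0, n_samples)
    rows = np.round(r0 + t * (r1 - r0)).astype(int)
    cols = np.round(c0 + t * (c1 - c0)).astype(int)
    return image[rows, cols].var()

# A uniform region gives zero variance; crossing an edge gives a large one.
img = np.zeros((64, 64))
img[:, 32:] = 100.0
print(polar_variance(img, (32, 0), (32, 20)))   # stays in the flat region
print(polar_variance(img, (32, 0), (32, 60)))   # crosses the edge
```

A dynamic-programming contour search can then penalize candidate boundary pixels whose polar variance is high, which is the role the feature plays in the paper.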
Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse
Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou
2016-01-01
In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.
Variance optimal sampling based estimation of subset sums
Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel
2008-01-01
From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\log k)$ time, which is optimal even on the word RAM.
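The paper's variance-optimal (VarOpt) scheme is intricate; as a close and simpler relative, priority sampling also maintains a fixed-size weighted sample supporting unbiased subset-sum estimates. The sketch below is priority sampling (Duffield, Lund and Thorup), not the paper's estimator.

```python
import random

def priority_sample(items, k, seed=0):
    """Priority sampling: each item (key, weight) gets priority w/u with
    u ~ U(0,1); keep the k largest priorities. With tau the (k+1)-st
    priority, max(w, tau) for each sampled item gives unbiased
    subset-sum estimates. A simpler relative of the paper's VarOpt."""
    rng = random.Random(seed)
    prios = [(w / rng.random(), key, w) for key, w in items]
    prios.sort(reverse=True)
    tau = prios[k][0] if len(prios) > k else 0.0
    return {key: max(w, tau) for _, key, w in prios[:k]}

items = [(i, 1.0 + (i % 5)) for i in range(1000)]   # weights 1..5
total = sum(w for _, w in items)
sample = priority_sample(items, k=200)
est = sum(sample.values())          # estimate of the total weight
print(total, round(est, 1))
```

To estimate an arbitrary subset's weight, sum the adjusted weights of the sampled keys that fall in the subset; the paper's scheme additionally minimizes the average variance of such estimates and eliminates positive covariances.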
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step to advance the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
Measuring primordial non-gaussianity without cosmic variance
Seljak, Uros
2008-01-01
Non-gaussianity in the initial conditions of the universe is one of the most powerful mechanisms to discriminate among the competing theories of the early universe. Measurements using the bispectrum of cosmic microwave background anisotropies are limited by cosmic variance, i.e., the available number of modes. Recent work has emphasized the possibility to probe non-gaussianity of local type using the scale dependence of large scale bias from highly biased tracers of large scale structure. However, this power spectrum method is also limited by cosmic variance, the finite number of structures on the largest scales, and by the partial degeneracy with other cosmological parameters that can mimic the same effect. Here we propose an alternative method that solves both of these problems. It is based on the idea that on large scales halos are biased, but not stochastic, tracers of dark matter: by correlating a highly biased tracer of large scale structure against an unbiased tracer one eliminates the cosmic variance error, wh...
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
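A toy version of modality classification at a single grid location can be built by counting peaks of a smoothed histogram; the paper's classifier and confidence metrics are more elaborate, and the threshold below is an arbitrary illustrative choice.

```python
import numpy as np

def count_modes(samples, bins=20):
    """Classify the modality of an ensemble of values at one location by
    counting peaks of a smoothed histogram (toy stand-in for the paper's
    classifier; the 5% height threshold suppresses tail noise)."""
    hist, _ = np.histogram(samples, bins=bins, density=True)
    kernel = np.ones(3) / 3.0                    # moving-average smoothing
    h = np.convolve(hist, kernel, mode="same")
    peaks = [i for i in range(1, len(h) - 1)
             if h[i] > h[i - 1] and h[i] >= h[i + 1]
             and h[i] > 0.05 * h.max()]
    return len(peaks)

rng = np.random.default_rng(2)
unimodal = rng.normal(0.0, 1.0, 5000)
bimodal = np.concatenate([rng.normal(-4, 0.5, 2500),
                          rng.normal(4, 0.5, 2500)])
print(count_modes(unimodal), count_modes(bimodal))  # expect 1 and 2
```

A bimodal ensemble is exactly the case where reporting only mean and variance misleads: the mean falls between the modes, in a region the ensemble members never visit.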
Does the Minimum Wage Cause Inefficient Rationing?
何满辉; 梁明秋
2008-01-01
By not allowing wages to clear the labor market, the minimum wage could cause workers with low reservation wages to be rationed out while equally skilled workers with higher reservation wages are employed. I find that proxies for reservation wages of unskilled workers in high-impact states did not rise relative to reservation wages in other states, suggesting that the increase in the minimum wage did not cause jobs to be allocated less efficiently. However, even if rationing is efficient, the minimum wage can still entail other efficiency costs.
Minimum emittance in TBA and MBA lattices
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromat (TBA), theoretical minimum emittance (TME) and even multiple bend achromat (MBA) lattices have been considered. This paper derives the necessary condition for achieving minimum emittance in TBA and MBA lattices theoretically: the bending angle of the inner dipoles must be a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate the conditions for attaining the minimum emittance of a TBA as related to phase advance in some special cases, with a purely mathematical method. These results may give some direction for lattice design.
27 CFR 20.191 - Bulk articles.
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Bulk articles. 20.191... Users of Specially Denatured Spirits Operations by Users § 20.191 Bulk articles. Users who convey articles in containers exceeding one gallon may provide the recipient with a photocopy of subpart G of...
A contribution to problems of clean transport of bulk materials
Fedora Jaroslav
1996-03-01
The lecture analyses the problem of development of the pipe conveyor with a rubber belt, the possibilities of its application in practice and the environmental aspects resulting from its application. The pipe conveyor is a new and promising transport system. It enables transporting bulk materials (coal, crushed rock, coke, plant ash, fertilisers, limestone, lime, cellulose, salt, sugar, wheat and other materials) in specific operations (power plants, heating plants) with a minimum effect on the environment. The transported material is enclosed in the pipeline so that there is no escape of dust, smell or of the transported material itself. The lecture is aimed at: - a short description of the operating principle and design of the pipe conveyor, which was developed in the firm Matador Púchov in cooperation with the firm TEDO; - an analysis of experience in working some pipe conveyors which were under operation for a certain
Factors affecting characterization of bulk high-temperature superconductors
Hull, J.R. [Argonne National Lab., IL (United States). Energy Technology Div.]
1997-11-01
Three major factors affect the characterization of bulk high-temperature superconductors in terms of their levitation properties during interaction with permanent magnets. First, the appropriate parameter for the permanent magnet is internal magnetization, not the value of the magnetic field measured at the magnet's surface. Second, although levitation force grows with superconductor thickness and surface area, for a given permanent magnet size, comparison of levitation force between samples is meaningful when minimum values are assigned to the superconductor size parameters. Finally, the effect of force creep must be considered when time-averaging the force measurements. In addition to levitational force, the coefficient of friction of a levitated rotating permanent magnet may be used to characterize the superconductor.
Bulk equations of motion from CFT correlators
Kabat, Daniel
2015-01-01
To O(1/N) we derive, purely from CFT data, the bulk equations of motion for interacting scalar fields and for scalars coupled to gauge fields and gravity. We first uplift CFT operators to mimic local AdS fields by imposing bulk microcausality. This requires adding an infinite tower of smeared higher-dimension double-trace operators to the CFT definition of a bulk field, with coefficients that we explicitly compute. By summing the contribution of the higher-dimension operators we derive the equations of motion satisfied by these uplifted CFT operators and show that we precisely recover the expected bulk equations of motion. We exhibit the freedom in the CFT construction which corresponds to bulk field redefinitions.
Automated Variance Reduction Applied to Nuclear Well-Logging Problems
Wagner, John C. [ORNL]; Peplow, Douglas E. [ORNL]; Evans, Thomas M. [ORNL]
2009-01-01
The Monte Carlo method enables detailed, explicit geometric, energy and angular representations, and hence is considered to be the most accurate method available for solving complex radiation transport problems. Because of its associated accuracy, the Monte Carlo method is widely used in the petroleum exploration industry to design, benchmark, and simulate nuclear well-logging tools. Nuclear well-logging tools, which contain neutron and/or gamma sources and two or more detectors, are placed in boreholes that contain water (and possibly other fluids) and that are typically surrounded by a formation (e.g., limestone, sandstone, calcites, or a combination). The response of the detectors to radiation returning from the surrounding formation is used to infer information about the material porosity, density, composition, and associated characteristics. Accurate computer simulation is a key aspect of this exploratory technique. However, because this technique involves calculating highly precise responses (at two or more detectors) based on radiation that has interacted with the surrounding formation, the transport simulations are computationally intensive, requiring significant use of variance reduction techniques, parallel computing, or both. Because of the challenging nature of these problems, nuclear well-logging problems have frequently been used to evaluate the effectiveness of variance reduction techniques (e.g., Refs. 1-4). The primary focus of these works has been on improving the computational efficiency associated with calculating the response at the most challenging detector location, which is typically the detector furthest from the source. Although the objective of nuclear well-logging simulations is to calculate the response at multiple detector locations, until recently none of the numerous variance reduction methods/techniques has been well-suited to simultaneous optimization of multiple detector (tally) regions. Therefore, a separate calculation is
A proxy for variance in dense matching over homogeneous terrain
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated in respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
Estimation of noise-free variance to measure heterogeneity.
Winkler, Tilo; Melo, Marcos F Vidal; Degani-Costa, Luiza H; Harris, R Scott; Correia, John A; Musch, Guido; Venegas, Jose G
2015-01-01
Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value yielding a coefficient of variation squared (CV(2)). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV(r)(2)) for comparison with our estimate of noise-free or 'true' heterogeneity (CV(t)(2)). We found that CV(t)(2) was only 5.4% higher than CV(r)(2). Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using (13)NN-saline injection. The mean CV(t)(2) was 0.10 (range: 0.03-0.30), while the mean CV(2) including noise was 0.24 (range: 0.10-0.59). CV(t)(2) was on average 41.5% of the CV(2) measured including noise (range: 17.8-71.2%). The reproducibility of CV(t)(2) was evaluated using three repeated PET scans from five subjects. Individual CV(t)(2) were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV(t)(2) in PET scans, and may be useful for similar statistical problems in experimental data.
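The linear relationship the abstract relies on (observed variance versus the reciprocal of n) can be sketched numerically. A minimal simulation, assuming a hypothetical lognormal "signal" and Gaussian noise whose variance scales as 1/n; the intercept of a straight-line fit of CV(2) against 1/n then recovers the noise-free CV(2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise-free spatial signal with known heterogeneity
signal = rng.lognormal(mean=0.0, sigma=0.3, size=5000)
cv2_true = signal.var() / signal.mean() ** 2

# Simulate measurements averaged over n repeats: noise variance shrinks as 1/n
ns = np.array([5, 10, 20, 40, 80])
cv2_obs = []
for n in ns:
    noisy = signal + rng.normal(0.0, 1.0 / np.sqrt(n), size=signal.size)
    cv2_obs.append(noisy.var() / noisy.mean() ** 2)

# Linear fit of observed CV^2 against 1/n; the intercept (1/n -> 0)
# estimates the noise-free heterogeneity
slope, intercept = np.polyfit(1.0 / ns, cv2_obs, 1)
print(f"true CV^2 = {cv2_true:.4f}, noise-free estimate = {intercept:.4f}")
```

The extrapolation to 1/n = 0 is the whole trick: noise contributes only to the slope, so the intercept is the heterogeneity of the underlying signal.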
Merino Aldecoa, I.; Arevalo, L. F.; Romero, F.
2003-07-01
A mathematical model employing 30 parameters was used over a period of 30 months to find the origin of bulking at the Muskiz waste water treatment plant. The plant includes a pre-treatment unit, a biological reactor (activated sludge) and clarification. The monitoring parameters (influent, mixed liquor, process and sludge recycling) and the statistical techniques (conventional, multivariate) used are listed. The sludge volumetric index (SVI) was employed as the descriptive parameter. The results include 81 ten-day sets of data as well as the evolution of the physico-chemical parameters over time and the frequency of, and the factors linked to, bulking. The correlation of the monitoring parameters with each other and with the SVI was analysed. The multivariate statistics included cluster analysis and multiple linear regression. The regression equations were calculated in four successive stages, which together explain 77.8% of the SVI variance. (Author) 9 refs.
Long Term Care Minimum Data Set (MDS)
U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...
Quantitative Research on the Minimum Wage
Goldfarb, Robert S.
1975-01-01
The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)
Impact of the Minimum Wage on Compression.
Wolfe, Michael N.; Candland, Charles W.
1979-01-01
Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)
Minimum wages and employment in China
Fang, Tony; Lin, Carl
2015-01-01
... that minimum wage changes led to significant adverse effects on employment in the Eastern and Central regions of China, and resulted in disemployment for females, young adults, and low-skilled workers...
Minimum Wage Policy and Country's Technical Efficiency
Mohd Zaini Abd Karim; Sok-Gee Chan; Sallahuddin Hassan
2016-01-01
.... However, some quarters argued against the idea of a nationwide minimum wage asserting that it will lead to an increase in the cost of doing business and thus will hurt Malaysian competitiveness...
Graph theory for FPGA minimum configurations
Ruan Aiwu; Li Wenchang; Xiang Chuanyin; Song Jiangmin; Kang Shi; Liao Yongbo
2011-01-01
A traditional bottom-up modeling method for minimum configuration numbers is adopted for the study of FPGA minimum configurations. This method is limited if a large number of LUTs and multiplexers is present. Since graph theory has been extensively applied to circuit analysis and test, this paper focuses on modeling FPGA configurations. In our study, an internal logic block and an interconnection of an FPGA are considered as a vertex and an edge connecting two vertices in the graph, respectively. A top-down modeling method is proposed in the paper to achieve minimum configuration numbers for the CLB and IOB. Based on the proposed modeling approach and exhaustive analysis, the minimum configuration numbers for the CLB and IOB are five and three, respectively.
Price pass-through and minimum wages
Daniel Aaronson
1997-01-01
A textbook consequence of competitive markets is that an industry-wide increase in the price of inputs will be passed on to consumers through an increase in prices. This fundamental implication has been explored by researchers interested in who bears the burden of taxation and exchange rate fluctuations. However, little attention has focused on the price implications of minimum wage hikes. From a policy perspective, this is an oversight. Welfare analysis of minimum wage laws should not ignore...
The minimum wage and restaurant prices
Daniel Aaronson; Eric French; MacDonald, James M.
2004-01-01
Using both store-level and aggregated price data from the food away from home component of the Consumer Price Index survey, we show that restaurant prices rise in response to an increase in the minimum wage. These results hold up when using several different sources of variation in the data. We interpret these findings within a model of employment determination. The model implies that minimum wage hikes cause employment to fall and prices to rise if labor markets are competitive but potential...
Minimum Dominating Tree Problem for Graphs
LIN Hao; LIN Lan
2014-01-01
A dominating tree T of a graph G is a subtree of G which contains at least one neighbor of each vertex of G. The minimum dominating tree problem is to find a dominating tree of G with minimum number of vertices, which is an NP-hard problem. This paper studies some polynomially solvable cases, including interval graphs, Halin graphs, special outer-planar graphs and others.
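The definition above can be checked by brute force on a small instance. A sketch, assuming the graph is given as an adjacency dict: it searches for the smallest connected vertex set S in which every vertex of G has a neighbor, since any spanning tree of such an S is a minimum dominating tree (this exhaustive search is only feasible for tiny graphs, in line with the problem being NP-hard in general):

```python
from itertools import combinations

def is_connected(adj, sub):
    """Depth-first search restricted to the vertex subset `sub`."""
    sub = set(sub)
    stack = [next(iter(sub))]
    seen = set(stack)
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in sub and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == sub

def min_dominating_tree_size(adj):
    """Smallest connected vertex set S such that every vertex of the
    graph has a neighbor in S; a spanning tree of S is then a
    minimum dominating tree."""
    vs = sorted(adj)
    for k in range(1, len(vs) + 1):
        for sub in combinations(vs, k):
            s = set(sub)
            if all(adj[v] & s for v in vs) and is_connected(adj, sub):
                return k
    return None

# toy example: the path graph 0-1-2-3-4
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(min_dominating_tree_size(adj))  # the three interior vertices {1, 2, 3}
```

On the path graph the answer is 3: the interior vertices form the unique minimum dominating tree.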
Electrochemical corrosion behavior of carbon steel with bulk coating holidays
Anonymous
2006-01-01
With epoxy coal tar as the coating material, the electrochemical corrosion behavior of Q235 carbon steel with different kinds of bulk coating holidays has been investigated with EIS (Electrochemical Impedance Spectroscopy) in a 3.5 vol% NaCl aqueous solution. The area ratio of bulk coating holiday to total coating area of the steel is 4.91%. The experimental results showed that at the free corrosion potential, the corrosion of carbon steel with a disbonded coating holiday is heavier over time than that with a broken holiday or a disbonded & broken holiday; moreover, the effectiveness of cathodic protection (CP) of carbon steel with a broken holiday is better than that with a disbonded holiday or a disbonded & broken holiday at a CP potential of -850 mV (vs CSE). Further analysis indicated that the two main causes of corrosion are electrolyte solution slowly penetrating the coating, and crevice corrosion at the steel/coating interface near holidays. The ratio of the impedance amplitude (Z) at a given frequency to that at the minimum frequency is defined as the K value. The rate of change of K with frequency is related to the type of coating holiday.
Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker
2007-01-01
quantitative genetics model based on the infinitesimal model, and an extension of this model. In the extended model it is assumed that each individual has its own environmental variance and that this heterogeneity of variance has a genetic component. The heterogeneous variance model was favoured by the data......, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. Also for evolutionary biology the results are of interest...
Loberg, A; Dürr, J W; Fikse, W F; Jorjani, H; Crooks, L
2015-10-01
The amount of variance captured in genetic estimations may depend on whether a pedigree-based or genomic relationship matrix is used. The purpose of this study was to investigate the genetic variance as well as the variance of predicted genetic merits (PGM) using pedigree-based or genomic relationship matrices in Brown Swiss cattle. We examined a range of traits in six populations amounting to 173 population-trait combinations. A main aim was to determine how using different relationship matrices affect variance estimation. We calculated ratios between different types of estimates and analysed the impact of trait heritability and population size. The genetic variances estimated by REML using a genomic relationship matrix were always smaller than the variances that were similarly estimated using a pedigree-based relationship matrix. The variances from the genomic relationship matrix became closer to estimates from a pedigree relationship matrix as heritability and population size increased. In contrast, variances of predicted genetic merits obtained using a genomic relationship matrix were mostly larger than variances of genetic merit predicted using pedigree-based relationship matrix. The ratio of the genomic to pedigree-based PGM variances decreased as heritability and population size rose. The increased variance among predicted genetic merits is important for animal breeding because this is one of the factors influencing genetic progress. © 2015 Blackwell Verlag GmbH.
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection conventionally means minimizing the risk, given a certain level of return, from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on the linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection methods with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach looks better when we work with non-symmetric return probability distributions. Solutions of this method can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
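The CVaR risk measure itself is simple to compute empirically. A minimal sketch on made-up loss data (the GA-based, lot-constrained optimization described in the abstract is not reproduced here):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical conditional value at risk: the mean loss in the
    worst (1 - alpha) tail of the loss distribution."""
    losses = np.sort(np.asarray(losses, float))
    k = int(np.ceil(alpha * len(losses)))
    return losses[k:].mean() if k < len(losses) else losses[-1]

rng = np.random.default_rng(4)
# hypothetical daily losses (negative returns) of a candidate portfolio
losses = rng.normal(0.0, 0.02, 10000)
print(f"95% CVaR = {cvar(losses):.4f}")
```

In a mean-CVaR optimizer this quantity replaces variance in the objective, so only the loss tail, not the full spread of returns, drives the chosen weights.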
Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata
Sztepanacz, Jacqueline L.; Blows, Mark W.
2015-01-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700
78 FR 11793 - Minimum Internal Control Standards
2013-02-20
... that govern cash handling, documentation, game integrity, auditing, surveillance, and variances, as... recently on September 21, 2012. 77 FR 58708. III. Development of the Proposed Rule On September 21, 2012... drop figures must be transferred via direct communications line or computer storage media to the...
Holographic representation of local bulk operators
Hamilton, A; Lifschytz, G; Lowe, D A; Hamilton, Alex; Kabat, Daniel; Lifschytz, Gilad; Lowe, David A.
2006-01-01
The Lorentzian AdS/CFT correspondence implies a map between local operators in supergravity and non-local operators in the CFT. By explicit computation we construct CFT operators which are dual to local bulk fields in the semiclassical limit. The computation is done for general dimension in global, Poincare and Rindler coordinates. We find that the CFT operators can be taken to have compact support in a region of the complexified boundary whose size is set by the bulk radial position. We show that at finite N the number of independent commuting operators localized within a bulk volume saturates the holographic bound.
Regression between earthquake magnitudes having errors with known variances
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
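For homoscedastic errors with a known variance ratio, the closed-form fit discussed above is the classical Deming regression. A sketch on synthetic magnitudes (the data and noise levels are illustrative, not from the paper):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression y = a*x + b when both variables carry error.

    delta is the (assumed known) ratio of the y-error variance to the
    x-error variance; delta = 1 gives orthogonal regression.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

rng = np.random.default_rng(1)
true_x = rng.uniform(4, 7, 200)          # "theoretical" magnitudes
X = true_x + rng.normal(0, 0.1, 200)     # observed magnitudes with error
Y = 1.2 * true_x - 0.5 + rng.normal(0, 0.1, 200)
a, b = deming(X, Y)
print(f"slope ~ {a:.2f}, intercept ~ {b:.2f}")
```

Unlike ordinary least squares, which attenuates the slope when X carries error, this fit recovers the underlying relation between the theoretical values.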
Critical points of multidimensional random Fourier series: Variance estimates
Nicolaescu, Liviu I.
2016-08-01
We investigate the number of critical points of a Gaussian random smooth function u_ε on the m-torus T^m := ℝ^m/ℤ^m approximating the Gaussian white noise as ε → 0. Let N(u_ε) denote the number of critical points of u_ε. We prove the existence of constants C, C' such that as ε goes to zero, the expectation of the random variable ε^m N(u_ε) converges to C, while its variance is extremely small and behaves like C'ε^m.
Stable limits for sums of dependent infinite variance random variables
Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;
2011-01-01
The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most...... of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we will be able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary...
Computing the Expected Value and Variance of Geometric Measures
Staals, Frank; Tsirogiannis, Constantinos
2017-01-01
points in P. This problem is a crucial part of modern ecological analyses; each point in P represents a species in d-dimensional trait space, and the goal is to compute the statistics of a geometric measure on this trait space, when subsets of species are selected under random processes. We present...... efficient exact algorithms for computing the mean and variance of several geometric measures when point sets are selected under one of the described random distributions. More specifically, we provide algorithms for the following measures: the bounding box volume, the convex hull volume, the mean pairwise...
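The exact algorithms of the paper are not reproduced here, but the quantities they compute can be approximated by plain Monte Carlo, which makes a useful reference point. A sketch for the bounding-box volume under uniformly random k-subsets of a hypothetical trait-space point set:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical trait space: 50 "species" as points in 3 dimensions
P = rng.uniform(0, 1, size=(50, 3))

def bbox_volume(points):
    """Volume of the axis-aligned bounding box of a point set."""
    return np.prod(points.max(axis=0) - points.min(axis=0))

# Monte Carlo estimate of the mean and variance of the bounding-box
# volume when a subset of k species is drawn uniformly at random
k, trials = 10, 20000
vols = np.empty(trials)
for t in range(trials):
    idx = rng.choice(len(P), size=k, replace=False)
    vols[t] = bbox_volume(P[idx])

print(f"mean = {vols.mean():.3f}, variance = {vols.var():.4f}")
```

The point of the exact algorithms is to replace this sampling loop, whose error shrinks only as the square root of the number of trials, with closed-form expectations over the random subset distribution.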
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Multivariate variance targeting in the BEKK-GARCH model
Pedersen, Rasmus S.; Rahbæk, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...... to these two steps. Strong consistency is established under weak moment conditions, while sixth-order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are necessary...
A guide to SPSS for analysis of variance
Levine, Gustav
2013-01-01
This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce
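The analyses the book runs through SPSS can also be reproduced from first principles. A minimal one-way ANOVA F statistic in Python (the data are illustrative, not taken from the book):

```python
import numpy as np

def one_way_anova(*groups):
    """Classical one-way analysis of variance: returns the F statistic,
    the ratio of between-group to within-group mean squares."""
    groups = [np.asarray(g, float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# three treatment groups with clearly different means
a = [4.1, 3.9, 4.3, 4.0]
b = [5.0, 5.2, 4.8, 5.1]
c = [6.1, 5.9, 6.2, 6.0]
F = one_way_anova(a, b, c)
print(f"F = {F:.1f}")
```

A large F relative to the F(df_between, df_within) distribution is what an SPSS ANOVA table reports as a significant group effect.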
Variance-optimal hedging for processes with stationary independent increments
Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.
We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...... show that for this class of processes the optimal endowment and strategy can be expressed more explicitly. The corresponding formulas involve the moment resp. cumulant generating function of the underlying process and a Laplace- or Fourier-type representation of the contingent claim. An example...
Two-dimensional finite-element temperature variance analysis
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results, so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.
Local orbitals by minimizing powers of the orbital variance
Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;
2011-01-01
It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may...
A Mean-Variance Portfolio Optimal Under Utility Pricing
Hürlimann, Werner
2006-01-01
An expected utility model of asset choice, which takes into account asset pricing, is considered. The resulting portfolio selection problem under utility pricing is solved under several assumptions, including quadratic utility, exponential utility and multivariate symmetric elliptical returns. The obtained unique solution, called the optimal utility portfolio, is shown to be mean-variance efficient in the classical sense. Various questions, including conditions for complete diversification and the behavior of the optimal portfolio under univariate and multivariate ordering of risks as well as risk-adjusted performance measurement, are discussed.
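The classical mean-variance efficient benchmark that the abstract refers to can be sketched in a few lines: the fully-invested tangency-direction portfolio w ∝ Σ⁻¹μ. The expected-return vector mu and covariance matrix Sigma below are invented toy numbers, not data from the paper.

```python
import numpy as np

# Classical mean-variance efficient portfolio (the benchmark against which
# the "optimal utility portfolio" is judged). Toy expected excess returns
# and covariances; assumptions, not values from the paper.

mu = np.array([0.05, 0.08, 0.03])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.01]])

w_unnorm = np.linalg.solve(Sigma, mu)   # direction of the tangency portfolio
w = w_unnorm / w_unnorm.sum()           # rescale to fully-invested weights
port_var = w @ Sigma @ w
print(w, port_var)
```

Note how the low-variance third asset receives the largest weight, the usual mean-variance behavior when its return is not much lower than the others'.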
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204
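A crude two-part MDL score of the kind discussed above can be illustrated on a much simpler model class than Bayesian networks: choosing a polynomial degree. The BIC-like form n/2·log(RSS/n) + k/2·log(n) used here, the cubic ground truth, and the noise level are all assumptions for the sake of the sketch.

```python
import numpy as np

# Crude two-part MDL (BIC-like form) for picking polynomial degree --
# a toy stand-in for the paper's Bayesian-network setting, showing how the
# description-length penalty trades goodness of fit against complexity.

rng = np.random.default_rng(0)
n = 200
x = np.linspace(-1, 1, n)
y = 1.0 - 2.0*x + 0.5*x**3 + rng.normal(0, 0.3, n)   # cubic truth + noise

def mdl_score(deg):
    coeffs = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coeffs, x) - y)**2)
    k = deg + 1                                      # number of parameters
    return 0.5*n*np.log(rss/n) + 0.5*k*np.log(n)     # fit term + complexity term

scores = {d: mdl_score(d) for d in range(10)}
best = min(scores, key=scores.get)
print(best, scores[best])
```

The selected degree sits in the balanced middle: high degrees fit the noise (low bias, high variance) and are penalized, while degree 0 underfits badly.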
LI Yan; SHI Zhou; WU Ci-fang; LI Feng; LI Hong-yi
2007-01-01
The acquisition of precise soil data representative of the entire survey area is a critical issue for many treatments, such as irrigation or fertilization, in precision agriculture. The aim of this study was to investigate the spatial variability of soil bulk electrical conductivity (ECb) in a coastal saline field and to design an optimized spatial sampling scheme of ECb based on a sampling design algorithm, the variance quad-tree (VQT) method. Soil ECb data were collected from the field at 20 m intervals in a regular grid scheme. The smooth contour map of the whole field was obtained by ordinary kriging interpolation; the VQT algorithm was then used to split the smooth contour map into the desired number of strata, and sampling locations can be selected within each stratum in subsequent sampling. The results indicated that the probability of choosing representative sampling sites was increased significantly by using the VQT method, with the sampling number greatly reduced compared to grid sampling design while retaining the same prediction accuracy. The advantage of the VQT method is that this scheme samples sparsely in fields where the spatial variability is relatively uniform and more intensively where the variability is large. Thus the sampling efficiency can be improved, facilitating an assessment methodology that can be applied in a rapid, practical and cost-effective manner.
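The stratification idea behind the VQT method can be sketched as follows: repeatedly split the stratum with the largest within-stratum variance into four quadrants until the desired number of strata is reached. The real VQT operates on a kriged contour map with a variance criterion per stratum; this minimal version works directly on a synthetic grid and is an assumption-laden simplification.

```python
import numpy as np

# Minimal variance quad-tree (VQT) stratification sketch: split the stratum
# with the largest variance into quadrants until n_strata strata exist.

def vqt(grid, n_strata):
    strata = [(0, grid.shape[0], 0, grid.shape[1])]   # (r0, r1, c0, c1)
    while len(strata) < n_strata:
        # only strata at least 2x2 can be split further
        cand = [s for s in strata if s[1]-s[0] >= 2 and s[3]-s[2] >= 2]
        if not cand:
            break
        r0, r1, c0, c1 = max(cand, key=lambda s: grid[s[0]:s[1], s[2]:s[3]].var())
        strata.remove((r0, r1, c0, c1))
        rm, cm = (r0+r1)//2, (c0+c1)//2
        strata += [(r0, rm, c0, cm), (r0, rm, cm, c1),
                   (rm, r1, c0, cm), (rm, r1, cm, c1)]
    return strata

rng = np.random.default_rng(1)
field = rng.normal(0, 1, (16, 16))
field[:8, :8] += rng.normal(0, 5, (8, 8))   # one corner with high variability
strata = vqt(field, 7)
print(len(strata))
```

As the abstract describes, the refinement concentrates where variability is large: the noisy top-left quadrant ends up split into four strata while each uniform quadrant stays whole.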
Superconducting bulk magnets for magnetic levitation systems
Fujimoto, H.; Kamijo, H.
2000-06-01
The major applications of high-temperature superconductors have mostly been confined to products in the form of wires and thin films. However, recent developments show that rare-earth REBa2Cu3O7-x and light rare-earth LREBa2Cu3O7-x superconductors prepared by melt processes have a high critical-current density at 77 K and high magnetic fields. These superconductors will promote the application of bulk high-temperature superconductors in high magnetic fields; the superconducting bulk magnet for the Maglev train is one possible application. We investigated the possibility of using bulk magnets in the Maglev system, and examined the flux-trapping characteristics of multiple superconducting bulks arranged in an array.
Nowcasting daily minimum air and grass temperature
Savage, M. J.
2016-02-01
Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures from hours earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square-root functions to describe the rate of nighttime temperature decrease, inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts vs measured daily minimum air temperatures yielded root mean square errors (RMSEs), as did comparisons for grass minimum temperature and the 4-h nowcasts.
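The square-root variant of the cooling model mentioned above is convenient because it is linear in sqrt(t) and can be fitted by ordinary least squares, then extrapolated to sunrise. The following sketch uses a synthetic cooling curve; the model form T(t) = T_s − a·sqrt(t) follows the abstract, but the coefficients, noise level and sunrise time are all invented.

```python
import numpy as np

# Square-root nighttime-cooling nowcast sketch: fit T(t) = T_s - a*sqrt(t)
# to early-morning sub-hourly temperatures, extrapolate to sunrise to
# nowcast the daily minimum. Synthetic data, illustrative parameters.

t = np.arange(0.0, 6.0, 0.5)              # hours since cooling onset
T_true = 14.0 - 2.5*np.sqrt(t)            # synthetic cooling curve
rng = np.random.default_rng(2)
T_obs = T_true + rng.normal(0, 0.1, t.size)

# Linear least squares in sqrt(t): columns [1, -sqrt(t)] give (T_s, a)
A = np.vstack([np.ones_like(t), -np.sqrt(t)]).T
(T_s, a), *_ = np.linalg.lstsq(A, T_obs, rcond=None)

t_sunrise = 9.0                           # hours from onset to sunrise
T_min_nowcast = T_s - a*np.sqrt(t_sunrise)
print(round(T_min_nowcast, 2))
```

The nowcast is an extrapolation, so its error grows with the gap between the last observation and sunrise, which is why the abstract distinguishes 2-h from 4-h lead times.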
Replica approach to mean-variance portfolio optimization
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, with N the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
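The in-sample/out-of-sample divergence near r = N/T = 1 is easy to reproduce by Monte Carlo for the global minimum-variance portfolio. This toy experiment, with the true covariance taken to be the identity, is an illustration under those assumptions, not the paper's replica calculation.

```python
import numpy as np

# Monte Carlo illustration of the r = N/T -> 1 pathology: the in-sample
# variance of the estimated minimum-variance portfolio vanishes while its
# true (out-of-sample) variance blows up. True covariance = identity.

def min_var_experiment(N, T, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(T, N))
    S = X.T @ X / T                       # sample covariance from T observations
    ones = np.ones(N)
    w = np.linalg.solve(S, ones)
    w /= w.sum()                          # budget constraint: weights sum to 1
    in_sample = w @ S @ w                 # optimistic in-sample variance
    out_sample = w @ w                    # true variance under identity covariance
    return in_sample, out_sample

results = {T: min_var_experiment(N=100, T=T) for T in (400, 110)}
for T, (ins, outs) in results.items():
    print(T, round(ins, 4), round(outs, 4))
```

At T = 110 (r ≈ 0.91) the in-sample variance is far smaller and the out-of-sample variance far larger than at T = 400 (r = 0.25), matching the 1/(1 − r) blow-up the abstract describes.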
DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL
I GEDE ERY NISCAHYANA
2016-08-01
When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean-variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean-variance model for autocorrelated and heteroscedastic returns was discussed. The aim of this thesis was to assess the effect of the autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock and 1% of SMGR stock.
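The conditional-variance recursion at the heart of the GARCH(1,1) model used above is σ²ₜ = ω + α·r²ₜ₋₁ + β·σ²ₜ₋₁. The sketch below simulates it with normal innovations; the parameter values are illustrative assumptions, not estimates for the four stocks in the paper.

```python
import numpy as np

# GARCH(1,1) simulation: heteroscedastic returns whose conditional variance
# follows sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1].
# Illustrative parameters; not fitted values from the paper.

def simulate_garch(n, omega=0.05, alpha=0.1, beta=0.85, seed=3):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    sigma2 = np.empty(n)
    r = np.empty(n)
    sigma2[0] = omega / (1 - alpha - beta)   # unconditional variance (= 1 here)
    r[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
        r[t] = np.sqrt(sigma2[t]) * z[t]
    return r, sigma2

r, sigma2 = simulate_garch(20000)
print(round(r.var(), 3), round(sigma2.mean(), 3))
```

The sample variance of the simulated returns hovers around the unconditional value ω/(1 − α − β), while the conditional variance clusters, which is exactly the heteroscedasticity the conditional mean-variance portfolio construction exploits.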
Facial Feature Extraction Method Based on Coefficients of Variances
Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang
2007-01-01
Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. The Nullspace Method tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
Cosmic variance of the galaxy cluster weak lensing signal
Gruen, D; Becker, M R; Friedrich, O; Mana, A
2015-01-01
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m=10^14...10^15 h^-1 M_sol, z=0.25...0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ~20 per cent uncertainty from cosmic variance alone at M_200m=10^15 h^-1 M_sol and z=0.25), but significant also...
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
Lee, Tae Hee; Kim, Ho Sung [Hanyang University, Seoul (Korea, Republic of)
2010-05-15
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean_0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean_0 validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stop criterion for sequential sampling of metamodels.
Infinite Variance in Fermion Quantum Monte Carlo Calculations
Shi, Hao
2015-01-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...
Deterministic mean-variance-optimal consumption and investment
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...
The Variance of Energy Estimates for the Product Model
David Smallwood
2003-01-01
A nonstationary random process, {x(t)}, which is the product of a slowly varying random window, {w(t)}, and a stationary random process, {g(t)}, is defined. A single realization of the process will be denoted x(t). This is slightly different from the usual definition of the product model, where the window is typically deterministic. An estimate of the energy (the zero-order temporal moment; only in special cases is this the physical energy) of the random process {x(t)} is defined as m0 = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |w(t)g(t)|² dt. Relationships for the mean and variance of the energy estimate, m0, are then developed. It is shown that for many cases the uncertainty (4π times the product of the rms duration, Dt, and the rms bandwidth, Df) is approximately the inverse of the normalized variance of the energy. The uncertainty is a quantitative measure of the expected error in the energy estimate. If a transient has a significant random component, a small uncertainty parameter implies a large error in the energy estimate. Attempts to resolve a time/frequency spectrum near the uncertainty limits of a transient with a significant random component will result in large errors in the spectral estimates.
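The mean and normalized variance of the energy estimate m0 can be checked numerically for the special case the abstract contrasts with its random window: a deterministic Gaussian window times unit-variance white Gaussian noise. The window shape, sampling step and realization count below are assumptions for the sketch.

```python
import numpy as np

# Monte Carlo check of the energy estimate m0 = integral |w(t)g(t)|^2 dt
# for the product model, discretized, with a deterministic Gaussian window
# and white Gaussian g(t). For unit-variance g, E[m0] = sum(w^2)*dt.

rng = np.random.default_rng(4)
dt = 0.01
t = np.arange(-2, 2, dt)
w = np.exp(-t**2)                      # deterministic window
n_real = 2000

m0 = np.empty(n_real)
for i in range(n_real):
    g = rng.normal(0, 1, t.size)       # unit-variance white noise realization
    m0[i] = np.sum((w * g)**2) * dt

expected = np.sum(w**2) * dt           # theoretical mean of m0
norm_var = m0.var() / m0.mean()**2     # normalized variance of the estimate
print(round(m0.mean(), 3), round(expected, 3), round(norm_var, 4))
```

The small normalized variance here corresponds to a large duration-bandwidth product for white noise; per the abstract, the normalized variance scales roughly as the inverse of the uncertainty 4π·Dt·Df.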
Cosmic variance in the nanohertz gravitational wave background
Roebber, Elinore; Holz, Daniel; Warren, Michael
2015-01-01
We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes to estimate the formation rates of supermassive black hole binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive ($\\lesssim10\\%$) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of $\\sim 2$. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly-generated populations of supermassive black holes, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of $\\sim 10$ near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty ...
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using the weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Worldwide variance in the potential utilization of Gamma Knife radiosurgery.
Hamilton, Travis; Dade Lunsford, L
2016-12-01
OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.
VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM
RANJU KANWAR
2013-04-01
Full Text Available In communication system, the noise process must be known, in order to compute the system performance. The nonlinear effects act as strong perturbation in long- haul system. This perturbation effects the signal, when interact with amplitude noise, and results in random motion of the phase of the signal. Based on the perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation, is derived analytically for phase-shift- keying system. Through this work, it is investigated that for longer transmission distance, 40-Gb/s systems are more sensitive to nonlinear phase noise as compared to 50-Gb/s systems. Also, when transmitting the data through the fiber optic link, bit errors are produced due to various effects such as noise from optical amplifiers and nonlinearity occurring in fiber. On the basis of the simulation results , we have compared the bit error rate based on 8-PSK with theoretical results, and result shows that in real time approach, the bit error rate is high for the same signal to noise ratio. MATLAB software is used to validate the analytical expressions for the variance of nonlinear phase noise.
Hidden temporal order unveiled in stock market volatility variance
Y. Shapira
2011-06-01
When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently much effort has been devoted to unveiling hidden temporal order in the market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances and means of segments of these daily-return time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that while the temporal order does not show up in the series of daily returns themselves, it does in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series has large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions of three different slopes.
The Bulk Multicore Architecture for Improved Programmability
2009-12-01
...algorithm, forcing the same order of chunk commits as in the recording step. This design, which we call PicoLog, typically incurs a performance cost. Data-race detection at production-run speed: the Bulk Multicore can support an efficient data-race detector based on the "happens-before" relation. [Figure: Bulk Multicore (a), with a possible OrderOnly execution log (b) and PicoLog execution log (c).] Contributed Articles, December 2009, Vol. 52.
Prospects for Detecting a Cosmic Bulk Flow
Rose, Benjamin; Garnavich, Peter M.; Mathews, Grant James
2015-01-01
The ΛCDM model is based upon a homogeneous, isotropic space-time leading to uniform expansion with random peculiar velocities caused by local gravitational perturbations. The Cosmic Microwave Background (CMB) radiation evidences a significant dipole moment in the frame of the Local Group. This motion is usually explained by the Local Group's motion relative to the background Hubble expansion. An alternative explanation, however, is that the dipole moment is the result of horizon-scale curvature remaining from the birth of space-time, possibly a result of quantum entanglement with another universe. This would appear as a single velocity (a bulk flow) added to all points in space. These two explanations differ observationally on cosmic distance scales (z > 0.1). There have been many differing attempts to detect a bulk flow, many with no detectable bulk flow but some with a bulk flow velocity as large as 1000 km/s. Here we report on a technique based upon minimizing the scatter around the expected cosine distribution of the Hubble redshift residuals with respect to angular distance on the sky. That is, the algorithm searches for a directional dependence of Hubble residuals. We find results consistent with most other bulk flow detections at z [...] Type Ia supernovae to be ~0.01, whereas the current error (~0.2) is more than an order of magnitude too large for the detection of bulk flow beyond z ~ 0.05.
On the Smoothed Minimum Error Entropy Criterion
Badong Chen
2012-11-01
Recent studies suggest that the minimum error entropy (MEE) criterion can outperform the traditional mean square error criterion in supervised machine learning, especially in nonlinear and non-Gaussian situations. In practice, however, one has to estimate the error entropy from the samples, since in general the analytical evaluation of error entropy is not possible. By the Parzen windowing approach, the estimated error entropy converges asymptotically to the entropy of the error plus an independent random variable whose probability density function (PDF) corresponds to the kernel function in the Parzen method. This quantity of entropy is called the smoothed error entropy, and the corresponding optimality criterion is named the smoothed MEE (SMEE) criterion. In this paper, we study theoretically the SMEE criterion in supervised machine learning where the learning machine is assumed to be nonparametric and universal. Some basic properties are presented. In particular, we show that when the smoothing factor is very small, the smoothed error entropy equals approximately the true error entropy plus a scaled version of the Fisher information of the error. We also investigate how the smoothing factor affects the optimal solution. In some special situations, the optimal solution under the SMEE criterion does not change with increasing smoothing factor. In general cases, when the smoothing factor tends to infinity, minimizing the smoothed error entropy becomes approximately equivalent to minimizing the error variance, regardless of the conditional PDF and the kernel.
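A Parzen-window error-entropy estimate of the kind discussed above can be sketched with the quadratic Renyi entropy commonly used in MEE practice, H2(e) = −log V(e) with information potential V(e) = (1/n²)·Σᵢⱼ G(eᵢ − eⱼ; 2σ²). The Gaussian-kernel choice and the σ value below are assumptions of the sketch, and quadratic Renyi entropy is a stand-in for the Shannon entropy analyzed in the paper.

```python
import numpy as np

# Parzen-window estimate of the quadratic Renyi error entropy:
# H2(e) = -log V(e), where V(e) averages a Gaussian kernel of variance
# 2*sigma^2 over all pairs of error samples. sigma is the smoothing factor.

def renyi_entropy(e, sigma=0.5):
    d = e[:, None] - e[None, :]
    k = np.exp(-d**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return -np.log(k.mean())

rng = np.random.default_rng(5)
small_err = rng.normal(0, 0.2, 500)    # concentrated errors
large_err = rng.normal(0, 1.0, 500)    # dispersed errors
h_small = renyi_entropy(small_err)
h_large = renyi_entropy(large_err)
print(round(h_small, 3), round(h_large, 3))
```

Concentrated errors give a larger information potential and hence lower smoothed entropy, so minimizing this quantity concentrates the errors; and consistent with the abstract, as σ grows the kernel washes out everything but the error spread, so the criterion degenerates toward variance minimization.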
Impact of Cosmic Variance on the Galaxy-Halo Connection for Lyman-$\\alpha$ Emitters
Mejia-Restrepo, Julian E
2016-01-01
In this paper we study the impact of cosmic variance and observational uncertainties in constraining the mass and occupation fraction, $f_{\\rm occ}$, of dark matter halos hosting Ly-$\\alpha$ Emitting Galaxies (LAEs) at high redshift. To this end, we construct mock catalogs from an N-body simulation to match the typical size of observed fields at $z=3.1$ ($\\sim 1 {\\rm deg^2}$). In our model a dark matter halo with mass in the range $M_{\\rm min}
Deep solar minimum and global climate changes
Ahmed A. Hady
2013-05-01
This paper examines the deep minimum of solar cycle 23 and its potential impact on climate change. In addition, the source region of the solar winds at solar activity minimum, especially in solar cycle 23, the deepest during the last 500 years, has been studied. Solar activity has had notable effects on palaeoclimatic changes. Contemporary solar activity is weak and hence expected to cause global cooling. Prevalent global warming, caused by the build-up of greenhouse gases in the troposphere, seems to exceed this solar effect. This paper discusses this issue.
A minimum achievable PV electrical generating cost
Sabisky, E.S. [11 Carnation Place, Lawrenceville, NJ 08648 (United States)]
1996-03-22
The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990 $) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990 $) was estimated as a minimum module manufacturing cost/price.
Weight-Constrained Minimum Spanning Tree Problem
Henn, Sebastian Tobias
2007-01-01
In an undirected graph G we associate costs and weights with each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and of minimum total cost under this restriction. In this thesis, a literature overview of this NP-hard problem and theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some inclusion and exclusion tests for this problem. We apply a ranking algorithm and the me...
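The Lagrangian relaxation mentioned in the abstract can be sketched concretely: dualize the weight constraint with a multiplier λ, solve an ordinary MST under the combined key cost + λ·weight, and bisect λ until the budget W is met. The toy graph below is hypothetical; the result is a feasible tree and a heuristic, not necessarily the exact optimum of this NP-hard problem.

```python
def kruskal(n, edges, key):
    """Minimum spanning tree by Kruskal's algorithm; key maps an edge to a scalar."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for e in sorted(edges, key=key):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def lagrangian_wcmst(n, edges, W, iters=60):
    """Bisect the Lagrange multiplier lam: large lam emphasizes weight
    (feasibility), small lam emphasizes cost; keep the cheapest feasible tree."""
    lo, hi = 0.0, 1e6
    best = None
    for _ in range(iters):
        lam = (lo + hi) / 2
        t = kruskal(n, edges, key=lambda e: e[2] + lam * e[3])
        if sum(e[3] for e in t) <= W:
            c = sum(e[2] for e in t)
            if best is None or c < best[0]:
                best = (c, t)
            hi = lam          # feasible: try a smaller penalty for lower cost
        else:
            lo = lam          # infeasible: penalize weight harder
    return best

# Hypothetical instance; edges are (u, v, cost, weight)
edges = [(0, 1, 1, 5), (1, 2, 1, 5), (0, 2, 3, 1), (2, 3, 2, 2), (1, 3, 4, 1)]
print(lagrangian_wcmst(4, edges, W=8))
```

On this instance the unconstrained MST (cost 4) weighs 12 and violates W = 8; the relaxation finds a feasible tree of cost 6.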
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10(16) electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed
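For readers unfamiliar with the ordinary Kriging baseline the authors compare against, here is a minimal Python sketch: the OK system is built from a semivariogram γ(h), with a Lagrange multiplier forcing the weights to sum to one. The exponential semivariogram and the TEC-like sample values are hypothetical, not CMONOC data.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram):
    """Ordinary Kriging prediction at xy0 from observations (xy, z).

    Solves [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1], where Gamma holds pairwise
    semivariogram values and gamma0 the values to the target point; returns
    the estimate w.z and the Kriging variance w.gamma0 + mu.
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z, w @ b       # estimate, Kriging variance

# Hypothetical TEC values (TECU) at four stations, exponential semivariogram
gamma = lambda h: 10.0 * (1 - np.exp(-h / 50.0))
pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
tec = np.array([25.0, 30.0, 28.0, 33.0])
est, kvar = ordinary_kriging(pts, tec, np.array([50.0, 50.0]), gamma)
print(est, kvar)
```

By symmetry the center-point weights are all 1/4, so the estimate is the plain mean of the four observations; the paper's contribution is estimating the unknown variance components that this baseline takes as given.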
Estimation of measurement variance in the context of environment statistics
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce higher-quality statistical information. For this, timely, reliable and comparable data are needed. A lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring good-quality data. These cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling design is two-stage sampling.
Diffusion-Based Trajectory Observers with Variance Constraints
Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo
Diffusion-based trajectory observers have recently been proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation, for instance to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system and velocity measurements from a DVL. The observers are conceptually simple and can easily deal with the problems brought about by the occurrence of asynchronous measurements and dropouts. In their original formulation, the trajectory observers depend on a user-defined constant gain that controls the level of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented.
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combine ideas that lead to very fast estimation procedures with another approach called the zero-variance approximation. Both ideas produce a very efficient method that has the right theoretical property concerning robustness, namely the Bounded Relative Error one. Some examples illustrate the results.
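The zero-variance idea can be demonstrated on a textbook rare-event problem where the optimal change of measure is known in closed form (a standard exponential-tail example, not taken from the talk): crude Monte Carlo essentially never observes the event, while sampling from the zero-variance measure makes every sample return the exact probability.

```python
import math
import random

def crude_mc(t, n, rng):
    """Crude Monte Carlo estimate of p = P(X > t) for X ~ Exp(1)."""
    return sum(rng.expovariate(1.0) > t for _ in range(n)) / n

def zero_variance_is(t, n, rng):
    """Importance sampling under the zero-variance change of measure for
    this toy problem: draw X from the conditional law of X given X > t
    (an Exp(1) shifted to start at t).  The likelihood ratio
    f(x)/g(x) = exp(-x)/exp(-(x - t)) = exp(-t) is constant, so the
    estimator has literally zero variance."""
    est = 0.0
    for _ in range(n):
        x = t + rng.expovariate(1.0)              # sample from g
        est += math.exp(-x) / math.exp(-(x - t))  # indicator(X > t) is 1
    return est / n

rng = random.Random(42)
t = 20.0                       # p = exp(-20) ~ 2.1e-9: a rare event
print(crude_mc(t, 10000, rng))             # almost surely 0.0
print(zero_variance_is(t, 10000, rng), math.exp(-t))
```

In realistic dependability models the zero-variance measure is unknown, which is why the talk combines fast estimation procedures with an *approximation* of it.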
INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND
TenBarge, J. M.; Klein, K. G.; Howes, G. G. [Department of Physics and Astronomy, University of Iowa, Iowa City, IA (United States); Podesta, J. J., E-mail: jason-tenbarge@uiowa.edu [Space Science Institute, Boulder, CO (United States)
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvenic in the inertial and dissipation ranges to scales of kρ_i ≈ 5.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
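The velocity-area measurements the IVE operates on follow the standard mid-section method: each vertical gets the width extending halfway to its neighbours, and the partial discharges q_i = v_i · d_i · w_i are summed. A minimal sketch with hypothetical verticals:

```python
def midsection_discharge(stations, depths, velocities):
    """Velocity-area discharge by the mid-section method.

    stations: distances across the channel (m); depths (m); point
    velocities (m/s).  Vertical i is assigned the width from midway to its
    left neighbour to midway to its right neighbour."""
    n = len(stations)
    q = 0.0
    for i in range(n):
        left = stations[0] if i == 0 else (stations[i - 1] + stations[i]) / 2
        right = stations[-1] if i == n - 1 else (stations[i] + stations[i + 1]) / 2
        q += velocities[i] * depths[i] * (right - left)
    return q

# Hypothetical cross-section: station (m), depth (m), point velocity (m/s)
x = [0.0, 1.0, 2.0, 3.0, 4.0]
d = [0.0, 0.8, 1.2, 0.9, 0.0]
v = [0.0, 0.5, 0.7, 0.4, 0.0]
print(midsection_discharge(x, d, v))  # discharge in m^3/s
```

The IVE's contribution is to estimate the uncertainty of such a measurement from the scatter of these same at-site verticals rather than from laboratory-derived error tables.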
MARKOV-MODULATED MEAN-VARIANCE PROBLEM FOR AN INSURER
Wang Wei; Bi Junna
2011-01-01
In this paper, we consider an insurance company which has the option of investing in a risky asset and a risk-free asset, whose price parameters are driven by a finite state Markov chain. The risk process of the insurance company is modeled as a diffusion process whose diffusion and drift parameters switch over time according to the same Markov chain. We study the Markov-modulated mean-variance problem for the insurer and derive explicitly the closed form of the efficient strategy and efficient frontier. In the case of no regime switching, we can see that the efficient frontier in our paper coincides with that of [10] when there is no pure jump.
Variance component estimates for alternative litter size traits in swine.
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2015-11-01
Litter size at d 5 (LS5) has been shown to be an effective trait to increase total number born (TNB) while simultaneously decreasing preweaning mortality. The objective of this study was to determine the optimal litter size day for selection (i.e., other than d 5). Traits included TNB, number born alive (NBA), litter size at d 2, 5, 10, 30 (LS2, LS5, LS10, LS30, respectively), litter size at weaning (LSW), number weaned (NW), piglet mortality at d 30 (MortD30), and average piglet birth weight (BirthWt). Litter size traits were assigned to biological litters and treated as a trait of the sow. In contrast, NW was the number of piglets weaned by the nurse dam. Bivariate animal models included farm, year-season, and parity as fixed effects. Number born alive was fit as a covariate for BirthWt. Random effects included additive genetics and the permanent environment of the sow. Variance components were plotted for TNB, NBA, and LS2 to LS30 using univariate animal models to determine how variances changed over time. Additive genetic variance was minimized at d 7 in Large White and at d 14 in Landrace pigs. Total phenotypic variance for litter size traits decreased over the first 10 d and then stabilized. Heritability estimates increased between TNB and LS30. Genetic correlations between TNB, NBA, and LS2 to LS29 with LS30 plateaued within the first 10 d. A genetic correlation with LS30 of 0.95 was reached at d 4 for Large White and at d 8 for Landrace pigs. Heritability estimates ranged from 0.07 to 0.13 for litter size traits and MortD30. Birth weight had an h² of 0.24 and 0.26 for Large White and Landrace pigs, respectively. Genetic correlations among LS30, LSW, and NW ranged from 0.97 to 1.00. In the Large White breed, genetic correlations between MortD30 with TNB and LS30 were 0.23 and -0.64, respectively. These correlations were 0.10 and -0.61 in the Landrace breed. A high genetic correlation of 0.98 and 0.97 was observed between LS10 and NW for Large White and
From Means and Variances to Persons and Patterns
James W Grice
2015-07-01
A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based path models in use today, which limit the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to the best explanation.
Mean and variance of coincidence counting with deadtime
Yu, D F
2002-01-01
We analyze the first and second moments of the coincidence-counting process for a system affected by paralyzable (extendable) deadtime with (possibly unequal) deadtimes in each singles channel. We consider both 'accidental' and 'genuine' coincidences, and derive exact analytical expressions for the first and second moments of the number of recorded coincidence events under various scenarios. The results include an exact form for the coincidence rate under the combined effects of decay, background, and deadtime. The analysis confirms that coincidence counts are not exactly Poisson, but suggests that the Poisson statistical model that is used for positron emission tomography image reconstruction is a reasonable approximation since the mean and variance are nearly equal.
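The paralyzable-deadtime behaviour described above is easy to check numerically. The sketch below (hypothetical singles rate and deadtime, one channel) simulates Poisson arrivals, applies an extendable deadtime so that an arrival is recorded only when the preceding inter-arrival gap exceeds τ, and compares the recorded-count mean and variance against the classical paralyzable rate λ·e^(−λτ):

```python
import math
import random

def simulate_paralyzable(rate, tau, T, rng):
    """Recorded counts in [0, T] under paralyzable (extendable) deadtime:
    every arrival extends the dead period, so an arrival is recorded only
    if the gap since the previous arrival exceeds tau."""
    t, last, recorded = 0.0, -float("inf"), 0
    while True:
        t += rng.expovariate(rate)      # next Poisson arrival
        if t > T:
            return recorded
        if t - last > tau:
            recorded += 1
        last = t

rng = random.Random(1)
rate, tau, T = 100.0, 0.002, 10.0       # hypothetical values
counts = [simulate_paralyzable(rate, tau, T, rng) for _ in range(400)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
expected = rate * math.exp(-rate * tau) * T   # classic paralyzable formula
print(mean, var, expected)
```

Consistent with the abstract, the counts are close to, but not exactly, Poisson: the mean tracks λ·e^(−λτ)·T while the variance comes out somewhat below the mean.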
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in dwelling atmospheres is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes and durations of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to the characterization of the world population's radon exposure is discussed.
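The geometric standard deviation used above as the dispersion measure is computed on the log scale, which suits the approximately lognormal distribution of indoor radon. A minimal sketch with hypothetical concentrations:

```python
import math

def geometric_stats(values):
    """Geometric mean (GM) and geometric standard deviation (GSD) of
    positive data: take logs, compute mean and sample SD, exponentiate.
    For lognormal data, GM and GSD summarize the distribution the way
    mean and SD do for normal data (GSD is dimensionless, >= 1)."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (n - 1))
    return math.exp(mu), math.exp(sd)

# Hypothetical indoor radon concentrations in Bq/m^3
radon = [20.0, 35.0, 50.0, 80.0, 120.0, 200.0, 310.0]
gm, gsd = geometric_stats(radon)
print(gm, gsd)
```

Restricting a survey to a homogeneous subgroup (one territory, one building type) shrinks the spread of the logs and hence the GSD, which is the effect the abstract reports.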
Risk Management - Variance Minimization or Lower Tail Outcome Elimination
Aabo, Tom
2002-01-01
This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less, depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
Objective Bayesian Comparison of Constrained Analysis of Variance Models.
Consonni, Guido; Paroli, Roberta
2016-10-04
In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.
Batch variation between branchial cell cultures: An analysis of variance
Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.
2003-01-01
We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). By introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results when we do not know a priori that something went wrong. The ANOVA is a very useful...
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. These procedures are frequently misused because the conditions of the experiments or the statistical assumptions necessary to apply them are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
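A one-way repeated measures ANOVA can be computed by hand to make the sums-of-squares partition explicit: total variability splits into condition, subject and residual components, and the F statistic compares the condition and residual mean squares. The scores below are hypothetical; sphericity is assumed and no follow-up comparisons are shown.

```python
import numpy as np

def rm_anova_F(data):
    """One-way repeated measures ANOVA F statistic.

    data: subjects x conditions array.  Under H0 (equal condition means),
    F ~ F(k-1, (k-1)(n-1)) assuming sphericity; removing the subject sum
    of squares from the error term is what distinguishes this from a
    between-subjects one-way ANOVA."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

# Hypothetical data: 5 subjects each measured under 3 conditions
scores = [[8, 7, 6], [9, 9, 7], [7, 6, 4], [8, 6, 5], [9, 8, 7]]
F, df1, df2 = rm_anova_F(scores)
print(F, df1, df2)  # compare F against the F(df1, df2) critical value
```

Because each subject serves as their own control, the subject-to-subject variability is removed from the error term, which is exactly why misapplying an ordinary ANOVA to repeated measurements inflates the error and misstates the test.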
Hodological resonance, hodological variance, psychosis and schizophrenia: A hypothetical model
Paul Brian eLawrie Birkett
2011-07-01
Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However, it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network ("hodological resonance") becomes so high that it activates spontaneously, to produce a hallucination if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation, such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example Charles Bonnet syndrome. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently, some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizoaffective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy and the persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular, it suggests a need for more research into psychotic states and for more single-case-based studies in schizophrenia.
Minimum training requirement in ultrasound imaging of peripheral arterial disease
Eiberg, J P; Hansen, M A; Grønvall Rasmussen, J B
2008-01-01
To demonstrate the minimum training requirement when performing ultrasound of peripheral arterial disease.
Completeness properties of the minimum uncertainty states
Trifonov, D. A.
1993-01-01
The completeness properties of the Schrödinger minimum uncertainty states (SMUS) and of some of their subsets are considered. The invariant measures and the resolution-of-unity measures for the set of SMUS are constructed, and the representation of squeezing and correlating operators and of SMUS as superpositions of Glauber coherent states on the real line is elucidated.
Minimum Wage Effects throughout the Wage Distribution
Neumark, David; Schweitzer, Mark; Wascher, William
2004-01-01
This paper provides evidence on a wide set of margins along which labor markets can adjust in response to increases in the minimum wage, including wages, hours, employment, and ultimately labor income. Not surprisingly, the evidence indicates that low-wage workers are most strongly affected, while higher-wage workers are little affected. Workers…
A Minimum Relative Entropy Principle for AGI
Ven, Antoine van de; Schouten, Ben
2010-01-01
In this paper the principle of minimum relative entropy (PMRE) is proposed as a fundamental principle and idea that can be used in the field of AGI. It is shown to have a very strong mathematical foundation, that it is even more fundamental than Bayes' rule or MaxEnt alone, and that it can be related
What's Happening in Minimum Competency Testing.
Frahm, Robert; Covington, Jimmie
An examination of the current status of minimum competency testing is presented in a series of short essays, which discuss case studies of individual school systems and state approaches. Sections are also included on the viewpoints of critics and supporters, teachers and teacher organizations, principals and students, and the federal government.…
Minimum Bias and Underlying Event at CMS
Fano, Livio
2006-01-01
The prospects of measuring minimum bias (MB) collisions and studying the underlying event (UE) at CMS are discussed. Two methods are described. The first is based on the measurement of charged tracks in the transverse region with respect to a charged-particle jet. The second technique relies on the selection of muon-pair events from the Drell-Yan process.
44 CFR 62.6 - Minimum commissions.
2010-10-01
... ADJUSTMENT OF CLAIMS Issuance of Policies § 62.6 Minimum commissions. (a) The earned commission which shall be paid to any property or casualty insurance agent or broker duly licensed by a state insurance regulatory authority, with respect to each policy or renewal the agent duly procures on behalf of the...
Context quantization by minimum adaptive code length
Forchhammer, Søren; Wu, Xiaolin
2007-01-01
Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols...
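The adaptive code length criterion can be made concrete with the Krichevsky-Trofimov sequential estimator, a standard choice for adaptive coding of binary symbols: each bit is coded with a probability derived from the symbol counts seen so far in its (quantized) context, and the code length totals −log2 of those probabilities. The bit stream and context labels below are hypothetical.

```python
import math

def adaptive_code_length(bits, contexts):
    """Adaptive code length (in bits) of a binary sequence under a context
    model, using the Krichevsky-Trofimov sequential estimator per context:
    P(x_t = s | context c) = (n_s(c) + 1/2) / (n_0(c) + n_1(c) + 1)."""
    counts = {}   # context -> (n0, n1) counts seen so far
    L = 0.0
    for x, c in zip(bits, contexts):
        n0, n1 = counts.get(c, (0, 0))
        p = ((n1 if x else n0) + 0.5) / (n0 + n1 + 1.0)
        L -= math.log2(p)
        counts[c] = (n0 + (x == 0), n1 + (x == 1))
    return L

# Toy: a single context; an all-ones sequence becomes cheap to code
bits = [1] * 16
L = adaptive_code_length(bits, [0] * 16)
print(L)  # well below 16 bits: the estimator learns the bias
```

Merging contexts (quantization) pools counts, which combats context dilution at the price of modeling accuracy; choosing the merge that minimizes exactly this adaptive code length is the design criterion the abstract describes.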
Time Crystals from Minimum Time Uncertainty
Faizal, Mir; Das, Saurya
2016-01-01
Motivated by the Generalized Uncertainty Principle, covariance, and a minimum measurable time, we propose a deformation of the Heisenberg algebra, and show that this leads to corrections to all quantum mechanical systems. We also demonstrate that such a deformation implies a discrete spectrum for time. In other words, time behaves like a crystal.
Minimum impact house prototype for sustainable building
Drexler, H.; Jauslin, D.
2010-01-01
The Minihouse is a prototype for a sustainable townhouse. On a site of only 29 sqm it offers 154 sqm of urban life. The project 'Minimum Impact House' addresses two important questions: How do we provide living space in the cities without destroying the landscape? How do we sustainably improve the ecolo
ASSESSMENT OF ANNUAL MINIMUM TEMPERATURE IN SOME ...
2016-04-11
Apr 11, 2016 ... This work attempts to investigate the pattern of minimum temperature from 19 1 to 2006; an attempt was also .... Similarly, the heavy cloud cover acts as a blanket for terrestrial ... within a General Circulation Model (GCM) can be ...
Minimum Competency Testing--Grading or Evaluation?
Prakash, Madhu Suri
The consequences of the minimum competency testing movement may bring into question the basic assumptions, goals, and expectations of our school system. The intended use of these tests is the assessment of students; the unintended consequence may be the assessment of the school system. There are two ways in which schools may fail in the context of…
Minimum intervention dentistry: periodontics and implant dentistry.
Darby, I B; Ngo, L
2013-06-01
This article will look at the role of minimum intervention dentistry in the management of periodontal disease. It will discuss the role of appropriate assessment, treatment and risk factors/indicators. In addition, the role of the patient and early intervention in the continuing care of dental implants will be discussed as well as the management of peri-implant disease.
Minimum output entropy of Gaussian channels
Lloyd, S; Maccone, L; Pirandola, S; Garcia-Patron, R
2009-01-01
We show that the minimum output entropy for all single-mode Gaussian channels is additive and is attained for Gaussian inputs. This allows the derivation of the channel capacity for a number of Gaussian channels, including that of the channel with linear loss, thermal noise, and linear amplification.
7 CFR 35.13 - Minimum quantity.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS EXPORT...
7 CFR 33.10 - Minimum requirements.
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... Early: Provided, That apples for export to Pacific ports of Russia shall grade at least U.S. Utility...
Effect of wash bulk on the accuracy of polyvinyl siloxane putty-wash impressions.
Nissan, J; Gross, M; Shifman, A; Assif, D
2002-04-01
Variations in the bulk of wash in a putty-wash impression technique can result in dimensional changes proportional to the thickness of the wash material during setting. The purpose of the study was to determine the amount of wash necessary to achieve accurate stone models while using a two-step putty-wash impression technique with polyvinyl siloxane (PVS) impression material. A total of 45 impressions were made of a stainless steel master model, 15 impressions for each wash thickness (1, 2 and 3 mm). The model contained three full-crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring six dimensions (occlusogingival and interabutment) on stone dies poured from impressions of the master model. One-way analysis of variance (ANOVA) showed statistically significant differences amongst the three wash bulk groups for all occlusogingival and interabutment measurements (P < .05). A wash bulk of more than 2 mm was inadequate to obtain accurate stone dies.
The periodicity of Grand Solar Minimum
Velasco Herrera, Victor Manuel
2016-07-01
The sunspot number is the most widely used index to quantify solar activity. Nevertheless, the sunspot number is a synthetic index and not a physical one. Therefore, we should be careful when using the sunspot number to quantify low (or high) solar activity. One of the major problems of using sunspots to quantify solar activity is that the index's minimum value is zero. This zero value hinders the reconstruction of the solar cycle during the Maunder minimum. All solar indexes can be used as analog signals, which can be easily converted into digital signals. In contrast, the conversion of a digital signal into an analog signal is not, in general, a simple task. The sunspot number during the Maunder minimum can be studied as a digital signal of solar activity. In 1894, Maunder published a discovery that has kept Solar Physics at an impasse. In his famous work on "A Prolonged Sunspot Minimum", Maunder wrote: "The sequence of maximum and minimum has, in fact, been unfailing during the present century [..] and yet there [..], the ordinary solar cycle was once interrupted, and one long period of almost unbroken quiescence prevailed". The search for new historical Grand solar minima has been one of the most important questions in Solar Physics. However, the possibility of estimating a new Grand solar minimum is even more valuable. Since solar activity is the result of electromagnetic processes, we propose to employ power to quantify solar activity: this is a fundamental physics concept in electrodynamics. Total Solar Irradiance is the primary energy source of the Earth's climate system, and therefore its variations can contribute to natural climate change. In this work, we propose to consider the fluctuations in the power of the Total Solar Irradiance as a physical measure of the energy released by the solar dynamo, which contributes to understanding the nature of a "profound solar magnetic field in calm". Using a new reconstruction of the Total Solar Irradiance, we found the
Development of superconductor bulk for superconductor bearing
Kim, Chan Joong; Jun, Byung Hyuk; Park, Soon Dong (and others)
2008-08-15
Current carrying capacity is one of the most important issues in the consideration of superconductor bulk materials for engineering applications. There are numerous applications of Y-Ba-Cu-O (YBCO) bulk superconductors, e.g. magnetic levitation trains, flywheel energy storage systems, levitation transportation, lunar telescopes, centrifugal devices, magnetic shielding materials, bulk magnets, etc. Accordingly, it is necessary to obtain YBCO materials in the form of large single crystals without weak-link problems. A top seeded melt growth (TSMG) process was used to fabricate single-crystal YBCO bulk superconductors. The seeded infiltration growth (IG) technique is also a very promising method for the synthesis of large, single-grain YBCO bulk superconductors with good superconducting properties. 5 wt.% Ag-doped Y211 green compacts were sintered at 900 °C to 1200 °C, and then a single-crystal YBCO was fabricated by an infiltration method. A refinement and uniform distribution of the Y211 particles in the Y123 matrix were achieved by sintering the Ag-doped samples. The enhancement of the critical current density was ascribable to the fine dispersion of the Y211 particles, a low porosity and the presence of Ag particles. In addition, we have designed and manufactured a large YBCO single domain with a levitation force of 10-13 kg/cm² using the TSMG processing technique.
Into the Bulk: A Covariant Approach
Engelhardt, Netta
2016-01-01
I propose a general, covariant way of defining when one region is "deeper in the bulk" than another. This definition is formulated outside of an event horizon (or in the absence thereof) in generic geometries; it may be applied to both points and surfaces, and may be used to compare the depth of bulk points or surfaces relative to a particular boundary subregion or relative to the entire boundary. Using the recently proposed "lightcone cut" formalism, the comparative depth between two bulk points can be determined from the singularity structure of Lorentzian correlators in the dual field theory. I prove that, by this definition, causal wedges of progressively larger regions probe monotonically deeper in the bulk. The definition furthermore matches expectations in pure AdS and in static AdS black holes with isotropic spatial slices, where a well-defined holographic coordinate exists. In terms of holographic RG flow, this new definition of bulk depth makes contact with coarse-graining over both large distances ...
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Hui-qiang Ma
2014-01-01
We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex programming problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...
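As a rough illustration of the CEV dynamics this abstract builds on, the sketch below simulates dS = μS dt + σS^β dW by Euler-Maruyama. All parameter values and names are illustrative assumptions, not taken from the paper, which derives its results analytically rather than by simulation.

```python
import numpy as np

def simulate_cev(s0, mu, sigma, beta, t, n_steps, n_paths, seed=0):
    """Euler-Maruyama paths of the CEV process dS = mu*S dt + sigma*S**beta dW.

    Returns the terminal values of n_paths simulated price paths.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s += mu * s * dt + sigma * s ** beta * dw
        s = np.maximum(s, 0.0)  # absorb at zero so the power stays well defined
    return s

# illustrative parameters: drift 5%, vol coefficient 0.2, elasticity beta = 0.8
paths = simulate_cev(s0=100.0, mu=0.05, sigma=0.2, beta=0.8,
                     t=1.0, n_steps=252, n_paths=10_000)
print(paths.mean())  # sample mean of terminal prices, near 100*exp(0.05)
```

With β = 1 the process reduces to geometric Brownian motion; β < 1 makes volatility rise as the price falls, the leverage effect the CEV model is used to capture.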
Understanding the influence of watershed storage caused by human interferences on ET variance
Zeng, R.; Cai, X.
2014-12-01
Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface water and groundwater; however, excessive irrigation may cause the depletion of watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko Hypothesis. It decomposes the ET variance into the variances of precipitation, potential ET, and catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions, expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively shorter time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
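A minimal numeric sketch of the decomposition idea, assuming the classical Budyko (1974) curve and treating human-driven storage change as a perturbation of the water supply (effective supply W = P - ΔS). The variable names, distributions and the first-order attribution are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def budyko_et(p, pet):
    """Budyko (1974) curve: ET = sqrt(P*PET*tanh(P/PET)*(1 - exp(-PET/P)))."""
    return np.sqrt(p * pet * np.tanh(p / pet) * (1.0 - np.exp(-pet / p)))

rng = np.random.default_rng(42)
n = 2000
p = rng.normal(900.0, 120.0, n)    # annual precipitation, mm (illustrative)
pet = rng.normal(1100.0, 90.0, n)  # annual potential ET, mm (illustrative)
ds = rng.normal(0.0, 60.0, n)      # storage change from human use, mm

w = p - ds                         # effective water supply after storage change
et = budyko_et(w, pet)

# First-order (linearized) attribution of var(ET) to each driver:
# var(ET) ~ a^2*var(P) + b^2*var(PET) + a^2*var(dS)  for independent drivers,
# with a = dET/dW and b = dET/dPET evaluated at the mean state.
wm, petm = w.mean(), pet.mean()
eps = 1e-3
a = (budyko_et(wm + eps, petm) - budyko_et(wm - eps, petm)) / (2 * eps)
b = (budyko_et(wm, petm + eps) - budyko_et(wm, petm - eps)) / (2 * eps)
var_lin = a**2 * np.var(p) + b**2 * np.var(pet) + a**2 * np.var(ds)

print(np.var(et), var_lin)  # sampled vs. linearized ET variance
```

Ignoring the ΔS term (no storage change) would understate var(ET) here, which mirrors the paper's point that storage change shifts the apparent variance attribution.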
A diphoton resonance from bulk RS
Csáki, Csaba; Randall, Lisa
2016-07-01
Recent LHC data hinted at a resonance of mass 750 GeV that decays into two photons. A significant feature of this resonance is that its decay rates to any other Standard Model particles would be too low to have been detected so far. Such a state has a compelling explanation in terms of a scalar or a pseudoscalar that is strongly coupled to vector states charged under the Standard Model gauge groups. Such a scenario is readily accommodated in bulk RS, with a scalar localized in the bulk away from, but close to, the Higgs. Turning this around, we argue that a good way to find the elusive bulk RS model might be the search for a resonance with prominent couplings to gauge bosons.