Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
On the Mean Absolute Error in Inverse Binomial Sampling
Mendo, Luis
2009-01-01
A closed-form expression and an upper bound are obtained for the mean absolute error of the unbiased estimator of a probability in inverse binomial sampling. The results given permit the estimation of an arbitrary probability with a prescribed level of the normalized mean absolute error.
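In inverse binomial sampling, trials are run until a fixed number r of successes is observed, and (r-1)/(N-1) is the unbiased estimator of the success probability p, where N is the (random) total number of trials. The following minimal Python sketch measures the mean absolute error of this estimator empirically; function names and parameter choices are illustrative, not from the paper:

```python
import random

def inverse_binomial_estimate(p, r, rng):
    """Run Bernoulli(p) trials until r successes; return the unbiased
    estimate (r - 1) / (N - 1), where N is the number of trials used."""
    successes, n = 0, 0
    while successes < r:
        n += 1
        if rng.random() < p:
            successes += 1
    return (r - 1) / (n - 1)

def empirical_mae(p, r, runs=20000, seed=0):
    """Monte Carlo estimate of the mean absolute error E|p_hat - p|."""
    rng = random.Random(seed)
    errors = [abs(inverse_binomial_estimate(p, r, rng) - p) for _ in range(runs)]
    return sum(errors) / runs

# Example: the MAE shrinks roughly like 1/sqrt(r) as r grows.
mae = empirical_mae(0.3, 20, runs=2000)
```

Increasing r tightens the normalized MAE, at the cost of proportionally more trials per estimate.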
Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.
Guth, David
1990-01-01
Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their…
Variable selection for modeling the absolute magnitude at maximum of Type Ia supernovae
Uemura, Makoto; Kawabata, Koji S.; Ikeda, Shiro; Maeda, Keiichi
2015-06-01
We discuss what is an appropriate set of explanatory variables in order to predict the absolute magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the error for future data, which is called the "generalization error," should be small. We use cross-validation in order to control the generalization error and a LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates for the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding about Type Ia supernovae: (i) The absolute magnitude at maximum depends on the color and light-curve width. (ii) The light-curve width depends on the strength of Si II. Recent studies have suggested adding more variables in order to explain the absolute magnitude. However, our analysis does not support adding any other variables in order to have a better generalization error.
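The abstract's core recipe, a sparse LASSO-type estimator applied when there are fewer samples than candidate variables, can be sketched with a minimal proximal-gradient (ISTA) implementation on synthetic data. This illustrates the general technique only, not the authors' pipeline; the data, penalty weight, and variable roles are invented:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimise (1/2n)||y - Xw||^2 + lam*||w||_1 by proximal gradient descent."""
    n = X.shape[0]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
n_samples, n_features = 40, 100          # fewer samples than candidate variables
X = rng.normal(size=(n_samples, n_features))
# Only two columns (stand-ins for color and light-curve width) drive the response.
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n_samples)

w = lasso_ista(X, y, lam=0.1)
selected = np.flatnonzero(w)             # soft-thresholding yields exact zeros
```

The l1 penalty drives most coefficients exactly to zero, so the surviving indices form the selected variable set; here only the two truly informative columns should carry large coefficients, mirroring how the paper's selection retained only the color and light-curve width.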
A Maximum Likelihood Approach to Least Absolute Deviation Regression
Yinbo Li
2004-09-01
Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robustness of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure in which a simple coordinate transformation is applied during each iteration to direct the optimization along edge lines of the cost surface, followed by an MLE of location executed as a weighted median operation. Requiring only weighted medians, the new algorithm can be easily modularized for hardware implementation, as opposed to most other existing LAD methods, which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which, among the top algorithms, is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff between convergence speed and implementation complexity.
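The weighted-median connection can be illustrated with a stripped-down coordinate-descent fit of a line under the LAD criterion. This is a simplified sketch of the general idea (alternating a weighted-median slope update with a median intercept update), not the paper's edge-following algorithm:

```python
import numpy as np

def weighted_median(values, weights):
    """Minimiser of sum_i w_i * |v - values_i| (a classic weighted median)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def lad_line(x, y, n_iter=50):
    """Fit y ~ a*x + b under least absolute deviations by coordinate descent.
    Each one-dimensional update is an exact LAD location estimate."""
    a, b = 0.0, float(np.median(y))
    for _ in range(n_iter):
        nz = x != 0
        a = weighted_median((y[nz] - b) / x[nz], np.abs(x[nz]))  # slope update
        b = float(np.median(y - a * x))                           # intercept update
    return a, b

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + rng.laplace(scale=0.3, size=200)
y[:5] += 20.0            # gross outliers in y; LAD largely ignores them
a, b = lad_line(x, y)
```

Because each update solves a one-dimensional LAD location problem exactly, the objective never increases, and gross outliers in y barely move the fit, the robustness the abstract refers to.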
Xiaojun Jiang; Huijie Huang; Xiangzhao Wang; Lihua Huang
2009-01-01
A method for compensating the measuring error of a grating displacement measurement system with an absolute zero mark is presented. It divides the full-scale range into piecewise subsections and compares the maximum variation of the measuring errors of two adjacent subsections with a threshold. Whether the specified subsection is divided into smaller subsections is determined by the comparison result. After different compensation parameters and weighted average values of the random errors are obtained, the error compensation algorithm is implemented in the left and right subsections, and the whole measuring error of the grating displacement measurement system is reduced by about 73%. Experimental results show that the method may not only effectively compensate the spike error but also greatly improve the precision of the measuring system.
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles
Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner
2016-01-01
This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…
Error analysis in newborn screening: can quotients support the absolute values?
Arneth, Borros; Hintz, Martin
2017-03-01
Newborn screening is performed using modern tandem mass spectrometry, which can simultaneously detect a variety of analytes, including several amino acids and fatty acids. Tandem mass spectrometry measures the diagnostic parameters as absolute concentrations and produces fragments which are used as markers of specific substances. Several prominent quotients can also be derived, which are quotients of two absolute measured concentrations. In this study, we determined the precision of both the absolute concentrations and the derived quotients. First, the measurement error of the absolute concentrations and the measurement error of the ratios were practically determined. Then, the Gaussian theory of error calculation was used. Finally, these errors were compared with one another. The practical analytical accuracies of the quotients were significantly higher (e.g., coefficient of variation (CV) = 5.1% for the phenylalanine to tyrosine (Phe/Tyr) quotient and CV = 5.6% for the Fisher quotient) than the accuracies of the absolute measured concentrations (mean CVs = 12%). According to our results, the ratios are analytically correct and, from an analytical point of view, can support the absolute values in finding the correct diagnosis.
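The comparison in this abstract can be made concrete with first-order Gaussian error propagation for a quotient Q = A/B: CV_Q^2 ≈ CV_A^2 + CV_B^2 - 2*rho*CV_A*CV_B, where rho is the correlation between the errors in A and B. A small sketch with the abstract's rough numbers (the correlation value is an illustrative assumption, not a reported figure):

```python
import math

def cv_of_quotient(cv_num, cv_den, rho=0.0):
    """First-order Gaussian propagation for the CV of A/B,
    where rho is the correlation between the errors in A and B."""
    return math.sqrt(cv_num**2 + cv_den**2 - 2 * rho * cv_num * cv_den)

# Uncorrelated 12% errors on each concentration would give a ~17% quotient CV...
naive = cv_of_quotient(0.12, 0.12, rho=0.0)
# ...whereas a ~5% quotient CV requires strongly correlated errors (rho assumed).
correlated = cv_of_quotient(0.12, 0.12, rho=0.9)
```

With uncorrelated errors, a quotient of two 12%-CV concentrations would have a roughly 17% CV, so the observed ~5% quotient CVs are only possible because shared (correlated) error components cancel in the ratio.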
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
Position error correction in absolute surface measurement based on a multi-angle averaging method
Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin
2017-04-01
We present a method for position error correction in absolute surface measurement based on a multi-angle averaging method. Differences in shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solving of the estimation algorithm have been discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknowns of Zernike polynomial coefficients and rotation angle. Experimental results show the validity of the method proposed.
HERA Transverse Polarimeter absolute scale and error by rise-time calibration
Karibian, V
2003-01-01
We give the results of an analysis of some 18 rise-time calibrations based on data collected in 1996/97. Such measurements are used to determine the absolute polarization scale of the transverse electron beam polarimeter (TPOL) at HERA. The results of the 1996/97 calibrations are found to be in good agreement with earlier calibrations of the TPOL performed in 1994, with errors of 1.2% and 1.1%. Based on these calibrations and a comparison with measurements from the longitudinal polarimeter (LPOL) at HERA carried out over a two-month period in 2000, we obtain a mean LPOL/TPOL ratio of 1.018. Both polarimeters are found to agree with each other within their overall errors of about 2% each.
Effective Connectivity Associated With Auditory Error Detection In Musicians With Absolute Pitch
Amy L Parkinson
2014-03-01
It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), musicians with absolute pitch, and non-musician controls. We identified a network comprising left- and right-hemisphere superior temporal gyrus (STG), primary motor cortex (M1) and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left-to-right STG connections is important in the identification of self-voice error and sensorimotor integration in AP musicians. We also identify reduced connectivity of left-hemisphere PM to STG connections in the AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch-matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings also suggest that individuals with AP are more adept at using pitch-related feedback from the right hemisphere.
Tuning PID and FOPID Controllers using the Integral Time Absolute Error Criterion
Maiti, Deepyaman; Chakraborty, Mithun; Konar, Amit; Janarthanan, Ramadoss
2008-01-01
Particle swarm optimization (PSO) is extensively used for real-parameter optimization in diverse fields of study. This paper describes an application of PSO to the problem of designing a fractional-order proportional-integral-derivative (FOPID) controller whose parameters comprise the proportionality constant, integral constant, derivative constant, integral order (lambda) and derivative order (delta). The presence of five optimizable parameters makes the task of designing a FOPID controller more challenging than conventional PID controller design. Our design method focuses on minimizing the Integral Time Absolute Error (ITAE) criterion. The digital realization of the designed system utilizes the Tustin operator-based continued fraction expansion scheme. We carry out a simulation that illustrates the effectiveness of the proposed approach, especially for realizing fractional-order plants. This paper also attempts to study the behavior of the fractional PID controller vis-à-vis that of its integer-order counterpart and ...
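The ITAE criterion the design minimizes is simply the integral of t*|e(t)| over the horizon, which penalizes errors more heavily the longer they persist. A minimal forward-Euler sketch for a first-order plant under plain proportional control (the plant, gains and horizon are invented for illustration; the paper tunes full PID/FOPID controllers with PSO):

```python
def itae_for_gain(kp, tau=1.0, t_end=10.0, dt=1e-3):
    """ITAE of the unit-step response of a first-order plant 1/(tau*s + 1)
    under proportional control with gain kp, integrated by forward Euler."""
    y, itae, t = 0.0, 0.0, 0.0
    while t < t_end:
        e = 1.0 - y                    # error against the unit-step setpoint
        y += dt * (kp * e - y) / tau   # plant dynamics: tau*y' = u - y, with u = kp*e
        itae += t * abs(e) * dt        # time-weighted absolute error accumulates
        t += dt
    return itae

# A larger proportional gain shrinks the steady-state offset, so ITAE drops.
low_gain_itae, high_gain_itae = itae_for_gain(1.0), itae_for_gain(10.0)
```

Because proportional control alone leaves a steady-state offset e_ss = 1/(kp + 1), the t-weighting makes ITAE grow quadratically with the horizon; this is why ITAE-based tuning strongly rewards integral action and fast settling.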
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio … is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error.
Estimation of bias errors in measured airplane responses using maximum likelihood method
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degree-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
(no author listed)
2009-01-01
In order to restrain the mid-spatial-frequency error in the magnetorheological finishing (MRF) process, a novel part-random path is designed based on the theory of the maximum entropy method (MEM). Using a KDMRF-1000F polishing machine, one flat workpiece (98 mm in diameter) is polished. The mid-spatial-frequency error in the region using the part-random path is much lower than that obtained using a common raster path. After one MRF iteration (7.46 min), the peak-to-valley (PV) is 0.062 wave (1 wave = 632.8 nm), the root-mean-square (RMS) is 0.010 wave, and no obvious mid-spatial-frequency error is found. The result shows that the part-random path is a novel path which yields high form accuracy and low mid-spatial-frequency error in the MRF process.
Thome, Kurtis J.; McCorkel, Joel; Angal, Amit
2016-09-01
The goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to provide high-accuracy data for evaluation of long-term climate change trends. Essential to the CLARREO project is demonstration of SI-traceable, reflected measurements that are a factor of 10 more accurate than current state-of-the-art sensors. The CLARREO approach relies on accurate, monochromatic absolute radiance calibration in the laboratory transferred to orbit via solar irradiance knowledge. The current work describes the results of field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) that is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. Recent measurements of absolute spectral solar irradiance using SOLARIS are presented. The ground-based SOLARIS data are corrected to top-of-atmosphere values using AERONET data collected within 5 km of the SOLARIS operation. The SOLARIS data are converted to absolute irradiance using laboratory calibrations based on the Goddard Laser for Absolute Measurement of Radiance (GLAMR). Results are compared to accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, K S; Ranade, Kedar S.; Alber, Gernot
2005-01-01
The general conditions are discussed which quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and they finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result a necessary and a sufficient condition on asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Bokanowski, Olivier (Laboratoire Jacques-Louis Lions, Université Paris-Diderot, France); Picarelli, Athena (Projet Commands, INRIA Saclay & ENSTA ParisTech, France); Zidani, Hasnaa (Unité de Mathématiques appliquées (UMA), ENSTA ParisTech, France)
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. © 2015 European Society For Evolutionary Biology.
Long, Jiale; Xi, Jiangtao; Zhang, Jianmin; Zhu, Ming; Cheng, Wenqing; Li, Zhongwei; Shi, Yusheng
2016-09-01
In a recently published work, we proposed a technique to recover the absolute phase maps of fringe patterns with two selected fringe wavelengths. To achieve higher anti-error capability, that method requires fringe patterns with longer wavelengths; however, longer wavelengths may degrade the signal-to-noise ratio (SNR) of the surface measurement. In this paper, we propose a new approach to unwrap phase maps from their wrapped versions based on the use of fringes with three different wavelengths, which is characterized by improved anti-error capability and SNR. Hence, while the previous method works on the two phase maps obtained from six-step phase-shifting profilometry (PSP), requiring 12 fringe patterns, the proposed technique performs very well on three phase maps from three-step PSP, requiring only nine fringe patterns, and is therefore more efficient. Moreover, the advantages of the two-wavelength method, namely simple implementation and flexibility in the use of fringe patterns, are retained. Theoretical analysis and experimental results are presented to confirm the effectiveness of the proposed method.
Maximum error-bounded Piecewise Linear Representation for online stream approximation
Xie, Qing
2014-04-04
Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim to design algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms are effectively designed based on a transformed space rather than the time-value space, while desirable optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and that transformed space, and also established the superiority of OptimalPLR in processing efficiency in practice. Extensive empirical results demonstrate the effectiveness and efficiency of our proposed algorithms.
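A minimal greedy construction in the spirit of GreedyPLR can be sketched by anchoring each segment's line at its first point and maintaining the interval of slopes that keep every subsequent point within ±ε; when the interval empties, a new segment starts. The anchoring is a simplification for illustration (the paper's algorithms let the line float freely, which can yield fewer segments):

```python
def greedy_plr(times, values, eps):
    """Greedy L-infinity error-bounded piecewise linear approximation.
    Each segment's line passes through the segment's first point; returns a
    list of (start_index, slope) pairs. Simplified sketch, not OptimalPLR."""
    segments = []
    i, n = 0, len(times)
    while i < n:
        v0 = values[i]
        lo, hi = float("-inf"), float("inf")   # feasible slope interval
        j = i + 1
        while j < n:
            dt = times[j] - times[i]
            new_lo = max(lo, (values[j] - eps - v0) / dt)  # stay above v_j - eps
            new_hi = min(hi, (values[j] + eps - v0) / dt)  # stay below v_j + eps
            if new_lo > new_hi:
                break                # no single line fits points i..j within eps
            lo, hi = new_lo, new_hi
            j += 1
        slope = 0.5 * (lo + hi) if j > i + 1 else 0.0
        segments.append((i, slope))
        i = j                        # next segment starts where this one failed
    return segments
```

On a noiseless line the slope interval never empties and one segment suffices; a jump larger than 2ε forces a split.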
Qing-ping Deng; Xue-jun Xu; Shu-min Shen
2000-01-01
This paper deals with the Crouzeix-Raviart nonconforming finite element approximation of the Navier-Stokes equations in a plane bounded domain, using the so-called velocity-pressure mixed formulation. Quasi-optimal maximum-norm error estimates of the velocity, its first derivatives, and the pressure are derived for the nonconforming C-R scheme of the stationary Navier-Stokes problem. The analysis is based on the weighted inf-sup condition and the technique of weighted Sobolev norms. In addition, the optimal L2-error estimate for the nonconforming finite element approximation is obtained.
Lee, C.-H.; Herget, C. J.
1976-01-01
This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
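The claim that the SER is convex in SNR for any one-dimensional constellation can be spot-checked with BPSK, whose SER in AWGN has the standard Q-function form. This is a textbook illustration of the property, not the paper's general proof.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ser(snr):
    """Symbol error rate of BPSK in AWGN as a function of linear SNR."""
    return q_function(math.sqrt(2 * snr))
```

A positive second difference over equally spaced SNR points is consistent with convexity.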
Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum
Orus Perez, Raul
2017-04-01
For single-frequency users of the global satellite navigation system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well-known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of the real-time precise point position (PPP) globally. Therefore, testing of the ionospheric models is a key issue for code-based single-frequency users, which constitute the main user segment. Therefore, the testing proposed in this paper is straightforward and uses the PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify—for dual-frequency users—the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Easy Absolute Values? Absolutely
Taylor, Sharon E.; Mittag, Kathleen Cage
2015-01-01
The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…
Kukush, Alexander; Schneeweiss, Hans
2004-01-01
We compare the asymptotic covariance matrix of the ML estimator in a nonlinear measurement error model to the asymptotic covariance matrices of the CS and SQS estimators studied in Kukush et al (2002). For small measurement error variances they are equal up to the order of the measurement error variance and thus nearly equally efficient.
Error Compensation Method for Mirror Symmetry Absolute Measurement
何宇航; 柴立群; 陈波; 李强; 魏小红; 高波
2013-01-01
A method is proposed to compensate the intrinsic error in mirror-symmetry absolute measurement. Because only a finite number of rotations of one flat is used, the reconstructed wavefronts of all three flats carry an intrinsic error from missing cNθ terms. By adding one extra rotation with a different angle, and exploiting the rotational invariance of the form of Zernike polynomials in polar coordinates, the wavefront difference between the measurements before and after the rotation yields coefficient equations from which the Zernike coefficients of the cNθ terms are obtained, so the intrinsic error can be compensated. Because there are infinitely many cNθ terms, the number of compensated terms is chosen by balancing the required accuracy against the computational cost. Computer simulation confirms the validity of the proposed compensation method.
A Maximum-error Specification Oriented Gross Error Identification Method
普仕凡; 韩旭; 李智生; 李钊
2014-01-01
A gross error identification method oriented to maximum-error specifications, based on a generalized Pauta criterion, is proposed, providing a reference for gross error identification under maximum-error specifications. The target stochastic observation sequence is assumed to follow an i.i.d. normal distribution. Through a risk analysis of mistaking the maximum observation value for gross-error data, the classical Pauta criterion is modified and a generalized Pauta criterion is introduced, together with the calculation method for the gross error identification threshold. Practical application results show that the method is feasible.
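The classical criterion underlying the method can be sketched as follows: flag any observation farther than k standard deviations from the sample mean (k=3 is the traditional Pauta/3-sigma rule). The paper's generalized variant derives a modified threshold from a risk analysis of the maximum observation, which this plain sketch does not reproduce.

```python
import statistics

def pauta_outliers(xs, k=3.0):
    """Indices of observations whose deviation from the sample mean
    exceeds k population standard deviations (classical Pauta rule)."""
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return [i for i, x in enumerate(xs) if abs(x - mu) > k * sigma]
```
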
Kaganovich, Igor D., E-mail: ikaganov@pppl.gov [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Vay, Jean-Luc [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Friedman, Alex [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550 (United States)
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
Vilmos Simon
2013-01-01
The aim of this study is to define optimal tooth modifications, introduced by appropriately chosen head-cutter geometry and machine tool settings, to simultaneously minimize the tooth contact pressure and the angular displacement error of the driven gear (transmission error) of face-hobbed spiral bevel gears. As a result of these modifications, the gear pair becomes mismatched, and a point contact replaces the theoretical line contact. In the applied loaded tooth contact analysis, it is assumed that the point contact under load spreads over a surface along the whole or part of the "potential" contact line. A computer program was developed to implement the formulation provided above. Using this program, the influence of tooth modifications introduced by variations in machine tool settings and head-cutter data on load and pressure distributions, transmission errors, and fillet stresses is investigated and discussed. The correlation between the ease-off obtained by pinion tooth modifications and the corresponding tooth contact pressure distribution is investigated, and the obtained results are presented.
An Exact Maximum Likelihood Error Registration Algorithm for Radar Networks
丰昌政; 薛强
2012-01-01
To address the error registration problems of the least-squares and Kalman-filter methods in radar network systems, an exact maximum likelihood error registration algorithm for radar networks is proposed. The algorithm applies maximum likelihood registration based on circular polar projection: using the geometric relationships among the radar stations, the systematic errors of the radar network are estimated by a hybrid maximum likelihood Gauss-Newton iteration, and simulations are carried out. The simulation results show that the registration method has good consistency and can be used for error registration in multi-radar networks.
T. Gnanasekaran
2008-01-01
Problem statement: In this study, we propose a method to improve the performance of the Maximum A-Posteriori (MAP) probability algorithm used in turbo decoders. Previously, turbo decoder performance was improved by scaling the channel reliability value. Approach: We propose a modification of the MAP algorithm that achieves a further improvement in forward error correction by scaling the extrinsic information in both decoders without introducing any complexity. The encoder is modified with a new puncturing matrix, which yields Unequal Error Protection (UEP). The modified MAP algorithm is analyzed against the traditional turbo code system with Equal Error Protection (EEP) and with UEP, in both AWGN and fading channels. Results: MAP and modified MAP achieve a coding gain of 0.6 dB over EEP in the AWGN channel, and coding gains of 0.4 dB and 0.9 dB over EEP, respectively, in the Rayleigh fading channel. With modified MAP under UEP, classes 1 and 2 gained 0.8 dB and 0.6 dB, respectively, in the AWGN channel, and 0.4 dB and 0.6 dB, respectively, in the fading channel. Conclusion/Recommendations: The modified MAP algorithm improves the bit error rate (BER) performance under EEP as well as UEP, in both AWGN and fading channels. We recommend the modified MAP error correction algorithm with UEP for broadband communication.
J.G.M. van Marrewijk (Charles)
2008-01-01
A country is said to have an absolute advantage over another country in the production of a good or service if it can produce that good or service using fewer real resources. Equivalently, using the same inputs, the country can produce more output. The concept of absolute advantage can a
Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L
2011-12-15
Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
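The two-reference idea reduces to a two-point linear calibration between peak volume and concentration. The compound names (DSS at the low end, MES at the high end) follow the abstract; the function name and the numbers in the usage note are invented for illustration.

```python
def concentration_from_volume(v, v_lo, c_lo, v_hi, c_hi):
    """Convert an HSQC0 peak volume v to an absolute concentration by
    linear interpolation between two internal reference compounds with
    known concentrations (e.g. DSS low, MES high), assuming the
    volume-concentration relationship is linear."""
    slope = (c_hi - c_lo) / (v_hi - v_lo)
    return c_lo + slope * (v - v_lo)
```

For instance, with references at (volume 10, 1 mM) and (volume 110, 11 mM), a peak volume of 50 maps to 5 mM.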
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is a constant demand for reliability, improved range, and speed. Many wireless networks such as OFDM, CDMA2000, and WCDMA address this demand when incorporated with multiple-input multiple-output (MIMO) technology. Due to the complexity of the signal processing involved, MIMO is highly expensive in terms of area consumption. In this paper, a MIMO receiver design method is proposed to reduce the area consumed by the processing elements involved in complex signal processing. A solution for area reduction in the MIMO maximum likelihood estimation (MLE) receiver using sorted QR decomposition (SQRD) and a unitary transformation method is analyzed. It provides a unified approach, reduces ISI, and delivers better performance at low cost. The receiver pre-processor architecture based on the minimum mean square error (MMSE) criterion is compared using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations preserve the Hermitian nature of a matrix and the multiplication and addition relationships between operators, which helps to reduce the computational complexity significantly. The dynamic range of all variables is tightly bounded, and the algorithm is well suited to fixed-point arithmetic.
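The sorted QR decomposition at the heart of such receivers can be sketched as a pivoted modified Gram-Schmidt: at each step the remaining column with the smallest residual norm is processed first, so the least reliable stream is detected last in successive interference cancellation. This real-valued sketch with invented names illustrates the general SQRD idea, not the paper's fixed-point architecture.

```python
import numpy as np

def sorted_qr(H):
    """Sorted QR decomposition (SQRD) sketch: modified Gram-Schmidt QR
    with min-norm column pivoting. Returns Q, upper-triangular R, and
    the column permutation such that H[:, perm] == Q @ R."""
    H = H.astype(float)          # work on a copy
    m, n = H.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    perm = list(range(n))
    for i in range(n):
        # pick the remaining column with minimal residual norm
        k = i + int(np.argmin([np.linalg.norm(H[:, j]) for j in range(i, n)]))
        H[:, [i, k]] = H[:, [k, i]]
        R[:, [i, k]] = R[:, [k, i]]
        perm[i], perm[k] = perm[k], perm[i]
        R[i, i] = np.linalg.norm(H[:, i])
        Q[:, i] = H[:, i] / R[i, i]
        for j in range(i + 1, n):   # orthogonalize the remaining columns
            R[i, j] = Q[:, i] @ H[:, j]
            H[:, j] -= R[i, j] * Q[:, i]
    return Q, R, perm
```
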
Phillips, Alfred, Jr.
Summ denotes the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested by W. Heisenberg's uncertainty principle, and by the Anthropic Principle defended by S. Hawking et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with invariants and General Relativity, may be required for new ideas to become part of physics. We present here a two-postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, which became manifest in Kenya's Rift Valley 200,000 years ago in Homo sapiens, the culmination of the six-million-year co-creation process of hominins and Nature in Africa, allows us to do the physics that we do.
A modified phase-coding method for absolute phase retrieval
Xing, Y.; Quan, C.; Tay, C. J.
2016-12-01
The fringe projection technique is one of the most robust tools for three-dimensional (3D) shape measurement. Various fringe projection methods have been proposed to address different issues in profilometry, and phase-coding is one such technique, employed to determine fringe orders for absolute phase retrieval. However, this method is prone to fringe-order errors when dealing with high-frequency fringes. This paper studies the phase error introduced by system non-linearity in phase-coding and provides a mathematical model for the maximum number of achievable codewords in a given scheme. In addition, a modified phase-coding method is proposed for phase error compensation. An experimental study validates the theoretical analysis of the maximum number of achievable codewords, and the performance of the modified phase-coding method is also illustrated.
Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex
2012-06-01
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔΕb. In the presence of large voltage errors, δU≫ΔEb, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
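The quoted scaling law can be written as a one-line estimate: the maximum compression ratio varies inversely with the geometric mean of the relative velocity-modulation error and the relative intrinsic energy spread. The proportionality constant is set to 1 here purely for illustration.

```python
import math

def max_compression(dv_rel, de_rel):
    """Scaling estimate of the maximum longitudinal compression ratio
    in the large-voltage-error regime: inverse geometric mean of the
    relative velocity-tilt error dv_rel and the relative intrinsic
    energy spread de_rel (proportionality constant taken as 1)."""
    return 1.0 / math.sqrt(dv_rel * de_rel)
```

For example, a 1% tilt error combined with a 0.01% energy spread caps the compression near a factor of 1000 in this normalization.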
何洋; 纪昌明; 田开华; 张验科; 李传刚
2016-01-01
To better study the distribution of runoff forecast errors, the maximum entropy principle is applied and a maximum entropy model of the runoff forecast error distribution is established. Taking the runoff forecast series of the Guandi Reservoir as an example, the probability density functions and distribution curves of the runoff forecast error are calculated for different lead times. Comparing these curves with theoretical normal distribution curves and sample histograms shows that the error distribution obtained by the maximum entropy method better describes the distribution characteristics of runoff forecast errors. Considering the wet-dry variation of runoff within the year, the runoff series is divided into dry, flood, and transition seasons, the error distribution of each period is analyzed, and the confidence levels of the forecast error at different confidence intervals are given. This provides a better grasp of the distribution of runoff forecast errors and a new way to improve the accuracy of runoff forecasting.
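With only the first two moments constrained, the maximum-entropy density is the normal distribution; the paper's comparison against the normal curve rests on constraining higher-order moments of the forecast errors, which this base-case sketch does not include.

```python
import math

def maxent_density_two_moments(mu, var):
    """Maximum-entropy density under mean and variance constraints:
    the normal distribution. Returned as a callable pdf."""
    def pdf(x):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return pdf
```
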
Newton On Absolute Space A Commentary
Adewole, A I A
2001-01-01
Newton seems to have stated a quantitative relationship between the position of a body in relative space and the position of the body in absolute space in the first scholium of his Principia. We show that if this suspected relationship is assumed to hold, it will dispel many errors and misrepresentations that have befallen Newton's ideas on absolute space.
Teaching Absolute Value Meaningfully
Wade, Angela
2012-01-01
What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
杜金华; 王莎
2013-01-01
首先介绍3种典型的用于翻译错误检测和分类的单词后验概率特征,即基于固定位置的词后验概率、基于滑动窗的词后验概率和基于词对齐的词后验概率,分析其对错误检测性能的影响；然后,将其分别与语言学特征如词性、词及由LG句法分析器抽取的句法特征等进行组合,利用最大熵分类器预测翻译错误,并在汉英NIST数据集上进行实验验证和比较.实验结果表明,不同的单词后验概率对分类错误率的影响是显著的,并且在词后验概率基础上加入语言学特征的组合特征可以显著降低分类错误率,提高译文错误预测性能.%The authors firstly introduce three typical word posterior probabilities (WPP) for error detection and classification, which are fixed position WPP, sliding window WPP, and alignment-based WPP, and analyzes their impact on the detection performance. Then each WPP feature is combined with three linguistic features (Word, POS and LG Parsing knowledge) over the maximum entropy classifier to predict the translation errors. Experimental results on Chinese-to-English NIST datasets show that the influences of different WPP features on the classification error rate (CER) are significant, and the combination of WPP with linguistic features can significantly reduce the CER and improve the prediction capability of the classifier.
Päs, Heinrich; Weiler, Thomas J.
2002-01-01
The determination of absolute neutrino masses is crucial for the understanding of theories underlying the standard model, such as SUSY. We review the experimental prospects to determine absolute neutrino masses and the correlations among approaches, using the Δm² values inferred from neutrino oscillation experiments and assuming a three-neutrino Universe.
Schechter, J.; Shahid, M. N.
2012-01-01
We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.
Automatic section thickness determination using an absolute gradient focus function.
Elozory, D T; Kramer, K A; Chaudhuri, B; Bonam, O P; Goldgof, D B; Hall, L O; Mouton, P R
2012-12-01
Quantitative analysis of microstructures using computerized stereology systems is an essential tool in many disciplines of bioscience research. Section thickness determination in current nonautomated approaches requires manual location of the upper and lower surfaces of tissue sections. In contrast to conventional autofocus functions, which locate the optimally focused optical plane using the global maximum of a focus curve, this study identified the transition from unfocused to focused optical planes by two sharp 'knees' on the focus curve. Analysis of 14 grey-scale focus functions showed that the thresholded absolute gradient function was best at finding detectable bends that closely correspond to the bounding optical planes at the upper and lower tissue surfaces. Modifications to this function generated four novel functions that outperformed the original. The 'modified absolute gradient count' function outperformed all others, with an average error of 0.56 μm on a test set of images similar to the training set and an average error of 0.39 μm on a test set of images captured from a different case, that is, with different staining methods, on a different brain region, from a different subject rat. We describe a novel algorithm that allows automatic section thickness determination based on just-out-of-focus planes, a prerequisite for fully automatic computerized stereology.
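A thresholded absolute-gradient focus measure on a grey-scale image can be sketched as below, together with a count variant in the spirit of the 'modified absolute gradient count'. The exact definitions used in the study may differ; this is an illustrative sketch.

```python
def absolute_gradient_focus(image, threshold=0.0):
    """Thresholded absolute gradient focus measure: sum of absolute
    horizontal intensity differences that exceed a threshold, plus a
    count variant (number of differences passing the threshold).
    image is a list of rows of grey-scale values."""
    total, count = 0.0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            d = abs(b - a)
            if d > threshold:
                total += d   # classic thresholded absolute gradient
                count += 1   # count variant
    return total, count
```

A sharply focused plane yields many large intensity differences, so both the sum and the count peak near focus.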
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
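The smoothly clipped absolute deviation penalty emphasized in the abstract is, in the standard formulation of Fan and Li (2001), a piecewise function that acts like the lasso near zero and flattens for large coefficients; a minimal sketch, assuming the conventional tuning constant a = 3.7:

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty evaluated at |t| (Fan & Li, 2001)."""
    t = np.abs(np.asarray(t, dtype=float))
    linear = lam * t                                          # |t| <= lam: lasso-like
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # transition zone
    const = lam**2 * (a + 1) / 2                              # |t| > a*lam: constant, so
                                                              # large effects are unbiased
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))
```

The constant tail for |t| > aλ is what gives SCAD the oracle properties cited in the abstract: large coefficients are not shrunk, while small ones are thresholded to zero.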
National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...
Sinha, Supurna
2005-01-01
We present an analytical study of the loss of quantum coherence at absolute zero. Our model consists of a harmonic oscillator coupled to an environment of harmonic oscillators at absolute zero. We find that for an Ohmic bath, the off-diagonal elements of the density matrix in the position representation decay as a power law in time at late times. This slow loss of coherence in the quantum domain is qualitatively different from the exponential decay observed in studies of high temperature envir...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amo...
McLeod, Stephen
2014-07-01
Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Significance of absolute energy scale for physics at BESⅢ
FU Cheng-Dong; MO Xiao-Hu
2008-01-01
The effects of absolute energy calibration on BESⅢ physics are discussed in detail, which mainly involve the effects on τ mass measurement, cross section scan measurement, and generic error determination in other measurements.
Absolute Gravimetry in Fennoscandia
Pettersen, B. R; TImmen, L.; Gitlein, O.
motions) has its major axis in the direction of southwest to northeast and covers a distance of about 2000 km. Absolute gravimetry was made in Finland and Norway in 1976 with a rise-and fall instrument. A decade later the number of gravity stations was expanded by JILAg-5, in Finland from 1988, in Norway...
Error analysis of the quartic nodal expansion method for slab geometry
Penland, R.C.; Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States); Azmy, Y.Y. [Oak Ridge National Lab., TN (United States)
1995-02-01
This paper presents an analysis of the quartic polynomial Nodal Expansion Method (NEM) for one-dimensional neutron diffusion calculations. As part of an ongoing effort to develop an adaptive mesh refinement strategy for use in state-of-the-art nodal kinetics codes, we derive a priori error bounds on the computed solution for uniform meshes and validate them using a simple test problem. Predicted error bounds are found to be greater than computed maximum absolute errors by no more than a factor of six allowing mesh size selection to reflect desired accuracy. We also quantify the rapid convergence in the NEM computed solution as a function of mesh size.
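The relationship between mesh size and maximum absolute error described above can be illustrated with a convergence check on a simple one-dimensional diffusion problem (a generic second-order finite-difference stand-in, not the quartic NEM itself, which would exhibit a higher convergence order):

```python
import numpy as np

def solve_diffusion(n):
    """Second-order finite-difference solve of -u'' = f on (0,1) with
    u(0) = u(1) = 0, f chosen so the exact solution is sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * x[1:-1])
    # tridiagonal stencil (-1, 2, -1) / h^2
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h ** 2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

def max_abs_error(n):
    """Maximum absolute error against the exact solution sin(pi x)."""
    x, u = solve_diffusion(n)
    return float(np.max(np.abs(u - np.sin(np.pi * x))))
```

Halving the mesh should cut the maximum absolute error by roughly 2^p (p = 2 here); comparing the observed ratio to the a priori bound is the kind of check that lets mesh size selection reflect the desired accuracy.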
Absolute Neutrino Mass Determination
Päs, H
2001-01-01
We discuss four approaches to the determination of absolute neutrino mass. These are the measurement of the zero-neutrino double beta decay rate, of the tritium decay end-point spectrum, of the cosmic ray spectrum above the GZK cutoff, and the cosmological measurement of the power spectrum governing the CMB and large scale structure. The first two approaches are sensitive to the mass eigenstates coupling to the electron neutrino, whereas the latter two are sensitive to the heavy component of the cosmic neutrino background. All mass eigenstates are related by the $\\Delta m^2$'s inferred from neutrino oscillation data. Consequently, the potential for absolute mass determination of each of the four approaches is correlated with the other three, in ways that we point out.
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
Baumann, Henri
This work consists of a feasibility study of a first-stage prototype airborne absolute gravimeter system. In contrast to relative systems, which use spring gravimeters, the measurements acquired by absolute systems are uncorrelated and the instrument does not suffer from problems like instrumental drift, the frequency response of the spring, and possible variation of the calibration factor. The major problem we had to resolve was to reduce the influence of the non-gravitational accelerations included in the measurements. We studied two different approaches to resolve it: direct mechanical filtering, and post-processing digital compensation. The first part of the work describes in detail the different mechanical passive vibration filters, which were studied and tested in the laboratory and later in a small truck in movement. For these tests, as well as for the airborne measurements, an absolute gravimeter FG5-L from Micro-G Ltd was used together with an inertial navigation system Litton-200, a vertical accelerometer EpiSensor, and GPS receivers for positioning. These tests showed that only the use of an optical table gives acceptable results. However, it is unable to compensate for the effects of the accelerations of the drag-free chamber. The second part describes the strategy of the data processing. It is based on modeling the perturbing accelerations by means of GPS, EpiSensor and INS data. In the third part the airborne experiment is described in detail, from the mounting in the aircraft and data processing to the different problems encountered during the evaluation of the quality and accuracy of the results. In the data-processing part, the different steps leading from the raw apparent gravity data and the trajectories to the estimation of the true gravity are explained. A comparison between the estimated airborne data and those obtained by ground upward continuation at flight altitude allows us to state that airborne absolute gravimetry is feasible and
Absolutely Indecomposable Modules
Göbel, Rüdiger
2007-01-01
A module is called absolutely indecomposable if it is directly indecomposable in every generic extension of the universe. We want to show the existence of large abelian groups that are absolutely indecomposable. This will follow from a more general result about R-modules over a large class of commutative rings R with endomorphism ring R which remains the same when passing to a generic extension of the universe. It turns out that `large' in this context has a precise meaning, namely being smaller than the first omega-Erdos cardinal defined below. We will first apply a result on large rigid trees with a similar property established by Shelah in 1982, then prove the existence of related `R_omega-modules' (R-modules with countably many distinguished submodules), and finally pass to R-modules. The passage through R_omega-modules has the great advantage that the proofs become very transparent, essentially using a few `linear algebra' arguments accessible also for graduate students. The result gives a new constru...
Okada, H; Bravar, A; Bunce, G; Dhawan, S; Eyser, K O; Gill, R; Haeberli, W; Huang, H; Jinnouchi, O; Makdisi, Y; Nakagawa, I; Nass, A; Saitô, N; Stephenson, E; Sviridia, D; Wise, T; Wood, J; Zelenski, A
2007-01-01
Precise and absolute beam polarization measurements are critical for the RHIC spin physics program. Because all experimental spin-dependent results are normalized by beam polarization, the normalization uncertainty contributes directly to final physics uncertainties. We aimed to perform the beam polarization measurement to an accuracy of $\\Delta P_{beam}/P_{beam} < 5%$. The absolute polarimeter consists of a Polarized Atomic Hydrogen Gas Jet Target and left-right pairs of silicon strip detectors and was installed in the RHIC-ring in 2004. This system features \\textit{proton-proton} elastic scattering in the Coulomb nuclear interference (CNI) region. Precise measurements of the analyzing power $A_N$ of this process have allowed us to achieve $\\Delta P_{beam}/P_{beam} =4.2%$ in 2005 for the first long spin-physics run. In this report, we describe the entire set up and performance of the system. The procedure of beam polarization measurement and analysis results from 2004-2005 are described. Physics topics of $A...
Absolute Gravimetry in Fennoscandia
Pettersen, B. R; TImmen, L.; Gitlein, O.
The Fennoscandian postglacial uplift has been mapped geometrically using precise levelling, tide gauges, and networks of permanent GPS stations. The results identify major uplift rates at sites located around the northern part of the Gulf of Bothnia. The vertical motions decay in all directions...... motions) has its major axis in the direction of southwest to northeast and covers a distance of about 2000 km. Absolute gravimetry was made in Finland and Norway in 1976 with a rise-and fall instrument. A decade later the number of gravity stations was expanded by JILAg-5, in Finland from 1988, in Norway...... from 1991, and in Sweden from 1992. FG5 was introduced in these three countries in 1993 (7 stations) and continued with an extended campaign in 1995 (12 stations). In 2003 a project was initiated by IfE, Hannover to collect observations simultaneously with GRACE on an annual cycle. New instruments were...
Optical tweezers absolute calibration
Dutra, R S; Neto, P A Maia; Nussenzveig, H M
2014-01-01
Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past fifteen years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spo...
Absolute multilateration between spheres
Muelaner, Jody; Wadsworth, William; Azini, Maria; Mullineux, Glen; Hughes, Ben; Reichold, Armin
2017-04-01
Environmental effects typically limit the accuracy of large scale coordinate measurements in applications such as aircraft production and particle accelerator alignment. This paper presents an initial design for a novel measurement technique with analysis and simulation showing that it could overcome the environmental limitations to provide a step change in large scale coordinate measurement accuracy. Referred to as absolute multilateration between spheres (AMS), it involves using absolute distance interferometry to directly measure the distances between pairs of plain steel spheres. A large portion of each sphere remains accessible as a reference datum, while the laser path can be shielded from environmental disturbances. As a single scale bar this can provide accurate scale information to be used for instrument verification or network measurement scaling. Since spheres can be simultaneously measured from multiple directions, it also allows highly accurate multilateration-based coordinate measurements to act as a large scale datum structure for localized measurements, or to be integrated within assembly tooling, coordinate measurement machines or robotic machinery. Analysis and simulation show that AMS can be self-aligned to achieve a theoretical combined standard uncertainty for the independent uncertainties of an individual 1 m scale bar of approximately 0.49 µm. It is also shown that combined with a 1 µm m‑1 standard uncertainty in the central reference system this could result in coordinate standard uncertainty magnitudes of 42 µm over a slender 1 m by 20 m network. This would be a sufficient step change in accuracy to enable next generation aerospace structures with natural laminar flow and part-to-part interchangeability.
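The core multilateration step, recovering a point from absolute distances to known reference centres, can be sketched with Gauss-Newton least squares (a generic illustration, not the AMS self-alignment procedure; the function name is ours):

```python
import numpy as np

def multilaterate(anchors, distances, iters=50):
    """Estimate a 3-D point from absolute distances to known anchor
    points (e.g. sphere centres) via Gauss-Newton least squares."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    x = anchors.mean(axis=0)            # initial guess: centroid of anchors
    for _ in range(iters):
        diff = x - anchors              # (n, 3)
        r = np.linalg.norm(diff, axis=1)
        J = diff / r[:, None]           # Jacobian of the residual r_i - d_i
        dx, *_ = np.linalg.lstsq(J, d - r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x
```

With four or more well-spread anchors the geometry is over-determined, which is what lets a network of such measurements serve as a datum structure.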
DIAGNOSTIC TEST FOR GARCH MODELS BASED ON ABSOLUTE RESIDUAL AUTOCORRELATIONS
Farhat Iqbal
2013-10-01
In this paper the asymptotic distribution of the absolute residual autocorrelations from generalized autoregressive conditional heteroscedastic (GARCH) models is derived. The correct asymptotic standard errors for the absolute residual autocorrelations are also obtained and, based on these results, a diagnostic test for checking the adequacy of GARCH-type models is developed. Our results do not depend on the existence of higher moments and are therefore robust under heavy-tailed distributions.
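The absolute residual autocorrelations on which such a diagnostic is built can be sketched as follows (a minimal illustration with a naive Box-Pierce-type statistic; the paper's test instead uses the corrected asymptotic standard errors):

```python
import numpy as np

def abs_residual_autocorr(resid, max_lag):
    """Sample autocorrelations of the absolute standardized residuals."""
    a = np.abs(np.asarray(resid, float))
    a = a - a.mean()
    denom = (a * a).sum()
    return np.array([(a[k:] * a[:-k]).sum() / denom
                     for k in range(1, max_lag + 1)])

def portmanteau(resid, max_lag):
    """Naive Box-Pierce-type statistic Q = n * sum(rho_k^2); under model
    adequacy it is roughly chi-square with max_lag degrees of freedom
    (assuming the naive 1/n variance, which the paper corrects)."""
    n = len(resid)
    rho = abs_residual_autocorr(resid, max_lag)
    return n * float((rho ** 2).sum())
```

Applied to the standardized residuals of a fitted GARCH model, large Q signals remaining dependence in the absolute residuals, i.e. model inadequacy.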
Feng, Chi; Li, Dong; Gao, Shan; Daniel, Ketui
2016-11-01
This paper presents a CFD (computational fluid dynamics) simulation and experimental results for the reflected radiation error from turbine vanes when measuring turbine blade temperature using a pyrometer. In the paper, an accurate reflection model based on discrete irregular surfaces is established, and the double contour integral method is used to calculate the view factor between the irregular surfaces. The calculated reflected radiation error was found to change with the relative position between blades and vanes as the temperature distribution of vanes and blades was simulated using CFD. Simulation results indicated that when the vane suction-surface temperature ranged from 860 K to 1060 K and the blade pressure-surface average temperature was 805 K, the pyrometer measurement error can reach up to 6.35%. Experimental results show that the maximum absolute pyrometer error for three different targets on the blade decreases from 6.52%, 4.15% and 1.35% to 0.89%, 0.82% and 0.69%, respectively, after error correction.
Eccentric error and compensation in rotationally symmetric laser triangulation
Wang Lei; Gao Jun; Wang Xiaojia; Johannes Eckstein; Peter Ott
2007-01-01
A rotationally symmetric triangulation (RST) sensor has more flexibility and fewer uncertainty limits because of its abaxial rotationally symmetric optical system. But if the incident laser is eccentric, the symmetry of the image degrades, resulting in an eccentric error, especially when some part of the imaged ring is blocked. A model of rotationally symmetric triangulation that meets the Scheimpflug condition is presented in this paper, and the error from an eccentric incident laser is analysed. It is pointed out that the eccentric error is composed of two parts: one is a cosine in circumference and proportional to the eccentric departure factor, and the other is a much smaller quadric factor of the departure. When the ring is complete, the first error factor is zero because it is integrated over the whole ring, but if some part of the ring is blocked, the first factor becomes the main error. Simulation verifies the result of the analysis. Finally, a compensation method for the error when some part of the ring is lost is presented, based on a neural network. Experimental results show that the compensation reduces the absolute maximum error by half and the standard deviation of error to 1/3.
Estimating Absolute Site Effects
Malagnini, L; Mayeda, K M; Akinci, A; Bragato, P L
2004-07-15
The authors use previously determined direct-wave attenuation functions as well as stable, coda-derived source excitation spectra to isolate the absolute S-wave site effect for the horizontal and vertical components of weak ground motion. Using selected stations in the seismic network of the eastern Alps, they find the following: (1) all "hard rock" sites exhibited deamplification phenomena due to absorption at frequencies ranging between 0.5 and 12 Hz (the available bandwidth), on both the horizontal and vertical components; (2) "hard rock" site transfer functions showed large variability at high frequency; (3) vertical-motion site transfer functions show strong frequency dependence; (4) H/V spectral ratios do not reproduce the characteristics of the true horizontal site transfer functions; (5) traditional, relative site terms obtained by using reference "rock sites" can be misleading in inferring the behavior of true site transfer functions, since most rock sites have non-flat responses due to shallow heterogeneities resulting from varying degrees of weathering. The authors also use their stable source spectra to estimate total radiated seismic energy and compare against previous results. They find that the earthquakes in this region exhibit non-constant dynamic stress drop scaling, which gives further support for a fundamental difference in rupture dynamics between small and large earthquakes. To correct the vertical and horizontal S-wave spectra for attenuation, they used detailed regional attenuation functions derived by Malagnini et al. (2002), who determined frequency-dependent geometrical spreading and Q for the region. These corrections account for the gross path effects (i.e., all distance-dependent effects), although the source and site effects are still present in the distance-corrected spectra. The main goal of this study is to isolate the absolute site effect (as a function of frequency
Notes on absolute Hodge classes
Charles, François
2011-01-01
We survey the theory of absolute Hodge classes. The notes include a full proof of Deligne's theorem on absolute Hodge classes on abelian varieties as well as a discussion of other topics, such as the field of definition of Hodge loci and the Kuga-Satake construction.
Alt, Tobias; Knicker, Axel J; Strüder, Heiko K
2017-04-01
Analytical methods to assess thigh muscle balance need to provide reliable data to allow meaningful interpretation. However, the reproducibility of the dynamic control ratio at the equilibrium point has not yet been evaluated. Therefore, the aim of this study was to compare relative and absolute reliability indices of its angle and moment values with conventional and functional hamstring-quadriceps ratios. Furthermore, effects of familiarisation and angular velocity on reproducibility were analysed. A number of 33 male volunteers participated in 3 identical test sessions. Peak moments (PMs) were determined unilaterally during maximum concentric and eccentric knee flexion (prone) and extension (supine position) at 0.53, 1.57 and 2.62 rad · s(-1). A repeated-measures ANOVA confirmed systematic bias. Intra-class correlation coefficients and standard errors of measurement indicated relative and absolute reliability. Correlation coefficients were averaged over the respective factors and tested for significant differences. All balance scores showed comparably low-to-moderate relative reliability. Reproducibility of dynamic control equilibrium parameters increased with increasing angular velocity, but not with familiarisation. At 2.62 rad · s(-1), high (moment: 0.906) to moderate (angle: 0.833) relative reliability scores with accordingly high absolute indices (4.9% and 6.4%) became apparent. Thus, the dynamic control equilibrium is an equivalent method for the reliable assessment of thigh muscle balance.
Partial sums of arithmetical functions with absolutely convergent Ramanujan expansions
BISWAJYOTI SAHA
2016-08-01
For an arithmetical function $f$ with absolutely convergent Ramanujan expansion, we derive an asymptotic formula for $\sum_{n\leq N} f(n)$ with an explicit error term. As a corollary we obtain new results about sum-of-divisors functions and Jordan’s totient functions.
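A classical instance of such an expansion (standard material, not taken from the paper) is the normalized sum-of-divisors function, whose partial sums admit exactly this kind of asymptotic formula with explicit error term:

```latex
% c_q(n) denotes the Ramanujan sum; the expansion is absolutely convergent.
\[
  \frac{\sigma(n)}{n} \;=\; \zeta(2)\sum_{q=1}^{\infty}\frac{c_q(n)}{q^{2}},
\qquad
  \sum_{n\le N}\frac{\sigma(n)}{n} \;=\; \zeta(2)\,N + O(\log N).
\]
```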
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s equation.
Estimating absolute sea level variations by combining GNSS and Tide gauge data
Bos, M.S.; Fernandes, R.M.S; Vethamony, P.; Mehra, P.
database, we have computed new sea level rise estimates for seven Indian tide gauges with realistic error bars. These error bars should be combined with the uncertainty of the vertical land motion to obtain the error bar of the absolute sea level rise...
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to derive the iterative formula of the error-prediction filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
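The Levinson recursion that solves the Toeplitz equation for the prediction-error filter can be sketched as follows (a standard textbook form, assumed rather than taken from the paper):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter
    by the Levinson recursion. r = autocorrelation lags r[0..order].
    Returns (filter coefficients a with a[0] = 1, prediction-error power)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])
    for m in range(1, order + 1):
        # Reflection coefficient; |k| < 1 for a valid autocorrelation,
        # which is what keeps the recursion (and the deconvolution) stable.
        k = -np.dot(a[:m], r[m:0:-1]) / err
        a_prev = a[:m + 1].copy()
        a[:m + 1] = a_prev + k * a_prev[::-1]
        err *= (1.0 - k * k)
    return a, err
```

For an AR(1) autocorrelation r[k] = φ^k, the recursion recovers the filter [1, −φ] and leaves higher-order coefficients at zero, as expected.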
Absoluteness or relativity of morality
I. A. Kadievskaya
2014-02-01
This article is a case study of the absoluteness or relativity of morality. The following questions are reconsidered: Can absolute morality exist? What is its content? Is it necessary for humanity? Is the moral personality an absolute value? Does the end justify the means? It is argued that, in reflecting on the problem of the absoluteness or relativity of morality, one ought not to abstract away from religion: billions of people find in it the basis of their morality. Humanity's accumulated ethical experience is infinitely rich and diverse: it includes both the divine revelations proclaimed by prophets and the brilliant insights of secular philosophers. Concepts such as morality, absolute morality, relativity, moral rigorism, moral personality, and formal ethics are analysed. The specific character of moral relativism, which proclaims the historicity and changeability of the standards of human behaviour, is established. Moral rigorism is understood as the principle according to which one must act only from considerations of moral duty, whereas all other external motivations (interest, happiness, friendship, etc.) have no moral value. The priority significance of non-rigoristic formal ethics is shown, in which the strong idealizations and abstractions of rigoristic ethics are replaced by weaker, more realistic and more humane ones. In non-rigoristic formal ethics, as in life, moral judgements can be, and in the overwhelming majority of cases are, relative.
Lyman alpha SMM/UVSP absolute calibration and geocoronal correction
Fontenla, Juan M.; Reichmann, Edwin J.
1987-01-01
Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.
Che Wan Jasimah bt Wan Mohamed Radzi
2016-11-01
Several factors may influence children’s lifestyle. The main purpose of this study is to introduce a children’s lifestyle index framework and model it based on structural equation modeling (SEM) with maximum likelihood (ML) and Bayesian predictors. This framework includes parental socioeconomic status, household food security, parental lifestyle, and children’s lifestyle. The sample for this study involves 452 volunteer Chinese families with children 7–12 years old. The experimental results are compared in terms of root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error metrics. An analysis of the proposed causal model suggests there are multiple significant interconnections among the variables of interest. According to both Bayesian and ML techniques, the proposed framework illustrates that parental socioeconomic status and parental lifestyle strongly impact children’s lifestyle. The impact of household food security on children’s lifestyle is rejected. However, there is a strong relationship between household food security and both parental socioeconomic status and parental lifestyle. Moreover, the outputs illustrate that the Bayesian prediction model has a good fit with the data, unlike the ML approach. The reasons for this discrepancy between ML and Bayesian prediction are debated, and potential advantages and caveats of applying the Bayesian approach in future studies are discussed.
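The four comparison metrics named above can be computed as follows (a routine sketch; the function name is illustrative, and MAPE assumes no zero targets):

```python
import numpy as np

def fit_metrics(y_true, y_pred):
    """Root mean square error, coefficient of determination (R^2),
    mean absolute error, and mean absolute percentage error."""
    y, yhat = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y - yhat
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = 1.0 - float(np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2))
    mae = float(np.mean(np.abs(resid)))
    mape = float(np.mean(np.abs(resid / y))) * 100.0  # percent; requires y != 0
    return {"RMSE": rmse, "R2": r2, "MAE": mae, "MAPE": mape}
```

Lower RMSE/MAE/MAPE and higher R² indicate the better-fitting predictor, which is the basis on which the ML and Bayesian SEM fits are compared.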
On-Orbit Absolute Radiance Standard for the Next Generation of IR Remote Sensing Instruments
Best, Fred. A.; Adler, Douglas P.; Pettersen, P. Claire; Gero, Jonathan; Taylor, Joseph K.; Revercomb, Henry E.; Knuteson, Robert O.; Perepezko, John H.
2012-01-01
The next generation of infrared remote sensing satellite instrumentation, including climate benchmark missions will require better absolute measurement accuracy than now available, and will most certainly rely on the emerging capability to fly SI traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high...
Database application for absolute spectrophotometry
Bochkov, Valery V.; Shumko, Sergiy
2002-12-01
A 32-bit database application with a multidocument interface for Windows has been developed to calculate absolute energy distributions of observed spectra. The original database contains wavelength-calibrated observed spectra which have already passed through apparatus reductions such as flat-fielding and subtraction of background and apparatus noise. Absolute energy distributions of observed spectra are defined on a unique scale by registering them simultaneously with an artificial intensity standard. Observations of a sequence of spectrophotometric standards are used to define the absolute energy of the artificial standard, and observations of spectrophotometric standards are used to determine the optical extinction at selected moments. An FFT algorithm implemented in the application allows convolution (deconvolution) of spectra with a user-defined PSF. The object-oriented interface has been created using facilities of C++ libraries. A client/server model with Windows Socket functionality based on the TCP/IP protocol is used to develop the application. It supports Dynamic Data Exchange conversation in server mode and uses Microsoft Exchange communication facilities.
The PMA Catalogue: 420 million positions and absolute proper motions
Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.
2017-07-01
We present a catalogue that contains about 420 million absolute proper motions of stars. It was derived from the combination of positions from Gaia DR1 and 2MASS, with a mean difference of epochs of about 15 yr. Most of the systematic zonal errors inherent in the 2MASS Catalogue were eliminated before deriving the absolute proper motions. The absolute calibration procedure (zero-pointing of the proper motions) was carried out using about 1.6 million positions of extragalactic sources. The mean formal error of the absolute calibration is less than 0.35 mas yr-1. The derived proper motions cover the whole celestial sphere without gaps for a range of stellar magnitudes from 8 to 21 mag. In the sky areas where the extragalactic sources are invisible (the avoidance zone), a dedicated procedure was used that transforms the relative proper motions into absolute ones. The rms error of proper motions depends on stellar magnitude and ranges from 2-5 mas yr-1 for stars with 10 mag mas yr-1 for faint ones. The present catalogue contains the Gaia DR1 positions of stars for the J2015 epoch. The system of the PMA proper motions does not depend on the systematic errors of the 2MASS positions, and in the range from 14 to 21 mag represents an independent realization of a quasi-inertial reference frame in the optical and near-infrared wavelength range. The Catalogue also contains stellar magnitudes taken from the Gaia DR1 and 2MASS catalogues. A comparison of the PMA proper motions of stars with similar data from certain recent catalogues has been undertaken.
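The zero-pointing idea described above — extragalactic sources have negligible true proper motion, so their mean measured motion is the offset of the relative frame — can be sketched in a few lines. All numbers (offset, noise, counts) are invented for illustration, not PMA values.

```python
import numpy as np

# Minimal sketch of absolute calibration ("zero-pointing") of relative
# proper motions using extragalactic sources. Illustrative numbers only.
rng = np.random.default_rng(0)

true_zero_point = 1.2          # mas/yr systematic offset of the relative frame
n_quasars = 1600
quasar_noise = 4.0             # mas/yr per-object measurement error

# Measured "relative" proper motions of quasars: offset plus noise.
pm_quasars = true_zero_point + quasar_noise * rng.standard_normal(n_quasars)

# The zero-point is the mean quasar motion; subtracting it makes the
# catalogue proper motions absolute.
zero_point = pm_quasars.mean()
pm_star_relative = 7.5
pm_star_absolute = pm_star_relative - zero_point

formal_error = quasar_noise / np.sqrt(n_quasars)  # error of the calibration
```

The formal calibration error scales as the per-object noise divided by the square root of the number of reference sources, which is why a large extragalactic sample keeps it well below the per-star errors.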
Absolute geostrophic currents in global tropical oceans
Yang, Lina; Yuan, Dongliang
2016-11-01
A set of absolute geostrophic current (AGC) data for the period January 2004 to December 2012 is calculated using the P-vector method based on monthly gridded Argo profiles in the world tropical oceans. The AGCs agree well with altimeter geostrophic currents, Ocean Surface Current Analysis-Real time currents, and moored current-meter measurements at 10-m depth, on the basis of which the classical Sverdrup circulation theory is evaluated. Calculations show that errors in the wind stress calculation, the AGC transport, and the depth ranges of vertical integration cannot explain the non-Sverdrup transport, which occurs mainly in the subtropical western ocean basins and in the equatorial currents near the Equator in each ocean basin (except the North Indian Ocean, where the circulation is dominated by monsoons). The identified non-Sverdrup transport is thereby robust and attributed to the joint effect of baroclinicity and relief of the bottom (JEBAR) and mesoscale eddy nonlinearity.
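The Sverdrup balance against which such transports are evaluated is a one-line relation: the depth-integrated meridional transport equals the wind stress curl divided by rho times beta. A minimal sketch follows; the density, beta, and curl values are generic illustrative numbers, not taken from the study.

```python
# Sketch of the Sverdrup balance: V = curl(tau) / (rho * beta).
# All values are illustrative, order-of-magnitude numbers.
rho = 1025.0        # sea-water density, kg/m^3
beta = 2.3e-11      # meridional gradient of the Coriolis parameter, 1/(m s)
curl_tau = -1e-7    # wind stress curl, N/m^3 (negative: subtropical gyre)

# Depth-integrated meridional transport per unit longitude, m^2/s
V = curl_tau / (rho * beta)
```

A negative curl thus drives equatorward interior transport; transport that cannot be accounted for this way is the "non-Sverdrup" residual discussed in the abstract.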
王鼎; 潘苗; 吴瑛
2011-01-01
Aiming at the self-calibration of direction-dependent gain-phase errors under a deterministic signal model, a maximum likelihood method (MLM) for calibrating direction-dependent gain-phase errors with carry-on instrumental sensors is presented. To maximize the high-dimensional nonlinear cost function arising in the MLM, an improved alternating projection iteration algorithm is proposed that jointly optimizes the azimuths and the direction-dependent gain-phase errors. Closed-form expressions of the Cramér-Rao bound (CRB) for the azimuths and gain-phase errors are derived. Simulation experiments show the effectiveness and advantages of the novel method.
Absolute luminosity measurements at LHCb
Hopchev, Plamen
2011-01-01
Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC running at a centre-of-mass energy of 7 TeV. In addition to the classic "van der Meer" scan method a novel technique has been developed which makes use of direct imaging of the individual beams using both proton-gas and proton-proton interactions. The beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. We describe both methods and compare the two results. In addition, we present the techniques used to transport the absolute luminosity measurement ...
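For Gaussian beams, the van der Meer method ultimately rests on the relation L = N1·N2·f / (2π·Σx·Σy), where Σx and Σy are the overlap widths extracted from the scans. A minimal sketch, with all beam parameters invented for illustration (not LHCb values):

```python
import math

# Van der Meer relation for one colliding bunch pair, Gaussian beams.
# All numbers are illustrative assumptions.
N1 = 1.0e11          # protons in the beam-1 bunch
N2 = 1.0e11          # protons in the beam-2 bunch
f_rev = 11245.0      # LHC revolution frequency, Hz
sigma_x = 60e-6      # overlap width in x, m
sigma_y = 60e-6      # overlap width in y, m

L = N1 * N2 * f_rev / (2 * math.pi * sigma_x * sigma_y)  # m^-2 s^-1
L_cm = L * 1e-4                                          # cm^-2 s^-1
```

The beam-imaging method measures the same overlap integral directly from reconstructed vertex distributions instead of from the scan curves.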
Relativistic Absolutism in Moral Education.
Vogt, W. Paul
1982-01-01
Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)
Absolute Standards for Climate Measurements
Leckey, J.
2016-10-01
In a world of changing climate, political uncertainty, and ever-changing budgets, the benefit of measurements traceable to SI standards increases by the day. To truly resolve climate change trends on a decadal time scale, on-orbit measurements need to be referenced to something that is both absolute and unchanging. One such mission is the Climate Absolute Radiance and Refractivity Observatory (CLARREO), which will measure a variety of climate variables with unprecedented accuracy to definitively quantify climate change. In the CLARREO mission, we will utilize phase-change cells in which a material is melted to calibrate the temperature of a blackbody that can then be observed by a spectrometer. A material's melting point is an unchanging physical constant that, through a series of transfers, can ultimately calibrate a spectrometer on an absolute scale. CLARREO consists of two primary instruments: an infrared (IR) spectrometer and a reflected solar (RS) spectrometer. The mission will contain orbiting radiometers with sufficient accuracy to calibrate other space-based instrumentation and thus transfer the absolute traceability. The status of various mission options will be presented.
Physics of negative absolute temperatures
Abraham, Eitan; Penrose, Oliver
2017-01-01
Negative absolute temperatures were introduced into experimental physics by Purcell and Pound, who successfully applied this concept to nuclear spins; nevertheless, the concept has proved controversial: a recent article aroused considerable interest by its claim, based on a classical entropy formula (the "volume entropy") due to Gibbs, that negative temperatures violated basic principles of statistical thermodynamics. Here we give a thermodynamic analysis that confirms the negative-temperature interpretation of the Purcell-Pound experiments. We also examine the principal arguments that have been advanced against the negative temperature concept; we find that these arguments are not logically compelling, and moreover that the underlying "volume" entropy formula leads to predictions inconsistent with existing experimental results on nuclear spins. We conclude that, despite the counterarguments, negative absolute temperatures make good theoretical sense and did occur in the experiments designed to produce them.
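The Purcell-Pound situation can be captured by the two-level Boltzmann relation: the populations satisfy n_up/n_down = exp(-dE/kT), so an inverted population forces T to be negative. A minimal numeric sketch, with an arbitrary level splitting:

```python
import math

# Two-level system: T = dE / (kB * ln(n_down / n_up)).
# T is negative exactly when the upper level is overpopulated.
kB = 1.380649e-23     # Boltzmann constant, J/K
dE = 1.0e-25          # level splitting, J (illustrative value)

def spin_temperature(n_down, n_up):
    """Temperature inferred from the two level populations."""
    return dE / (kB * math.log(n_down / n_up))

T_normal = spin_temperature(n_down=0.7, n_up=0.3)    # ordinary, positive T
T_inverted = spin_temperature(n_down=0.3, n_up=0.7)  # population inversion
```

Swapping the populations flips the sign of the logarithm, which is why inversion corresponds to a negative absolute temperature rather than an undefined one.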
Optomechanics for absolute rotation detection
Davuluri, Sankar
2016-07-01
In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift of the cavity mode is measured at the interferometer output to estimate the angular velocity of the absolute rotation. We derive an analytic expression for the minimum detectable rotation rate in our scheme for a given optomechanical cavity. The temperature dependence of the rotation detection sensitivity is also studied.

Android Apps for Absolute Beginners
Jackson, Wallace
2011-01-01
Anybody can start building simple apps for the Android platform, and this book will show you how! Android Apps for Absolute Beginners takes you through the process of getting your first Android applications up and running using plain English and practical examples. It cuts through the fog of jargon and mystery that surrounds Android application development, and gives you simple, step-by-step instructions to get you started. * Teaches Android application development in language anyone can understand, giving you the best possible start in Android development * Provides simple, step-by-step examples
Rational functions with maximal radius of absolute monotonicity
Loczi, Lajos
2014-05-19
We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.
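The defining condition can be checked numerically: R is the largest r ≥ 0 such that the function and all its derivatives are nonnegative at z = -r. A small sketch for a polynomial stability function whose radius is known to be 1; the bisection assumes the feasible set of r is an interval, which holds for this example (this is an illustration of the definition, not of the paper's algebraic methods):

```python
# Radius of absolute monotonicity of a polynomial, by bisection on r.
# Shown for phi(z) = 1 + z + z^2/2, whose radius is 1.

def derivs_at(coeffs, z):
    """Values of p and all its derivatives (coeffs low->high) at z."""
    vals = []
    c = list(coeffs)
    while c:
        vals.append(sum(ck * z**k for k, ck in enumerate(c)))
        c = [k * ck for k, ck in enumerate(c)][1:]  # differentiate once
    return vals

def abs_mono_radius(coeffs, r_max=10.0, tol=1e-10):
    lo, hi = 0.0, r_max
    for _ in range(200):
        mid = (lo + hi) / 2
        if all(v >= -tol for v in derivs_at(coeffs, -mid)):
            lo = mid   # still absolutely monotonic at -mid
        else:
            hi = mid
    return lo

R = abs_mono_radius([1.0, 1.0, 0.5])   # phi(z) = 1 + z + z^2/2
```

Here the binding constraint is phi'(-r) = 1 - r ≥ 0, giving R = 1, consistent with the classical result for this second-order function.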
Monocular 3D scene reconstruction at absolute scale
Wöhler, Christian; d'Angelo, Pablo; Krüger, Lars; Kuhl, Annika; Groß, Horst-Michael
In this article we propose a method for combining geometric and real-aperture methods for monocular three-dimensional (3D) reconstruction of static scenes at absolute scale. Our algorithm relies on a sequence of images of the object acquired by a monocular camera of fixed focal setting from different viewpoints. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained based on a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus are used as a regularisation term for Structure-from-Motion. The reprojection error obtained from bundle adjustment and the absolute depth error obtained from Depth-from-Defocus are simultaneously minimised for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about scene structure and camera motion. We describe the implementation of the proposed method both as an offline and as an online algorithm. Evaluating the algorithm on real-world data, we demonstrate that it yields typical relative scale errors of a few per cent. We examine the influence of random effects, i.e. the noise of the pixel grey values, and systematic effects, caused by thermal expansion of the optical system or by inclusion of strongly blurred images, on the accuracy of the 3D reconstruction result. Possible applications of our approach are in the field of industrial quality inspection; in particular, it is preferable to stereo cameras in industrial vision systems with space limitations or where strong vibrations occur.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
The absolute infrared magnitudes of type Ia supernovae
Meikle, W P S
2000-01-01
The absolute luminosities and homogeneity of early-time infrared (IR) light curves of type Ia supernovae are examined. Eight supernovae are considered. These are selected to have accurately known epochs of maximum blue light as well as having reliable distance estimates and/or good light curve coverage. Two approaches to extinction correction are considered. Owing to the low extinction in the IR, the differences between the corrections via the two methods are small. Absolute magnitude light curves in the J, H and K bands are derived. Six of the events, including five established "Branch-normal" supernovae, show similar coeval magnitudes. Two of these, SNe 1989B and 1998bu, were observed near maximum infrared light. This occurs about 5 days before maximum blue light. Absolute peak magnitudes of about -19.0, -18.7 and -18.8 in J, H and K, respectively, were obtained. The two spectroscopically peculiar supernovae in the sample, SNe 1986G and 1991T, also show atypical IR behaviour. The light curves of the six s...
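Absolute magnitudes of the kind quoted above come from the distance modulus, M = m - 5·log10(d/10 pc) - A, with A the (small, in the IR) extinction correction. A minimal sketch with invented numbers, not the paper's photometry:

```python
import math

# Distance-modulus conversion from apparent to absolute magnitude.
# Inputs below are illustrative, not values from the paper.
def absolute_magnitude(m_apparent, d_pc, extinction=0.0):
    """M = m - 5*log10(d / 10 pc) - A."""
    return m_apparent - 5 * math.log10(d_pc / 10.0) - extinction

M_J = absolute_magnitude(m_apparent=11.6, d_pc=1.32e7, extinction=0.05)
```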
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
In this review article, the definition of medication errors, the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors are explained clearly, with tables that are easy to understand.
Cosmology with Negative Absolute Temperatures
Vieira, J P P; Lewis, Antony
2016-01-01
Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. (2013) has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion ($w<-1$) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.
Cosmology with negative absolute temperatures
Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony
2016-08-01
Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
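The regularizer's target quantity — mutual information between discrete responses and true labels — can be estimated directly from co-occurrence counts. A minimal sketch (a plug-in count estimator, not the paper's entropy-based continuous formulation):

```python
import math
from collections import Counter

# Plug-in estimate of mutual information I(label; response) from counts.
def mutual_information(labels, responses):
    n = len(labels)
    joint = Counter(zip(labels, responses))
    pl = Counter(labels)
    pr = Counter(responses)
    mi = 0.0
    for (l, r), c in joint.items():
        pxy = c / n
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), in nats
        mi += pxy * math.log(pxy * n * n / (pl[l] * pr[r]))
    return mi

# Perfectly informative responses vs. completely uninformative ones.
mi_perfect = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
mi_none = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])
```

A classifier whose responses carry maximal information about the labels attains the label entropy (here log 2 nats); an uninformative one attains zero, which is the gap the regularizer pushes against.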
Measurement Error Models in Astronomy
Kelly, Brandon C
2011-01-01
I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
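The simplest functional/method-of-moments correction mentioned above can be demonstrated in a few lines: noise in the covariate attenuates the naive regression slope by the reliability ratio λ = var(x)/(var(x)+var(u)), and dividing by λ (assumed known here) removes the bias. All parameters are synthetic.

```python
import numpy as np

# Attenuation bias from covariate measurement error, and its
# method-of-moments correction. Synthetic data, known error variance.
rng = np.random.default_rng(1)
n = 200_000
beta = 2.0
sigma_x, sigma_u = 1.0, 0.5     # true covariate SD and measurement-error SD

x = sigma_x * rng.standard_normal(n)        # unobserved true covariate
w = x + sigma_u * rng.standard_normal(n)    # observed covariate, with error
y = beta * x + 0.1 * rng.standard_normal(n)

slope_naive = np.cov(w, y)[0, 1] / np.var(w)      # biased towards zero
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)      # reliability ratio
slope_corrected = slope_naive / lam               # de-biased estimate
```

With these variances λ = 0.8, so the naive slope converges to 1.6 rather than the true 2.0; the corrected estimator recovers the truth.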
RESEARCH OF PINYIN-TO-CHARACTER CONVERSION BASED ON MAXIMUM ENTROPY MODEL
Zhao Yan; Wang Xiaolong; Liu Bingquan; Guan Yi
2006-01-01
This paper applies a Maximum Entropy (ME) model to Pinyin-to-Character (PTC) conversion in place of the Hidden Markov Model (HMM), which cannot incorporate complicated and long-distance lexical information. Two ME models were built, based on simple and complex templates respectively, and the complex one gave better conversion results. Furthermore, conversion trigger pairs of the form yA → yB/cB were proposed to extract long-distance constraint features from the corpus; Average Mutual Information (AMI) was then used to select the conversion trigger pair features added to the ME model. Experiments show that the conversion error of the ME model with conversion trigger pairs is reduced by 4% on a small training corpus, compared with an HMM smoothed by absolute smoothing.
Andrei ACHIMAŞ CADARIU
2004-08-01
Assessment of a controlled clinical trial requires interpreting key parameters such as the control event rate, the experimental event rate, the relative risk, the absolute risk reduction, the relative risk reduction, and the number needed to treat when the effect of the treatment is a dichotomous variable. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence interval, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
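The quantities discussed above, and the asymptotic (Wald) interval whose adequacy the paper questions, can be computed directly; the event counts below are illustrative.

```python
import math

# Absolute risk reduction (ARR), number needed to treat (NNT), and the
# asymptotic (Wald) 95% confidence interval. Counts are illustrative.
events_control, n_control = 30, 100
events_treated, n_treated = 18, 100

cer = events_control / n_control   # control event rate
eer = events_treated / n_treated   # experimental event rate
arr = cer - eer                    # absolute risk reduction
nnt = 1.0 / arr                    # number needed to treat

z = 1.959964                       # ~97.5th percentile of the normal
se = math.sqrt(cer * (1 - cer) / n_control + eer * (1 - eer) / n_treated)
ci = (arr - z * se, arr + z * se)  # asymptotic CI for the ARR
```

For small samples or extreme rates this Wald interval can undercover, which is precisely the motivation for the alternative interval constructions assessed in the paper.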
Automated absolute phase retrieval in across-track interferometry
Madsen, Soren N.; Zebker, Howard A.
1992-01-01
Discussed is a key element in the processing of topographic radar maps acquired by the NASA/JPL airborne synthetic aperture radar configured as an across-track interferometer (TOPSAR). TOPSAR utilizes a single transmit and two receive antennas; the three-dimensional target location is determined by triangulation based on a known baseline and two measured slant ranges. The slant range difference is determined very accurately from the phase difference between the signals received by the two antennas. This phase is measured modulo 2π, whereas it is the absolute phase which relates directly to the difference in slant range. It is shown that splitting the range bandwidth into two subbands in the processor and processing each individually allows the absolute phase to be determined. The underlying principles and the system errors which must be considered are discussed, together with the implementation and results from processing data acquired during the summer of 1991.
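The split-bandwidth principle can be illustrated with a one-dimensional toy: wrapped phases at two nearby carrier frequencies differ by an amount proportional to the absolute path difference, and that coarse estimate resolves the 2π ambiguity of either phase. Frequencies and geometry below are invented for the sketch, not TOPSAR parameters.

```python
import math

# Toy two-frequency absolute phase recovery.
c = 3.0e8
f1, f2 = 5.30e9, 5.29e9      # two sub-band centre frequencies, Hz
R = 8.4                      # path difference to recover, m (< c/(f1-f2) = 30 m)

def wrap(phi):
    """Wrap a phase into (-pi, pi]."""
    return (phi + math.pi) % (2 * math.pi) - math.pi

phi1 = wrap(2 * math.pi * f1 * R / c)   # measured wrapped phases
phi2 = wrap(2 * math.pi * f2 * R / c)

# Differential phase is unambiguous because R < c / (f1 - f2).
dphi = wrap(phi1 - phi2)
R_coarse = dphi * c / (2 * math.pi * (f1 - f2))

# Use the coarse range to pick the integer cycle count of phi1.
n_cycles = round((2 * math.pi * f1 * R_coarse / c - phi1) / (2 * math.pi))
phi1_abs = phi1 + 2 * math.pi * n_cycles
R_fine = phi1_abs * c / (2 * math.pi * f1)
```

The coarse estimate only needs to be accurate to within half a wavelength of f1 for the rounding step to pick the correct cycle count; the final precision is then that of the single-frequency phase.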
Combinatorial Selection and Least Absolute Shrinkage via the CLASH Algorithm
Kyrillidis, Anastasios
2012-01-01
The least absolute shrinkage and selection operator (LASSO) for linear regression exploits the geometric interplay of the $\ell_2$-data error objective and the $\ell_1$-norm constraint to arbitrarily select sparse models. Guiding this uninformed selection process with sparsity models has been precisely the center of attention over the last decade in order to improve learning performance. To this end, we alter the selection process of LASSO to explicitly leverage combinatorial sparsity models (CSMs) via the combinatorial selection and least absolute shrinkage (CLASH) operator. We provide concrete guidelines on how to leverage combinatorial constraints within CLASH, and characterize CLASH's guarantees as a function of the set restricted isometry constants of the sensing matrix. Finally, our experimental results show that CLASH can outperform both LASSO and model-based compressive sensing in sparse estimation.
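For contrast with the combinatorial selection step of CLASH, the plain LASSO baseline can be solved with proximal gradient descent (ISTA), whose selection mechanism is just coordinate-wise soft-thresholding. A minimal sketch on synthetic noiseless data; sizes, the seed, and the regularization weight are arbitrary.

```python
import numpy as np

# ISTA (proximal gradient) for LASSO: min 0.5*||Ax - y||^2 + lam*||x||_1.
rng = np.random.default_rng(2)
n, d, k = 60, 100, 5
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[:k] = np.array([3.0, -2.0, 2.5, 1.5, -3.5])
y = A @ x_true                        # noiseless measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of gradient
x = np.zeros(d)
for _ in range(1000):
    g = A.T @ (A @ x - y)                # gradient of the data term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

CLASH replaces the soft-thresholding step with a projection onto a combinatorial sparsity model, which is what lets structural prior knowledge guide the support selection.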
Estimating minimum and maximum air temperature using MODIS data over Indo-Gangetic Plain
D B Shah; M R Pandya; H J Trivedi; A R Jani
2013-12-01
Spatially distributed air temperature data are required for climatological, hydrological and environmental studies. However, high-resolution spatial patterns of air temperature are not available from meteorological stations due to their sparse network. The objective of this study was to estimate high spatial resolution minimum air temperature (Tmin) and maximum air temperature (Tmax) over the Indo-Gangetic Plain using Moderate Resolution Imaging Spectroradiometer (MODIS) data and India Meteorological Department (IMD) ground station data. Tmin was estimated by establishing an empirical relationship between IMD Tmin and night-time MODIS Land Surface Temperature (Ts), while Tmax was estimated using the Temperature-Vegetation Index (TVX) approach. The TVX approach is based on the linear relationship between Ts and Normalized Difference Vegetation Index (NDVI) data, where Tmax is estimated by extrapolating the NDVI-Ts regression line to the maximum value NDVImax for effective full vegetation cover. The present study also proposed a methodology to estimate NDVImax using IMD-measured Tmax for the Indo-Gangetic Plain. Comparison of MODIS-estimated Tmin with IMD-measured Tmin showed a mean absolute error (MAE) of 1.73°C and a root mean square error (RMSE) of 2.2°C. The analysis for Tmax estimation showed that the calibrated NDVImax performed well, with an MAE of 1.79°C and RMSE of 2.16°C.
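The TVX extrapolation step is a simple linear fit: within a window, surface temperature falls roughly linearly with NDVI, and evaluating the fitted line at the full-cover NDVI (where canopy temperature approximates air temperature) gives the Tmax estimate. A sketch with synthetic window values; the NDVImax value is an assumed calibration constant, not the study's.

```python
import numpy as np

# TVX sketch: fit Ts vs NDVI in a window, extrapolate to NDVI_max.
# All values below are synthetic illustrations.
ndvi = np.array([0.20, 0.30, 0.40, 0.50, 0.60])
ts = np.array([44.0, 41.2, 38.1, 35.0, 31.9])   # land surface temp, deg C

slope, intercept = np.polyfit(ndvi, ts, 1)       # linear NDVI-Ts fit
ndvi_max = 0.86                                  # assumed full-cover NDVI
tmax_est = slope * ndvi_max + intercept          # air temperature estimate
```

The paper's contribution on this step is the calibration of NDVImax itself against IMD-measured Tmax, since the extrapolated estimate is sensitive to that endpoint.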
MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India
K Ramesh; R Anitha
2014-06-01
In this study, a Multivariate Adaptive Regression Spline (MARS) based system for predicting minimum and maximum surface air temperature up to seven days ahead is modelled for the station Chennai, India. To demonstrate the effectiveness of the proposed system, a comparison is made with models created using the statistical learning technique Support Vector Machine Regression (SVMr). The analysis highlights that the prediction accuracy of the MARS models for the minimum temperature forecast is promising in the short term (lead days 1 to 3), with a mean absolute error (MAE) of less than 1°C, while the prediction efficiency and skill degrade in the medium-term forecast (lead days 4 to 7), with the MAE slightly above 1°C. The MAE of the maximum temperature forecast is a little higher than that of the minimum temperature forecast, varying from 0.87°C for lead day one to 1.27°C for lead day seven with the MARS approach. The statistical error analysis emphasizes that the MARS models perform well, with an average reduction in MAE of 0.2°C over the SVMr models across all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lead time increases with both approaches.
Absolutely separating quantum maps and channels
Filippov, S. N.; Magadov, K. Yu; Jivulescu, M. A.
2017-08-01
Absolutely separable states ϱ remain separable under arbitrary unitary transformations UϱU†. By example of a three-qubit system we show that in a multipartite scenario neither full separability implies bipartite absolute separability nor does the reverse statement hold. The main goal of the paper is to analyze quantum maps resulting in absolutely separable output states. Such absolutely separating maps affect the states in a way that no Hamiltonian dynamics can make them entangled afterwards. We study the general properties of absolutely separating maps and channels with respect to bipartitions and multipartitions and show that absolutely separating maps are not necessarily entanglement breaking. We examine the stability of absolutely separating maps under a tensor product and show that Φ^⊗N is absolutely separating for any N if and only if Φ is the tracing map. Particular results are obtained for families of local unital multiqubit channels, global generalized Pauli channels, and combinations of identity, transposition, and tracing maps acting on states of arbitrary dimension. We also study the interplay between local and global noise components in absolutely separating bipartite depolarizing maps and discuss the input states with high resistance to absolute separability.
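A concrete separability computation of the kind underlying such analyses: for two qubits the PPT criterion is exact, so a Bell state mixed with white noise, ρ = p|Φ+⟩⟨Φ+| + (1-p)I/4, is separable iff the partial transpose has no negative eigenvalue, which happens iff p ≤ 1/3. This is a standard textbook check, offered as a sketch rather than the paper's construction.

```python
import numpy as np

# PPT check for a two-qubit isotropic (noisy Bell) state.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)
bell = np.outer(phi, phi)

def min_pt_eig(p):
    """Minimum eigenvalue of the partial transpose of rho(p)."""
    rho = p * bell + (1 - p) * np.eye(4) / 4
    # Reshape to indices (i, j, i', j') and transpose the second qubit.
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

entangled = min_pt_eig(0.5) < -1e-12   # above p = 1/3: NPT, entangled
separable = min_pt_eig(0.2) > -1e-12   # below p = 1/3: PPT, separable
```

A map is absolutely separating when its outputs stay on the separable side of such tests for every input state and every subsequent unitary, which is the stronger property the paper characterizes.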
Absolute configuration of isovouacapenol C
Hoong-Kun Fun
2010-08-01
The title compound, C27H34O5 {systematic name: (4aR,5R,6R,6aS,7R,11aS,11bR)-4a,6-dihydroxy-4,4,7,11b-tetramethyl-1,2,3,4,4a,5,6,6a,7,11,11a,11b-dodecahydrophenanthro[3,2-b]furan-5-yl benzoate}, is a cassane furanoditerpene, which was isolated from the roots of Caesalpinia pulcherrima. The three cyclohexane rings are trans-fused: two of these are in chair conformations, with the third in a twisted half-chair conformation, whereas the furan ring is almost planar (r.m.s. deviation = 0.003 Å). An intramolecular C—H...O interaction generates an S(6) ring. The absolute configurations of the stereogenic centres at positions 4a, 5, 6, 6a, 7, 11a and 11b are R, R, R, S, R, S and R, respectively. In the crystal, molecules are linked into infinite chains along [010] by O—H...O hydrogen bonds. Short C...O contacts [3.306 (2)–3.347 (2) Å] and C—H...π interactions also occur.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Measurement of the absolute speed is possible?
Shevchenko, Sergey V.; Tokarevsky, Vladimir V.
2016-01-01
One popular problem that has long been studied experimentally in physics is the testing of the special relativity theory, first of all measurements of the isotropy and constancy of the speed of light, as well as attempts to determine the so-called "absolute speed", i.e. the Earth's speed in the absolute spacetime (absolute reference frame), if this spacetime (ARF) exists. Corresponding experiments are aimed at measuring the proper speed of some reference frame in oth...
To measure the absolute speed is possible?
Shevchenko, Sergey; Tokarevsky, Vladimir
2013-01-01
One popular problem that has long been studied experimentally in physics is the testing of the special relativity theory, first of all measurements of the isotropy and constancy of the speed of light, as well as attempts to determine the so-called "absolute speed", i.e. the Earth's speed in the absolute spacetime (absolute reference frame), if this spacetime (ARF) exists. Corresponding experiments are aimed at measuring the proper speed of some reference frame in other one, incl...
Error estimation in plant growth analysis
Andrzej Gregorczyk
2014-01-01
A scheme is presented for calculating the errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are then given that describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the estimation of the obtained results has been carried out. The usefulness of the joint application of statistical methods and error calculus in plant growth analysis has been demonstrated.
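The growth characteristics above follow directly from the fitted curve. As an illustrative sketch (the logistic parameter values here are hypothetical, not taken from the article), the growth rate GR = dW/dt of a logistic curve W(t) = K/(1 + exp(b − kt)) has the closed form kW(1 − W/K), which can be checked against a finite difference; RGR is then GR/W:

```python
import math

def logistic(t, K=100.0, b=4.0, k=0.5):
    """Logistic dry-matter curve W(t) = K / (1 + exp(b - k*t))."""
    return K / (1.0 + math.exp(b - k * t))

def growth_rate(t, K=100.0, b=4.0, k=0.5):
    """Analytic GR = dW/dt = k * W * (1 - W/K) for the logistic curve."""
    W = logistic(t, K, b, k)
    return k * W * (1.0 - W / K)

t = 6.0
W = logistic(t)
gr_analytic = growth_rate(t)

# Central finite difference as a numerical check on the analytic GR.
h = 1e-5
gr_numeric = (logistic(t + h) - logistic(t - h)) / (2 * h)

rgr = gr_analytic / W   # relative growth rate RGR = GR / W
```

Error propagation for GR and RGR then follows by applying the usual error-calculus rules to these expressions.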
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
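The bound described above (maximum seismic moment limited to injected volume times the modulus of rigidity) is easy to evaluate. A minimal sketch, assuming a typical rigidity of 3×10^10 Pa and a hypothetical injected volume, converts the moment bound to a moment magnitude via the standard relation M = (2/3)(log10 M0 − 9.1):

```python
import math

def max_moment(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound on seismic moment (N*m): modulus of rigidity x injected volume."""
    return rigidity_pa * injected_volume_m3

def moment_magnitude(m0):
    """Standard moment-magnitude relation M = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Hypothetical example: 10,000 m^3 of injected fluid.
m0 = max_moment(1.0e4)        # 3e14 N*m
m_max = moment_magnitude(m0)  # roughly magnitude 3.6
```

As the abstract cautions, this is a statistical bound consistent with case histories, not an absolute physical limit.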
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Demonstrating an absolute quantum advantage in direct absorption measurement
Moreau, Paul-Antoine; Whittaker, Rebecca; Joshi, Siddarth K; Birchall, Patrick; McMillan, Alex; Rarity, John G; Matthews, Jonathan C F
2016-01-01
Engineering apparatus that harness quantum theory offers practical advantages over current technology. A fundamentally more powerful prospect is the long-standing prediction that such quantum technologies could out-perform any future iteration of their classical counterparts, no matter how well the attributes of those classical strategies can be improved. Here, we experimentally demonstrate such an instance of absolute advantage per photon probe in the precision of optical direct absorption measurement. We use correlated intensity measurements of spontaneous parametric downconversion using a commercially available air-cooled CCD, a new estimator for data analysis and a high heralding efficiency photon-pair source. We show this enables improvement in the precision of measurement, per photon probe, beyond what is achievable with an ideal coherent state (a perfect laser) detected with 100% efficient and noiseless detection. We see this absolute improvement for up to 50% absorption, with a maximum ...
Predicting accurate absolute binding energies in aqueous solution
Jensen, Jan Halborg
2015-01-01
Recent predictions of absolute binding free energies of host-guest complexes in aqueous solution using electronic structure theory have been encouraging for some systems, while other systems remain problematic. In this paper I summarize some of the many factors that could easily contribute 1-3 kcal mol(-1) errors at 298 K: three-body dispersion effects, molecular symmetry, anharmonicity, spurious imaginary frequencies, insufficient conformational sampling, wrong or changing ionization states, errors in the solvation free energy of ions, and explicit solvent (and ion) effects that are not well-represented by continuum models. While I focus on binding free energies in aqueous solution, the approach also applies (with minor adjustments) to any free energy difference such as conformational or reaction free energy differences or activation free energies in any solvent.
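To put errors of this size in perspective, a short sketch (standard thermodynamics, not code from the paper) shows how a free-energy error translates into a multiplicative error in the predicted binding constant via K = exp(−ΔG/RT):

```python
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)

def binding_constant_factor(delta_delta_g_kcal, temperature_k=298.0):
    """Multiplicative error in K caused by an error delta_delta_g in the free energy."""
    return math.exp(delta_delta_g_kcal / (R_KCAL * temperature_k))

# A 1.4 kcal/mol error at 298 K already corresponds to roughly a
# factor-of-ten error in the predicted binding constant.
factor = binding_constant_factor(1.4)
```

This is why errors of 1-3 kcal/mol, as listed in the abstract, matter so much in practice: they correspond to one to several orders of magnitude in the binding constant.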
The use of X-ray crystallography to determine absolute configuration.
Flack, H D; Bernardinelli, G
2008-05-15
Essential background on the determination of absolute configuration by way of single-crystal X-ray diffraction (XRD) is presented. The use and limitations of an internal chiral reference are described. The physical model underlying the Flack parameter is explained. Absolute structure and absolute configuration are defined and their similarities and differences are highlighted. The necessary conditions on the Flack parameter for satisfactory absolute-structure determination are detailed. The symmetry and purity conditions for absolute-configuration determination are discussed. The physical basis of resonant scattering is briefly presented and the insights obtained from a complete derivation of a Bijvoet intensity ratio by way of the mean-square Friedel difference are exposed. The requirements on least-squares refinement are emphasized. The topics of right-handed axes, XRD intensity measurement, software, crystal-structure evaluation, errors in crystal structures, and compatibility of data in their relation to absolute-configuration determination are described. Characterization of the compounds and crystals by the physicochemical measurement of optical rotation, CD spectra, and enantioselective chromatography are presented. Some simple and some complex examples of absolute-configuration determination using combined XRD and CD measurements, using XRD and enantioselective chromatography, and in multiply-twinned crystals clarify the technique. The review concludes with comments on absolute-configuration determination from light-atom structures.
Absolute Income, Relative Income, and Happiness
Ball, Richard; Chernova, Kateryna
2008-01-01
This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in absolute terms, and (ii) the level of her income relative to other people in her country. The main findings are that (i) both absolute and relative income are positively and significantly…
Investigating Absolute Value: A Real World Application
Kidd, Margaret; Pagni, David
2009-01-01
Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…
Monolithically integrated absolute frequency comb laser system
Wanke, Michael C.
2016-07-12
Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.
Absolute quantitation of protein posttranslational modification isoform.
Yang, Zhu; Li, Ning
2015-01-01
Mass spectrometry has been widely applied in the characterization and quantification of proteins from complex biological samples. Because absolute amounts of proteins are needed in the construction of mathematical models for the molecular systems of various biological phenotypes and phenomena, a number of quantitative proteomic methods have been adopted to measure absolute quantities of proteins using mass spectrometry. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) coupled with internal peptide standards, i.e., stable isotope-coded peptide dilution series, which originated from the field of analytical chemistry, has become a widely applied method in absolute quantitative proteomics research. This approach provides more and more absolute protein quantitation results of high confidence. As the quantitative study of posttranslational modifications (PTMs) that modulate the biological activity of proteins is crucial for biological science, and each isoform may contribute a unique biological function, degradation, and/or subcellular location, the absolute quantitation of protein PTM isoforms has become more relevant to its biological significance. In order to obtain the absolute cellular amount of a PTM isoform of a protein accurately, the impacts of protein fractionation, protein enrichment, and proteolytic digestion yield should be taken into consideration, and those effects arising before differentially stable isotope-coded PTM peptide standards are spiked into sample peptides have to be corrected. Assisted with stable isotope-labeled peptide standards, the absolute quantitation of isoforms of posttranslationally modified protein (AQUIP) method takes all these factors into account and determines the absolute amount of a protein PTM isoform from the absolute amount of the protein of interest and the PTM occupancy at the site of the protein. The absolute amount of the protein of interest is inferred by quantifying both the absolute amounts of a few PTM
Analysis of absolute flatness testing in sub-stitching interferometer
Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen
2016-09-01
Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. The test accuracy can be achieved by removing the error of the reference surface with the absolute testing method. When the testing accuracy (repeatability and reproducibility) is close to 1 nm, then in addition to the reference surface, other factors also affect the measuring accuracy, such as the environment, zoom magnification, stitching precision, tooling and fixtures, and the characteristics of the optical materials. In a class-1000 cleanroom we establish a well-controlled environment with long-term stability, the temperature held at 22 °C ± 0.02 °C and the humidity and noise controlled within set ranges. We establish the stitching system in this cleanroom; a vibration testing system is used to monitor vibration, and an air-pressure testing system is also used. In the motion system, we keep the tilt error to no more than 4 arcsec to reduce the error. The angle error can be measured by an autocollimator and a double-grating reading head.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
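The Mean Energy Model mentioned above has the familiar Gibbs solution p_i ∝ exp(−β e_i). A minimal sketch (the energies and the mean-energy constraint are made-up values, not from the paper) solves for β by bisection, exploiting the fact that the mean energy decreases monotonically in β:

```python
import math

def gibbs(energies, beta):
    """MaxEnt distribution under a mean-energy constraint: p_i proportional to exp(-beta*e_i)."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def mean_energy(energies, beta):
    p = gibbs(energies, beta)
    return sum(e * pi for e, pi in zip(energies, p))

def solve_beta(energies, target, lo=-50.0, hi=50.0, iters=200):
    """Bisection on beta; mean energy is monotonically decreasing in beta."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_energy(energies, mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

energies = [0.0, 1.0, 2.0]
beta = solve_beta(energies, target=0.8)
p = gibbs(energies, beta)   # maximum-entropy distribution with mean energy 0.8
```

The Lagrange multiplier β plays the role of an inverse temperature; the same solver applies to any single moment constraint.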
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
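The robustness mechanism can be illustrated in one dimension. The sketch below (the toy data and the plain gradient-ascent loop are my own, not the authors' algorithm) maximizes the correntropy between predictions and labels, with an L2 penalty on the weight, and contrasts it with least squares on data containing one outlying label:

```python
import math

def fit_mcc(xs, ys, sigma=1.0, lam=0.01, lr=0.05, iters=3000):
    """Maximize sum_i exp(-(y_i - w*x_i)^2 / (2*sigma^2)) - lam*w^2 by gradient ascent.

    The Gaussian kernel gives vanishing weight to samples with large residuals,
    so outlying labels barely influence the fit.
    """
    w = 0.0
    for _ in range(iters):
        grad = -2.0 * lam * w
        for x, y in zip(xs, ys):
            r = y - w * x
            grad += math.exp(-r * r / (2 * sigma ** 2)) * r * x / sigma ** 2
        w += lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, -10.0]   # true slope 2; the last label is an outlier

w_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)  # least squares
w_mcc = fit_mcc(xs, ys)            # stays near the true slope of 2
```

Least squares is dragged far from the true slope by the single outlier, while the correntropy fit effectively ignores it once the inliers dominate the kernel weights.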
Bouchard, Jean-Pierre; Veilleux, Israël; Jedidi, Rym; Noiseux, Isabelle; Fortin, Michel; Mermut, Ozzy
2010-05-24
Development, production quality control and calibration of optical tissue-mimicking phantoms require a convenient and robust characterization method with known absolute accuracy. We present a solid phantom characterization technique based on time-resolved transmittance measurement of light through a relatively small phantom sample. The small size of the sample enables characterization of every material batch produced in routine phantom production. Time-resolved transmittance data are pre-processed to correct for dark noise, sample thickness and the instrument response function. Pre-processed data are then compared to a forward model based on the radiative transfer equation solved through Monte Carlo simulations, accurately taking into account the finite geometry of the sample. The computational burden of the Monte Carlo technique was alleviated by building a lookup table of pre-computed results and using interpolation to obtain modeled transmittance traces at intermediate values of the optical properties. Near perfect fit residuals are obtained with a fit window using all data above 1% of the maximum value of the time-resolved transmittance trace. The absolute accuracy of the method is estimated through a thorough error analysis which takes into account the following contributions: measurement noise, system repeatability, instrument response function stability, sample thickness variation, refractive index inaccuracy, time-correlated single photon counting system time-base inaccuracy and forward model inaccuracy. Two-sigma absolute error estimates of 0.01 cm(-1) (11.3%) and 0.67 cm(-1) (6.8%) are obtained for the absorption coefficient and reduced scattering coefficient, respectively.
Measurement of absolute optical thickness of mask glass by wavelength-tuning Fourier analysis.
Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru
2015-07-01
Optical thickness is a fundamental characteristic of an optical component. A measurement method combining discrete Fourier-transform (DFT) analysis and a phase-shifting technique gives an appropriate value for the absolute optical thickness of a transparent plate. However, there is a systematic error caused by the nonlinearity of the phase-shifting technique. In this research the absolute optical-thickness distribution of mask blank glass was measured using DFT and wavelength-tuning Fizeau interferometry without using sensitive phase-shifting techniques. The error occurring during the DFT analysis was compensated for by using the unwrapping correlation. The experimental results indicated that the absolute optical thickness of mask glass was measured with an accuracy of 5 nm.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Anonymous
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least-squares method. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least-squares method.
[Survey in hospitals. Nursing errors, error culture and error management].
Habermann, Monika; Cramer, Henning
2010-09-01
Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.
Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.
Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong
2016-03-20
A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
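The performance reference in such comparisons is the full maximum likelihood detector, which for short blocks can be evaluated by exhaustive search. In the sketch below (the two-tap channel, block length, and noise values are made-up), an ML detector picks the BPSK sequence whose noiseless channel output is closest in squared distance to the received samples:

```python
from itertools import product

def channel_output(bits, taps):
    """Noiseless output of an ISI channel: y_k = sum_j taps[j] * s_{k-j}."""
    out = []
    for k in range(len(bits)):
        y = 0.0
        for j, h in enumerate(taps):
            if k - j >= 0:
                y += h * bits[k - j]
        out.append(y)
    return out

def ml_detect(received, taps, n):
    """Brute-force ML detection: minimize squared distance over all 2^n sequences."""
    best, best_cost = None, float("inf")
    for cand in product([-1.0, 1.0], repeat=n):
        cost = sum((r - y) ** 2
                   for r, y in zip(received, channel_output(cand, taps)))
        if cost < best_cost:
            best, best_cost = cand, cost
    return list(best)

taps = [1.0, 0.5]                       # two-tap ISI channel
sent = [1.0, -1.0, -1.0, 1.0, 1.0, -1.0]
noise = [0.1, -0.2, 0.15, -0.1, 0.2, -0.15]
noisy = [y + n for y, n in zip(channel_output(sent, taps), noise)]
decoded = ml_detect(noisy, taps, len(sent))
```

The exponential cost of this search is exactly why reduced-complexity "near ML" detectors, with or without a preceding equalizer, are of practical interest.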
The Simplicity Argument and Absolute Morality
Mijuskovic, Ben
1975-01-01
In this paper the author has maintained that there is a similarity of thought to be found in the writings of Cudworth, Emerson, and Husserl in his investigation of an absolute system of morality. (Author/RK)
Magnifying absolute instruments for optically homogeneous regions
Tyc, Tomas
2011-01-01
We propose a class of magnifying absolute optical instruments with a positive isotropic refractive index. They create magnified stigmatic images, either virtual or real, of optically homogeneous three-dimensional spatial regions within geometrical optics.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
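For a two-state system the idea reduces to something very simple. In the sketch below (all numbers are invented, and the Monte Carlo propagation is my own illustration, not the authors' explicit calculation), the MaxEnt distribution for states with energies 0 and 1 under a mean-energy constraint m is just (1 − m, m), so a Gaussian uncertainty on m propagates directly to a spread of MaxEnt distributions rather than a single point probability:

```python
import random
import statistics

def maxent_two_state(mean_energy):
    """MaxEnt distribution over energies {0, 1} with E[energy] = mean_energy."""
    return (1.0 - mean_energy, mean_energy)

random.seed(0)
m_hat, sigma = 0.3, 0.05   # empirically estimated constraint value and its uncertainty
samples = []
for _ in range(20000):
    m = random.gauss(m_hat, sigma)
    m = min(max(m, 0.0), 1.0)   # keep the constraint feasible
    samples.append(maxent_two_state(m)[1])

# Classic MaxEnt keeps only the point value p1 = 0.3; the generalized approach
# keeps the whole induced density, summarized here by its mean and spread.
p1_mean = statistics.mean(samples)
p1_sd = statistics.pstdev(samples)
```

In higher dimensions the mapping from constraint values to MaxEnt probabilities is no longer linear, which is where the numerical extension described in the abstract comes in.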
The risks of absolute medical confidentiality.
Crook, M A
2013-03-01
Some ethicists argue that patient confidentiality is absolute and thus should never be broken. I examine these arguments, which, when critically scrutinised, become porous. I explore the concept of patient confidentiality and argue that although this is a very important medical and bioethical issue, it needs to be wisely handled to reduce third-party harm or even detriment to the patient. The argument for absolute confidentiality is particularly weak when it comes to genetic information and inherited disease.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Kevin Bradley Clark
2013-08-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes might also be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently the entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating the coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in
Beam positioning error budget in ICF driver
Shi Zhi Quan; Su Jing Qin
2002-01-01
The author presents a linear weighted-sum method for the beam-positioning error budget, based on the ICF targeting requirement, together with an equal- or unequal-probability approach to allocating errors to each optical element. Based on the relationship between the motion of the optical components and the beam position on target, the position error of each optical component was evaluated, referred to as its maximum range. A large number of ray traces were performed, and the position error budget was modified according to the normal distribution law. An overview of the position error budget of the components is provided.
Absolute instability in viscoelastic mixing layers
Ray, Prasun K.; Zaki, Tamer A.
2014-01-01
The spatiotemporal linear stability of viscoelastic planar mixing layers is investigated. A one-parameter family of velocity profiles is used as the base state, with the parameter S controlling the amount of shear and backflow. The influence of viscoelasticity in dilute polymer solutions is modeled with the Oldroyd-B and FENE-P constitutive equations. Both models require the specification of the ratio of the polymer-relaxation and convective time scales (the Weissenberg number, We) and the ratio of solvent and solution viscosities (β). The maximum polymer extensibility, L, must also be specified for the FENE-P model. We examine how the variation of these parameters along with the Reynolds number, Re, affects the minimum value of S at which the flow becomes locally absolutely unstable. With the Oldroyd-B model, the influence of viscoelasticity is shown to be almost fully captured by the elasticity, E* ≡ (1 − β)We/Re, and S_crit decreases as elasticity is increased, i.e., elasticity is destabilizing. A simple approximate dispersion relation obtained via long-wave asymptotic analysis is shown to accurately capture this destabilizing influence. Results obtained with the FENE-P model exhibit a rich variety of behavior. At large values of the extensibility, L, results are similar to those for the Oldroyd-B fluid, as expected. However, when the extensibility is reduced to more realistic values (L ≈ 100), one must consider the scaled shear rate, η_c ≡ WeS/(2L), in addition to the elasticity. When η_c is large, the base-state polymer stress obtained by the FENE-P model is reduced, and there is a corresponding reduction in the overall influence of viscoelasticity on stability. Additionally, elasticity exhibits a stabilizing effect which is driven by the streamwise-normal perturbation polymer stress. As η_c is reduced, the base-state and perturbation normal polymer stresses predicted by the FENE-P model move towards the Oldroyd-B values, and the destabilizing
Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P
2016-03-21
Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effect of these variables and the degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo-mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible, and that if multiple FSRs are used in a system, they must be calibrated independently.
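The three error measures used above to compare calibration conditions can be computed directly; a minimal Python sketch with hypothetical applied-vs-predicted force readings (not data from the study):

```python
import numpy as np

def calibration_errors(f_true, f_pred):
    """Error metrics used to compare calibration curves:
    RMSE, mean absolute error, and maximum absolute error."""
    r = np.asarray(f_pred, float) - np.asarray(f_true, float)
    return {
        "rmse": float(np.sqrt(np.mean(r**2))),
        "mae": float(np.mean(np.abs(r))),
        "max": float(np.max(np.abs(r))),
    }

# Hypothetical applied vs. predicted forces (newtons) for one sensor/condition
errs = calibration_errors([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9])
```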
Goodrich, John W.
2017-01-01
This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^−2] to O[10^−7].
Measurement error analysis of taxi meter
He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu
2011-12-01
Error testing of a taximeter covers two aspects: (1) testing the time error of the taximeter, and (2) testing the distance (usage) error of the machine. The paper first gives the working principle of the meter and the principle of the error verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the instrument error and test error of the taxi meter, and discusses the detection methods for time error and distance error. Standard uncertainty components are evaluated from repeated measurements under identical conditions (Type A) and under differing conditions (Type B). Comparison and analysis of the results show that the meter complies with JJG 517-2009, and the procedure improves accuracy and efficiency considerably. In practice, the meter not only compensates for limited accuracy but also ensures that the transaction between drivers and passengers is fair, adding to the value of the taxi as a mode of transportation.
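The Type A (repeated-measurement) uncertainty component mentioned above is the sample standard deviation of the readings divided by √n; a small illustrative sketch with made-up readings:

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Type A standard uncertainty of the mean: s / sqrt(n), where s is the
    sample standard deviation of n repeated readings under identical
    conditions."""
    s = statistics.stdev(readings)
    return s / math.sqrt(len(readings))

# Hypothetical repeated distance-error readings (percent) for one meter
u_a = type_a_uncertainty([0.52, 0.49, 0.50, 0.51, 0.48])
```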
Maximum Safety Regenerative Power Tracking for DC Traction Power Systems
Guifu Du
2017-02-01
Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a voltage known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential occur in many railway lines during operation. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee safety while saving energy in DC traction power systems.
Floating-Point Numbers with Error Estimates (revised)
Masotti, Glauco
2012-01-01
The study addresses the problem of precision in floating-point (FP) computations. A method for estimating the errors which affect intermediate and final results is proposed and a summary of many software simulations is discussed. The basic idea consists of representing FP numbers by means of a data structure collecting value and estimated error information. Under certain constraints, the estimate of the absolute error is accurate and has a compact statistical distribution. By monitoring the estimated relative error during a computation (an ad-hoc definition of relative error has been used), the validity of results can be ensured. The error estimate enables the implementation of robust algorithms, and the detection of ill-conditioned problems. A dynamic extension of number precision, under the control of error estimates, is advocated, in order to compute results within given error bounds. A reduced time penalty could be achieved by a specialized FP processor. The realization of a hardwired processor incorporat...
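A minimal sketch of the value-plus-error-estimate idea described above (not the paper's actual data structure or propagation rules, which are more elaborate): each number carries an absolute-error estimate that is propagated through arithmetic, and a monitored relative error flags unreliable results such as catastrophic cancellation.

```python
from dataclasses import dataclass

@dataclass
class EV:
    """A floating-point value paired with an estimated absolute error bound,
    propagated to first order (worst-case, errors assumed small)."""
    v: float  # value
    e: float  # estimated absolute error

    def __add__(self, o): return EV(self.v + o.v, self.e + o.e)
    def __sub__(self, o): return EV(self.v - o.v, self.e + o.e)
    def __mul__(self, o): return EV(self.v * o.v,
                                    abs(self.v) * o.e + abs(o.v) * self.e)

    def rel(self):
        """Ad-hoc relative error: estimated absolute error over |value|."""
        return self.e / abs(self.v) if self.v else float("inf")

# Catastrophic cancellation: subtracting nearly equal values inflates rel()
x = EV(1.000001, 1e-6)
y = EV(1.000000, 1e-6)
d = x - y  # value ~1e-6, error ~2e-6, so rel() > 1 flags an unreliable result
```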
A global algorithm for estimating Absolute Salinity
T. J. McDougall
2012-12-01
The International Thermodynamic Equation of Seawater – 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity.
When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg^{−1} in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean.
To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly, however, are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
Landsat-7 ETM+ radiometric stability and absolute calibration
Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.; ,
2002-01-01
Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most −1.8% to −2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at −0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at −0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m² sr µm) soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibration to other land remote sensing satellite systems.
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t₁,0), t₁ = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of the TL-moments (t₁,0), t₁ = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4,0]) of the LN3 distribution was the most appropriate in most of the stations of the annual maximum streamflow series in Johor, Malaysia.
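For context, the untrimmed case (TL-moments with t₁ = 0 reduce to ordinary L-moments) can be estimated from probability-weighted moments; a sketch with a hypothetical annual-maximum streamflow sample:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments (i.e. TL-moments with zero trimming),
    computed from the probability-weighted moments b0 and b1."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))  # (1/n) sum ((i-1)/(n-1)) x_(i)
    return b0, 2.0 * b1 - b0  # lambda_1 (location), lambda_2 (scale)

# Hypothetical annual-maximum streamflow sample (m^3/s)
l1, l2 = sample_l_moments([120.0, 95.0, 210.0, 160.0, 140.0])
```

For the sample {1, 2, 3}, λ₂ equals half the Gini mean difference, 2/3, which makes a convenient sanity check.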
Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef
2009-09-01
Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed.
Simulation model for a silicon Hall sensor in an absolute digital position detection system
Pronk, F.A.; Groenland, J.P.J.; Lammerink, T.S.J.
1986-01-01
The performance of a digital position detection system with silicon Hall sensors for the detection of coded absolute position data has been investigated. The position information is fixed in one single track as a maximum length sequence of bits by means of longitudinal saturation recording in a hard
Error handling strategies in multiphase inverse modeling
Finsterle, S.; Zhang, Y.
2010-12-01
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
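One common mitigation for the non-normal residuals discussed above is to replace least squares with a robust norm. The sketch below is not iTOUGH2 code; it contrasts ordinary least squares with a least-absolute-deviations (L1) fit, obtained by iteratively reweighted least squares, when one gross outlier is present:

```python
import numpy as np

def fit_line_ls(x, y):
    """Ordinary least squares: optimal for Gaussian residuals, but a single
    gross outlier pulls the estimate strongly."""
    A = np.vstack([x, np.ones_like(x)]).T
    m, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return m, b

def fit_line_l1(x, y, iters=50):
    """Least absolute deviations via iteratively reweighted least squares:
    a simple robust alternative that down-weights residuals violating the
    Gaussian assumption."""
    w = np.ones_like(x)
    for _ in range(iters):
        A = np.vstack([x, np.ones_like(x)]).T * w[:, None]
        m, b = np.linalg.lstsq(A, y * w, rcond=None)[0]
        r = np.abs(y - (m * x + b))
        w = 1.0 / np.sqrt(np.maximum(r, 1e-8))  # IRLS weights for the L1 norm
    return m, b

x = np.arange(10.0)
y = 2.0 * x + 1.0   # true line
y[9] += 50.0        # one gross (systematic) error
```

Here the least-squares slope is biased well above the true value of 2, while the L1 fit stays close to it.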
Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi
2017-02-01
Various kinds of fringe order errors may occur in the absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel by pixel with repeated searches, which is inefficient in applications. To improve the efficiency of correcting multiple successive fringe order errors, in this paper we propose a method that simplifies error detection and correction by exploiting the stepwise increasing property of the fringe order. In the proposed method, the number of pixels in each step is estimated to find the possible true fringe order values, so the repeated searches needed to detect multiple successive errors can be avoided and the correction becomes efficient. The effectiveness of the proposed method is validated by experimental results.
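The stepwise-increasing property can be exploited with a single forward pass instead of repeated searches. The following is a simplified illustrative sketch, not the paper's algorithm; it assumes the first fringe order in the sequence is valid and that true orders step by 0 or +1 along the scan direction:

```python
def correct_fringe_orders(k):
    """Single-pass correction of fringe orders along a scanline.
    A value that breaks the monotonic 0/+1 stepping is replaced by
    whichever of {last, last + 1} it is closer to."""
    out = list(k)
    last = out[0]  # assumption: the first order is valid
    for i in range(1, len(out)):
        cands = (last, last + 1)
        best = min(cands, key=lambda c: abs(out[i] - c))
        out[i] = best
        last = best
    return out

# A corrupted order (9) inside an otherwise stepwise sequence is repaired
fixed = correct_fringe_orders([0, 0, 1, 1, 9, 2, 2, 3])
```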
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
Absolute photoacoustic thermometry in deep tissue.
Yao, Junjie; Ke, Haixin; Tai, Stephen; Zhou, Yong; Wang, Lihong V
2013-12-15
Photoacoustic thermography is a promising tool for temperature measurement in deep tissue. Here we propose an absolute temperature measurement method based on the dual temperature dependences of the Grüneisen parameter and the speed of sound in tissue. By taking ratiometric measurements at two adjacent temperatures, we can eliminate the factors that are temperature irrelevant but difficult to correct for in deep tissue. To validate our method, absolute temperatures of blood-filled tubes embedded ~9 mm deep in chicken tissue were measured in a biologically relevant range from 28°C to 46°C. The temperature measurement accuracy was ~0.6°C. The results suggest that our method can be potentially used for absolute temperature monitoring in deep tissue during thermotherapy.
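The ratiometric idea can be illustrated numerically. The sketch below is a simplification: it ignores the speed-of-sound dependence the paper also exploits, and it assumes a linear Grüneisen model with hypothetical constants c and T0 (these would come from calibration, not from the paper):

```python
def absolute_temperature(R, dT, c, T0=20.0):
    """Ratiometric temperature recovery (simplified sketch of the idea).
    Photoacoustic amplitude p(T) = G(T) * K, where K lumps fluence and
    detection factors that are unknown in deep tissue. The ratio
    R = p(T + dT) / p(T) cancels K. With a linear Grueneisen model
    G(T) = G0 * (1 + c*(T - T0)), solve R for the absolute temperature T."""
    u = c * dT / (R - 1.0)      # u = 1 + c*(T - T0)
    return T0 + (u - 1.0) / c

# Round trip with hypothetical constants: true T = 37 degC, c = 0.04 /degC
c, T0, T, dT = 0.04, 20.0, 37.0, 2.0
G = lambda t: 1.0 + c * (t - T0)
R = G(T + dT) / G(T)            # the only measurable quantity
T_rec = absolute_temperature(R, dT, c, T0)
```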
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
Classification of Spreadsheet Errors
Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian
2008-01-01
This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...
Automated absolute activation analysis with californium-252 sources
MacMurdo, K.W.; Bowman, W.W.
1978-09-01
A 100-mg ²⁵²Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a 6 half-life group model of delayed neutron emission; calculations include corrections for delayed neutron interference from ¹⁷O. Detection sensitivities of ≤400 ppB for natural uranium and 8 ppB (≤0.5 nCi/g) for ²³⁹Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppM level.
Absolute Position Total Internal Reflection Microscopy with an Optical Tweezer
Liu, Lulu; Rodriguez, Alejandro W; Capasso, Federico
2014-01-01
A non-invasive, in-situ calibration method for Total Internal Reflection Microscopy (TIRM) based on optical tweezing is presented which greatly expands the capabilities of this technique. We show that by making only simple modifications to the basic TIRM sensing setup and procedure, a probe particle's absolute position relative to a dielectric interface may be known with better than 10 nm precision out to a distance greater than 1 µm from the surface. This represents an approximate 10× improvement in error and 3× improvement in measurement range over conventional TIRM methods. The technique's advantage is in the direct measurement of the probe particle's scattering intensity vs. height profile in-situ, rather than relying on calculations or inexact system analogs for calibration. To demonstrate the improved versatility of the TIRM method in terms of tunability, precision, and range, we show our results for the hindered near-wall diffusion coefficient for a spherical dielectric particle.
Measured and modelled absolute gravity changes in Greenland
Nielsen, J. Emil; Forsberg, Rene; Strykowski, Gabriel
2014-01-01
In glaciated areas, the Earth is responding to the ongoing changes of the ice sheets, a response known as glacial isostatic adjustment (GIA). GIA can be investigated through observations of gravity change. For the ongoing assessment of the ice sheets mass balance, where satellite data are used, the study of GIA is important since it acts as an error source. GIA consists of three signals as seen by a gravimeter on the surface of the Earth. These signals are investigated in this study. The ICE-5G ice history and recently developed ice models of present day changes are used to model the gravity change in Greenland. The result is compared with the initial measurements of absolute gravity (AG) change at selected Greenland Network (GNET) sites.
Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model
[No author listed]
2006-01-01
In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first component computes the error statistics by using the National Meteorological Center (NMC) method, which is a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by using a maximum-likelihood estimation (MLE) with rawindsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Model and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC-method and MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
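The NMC (lagged-forecast difference) component can be sketched in a few lines. The 1/√2 factor below is the common assumption that the two lagged forecasts carry independent errors of equal variance; it is an illustrative assumption, not a detail taken from the paper:

```python
import numpy as np

def nmc_error_std(f48, f24):
    """NMC estimate of forecast error spread: the sample standard deviation
    of differences between 48-h and 24-h forecasts valid at the same time,
    divided by sqrt(2) under an independent, equal-variance assumption."""
    d = np.asarray(f48, float) - np.asarray(f24, float)
    return float(np.std(d, ddof=1) / np.sqrt(2.0))

# Hypothetical 500-hPa height forecasts (m) at one grid point
sigma = nmc_error_std([5502.0, 5491.0, 5510.0, 5488.0],
                      [5500.0, 5494.0, 5507.0, 5490.0])
```

In the scheme described above, this raw estimate would then be rescaled by a calibration factor derived from the MLE statistics.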
Quantitating error in blood flow measurements with radioactive microspheres
Austin, R.E. Jr.; Hauck, W.W.; Aldea, G.S.; Flynn, A.E.; Coggins, D.L.; Hoffman, J.I.
1989-07-01
Accurate determination of the reproducibility of measurements using the microsphere technique is important in assessing differences in blood flow to different organs or regions within organs, as well as changes in perfusion under various experimental conditions. The sources of error of the technique are briefly reviewed. In addition, we derived a method for combining quantifiable sources of error into a single estimate that was evaluated experimentally by simultaneously injecting eight or nine sets of microspheres (each with a different radionuclide label) into four anesthetized dogs. Each nuclide was used to calculate blood flow in 145-190 myocardial regions. We compared each flow determination (using a single nuclide label) with a weighted mean for the piece (based on the remaining nuclides). The difference was defined as "measured" error. In all, there were a total of 5,975 flow observations. We compared measured error with theoretical estimates based on the Poisson error of radioactive disintegration and microsphere entrapment, nuclide separation error, and reference flow error. We found that combined estimates based on these sources completely accounted for measured error in the relative distribution of microspheres. In addition, our estimates of the error in measuring absolute flows (which were established using microsphere reference samples) slightly, but significantly, underestimated measured error in absolute flow.
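Combining independent, quantifiable relative error sources into a single estimate is typically done in quadrature; an illustrative sketch with hypothetical component values (not figures from the study):

```python
import math

def combined_relative_error(*rel_errors):
    """Combine independent relative error sources in quadrature into a
    single estimate, e.g. Poisson counting error, nuclide separation
    error, and reference-flow error."""
    return math.sqrt(sum(e * e for e in rel_errors))

def poisson_relative_error(counts):
    """Relative error of a radioactive count: sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(counts)

# Hypothetical components: 10,000 counts, 2% separation, 3% reference flow
total = combined_relative_error(poisson_relative_error(10_000), 0.02, 0.03)
```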
Absolute Asymmetric Synthesis Using A Cocrystal Approach
H.Koshima
2007-01-01
Absolute asymmetric synthesis by means of solid-state reactions of chiral crystals self-assembled from achiral molecules is an attractive and promising methodology for asymmetric synthesis, because it does not require any external chiral source such as a chiral catalyst. In order to design absolute asymmetric syntheses in the solid state reliably, it is essential to prepare and predict the formation of chiral crystals from achiral compounds. We have prepared a number of chiral cocrystals co...
Absolute-Magnitude Distributions of Supernovae
Richardson, Dean; Wright, John; Maddox, Larry
2014-01-01
The absolute-magnitude distributions of seven supernova types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the subluminous events (M_B > −15) make up about 3%. The normal Ia distribution was the brightest, with a mean absolute blue magnitude of −19.25. The IIP distribution was the dimmest, at −16.75.
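For reference, absolute magnitudes such as those in these distributions follow from the distance modulus; a small sketch with a hypothetical event:

```python
import math

def absolute_magnitude(m_apparent, distance_mpc, extinction=0.0):
    """Absolute magnitude from apparent magnitude and distance:
    M = m - 5*log10(d / 10 pc) - A, where A is the total
    (foreground + host-galaxy) extinction in magnitudes."""
    d_pc = distance_mpc * 1.0e6
    return m_apparent - 5.0 * math.log10(d_pc / 10.0) - extinction

# A hypothetical SN at 100 Mpc observed at m_B = 16.3 with A = 0.05 mag
M_B = absolute_magnitude(16.3, 100.0, 0.05)
```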
Absolute Stability Limit for Relativistic Charged Spheres
Giuliani, Alessandro
2007-01-01
We find an exact solution for the stability limit of relativistic charged spheres for the case of constant gravitational mass density and constant charge density. We argue that this provides an absolute stability limit for any relativistic charged sphere in which the gravitational mass density decreases with radius and the charge density increases with radius. We then provide a cruder absolute stability limit that applies to any charged sphere with a spherically symmetric mass and charge distribution. We give numerical results for all cases. In addition, we discuss the example of a neutral sphere surrounded by a thin, charged shell.
DI3 - A New Procedure for Absolute Directional Measurements
A Geese
2011-06-01
The standard observatory procedure for determining a geomagnetic field's declination and inclination absolutely is the DI-flux measurement. The instrument consists of a non-magnetic theodolite equipped with a single-axis fluxgate magnetometer. Additionally, a scalar magnetometer is needed to provide all three components of the field. Using only 12 measurement steps, all systematic errors can be accounted for, but if only one of the readings is wrong, the whole measurement has to be rejected. We use a three-component sensor on top of the theodolite's telescope. By performing more measurement steps, we gain much better control of the whole procedure: as the magnetometer can be fully calibrated by rotating about two independent directions, every combined reading of magnetometer output and theodolite angles provides the absolute field vector. We predefined a set of angle positions that the observer has to try to achieve. To further simplify the measurement procedure, the observer is guided by a pocket PC, on which he only has to confirm the theodolite position. The magnetic field is then stored automatically, together with the horizontal and vertical angles. The DI3 measurement is periodically performed at the Niemegk Observatory, allowing for a direct comparison with the traditional measurements.
Absolute chronology and stratigraphy of Lepenski Vir
Borić Dušan
2007-01-01
meaningful and representative of two separate and defined phases of occupation at this locale. This early period would correspond with the phase that the excavator of Lepenski Vir defined as Proto-Lepenski Vir, although his ideas about the spatial distribution of this phase, its interpretation, duration and relation to the later phase of trapezoidal buildings must be revised in the light of new AMS dates and other available data. The phase with trapezoidal buildings most likely starts only around 6200 cal BC, and most of the trapezoidal buildings might have been abandoned by around 5900 cal BC. The absolute span of only two or three hundred years, and likely even less, for the flourishing of building activity related to trapezoidal structures at Lepenski Vir significantly compresses Srejović's phase I. Thus, it is difficult to maintain the excavator's five subphases, which, similarly to Ivana Radovanović's more recent re-phasing of Lepenski Vir into I-1-3, remain largely guesswork until more extensive and systematic dating of each building is accomplished, along with statistical modeling, in order to narrow the margin of error. On the whole, the new dates from these contexts correspond better with Srejović's stratigraphic logic of sequencing buildings to particular phases on the basis of their superimposing and cutting than with Radovanović's stylistic logic, i.e. her typology of hearth forms, ash-places, entrance platforms, and presence/absence of supports around rectangular hearths used as reliable chronological indicators. The short chronological span for phase I also suggests that phase Lepenski Vir II is not realistic. This has already been shown by overlapping plans of the phase I buildings and stone outlines that the excavator of the site attributed to the Lepenski Vir II phase.
According to Srejović, Lepenski Vir phase II was characterized by buildings with stone walls made in the shape of trapezes, repeating the outline of supposedly earlier limestone floors of his
Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading
Pierluigi Guerriero
2015-01-01
A maximum power tracking algorithm exploiting operating-point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is demonstrated by means of circuit-level simulations and experimental results. Experiments showed that, in comparison with a standard perturb-and-observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
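The global-tracking idea can be illustrated on a synthetic two-hump P-V curve. This sketch is not the paper's algorithm (which uses per-panel operating-point information); it only shows the principle of locating the absolute maximum, where a local hill-climber such as perturb-and-observe can stall on the wrong hump under partial shading:

```python
import numpy as np

def global_mpp(v_grid, p_of_v):
    """Coarse scan of the P-V curve to locate the absolute maximum power
    point among multiple local maxima caused by partial shading."""
    p = np.array([p_of_v(v) for v in v_grid])
    i = int(np.argmax(p))
    return v_grid[i], p[i]

# Hypothetical shaded P-V curve: local maximum near 12 V (40 W),
# absolute maximum near 30 V (65 W)
def p_of_v(v):
    return (40.0 * np.exp(-((v - 12.0) / 4.0) ** 2)
            + 65.0 * np.exp(-((v - 30.0) / 5.0) ** 2))

v_mpp, p_mpp = global_mpp(np.linspace(0.0, 40.0, 401), p_of_v)
```

A practical tracker would follow the coarse scan with a local refinement (e.g. perturb-and-observe) around the selected hump.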
2012-01-01
This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish synchronous traction. It is used to calibrate the error of the relative positioning sensor, which provides the magnetic phase signal. On the basis of an analysis of the principle of the absolute positioning sensor, the paper describes the design of the sending and rece...
Stimulus Probability Effects in Absolute Identification
Kent, Christopher; Lamberts, Koen
2016-01-01
This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…
Time Function and Absolute Black Hole
Javadi, Hossein; Forouzbakhsh, Farshid
2006-01-01
Einstein's theory of gravity is not consistent with quantum mechanics, because general relativity cannot be quantized [1]. But without conversion of force and energy, it is impossible to find a grand unified theory. A very important result of CPH theory is the time function, which allows us to give a new ...... description of the absolute black hole and of the time before the big bang....
Teaching Absolute Value Inequalities to Mature Students
Sierpinska, Anna; Bobos, Georgeana; Pruncut, Andreea
2011-01-01
This paper gives an account of a teaching experiment on absolute value inequalities, whose aim was to identify characteristics of an approach that would realize the potential of the topic to develop theoretical thinking in students enrolled in prerequisite mathematics courses at a large, urban North American university. The potential is…
Thin-film magnetoresistive absolute position detector
Groenland, Johannes Petrus Jacobus
1990-01-01
The subject of this thesis is the investigation of a digital absolute position-detection system, which is based on a position-information carrier (i.e. a magnetic tape) with one single code track on the one hand, and an array of magnetoresistive sensors for the detection of the informatio
Det demokratiske argument for absolut ytringsfrihed
Lægaard, Sune
2014-01-01
The article discusses the claim that absolute freedom of expression is a necessary precondition for democratic legitimacy, taking as its point of departure a reconstruction of an argument put forward by Ronald Dworkin. The question is why freedom of expression should be a precondition for democratic legitimacy, and wh...
Magnetoresistive sensor for absolute position detection
Groenland, J.P.J.
1984-01-01
A digital measurement principle for absolute position is described. The position data are recorded serially into a single track of a hard-magnetic layer by means of longitudinal saturation recording. Detection is possible by means of an array of sensor elements which can be made on a substrate.
Generalized Norms Inequalities for Absolute Value Operators
Ilyas Ali
2014-02-01
In this article, we generalize some norm inequalities for sums, differences, and products of absolute value operators. Our results are based on Minkowski-type inequalities and generalized forms of the Cauchy-Schwarz inequality. Some other related inequalities are also discussed.
New Techniques for Absolute Gravity Measurements.
1983-01-07
Hammond, J.A. (1978) Bollettino Di Geofisica Teorica ed Applicata Vol. XX. 8. Hammond, J.A., and Iliff, R.L. (1979) The AFGL absolute gravity system...International Gravimetric Bureau, No. L:I-43.
Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator
Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl
2011-01-01
In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this model, the maximum positioning error was estimated for a U-shape PPR planar manipulator, the results being compared with experimental measurements. It is found that the error distribution from the simulation approximates that of the measurements.
Absolute Radiation Thermometry in the NIR
Bünger, L.; Taubert, R. D.; Gutschwager, B.; Anhalt, K.; Briaudeau, S.; Sadli, M.
2017-04-01
A near infrared (NIR) radiation thermometer (RT) for temperature measurements in the range from 773 K up to 1235 K was characterized and calibrated in terms of the "Mise en Pratique for the definition of the Kelvin" (MeP-K) by measuring its absolute spectral radiance responsivity. Using Planck's law of thermal radiation allows the direct measurement of the thermodynamic temperature independently of any ITS-90 fixed-point. To determine the absolute spectral radiance responsivity of the radiation thermometer in the NIR spectral region, an existing PTB monochromator-based calibration setup was upgraded with a supercontinuum laser system (0.45 μm to 2.4 μm) resulting in a significantly improved signal-to-noise ratio. The RT was characterized with respect to its nonlinearity, size-of-source effect, distance effect, and the consistency of its individual temperature measuring ranges. To further improve the calibration setup, a new tool for the aperture alignment and distance measurement was developed. Furthermore, the diffraction correction as well as the impedance correction of the current-to-voltage converter is considered. The calibration scheme and the corresponding uncertainty budget of the absolute spectral responsivity are presented. A relative standard uncertainty of 0.1 % (k=1) for the absolute spectral radiance responsivity was achieved. The absolute radiometric calibration was validated at four temperature values with respect to the ITS-90 via a variable temperature heatpipe blackbody (773 K ...1235 K) and at a gold fixed-point blackbody radiator (1337.33 K).
Error Model of Curves in GIS and Digitization Experiment
GUO Tongde; WANG Jiayao; WANG Guangxia
2006-01-01
A stochastic error process of curves is proposed as the error model to describe the errors of curves in GIS. In terms of the stochastic process, four characteristics concerning the local error of curves, namely the mean error function, standard error function, absolute error function, and the correlation function of errors, are put forward. The total error of a curve is expressed by a mean square integral of the stochastic error process. The probabilistic and geometric meanings of the characteristics mentioned above are also discussed. A scan digitization experiment is designed to check the effectiveness of the model. In the experiment, a piece of contour line is digitized more than 100 times and many sample functions are derived from the experiment. Finally, all the error characteristics are estimated on the basis of the sample functions. The experimental results show that the systematic error in digitized map data is not negligible, and that the errors of points on curves depend chiefly on the curvature and the concavity of the curves.
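The per-point characteristics named in this abstract can be estimated from repeated digitizations of the same curve point; a minimal sketch, where the signed sample offsets are invented for illustration:

```python
import statistics

# Signed offsets (mm) of one curve point across repeated digitizations
# of the same contour line (values are made up for illustration).
samples = [0.12, -0.05, 0.08, 0.15, -0.02, 0.10]

mean_error = statistics.mean(samples)                  # systematic component
std_error = statistics.stdev(samples)                  # random component
abs_error = statistics.mean(abs(e) for e in samples)   # mean absolute error
```

A nonzero `mean_error` across many points is exactly the kind of systematic error the experiment found to be non-negligible in digitized map data.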
Error detection based on MB types
FANG Yong; JEONG JeChang; WU ChengKe
2008-01-01
This paper proposes a method of error detection based on macroblock (MB) types for video transmission. For decoded inter MBs, the absolute values of received residues are accumulated. At the same time, the intra textural complexity of the current MB is estimated from that of the motion-compensated reference block. We compare the inter residue with the intra textural complexity. If the inter residue is larger than the intra textural complexity by a predefined threshold, the MB is considered to be erroneous and errors are concealed. For decoded intra MBs, the connective smoothness of the current MB with neighboring MBs is tested to find erroneous MBs. Simulation results show that the new method can remove seriously corrupted MBs efficiently. Combined with error concealment, the new method improves the recovered quality at the decoder by about 0.5-1 dB.
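The inter-MB decision rule described above can be sketched as follows; the function name, inputs, and threshold are illustrative, and the decoder integration is not reproduced:

```python
def is_erroneous_inter_mb(residues, intra_complexity, threshold):
    """Flag an inter-coded macroblock as erroneous when its accumulated
    absolute residue exceeds the intra textural complexity (estimated
    from the motion-compensated reference block) by more than a
    predefined threshold. Names and threshold are illustrative."""
    inter_residue = sum(abs(r) for r in residues)
    return inter_residue > intra_complexity + threshold
```

A flagged macroblock would then be passed to the error concealment stage rather than displayed.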
A New Method of Error Compensation for Numerical Control System
夏蔚军; 吴智铭; 李济顺; 张洛平
2003-01-01
This paper presents a method of rapid machine tool error modeling, separation, and compensation using a grating ruler. A robust modeling procedure for geometric errors is developed, and a fast data processing algorithm is designed by using the error separation technique. After compensation with the new method, the maximum position error of the experimental workbench can be reduced from 400 μm to 15 μm. The experimental results show the effectiveness and accuracy of this method.
Is absolute noninvasive temperature measurement by the Pr[MOE-DO3A] complex feasible.
Hentschel, M; Findeisen, M; Schmidt, W; Frenzel, T; Wlodarczyk, W; Wust, P; Felix, R
2000-02-01
Recently, the feasibility of the praseodymium complex of 10-(2-methoxyethyl)-1,4,7,10-tetraaza-cyclododecane-1,4,7-triacetate (Pr[MOE-DO3A]) for non-invasive temperature measurement via 1H spectroscopy has been demonstrated. In particular, the suitability of the complex for non-invasive temperature measurements, including in vivo spectroscopy without spatial resolution as well as first spectroscopic imaging measurements at low temporal resolution (> or = 4 min) and high temporal resolution (breath hold, approximately 20 s), has been shown. As of today, calibration curves according to the particular experimental conditions are necessary. This work aims to clarify whether the Pr[MOE-DO3A] probe in conjunction with 1H-NMR spectroscopy allows non-invasive absolute temperature measurements with high accuracy. The measurement results from two different representative media, distilled water and human plasma, show a slight but significant dependence of the calibration curves on the surrounding medium. Calibration curves in water and plasma were derived for the temperature dependence of the chemical shift difference (F) between Pr[MOE-DO3A]'s OCH3 and water, with F = -(27.53 +/- 0.04) + (0.125 +/- 0.001) x T and F = -(27.61 +/- 0.02) + (0.129 +/- 0.001) x T, respectively, with F in ppm and T in degrees C. However, the differences are minuscule even for the highest spectral resolution of 0.001 ppm/pt, so that they are indistinguishable under practical conditions. The estimated temperature errors are +/- 0.18 degrees C for water and +/- 0.14 degrees C for plasma, and thus only slightly worse than the measurement accuracy of the fiber-optical temperature probe (+/- 0.1 degrees C). It can be concluded that the results obtained indicate the feasibility of the 1H spectroscopy method in conjunction with the Pr[MOE-DO3A] probe for absolute temperature measurements, with a maximum accuracy of +/- 0.2 degrees C.
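Given the water calibration F = -27.53 + 0.125 x T quoted above (F in ppm, T in degrees C), the absolute temperature follows by inverting the linear fit; a minimal sketch, with the function name as an assumption:

```python
def temperature_from_shift(F, a=-27.53, b=0.125):
    """Invert the linear water calibration F = a + b*T reported in the
    abstract (F = chemical shift difference in ppm, T in degrees C).
    Defaults are the water coefficients; pass a=-27.61, b=0.129 for plasma."""
    return (F - a) / b
```

For example, a measured shift difference of -22.905 ppm in water corresponds to body temperature, 37 degrees C.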
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady state error with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the other. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lesser steady state error than the conventional MCC based adaptive filters.
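For reference, one fixed-step MCC update (correntropy-weighted LMS) looks like the following sketch; the paper's variable step, derived by minimizing the MSD from one iteration to the next, is not reproduced here, and all names are illustrative:

```python
import math

def mcc_lms_step(w, x, d, mu=0.05, sigma=1.0):
    """One maximum-correntropy-criterion (MCC) LMS update.
    w: weight vector, x: input vector, d: desired output.
    The Gaussian kernel exp(-e^2 / (2*sigma^2)) de-weights large
    (impulsive) errors, which is the source of the robustness noted above.
    (Fixed step shown; the paper derives an optimal variable step.)"""
    y = sum(wi * xi for wi, xi in zip(w, x))    # filter output
    e = d - y                                   # instantaneous error
    g = math.exp(-e * e / (2 * sigma * sigma))  # correntropy weight
    return [wi + mu * g * e * xi for wi, xi in zip(w, x)], e
```

Note that as the error grows, the kernel weight g shrinks toward zero, so an impulsive sample barely perturbs the weights, unlike in plain LMS.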
Comparison of available measurements of the absolute fluorescence yield
Rosado, J; Arqueros, F
2010-01-01
The uncertainty in the absolute value of the fluorescence yield is still one of the main contributions to the total error in the reconstruction of the primary energy of ultra-energetic air showers using the fluorescence technique. A significant number of experimental values of the fluorescence yield have been published in recent years; however, reported results are very often given in different units (photons/MeV or photons/m) and for different wavelength intervals. In this work we present a comparison of available results normalized to the value in photons/MeV for the 337 nm band at 800 hPa and 293 K. Possible sources of systematic errors in these measurements are discussed. In particular, the conversion of photons/m to photons/MeV requires an accurate determination of the energy deposited by the electrons in the field of view of the experimental setup. We have calculated the energy deposition for each experiment by means of a detailed Monte Carlo simulation including when possible the geometrical details o...
Reda, I.; Zeng, J.; Scheuch, J.; Hanssen, L.; Wilthan, B.; Myers, D.; Stoffel, T.
2012-03-01
This article describes a method of measuring the absolute outdoor longwave irradiance using an absolute cavity pyrgeometer (ACP), U.S. Patent application no. 13/049,275. The ACP consists of a domeless thermopile pyrgeometer, a gold-plated concentrator, a temperature controller, and data acquisition. The dome was removed from the pyrgeometer to remove errors associated with dome transmittance and the dome correction factor. To avoid thermal convection and wind effect errors resulting from using a domeless thermopile, the gold-plated concentrator was placed above the thermopile. The concentrator is a dual compound parabolic concentrator (CPC) with a 180° view angle to measure the outdoor incoming longwave irradiance from the atmosphere. The incoming irradiance is reflected from the specular gold surface of the CPC and concentrated on the 11 mm diameter of the pyrgeometer's blackened thermopile. The CPC's interior surface design and the resulting cavitation result in a throughput value that was characterized by the National Institute of Standards and Technology. The ACP was installed horizontally outdoors on an aluminum plate connected to the temperature controller to control the pyrgeometer's case temperature. The responsivity of the pyrgeometer's thermopile detector was determined by lowering the case temperature and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The responsivity is then used to calculate the absolute atmospheric longwave irradiance with an uncertainty estimate (U95) of ±3.96 W/m² with traceability to the International System of Units, SI. The measured irradiance was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the Interim World Infrared Standard Group, WISG. A total of 408 readings were collected over three different nights. The calculated irradiance measured by the ACP was 1.5 W/m²
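The responsivity-then-irradiance procedure described above can be sketched as below. The least-squares slope and the sigma*T^4 self-emission term are standard pyrgeometer practice, but the function names and the simple blackbody emission model are assumptions, not the ACP's exact data reduction:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def responsivity(voltages_uV, net_irradiances):
    """Least-squares slope of thermopile voltage (uV) against net
    irradiance (W/m^2), i.e. the detector responsivity, obtained while
    the case temperature is swept."""
    n = len(voltages_uV)
    mx = sum(net_irradiances) / n
    my = sum(voltages_uV) / n
    num = sum((x - mx) * (y - my) for x, y in zip(net_irradiances, voltages_uV))
    den = sum((x - mx) ** 2 for x in net_irradiances)
    return num / den

def incoming_longwave(v_uV, R, t_detector_K):
    """Absolute incoming longwave irradiance: net signal V/R plus the
    irradiance emitted by the detector itself (ideal blackbody assumed)."""
    return v_uV / R + SIGMA * t_detector_K ** 4
```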
Nute, Christine
2014-11-25
Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.
Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu
2015-10-01
Absolute distance measurement systems are of significant interest in the field of metrology; they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement technique: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10^3 times bigger than the change of the optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser, which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.
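In frequency scanning interferometry the measured beat frequency is proportional to the optical path difference, which is why a Doppler shift from vibration maps directly into a distance error; a minimal sketch under this standard relation (the function name and sweep-rate parameterization are assumptions):

```python
C = 299_792_458.0  # speed of light, m/s

def fsi_distance(f_beat, sweep_rate):
    """Distance from the FSI beat frequency:
    f_beat = sweep_rate * (2*d / c)  =>  d = c * f_beat / (2 * sweep_rate),
    with sweep_rate the optical frequency sweep rate in Hz/s.
    Any Doppler shift from target vibration adds directly to f_beat,
    so even tiny motion is scaled into a large apparent distance error."""
    return C * f_beat / (2.0 * sweep_rate)
```

For a 100 GHz/s sweep, a 1 m target produces a beat of roughly 667 Hz, so a Doppler shift of only a few hertz already corresponds to millimetres of apparent distance, consistent with the large amplification factor noted in the abstract.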
Comparison of Prediction-Error-Modelling Criteria
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control.
Absolutely Maximally Entangled states, combinatorial designs and multi-unitary matrices
Goyeneche, Dardo; Latorre, José I; Riera, Arnau; Życzkowski, Karol
2015-01-01
Absolutely Maximally Entangled (AME) states are those multipartite quantum states that carry absolute maximum entanglement in all possible partitions. AME states are known to play a relevant role in multipartite teleportation and in quantum secret sharing, and they provide the basis for novel tensor networks related to holography. We present alternative constructions of AME states and show their link with combinatorial designs. We also analyze a key property of AME states, namely their relation to tensors that can be understood as unitary transformations in each of their bipartitions. We call this property multi-unitarity.
Mackie, Peter; Nellthorp, John; Laird, James
2005-01-01
Demand forecasts form a key input to the economic appraisal. As such, any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues and error types present within demand fore...
Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.
2009-01-01
For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br
Low-cost ultrasonic distance sensor arrays with networked error correction.
Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou
2013-09-05
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes rapidly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
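The mean absolute percentage error quoted above (reduced to 0.01% after three iterations) is the standard metric computed as in this sketch:

```python
def mape(measured, true_values):
    """Mean absolute percentage error (in percent) between measured
    distances and reference values; true values must be nonzero."""
    return 100.0 * sum(abs(m - t) / t
                       for m, t in zip(measured, true_values)) / len(true_values)
```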
Errors of measurement by laser goniometer
Agapov, Mikhail Y.; Bournashev, Milhail N.
2000-11-01
The report is dedicated to research into systematic errors of angle measurement by a dynamic laser goniometer (DLG) on the basis of a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and for their algorithmic compensation. The OE was of the absolute photoelectric angle encoder type with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a cross-calibration method with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor. Then a Fourier analysis of the observed data was made. The research into dynamic errors of angle measurement used the dependence, on angular rate of rotation, of the measured angle between a reference direction assigned by the interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by means of the OE. The obtained results allow algorithmic compensation of the systematic error and, in total, a considerable reduction of the total measurement error.
From Hubble's NGSL to Absolute Fluxes
Heap, Sara R.; Lindler, Don
2012-01-01
Hubble's Next Generation Spectral Library (NGSL) consists of R~1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.
Absolute and relative dosimetry for ELIMED
Cirrone, G. A. P.; Cuttone, G.; Candiano, G.; Carpinelli, M.; Leonora, E.; Lo Presti, D.; Musumarra, A.; Pisciotta, P.; Raffaele, L.; Randazzo, N.; Romano, F.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Cirio, R.; Marchetto, F.; Sacchi, R.; Giordanengo, S.; Monaco, V.
2013-07-01
The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aiming to obtain an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to that required for clinical applications (i.e. of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.
Learning in a unidimensional absolute identification task.
Rouder, Jeffrey N; Morey, Richard D; Cowan, Nelson; Pfaltz, Monique
2004-10-01
We tested whether there is long-term learning in the absolute identification of line lengths. Line lengths are unidimensional stimuli, and there is a common belief that learning of these stimuli quickly reaches a low-level asymptote of about seven items and progresses no more. We show that this is not the case. Our participants served in a 1.5-h session each day for over a week. Although they did not achieve perfect performance, they continued to improve day by day throughout the week and eventually learned to distinguish between 12 and 20 line lengths. These results are in contrast to common characterizations of learning in absolute identification tasks with unidimensional stimuli. We suggest that this learning reflects improvement in short-term processing.
Absolute calibration of TFTR helium proportional counters
Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L. (Princeton Univ., NJ (United States), Plasma Physics Lab.); Barnes, C.W. (Princeton Univ., NJ (United States), Plasma Physics Lab.; Los Alamos National Lab., NM (United States)); Loughlin, M. (Princeton Univ., NJ (United States), Plasma Physics Lab.; JET Joint Undertaking, Abingdon (United Kingdom))
1995-06-01
The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments.
Asteroid absolute magnitudes and slope parameters
Tedesco, Edward F.
1991-01-01
A new listing of absolute magnitudes (H) and slope parameters (G) has been created and published in the Minor Planet Circulars; this same listing will appear in the 1992 Ephemerides of Minor Planets. Unlike previous listings, the values of the current list were derived from fits of data at the V band. All observations were reduced in the same fashion using, where appropriate, a single basis default value of 0.15 for the slope parameter. Distances and phase angles were computed for each observation. The data for 113 asteroids was of sufficiently high quality to permit derivation of their H and G. These improved absolute magnitudes and slope parameters will be used to deduce the most reliable bias-corrected asteroid size-frequency distribution yet made.
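For context, the (H, G) system referenced above predicts the apparent V magnitude from the absolute magnitude H, the slope parameter G, and the viewing geometry via the standard IAU two-parameter phase function; a sketch using the usual coefficients (A1 = 3.33, B1 = 0.63, A2 = 1.87, B2 = 1.22):

```python
import math

def apparent_magnitude(H, G, r, delta, alpha_deg):
    """Predicted V magnitude in the IAU (H, G) system.
    H: absolute magnitude; G: slope parameter (the listing's basis
    default is 0.15); r, delta: heliocentric and geocentric distances
    in au; alpha_deg: solar phase angle in degrees."""
    a = math.radians(alpha_deg)
    phi1 = math.exp(-3.33 * math.tan(a / 2) ** 0.63)
    phi2 = math.exp(-1.87 * math.tan(a / 2) ** 1.22)
    return (H + 5 * math.log10(r * delta)
            - 2.5 * math.log10((1 - G) * phi1 + G * phi2))
```

At zero phase angle and unit distances this reduces to V = H, which is the defining condition for the absolute magnitude.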
Absolute zero and the conquest of cold
Shachtman, Tom
2000-01-01
In a sweeping yet marvelously concise history, Tom Shachtman ushers us into a world in which scientists tease apart the all-important secrets of cold. Readers take an extraordinary trip, starting in the 1600s with an alchemist's air conditioning of Westminster Abbey and scientists' creation of thermometers. Later, while entrepreneurs sold Walden Pond ice to tropical countries -- packed in "high-tech" sawdust -- researchers pursued absolute zero and interpreted their work as romantically as did adventurers to remote regions. Today, playing with ultracold temperatures is one of the hottest frontiers in physics, with scientists creating useful particles Einstein only dreamed of. Tom Shachtman shares a great scientific adventure story and its characters' rich lives in a book that has won a grant from the prestigious Alfred P. Sloan Foundation. Absolute Zero is for everyone who loves history and science history stories, who's eager to explore Nobel Prize-winning physics today, or who has ever sighed with pleasure ...
An absolute measure for a key currency
Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito
It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of the periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and disrupted the economic markets, and has even increased relative to that of the euro and the Japanese yen.
Absolute and relative dosimetry for ELIMED
Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Leonora, E.; Randazzo, N. [INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Presti, D. Lo [INFN-Sezione di Catania, Via Santa Sofia 64, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Raffaele, L. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Cirio, R.; Sacchi, R.; Monaco, V. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino, Italy and Università di Torino, Dipartimento di Fisica, Via P.Giuria, 1 10125 Torino (Italy); Marchetto, F.; Giordanengo, S. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy)
2013-07-26
The definition of detectors, methods, and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aimed at obtaining an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to that required for clinical applications (i.e., of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39 detectors, a Faraday cup, a Secondary Emission Monitor (SEM), and a transmission ionization chamber will be considered, designed, and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.
The absolute differential calculus (calculus of tensors)
Levi-Civita, Tullio
2013-01-01
Written by a towering figure of twentieth-century mathematics, this classic examines the mathematical background necessary for a grasp of relativity theory. Tullio Levi-Civita provides a thorough treatment of the introductory theories that form the basis for discussions of fundamental quadratic forms and absolute differential calculus, and he further explores physical applications. Part one opens with considerations of functional determinants and matrices, advancing to systems of total differential equations, linear partial differential equations, algebraic foundations, and a geometrical intro
Absolute vs. Relative Notion of Wealth Changes
2009-01-01
This paper discusses solutions derived from lottery experiments using two alternative assumptions: that people perceive wealth changes as absolute amounts of money, and that people consider wealth changes as a proportion of some reference value dependent on the context of the problem under consideration. The former assumption leads to the design of Prospect Theory; the latter, to a solution closely resembling the utility function hypothesized by Markowitz (1952B). This paper presents several...
Limitations of absolute current densities derived from the Semel & Skumanich method
(author not listed)
2009-01-01
Semel and Skumanich proposed a method to obtain the absolute electric current density, |Jz|, without resolving the 180° ambiguity in the transverse field directions. The advantage of the method is that the uncertainty in the determination of the ambiguity in the magnetic azimuth is removed. Here, we investigate the limits of the calculation when it is applied to a numerical MHD model. We have found that the combination of changes in the magnetic azimuth with a vanishing horizontal field component leads to errors precisely where electric current densities are often strong. Where errors occur, the calculation underestimates |Jz| by factors of typically 1.2 - 2.0.
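The invariance the abstract relies on can be checked numerically. This is a hedged illustration, not the authors' code: the vertical current density Jz ∝ ∂By/∂x − ∂Bx/∂y simply flips sign under the 180° azimuth ambiguity (Bx, By) → (−Bx, −By), so |Jz| needs no disambiguation. The synthetic field below is purely illustrative.

```python
import numpy as np

def jz(bx, by, dx=1.0, dy=1.0):
    """Vertical current density (up to the constant 1/mu0) via finite differences."""
    dby_dx = np.gradient(by, dx, axis=1)
    dbx_dy = np.gradient(bx, dy, axis=0)
    return dby_dx - dbx_dy

# A synthetic transverse field on a small grid (illustrative values only).
y, x = np.mgrid[0:32, 0:32]
bx = np.sin(0.2 * x) * np.cos(0.1 * y)
by = np.cos(0.15 * x) * np.sin(0.25 * y)

# |Jz| is identical whether or not the azimuth is flipped by 180 degrees.
assert np.allclose(np.abs(jz(bx, by)), np.abs(jz(-bx, -by)))
```

The errors the abstract reports arise where the horizontal field itself vanishes, not from the sign flip, which this sketch shows is harmless.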
Absolute testing of flats in sub-stitching interferometer by rotation-shift method
Jia, Xin; Xu, Fuchao; Xie, Weimin; Li, Yun; Xing, Tingwen
2015-09-01
Most commercially available sub-aperture stitching interferometers measure the surface against a standard lens that produces a reference wavefront, so the precision of the interferometer is generally limited by that standard lens. Higher test accuracy can be achieved by removing the error of the reference surface with an absolute testing method. When the testing accuracy (repeatability and reproducibility) approaches 1 nm, factors beyond the reference surface also affect the measurement accuracy, such as the environment, zoom magnification, stitching precision, tooling and fixtures, and the characteristics of the optical materials. We established a stitching system in a class-1000 cleanroom; it includes a Zygo interferometer and a motion system with a Bilz active isolation system at level VC-F. We review the traditional absolute flat-testing methods and emphasize the rotation-shift method, with which we obtain the profiles of the reference flat and the test flat. The main limitation of the rotation-shift method is tilt error; in the motion system, we keep the tilt error below 4 arcseconds to reduce its effect. To obtain still higher testing accuracy, we analyze the influence of environmental error on the surface-shape measurement accuracy by recording it with Fluke test equipment.
Measurement of absolute gravity acceleration in Firenze
de Angelis, M.; Greco, F.; Pistorio, A.; Poli, N.; Prevedelli, M.; Saccorotti, G.; Sorrentino, F.; Tino, G. M.
2011-01-01
This paper reports the results from the accurate measurement of the acceleration of gravity g taken at two separate premises in the Polo Scientifico of the University of Firenze (Italy). In these laboratories, two separate experiments aiming at measuring the Newtonian constant and testing the Newtonian law at short distances are in progress. Both experiments require independent knowledge of the local value of g. The only available datum, pertaining to the Italian zero-order gravity network, was taken more than 20 years ago at a distance of more than 60 km from the study site. Gravity measurements were conducted using an FG5 absolute gravimeter and accompanied by seismic recordings for evaluating the noise conditions at the site. The absolute accelerations of gravity at the two laboratories are (980 492 160.6 ± 4.0) μGal and (980 492 048.3 ± 3.0) μGal for the European Laboratory for Non-Linear Spectroscopy (LENS) and the Dipartimento di Fisica e Astronomia, respectively. Beyond the two experiments mentioned, the data presented here will serve as a benchmark for any future study requiring accurate knowledge of the absolute value of the acceleration of gravity in the study region.
Glosup, J.G.; Axelrod, M.C.
1996-08-05
The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods of quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach, and we provide basic definitions based on entropy concepts, along with a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough: to the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
Absolute measurement of the $\beta\alpha$ decay of $^{16}$N
We propose to study the $\beta$-decay of $^{16}$N at ISOLDE with the aim of determining the branching ratio for $\beta\alpha$ decay on an absolute scale. There are indications that the previously measured branching ratio is in error by an amount significantly larger than the quoted uncertainty. This limits the precision with which the S-factor of the astrophysically important $^{12}$C($\alpha,\gamma$)$^{16}$O reaction can be determined.
Chang, L W; Chien, P Y; Lee, C T
1999-05-01
A novel method is presented for measuring absolute displacement with a synthesized-wavelength interferometer. The optical phase of the interferometer is simultaneously modulated by a frequency-modulated laser diode and by the optical path-length difference. The error signal originating from the intensity modulation of the source is eliminated by a signal-processing circuit. In addition, a lock-in technique is used to demodulate the envelope of the interferometric signal. The displacement signal is derived by the self-mixing technique.
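The principle behind such instruments can be sketched with the standard synthetic-wavelength relation Λ = λ₁λ₂ / |λ₁ − λ₂|: two nearby optical wavelengths beat to produce a much longer effective wavelength on which absolute (unambiguous) distance can be read. The wavelength values below are hypothetical, not taken from the paper.

```python
# Back-of-envelope sketch of synthesized-wavelength interferometry.
def synthetic_wavelength(lam1, lam2):
    """Synthetic wavelength Lambda = lam1 * lam2 / |lam1 - lam2|."""
    return lam1 * lam2 / abs(lam1 - lam2)

lam1 = 1550.0e-9   # m, hypothetical laser-diode wavelength
lam2 = 1550.5e-9   # m, hypothetical shifted wavelength after frequency modulation
Lam = synthetic_wavelength(lam1, lam2)
print(f"synthetic wavelength = {Lam * 1e3:.3f} mm")   # ~4.8 mm

# A phase phi measured on the synthetic wavelength maps to an absolute
# path-length difference L = Lam * phi / (2 * pi), with an ambiguity range
# of Lam instead of the optical wavelength.
```

The factor-of-thousands stretch from nanometres to millimetres is what makes the measurement "absolute" over a useful range.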
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
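The straight-line law for the continuous case can be checked numerically. A sketch, not the paper's derivation: the entropy-maximizing density for a fixed pth absolute moment is the generalized Gaussian f(x) = p / (2sΓ(1/p)) · exp(−(|x|/s)^p), and rescaling s multiplies the L_p norm by the same factor while adding exactly log(factor) to the differential entropy, i.e. entropy is linear in log‖X‖_p with slope 1.

```python
import math

def gg_entropy(p, s, half_width=60.0, n=200001):
    """Differential entropy of a generalized Gaussian, by trapezoidal rule."""
    c = p / (2.0 * s * math.gamma(1.0 / p))      # normalizing constant
    h, dx = 0.0, 2.0 * half_width / (n - 1)
    for i in range(n):
        x = -half_width + i * dx
        fx = c * math.exp(-(abs(x) / s) ** p)
        if fx > 0.0:                             # skip underflowed tail values
            w = 0.5 if i in (0, n - 1) else 1.0
            h -= w * fx * math.log(fx) * dx
    return h

p = 3.0
h1, h2 = gg_entropy(p, 1.0), gg_entropy(p, 2.5)
# Rescaling s by 2.5 scales the L_p norm by 2.5; the entropy should rise
# by exactly log(2.5), confirming the slope-1 straight line.
assert abs((h2 - h1) - math.log(2.5)) < 1e-4
```

The same scaling argument explains why the closed form exists only in the unconstrained continuous case: constraints on the value set break the scale invariance.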
Duan, Beiping; Zheng, Zhoushun; Cao, Wen
2016-08-01
In this paper, we revisit two spectral approximations, truncated approximation and interpolation, for the Caputo fractional derivative. The two approaches were studied for approximating the Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al., respectively, in their most recent work. For the truncated approximation, the reconsideration partly arises from the difference between the fractional derivative in the R-L sense and in the Caputo sense: the Caputo fractional derivative requires higher regularity of the unknown than the R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown from the index of the Jacobi polynomials, which was not done in the previous work. We also provide a way to choose the index when facing multi-order problems. By using a generalized Hardy inequality, the gap between the weighted Sobolev space involving the Caputo fractional derivative and the classical weighted space is bridged; the optimal projection error is then derived in the non-uniformly Jacobi-weighted Sobolev space, and the maximum absolute error is presented as well. For the interpolation, no analysis of the interpolation error was given in the earlier work. In this paper we establish the interpolation error in the non-uniformly Jacobi-weighted Sobolev space by constructing a fractional inverse inequality. Combined with the collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are provided to illustrate the effectiveness of this algorithm.
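The paper's spectral schemes are not reproduced here, but the object being approximated is easy to check numerically. As a hedged baseline (the standard L1 finite-difference scheme, not the spectral method above), the Caputo derivative of u(t) = t² can be compared against its closed form D^α t² = 2t^(2−α)/Γ(3−α) for 0 < α < 1.

```python
import math

def caputo_l1(u, t, alpha):
    """L1 approximation of the Caputo derivative of u at t = t[-1] (uniform grid)."""
    n = len(t) - 1
    h = t[1] - t[0]
    acc = 0.0
    for k in range(n):        # piecewise-constant u' on each [t_k, t_{k+1}]
        b = (n - k) ** (1 - alpha) - (n - k - 1) ** (1 - alpha)
        acc += b * (u[k + 1] - u[k])
    return acc / (math.gamma(2 - alpha) * h ** alpha)

alpha, N = 0.5, 2000
t = [i / N for i in range(N + 1)]           # grid on [0, 1]
u = [ti ** 2 for ti in t]
approx = caputo_l1(u, t, alpha)
exact = 2.0 / math.gamma(3 - alpha)         # closed form at t = 1
assert abs(approx - exact) < 1e-3           # O(h^(2-alpha)) accuracy
```

The O(h^(2−α)) convergence of this low-order scheme is exactly the limitation that motivates spectral approaches for smooth solutions.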
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing of large-volume equipment requires industrial robots with high absolute accuracy in positioning and orientation steering control. Conventional robots mainly employ offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so it is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with high accuracy within a large workspace by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degree-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and the robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is substantially enhanced.
Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.
Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng
2017-04-17
added with Gaussian distributed noise. Meanwhile, clinical breast ultrasound images are used to visually evaluate the effectiveness of the method. To examine the performance, comparison tests between the proposed RSBF and six state-of-the-art methods for ultrasound speckle removal are performed on simulated ultrasound images with various noise and speckle levels. The results of the proposed RSBF are satisfying, since the Gaussian noise and the Rayleigh speckle are greatly suppressed. The proposed method can improve the SNRs of the enhanced images to nearly 15 and 13 dB for images corrupted by speckle and for images contaminated by both speckle and noise, respectively, under various SNR levels. The RSBF is effective in enhancing edges while smoothing the speckle and noise in clinical ultrasound images. In the comparison experiments, the proposed method demonstrates its superiority in accuracy and robustness for denoising and edge preservation under various levels of noise and speckle, in terms of visual quality as well as numeric metrics such as peak signal-to-noise ratio, SNR, and root mean squared error. The experimental results show that the proposed method is effective for removing the speckle and the background noise in ultrasound images. The main reason is that it performs a "detect and replace" two-step mechanism. The advantages of the proposed RSBF lie in two aspects. First, each central pixel is classified as noise, speckle, or noise-free texture according to the absolute difference between the target pixel and the reference median. Subsequently, the Rayleigh-maximum-likelihood filter and the bilateral filter are switched in to eliminate speckle and noise, respectively, while the noise-free pixels are left unaltered. Therefore, it achieves better accuracy and robustness than the traditional methods. Generally, these traits suggest that the proposed RSBF would have significant clinical application.
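The edge-preserving half of the mechanism can be illustrated with a minimal 1-D bilateral filter. This is a generic sketch (not the paper's RSBF, and with illustrative parameters): each sample is replaced by a spatial- and range-weighted average of its neighbours, so small fluctuations are smoothed while large jumps (edges) are preserved.

```python
import math

def bilateral_1d(x, radius=3, sigma_s=2.0, sigma_r=0.3):
    """Edge-preserving smoothing: weights decay with distance AND value difference."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = math.exp(-((j - i) ** 2) / (2 * sigma_s ** 2)) \
              * math.exp(-((x[j] - x[i]) ** 2) / (2 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

# Noisy step edge: 0-level then 1-level, with a deterministic +/-0.05 ripple.
clean = [0.0] * 20 + [1.0] * 20
noisy = [c + (0.05 if i % 2 else -0.05) for i, c in enumerate(clean)]
smoothed = bilateral_1d(noisy)

err_before = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_after = sum((a - b) ** 2 for a, b in zip(smoothed, clean))
assert err_after < err_before                    # ripple suppressed
assert abs(smoothed[25] - smoothed[15]) > 0.8    # step edge preserved
```

The range kernel (sigma_r) is what keeps the step sharp: samples across the jump differ by ~1.0 and receive near-zero weight, so they are never averaged together.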
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
A hardware error estimate for floating-point computations
Lang, Tomás; Bruguera, Javier D.
2008-08-01
, this type has some anomalies that make it difficult to use. We propose a scaled absolute error, whose value is close to the relative error but does not have these anomalies. The main cost issue might be the additional storage and the narrow datapath required for the estimate computation. We evaluate our proposal and compare it with other alternatives. We conclude that the proposed approach might be beneficial.
Achieving Climate Change Absolute Accuracy in Orbit
Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.
Probabilistic quantum error correction
Fern, Jesse; Terilla, John
2002-01-01
There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
Absolute calibration of the Auger fluorescence detectors
Bauleo, P.; Brack, J.; Garrard, L.; Harton, J.; Knapik, R.; Meyhandan, R.; Rovero, A.C.; /Buenos Aires, IAFE; Tamashiro, A.; Warner, D.
2005-07-01
Absolute calibration of the Pierre Auger Observatory fluorescence detectors uses a light source at the telescope aperture. The technique accounts for the combined effects of all detector components in a single measurement. The calibrated 2.5 m diameter light source fills the aperture, providing uniform illumination to each pixel. The known flux from the light source and the response of the acquisition system give the required calibration for each pixel. In the lab, light source uniformity is studied using CCD images and the intensity is measured relative to NIST-calibrated photodiodes. Overall uncertainties are presently 12%, and are dominated by systematics.
Absolute quantification of myocardial blood flow.
Yoshinaga, Keiichiro; Manabe, Osamu; Tamaki, Nagara
2016-07-21
With the increasing availability of positron emission tomography (PET) myocardial perfusion imaging, the absolute quantification of myocardial blood flow (MBF) has become popular in clinical settings. Quantitative MBF provides an important additional diagnostic or prognostic information over conventional visual assessment. The success of MBF quantification using PET/computed tomography (CT) has increased the demand for this quantitative diagnostic approach to be more accessible. In this regard, MBF quantification approaches have been developed using several other diagnostic imaging modalities including single-photon emission computed tomography, CT, and cardiac magnetic resonance. This review will address the clinical aspects of PET MBF quantification and the new approaches to MBF quantification.
Tingting (compiler)
2007-01-01
ABSOLUT has long had a close association with creativity. From Andy Warhol's ABSOLUT WARHOL to the present, more than 400 masters from different creative fields have contributed their finest work to ABSOLUT's treasury of contemporary art. ABSOLUT's creativity seems inexhaustible, and the ongoing series of works has never failed to deliver a surprise.
Absolute Priority for a Vehicle in VANET
Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad
In today's world, traffic jams waste hundreds of hours of our lives. This has led many researchers to attack the problem with the idea of Intelligent Transportation Systems. For some applications, such as a travelling ambulance, it is important to reduce delay by even a second. In this paper, we propose a completely infrastructure-less approach for finding the shortest path and controlling traffic lights to provide absolute priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling time. We then simulate our proposed protocol and compare it with a centrally controlled traffic-light system.
Musical Activity Tunes Up Absolute Pitch Ability
Dohn, Anders; Garza-Villarreal, Eduardo A.; Ribe, Lars Riisgaard
2014-01-01
Absolute pitch (AP) is the ability to identify or produce pitches of musical tones without an external reference. Active AP (i.e., pitch production or pitch adjustment) and passive AP (i.e., pitch identification) are considered to not necessarily coincide, although no study has properly compared...... that APs generally undershoot when adjusting musical pitch, a tendency that decreases when musical activity increases. Finally, APs are less accurate when adjusting the pitch to black key targets than to white key targets. Hence, AP ability may be partly practice-dependent and we speculate that APs may...
Development of an absolute neutron dosimeter
Acevedo, C; Birstein, L; Loyola, H [Section de Desarrollos Innovativos, Comision Chilena de EnergIa Nuclear (CCHEN), Casilla 188-D, Santiago (Chile)], E-mail: lbirstei@cchen.cl
2008-11-01
An Absolute Neutron Dosimeter was developed to be used as a calibration standard for the Radiation Metrology Laboratory at CCHEN. The main component of the dosimeter is a proportional counter of cylindrical shape, with polyethylene walls and ethylene gas in its interior. It includes a cage-shaped arrangement of graphite bars that operates as the proportional counter's cathode and a tungsten wire 25 μm in diameter as the anode. Results of a Monte Carlo model of the dosimeter's operation and results of tests and measurements performed with a radioactive source are presented.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
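The least-bias idea the abstract refers to is usually introduced with Jaynes' dice problem, which is a textbook illustration rather than an example from the article: among all distributions on faces 1..6 with a prescribed mean of 4.5, the maximum-entropy distribution has the exponential-family form p_i ∝ exp(λi), with λ fixed by the mean constraint.

```python
import math

def dice_maxent(target_mean, lo=-10.0, hi=10.0):
    """Maximum-entropy distribution on {1..6} with the given mean (Jaynes' dice)."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z
    for _ in range(200):                  # bisection on the monotone mean(lam)
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = dice_maxent(4.5)
assert abs(sum(i * pi for i, pi in zip(range(1, 7), p)) - 4.5) < 1e-9
assert all(p[i] < p[i + 1] for i in range(5))   # mass shifts smoothly toward 6
```

Any other distribution matching the mean encodes extra, unjustified structure; the exponential form is the unique one that does not, which is the sense of "least bias" used throughout maximum-entropy applications.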
Hong, Soon Gi; Son, Sang Joon; Moon, Joon Gi; KIm, Bo Kyum; Lee, Je Hee [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)
2016-12-15
To determine, by analyzing MR images, whether a treatment plan for the prostate, and for the rectum and bladder, organs subject to large interfraction errors, satisfies dosimetric limits without adaptive planning. This study was based on 5 prostate cancer patients who received IMRT (total dose: 70 Gy) using the ViewRay MRIdian system (ViewRay Inc., Cleveland, OH, USA). The treatment plans were made on the same CT images to compare plan quality with respect to adaptive planning, and Eclipse (Ver. 10.0.42, Varian, USA) was used. After registering the 5 treatment MR images to the planning CT images to analyze interfraction organ changes, we measured the dose-volume histogram and the changes of the absolute volume of each organ by applying the first treatment plan to each image. Over 5 fractions, the total dose for the PTV was V36.25Gy ≥ 95%. To confirm that the prescription dose satisfies the SBRT dose limits for prostate, we measured V100%, V95%, and V90% for the CTV and V100%, V90%, V80%, and V50% for the rectum and bladder. All average dose values of the CTV, rectum, and bladder satisfied the dose limits, but in more than one instance a value in an individual treatment image exceeded a dose limit. Comparing the MR image of the first treatment plan with those of the subsequent fractions, the absolute-volume differences were at most a factor of 1.72 for the rectum and a factor of 2.0 for the bladder. For the rectum, the expected values were planned under the dose limit, on average V100% = 0.32%, V90% = 3.33%, V80% = 7.71%, and V50% = 23.55% in the first treatment plan. The average absolute rectal volume in the first plan was 117.9 cc, whereas the average volume actually treated was 79.2 cc. For the CTV, the 100% prescription-dose coverage was not satisfied, even though the PTV margin was 5 mm, because of the variation of rectal and bladder volume. There was no case that the value from average
ABSOLUTE STABILITY OF GENERAL LURIE TYPE INDIRECT CONTROL SYSTEMS
甘作新; 葛渭高; 赵素霞; 仵永先
2001-01-01
In this paper, by introducing a new concept of absolute stability for a certain argument, necessary and sufficient conditions for absolute stability of general Lurie indirect control systems are obtained, and some practical sufficient conditions are also given.
Absolute Orientation Based on Distance Kernel Functions
Yanbiao Sun
2016-03-01
The classical absolute orientation method is capable of transforming tie points (TPs) from a local coordinate system to a global (geodetic) coordinate system. The method is based on a single set of similarity transformation parameters estimated by minimizing the total difference between all ground control points (GCPs) and the fitted points. Nevertheless, it often yields a transformation with poor accuracy, especially in large-scale cases. To address this problem, this study proposes a novel absolute orientation method based on distance kernel functions, in which several sets of similarity transformation parameters, instead of only one, are calculated. When estimating the similarity transformation parameters for a TP via the iterative solution of a non-linear least squares problem, we assign larger weighting matrices to the GCPs that lie close to that point. The weighting matrices are evaluated with a distance kernel function of the distances between the GCPs and the TP; in this study we use the exponential function and the Gaussian function as distance kernels. To validate and verify the proposed method, six synthetic and two real datasets were tested. The accuracy was significantly improved by the proposed method compared to the classical method, although a higher computational complexity is incurred.
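The per-point weighting idea can be sketched in a few lines. As an illustrative simplification (not the paper's implementation, which solves a non-linear 3-D problem iteratively), the example below uses a 2-D similarity transform, which is linear in its parameters, together with a Gaussian distance kernel; all function names and the kernel width `sigma` are hypothetical.

```python
import numpy as np

def gaussian_weights(tp, gcps, sigma):
    """Distance-kernel weights: GCPs close to the tie point get larger weights."""
    d = np.linalg.norm(gcps - tp, axis=1)
    return np.exp(-(d / sigma) ** 2)

def similarity_2d(src, dst, w):
    """Weighted LSQ for the 2-D similarity x' = a*x - b*y + tx, y' = b*x + a*y + ty."""
    A, L, W = [], [], []
    for (x, y), (xp, yp), wi in zip(src, dst, w):
        A.append([x, -y, 1.0, 0.0]); L.append(xp); W.append(wi)
        A.append([y,  x, 0.0, 1.0]); L.append(yp); W.append(wi)
    A, L, W = np.array(A), np.array(L), np.array(W)
    Aw = A * W[:, None]                       # apply weights row-wise
    # weighted normal equations: (A^T W A) p = A^T W L
    return np.linalg.solve(Aw.T @ A, Aw.T @ L)  # p = (a, b, tx, ty)

def transform(p, pt):
    a, b, tx, ty = p
    x, y = pt
    return np.array([a * x - b * y + tx, b * x + a * y + ty])
```

The departure from the classical method is that `similarity_2d` is called once per tie point, each time with weights computed from that point's own distances to the GCPs, so every TP gets its own locally tuned parameter set.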
Absolute stereochemistry of altersolanol A and alterporriols.
Kanamaru, Saki; Honma, Miho; Murakami, Takanori; Tsushima, Taro; Kudo, Shinji; Tanaka, Kazuaki; Nihei, Ken-Ichi; Nehira, Tatsuo; Hashimoto, Masaru
2012-02-01
The absolute stereochemistry of altersolanol A (1) was established by observing a positive exciton couplet in the circular dichroism (CD) spectrum of the C3,C4-O-bis(2-naphthoyl) derivative 10 and by chemical correlations with known compound 8. Before the discussion, the relative stereochemistry of 1 was confirmed by X-ray crystallographic analysis. The shielding effect at C7'-OMe group by C1-O-benzoylation established the relative stereochemical relationship between the C8-C8' axial bonding and the C1-C4/C1'-C4' polyol moieties of alterporriols E (3), an atropisomer of the C8-C8' dimer of 1. As 3 could be obtained by dimerization of 1 in vitro, the absolute configuration of its central chirality elements (C1-C4) must be identical to those of 1. Spectral comparison between the experimental and theoretical CD spectra supported the above conclusion. Axial stereochemistry of novel C4-O-deoxy dimeric derivatives, alterporriols F (4) and G (5), were also revealed by comparison of their CD spectra to those of 2 and 3.
Absolute Electron Extraction Efficiency of Liquid Xenon
Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter
2016-03-01
Dual phase liquid/gas xenon time projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not absolute, assuming efficiency plateaus at 100%. I will present an absolute EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.
Absolute Generalized Oscillator Strength Profiles of Rydberg Transitions in C2F6
FAN Xiao-Wei(樊晓伟); LU Shan(卢杉); ZHANG Xian-Zhou(张现周); K.T.Leung
2004-01-01
Absolute generalized oscillator strengths (GOSs) for the two Rydberg excitations at 12.1 eV and 13.5 eV in C2F6 have been determined as functions of energy loss and momentum transfer (K) at an impact energy of 2.5 keV. The GOS profiles for these two Rydberg transitions to the 3p Rydberg orbital have the characteristic dipole-dominated shapes with a strong maximum at K = 0.
Comment on "Measurement of the speed-of-light perturbation of free-fall absolute gravimeters"
Nagornyi, V D
2014-01-01
The paper (Rothleitner et al. 2014 Metrologia 51, L9) reports on the measurement of the speed-of-light perturbation in absolute gravimeters. The conclusion that the perturbation reaches only 2/3 of the commonly accepted value violates the fundamental limitation on the maximum speed of information transfer. The erroneous conclusion results from unaccounted parasitic perturbations, some of which are obvious from the report.
Li, Beiwen; Liu, Ziping; Zhang, Song
2016-10-03
We propose a hybrid computational framework to reduce motion-induced measurement error by combining the Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 is to extract continuous relative phase maps for each isolated object with single-shot FTP method and spatial phase unwrapping; Step 2 is to obtain an absolute phase map of the entire scene using PSP method, albeit motion-induced errors exist on the extracted absolute phase map; and Step 3 is to shift the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated rapidly moving objects.
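Step 3 amounts to shifting each object's continuous relative phase map by an integer multiple of 2π chosen from the (error-prone) PSP absolute phase. A minimal sketch, assuming both maps cover the same object pixels; the function name is hypothetical:

```python
import numpy as np

def shift_to_absolute(phi_rel, phi_abs_noisy):
    """Promote a continuous relative phase map to an absolute one (Step 3 sketch).

    phi_rel       : relative phase of one isolated object (unwrapped single-shot FTP)
    phi_abs_noisy : PSP absolute phase over the same pixels, motion errors included

    The rigid offset between the two maps is an integer multiple of 2*pi;
    averaging over all pixels of the object suppresses the motion-induced errors
    before rounding.
    """
    k = np.round(np.mean(phi_abs_noisy - phi_rel) / (2 * np.pi))
    return phi_rel + 2 * np.pi * k
```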
Correction for quadrature errors
Netterstrøm, A.; Christensen, Erik Lintz
1994-01-01
In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...
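The receiver-side errors named above admit a standard correction. The sketch below assumes the common textbook model of a gain-imbalanced, phase-skewed quadrature channel, not necessarily the exact scheme of this paper:

```python
import numpy as np

def correct_iq(i, q, gain, phase):
    """Undo receiver gain imbalance and quadrature phase error.

    Assumed error model (textbook form, hypothetical here):
        i = cos(theta)
        q = gain * sin(theta + phase)
    Expanding sin(theta + phase) and solving for sin(theta) gives the fix.
    """
    q_fixed = (q / gain - i * np.sin(phase)) / np.cos(phase)
    return i, q_fixed
```

In practice `gain` and `phase` would first be estimated from a calibration tone; the correction itself is then a per-sample linear operation.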
1998-01-01
To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.
ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL
1994-01-01
Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)
Errors on errors - Estimating cosmological parameter covariance
Joachimi, Benjamin
2014-01-01
Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif
2012-04-01
Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.
A Conceptual Approach to Absolute Value Equations and Inequalities
Ellis, Mark W.; Bryson, Janet L.
2011-01-01
The absolute value learning objective in high school mathematics requires students to solve increasingly complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…
Invariant and Absolute Invariant Means of Double Sequences
Abdullah Alotaibi
2012-01-01
We examine some properties of the invariant mean, define the concepts of strong σ-convergence and absolute σ-convergence for double sequences, and determine the associated sublinear functionals. We also define the absolute invariant mean through which the space of absolutely σ-convergent double sequences is characterized.
PV Maximum Power-Point Tracking by Using Artificial Neural Network
Farzad Sedaghati; Ali Nahavandi; Mohammad Ali Badamchizadeh; Sehraneh Ghaemi; Mehdi Abedinpour Fallah
2012-01-01
In this paper, the use of an artificial neural network (ANN) for tracking the maximum power point is discussed. The error back-propagation method is used to train the neural network, which tracks the maximum power point quickly and precisely. In this method the neural network specifies the reference voltage of the maximum power point under different atmospheric conditions. By properly controlling a dc-dc boost converter, tracking of the maximum power point is feasible. To verify...
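The training loop described above can be sketched with a tiny network. Everything below, the 2-4-1 architecture, the synthetic target mapping (irradiance, temperature) to a reference voltage, and all constants, is an illustrative placeholder, not the paper's network or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny 2-4-1 MLP trained by error back-propagation
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def vref_true(g, t):
    # synthetic MPP reference voltage: rises with irradiance, falls with temperature
    return 0.6 * g - 0.2 * t

# training data: normalized irradiance and temperature in [0, 1]
X = rng.uniform(0, 1, (200, 2))
y = vref_true(X[:, :1], X[:, 1:2])

lr = 0.2
for _ in range(10000):
    h, out = forward(X)
    err = out - y                       # gradient of 0.5 * squared error
    dW2 = h.T @ err / len(X)
    db2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)      # back-propagate through tanh
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

Once trained, a single `forward` call per control cycle yields the reference voltage that the boost-converter controller then regulates toward.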
Rapid rotators revisited: absolute dimensions of KOI-13
Howarth, Ian D.; Morello, Giuseppe
2017-09-01
We analyse Kepler light-curves of the exoplanet Kepler Object of Interest no. 13b (KOI-13b) transiting its moderately rapidly rotating (gravity-darkened) parent star. A physical model, with minimal ad hoc free parameters, reproduces the time-averaged light-curve at the ∼10 parts per million level. We demonstrate that this Roche-model solution allows the absolute dimensions of the system to be determined from the star's projected equatorial rotation speed, ve sin i*, without any additional assumptions; we find a planetary radius RP = (1.33 ± 0.05) R♃, stellar polar radius Rp★ = (1.55 ± 0.06) R⊙, combined mass M* + MP( ≃ M*) = (1.47 ± 0.17) M⊙ and distance d ≃ (370 ± 25) pc, where the errors are dominated by uncertainties in relative flux contribution of the visual-binary companion KOI-13B. The implied stellar rotation period is within ∼5 per cent of the non-orbital, 25.43-hr signal found in the Kepler photometry. We show that the model accurately reproduces independent tomographic observations, and yields an offset between orbital and stellar-rotation angular-momentum vectors of 60.25° ± 0.05°.
Einstein's Special Theory of Relativity Is Absolutely Wrong
Theofilos, George
2000-11-01
One of the greatest frauds perpetrated on mankind is the Special Theory of Relativity. Relativity is like the Leaning Tower of Pisa, which has a perfect structure, but the foundation is sitting on a swamp. The basis of relativity is the velocity of light, but "c" does not give a true description of light. The missing factor is frequency. There are several characteristics of a photon, two of which are: that it travels at the speed of light in any moving frame, and that it has a frequency. This paper describes a proof of Einstein's error by applying a frequency to the velocity of light and then deriving a red shift equation, which is exactly the same for low velocities as the standard equation and close to Einstein's erroneous equation for high velocities. There is a 5to.9 the velocity of light. But, like I said, I believe relativity is wrong and it takes a simple experiment to prove who is correct. The modified equation of light is then applied to the basis of special relativity, showing where relativity is absolutely wrong.
Using absolute gravimeter data to determine vertical gravity gradients
Robertson, D.S.
2001-01-01
The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
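A simplified single-drop sketch of the idea follows (the paper's method combines all drops of a site occupation into one solution and adds system-response terms, both omitted here). The assumed trajectory is the standard series form with a linear vertical gradient gamma, which is a quartic in time, so a polynomial fit plus a short fixed-point iteration recovers the physical parameters:

```python
import numpy as np

def fit_drop(t, x, n_iter=5):
    """Estimate x0, v0, g and the vertical gradient gamma from one drop.

    Assumed model (series solution for free fall in a linear gradient):
        x(t) = x0 + v0*t + g*t^2/2 + gamma*(x0*t^2/2 + v0*t^3/6 + g*t^4/24)
    A quartic fit gives coefficients c0..c4 with
        c0 = x0, c1 = v0, c2 = g/2 + gamma*x0/2, c3 = gamma*v0/6, c4 = gamma*g/24,
    and a few fixed-point iterations untangle gamma from g.
    """
    c4, c3, c2, c1, c0 = np.polyfit(t, x, 4)
    x0, v0, g, gamma = c0, c1, 2.0 * c2, 0.0
    for _ in range(n_iter):
        gamma = 24.0 * c4 / g
        g = 2.0 * (c2 - 0.5 * gamma * x0)
    return x0, v0, g, gamma
```

With a realistic gradient (~3e-6 s^-2) the quartic term is tiny over a short drop, which is exactly why, as the abstract notes, combining many drops and handling system response carefully matters in practice.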
Forecast of absolute methane emissions of mining areas
Krause, E.; Lukowicz, K.; Cybulski, K. [Central Mining Institute (GIG), Katowice (Poland)
2001-07-01
In recent years, forecasts of methane emissions into mine workings have shown serious errors, with the differences between predicted and actual methane emissions exceeding tolerance limits. The forecasting methods developed in the 1970s assumed in their design longwall outputs of 500 Mg/day or 750 Mg/day. The ongoing restructuring of the Polish mining industry, including the growth of output concentration, has contributed to a radical increase of methane emissions into the workings surrounding active longwalls. Given this significant increase, methane emission forecasting has acquired special importance in extraction planning for gassy mines. At the Experimental Mine 'Barbara' of the Central Mining Institute, a forecasting method called the dynamic forecast of longwall absolute methane emissions was developed in 1998-1999 and issued in 2000 in the form of a technical guide. The method has been verified in six hard coal mines. The paper presents the assumptions and an outline of the forecasting method. 8 refs., 3 figs.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
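As a quick numerical check, the maximum-tension limit F_max = c^4/4G quoted above evaluates to about 3 × 10^43 N:

```python
# numerical value of the conjectured maximum force/tension in general relativity
c = 299_792_458.0   # speed of light, m/s (exact by definition)
G = 6.674e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2 (approx.)

F_max = c**4 / (4 * G)
print(f"F_max ≈ {F_max:.2e} N")   # ≈ 3.0e43 N
```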
Moore, Don A.; Klein, William M. P.
2008-01-01
Which matters more--beliefs about absolute ability or ability relative to others? This study set out to compare the effects of such beliefs on satisfaction with performance, self-evaluations, and bets on future performance. In Experiment 1, undergraduate participants were told they had answered 20% correct, 80% correct, or were not given their…
Study of thin-film resistor resistance error
Spirin V. G.
2009-10-01
A relationship is studied between the thin-film resistor resistance error and the misalignment of the mask with the substrate conductive layer at the second photolithography stage, for a thin-film resistor design in which the resistive element does not overlap the conductor pads. The error is at a maximum when the resistor aspect ratio is equal to 1.0.
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, results in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Errors in Radiologic Reporting
Esmaeel Shokrollahi
2010-05-01
Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in several ways: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive; or descriptive, interpretative and decision-related. Perceptive errors comprise false positives and false negatives (non-identification or erroneous identification); cognitive errors may be knowledge-based or psychological.
Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca
2015-09-01
Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies are reported to occur in 2-20% of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and, finally, neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately, in a timely fashion, and directly with the treatment team.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing … symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes…
How is an absolute democracy possible?
Joanna Bednarek
2011-01-01
In the last part of the Empire trilogy, Commonwealth, Negri and Hardt ask about the possibility of the self-governance of the multitude. In answering, they argue that absolute democracy, understood as the political articulation of the multitude that does not entail its unification (the construction of a people), is possible. As Negri states, this way of thinking about political articulation is rooted in the tradition of democratic materialism and constitutes the alternative to the dominant current of modern political philosophy, which identifies political power with sovereignty. The multitude organizes itself politically by means of the constitutive power, identical with the ontological creativity or productivity of the multitude. To state the problem of political organization is to state the problem of class composition: political democracy is at the same time economic democracy.
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
WHY DOES LEIBNIZ NEED ABSOLUTE TIME?
NICOLÁS VAUGHAN C.
2007-08-01
In this paper I bring together two characteristically Leibnizian doctrines: the doctrine of relational and ideal time, and the doctrine of pre-established harmony. I will argue that, if every substance is necessarily connected with every other, then it makes no sense to deny absolute and real time. In the first section, I describe Newton's and Clarke's conception of absolute time; in the second section, I consider Leibniz's critique of that conception, on which he bases his ideal and relational doctrine of time. In the third section I look briefly at Leibniz's mature monadic metaphysics, taking special account of his doctrine of pre-established harmony. In the last section, I suggest that there is an irreconcilable tension between these two doctrines.
CHARMM-GUI Ligand Binder for absolute binding free energy calculations and its application.
Jo, Sunhwan; Jiang, Wei; Lee, Hui Sun; Roux, Benoît; Im, Wonpil
2013-01-28
Advanced free energy perturbation molecular dynamics (FEP/MD) simulation methods are available to accurately calculate absolute binding free energies of protein-ligand complexes. However, these methods rely on several sophisticated command scripts implementing various biasing energy restraints to enhance the convergence of the FEP/MD calculations, which must all be handled properly to yield correct results. Here, we present a user-friendly Web interface, CHARMM-GUI Ligand Binder (http://www.charmm-gui.org/input/gbinding), to provide standardized CHARMM input files for calculations of absolute binding free energies using the FEP/MD simulations. A number of features are implemented to conveniently set up the FEP/MD simulations in highly customizable manners, thereby permitting an accelerated throughput of this important class of computations while decreasing the possibility of human errors. The interface and a series of input files generated by the interface are tested with illustrative calculations of absolute binding free energies of three nonpolar aromatic ligands to the L99A mutant of T4 lysozyme and three FK506-related ligands to FKBP12. Statistical errors within individual calculations are found to be small (~1 kcal/mol), and the calculated binding free energies generally agree well with the experimental measurements and the previous computational studies (within ~2 kcal/mol). Therefore, CHARMM-GUI Ligand Binder provides a convenient and reliable way to set up the ligand binding free energy calculations and can be applicable to pharmaceutically important protein-ligand systems.
Parry, Christopher; Blonquist, J Mark; Bugbee, Bruce
2014-11-01
In situ optical meters are widely used to estimate leaf chlorophyll concentration, but non-uniform chlorophyll distribution causes optical measurements to vary widely among species for the same chlorophyll concentration. Over 30 studies have sought to quantify the in situ/in vitro (optical/absolute) relationship, but neither chlorophyll extraction nor measurement techniques for in vitro analysis have been consistent among studies. Here we: (1) review standard procedures for measurement of chlorophyll; (2) estimate the error associated with non-standard procedures; and (3) implement the most accurate methods to provide equations for conversion of optical to absolute chlorophyll for 22 species grown in multiple environments. Tests of five Minolta (model SPAD-502) and 25 Opti-Sciences (model CCM-200) meters, manufactured from 1992 to 2013, indicate that differences among replicate models are less than 5%. We thus developed equations for converting between units from these meter types. There was no significant effect of environment on the optical/absolute chlorophyll relationship. We derive the theoretical relationship between optical transmission ratios and absolute chlorophyll concentration and show how non-uniform distribution among species causes a variable, non-linear response. These results link in situ optical measurements with in vitro chlorophyll concentration and provide insight to strategies for radiation capture among diverse species.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse, however, does not hold, although for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin, E-mail: richard@beares.net [Monash Centre for Astrophysics, Monash University, Clayton, Victoria 3800 (Australia)
2014-12-20
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers either be brutally honest about the uncertainty of M estimates or find a way to decrease their influence on the estimated hazard.
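The extreme-value reasoning described in the abstract can be made concrete. Below is a minimal sketch (not the authors' model; the occurrence rate, b-value, and magnitude bounds are hypothetical) of the probability distribution of the maximum magnitude in a future time interval, assuming Poisson occurrences with magnitudes drawn from a truncated Gutenberg-Richter law:

```python
import math

def gr_cdf(m, m_min=5.0, m_max=9.0, b=1.0):
    """Truncated Gutenberg-Richter CDF for magnitudes in [m_min, m_max]."""
    beta = b * math.log(10.0)
    num = 1.0 - math.exp(-beta * (m - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return min(max(num / den, 0.0), 1.0)

def p_max_leq(m, rate=2.0, years=50.0, **gr):
    """P(maximum magnitude over `years` is <= m) for Poisson occurrences
    with rate `rate` events/year above m_min: exp(-rate*T*(1 - F(m)))."""
    return math.exp(-rate * years * (1.0 - gr_cdf(m, **gr)))

for m in (6.0, 7.0, 8.0, 9.0):
    print(m, round(p_max_leq(m), 4))
```

The distribution saturates at the assumed upper bound, which is exactly why testing a reported M estimate requires either very long observation windows or events near the bound.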
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Inpatients’ medical prescription errors
Aline Melo Santos Silva
2009-09-01
Objective: To identify and quantify the most frequent errors in inpatients' medical prescriptions. Methods: A survey of prescription errors was performed on inpatients' medical prescriptions from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, involving the healthcare team as a whole. Among the 16 types of errors detected, the most frequent were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); wrong transcriptions to the information system (45 cases, 12.4%); duplicate drugs (30 cases, 8.3%); doses higher than recommended (24 events, 6.6%); and prescriptions with an allergy indicated but not specified (29 cases, 8.0%). Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analysis before preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves the inpatient's safety and the success of the prescribed therapy.
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in a multi-platform multi-sensor tracking system. The unobservability problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservability problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.
Near-infrared absolute magnitudes of Type Ia Supernovae
Avelino, Arturo; Friedman, Andrew S.; Mandel, Kaisey; Kirshner, Robert; Challis, Peter
2017-01-01
Light curves of Type Ia supernovae (SNe Ia) in the near infrared (NIR) exhibit low dispersion in their peak luminosities and are less vulnerable to extinction by interstellar dust in their host galaxies. The increasing number of high quality NIR SN Ia light curves, including the recent CfAIR2 sample obtained with PAIRITEL, provides updated evidence for their utility as standard candles for cosmology. Using NIR YJHKs light curves of ~150 nearby SNe Ia from the CfAIR2 and CSP samples, and from the literature, we determine the mean value and dispersion of the absolute magnitude in the range from −10 to 50 rest-frame days after the maximum luminosity in the B band. We present the mean light-curve templates and Hubble diagram for the YJHKs bands. This work contributes to a firm local anchor for supernova cosmology studies in the NIR, which will help to reduce the systematic uncertainties due to host galaxy dust present in optical-only studies. This research is supported by NSF grants AST-156854, AST-1211196, Fundacion Mexico en Harvard, and CONACyT.
Contouring error compensation on a micro coordinate measuring machine
Fan, Kuang-Chao; Wang, Hung-Yu; Ye, Jyun-Kuan
2011-12-01
In recent years, three-dimensional measurements in nanotechnology research have received great attention worldwide. Given the demand for high accuracy, error compensation of the measuring machine is very important. In this study, a high-precision micro-CMM (coordinate measuring machine) has been developed, composed of a coplanar stage for reducing the Abbé error in the vertical direction, a linear diffraction grating interferometer (LDGI) as the position feedback sensor with nanometer resolution, and ultrasonic motors for position control. This paper presents the error compensation strategy, covering both "home accuracy" and "position accuracy" in both axes. For home error compensation, we utilize a commercial DVD pick-up head and its S-curve principle to accurately locate the origin of each axis. For positioning error compensation, the absolute positions relative to the home are calibrated by laser interferometer and an error budget table is stored for feedforward error compensation. The contouring error can thus be compensated if the compensation of both X and Y positioning errors is applied. Experiments show the contouring accuracy can be controlled to within 50 nm after compensation.
Zhang, Song; Yau, Shing-Tung
2008-06-10
For a three-dimensional shape measurement system with a single projector and multiple cameras, registering patches from different cameras is crucial. Registration usually involves a complicated and time-consuming procedure. We propose a new method that can robustly match different patches via absolute phase without significantly increasing cost. For the y and z coordinates, the transformations from one camera to the other are approximated as third-order polynomial functions of the absolute phase. The x coordinates involve only translations and scalings. These functions are calibrated and only need to be determined once. Experiments demonstrated that the alignment error is within 0.7 mm RMS.
Simple and accurate empirical absolute volume calibration of a multi-sensor fringe projection system
Gdeisat, Munther; Qudeisat, Mohammad; AlSa`d, Mohammed; Burton, David; Lilley, Francis; Ammous, Marwan M. M.
2016-05-01
This paper suggests a novel absolute empirical calibration method for a multi-sensor fringe projection system. The optical setup of the projector-camera sensor can be arbitrary. The term absolute calibration here means that the centre of the three-dimensional coordinates in the resultant calibrated volume coincides with a preset centre of the three-dimensional real-world coordinate system. The use of a zero-phase fringe marking spot is proposed to increase depth calibration accuracy, where the spot centre is determined with sub-pixel accuracy. Also, a new method is proposed for transversal calibration. The depth and transversal calibration methods have been tested using both single-sensor and three-sensor fringe projection systems. The standard deviation of the error produced by this system is 0.25 mm. The calibrated volume produced by this method is 400 mm × 400 mm × 140 mm.
Frequency-scanning interferometry for dynamic absolute distance measurement using Kalman filter.
Tao, Long; Liu, Zhigang; Zhang, Weibo; Zhou, Yangli
2014-12-15
We propose a frequency-scanning interferometry method using the Kalman filtering technique for dynamic absolute distance measurement. The frequency-scanning interferometry uses only a single tunable laser driven by a triangle waveform signal for forward and backward optical frequency scanning. The absolute distance and moving speed of a target can be estimated from the present frequency-scanning interferometry measurement and the previously calculated state based on the Kalman filter algorithm. This method not only compensates for movement errors in conventional frequency-scanning interferometry, but also achieves high-precision and low-complexity dynamic measurements. Experimental results of dynamic measurements under static state, vibration and one-dimensional movement are presented.
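As a rough illustration of the estimation step described above, the following sketch (assumed noise levels and a plain constant-velocity model, not the authors' implementation) tracks a slowly moving target from noisy absolute-distance readings with a two-state Kalman filter:

```python
import random

def kalman_track(measurements, dt=0.01, meas_var=1e-4, accel_var=1.0):
    """Constant-velocity Kalman filter for scalar distance measurements.

    State x = [distance, velocity]; each step predicts with a constant-
    velocity model and corrects with one distance measurement.
    """
    d, v = measurements[0], 0.0            # initial state guess
    P = [[1.0, 0.0], [0.0, 1.0]]           # initial covariance
    # process noise from random acceleration (discrete white-noise model)
    q11 = accel_var * dt**4 / 4; q12 = accel_var * dt**3 / 2; q22 = accel_var * dt**2
    out = []
    for z in measurements:
        # predict: x <- F x, P <- F P F^T + Q with F = [[1, dt], [0, 1]]
        d = d + v * dt
        P = [[P[0][0] + dt*(P[1][0] + P[0][1]) + dt*dt*P[1][1] + q11,
              P[0][1] + dt*P[1][1] + q12],
             [P[1][0] + dt*P[1][1] + q12,
              P[1][1] + q22]]
        # update with measurement z = d + noise (H = [1, 0])
        S = P[0][0] + meas_var
        k0, k1 = P[0][0] / S, P[1][0] / S
        resid = z - d
        d, v = d + k0 * resid, v + k1 * resid
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append((d, v))
    return out

# Simulated target at ~100 mm moving at 0.5 mm/s, measured with 10 um noise
random.seed(0)
truth = [100.0 + 0.5 * i * 0.01 for i in range(500)]
meas = [x + random.gauss(0.0, 0.01) for x in truth]
est = kalman_track(meas, dt=0.01, meas_var=0.01**2)
print(est[-1])
```

The filter both smooths the distance estimate and produces a speed estimate, which is the mechanism that compensates for target movement between forward and backward scans.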
Bai, Yang; Lu, Yunfeng; Hu, Pengcheng; Wang, Gang; Xu, Jinxin; Zeng, Tao; Li, Zhengkun; Zhang, Zhonghua; Tan, Jiubin
2016-05-11
A simple differential capacitive sensor is presented in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to movement along one translational degree of freedom (DOF), and is immune to vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10⁻⁴ pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range.
THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX
Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Szczygieł, Dorota M.; Gould, Andrew [Department of Astronomy, The Ohio State University, 4051 McPherson Laboratory, Columbus, OH 43210 (United States); Sneden, Christopher [Department of Astronomy, University of Texas at Austin, TX 78712 (United States); Dong, Subo [Institute for Advanced Study, 500 Einstein Drive, Princeton, NJ 08540 (United States)
2013-09-20
We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_{V,RRc} = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = −1.59. This is to be compared with previous estimates for RRab stars (M_{V,RRab} = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_{V,RRc} = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, −209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_{W_π}, σ_{W_θ}, σ_{W_z}) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, −42.0, −27.3) km s⁻¹ relative to the Sun, with dispersions (σ_{W_π}, σ_{W_θ}, σ_{W_z}) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.
Clemens eMaidhof
2013-07-01
To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.
First absolutely calibrated on-axis ion flow measurements in MST
Schott, B.; Baltzer, M.; Craig, D.; den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.
2016-10-01
Improvements in absolute calibration techniques allow for the first direct measurements of the flow profile in the core of MST. We use both active charge exchange recombination spectroscopy and passive emission near 343 nm to measure ion temperature and flow. It is generally assumed that O VI is the brightest passive emission source. However, we show that there are cases, such as high temperature, pulsed poloidal current drive (PPCD) plasmas where the passive emission is dominated by C VI. Differences in the fine structure for O VI and C VI result in a systematic velocity error of about 12 km/s if the wrong model is assumed. Active measurements, however, are relatively insensitive to background model choice. The dominant source of error in active velocity measurements remains the systematic errors in calibration. The first absolutely calibrated, localized toroidal velocity measurements were obtained using an updated calibration technique. During PPCD, the on-axis ion flow is up to 40 km/s larger than both the n = 6 mode velocity and the line-averaged ion velocity. These measurements provide the first direct look at the flow profile in the core of MST. This work has been supported by the US DOE and the Wheaton College summer research program.
An error assessment of the kriging based approximation model using a mean square error
Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
A kriging model is a sort of approximation model used as a deterministic surrogate for a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximated model. It is generally known that the Mean Square Error (MSE) obtained from the kriging model cannot provide statistically exact error bounds, contrary to a response surface method, so cross validation is mainly used instead. But cross validation also has many uncertainties. Moreover, cross validation cannot be used when a maximum error is required over a given region. To solve this problem, we first proposed a modified mean square error that can consider relative errors. Using the modified mean square error, we developed a strategy of adding a new sample at the location where the MSE is maximal when the MSE is used for the assessment of the kriging model. Finally, we offer guidelines for the use of the MSE obtained from the kriging model. Four test problems show that the proposed strategy is a proper method for assessing the accuracy of the kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for an exact function approximation.
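The strategy of adding a new sample where the MSE is maximal can be sketched as follows. This is a simplified one-dimensional illustration with a unit-variance Gaussian covariance and an assumed correlation length, not the paper's modified MSE; `solve` is a small dense-system helper:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def cov(a, b, length=0.3):
    """Gaussian covariance with an assumed correlation length."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def kriging_mse(x, samples, nugget=1e-9):
    """Simple-kriging prediction variance at x: 1 - k^T K^{-1} k."""
    K = [[cov(xi, xj) + (nugget if i == j else 0.0)
          for j, xj in enumerate(samples)] for i, xi in enumerate(samples)]
    k = [cov(x, xi) for xi in samples]
    a = solve(K, k)
    return 1.0 - sum(ai * ki for ai, ki in zip(a, k))

samples = [0.0, 0.5, 1.0]
candidates = [i / 100 for i in range(101)]
# next sample goes where the kriging MSE is maximal
best = max(candidates, key=lambda x: kriging_mse(x, samples))
print(best)
```

The MSE vanishes at existing samples and peaks between them, so the argmax naturally fills the least-trusted region of the surrogate.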
Chen, Jincan; Yan, Zijun; Wu, Liqing
1996-06-01
Considering a thermoelectric generator as a heat engine cycle, the general differential equations of the temperature field inside thermoelectric elements are established by means of nonequilibrium thermodynamics. These equations are used to study the influence of heat leak, Joule's heat, and Thomson heat on the performance of the thermoelectric generator. New expressions are derived for the power output and the efficiency of the thermoelectric generator. The maximum power output is calculated and the optimal matching condition of load is determined. The maximum efficiency is discussed by a representative numerical example. The aim of this research is to provide some novel conclusions and redress some errors existing in a related investigation.
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.
Bai, Ling; Smuts, Jonathan; Walsh, Phillip; Qiu, Changling; McNair, Harold M; Schug, Kevin A
2017-02-08
The vacuum ultraviolet detector (VUV) is a new non-destructive mass-sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120-240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standard approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method.
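Pseudo-absolute quantification rests on the Beer-Lambert relation between measured absorbance and an absolute absorption cross section. A minimal sketch of that conversion (the cross section, path length, and absorbance values below are hypothetical illustrations, not VUV calibration data):

```python
import math

AVOGADRO = 6.02214076e23

def number_density(absorbance, cross_section_cm2, path_cm):
    """Number density (molecules/cm^3) from base-10 absorbance via
    Beer-Lambert in cross-section form: A = sigma * n * l / ln(10)."""
    return absorbance * math.log(10.0) / (cross_section_cm2 * path_cm)

# Hypothetical values: absorbance 0.05 in a 10 cm flow cell for a species
# with a 1e-17 cm^2 cross section at the analysis wavelength.
n = number_density(0.05, 1e-17, 10.0)
moles_per_cm3 = n / AVOGADRO
print(n, moles_per_cm3)
```

Because the cross section is an intrinsic property of the molecule, the amount of analyte in the cell follows directly from the absorbance, which is what lets the detector flag systematic sample-introduction errors without a calibration curve.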
Gyrokinetic Statistical Absolute Equilibrium and Turbulence
Jian-Zhou Zhu and Gregory W. Hammett
2011-01-10
A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, "On some statistical properties of hydrodynamical and magnetohydrodynamical fields," Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.
Color assimilation and contrast near absolute threshold
McCann, John
2012-01-01
Simultaneous Contrast and Assimilation test targets are almost always viewed at high light levels. We measured the appearances of Simultaneous Contrast, Assimilation and other spatial surrounds near absolute rod threshold. Given the very different spatial organizations of receptive fields in rod and cone vision at detection threshold, it is not obvious that these familiar cone-vision spatial effects would be observed at rod light levels. Nevertheless, the spatial experiments showed that these targets have the same changes in appearance as those observed in bright light. Our experiments used very dim candle light that was above threshold for rods and L cones, and below threshold for M and S cones. Although detection threshold experiments show very different spatial organizations for rod and cone vision, we found that spatial contrast experiments gave the same changes of appearance. Neural contrast mechanisms at the lowest end of our visual HDR range are very similar to those at the top of the range in sunlight. This is true for both chromatic and achromatic targets.
PROFIT – THE ABSOLUTE EXPRESSION OF PROFITABILITY
Daniela SIMTION
2013-12-01
Full Text Available Profitability of an economic unit is expressed through a system of indicators, because "no index or economic category can reflect the total, perfect, complex reality of economic phenomena or processes. Each expresses a side of concrete, essential details (indexes, but a full one (economic category. This system of indexes for profitability is characterized by a higher degree of consolidation, of reflection of the economic-financial results. They must be correlated to the other indexes of economic efficiency from the various subsystems that constitute the factors which determine the actual amount of profit and the rate of return. Each indicator has a certain form of expression according to the phenomena to which it refers. Thus, they can be expressed in relative sizes as medium sizes or indexes. They can also be expressed in physical, conventional or value units. The ability to develop monetary results can not be judged independently to the employed means for achieving them. Therefore, the profitability analysis is not limited to investigating its absolute indexes but also the relative ones, obtained by comparing the results to the means employed or consumed for developing the specific activity
von Clarmann, T.
2014-09-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti in $Cat(n;t)$ with maximum Kirchhoff index are characterized, as well...
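Resistance distance can be illustrated on cycles (the simplest non-tree blocks of a cactus), where the two arcs between a pair of vertices act as resistors in parallel, giving $r_{ij} = d(n-d)/n$ for cyclic distance $d$. A small sketch (standard textbook formula, not taken from this paper) computing the Kirchhoff index of $C_n$:

```python
from fractions import Fraction

def kirchhoff_index_cycle(n):
    """Kirchhoff index of the cycle C_n, summing resistance distances
    r = d(n-d)/n (the two arcs of lengths d and n-d in parallel)."""
    kf = Fraction(0)
    for i in range(n):
        for j in range(i + 1, n):
            d = min(j - i, n - (j - i))
            kf += Fraction(d * (n - d), n)
    return kf

for n in range(3, 8):
    print(n, kirchhoff_index_cycle(n))
```

The values agree with the known closed form $Kf(C_n) = (n^3 - n)/12$, e.g. $Kf(C_4) = 5$.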
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Maximum Likelihood Position Location with a Limited Number of References
D. Munoz-Rodriguez
2011-04-01
Full Text Available A Position Location (PL) scheme for mobile users on the outskirts of coverage areas is presented. The proposed methodology makes it possible to obtain location information with only two land-fixed references. We introduce a general formulation and show that maximum-likelihood estimation can provide adequate PL information in this scenario. The Root Mean Square (RMS) error and error-distribution characterization are obtained for different propagation scenarios. In addition, simulation results and comparisons to another method are provided showing the accuracy and the robustness of the method proposed. We study accuracy limits of the proposed methodology for different propagation environments and show that even in the case of mismatch in the error variances, good PL estimation is feasible.
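As a rough illustration of the estimation step described above, the sketch below runs a plain Gauss-Newton least-squares fit on two range measurements; under i.i.d. Gaussian range errors this coincides with maximum-likelihood estimation. The reference coordinates and initial guess are hypothetical, not taken from the paper, and with only two references a mirror-image ambiguity remains, so the initial guess selects the branch:

```python
import numpy as np

def ml_position(refs, ranges, x0, iters=50):
    """Maximum-likelihood 2-D position from noisy range measurements.

    Under i.i.d. Gaussian range errors the ML estimate minimizes the sum
    of squared range residuals; here via a plain Gauss-Newton iteration.
    refs: (k, 2) reference coordinates; ranges: (k,) measured distances.
    """
    x = np.asarray(x0, dtype=float)
    refs = np.asarray(refs, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(x - refs, axis=1)  # predicted ranges
        J = (x - refs) / d[:, None]           # Jacobian of ranges wrt x
        r = ranges - d                        # range residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + step
    return x

# Two land-fixed references (hypothetical coordinates), noise-free ranges:
refs = np.array([[0.0, 0.0], [10.0, 0.0]])
true = np.array([3.0, 4.0])
ranges = np.linalg.norm(true - refs, axis=1)
est = ml_position(refs, ranges, x0=[4.0, 3.0])
print(np.round(est, 3))  # → [3. 4.]
```

With both references on one line, the solutions at y = 4 and y = -4 fit equally well; a third reference or side information is needed to resolve that ambiguity.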
Maximum likelihood identification of aircraft stability and control derivatives
Mehra, R. K.; Stepner, D. E.; Tyler, J. S.
1974-01-01
Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.
MA. Lendita Kryeziu
2015-06-01
Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they occur, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.
Dr. Grace Zhang
2000-01-01
Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or direct correction only. The former choice indicates that students would be happy to take either so long as the correction gets done. Most students did not mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable), and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom and may also have wider implications for other languages.
HIRDLS observations of global gravity wave absolute momentum fluxes: A wavelet based approach
John, Sherine Rachel; Kishore Kumar, Karanam
2016-02-01
Using wavelet technique for detection of height varying vertical and horizontal wavelengths of gravity waves, the absolute values of gravity wave momentum fluxes are estimated from High Resolution Dynamics Limb Sounder (HIRDLS) temperature measurements. Two years of temperature measurements (2005 December-2007 November) from HIRDLS onboard EOS-Aura satellite over the globe are used for this purpose. The least square fitting method is employed to extract the 0-6 zonal wavenumber planetary wave amplitudes, which are removed from the instantaneous temperature profiles to extract gravity wave fields. The vertical and horizontal wavelengths of the prominent waves are computed using wavelet and cross correlation techniques respectively. The absolute momentum fluxes are then estimated using prominent gravity wave perturbations and their vertical and horizontal wavelengths. The momentum fluxes obtained from HIRDLS are compared with the fluxes obtained from ground based Rayleigh LIDAR observations over a low latitude station, Gadanki (13.5°N, 79.2°E) and are found to be in good agreement. After validation, the absolute gravity wave momentum fluxes over the entire globe are estimated. It is found that the winter hemisphere has the maximum momentum flux magnitudes over the high latitudes with a secondary maximum over the summer hemispheric low-latitudes. The significance of the present study lies in introducing the wavelet technique for estimating the height varying vertical and horizontal wavelengths of gravity waves and validating space based momentum flux estimations using ground based lidar observations.
Conically scanning lidar error in complex terrain
Ferhat Bingöl
2009-05-01
Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
Efficient Image Transmission Through Analog Error Correction
Liu, Yang; Li,; Xie, Kai
2011-01-01
This paper presents a new paradigm for image transmission through analog error correction codes. Conventional schemes rely on digitizing images through quantization (which inevitably causes significant bandwidth expansion) and transmitting binary bit-streams through digital error correction codes (which do not automatically differentiate the different levels of significance among the bits). To strike a better overall performance in terms of transmission efficiency and quality, we propose to use a single analog error correction code in lieu of digital quantization, digital code and digital modulation. The key is to get analog coding right. We show that this can be achieved by cleverly exploiting an elegant "butterfly" property of chaotic systems. Specifically, we demonstrate a tail-biting triple-branch baker's map code and its maximum-likelihood decoding algorithm. Simulations show that the proposed analog code can actually outperform digital turbo code, one of the best codes known to date. The results and fin...
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Positioning, alignment and absolute pointing of the ANTARES neutrino telescope
Fehr, F [Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1 (Germany); Distefano, C, E-mail: fehr@physik.uni-erlangen.d [INFN Laboratori Nazional del Sud, Via S. Sofia 62, 95123 Catania (Italy)
2010-01-01
A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep-sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.
Absolute quantification of somatic DNA alterations in human cancer
Carter, Scott L.; Cibulskis, Kristian; Helman, Elena; McKenna, Aaron; Shen, Hui; Zack, Travis; Laird, Peter W.; Onofrio, Robert C.; Winckler, Wendy; Weir, Barbara A; Beroukhim, Rameen; Pellman, David; Levine, Douglas A.; Lander, Eric S.; Meyerson, Matthew
2012-01-01
We developed a computational method (ABSOLUTE) that infers tumor purity and malignant cell ploidy directly from analysis of somatic DNA alterations. ABSOLUTE can detect subclonal heterogeneity, somatic homozygosity, and calculate statistical sensitivity to detect specific aberrations. We used ABSOLUTE to analyze ovarian cancer data and identified pervasive subclonal somatic point mutations. In contrast, mutations occurring in key tumor suppressor genes, TP53 and NF1, were predominantly clonal ...
Antonio Boldrini
2013-06-01
Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research
Cosmic Ray Spectral Deformation Caused by Energy Determination Errors
Carlson, Per J; Carlson, Per; Wannemark, Conny
2005-01-01
Using simulation methods, distortion effects on energy spectra caused by errors in the energy determination have been investigated. For cosmic ray proton spectra, falling steeply with kinetic energy E as E^-2.7, significant effects appear. When magnetic spectrometers are used to determine the energy, the relative error increases linearly with the energy and distortions with a sinusoidal form appear starting at an energy that depends significantly on the error distribution but at an energy lower than that corresponding to the Maximum Detectable Rigidity of the spectrometer. The effect should be taken into consideration when comparing data from different experiments, often having different error distributions.
Influence of Ephemeris Error on GPS Single Point Positioning Accuracy
Lihua, Ma; Wang, Meng
2013-09-01
The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to compute its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of this influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometric relationship between the observer and the spatial constellation over a given time period.
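The line-of-sight result can be illustrated with a first-order sketch: the pseudorange error is the projection of the ephemeris error vector onto the LOS unit vector, so an error along the LOS passes through in full while a perpendicular error of the same size contributes nothing. The satellite and user coordinates below are illustrative, not from the paper:

```python
import numpy as np

def pseudorange_error(ephem_err, sat_pos, user_pos):
    """First-order pseudorange error caused by a satellite ephemeris error.

    Only the component of the orbital position error along the
    satellite-to-user line of sight (LOS) shifts the measured range.
    """
    los = (sat_pos - user_pos) / np.linalg.norm(sat_pos - user_pos)
    return float(np.dot(ephem_err, los))

sat = np.array([15600e3, 7540e3, 20140e3])  # illustrative ECEF position (m)
user = np.array([6371e3, 0.0, 0.0])
los = (sat - user) / np.linalg.norm(sat - user)

err_along = 5.0 * los                        # 5 m error along the LOS
perp = np.cross(los, [0.0, 0.0, 1.0])
err_perp = 5.0 * perp / np.linalg.norm(perp)  # 5 m error across the LOS

print(round(pseudorange_error(err_along, sat, user), 3))       # → 5.0
print(round(abs(pseudorange_error(err_perp, sat, user)), 3))   # → 0.0
```

This is why the worst positioning error in the simulation appears for ephemeris errors aligned with the LOS: the full orbital offset maps into the range measurement.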
1985-01-01
A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.
LIBERTARIANISM & CATEGORY ERROR
Carlos G. Patarroyo G.
2009-01-01
Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.
High-precision Absolute Coordinate Measurement using Frequency Scanned Interferometry
Chen, Tianxiang; Riles, Keith; Li, Cheng
2013-01-01
In this paper, we report high-precision absolute position measurement performed with frequency scanned interferometry (FSI). We reported previously on the measurement of absolute distance with FSI [1]. Absolute position is determined from several related absolute distances measured simultaneously. The achieved precision of 2-dimensional measurements is better than 1 micron, and in 3-dimensional measurements, the precision on X and Y is confirmed to be below 1 micron, while the confirmed precision on Z is about 2 microns, where the confirmation is limited by the lower precision of the moving stage in the Z direction.
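The step from several simultaneously measured absolute distances to an absolute position can be sketched as a standard multilateration solve: subtracting the first range equation from the others linearizes the problem. This is a generic illustration with hypothetical anchor coordinates, not the authors' FSI analysis:

```python
import numpy as np

def position_from_distances(anchors, dists):
    """2-D position from absolute distances to >= 3 known reference points.

    From |x - Pi|^2 = di^2, subtracting the first equation from the rest
    gives the linear system 2*(Pi - P0).x = d0^2 - di^2 + |Pi|^2 - |P0|^2,
    solved here by least squares (multilateration).
    """
    P = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (P[1:] - P[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(P[1:] ** 2 - P[0] ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
target = np.array([1.0, 2.0])
dists = np.linalg.norm(anchors - target, axis=1)
print(np.round(position_from_distances(anchors, dists), 6))  # → [1. 2.]
```

With more than the minimum number of distances, the least-squares solve also averages down independent distance errors, which is consistent with the micron-level position precision being close to the underlying distance precision.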
The absolute disparity anomaly and the mechanism of relative disparities.
Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne
2016-06-01
There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1).
Long storage times for hyperpolarized 129Xe and precise measurement of its absolute polarization
Repetto, Maricel; Zimmer, Stefan; Karpuk, Sergei; Bluemler, Peter; Heil, Werner [Johannes Gutenberg Universitaet, Institut fuer Physik. Staudingerweg 7 55099, Mainz (Germany)
2014-07-01
Applications of hyperpolarized (HP) 129Xe in medical research and fundamental physics experiments have increased significantly in recent years. All uses profit from high degrees of polarization (PXe), which not only needs to be generated but also preserved during transport and storage. PXe is usually determined by comparing the NMR signal of HP Xe with the NMR signal of thermally polarized H2O or Xe. All these procedures have experimental errors which are hard to eliminate. We present a simple method for the measurement of absolute PXe, whose best resolution is 0.6%, together with wall storage times greater than 12 h using a homebuilt, mobile Xe polarizer.
Evaluation of the absolute regional temperature potential
D. T. Shindell
2012-09-01
Full Text Available The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90–28° S, 28° S–28° N, 28–60° N and 60–90° N) as a function of emissions, based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within ±20% of the actual responses, though there are some exceptions for 90–28° S and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the ±20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39–45% and 9–39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.
Absolute parameters of young stars: QZ Carinae
Walker, W. S. G.; Blackford, M.; Butland, R.; Budding, E.
2017-09-01
New high-resolution spectroscopy and BVR photometry together with literature data on the complex massive quaternary star QZ Car are collected and analysed. Absolute parameters are found as follows. System A: M1 = 43 (±3), M2 = 19 (+3 -7), R1 = 28 (±2), R2 = 6 (±2), (⊙); T1 ∼ 28 000, T2 ∼ 33 000 K; System B: M1 = 30 (±3), M2 = 20 (±3), R1 = 10 (±0.5), R2 = 20 (±1), (⊙); T1 ∼ 36 000, T2 ∼ 30 000 K (model dependent temperatures). The wide system AB: Period = 49.5 (±1) yr, Epochs, conjunction = 1984.8 (±1), periastron = 2005.3 (±3) yr, mean separation = 65 (±3), (au); orbital inclination = 85 (+5 -15) deg, photometric distance ∼2700 (±300) pc, age = 4 (±1) Myr. Other new contributions concern: (a) analysis of the timing of minima differences (O - C)s for the eclipsing binary (System B); (b) the width of the eclipses, pointing to relatively large effects of radiation pressure; (c) inferences from the rotational widths of lines for both Systems A and B; and (d) implications for theoretical models of early-type stars. While feeling greater confidence on the quaternary's general parametrization, observational complications arising from strong wind interactions or other, unclear, causes still inhibit precision and call for continued multiwavelength observations. Our high-inclination value for the AB system helps to explain failures to resolve the wide binary in the previous years. The derived young age independently confirms membership of QZ Car to the open cluster Collinder 228.
Absolute Radiometric Calibration of KOMPSAT-3A
Ahn, H. Y.; Shin, D. Y.; Kim, J. S.; Seo, D. C.; Choi, C. U.
2016-06-01
This paper presents a vicarious radiometric calibration of the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) performed by the Korea Aerospace Research Institute (KARI) and the Pukyong National University Remote Sensing Group (PKNU RSG) in 2015. The primary stages of this study are summarized as follows: (1) A field campaign to determine radiometrically calibrated target fields was undertaken in Mongolia and South Korea. Surface reflectance data obtained in the campaign were input to a radiative transfer code that predicted at-sensor radiance. Through this process, equations and parameters were derived for the KOMPSAT-3A sensor to enable the conversion of calibrated DN to physical units, such as at-sensor radiance or TOA reflectance. (2) To validate the absolute calibration coefficients for the KOMPSAT-3A sensor, we performed a radiometric validation with a comparison of KOMPSAT-3A and Landsat-8 TOA reflectance using one of the six PICS (Libya 4). Correlations between top-of-atmosphere (TOA) radiances and the spectral band responses of the KOMPSAT-3A sensors at the Zuunmod, Mongolia and Goheung, South Korea sites were significant for multispectral bands. The average difference in TOA reflectance between the KOMPSAT-3A and Landsat-8 images over the Libya 4 site in the red-green-blue (RGB) region was under 3%, whereas in the NIR band the TOA reflectance of KOMPSAT-3A was lower than that of Landsat-8 due to the difference in the band passes of the two sensors. The KOMPSAT-3A sensor includes a band pass near 940 nm that can be strongly absorbed by water vapor and therefore displayed low reflectance. To overcome this, we need to undertake a detailed analysis using rescale methods, such as the spectral bandwidth adjustment factor.
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Cervical spine reposition errors after cervical flexion and extension.
Wang, Xu; Lindstroem, René; Carstens, Niels Peter Bak; Graven-Nielsen, Thomas
2017-03-13
Upright head and neck position has frequently been used as a baseline for the diagnosis of neck problems. However, the variance of this position after cervical motions has never been demonstrated, so it is unclear whether the baseline position varies evenly across the cervical joints. The purpose was to assess reposition errors of the upright cervical spine. Cervical reposition errors were measured in twenty healthy subjects (6 females) using video-fluoroscopy. Two flexion movements were performed with a 20 s interval; the same was repeated for extension, with an interval of 5 min between the flexion and extension movements. Cervical joint positions were assessed with anatomical landmarks and external markers in a Matlab program. Reposition errors were extracted in degrees (initial position minus reposition) as constant errors (CEs) and absolute errors (AEs). Twelve of twenty-eight CEs (7 joints times 4 repositions) exceeded the minimal detectable change (MDC), while all AEs exceeded the MDC. Averaged AEs across the cervical joints were larger after the 5 min intervals than after the 20 s intervals. The cervical spine returns to the upright position with a 2° average absolute difference after cervical flexion and extension movements in healthy adults.
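The two error statistics used above can be sketched in a few lines: the constant error keeps the sign of each reposition difference (systematic bias), while the absolute error averages magnitudes regardless of direction. The joint angles below are hypothetical, for illustration only:

```python
import numpy as np

def reposition_errors(initial, repositioned):
    """Constant error (CE) and absolute error (AE) of joint repositioning.

    CE keeps the sign (systematic bias, in degrees: initial minus
    reposition); AE discards it (average magnitude of the deviation).
    """
    diff = np.asarray(initial, float) - np.asarray(repositioned, float)
    ce = diff.mean()
    ae = np.abs(diff).mean()
    return ce, ae

# Hypothetical joint angles (degrees) for one cervical joint:
initial      = [10.0, 10.0, 10.0, 10.0]
repositioned = [ 8.0, 12.0,  9.0, 11.0]
ce, ae = reposition_errors(initial, repositioned)
print(ce, ae)  # → 0.0 1.5
```

Note how overshoots and undershoots cancel in the CE while the AE remains nonzero, which is why the study could find all AEs above the minimal detectable change even where CEs were not.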
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Mannervik, B; Jakobson, I; Warholm, M
1986-01-01
Optimal design of experiments as well as proper analysis of data are dependent on knowledge of the experimental error. A detailed analysis of the error structure of kinetic data obtained with acetylcholinesterase showed conclusively that the classical assumptions of constant absolute or constant relative error are inadequate for the dependent variable (velocity). The best mathematical models for the experimental error involved the substrate and inhibitor concentrations and reflected the rate law for the initial velocity. Data obtained with other enzymes displayed similar relationships between experimental error and the independent variables. The new empirical error functions were shown superior to previously used models when utilized in weighted non-linear-regression analysis of kinetic data. The results suggest that, in the spectrophotometric assays used in the present study, the observed experimental variance is primarily due to errors in determination of the concentrations of substrate and inhibitor and not to error in measuring the velocity. PMID:3753447
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Julian, Liam
2009-01-01
In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…
Challenge and Error: Critical Events and Attention-Related Errors
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Syam Kumar, S.A., E-mail: skppm@rediffmail.com [Department of Medical Physics, Cancer Institute (WIA), Adyar, Chennai, Tamil Nadu (India); Sukumar, Prabakar; Sriram, Padmanaban; Rajasekaran, Dhanabalan; Aketi, Srinu; Vivekanandan, Nagarajan [Department of Medical Physics, Cancer Institute (WIA), Adyar, Chennai, Tamil Nadu (India)
2012-01-01
The recalculation of 1 fraction from a patient treatment plan on a phantom and subsequent measurements have become the norms for measurement-based verification, which combines the quality assurance recommendations that deal with the treatment planning system and the beam delivery system. This type of evaluation has prompted attention to measurement equipment and techniques. Ionization chambers are considered the gold standard because of their precision, availability, and relative ease of use. This study evaluates and compares 5 different ionization chamber-phantom combinations for verification in routine patient-specific quality assurance of RapidArc treatments. Fifteen different RapidArc plans conforming to the clinical standards were selected for the study. Verification plans were then created for each treatment plan with different chamber-phantom combinations scanned by computed tomography. These include a Medtec intensity-modulated radiation therapy (IMRT) phantom with a micro-ionization chamber (0.007 cm³) and a pinpoint chamber (0.015 cm³), a PTW-Octavius phantom with a semiflex chamber (0.125 cm³) and a 2D array (0.125 cm³), and an indigenously made circular wax phantom with a 0.6 cm³ chamber. The measured isocenter absolute dose was compared with the treatment planning system (TPS) plan. The micro-ionization chamber shows more deviation than the semiflex and 0.6 cm³ chambers, with maximum variations of -4.76%, -1.49%, and 2.23% for the micro-ionization, semiflex, and Farmer chambers, respectively. The positive variations indicate that the chamber with larger volume overestimates. The Farmer chamber shows higher deviation when compared with the 0.125 cm³ chamber. In general the deviation was found to be <1% with the semiflex and Farmer chambers. A maximum variation of 2% was observed for the 0.007 cm³ ionization chamber, except in a few cases. The pinpoint chamber underestimates the calculated isocenter dose by a maximum of 4.8%. Absolute dose
Syam Kumar, S A; Sukumar, Prabakar; Sriram, Padmanaban; Rajasekaran, Dhanabalan; Aketi, Srinu; Vivekanandan, Nagarajan
2012-01-01
The recalculation of 1 fraction from a patient treatment plan on a phantom and subsequent measurements have become the norms for measurement-based verification, which combines the quality assurance recommendations that deal with the treatment planning system and the beam delivery system. This type of evaluation has prompted attention to measurement equipment and techniques. Ionization chambers are considered the gold standard because of their precision, availability, and relative ease of use. This study evaluates and compares 5 different ionization chamber-phantom combinations for verification in routine patient-specific quality assurance of RapidArc treatments. Fifteen different RapidArc plans conforming to the clinical standards were selected for the study. Verification plans were then created for each treatment plan with different chamber-phantom combinations scanned by computed tomography. These include a Medtec intensity-modulated radiation therapy (IMRT) phantom with a micro-ionization chamber (0.007 cm³) and a pinpoint chamber (0.015 cm³), a PTW-Octavius phantom with a semiflex chamber (0.125 cm³) and a 2D array (0.125 cm³), and an indigenously made circular wax phantom with a 0.6 cm³ chamber. The measured isocenter absolute dose was compared with the treatment planning system (TPS) plan. The micro-ionization chamber shows more deviation than the semiflex and 0.6 cm³ chambers, with maximum variations of -4.76%, -1.49%, and 2.23% for the micro-ionization, semiflex, and Farmer chambers, respectively. The positive variations indicate that the chamber with larger volume overestimates. The Farmer chamber shows higher deviation when compared with the 0.125 cm³ chamber. In general the deviation was found to be <1% with the semiflex and Farmer chambers. A maximum variation of 2% was observed for the 0.007 cm³ ionization chamber, except in a few cases. The pinpoint chamber underestimates the calculated isocenter dose by a maximum of 4.8%. Absolute dose measurements using the semiflex
Ngo, Son Tung; Nguyen, Minh Tung; Nguyen, Minh Tho
2017-05-01
The absolute binding free energy of an inhibitor to HIV-1 protease (PR) was determined through evaluation of the non-bonded interaction energy difference between the bound and unbound states of the inhibitor and surrounding molecules by the fast pulling of ligand (FPL) process using non-equilibrium molecular dynamics (NEMD) simulations. The calculated free energy difference terms help clarify the nature of the binding. Theoretical binding affinities are in good correlation with experimental data, with R = 0.89. The paradigm used is able to rank two inhibitors having a difference of ∼1.5 kcal/mol in absolute binding free energies.
Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions.
Pigarev, Ivan N; Levichkina, Ekaterina V
2016-01-01
Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in the visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with spatial frequency of one grating being twice as high as that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with a spatial frequency twice as high should be shifted to a distance half way closer to the screen in order to attain the same size of retinal projection. For hypothetical neurons selective to absolute depth, location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.
V R Durai; Rashmi Bhardwaj
2014-07-01
The output from Global Forecasting System (GFS) T574L64 operational at India Meteorological Department (IMD), New Delhi is used for obtaining location specific quantitative forecast of maximum and minimum temperatures over India in the medium range time scale. In this study, a statistical bias correction algorithm has been introduced to reduce the systematic bias in the 24–120 hour GFS model location specific forecast of maximum and minimum temperatures for 98 selected synoptic stations, representing different geographical regions of India. The statistical bias correction algorithm used for minimizing the bias of the next forecast is Decaying Weighted Mean (DWM), as it is suitable for small samples. The main objective of this study is to evaluate the skill of Direct Model Output (DMO) and Bias Corrected (BC) GFS for location specific forecast of maximum and minimum temperatures over India. The performance skill of 24–120 hour DMO and BC forecast of GFS model is evaluated for all the 98 synoptic stations during summer (May–August 2012) and winter (November 2012–February 2013) seasons using different statistical evaluation skill measures. The magnitude of Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for BC GFS forecast is lower than DMO during both summer and winter seasons. The BC GFS forecasts have higher skill score as compared to GFS DMO over most of the stations in all day-1 to day-5 forecasts during both summer and winter seasons. It is concluded from the study that the skill of GFS statistical BC forecast improves over the GFS DMO remarkably and hence can be used as an operational weather forecasting system for location specific forecast over India.
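The decaying-weighted-mean idea can be sketched minimally as below; the weight of 1/7 and the temperature values are invented for illustration and are not IMD's operational settings:

```python
# Decaying weighted mean (DWM) bias correction: the running bias estimate
# weights recent forecast errors more heavily.  The weight 1/7 and all
# temperature values are illustrative assumptions, not IMD settings.
def dwm_correct(forecasts, observations, weight=1 / 7):
    bias, corrected = 0.0, []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)                      # correct first...
        bias = (1 - weight) * bias + weight * (f - o)   # ...then update
    return corrected

fc  = [32.0, 33.5, 31.0, 34.0, 33.0]   # model Tmax forecasts (deg C)
obs = [30.0, 31.0, 29.5, 32.5, 31.5]   # observed Tmax (deg C)
corr = dwm_correct(fc, obs)

def mae(xs):
    return sum(abs(x - o) for x, o in zip(xs, obs)) / len(obs)

print(round(mae(fc), 2), round(mae(corr), 2))
```

With a persistent warm bias as in this toy series, the mean absolute error of the corrected forecasts drops below that of the direct model output, the same qualitative result the study reports.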
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
Absolute and relative surface profile interferometry using multiple frequency-scanned lasers
Peca, Marek; Vojtíšek, Petr; Lédl, Vít
2016-01-01
An interferometer has been used to measure the surface profile of a generic object. Frequency-scanning interferometry has been employed to provide unambiguous phase readings, to suppress etalon fringes, and to supersede phase-shifting. The frequency scan has been performed in three narrow wavelength bands, each generated by a temperature-tuned laser diode. It is shown that, for certain portions of the measured object, it was possible to obtain an absolute phase measurement, counting all wave periods from the point of zero path difference, yielding a precision of 2.7 nm RMS over an 11.75 mm total path difference. For the other areas, where steep slopes were present in the object geometry, a relative measurement is still possible, at a measured surface roughness comparable to that of the machining process (the same 2.7 nm RMS). It is concluded that areas containing steep slopes exhibit a systematic error, attributed to the combined factors of dispersion and retrace error.
Absolute and relative surface profile interferometry using multiple frequency-scanned lasers
Peca, Marek; Psota, Pavel; Vojtíšek, Petr; Lédl, Vít.
2016-11-01
An interferometer has been used to measure the surface profile of a generic object. Frequency-scanning interferometry has been employed to provide unambiguous phase readings, to suppress etalon fringes, and to supersede phase-shifting. The frequency scan has been performed in three narrow wavelength bands, each generated by a temperature-tuned laser diode. It is shown that, for certain portions of the measured object, it was possible to obtain an absolute phase measurement, counting all wave periods from the point of zero path difference, yielding a precision of 2.7 nm RMS over an 11.75 mm total path difference. For the other areas, where steep slopes were present in the object geometry, a relative measurement is still possible, at a measured surface roughness comparable to that of the machining process (the same 2.7 nm RMS). It is concluded that areas containing steep slopes exhibit a systematic error, attributed to the combined factors of dispersion and retrace error.
Measurement of the Absolute Branching Fraction of D0 to K- pi+
Aubert, B.; Bona, M.; Boutigny, D.; Karyotakis, Y.; Lees, J.P.; Poireau, V.; Prudent, X.; Tisserand, V.; Zghiche, A.; /Annecy, LAPP; Garra Tico, J.; Grauges, E.; /Barcelona U., ECM; Lopez, L.; Palano, A.; /Bari U.; Eigen, G.; Ofte, I.; Stugu, B.; Sun, L.; /Bergen U.; Abrams, G.S.; Battaglia, M.; Brown, D.N.; Button-Shafer, J.; /LBL, Berkeley
2007-04-25
The authors measure the absolute branching fraction for D⁰ → K⁻π⁺ using partial reconstruction of B̄⁰ → D*⁺ X ℓ⁻ ν̄ℓ decays, in which only the charged lepton and the pion from the decay D*⁺ → D⁰π⁺ are used. Based on a data sample of 230 million BB̄ pairs collected at the Υ(4S) resonance with the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC, they obtain B(D⁰ → K⁻π⁺) = (4.007 ± 0.037 ± 0.070)%, where the first error is statistical and the second error is systematic.
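As a small worked check, the statistical and systematic uncertainties quoted above combine in quadrature (the usual convention for a result quoted as value ± stat ± syst) into a single total error:

```python
import math

# Values taken from the abstract, in percent.
value, stat, syst = 4.007, 0.037, 0.070
total = math.sqrt(stat ** 2 + syst ** 2)
print(f"B(D0 -> K- pi+) = ({value:.3f} +/- {total:.3f})%")
```

Quadrature addition is appropriate because the two error sources are treated as independent.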
Refractive error sensing from wavefront slopes.
Navarro, Rafael
2010-01-01
The problem of measuring the objective refractive error with an aberrometer has proved more elusive than expected. Here, the formalism of differential geometry is applied to develop a theoretical framework of refractive error sensing. At each point of the pupil, the local refractive error is given by the wavefront curvature, which is a 2 × 2 symmetric matrix, whose elements are directly related to sphere, cylinder, and axis. Aberrometers usually measure the local gradient of the wavefront. Then refractive error sensing consists of differentiating the gradient, instead of integrating as in wavefront sensing. A statistical approach is proposed to pass from the local to the global (clinically meaningful) refractive error, in which the best correction is assumed to be the maximum likelihood estimation. In the practical implementation, this corresponds to the mode of the joint histogram of the 3 different elements of the curvature matrix. Results obtained both in computer simulations and with real data provide a close agreement and consistency with the main optical image quality metrics such as the Strehl ratio.
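The core step, differentiating the measured gradient to obtain the 2 × 2 curvature matrix, can be sketched for the simplest case of pure defocus. The analytic slope functions and the defocus value below are stand-ins for real aberrometer data, not the paper's implementation:

```python
# Sketch: recover local refractive power by differentiating measured
# wavefront slopes.  The slope functions are analytic stand-ins for
# aberrometer data, for a purely defocused wavefront W = P*(x^2 + y^2)/2.
P = 2.5      # assumed defocus (diopters)
h = 1e-4     # finite-difference step

def Wx(x, y):  # "measured" slope dW/dx
    return P * x

def Wy(x, y):  # "measured" slope dW/dy
    return P * y

def curvature(x, y):
    # Central differences of the gradient give the 2x2 curvature matrix.
    wxx = (Wx(x + h, y) - Wx(x - h, y)) / (2 * h)
    wyy = (Wy(x, y + h) - Wy(x, y - h)) / (2 * h)
    wxy = (Wx(x, y + h) - Wx(x, y - h)) / (2 * h)
    return wxx, wyy, wxy

wxx, wyy, wxy = curvature(0.3, -0.2)
sphere = (wxx + wyy) / 2   # mean power; cylinder comes from wxx-wyy and wxy
print(sphere)
```

For pure defocus the recovered curvature matrix is P times the identity at every pupil point, so the "histogram" of local values collapses to a single mode.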
Partin, Judson Wiley
The West Pacific Warm Pool (WPWP) plays an important role in the global heat budget and global hydrologic cycle, so knowledge about its past variability would improve our understanding of global climate. Variations in WPWP precipitation are most notable during El Niño-Southern Oscillation events, when climate changes in the tropical Pacific impact rainfall not only in the WPWP, but around the globe. The stalagmite records presented in this dissertation provide centennial-to-millennial-scale constraints of WPWP precipitation during three distinct climatic periods: the Last Glacial Maximum (LGM), the last deglaciation, and the Holocene. In Chapter 2, the methodologies associated with the generation of U/Th-based absolute ages for the stalagmites are presented. In the final age models for the stalagmites, dates younger than 11,000 years have absolute errors of ±400 years or less, and dates older than 11,000 years have a relative error of ±2%. Stalagmite-specific ²³⁰Th/²³²Th ratios, calculated using isochrons, are used to correct for the presence of unsupported ²³⁰Th in a stalagmite at the time of formation. Hiatuses in the record are identified using a combination of optical properties, high ²³²Th concentrations, and extrapolation from adjacent U/Th dates. In Chapter 3, stalagmite oxygen isotopic composition (δ¹⁸O) records from N. Borneo are presented which reveal millennial-scale rainfall changes that occurred in response to changes in global climate boundary conditions, radiative forcing, and abrupt climate changes. The stalagmite δ¹⁸O records detect little change in inferred precipitation between the LGM and the present, although significant uncertainties are associated with the impact of the Sunda Shelf on rainfall δ¹⁸O during the LGM. A millennial-scale drying in N. Borneo, inferred from an increase in stalagmite δ¹⁸O, peaks at ~16.5 ka, coeval with the timing of Heinrich event 1, possibly related to a southward movement of the Intertropical
Absolute Humidity and the Seasonality of Influenza (Invited)
Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.
2010-12-01
Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.
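A toy humidity-forced SIRS model shows the claimed mechanism, a transmission rate that rises when absolute humidity is anomalously low, producing a wintertime epidemic peak. This is not the authors' model; every function and parameter below is an invented illustration:

```python
import math

# Toy SIRS model: transmission rate beta(q) rises when the absolute
# humidity proxy q is low.  All parameter values are invented.
def beta(q):  # q: absolute humidity proxy (g/kg)
    return 0.5 * math.exp(-0.2 * q) + 0.3

S, I, R = 0.99, 0.01, 0.0
gamma, loss = 0.25, 1 / 365        # recovery rate, immunity-loss rate
peak_day, peak_I = 0, 0.0
for day in range(365):             # day 0 ~ midwinter: q near its minimum
    q = 8 + 6 * math.cos(2 * math.pi * (day - 200) / 365)
    dS = -beta(q) * S * I + loss * R
    dI = beta(q) * S * I - gamma * I
    dR = gamma * I - loss * R
    S, I, R = S + dS, I + dI, R + dR   # forward-Euler step, dt = 1 day
    if I > peak_I:
        peak_I, peak_day = I, day
print(peak_day, round(peak_I, 3))
```

Because beta(q)/gamma exceeds 1 by the largest margin when q is low, the outbreak peaks in the low-humidity season; S + I + R stays constant, a useful sanity check on the Euler stepping.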
Novalis' Poetic Uncertainty: A "Bildung" with the Absolute
Mika, Carl
2016-01-01
Novalis, the Early German Romantic poet and philosopher, had at the core of his work a mysterious depiction of the "absolute." The absolute is Novalis' name for a substance that defies precise knowledge yet calls for a tentative and sensitive speculation. How one asserts a truth, represents an object, and sets about encountering things…
A Global Forecast of Absolute Poverty and Employment.
Hopkins, M. J. D.
1980-01-01
Estimates are made of absolute poverty and employment under the hypothesis that existing trends continue. Concludes that while the number of people in absolute poverty is not likely to decline by 2000, the proportion will fall. Jobs will have to grow 3.9% per year in developing countries to achieve full employment. (JOW)
Determination of Absolute Zero Using a Computer-Based Laboratory
Amrani, D.
2007-01-01
We present a simple computer-based laboratory experiment for evaluating absolute zero in degrees Celsius, which can be performed in college and undergraduate physical sciences laboratory courses. With a computer, absolute zero apparatus can help demonstrators or students to observe the relationship between temperature and pressure and use…
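The analysis behind such an experiment is a straight-line fit of pressure against Celsius temperature at constant volume, extrapolated to zero pressure. The readings below are synthetic illustrative values, not data from the apparatus:

```python
# Least-squares line through (T, P) data at constant volume, then
# extrapolate to P = 0 to estimate absolute zero.  Synthetic readings.
T  = [0, 20, 40, 60, 80, 100]                     # deg C
Pk = [101.3, 108.7, 116.1, 123.6, 131.0, 138.4]   # kPa

n = len(T)
mt, mp = sum(T) / n, sum(Pk) / n
slope = (sum((t - mt) * (p - mp) for t, p in zip(T, Pk))
         / sum((t - mt) ** 2 for t in T))
intercept = mp - slope * mt
abs_zero = -intercept / slope     # temperature where P extrapolates to 0
print(round(abs_zero, 1))         # should land near -273.15 deg C
```

The extrapolated intercept approximates -273.15 °C to within the scatter of the readings.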
Absolute instruments and perfect imaging in geometrical optics
Tyc, Tomas; Sarbort, Martin; Bering, Klaus
2011-01-01
We investigate imaging by spherically symmetric absolute instruments that provide perfect imaging in the sense of geometrical optics. We derive a number of properties of such devices, present a general method for designing them and use this method to propose several new absolute instruments, in particular a lens providing a stigmatic image of an optically homogeneous region and having a moderate refractive index range.
ABSOLUTE STABILITY OF GENERAL LURIE DISCRETE NONLINEAR CONTROL SYSTEMS
GAN Zuoxin; HAN Jingqing; ZHAO Suxia; WU Yongxian
2002-01-01
In the present paper, the absolute stability of general Lurie discrete nonlinear control systems has been discussed by Lyapunov function approach. A sufficient condition of absolute stability for the general Lurie discrete nonlinear control systems is derived, and some necessary and sufficient conditions are obtained in special cases. Meanwhile, we give a simple example to illustrate the effectiveness of the results.
Absolute density measurements in the middle atmosphere
M. Rapp
In the last ten years a total of 25 sounding rockets employing ionization gauges have been launched at high latitudes (~70° N) to measure total atmospheric density and its small-scale fluctuations in an altitude range between 70 and 110 km. While the determination of small-scale fluctuations is unambiguous, the total density analysis has been complicated in the past by aerodynamical disturbances leading to densities inside the sensor which are enhanced compared to atmospheric values. Here, we present the results of both Monte Carlo simulations and wind tunnel measurements to quantify this aerodynamical effect. The comparison of the resulting 'ram-factor' profiles with empirically determined density ratios of ionization gauge measurements and falling sphere measurements provides excellent agreement. This demonstrates both the need and the possibility to correct aerodynamical influences on measurements from sounding rockets. We have determined a total of 20 density profiles of the mesosphere-lower-thermosphere (MLT) region. Grouping these profiles according to season, a listing of mean density profiles is included in the paper. A comparison with density profiles taken from the reference atmospheres CIRA86 and MSIS90 results in differences of up to 40%. This reflects that current reference atmospheres are a significant potential error source for the determination of mixing ratios of, for example, trace gas constituents in the MLT region.
Key words. Middle atmosphere (composition and chemistry; pressure, density, and temperature; instruments and techniques
Patient error: a preliminary taxonomy.
Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.
2009-01-01
PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary ca
Automatic Error Analysis Using Intervals
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
Rieger, Martina; Martinez, Fanny; Wenke, Dorit
2011-01-01
Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…
A developmental study of latent absolute pitch memory.
Jakubowski, Kelly; Müllensiefen, Daniel; Stewart, Lauren
2017-03-01
The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline, nor effects of musical training, gender, or familiarity with the stimuli, were found in regard to latent AP task performance. These findings suggest that latent AP memory is a stable ability that is developed from as early as age 4 and persists into adulthood.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
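The flavor of the sampling approach can be sketched with the standard monomer-dimer Glauber chain on a tiny graph. This is an illustrative chain with an assumed fugacity, not the authors' O(m log^2 n) algorithm:

```python
import random

random.seed(3)

# Glauber dynamics on matchings (monomer-dimer chain) with fugacity lam:
# pick a random edge; if it is in the matching, drop it with prob 1/(1+lam);
# if both endpoints are free, add it with prob lam/(1+lam).
# High fugacity biases the stationary distribution toward large matchings.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # 4-cycle plus a chord
lam = 50.0
matching = set()

def free(v):
    return all(v not in e for e in matching)

best = 0
for _ in range(5000):
    e = random.choice(edges)
    if e in matching:
        if random.random() < 1 / (1 + lam):
            matching.discard(e)
    elif free(e[0]) and free(e[1]):
        if random.random() < lam / (1 + lam):
            matching.add(e)
    best = max(best, len(matching))
print(best)
```

On this graph the maximum matching has size 2, and with a large fugacity the chain spends almost all its time at or near that size; each update preserves the invariant that no vertex is matched twice.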
Error bars in experimental biology.
Cumming, Geoff; Fidler, Fiona; Vaux, David L
2007-04-09
Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
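The three quantities the article distinguishes can be computed for a single sample. The data are made up, and the 1.96 multiplier assumes a normal approximation (for such a small n a t-multiplier would be more appropriate):

```python
import statistics as st

# One small made-up sample; contrast descriptive vs inferential error bars.
data = [4.2, 5.1, 4.8, 5.5, 4.9, 5.2, 4.6, 5.0]
n = len(data)
mean = st.mean(data)
sd = st.stdev(data)            # descriptive: spread of the data
sem = sd / n ** 0.5            # inferential: precision of the mean
ci95 = 1.96 * sem              # half-width of an approximate 95% CI
print(mean, sd, sem, ci95)
```

This makes the article's point concrete: SD bars describe the data, SEM bars are always narrower by a factor of sqrt(n), and CI bars are wider than SEM bars, so a figure legend must say which is plotted.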
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Video Error Correction Using Steganography
Robie, David L.
2002-01-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Kukush, Alexander
2011-01-16
With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), R = λ0 + EAR·D, where λ0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of parametric full maximum likelihood and regression calibration (under the assumption that the data set of true doses is lognormally distributed), nonparametric full maximum likelihood, nonparametric regression calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were set to values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
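The error structure described above can be sketched in a few lines. The parameter values, error magnitudes, and lognormal error distributions below are illustrative assumptions, not the study's actual settings:

```python
import random

def response_prob(dose, lam0, ear):
    """pr(Y=1 | D) = R / (1 + R), with R = lam0 + EAR * D."""
    r = lam0 + ear * dose
    return r / (1.0 + r)

random.seed(1)
lam0, ear = 0.002, 0.5   # hypothetical baseline rate and excess absolute risk per Gy

# True radioiodine content; classical error on Q (measured = true * error),
# Berkson error on M (true = measured * error).
q_true, m_meas, f = 2.0, 15.0, 1.0
q_meas = q_true * random.lognormvariate(0.0, 0.3)   # classical: Q_mes = Q_tr * V_Q
m_true = m_meas * random.lognormvariate(0.0, 0.2)   # Berkson:   M_tr  = M_mes * V_M

d_meas = f * q_meas / m_meas      # dose computed from the measurements
d_true = f * q_true / m_true      # unobserved true dose

p = response_prob(d_true, lam0, ear)
y = 1 if random.random() < p else 0   # simulated binary response
```

Fitting λ0 and EAR to (d_meas, y) pairs while ignoring the two error models is exactly the naive analysis whose bias the calibration and SIMEX methods are designed to remove.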
A new method to calibrate the absolute sensitivity of a soft X-ray streak camera
Yu, Jian; Liu, Shenye; Li, Jin; Yang, Zhiwen; Chen, Ming; Guo, Luting; Yao, Li; Xiao, Shali
2016-12-01
In this paper, we introduce a new method to calibrate the absolute sensitivity of a soft X-ray streak camera (SXRSC). The calibrations are done in the static mode by using a small laser-produced X-ray source. A calibrated X-ray CCD is used as a secondary standard detector to monitor the X-ray source intensity. In addition, two sets of holographic flat-field grating spectrometers are chosen as the spectral discrimination systems of the SXRSC and the X-ray CCD. The absolute sensitivity of the SXRSC is obtained by comparing the signal counts of the SXRSC to the output counts of the X-ray CCD. Results show that the calibrated spectrum covers the range from 200 eV to 1040 eV. The change of the absolute sensitivity in the vicinity of the carbon K-edge can also be clearly seen. The experimental values agree with the calculated values to within 29%. Compared with previous calibration methods, the proposed method has several advantages: a wide spectral range, high accuracy, and simple data processing. Our calibration results can be used to make quantitative X-ray flux measurements in laser fusion research.
Richter, J. P.; Mollendorf, J. C.; DesJardin, P. E.
2016-11-01
Accurate knowledge of the absolute combustion gas composition is necessary in the automotive, aircraft, processing, heating and air conditioning industries, where emissions reduction is a major concern. Those industries use a variety of sensor technologies. Many of these sensors analyze the gas by pumping a sample through a system of tubes to a remote sensor location. An inherent characteristic of this type of sampling strategy is that the mixture state changes as the sample is drawn towards the sensor. Specifically, temperature and humidity changes can be significant, resulting in a very different gas mixture at the sensor interface compared with the in situ location (water vapor dilution effect). Consequently, the gas concentrations obtained from remotely sampled gas analyzers can be significantly different from in situ values. In this study, inherent errors associated with sampled combustion gas concentration measurements are explored, and a correction methodology is presented to determine the absolute gas composition from remotely measured gas species concentrations. For in situ (wet) measurements a heated zirconium dioxide (ZrO2) oxygen sensor (Bosch LSU 4.9) is used to measure the absolute oxygen concentration. This is used to correct the remotely sampled (dry) measurements taken with an electrochemical sensor within the remote analyzer (Testo 330-2LL). In this study, such a correction is experimentally validated for a specified concentration of carbon monoxide (5020 ppmv).
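The correction methodology itself is not spelled out in the abstract; a standard water-vapor dilution correction consistent with the description (using the wet in situ O2 and the dry sampled O2 to infer the water fraction) might look like this sketch, with hypothetical sensor readings:

```python
def water_fraction(o2_wet, o2_dry):
    """Mole fraction of water vapor inferred from wet (in situ) vs dry
    (remotely sampled) O2 readings, assuming drying is the only change
    between the two measurement points."""
    return 1.0 - o2_wet / o2_dry

def dry_to_wet(c_dry, x_h2o):
    """Convert a dry-basis concentration back to the absolute (wet) in situ value."""
    return c_dry * (1.0 - x_h2o)

# Hypothetical readings: 4.2% O2 in situ (ZrO2 sensor), 5.0% O2 after drying.
x = water_fraction(0.042, 0.050)   # inferred water mole fraction (0.16)
co_wet = dry_to_wet(5020.0, x)     # 5020 ppmv dry-basis CO -> wet basis
```

The same multiplicative factor (1 - x_h2o) applies to any species reported on a dry basis by the remote analyzer.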
A Characterization of Prediction Errors
Meek, Christopher
2016-01-01
Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove a prediction error.
Error Analysis and Its Implication
崔蕾
2007-01-01
Error analysis is an important theory and approach for exploring the mental processes of language learners in second language acquisition (SLA). Its major contribution is in pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.
Error bars in experimental biology
2007-01-01
Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...
Split-step eigenvector-following technique for exploring enthalpy landscapes at absolute zero.
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra
2006-03-16
The mapping of enthalpy landscapes is complicated by the coupling of particle position and volume coordinates. To address this issue, we have developed a new split-step eigenvector-following technique for locating minima and transition points in an enthalpy landscape at absolute zero. Each iteration is split into two steps in order to independently vary system volume and relative atomic coordinates. A separate Lagrange multiplier is used for each eigendirection in order to provide maximum flexibility in determining step sizes. This technique will be useful for mapping the enthalpy landscapes of bulk systems such as supercooled liquids and glasses.
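As a toy illustration of eigenvector following (without the authors' split-step treatment of volume and atomic coordinates), one can diagonalize the Hessian and take an independent Newton step along each eigendirection; on a double-well surface this converges to the transition (saddle) point as well as to the minima. The surface below is an assumed example, not the paper's enthalpy landscape:

```python
import numpy as np

def grad(p):
    """Gradient of f(x, y) = (x^2 - 1)^2 + y^2 (two minima, one saddle)."""
    x, y = p
    return np.array([4 * x * (x * x - 1), 2 * y])

def hess(p):
    """Analytic Hessian of the same surface."""
    x, y = p
    return np.array([[12 * x * x - 4, 0.0], [0.0, 2.0]])

def eigenvector_following(p, steps=50):
    """Take a Newton step independently along each Hessian eigendirection;
    converges to stationary points of any index, including saddles."""
    for _ in range(steps):
        lam, v = np.linalg.eigh(hess(p))
        g = v.T @ grad(p)          # gradient expressed in the eigenbasis
        p = p - v @ (g / lam)      # one per-mode step (one multiplier each)
        if np.linalg.norm(grad(p)) < 1e-10:
            break
    return p

saddle = eigenvector_following(np.array([0.3, 0.2]))    # -> (0, 0), index-1 saddle
minimum = eigenvector_following(np.array([1.2, 0.1]))   # -> (1, 0), a minimum
```

Sign-controlled step sizes per eigendirection (the Lagrange multipliers mentioned above) are what let the full method walk uphill along one mode while minimizing along the others.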
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
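The background-only versus background-plus-source comparison reduces to a Poisson likelihood-ratio statistic. This minimal sketch uses hypothetical counts and a flat background model, not the actual Sherpa models or PSF convolution:

```python
import math

def poisson_loglike(counts, model):
    """Sum of Poisson log-likelihoods over pixels, dropping the
    model-independent log(k!) term."""
    return sum(k * math.log(m) - m for k, m in zip(counts, model))

# Hypothetical 5-pixel candidate-source region: a flat background of
# 0.8 counts/pixel, plus a candidate source of 4 counts in the middle pixel.
counts = [1, 0, 6, 1, 0]
bkg    = [0.8] * 5
src    = [0.8, 0.8, 0.8 + 4.0, 0.8, 0.8]

# Likelihood-ratio test statistic between the two hypotheses; large values
# favor the background-plus-source model.
ts = 2.0 * (poisson_loglike(counts, src) - poisson_loglike(counts, bkg))
```

In the real tool the source model is a PSF-convolved Gaussian fit simultaneously across the stacked observations; the statistic's role is the same.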
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell, Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package, built on the PyEvolve toolkit, that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-05-23
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.
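A scalar toy version of the fusion idea (not the actual EKF/UKF attitude filters, which are multi-dimensional and nonlinear) shows how an absolute measurement bounds the drift of a relative sensor. All sensor parameters below are assumed values:

```python
import random

def kf_fuse(gyro_rates, abs_meas, dt=1.0, q=0.01, r=4.0):
    """Scalar Kalman filter: predict heading (deg) with relative gyro rates,
    correct with noisy absolute heading measurements."""
    x, p = 0.0, 10.0                          # initial estimate and variance
    for rate, z in zip(gyro_rates, abs_meas):
        x, p = x + rate * dt, p + q           # predict with the relative sensor
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # update with the absolute sensor
    return x, p

random.seed(0)
true_heading = [i * 1.0 for i in range(1, 61)]                 # 1 deg/s for 60 s
gyro = [1.0 + random.gauss(0.05, 0.02) for _ in true_heading]  # biased rate gyro
zs = [h + random.gauss(0.0, 2.0) for h in true_heading]        # noisy absolute fix

est, var = kf_fuse(gyro, zs)     # fused estimate stays near 60 deg
dead_reckoned = sum(gyro)        # gyro-only integration accumulates the bias
```

The gyro-only estimate drifts by roughly the integrated bias (about 3 degrees here), while the absolute updates keep the fused estimate and its variance bounded, which is the qualitative result the paper reports for the full MEMS suite.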
Phantom Validation of Tc-99m Absolute Quantification in a SPECT/CT Commercial Device
Silvano Gnesin
2016-01-01
Aim. Similar to PET, absolute quantitative imaging is becoming available in commercial SPECT/CT devices. This study's goal was to assess the quantitative accuracy of activity recovery as a function of image reconstruction parameters and count statistics in a variety of phantoms. Materials and Methods. We performed quantitative Tc-99m SPECT/CT acquisitions (Siemens Symbia Intevo, Erlangen, Germany) of a uniform cylindrical phantom, a NEMA/IEC phantom, and an anthropomorphic abdominal phantom. Background activity concentrations tested ranged from 2 to 80 kBq/mL. SPECT acquisitions used 120 projections (20 s/projection). Reconstructions were performed with the proprietary iterative conjugate gradient algorithm. NEMA phantom reconstructions were obtained as a function of the iteration number (range: 4–48). Recovery coefficients, hot contrast, relative lung error (NEMA phantom), and image noise were assessed. Results. In all cases, absolute activity and activity concentration were measured within 10% of the expected value. Recovery coefficients and hot contrast in hot inserts did not vary appreciably with count statistics. Recovery coefficients converged at 16 iterations for insert sizes > 22 mm. Relative lung errors were comparable to PET levels, indicating the efficient integration of attenuation and scatter corrections with adequate detector modeling. Conclusions. The tested device provided accurate activity recovery within 10% of correct values; these performances are comparable to current-generation PET/CT systems.
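The recovery coefficient and NEMA-style hot contrast reported above are simple ratios; a sketch with hypothetical phantom values (the formulas are standard, the numbers are not from the study):

```python
def recovery_coefficient(measured_conc, true_conc):
    """RC = measured / true activity concentration in a region of interest."""
    return measured_conc / true_conc

def hot_contrast(c_hot, c_bkg, true_ratio):
    """NEMA-style percent contrast for a hot insert, given the known true
    hot-to-background activity-concentration ratio."""
    return 100.0 * (c_hot / c_bkg - 1.0) / (true_ratio - 1.0)

# Hypothetical values: 80 kBq/mL true in a hot sphere, 72 kBq/mL recovered;
# background 9.8 kBq/mL measured; true hot:background ratio 8:1.
rc = recovery_coefficient(72.0, 80.0)   # 0.9, i.e. within 10% of truth
qh = hot_contrast(72.0, 9.8, 8.0)       # percent contrast for the insert
```

An RC of 0.9 corresponds to the "within 10% of the expected value" accuracy level reported in the abstract.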
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
The naming game simulates the process by which a population of agents, organized in a communication network, agrees on a name for an object. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that (1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; (2) the maximum number of different words held by the population increases linearly as the error rate increases; and (3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network-science perspective.
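A minimal version of the naming game with a learning-error probability can be sketched as follows; the corruption model and parameters are illustrative assumptions, not the NGLE model's exact rules:

```python
import random

def naming_game(n=10, p_edge=1.0, err=0.0, max_rounds=50000, seed=42):
    """Minimal naming game on a random graph; with probability `err` the
    listener mis-learns the uttered word and stores a corrupted copy."""
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):                         # build the random graph
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].add(j)
                nbrs[j].add(i)
    inventory = [set() for _ in range(n)]
    fresh = 0
    for rounds in range(1, max_rounds + 1):
        speaker = rng.randrange(n)
        if not nbrs[speaker]:
            continue
        listener = rng.choice(sorted(nbrs[speaker]))
        if not inventory[speaker]:             # empty inventory: invent a word
            inventory[speaker].add("w%d" % fresh)
            fresh += 1
        word = rng.choice(sorted(inventory[speaker]))
        heard = word if rng.random() >= err else "x" + word   # learning error
        if heard in inventory[listener]:       # success: both collapse to it
            inventory[speaker] = {heard}
            inventory[listener] = {heard}
        else:                                  # failure: listener memorizes it
            inventory[listener].add(heard)
        if all(inv == inventory[0] and len(inv) == 1 for inv in inventory):
            return rounds                      # global consensus reached
    return None

rounds_clean = naming_game(err=0.0)   # error-free game reaches consensus
rounds_noisy = naming_game(err=0.1)   # with errors it may converge slowly or not at all
```

Tracking the total inventory size per round in this sketch reproduces the qualitative effect described above: errors inflate the number of distinct words agents must hold before consensus.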
Diagnostic errors in pediatric radiology
Taylor, George A.; Voss, Stephan D. [Children's Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)]
2011-03-15
Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean = 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)
Common errors in disease mapping
Ricardo Ocaña-Riola
2010-05-01
Many morbid-mortality atlases and small-area studies have been carried out over the last decade. However, the methods used to carry out such research, the interpretation of results and the conclusions published are often inaccurate. The proliferation of this practice has often led to inefficient decision-making, implementation of inappropriate health policies and a negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average quality with a score between 8 and 15 points, and low quality with a score of 7 or below. A systematic evaluation of scientific papers, together with an enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.
Absolute gravimetry - for monitoring climate change and geodynamics in Greenland
Nielsen, Jens Emil
... with the GPS data, it is possible to separate the different signals. The method used in this study is absolute gravimetry. An absolute gravimeter of the A10 type has been purchased by DTU Space for this purpose. This instrument can measure gravity changes as small as 6 µGal (= 60 nm/s²), which provides ... The time allocated for a PhD project is not sufficient to gather enough data for an elaborate analysis of the different signals which can be detected in Greenland. However, as will be presented in this thesis, the preliminary results indicate interesting possibilities for the use of absolute gravimetry ...
An All Fiber White Light Interferometric Absolute Temperature Measurement System
Jeonggon Harrison Kim
2008-11-01
Recently, the author of this article proposed a new signal processing algorithm for an all fiber white light interferometer. In this article, an all fiber white light interferometric absolute temperature measurement system is presented using the previously proposed signal processing algorithm. Stability and absolute temperature measurement tests were carried out; they demonstrated the feasibility of absolute temperature measurement with accuracies of 0.015 fringe and 0.0005 fringe, respectively. A hysteresis test from 373 K to 873 K is also presented. Finally, robustness of the sensor system towards laser diode temperature drift, AFMZI temperature drift and PZT non-linearity was demonstrated.
Areal measurement error with a dot planimeter: Some experimental estimates
Yuill, R. S.
1971-01-01
A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that measurement accuracy correlates almost entirely with the number of dots placed over the area to be measured; the indices of shape are of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
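The dot-grid idea can be simulated directly: overlay a regular grid at a random offset, count the dots inside the region, and multiply by the cell area. This sketch uses an assumed circular test shape (area π/16) and reproduces the qualitative finding that accuracy is governed by the number of dots:

```python
import random

def dot_planimeter_area(inside, spacing, x0, y0, width=1.0, height=1.0):
    """Count the dots of a regular grid (offset by x0, y0) falling inside
    the region given by the `inside(x, y)` predicate; each dot stands for
    one grid cell of area spacing**2."""
    count, x = 0, x0
    while x < width:
        y = y0
        while y < height:
            if inside(x, y):
                count += 1
            y += spacing
        x += spacing
    return count * spacing * spacing

def circle(x, y):  # disc of radius 0.25 centered in the unit square
    return (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25 ** 2

rng = random.Random(7)
spacing = 0.02                      # ~2500 dots over the unit square
estimates = [dot_planimeter_area(circle, spacing,
                                 rng.random() * spacing, rng.random() * spacing)
             for _ in range(50)]
mean_est = sum(estimates) / 50                 # close to pi/16 ~= 0.19635
spread = max(estimates) - min(estimates)       # range of error across placements
```

Repeating the experiment with a coarser spacing widens `spread` roughly in proportion to the drop in dot count, mirroring the paper's average-error and maximum-range-of-error curves.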
Transient Error Data Analysis.
1979-05-01
[OCR fragment of a scanned report; only the table of contents and sample statistics are recoverable: 3.2 Graphical Data Analysis; 3.3 General Statistics and Confidence Intervals; 3.4 Goodness of Fit Test; 4. Conclusions; Acknowledgements. A table of MTTF per system lists the CMUA PDP-10 (ECL technology, parity mechanism, 44 hrs) and the Cm* LSI-11 (NMOS, diagnostics). An error-log summary reports 6 bad-time errors among 18,445 total entries for all input files, spanning 1,542 hours beginning 17-Feb-79.]
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
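The MEE idea rests on a Parzen-window estimate of the entropy of the errors. A minimal sketch of Renyi's quadratic error entropy, a common choice in the MEE literature, with an assumed kernel width:

```python
import math

def information_potential(errors, sigma=0.5):
    """Parzen estimate of the quadratic information potential V(e): the mean
    Gaussian kernel (variance 2*sigma^2) over all error pairs. Renyi's
    quadratic entropy is H2 = -log V, so minimizing the error entropy is
    the same as maximizing V."""
    n = len(errors)
    two_s2 = 2.0 * sigma * sigma
    norm = 1.0 / math.sqrt(2.0 * math.pi * two_s2)
    return sum(norm * math.exp(-(a - b) ** 2 / (2.0 * two_s2))
               for a in errors for b in errors) / (n * n)

def error_entropy(errors, sigma=0.5):
    return -math.log(information_potential(errors, sigma))

tight = [0.01, -0.02, 0.00, 0.02, -0.01]   # concentrated errors: low entropy
loose = [1.5, -1.2, 0.1, 2.0, -1.8]        # spread-out errors: high entropy
```

A classifier trained under MEE adjusts its weights to shrink `error_entropy` of the training errors, concentrating them near zero rather than merely minimizing their mean square.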
Results of the first North American comparison of absolute gravimeters, NACAG-2010
Schmerge, David; Francis, Olivier; Henton, J.; Ingles, D.; Jones, D.; Kennedy, Jeffrey R.; Krauterbluth, K.; Liard, J.; Newell, D.; Sands, R.; Schiel, J.; Silliker, J.; van Westrum, D.
2012-01-01
The first North American Comparison of absolute gravimeters (NACAG-2010) was hosted by the National Oceanic and Atmospheric Administration at its newly renovated Table Mountain Geophysical Observatory (TMGO) north of Boulder, Colorado, in October 2010. NACAG-2010 and the renovation of TMGO are part of the National Geodetic Survey's GRAV-D project (Gravity for the Redefinition of the American Vertical Datum). Nine absolute gravimeters from three countries participated in the comparison. Before the comparison, the gravimeter operators agreed to a protocol describing the strategy to measure, calculate, and present the results. Nine sites were used to measure the free-fall acceleration g. Each gravimeter measured the value of g at a subset of three of the sites, for a total of 27 g-values for the comparison. The absolute gravimeters agree with one another with a standard deviation of 1.6 µGal (1 Gal = 1 cm/s²). The minimum and maximum offsets are -2.8 and 2.7 µGal. This is excellent agreement and can be attributed to multiple factors, including gravimeters in good working order, experienced operators, a quiet observatory, and the short duration of the experiment. These results can be used to standardize gravity surveys internationally.
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak-field regime and using a maximum likelihood approach. The errors are recovered by means of the Hessian matrix. The biases of the estimators are analysed in depth.
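In the weak-field regime the maximum-likelihood estimate of the line-of-sight field reduces to a closed-form least-squares expression. This sketch uses the standard weak-field relation with a synthetic Gaussian line; the paper's full vector and LSD formulae are more general:

```python
import math
import random

ALPHA = 4.6686e-13  # weak-field constant (wavelength in Angstrom, B in Gauss)

def weak_field_blos(stokes_v, didlam, lam0, geff, sigma_noise):
    """Maximum-likelihood estimate of the line-of-sight field under the
    weak-field relation V = -C * B_los * dI/dlambda, with
    C = ALPHA * lam0**2 * geff, assuming Gaussian noise.
    Returns the estimate and its formal (Hessian-based) error."""
    c = ALPHA * lam0 ** 2 * geff
    s = sum(d * d for d in didlam)
    b_est = -sum(v * d for v, d in zip(stokes_v, didlam)) / (c * s)
    b_err = sigma_noise / (c * math.sqrt(s))
    return b_est, b_err

# Synthetic check: Gaussian absorption line, injected B_los = 200 G.
rng = random.Random(3)
lam0, geff, w, noise = 6301.5, 1.67, 0.1, 1e-3
lams = [lam0 + (i - 50) * 0.005 for i in range(101)]
didlam = [1.2 * (l - lam0) / w ** 2 * math.exp(-((l - lam0) / w) ** 2)
          for l in lams]                       # dI/dlambda of the Gaussian line
c_true = ALPHA * lam0 ** 2 * geff
v_obs = [-c_true * 200.0 * d + rng.gauss(0.0, noise) for d in didlam]

b_est, b_err = weak_field_blos(v_obs, didlam, lam0, geff, noise)
```

With these assumed line parameters the formal error is on the order of 1 G, and the estimator recovers the injected 200 G field to within a few error bars.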
Rackwitz, Jenny; Ranković, Miloš Lj.; Milosavljević, Aleksandar R.; Bald, Ilko
2017-02-01
Low-energy electrons (LEEs) play an important role in DNA radiation damage. Here we present a method to quantify LEE-induced strand breakage in well-defined oligonucleotide single strands in terms of absolute cross sections. An LEE irradiation setup covering a range of electron energies is used; measurements are presented at 10.0 eV and 5.5 eV for different oligonucleotide targets. The determination of absolute strand break cross sections is performed by atomic force microscopy analysis. An accurate fluence determination ensures small margins of error for the determined absolute single strand break cross sections σ_SSB. In this way, the influence of sequence modification with the radiosensitizer 5-fluorouracil (5FU) is studied using an absolute and relative data analysis. We demonstrate an increase in the strand break yields of 5FU-containing oligonucleotides by a factor of 1.5 to 1.6 compared with non-modified oligonucleotide sequences when irradiated with 10 eV electrons.
Trilisky, Igor; Ward, Emily; Dachman, Abraham H
2015-10-01
CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.
Producing Absolute Truth: CSI Science as Wishful Thinking
Kruse, Corinna
2010-01-01
...). I argue that CSI science, in delivering an absolute “truth” about how and by whom crimes have been committed, is equated with justice, effectively superseding nonfictional forensic science as well as nonfictional judicature as a whole...
Changes in Absolute Sea Level Along U.S. Coasts
U.S. Environmental Protection Agency — This map shows changes in absolute sea level from 1960 to 2016 based on satellite measurements. Data were adjusted by applying an inverted barometer (air pressure)...
Absolute position total internal reflection microscopy with an optical tweezer
Lulu Liu; Alexander Woolf; Alejandro W. Rodriguez; Federico Capasso
2014-01-01
... We show that by making only simple modifications to the basic TIRM sensing setup and procedure, a probe particle's absolute position relative to a dielectric interface may be known with better than...
Monochromator-Based Absolute Calibration of Radiation Thermometers
Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Hartmann, J.
2011-08-01
A monochromator integrating-sphere-based spectral comparator facility has been developed to calibrate standard radiation thermometers in terms of absolute spectral radiance responsivity, traceable to the PTB cryogenic radiometer. The absolute responsivity calibration has been improved using a 75 W xenon lamp with a reflective mirror and imaging optics, to a relative standard uncertainty at the peak wavelength of approximately 0.17% (k = 1). Via a relative measurement of the out-of-band responsivity, the spectral responsivity of radiation thermometers can be fully characterized. To verify the calibration accuracy, the absolutely calibrated radiation thermometer is used to measure Au and Cu freezing-point temperatures, and the obtained results are compared with values obtained by absolute methods, yielding T − T90 values of +52 mK and −50 mK for the gold and copper fixed points, respectively.
Preparation of an oakmoss absolute with reduced allergenic potential.
Ehret, C; Maupetit, P; Petrzilka, M; Klecak, G
1992-06-01
Oakmoss absolute, an extract of the lichen Evernia prunastri, is known to cause allergenic skin reactions due to the presence of certain aromatic aldehydes such as atranorin, chloratranorin, ethyl hematommate and ethyl chlorohematommate. In this paper it is shown that treatment of Oakmoss absolute with amino acids such as lysine and/or leucine considerably lowers the content of these allergenic constituents, including atranol and chloratranol. The resulting Oakmoss absolute, which exhibits an excellent olfactive quality, was tested extensively in comparative studies on guinea pigs and on man. The results of the Guinea Pig Maximization Test (GPMT) and Human Repeated Insult Patch Test (HRIPT) indicate that, in comparison with the commercial test sample, the allergenicity of this new quality of Oakmoss absolute was considerably reduced, and consequently better skin tolerance of this fragrance for man was achieved.
On the absolute calibration of SO2 cameras
J. Zielcke
2012-09-01
Sulphur dioxide emission flux measurements are an important tool for volcanic monitoring and eruption risk assessment. The SO2 camera technique remotely measures volcanic emissions by analysing the ultraviolet absorption of SO2 in a narrow spectral window between 305 nm and 320 nm using solar radiation scattered in the atmosphere. The SO2 absorption is selectively detected by mounting band-pass interference filters in front of a two-dimensional, UV-sensitive CCD detector. While this approach is simple and delivers valuable insights into the two-dimensional SO2 distribution, absolute calibration has proven to be difficult. An accurate calibration of the SO2 camera (i.e., conversion from optical density to SO2 column density, CD) is crucial to obtain correct SO2 CDs and flux measurements that are comparable to other measurement techniques and can be used for volcanological applications. The most common approach for calibrating SO2 camera measurements is based on inserting quartz cells (cuvettes) containing known amounts of SO2 into the light path. It has been found, however, that reflections from the windows of the calibration cell can considerably affect the signal measured by the camera. Another possibility for calibration relies on performing simultaneous measurements in a small area of the camera's field-of-view (FOV) with a narrow-field-of-view Differential Optical Absorption Spectroscopy (NFOV-DOAS) system. This procedure combines the very good spatial and temporal resolution of the SO2 camera technique with the more accurate column densities obtainable from DOAS measurements. This work investigates the uncertainty of results gained through the two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, a NFOV-DOAS system and an Imaging DOAS (IDOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system.
Error Analysis in Mathematics Education.
Rittner, Max
1982-01-01
The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)
Payment Error Rate Measurement (PERM)
U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...
PV Maximum Power-Point Tracking by Using Artificial Neural Network
Farzad Sedaghati
2012-01-01
In this paper, the use of an artificial neural network (ANN) for tracking of the maximum power point is discussed. The error back-propagation method is used to train the neural network. The neural network has the advantage of fast and precise tracking of the maximum power point. In this method the neural network is used to specify the reference voltage of the maximum power point under different atmospheric conditions. By properly controlling a dc-dc boost converter, tracking of the maximum power point is feasible. To verify the theoretical analysis, simulation results are obtained using MATLAB/SIMULINK.
Global trends in relative and absolute wealth concentrations
2014-01-01
This paper compares changes in relative and absolute wealth concentrations to establish whether both processes have followed similar trajectories. The findings indicate that while the level of relative wealth concentration has increased recently, it is not extraordinarily high in an historical perspective. On the contrary, the level of absolute wealth concentration is most likely higher than has previously occurred, because of the increase in the wealth holdings and population size of high net wor...
Spectra of absolute instruments from the WKB approximation
Tyc, Tomas
2013-01-01
We calculate frequency spectra of absolute optical instruments using the WKB approximation. The resulting eigenfrequencies approximate the actual values very accurately; in some cases they even give the exact values. Our calculations confirm results obtained previously by a completely different method. In particular, the eigenfrequencies of absolute instruments form tight groups that are almost equidistantly spaced. We demonstrate our method and its results on several examples.
Absolute Free Energies for Biomolecules in Implicit or Explicit Solvent
Berryman, Joshua T.; Schilling, Tanja
Methods for absolute free energy calculation by alchemical transformation of a quantitative model to an analytically tractable one are discussed. These absolute free energy methods are placed in the context of other methods, and an attempt is made to describe the best practice for such calculations given the current state of the art. Calculations of the equilibria between the four free energy basins of the dialanine molecule and the two right- and left-twisted basins of DNA are discussed as examples.
Bayesian and maximum likelihood estimation of genetic maps
York, Thomas L.; Durrett, Richard T.; Tanksley, Steven
2005-01-01
There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods...... that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...
Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion
Zongze Wu
2015-10-01
The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply MCC to develop a robust Hammerstein adaptive filter. Compared with traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance, especially in the presence of impulsive non-Gaussian (e.g., α-stable) noise. Additionally, some theoretical results concerning the convergence behavior are also obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
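The robustness mechanism can be illustrated with a minimal sketch: a plain FIR adaptive filter (not the paper's Hammerstein structure, which adds a memoryless nonlinearity in front of the linear block) whose LMS-style update is weighted by the Gaussian correntropy kernel. The signal model and all parameter values below are invented for the demonstration:

```python
import numpy as np

def mcc_lms(x, d, n_taps=4, mu=0.05, sigma=1.0):
    """Adaptive FIR filter trained under the maximum correntropy criterion (MCC).
    The Gaussian kernel exp(-e^2 / (2 sigma^2)) shrinks the step for large
    errors, which is what makes MCC robust to impulsive outliers."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent inputs [x[n], x[n-1], ...]
        e = d[n] - w @ u                    # instantaneous error
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, w_true)[:len(x)]
outliers = rng.choice(len(d), size=50, replace=False)
d[outliers] += 50 * rng.standard_normal(50)  # sparse impulsive noise

w = mcc_lms(x, d)
print(np.round(w, 2))
```

For outlier samples the kernel factor is essentially zero, so the update is skipped; an MSE-based LMS with the same step size would be dragged far from the true weights by the same impulses.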
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
Maximum Power Point Tracking Based on Sliding Mode Control
Nimrod Vázquez
2015-01-01
Solar panels have become a good choice for generating and supplying electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power; for this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering only the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature of the PV system are considered as part of a sliding surface for the proposed maximum power point tracking; that is, a sliding mode controller is applied. The obtained results show a good dynamic response, in contrast to traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to assure a low steady-state error.
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
We use statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by European Centre for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop probabilistic forecasts, which result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1–2 m s−1. We show the results of their predictive accuracy for different lead times and different training methodologies.
Error bounds for set inclusions
ZHENG; Xiyin(郑喜印)
2003-01-01
A variant of the Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved, and in particular a positive answer to Li and Singer's conjecture is given under a weaker assumption than that required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.
Uncertainty quantification and error analysis
Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Feature Referenced Error Correction Apparatus.
A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Errors and Correction of Precipitation Measurements in China
REN Zhihua; LI Mingqin
2007-01-01
In order to discover the range of various errors in Chinese precipitation measurements and to seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distributions of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between observations from the operational gauge and the pit gauge, with a correlation coefficient of 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
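A power-function relation of the kind reported above is conventionally estimated by linear least squares in log-log space. The sketch below illustrates this on synthetic data; the coefficients, sample size and noise level are invented and are not the station data:

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear least squares on (log x, log y)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 20.0, 200)                   # horizontal-gauge catch (synthetic)
y = 0.8 * x**1.3 * rng.lognormal(0.0, 0.05, 200)  # synthetic gauge difference

a, b = fit_power_law(x, y)
# correlation of the log-transformed variables, analogous to the 0.99 quoted above
r = np.corrcoef(np.log(x), np.log(y))[0, 1]
print(round(a, 2), round(b, 2), round(r, 3))
```

The multiplicative (lognormal) noise model is what makes the log-log fit appropriate; with additive noise a nonlinear least-squares fit would be preferable.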
Periodic sequences with stable $k$-error linear complexity
Zhou, Jianqin
2011-01-01
The linear complexity of a sequence has been used as an important measure of keystream strength, hence designing sequences that possess high linear complexity and high $k$-error linear complexity is a hot topic in cryptography and communication. Niederreiter first noted that many periodic sequences with high $k$-error linear complexity exist over GF(q). In this paper, the concept of stable $k$-error linear complexity is presented to study sequences with high $k$-error linear complexity. By studying the linear complexity of binary sequences with period $2^n$, a method using cube theory to construct sequences with maximum stable $k$-error linear complexity is presented. It is proved that a binary sequence with period $2^n$ can be decomposed into some disjoint cubes; cube theory is thus a new tool to study $k$-error linear complexity. Finally, it is proved that the maximum $k$-error linear complexity is $2^n-(2^l-1)$ over all $2^n$-periodic binary sequences, where $2^{l-1}\le k<2^{l}$.
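The abstract gives no algorithms, but linear complexity is conventionally computed with the Berlekamp–Massey algorithm, and the $k$-error variant can be illustrated by brute force over error patterns for short sequences. A sketch under those assumptions (the example sequence is an arbitrary degree-4 LFSR stream, not one of the paper's constructions):

```python
from itertools import combinations

def linear_complexity(s):
    """Berlekamp-Massey over GF(2): length L of the shortest LFSR generating s."""
    n = len(s)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]          # discrepancy with the current LFSR
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[j + i - m] ^= b[j]      # C(x) <- C(x) + x^(i-m) * B(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def k_error_linear_complexity(s, k):
    """Minimum linear complexity over all changes of at most k bits.
    Brute force, so only feasible for short sequences."""
    best = linear_complexity(s)
    for w in range(1, k + 1):
        for pos in combinations(range(len(s)), w):
            t = s[:]
            for p in pos:
                t[p] ^= 1
            best = min(best, linear_complexity(t))
    return best

# 16 bits of the LFSR with connection polynomial 1 + x + x^4 (complexity 4)
seq = [1, 0, 0, 0]
for i in range(4, 16):
    seq.append(seq[i - 1] ^ seq[i - 4])

corrupted = seq[:]
corrupted[7] ^= 1  # a single bit error typically raises the complexity
print(linear_complexity(seq), linear_complexity(corrupted),
      k_error_linear_complexity(corrupted, 1))
```

Flipping the bit back is one of the candidate error patterns, so the 1-error linear complexity of the corrupted stream is at most 4 again, which is exactly the "stability" notion the paper studies.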
Firewall Configuration Errors Revisited
Wool, Avishai
2009-01-01
The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general, that survey indicated that corporate firewalls were often enforcing poorly written rule-sets containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger and, for the first time, includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).
1984-01-01
The atmospheric backscatter coefficient, beta, measured with an airborne CO2 Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces, as intermediate results, the aerosol density and the aerosol backscatter cross-section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of the two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.
Catalytic quantum error correction
Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-01-01
We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes, which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.
Kappatou, A.; Jaspers, R. J. E.; Delabie, E.; Marchuk, O.; Biel, W.; Jakobs, M. A.
2012-10-01
Investigation of impurity transport properties in tokamak plasmas is essential and a diagnostic that can provide information on the impurity content is required. Combining charge exchange recombination spectroscopy (CXRS) and beam emission spectroscopy (BES), absolute radial profiles of impurity densities can be obtained from the CXRS and BES intensities, electron density and CXRS and BES emission rates, without requiring any absolute calibration of the spectra. The technique is demonstrated here with absolute impurity density radial profiles obtained in TEXTOR plasmas, using a high efficiency charge exchange spectrometer with high etendue, that measures the CXRS and BES spectra along the same lines-of-sight, offering an additional advantage for the determination of absolute impurity densities.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum-energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function, to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
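The contrast between maximum-likelihood (ground-state) decoding and finite-temperature maximum-entropy decoding can be sketched exhaustively for a toy Ising model in a field. The couplings and fields below are invented for illustration, and a real annealer samples from the Boltzmann distribution rather than enumerating it:

```python
import itertools
import math

def ising_energy(spins, J, h):
    """Energy of a configuration given couplings J[(i, j)] and local fields h[i]."""
    e = -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e - sum(hi * s for hi, s in zip(h, spins))

def ground_state(J, h):
    """Maximum-likelihood analogue: the single lowest-energy configuration."""
    states = itertools.product([-1, 1], repeat=len(h))
    return list(min(states, key=lambda s: ising_energy(s, J, h)))

def maxent_decode(J, h, T):
    """Finite-temperature decoding: each bit is the sign of its Boltzmann-averaged
    magnetisation over all states, not just the ground state."""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))
    w = [math.exp(-ising_energy(s, J, h) / T) for s in states]
    z = sum(w)
    mags = [sum(wi * s[i] for wi, s in zip(w, states)) / z for i in range(n)]
    return [1 if m >= 0 else -1 for m in mags]

# A toy 4-spin ferromagnetic chain in a field (hypothetical numbers)
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
h = [0.5, -0.2, 0.1, 0.3]
print(ground_state(J, h), maxent_decode(J, h, 0.5))
```

As T → 0 the Boltzmann average is dominated by the ground state and the two decoders agree; at finite T the excited states shift the per-bit marginals, which is the effect the paper exploits.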
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power, found by differentiating the equation for power and setting the derivative to zero. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power itself are each plotted as a function of the time of day.
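The differentiation step can be sketched numerically with a textbook single-diode panel model; all panel parameters below are assumed for illustration and are not from the article:

```python
import math

# Hypothetical single-diode panel parameters (assumed, not from the article):
I_PH = 5.0               # photocurrent (A)
I_0 = 1e-9               # diode saturation current (A)
N_VT = 1.3 * 0.025 * 36  # ideality factor x thermal voltage x cells in series (V)

def current(v):
    """Single-diode model: I(V) = Iph - I0 * (exp(V / (n*Vt)) - 1)."""
    return I_PH - I_0 * math.expm1(v / N_VT)

def power(v):
    return v * current(v)

def v_mpp(lo=0.0, hi=30.0, tol=1e-6):
    """Find the maximum-power voltage by bisection on dP/dV:
    P(V) is concave for this model, so the derivative changes sign once."""
    def dpdv(v, eps=1e-6):
        return (power(v + eps) - power(v - eps)) / (2 * eps)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dpdv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v = v_mpp()
print(round(v, 2), round(current(v), 2), round(power(v), 1))
```

Repeating this for the irradiance and temperature at each time of day (which shift I_PH and N_VT) reproduces the article's plots of maximum-power voltage, current, and power over the day.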
Experimental repetitive quantum error correction.
Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer
2011-05-27
The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Controlling errors in unidosis carts
Inmaculada Díaz Fernández
2010-01-01
Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service tracked medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts show 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors arise when setting up the unidosis carts; the rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: There is a need to revise unidosis carts and to use a computerized prescription system to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are revised before being sent to hospitalization units, the error rate diminishes to 0.3%.
Prediction of discretization error using the error transport equation
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren
2016-11-01
The performance of digital image correlation (DIC) is significantly influenced by the quality of the speckle patterns. Thus, it is crucial to have a valid and practical method to assess speckle-pattern quality. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, we propose to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squares of the systematic error and the random error. Two performance evaluation parameters, the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and its correctness is verified by numerical experiments on both one-dimensional signals and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid because it considers both measurement accuracy and precision.
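The decomposition used above (total error as the quadrature sum of systematic and random parts) can be sketched directly; the bias and noise levels below are synthetic, not DIC measurements:

```python
import numpy as np

def error_decomposition(measured, truth):
    """Split error into systematic (mean) and random (std) parts; their
    quadrature sum equals the RMSE."""
    e = np.asarray(measured) - np.asarray(truth)
    systematic = e.mean()
    random_part = e.std()            # population std (ddof=0) keeps the identity exact
    rmse = np.sqrt(np.mean(e ** 2))
    return systematic, random_part, rmse

rng = np.random.default_rng(2)
truth = np.zeros(10_000)
measured = truth + 0.02 + rng.normal(0.0, 0.05, truth.size)  # bias 0.02, noise 0.05

bias, noise, rmse = error_decomposition(measured, truth)
print(round(bias, 3), round(noise, 3), round(rmse, 3))
```

The identity RMSE² = bias² + variance holds exactly with the population standard deviation, which is why the paper can characterize the total error through the RMSE alone.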
Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement
Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui
2017-01-01
Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.
Study on error analysis and accuracy improvement for aspheric profile measurement
Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou
2017-06-01
Aspheric surfaces are important in optical systems and require high-precision surface metrology. Stylus profilometry is currently the most common approach for measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, yielding significantly incorrect surface errors. This paper studies the simulated results for an asphere with rotational angles around the X-axis and Y-axis, and the stylus tip shift in the X, Y and Z directions. The results show that rotational errors of the same absolute value around the X-axis cause the same profile errors, whereas different rotational errors around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the larger the peak-to-valley value of the profile errors. To identify the rotational angles about the X-axis and Y-axis, algorithms are developed to analyze the two angles separately. The actual profile errors are then calculated from multiple profile measurements around the X-axis according to the proposed analysis flow chart; the aim of the multiple-measurement strategy is to reach the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms yield accurate profile errors for aspheric surfaces while avoiding both X-axis and Y-axis rotational errors. Finally, a systematic measurement strategy for aspheric surfaces is presented.
Prioritising interventions against medication errors
Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard
2011-01-01
Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication...... errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication...... errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary......
Sex-Specific Equations to Estimate Maximum Oxygen Uptake in Cycle Ergometry
de Souza e Silva, Christina G.; Araújo, Claudio Gil S.
2015-01-01
Abstract Background: Aerobic fitness, assessed by measuring VO2max in maximum cardiopulmonary exercise testing (CPX) or by estimating VO2max through the use of equations in exercise testing, is a predictor of mortality. However, the error resulting from this estimate in a given individual can be high, affecting clinical decisions. Objective: To determine the error of estimate of VO2max in cycle ergometry in a population attending clinical exercise testing laboratories, and to propose sex-spec...
Gou, Zhi-yang; Yan, Lei; Chen, Wei; Jing, Xin; Yin, Zhong-yi; Duan, Yi-ni
2012-02-01
Using data acquired at Urad Front Banner, Inner Mongolia on November 14th, 2010, a hyper-spectral camera on a UAV was calibrated with the reflectance-based method. During the in-flight absolute radiometric calibration, 6 hyper-spectral radiometric gray-scale targets were arranged in the validation field, with reflectances of 4.5%, 20%, 30%, 40%, 50% and 60%, respectively. To validate the calibration result, four extra hyper-spectral targets with sharp-edged spectra were arranged to simulate the reflection and absorption peaks of natural objects. At these peaks, the apparent radiance calculated by the radiative transfer model differs considerably from that calculated with the calibration coefficients. The results show that in the first 15 bands (blue bands), the errors are large due to equipment noise. In the remaining bands with relatively flat spectra, the errors are small, mostly below 10%. For bands with sharp changes in the spectral curves, the errors are considerable, varying from 10% to 25%.
Pravec, Petr; Harris, Alan W.; Kušnirák, Peter; Galád, Adrián; Hornoch, Kamil
2012-09-01
We obtained estimates of the Johnson V absolute magnitudes (H) and slope parameters (G) for 583 main-belt and near-Earth asteroids observed at Ondřejov and Table Mountain Observatory from 1978 to 2011. Uncertainties of the absolute magnitudes in our sample are estimates reported by asteroid surveys. With our photometric H and G data, we revised the preliminary WISE albedo estimates made by Masiero et al. (Masiero, J.R. et al. [2011]. Astrophys. J. 741, 68-89) and Mainzer et al. (Mainzer, A. et al. [2011b]. Astrophys. J. 743, 156-172) for asteroids in our sample. We found that the mean geometric albedo of Tholen/Bus/DeMeo C/G/B/F/P/D types with sizes of 25-300 km is pV = 0.057 with a sample standard deviation (dispersion) of 0.013, and the mean albedo of S/A/L types with sizes of 0.6-200 km is 0.197 with a sample standard deviation of 0.051. The standard errors of the mean albedos are 0.002 and 0.006, respectively; systematic observational or modeling errors may predominate over the quoted formal errors. Only a small, marginally significant difference of 0.031 ± 0.011 is apparent between the mean albedos of the sub-samples of large and small (divided at a diameter of 25 km) S/A/L asteroids, with the smaller ones having a higher albedo. The difference will have to be confirmed and explained; we speculate that it may be either a real size dependence of the surface properties of S type asteroids or a small size-dependent bias in the data (e.g., a bias towards higher albedos in the optically selected sample of asteroids). A trend of the mean of the preliminary WISE albedo estimates increasing with decreasing asteroid size from D ∼ 30 km down to ∼5 km (for S types), shown in Mainzer et al. (Mainzer, A. et al. [2011a]. Astrophys. J. 741, 90-114), appears to be mainly due to a systematic bias in the MPCORB absolute magnitudes that progressively increases with H in the corresponding range H = 10-14.
Moderate Deviations for M-estimators in Linear Models with φ-mixing Errors
Jun FAN
2012-01-01
In this paper, the moderate deviations for the M-estimators of the regression parameter in a linear model are obtained when the errors form a strictly stationary φ-mixing sequence. The results are applied to many different types of M-estimators, such as Huber's estimator, the Lp-regression estimator, the least squares estimator and the least absolute deviation estimator.
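For context, Huber's estimator named above minimizes the sum of a loss that is quadratic for small residuals and linear for large ones; a minimal sketch (the tuning constant k = 1.345 is a conventional choice, not taken from the paper):

```python
import numpy as np

def huber_loss(r, k=1.345):
    # Huber's rho function: 0.5*r^2 for |r| <= k, else k*|r| - 0.5*k^2.
    # M-estimators of regression minimize sum(huber_loss(residuals)).
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= k,
                    0.5 * r ** 2,
                    k * np.abs(r) - 0.5 * k ** 2)
```

Setting k to infinity recovers least squares, while a very small k approaches least absolute deviation, the two other estimators the abstract mentions.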
ADJUSTMENT ERRORS IN ASCENDING AND DESCENDING PHASES OF TARGET LEVEL IN CONTROLLED FORCE EXERTION.
Kubota, Hiroshi; Demura, Shinichi
2015-10-01
Hand grip force adjustment errors to ascending and descending phases of a sinusoidal target force in a controlled force exertion (CFE) test were measured and the laterality of responses evaluated. 75 men (M age = 19.6 yr., SD = 1.6) performed the CFE test after one practice trial by matching handgrip force to target level (5-25% of maximal grip force). The CFE errors in ascending and descending phases of the target force were calculated as the absolute differences between actual force and target force in each phase. There were significantly smaller CFE errors in the ascending phase for both dominant and non-dominant hands, but CFE error for the dominant hand was significantly smaller in both phases. Therefore, error in force exertion in the ascending and descending phases of the target force differed, and laterality influenced error in both phases.
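The phase-wise error computation described in this abstract (absolute difference between actual and target force, split by the sign of the target's slope) can be sketched as follows; the sinusoidal example data are hypothetical:

```python
import numpy as np

def cfe_phase_errors(actual, target):
    # Mean absolute adjustment error in the ascending and descending
    # phases, with the phase inferred from the target's local slope.
    actual = np.asarray(actual, dtype=float)
    target = np.asarray(target, dtype=float)
    slope = np.gradient(target)
    err = np.abs(actual - target)
    return float(err[slope > 0].mean()), float(err[slope < 0].mean())

# hypothetical trial: a sinusoidal target force with a constant overshoot
t = np.linspace(0.0, 2.0 * np.pi, 200)
target = np.sin(t)
actual = target + 0.1
asc_err, desc_err = cfe_phase_errors(actual, target)
```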
Improved Error Thresholds for Measurement-Free Error Correction
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
Cylindricity Error Measuring and Evaluating for Engine Cylinder Bore in Manufacturing Procedure
Qiang Chen
2016-01-01
An on-line measuring device for cylindricity error is designed based on the two-point error separation technique (EST), which can separate the spindle rotation error from the measuring error. According to the principle of the measuring device, a mathematical model of the minimum zone method for cylindricity error evaluation is established. The number of optimized parameters of the objective function decreases from six to four by assuming that c equals zero and h equals one. Initial values of the optimized parameters are obtained from the least squares method and final values are acquired by a genetic algorithm. The ideal axis of the cylinder is fitted in MATLAB. Compared to the error results of the least squares method, the minimum circumscribed cylinder method, and the maximum inscribed cylinder method, the error result of the minimum zone method conforms to the theory of error evaluation. The results indicate that the method can meet the requirements of measuring and evaluating the cylindricity error of engine cylinder bores.
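A much-simplified sketch of the evaluation step: once a reference axis has been fitted (least squares seed, genetic-algorithm refinement in the paper), the cylindricity error for points expressed in that axis frame is the spread of radial distances. The axis optimization itself is omitted here, and the two-ring test data are synthetic:

```python
import numpy as np

def cylindricity_fixed_axis(points):
    # Cylindricity error for points already aligned to the z-axis:
    # max minus min radial distance. A full minimum-zone evaluation
    # would additionally optimize the axis position and direction.
    r = np.hypot(points[:, 0], points[:, 1])
    return float(r.max() - r.min())

# synthetic data: two rings of radius 1.00 and 1.02 around the z-axis
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
ring1 = np.column_stack([np.cos(theta), np.sin(theta),
                         np.zeros_like(theta)])
ring2 = np.column_stack([1.02 * np.cos(theta), 1.02 * np.sin(theta),
                         np.ones_like(theta)])
error = cylindricity_fixed_axis(np.vstack([ring1, ring2]))  # ≈ 0.02
```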
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle is known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activity from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (β_a) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on β_a is found to be within the estimate at the 90% level of confidence, and the relative prediction error is less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
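The prediction scheme amounts to regressing Rmax on the rising rate and extrapolating; a minimal sketch with made-up data (the paper additionally derives 90% confidence bounds on the prediction, which are not reproduced here):

```python
import numpy as np

def fit_and_predict(rising_rate, solar_max, new_rate):
    # Least-squares linear fit Rmax = a + b * beta_a, then a point
    # prediction of Rmax for a new cycle's observed rising rate.
    b, a = np.polyfit(rising_rate, solar_max, 1)
    return float(a + b * new_rate)

# hypothetical historical (rising rate, Rmax) pairs
pred = fit_and_predict([1.0, 2.0, 3.0], [10.0, 20.0, 30.0], 4.0)  # ≈ 40.0
```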
Absolute neutrophil values in malignant patients on cytotoxic chemotherapy.
Madu, A J; Ibegbulam, O G; Ocheni, S; Madu, K A; Aguwa, E N
2011-01-01
A total of eighty patients with various malignancies seen between September 2008 and April 2009 at the University of Nigeria Teaching Hospital (UNTH) Ituku Ozalla, Enugu, Nigeria, had their absolute neutrophil counts done on Days 0 and 12 of the first cycle of their various chemotherapeutic regimens. They were adult patients diagnosed with various malignancies: breast cancer 36 (45%), non-Hodgkin's lymphoma 8 (10%), Hodgkin's lymphoma 13 (16.25%), colorectal carcinoma 6 (7.5%), multiple myeloma 7 (8.75%), cervical carcinoma 1 (1.25%) and other malignancies 9 (11.25%). Manual counting of the absolute neutrophil count was done using Turk's solution, an improved Neubauer counting chamber and a Galen 2000 Olympus microscope. The socio-demographic data of the patients were assessed with a questionnaire. There were 27 males (33.75%) and 53 females (66.25%). Their ages ranged from 18 to 80 years with a median of 45 years. The mean absolute neutrophil counts of the respondents pre- and post-chemotherapy were 3.7 ± 2.1 × 10^9/L and 2.5 ± 1.6 × 10^9/L, respectively. The post-chemotherapy absolute neutrophil count differed significantly from the pre-chemotherapy values (p = 0.00). Chemotherapeutic combinations containing cyclophosphamide and Adriamycin were observed to cause a significant reduction in the absolute neutrophil count.
Absolute quantification of somatic DNA alterations in human cancer.
Carter, Scott L; Cibulskis, Kristian; Helman, Elena; McKenna, Aaron; Shen, Hui; Zack, Travis; Laird, Peter W; Onofrio, Robert C; Winckler, Wendy; Weir, Barbara A; Beroukhim, Rameen; Pellman, David; Levine, Douglas A; Lander, Eric S; Meyerson, Matthew; Getz, Gad
2012-05-01
We describe a computational method that infers tumor purity and malignant cell ploidy directly from analysis of somatic DNA alterations. The method, named ABSOLUTE, can detect subclonal heterogeneity and somatic homozygosity, and it can calculate statistical sensitivity for detection of specific aberrations. We used ABSOLUTE to analyze exome sequencing data from 214 ovarian carcinoma tumor-normal pairs. This analysis identified both pervasive subclonal somatic point-mutations and a small subset of predominantly clonal and homozygous mutations, which were overrepresented in the tumor suppressor genes TP53 and NF1 and in a candidate tumor suppressor gene CDK12. We also used ABSOLUTE to infer absolute allelic copy-number profiles from 3,155 diverse cancer specimens, revealing that genome-doubling events are common in human cancer, likely occur in cells that are already aneuploid, and influence pathways of tumor progression (for example, with recessive inactivation of NF1 being less common after genome doubling). ABSOLUTE will facilitate the design of clinical sequencing studies and studies of cancer genome evolution and intra-tumor heterogeneity.
Absolute frequency measurement of unstable lasers with optical frequency combs
Beverini, N.; Poli, N.; Sutyrin, D.; Wang, F.-Y.; Schioppo, M.; Tarallo, M. G.; Tino, G. M.
2010-09-01
Here we report on absolute frequency measurements of a commercial high-power CW diode-pumped solid-state laser (Coherent Verdi V5). This kind of laser usually presents large frequency jitter (up to 50 MHz) both in the short term (1 ms time scale) and in the long term (>10 s time scale). A precise measurement of absolute frequency deviations on both temporal scales would normally require a set of different devices (optical cavities, optical wave-meters), each suited only for measurements at a specific integration time. Here we demonstrate how a frequency comb can be used to overcome this difficulty, allowing in a single step a full characterization of both short- and long-term (up to 10^3 s) absolute frequency jitter with a resolution better than 1 MHz. We demonstrate in this way the flexibility of optical frequency combs for absolute frequency measurements not only of ultra-stable lasers but also of relatively unstable lasers. The absolute frequency calibration of the Verdi laser obtained here has been used to improve the accuracy of measurements of the local gravitational acceleration with 88Sr atoms trapped in 1D vertical lattices.
Absolute and relative family affluence and psychosomatic symptoms in adolescents.
Elgar, Frank J; De Clercq, Bart; Schnohr, Christina W; Bird, Phillippa; Pickett, Kate E; Torsheim, Torbjørn; Hofmann, Felix; Currie, Candace
2013-08-01
Previous research on the links between income inequality and health and socioeconomic differences in health suggests that relative differences in affluence impact health and well-being more than absolute affluence. This study explored whether self-reported psychosomatic symptoms in adolescents relate more closely to relative affluence (i.e., relative deprivation or rank affluence within regions or schools) than to absolute affluence. Data on family material assets and psychosomatic symptoms were collected from 48,523 adolescents in eight countries (Austria, Belgium, Canada, Norway, Scotland, Poland, Turkey, and Ukraine) as part of the 2009/10 Health Behaviour in School-aged Children study. Multilevel regression analyses of the data showed that relative deprivation (Yitzhaki Index, calculated in regions and in schools) and rank affluence (in regions) (1) related more closely to symptoms than absolute affluence, and (2) related to symptoms after differences in absolute affluence were held constant. However, differences in family material assets, whether they are measured in absolute or relative terms, account for a significant variation in adolescent psychosomatic symptoms. Conceptual and empirical issues relating to the use of material affluence indices to estimate socioeconomic position are discussed.
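The Yitzhaki index used in this analysis has a simple closed form: each person's relative deprivation is the average shortfall to everyone more affluent in the reference group. A sketch (normalization by group size n is one common convention; the study may scale it differently):

```python
import numpy as np

def yitzhaki_deprivation(affluence):
    # D_i = (1/n) * sum_j max(0, y_j - y_i): the mean shortfall of
    # person i relative to richer members of the reference group.
    y = np.asarray(affluence, dtype=float)
    n = len(y)
    return np.array([np.maximum(y - yi, 0.0).sum() / n for yi in y])

d = yitzhaki_deprivation([1.0, 2.0, 3.0])  # the poorest has the largest index
```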
Absolute Navigation Information Estimation for Micro Planetary Rovers
Muhammad Ilyas
2016-03-01
This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, using low-power, low-weight and low-volume Microelectromechanical Systems (MEMS) sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extremely slow speed and rotation but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas the relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body such as Mars or the Moon. In this paper, an algorithm called EASI (Estimation of Attitude using Sun sensor and Inclinometer) is presented to estimate the absolute attitude using only a MEMS-type sun sensor and an inclinometer. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm is also presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.
Junge Zhang
2012-08-01
This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish synchronous traction. It is used to calibrate the error of the relative positioning sensor, which provides the magnetic phase signal. Based on an analysis of the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and software for the sensor. To enhance the reliability of the sensor, a support vector machine is used to recognize fault characteristics, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper central control computer to evaluate the reliability of the sensors, but also enables on-line diagnosis for debugging and quick detection when the maglev train is off-line. The absolute positioning sensor studied here has been used in an actual project.
Jiang, Ying; Zeng, Jie; Liang, Da-Kai; Wang, Xue-Liang; Ni, Xiao-Yu; Zhang, Xiao-Yan; Li, Ji-Feng; Luo, Wen-Yong
2013-12-01
In the present paper, the theoretical expression for the wavelength change versus axial strain of a birefringence fiber loop mirror is developed. The theoretical result shows that the axial strain sensitivity of a birefringence photonic crystal fiber loop mirror is much lower than that of a conventional birefringence fiber loop mirror. It is therefore difficult to measure the axial strain by monitoring the wavelength change of a birefringence photonic crystal fiber loop mirror, and measurement errors arise easily because the output spectrum is not perfectly smooth. The strain spectra of a birefringence photonic crystal fiber loop mirror were measured experimentally with an optical spectrum analyzer and analysed. The results show that the absolute integral of the monitoring peak decreases with increasing strain and is linear versus strain. Based on these results, it is proposed in this paper that the axial strain can be measured by monitoring the absolute integral of the monitoring peak. The absolute integral of the monitoring peak is a comprehensive index that reflects the light intensity at different wavelengths. Monitoring this absolute integral to measure the axial strain not only overcomes the difficulty of monitoring the wavelength change of a birefringence photonic crystal fiber loop mirror, but also reduces the measurement error caused by the unsmooth output spectrum.
Error Immune Logic for Low-Power Probabilistic Computing
Bo Marr
2010-01-01
design for the maximum amount of energy savings for a given error rate. SPICE simulation results using a commercially available and well-tested 0.25 μm technology are given, verifying the ultra-low-power probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.
Application of Joint Error Maximal Mutual Compensation to hexapod robots
Veryha, Yauheni; Petersen, Henrik Gordon
2008-01-01
A good practice to ensure high positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC to positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...
PREVENTABLE ERRORS: NEVER EVENTS
Narra Gopal
2014-07-01
An operation or any invasive procedure is a stressful event involving risks and complications. We should be able to guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable: they are avoidable and preventable events. The consequences of surgical mistakes range from temporary injury in 60% of those affected, to permanent injury in 33%, and death in 7%. The World Health Organization (WHO) [1] has said that over seven million people across the globe suffer preventable surgical injuries every year, a million of them dying during or immediately after surgery. The UN body put the number of surgeries taking place every year globally at 234 million; surgery has become common, with one in every 25 people undergoing it at any given time. 50% of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident, an incident rate that would not be acceptable in other industries. To move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. Even though complete prevention may not be possible with such a system, we can reduce the error percentage [2]. To change the present attitude towards patients, we first have to replace the word patient with medical customer; our outlook then changes, and we will be more careful towards our customers.
Comparison of analytical error and sampling error for contaminated soil.
Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders
2006-11-16
Investigation of soil from contaminated sites requires several sample handling steps that will most likely induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error become small and the analytical error becomes large in comparison. Low contaminant concentration, small extracted sample size and large particles in the sample all contribute to the extent of uncertainty.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm which uses two maximum dynamic flow algorithms is then proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. For a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the sample results, the total length of CC used in the design of an SFCL can be determined.
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter Power function distribution. The sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
Data Analysis & Statistical Methods for Command File Errors
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained. We also used goodness-of-fit testing strategies and principal component analysis to further assess the data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
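Under the simplest version of such a model, errors arriving as a Poisson process per file radiated, the maximum-likelihood rate estimate is just total errors over total files; a sketch with hypothetical counts (the paper's actual models also bring in workload and novelty as regressors):

```python
import numpy as np

def mle_error_rate(errors, files_radiated):
    # MLE of a constant Poisson rate: total observed errors divided
    # by total files radiated across all reporting periods.
    return float(np.sum(errors) / np.sum(files_radiated))

rate = mle_error_rate([2, 0, 1], [10, 5, 15])  # 3 errors over 30 files -> 0.1
```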
Absolute distance sensing by two laser optical interferometry.
Thurner, Klaus; Braun, Pierre-François; Karrai, Khaled
2013-11-01
We have developed a method for absolute distance sensing by two-laser optical interferometry. A particularity of this technique is that the target distance is determined absolutely and is no longer limited to an ambiguity range, which usually affects multiple-wavelength interferometers. We implemented the technique in a low-finesse Fabry-Pérot miniature fiber-based interferometer. We used two diode lasers, both operating in the 1550 nm wavelength range. The wavelength difference is chosen to create a 25 μm long periodic beating interferometric pattern, allowing a nanometer-precise position measurement but limited to an ambiguity range of 25 μm. The ambiguity is then eliminated by scanning one of the wavelengths over a small range (3.4 nm). We measured absolute distances in the sub-meter range with just a few nanometers of repeatability.
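The role of the two wavelengths can be made concrete with the standard synthetic-wavelength relation Λ = λ1·λ2/|λ1 − λ2|; in a reflective cavity the fringe period in distance is Λ/2. The second wavelength below is an assumed value chosen only to land near the 25 μm period quoted in the abstract:

```python
def synthetic_wavelength(lam1, lam2):
    # Beat (synthetic) wavelength of a two-laser interferometer;
    # the unambiguous measurement range scales with this quantity.
    return lam1 * lam2 / abs(lam1 - lam2)

lam1 = 1550e-9  # m
lam2 = 1598e-9  # m (assumed, for illustration only)
period = synthetic_wavelength(lam1, lam2) / 2.0  # ≈ 26 μm fringe period
```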
Abelian varieties with many endomorphisms and their absolutely simple factors
Guitart, Xavier
2011-01-01
We characterize the abelian varieties arising as absolutely simple factors of GL2-type varieties over a number field k. In order to obtain this result, we study a wider class of abelian varieties: the k-varieties A/k satisfying that $\End_k^0(A)$ is a maximal subfield of $\End_{\bar k}^0(A)$. We call them Ribet-Pyle varieties over k. We see that every Ribet-Pyle variety over k is isogenous over $\bar k$ to a power of an abelian k-variety and, conversely, that every abelian k-variety occurs as the absolutely simple factor of some Ribet-Pyle variety over k. We deduce from this correspondence a precise description of the absolutely simple factors of the varieties over k of GL2-type.
Absolute Uniqueness of Phase Retrieval with Random Illumination
Fannjiang, Albert
2011-01-01
Random phase or amplitude illumination is proposed to remove at once all types of ambiguity, trivial or nontrivial, from phase retrieval. Almost sure irreducibility is proved for {\em any} complex-valued object of arbitrary sparsity. While this irreducibility result can be viewed as a probabilistic version of the classical result by Bruck, Sodin and Hayes, it provides a new perspective and an effective method for achieving absolute uniqueness in phase retrieval for {\em every} object, not just objects outside of a measure-zero set. In particular, almost sure absolute uniqueness is proved for complex-valued objects under a general two-point assumption. For objects of nonnegative real and imaginary parts, absolute uniqueness is proved to hold with probability exponentially close to unity as the object sparsity increases.
The mixed Littlewood conjecture for pseudo-absolute values
Harrap, Stephen
2010-01-01
In this paper we prove the mixed Littlewood conjecture for a p-adic absolute value and any pseudo-absolute value with bounded ratios. More precisely, we show that if p is a prime and D is a pseudo-absolute value sequence with elements divisible by finitely many primes not equal to p, and if the terms of D grow more slowly than the exponential of a polynomial, then the infimum over natural numbers n of the quantity n·|n|_p·|n|_D·||nx|| equals 0 for all real x. Our proof relies on two deep results: a measure rigidity theorem due to Lindenstrauss, and lower bounds for linear forms in logarithms due to Baker and Wüstholz. We also deduce the answer to the related metric question of how fast the infimum above tends to zero for almost every x.
Determination of absolute adsorption in highly ordered porous media
Mertens, Florian O.
2009-06-01
Recently developed Metal Organic Frameworks (MOFs) are the materials with the highest intrinsic surface areas to date, and their discovery has increased research activity in the field of microporous adsorption materials significantly. In this contribution, a generic method of analysis for volumetrically measured adsorption isotherms is presented that separates absolute adsorption from excess adsorption to the best possible degree by representing the absolute adsorption isotherm as a superposition of fitting functions that increase strictly monotonically with pressure. The procedure allows the heat of adsorption at constant gas uptake to be determined via implicitly defined quantities. The method was applied to adsorption data of hydrogen on MOF-5 ranging from 40 K to 200 K. Methane adsorption on MOF-5 was used to demonstrate that the common practice of neglecting the difference between excess and absolute adsorption leads to erroneously increased heat of adsorption values at high coverages and temperatures.
2013-01-01
… in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT-related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.
Nested Quantum Error Correction Codes
Wang, Zhuo; Fan, Heng; Vedral, Vlatko
2009-01-01
The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of all lengths and distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old ones in quantum error correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.
Khanmohammadi, Roya; Talebian, Saeed; Hadian, Mohammad Reza; Olyaei, Gholamreza; Bagheri, Hossein
2017-02-01
Psychometric properties such as reliability, validity and responsiveness play critical roles in the scientific acceptance of a parameter. Therefore, this study was conducted to estimate how many trials are required to obtain a reliable center of pressure (COP) parameter during gait initiation (GI) and to investigate the effect of the number of trials on relative and absolute reliability. Twenty older adults participated in the study. Subjects began stepping over the force platform in response to an auditory stimulus. Ten trials were collected in one session. The displacement, velocity, and mean and median frequency of the COP in the mediolateral (ML) and anteroposterior (AP) directions were evaluated. Relative reliability was determined using the intraclass correlation coefficient (ICC), and absolute reliability was evaluated using the standard error of measurement (SEM) and minimal detectable change (MDC95). The results revealed that, depending on the parameter, one to five trials should be averaged to ensure excellent reliability. ICC, SEM% and MDC95% values were between 0.39-0.89, 4.84-41.5% and 13.4-115% for a single trial, and 0.86-0.99, 1.74-19.7% and 4.83-54.7% for ten trials averaged, respectively. The ML and AP COP displacement in the locomotor phase had the highest relative reliability, and the ML and AP median frequency in the locomotor phase had the highest absolute reliability. In general, the results showed that the COP-related parameters in the time and frequency domains, based on an average of five trials, provide reliable outcome measures for evaluation of dynamic postural control in older adults.
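The absolute-reliability statistics used above have standard closed forms: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A minimal sketch with made-up numbers (not the study's data):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustrative example: SD = 10 units, ICC = 0.86 (hypothetical values).
s = sem(10.0, 0.86)
m = mdc95(s)
```

Dividing SEM and MDC95 by the mean score (×100) gives the SEM% and MDC95% figures reported in the abstract.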
Density Reconstructions with Errors in the Data
Erika Gomes-Gonçalves
2014-06-01
The maximum entropy method was originally proposed as a variational technique to determine probability densities from the knowledge of a few expected values. The applications of the method beyond its original role in statistical physics are manifold. An interesting feature of the method is its potential to incorporate errors in the data. Here, we examine two possible ways of doing that. The two approaches have different intuitive interpretations, and one of them allows for error estimation. Our motivating example comes from the field of risk analysis, but the statement of the problem might as well come from any branch of applied sciences. We apply the methodology to a problem consisting of the determination of a probability density from a few values of its numerically-determined Laplace transform. This problem can be mapped onto a problem consisting of the determination of a probability density on [0, 1] from the knowledge of a few of its fractional moments up to some measurement errors stemming from insufficient data.
Absolute and Relative Socioeconomic Health Inequalities across Age Groups.
Sander K R van Zon
The magnitude of socioeconomic health inequalities differs across age groups. It is less clear whether socioeconomic health inequalities differ across age groups by other factors that are known to affect the relation between socioeconomic position and health, such as the indicator of socioeconomic position, the health outcome, gender, and whether socioeconomic health inequalities are measured in absolute or in relative terms. The aim is to investigate whether absolute and relative socioeconomic health inequalities differ across age groups by indicator of socioeconomic position, health outcome and gender. The study sample was derived from the baseline measurement of the LifeLines Cohort Study and consisted of 95,432 participants. Socioeconomic position was measured as educational level and household income. Physical and mental health were measured with the RAND-36. Age concerned eleven 5-year age groups. Absolute inequalities were examined by comparing means. Relative inequalities were examined by comparing Gini coefficients. Analyses were performed for both health outcomes by both educational level and household income. Analyses were performed for all age groups, and stratified by gender. Absolute and relative socioeconomic health inequalities differed across age groups by indicator of socioeconomic position, health outcome, and gender. Absolute inequalities were most pronounced for mental health by household income. They were larger in younger than older age groups. Relative inequalities were most pronounced for physical health by educational level. Gini coefficients were largest in young age groups and smallest in older age groups. Absolute and relative socioeconomic health inequalities differed cross-sectionally across age groups by indicator of socioeconomic position, health outcome and gender. Researchers should critically consider the implications of choosing a specific age group, in addition to the indicator of socioeconomic position and
Neural Sensitivity to Absolute and Relative Anticipated Reward in Adolescents
Vaidya, Jatin G.; Knutson, Brian; O'Leary, Daniel S.; Block, Robert I.; Magnotta, Vincent
2013-01-01
Adolescence is associated with a dramatic increase in risky and impulsive behaviors that have been attributed to developmental differences in neural processing of rewards. In the present study, we sought to identify age differences in anticipation of absolute and relative rewards. To do so, we modified a commonly used monetary incentive delay (MID) task in order to examine brain activity to relative anticipated reward value (neural sensitivity to the value of a reward as a function of other available rewards). This design also made it possible to examine developmental differences in brain activation to absolute anticipated reward magnitude (the degree to which neural activity increases with increasing reward magnitude). While undergoing fMRI, 18 adolescent and 18 adult participants were presented with cues associated with different reward magnitudes. After the cue, participants responded to a target to win money on that trial. Presentation of cues was blocked such that two reward cues associated with $.20, $1.00, or $5.00 were in play on a given block. Thus, the relative value of the $1.00 reward varied depending on whether it was paired with a smaller or larger reward. Reflecting age differences in neural responses to relative anticipated reward (i.e., reference-dependent processing), adults, but not adolescents, demonstrated greater activity to a $1 reward when it was the larger of the two available rewards. Adults also demonstrated a more linear increase in ventral striatal activity as a function of increasing absolute reward magnitude compared to adolescents. Additionally, reduced ventral striatal sensitivity to absolute anticipated reward (i.e., the difference in activity to medium versus small rewards) correlated with higher levels of trait impulsivity. Thus, ventral striatal activity in anticipation of absolute and relative rewards develops with age. Absolute reward processing is also linked to individual differences in impulsivity. PMID:23544046
Absolute frequency references at 1529 nm and 1560 nm using modulation transfer spectroscopy
de Escobar, Y Natali Martinez; Coop, Simon; Vanderbruggen, Thomas; Kaczmarek, Krzysztof T; Mitchell, Morgan W
2015-01-01
We demonstrate a double optical frequency reference (1529 nm and 1560 nm) for the telecom C-band using $^{87}$Rb modulation transfer spectroscopy. The two reference frequencies are defined by the 5S$_{1/2} F=2 \rightarrow$ 5P$_{3/2} F'=3$ two-level and 5S$_{1/2} F=2 \rightarrow$ 5P$_{3/2} F'=3 \rightarrow$ 4D$_{5/2} F''=4$ ladder transitions. We examine the sensitivity of the frequency stabilization to probe power and magnetic field fluctuations, calculate its frequency shift due to residual amplitude modulation, and estimate its shift due to gas collisions. The short-term Allan deviation was estimated from the error signal slope for the two transitions. Our scheme provides a simple, high-performing system for references at these important wavelengths. We estimate an absolute accuracy of $\sim$1 kHz is realistic.
Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system
Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.
2016-11-01
An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
Accounting for Convective Blue-Shifts in the Determination of Absolute Stellar Radial Velocities
Prieto, C Allende; Ramírez, I; Ludwig, H -G; Asplund, M
2009-01-01
For late-type non-active stars, gravitational redshifts and convective blueshifts are the main source of biases in the determination of radial velocities. If ignored, these effects can introduce systematic errors of the order of ~ 0.5 km/s. We demonstrate that three-dimensional hydrodynamical simulations of solar surface convection can be used to predict the convective blue-shifts of weak spectral lines in solar-like stars to ~ 0.070 km/s. Using accurate trigonometric parallaxes and stellar evolution models, the gravitational redshifts can be constrained with a similar uncertainty, leading to absolute radial velocities accurate to better than ~ 0.1 km/s.
Jauniskis, L.; Foukal, P.; Kochling, H.
1992-01-01
We carry out the calibration of an ultraviolet spectrometer by using a cryogenic electrical-substitution radiometer and intensity-stabilized laser sources. A comparison of the error budgets for the laser-based calibration described here and for a calibration using a type-FEL tungsten spectral-irradiance standard indicates that this technique could provide an improvement of a factor of about three in the uncertainty of the spectrometer calibration, resulting in an absolute accuracy (three standard deviations) of about 1 percent at 257 nm. The technique described here might significantly improve the accuracy of calibrations on NASA ozone-monitoring and solar ultraviolet-monitoring spectrophotometers when used to complement present procedures that employ lamps and the SURF II synchrotron ultraviolet radiation facility at the National Institute of Standards and Technology.
Determination of optimal period of absolute encoders with single track cyclic gray code
张帆; 朱衡君
2008-01-01
Low-cost and miniaturized rotary encoders are important in automatic and precise production. Presented here is a code called Single Track Cyclic Gray Code (STCGC): an image etched on a single circular track of a rotary encoder disk and read by a group of evenly spaced reading heads, providing a unique codeword for every angular position such that every two adjacent codewords differ in exactly one component, thus avoiding coarse errors. The existing construction or combination methods are helpful but not sufficient for determining the period of an STCGC of large word length, and the theoretical approach needs further development to extend the word length. Three principles, namely seed combination, short code removal and ergodicity examination, are put forward that suffice to determine the optimal period for such absolute rotary encoders using an STCGC with evenly spaced heads. The optimal periods of STCGCs of 3 through 29 bit word length were determined and listed.
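The single-change property that prevents coarse errors can be illustrated with the ordinary reflected binary Gray code; note this is the standard construction, not the single-track variant the paper develops, and is shown only as a sketch of the adjacency property:

```python
def reflected_gray(n_bits):
    """Standard reflected binary Gray code of length 2**n_bits.

    Not the single-track code from the paper; shown only to illustrate
    that cyclically adjacent codewords differ in exactly one bit.
    """
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

code = reflected_gray(4)
# Every pair of cyclically adjacent codewords differs in exactly one bit,
# so a misread during a transition can only be off by one position.
assert all(bin(code[i] ^ code[(i + 1) % len(code)]).count("1") == 1
           for i in range(len(code)))
```

The single-track variant imposes the extra constraint that all bit tracks are rotations of one circular pattern, which is what makes the period determination in the paper nontrivial.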
Calibration of Fourier domain short coherence interferometer for absolute distance measurements.
Montonen, R; Kassamakov, I; Hæggström, E; Österberg, K
2015-05-20
We calibrated and determined the measurement uncertainty of a custom-made Fourier domain short coherence interferometer operated in laboratory conditions. We compared the optical thickness of two thickness standards and three coverslips determined with our interferometer to the geometric thickness determined by SEM. Using this calibration data, we derived a calibration function with a 95% confidence level system uncertainty of (5.9×10⁻³ r + 2.3) μm, where r is the optical distance in μm, across the 240 μm optical measurement range. The confidence limit includes contributions from uncertainties in the optical thickness, geometric thickness, and refractive index measurements as well as uncertainties arising from cosine errors and thermal expansion. The results show feasibility for noncontacting absolute distance characterization with micrometer-level accuracy. This instrument is intended for verifying the alignment of the discs of an accelerating structure in the possible future compact linear collider.
Zhao, Motian; Zhou, Tao; Wang, Jun; Lu, Hai; Fang, Xiang; Guo, Chunhua; Li, Qiuli; Li, Chaofeng
2005-01-01
Synthetic mixtures prepared gravimetrically from highly enriched isotopes of neodymium in the form of oxides of well-defined purity were used to calibrate a thermal ionization mass spectrometer. A new error analysis was applied to calculate the final uncertainty of the atomic weight value. Measurements on natural neodymium samples yielded an absolute isotopic composition of 27.153(19) atomic percent (at.%) 142Nd, 12.173(18) at.% 143Nd, 23.798(12) at.% 144Nd, 8.293(7) at.% 145Nd, 17.189(17) at.% 146Nd, 5.756(8) at.% 148Nd, and 5.638(9) at.% 150Nd, and the atomic weight of neodymium as 144.2415(13), with uncertainties given on the basis of 95% confidence limits. No isotopic fractionation was found in terrestrial neodymium materials.
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
Inversion of levelling data: how important is error treatment?
Amoruso, A.; Crescentini, L.
2007-12-01
Even if proper treatment of error statistics is potentially essential for the reliability of experimental data inversion, a critical evaluation of its effects on levelling data inversion is still lacking. In this paper, we consider the complete covariance matrix for levelling measurements, obtained by combining the covariance matrix due to measurement errors and the covariance matrix due to non-measurement errors, under the simple hypothesis of uncorrelated non-measurement errors on bench mark vertical displacements. The complete covariance matrix is reduced to diagonal form by means of a rotation matrix; the same rotation transforms the data to independent form. The eigenvalues of the complete covariance matrix give the uncertainties of the transformed independent data. This procedure can be used also with non-normal distributions of errors, in which case misfit functions other than χ² (e.g. the mean absolute deviation) are minimized. Here we focus on two test cases (the 1989 Loma Prieta earthquake and the 1908 Messina earthquake) inverting both real data and synthetics. The inversion of synthetic data sets does not reveal any systematic dependence of retrieved parameter values on the covariance matrix. Most retrieved fault parameter values are close to those used in the forward model, whatever covariance matrix is used. As a consequence, large discrepancies among results obtained using covariance matrices including different combinations of measurement and non-measurement errors when inverting measured and synthetic data sets would possibly indicate the need for further investigations. While measurement errors can be a priori evaluated, it is difficult to estimate non-measurement errors. Our synthetic tests using a uniform-slip rectangular fault in a homogeneous elastic half-space show that, if measurement errors have been correctly evaluated, average non-measurement errors can be estimated by choosing their weight inside the covariance matrix so that the ratio
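The diagonalization-and-rotation step described above is a standard eigendecomposition of the complete covariance matrix; a minimal sketch with an invented 3×3 covariance (not the paper's data):

```python
import numpy as np

# Toy complete covariance: correlated measurement errors plus
# uncorrelated non-measurement errors added on the diagonal.
C_meas = np.array([[2.0, 0.8, 0.3],
                   [0.8, 2.0, 0.8],
                   [0.3, 0.8, 2.0]])
C_nonmeas = 0.5 * np.eye(3)
C = C_meas + C_nonmeas

# Diagonalize the complete covariance; the eigenvector matrix R is the
# rotation that transforms the data to independent form.
eigvals, R = np.linalg.eigh(C)

d = np.array([1.2, -0.4, 0.7])   # illustrative data vector
d_indep = R.T @ d                 # transformed (independent) data

# The rotated covariance is diagonal; the eigenvalues are the variances
# (squared uncertainties) of the transformed independent data.
C_rot = R.T @ C @ R
```

Misfits such as χ² or the mean absolute deviation are then evaluated on `d_indep` with the per-component uncertainties `sqrt(eigvals)`.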
A temperature error correction method for a naturally ventilated radiation shield
Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui
2016-11-01
Due to solar radiation exposure, air flowing inside a naturally ventilated radiation shield may produce a measurement error of 0.8 °C or higher. To improve the air temperature observation accuracy, a temperature error correction method is proposed. The correction method is based on a Computational Fluid Dynamics (CFD) method and a Genetic Algorithm (GA) method. The CFD method is implemented to analyze and calculate the temperature errors of a naturally ventilated radiation shield under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean temperature error given by measurements is 0.36 °C, and the mean temperature error given by the correction equation is 0.34 °C. This correction equation allows the temperature error to be reduced by approximately 95%. The mean absolute error (MAE) and the root mean square error (RMSE) between the temperature errors given by the correction equation and the temperature errors given by the measurements are 0.07 °C and 0.08 °C, respectively.
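The MAE and RMSE figures quoted above have simple definitions; a minimal sketch with invented residuals (chosen only so the results land near the reported 0.07 °C and 0.08 °C, not the paper's data):

```python
import math

# Illustrative residuals (°C) between correction-equation temperature
# errors and measured temperature errors; values are made up.
residuals = [0.05, -0.08, 0.10, -0.04, 0.06, -0.09]

# Mean absolute error: average magnitude of the residuals.
mae = sum(abs(r) for r in residuals) / len(residuals)

# Root mean square error: penalizes large residuals more heavily.
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

Since RMSE weights large residuals quadratically, RMSE ≥ MAE always holds, consistent with the reported 0.08 °C vs. 0.07 °C.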
Absolute small-angle measurement based on optical feedback interferometry
Jingang Zhong; Xianhua Zhang; Zhixiang Ju
2008-01-01
We present a simple but effective method for small-angle measurement based on optical feedback interferometry (or laser self-mixing interferometry). The absolute zero angle can be defined at the point of largest fringe amplitude, so this method can also achieve absolute angle measurement. In order to verify the method, we constructed an angle measurement system. The Fourier-transform method is used to analyze the interference signal. Rotation angles are experimentally measured with a resolution of 10⁻⁶ rad and a measurement range of approximately −0.0007 to +0.0007 rad.
Improved Absolute Radiometric Calibration of a UHF Airborne Radar
Chapin, Elaine; Hawkins, Brian P.; Harcke, Leif; Hensley, Scott; Lou, Yunling; Michel, Thierry R.; Moreira, Laila; Muellerschoen, Ronald J.; Shimada, Joanne G.; Tham, Kean W.
2015-01-01
The AirMOSS airborne SAR operates at UHF and produces fully polarimetric imagery. The AirMOSS radar data are used to produce Root Zone Soil Moisture (RZSM) depth profiles. The absolute radiometric accuracy of the imagery, ideally better than 0.5 dB, is key to retrieving RZSM, especially in wet soils, where the backscatter-versus-soil-moisture curve tends to flatten out. In this paper we assess the absolute radiometric uncertainty in previously delivered data, describe a method to utilize Built In Test (BIT) data to improve the radiometric calibration, and evaluate the improvement from applying the method.
Absolute, Extreme-Ultraviolet, Solar Spectral Irradiance Monitor (AESSIM)
Huber, Martin C. E.; Smith, Peter L.; Parkinson, W. H.; Kuehne, M.; Kock, M.
1988-01-01
AESSIM, the Absolute, Extreme-Ultraviolet, Solar Spectral Irradiance Monitor, is designed to measure the absolute solar spectral irradiance at extreme-ultraviolet (EUV) wavelengths. The data are required for studies of the processes that occur in the earth's upper atmosphere and for predictions of atmospheric drag on space vehicles. AESSIM comprises sun-pointed spectrometers and newly developed secondary standards of spectral irradiance for the EUV. Use of the in-orbit standard sources will eliminate the uncertainties caused by changes in spectrometer efficiency that have plagued all previous measurements of the solar spectral EUV flux.
Properties of Absolute Stability in the Presence of Time Lags
M. De la Sen
2005-01-01
This study is concerned with the properties of absolute stability, independent of the delays, of time-delay systems possessing noncommensurate internal point delays, for any nonlinearity satisfying a Popov-type time positivity inequality. That property holds if an associated delay-free system is absolutely stable and the size of the delayed dynamics is sufficiently small. The results are obtained for nonlinearities belonging to the sectors [0, k] and [h, k+h], and are based on a parabola-type test.
Stability comparison of two absolute gravimeters: optical versus atomic interferometers
Gillot, Pierre; Landragin, Arnaud; Santos, Franck Pereira Dos; Merlet, Sébastien
2014-01-01
We report the direct comparison between the stabilities of two mobile absolute gravimeters of different technology: the LNE-SYRTE Cold Atom Gravimeter (CAG) and the FG5X#216 of the Université du Luxembourg. These instruments rely on two different principles of operation: atomic and optical interferometry. The comparison took place in the Walferdange Underground Laboratory for Geodynamics in Luxembourg, at the beginning of the last International Comparison of Absolute Gravimeters, ICAG-2013. We analyse a common measurement of 2 h 10 min duration, and find that the CAG shows better immunity with respect to changes in the level of vibration noise, as well as a slightly better short-term stability.
Absolute cross-sections from X-γ coincidence measurements
Lemasson, A. [GANIL, CEA/DSM - CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); Shrivastava, A. [Nuclear Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Navin, A. [GANIL, CEA/DSM - CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France)], E-mail: navin@ganil.fr; Rejmund, M. [GANIL, CEA/DSM - CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); Nanal, V. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Bhattacharyya, S. [Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700064 (India); Chatterjee, A.; Kailas, S.; Mahata, K.; Parkar, V.V. [Nuclear Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Pillay, R.G. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Ramachandran, K.; Rout, P.C. [Nuclear Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)
2009-01-11
An activation technique using coincidences between characteristic X-rays and γ-rays to obtain absolute cross-sections is described. This method is particularly useful in the case of nuclei that decay by electron capture. In addition to the reduction of possible contamination, an improved detection sensitivity is achieved as compared to inclusive measurements, thereby allowing the extraction of absolute fusion cross-sections in the nano-barn range. Results of this technique for the ⁶Li + ¹⁹⁸Pt system, at energies around the Coulomb barrier, are described. Future applications with low-intensity radioactive ion beams are also discussed.
Total Synthesis and Absolute Configuration of the Marine Norditerpenoid Xestenone
Hiroaki Miyaoka
2009-11-01
Xestenone is a marine norditerpenoid found in the northeastern Pacific sponge Xestospongia vanilla. The relative configuration of C-3 and C-7 in xestenone was determined by NOESY spectral analysis; however, the relative configuration of C-12 and the absolute configuration of the compound were not determined. The authors have now achieved the total synthesis of xestenone using their previously developed one-pot synthesis of cyclopentane derivatives, employing allyl phenyl sulfone and an epoxy iodide, as a key step. The relative and absolute configurations of xestenone were thus successfully determined by this synthesis.
Errors in device localization in MRI using Z-frames.
Cepek, Jeremy; Chronik, Blaine A; Fenster, Aaron
2013-01-01
The use of a passive MRI-visible tracking frame is a common method of localizing devices in MRI space for MRI-guided procedures. One of the most common tracking frame designs found in the literature is the z-frame, as it allows six degree-of-freedom pose estimation using only a single image slice. Despite the popularity of this design, it is susceptible to errors in pose estimation due to various image distortion mechanisms in MRI. In this paper, the absolute error in using a z-frame to localize a tool in MRI is quantified over various positions of the z-frame relative to the MRI isocenter, and for various levels of static magnetic field inhomogeneity. It was found that the error increases rapidly with distance from the isocenter in both the horizontal and vertical directions, but the error is much less sensitive to position when multiple contiguous slices are used with slice-select gradient nonlinearity correction enabled, as opposed to the more common approach of only using a single image slice. In addition, the error is found to increase rapidly with an increasing level of static field inhomogeneity, even with the z-frame placed within 10 cm of the isocenter.
Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-01
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
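The metrics discussed above have compact standard definitions: MAPE averages |pred − obs|/|obs|, while the report's recommended measures are built from the accuracy ratio Q = pred/obs, namely the median log accuracy ratio (bias) and the median symmetric accuracy 100·(exp(median|ln Q|) − 1). A sketch following those definitions:

```python
import math
import statistics

def mape(obs, pred):
    """Mean absolute percentage error (undefined when any observation is 0)."""
    return 100.0 * statistics.mean(abs((p - o) / o) for o, p in zip(obs, pred))

def median_log_accuracy_ratio(obs, pred):
    """Median of ln(pred/obs): a bias measure (0 means unbiased)."""
    return statistics.median(math.log(p / o) for o, p in zip(obs, pred))

def median_symmetric_accuracy(obs, pred):
    """100*(exp(median|ln(pred/obs)|) - 1): symmetric accuracy in percent."""
    return 100.0 * (math.exp(statistics.median(
        abs(math.log(p / o)) for o, p in zip(obs, pred))) - 1.0)
```

For instance, over-predicting by a factor of 2 and under-predicting by a factor of 2 (obs = [1, 1], pred = [2, 0.5]) give a MAPE of 75% that weights the two errors unequally, whereas the median symmetric accuracy reports 100% for both and the median log accuracy ratio correctly reports zero net bias.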
Quasi-absolute surface figure test with two orthogonal transverse spatial shifts
Xue, Shuai; Chen, Shanyong; Zhai, Dede; Shi, Feng
2017-04-01
A new zonal wavefront reconstruction algorithm with pixel-level spatial resolution and high accuracy is proposed, which is able to reconstruct the original wavefront of a general aperture shape from only two difference wavefronts measured in two orthogonal shear directions, with shear amounts equal to arbitrary moderate integral multiples of the sample interval. Based on this algorithm, a quasi-absolute surface figure test method is presented, which requires only two additional translational measurements, with shifts of arbitrary moderate integral multiples of the sample interval along the x and y directions, besides the original position measurement. Optical schemes of the proposed method for testing flat, spherical and cylindrical surfaces are investigated, and special considerations and challenges in calibrating spheres and cylinders are also briefly formulated theoretically. A thorough error analysis is given to obtain a high-accuracy test result. Simulations and experiments on a flat surface are conducted to validate the proposed algorithm and method. Compared with existing absolute test methods using the Pseudo-Shear Interferometry (PSI) technique, the presented method has advantages such as fewer measurements, arbitrary moderate shear amounts, and the high signal-to-noise ratio it can reach.
Re-estimation of absolute gamma ray intensities of 56Mn using k0-standardization
M. AHMAD; W. AHMAD; M. U. RAJPUT; A. QAYYUM
2005-01-01
The thermal neutron capture gamma ray facility at Pakistan Research Reactor (PARR-1) is being used for the re-estimation of various properties, such as capture cross-sections, resonance integrals, and absolute gamma intensities, of different isotopes. The reported data for gamma ray transitions from the capture of thermal neutrons by 55Mn are not in good agreement, particularly below 2 MeV, so the intensities need to be re-estimated with better accuracy. Analytical-grade MnCl2 powder and high-purity Mn metal pieces were used in this study. Standard 152Eu and 60Co radioactive sources, as well as thermal neutron capture γ-rays in chlorine, were chosen for efficiency calibration. The k0-standardization technique was applied to these measurements to eliminate systematic errors in the efficiencies; chlorine also served as the comparator in the k0-factor calculations. The results are tabulated for the main gamma rays from 56Mn in the low and medium energy regions. The absolute intensities are in good agreement with most of the reported values.
Self-attraction effect and correction on the T-1 absolute gravimeter
Li, Z.; Hu, H.; Wu, K.; Li, G.; Wang, G.; Wang, L. J.
2015-12-01
The self-attraction effect (SAE) in an absolute gravimeter is a systematic error caused by the gravitational attraction of the instrument on the falling object. This effect depends on the mass distribution of the gravimeter and is estimated to be a few microgals (1 μGal = 10-8 m s-2) for the FG5 gravimeter. In this paper, the SAE of the home-made T-1 absolute gravimeter is analyzed and calculated. Most of the stationary components, including the dropping chamber, the laser interferometer, the vibration isolation device, and two tripods, are finely modelled, and the related SAEs are computed. In addition, the SAE of the co-falling carriage inside the dropping chamber is carefully calculated, because the distance between the falling object and the co-falling carriage varies during the measurement. To obtain the SAE correction, two different methods are compared: one linearizes the SAE curve, the other calculates the perturbed trajectory. The results from the two methods agree to within 0.01 μGal. With an uncertainty analysis, the SAE correction of the T-1 gravimeter is estimated to be (-1.9 ± 0.1) μGal.
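The order of magnitude quoted above follows from Newtonian point-mass sums. A toy sketch of the idea (not the authors' finite-element component models): each instrument part is replaced by a point mass, and the vertical attraction on the falling object is accumulated.

```python
G = 6.674e-11  # Newtonian gravitational constant, m^3 kg^-1 s^-2

def self_attraction_ugal(components, z):
    """Vertical self-attraction (in uGal, positive downward) on the falling
    object at height z (m), treating each instrument component as a point mass.
    components: iterable of (mass_kg, component_height_m, horizontal_offset_m)."""
    total = 0.0
    for m, zc, r in components:
        dz = z - zc                       # positive if the component lies below
        d2 = dz * dz + r * r              # squared distance to the component
        total += G * m * dz / d2 ** 1.5   # vertical projection of G*m/d^2
    return total * 1e8                    # convert m/s^2 to uGal
```

For example, a 100 kg base plate 0.5 m directly below the object contributes G·100/0.25 ≈ 2.7e-8 m/s², i.e. about 2.7 μGal downward, which is consistent with the "few microgals" scale cited for FG5-class instruments; masses above the object (such as the interferometer) pull upward and partially cancel.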
Absolute photoabsorption cross section of C{sub 60} in the extreme ultraviolet
Mori, T. [Department of Vacuum UV Photoscience, Institute for Molecular Science, Myodaiji, Okazaki 444-8585 (Japan); Kou, J. [Department of Vacuum UV Photoscience, Institute for Molecular Science, Myodaiji, Okazaki 444-8585 (Japan); Haruyama, Y. [Department of Chemistry, Faculty of Science, Okayama University, Okayama 700-8530 (Japan); Kubozono, Y. [Department of Chemistry, Faculty of Science, Okayama University, Okayama 700-8530 (Japan); Mitsuke, K. [Department of Vacuum UV Photoscience, Institute for Molecular Science, Myodaiji, Okazaki 444-8585 (Japan) and Graduate University for Advanced Studies, Myodaiji, Okazaki 444-8585 (Japan)]. E-mail: mitsuke@ims.ac.jp
2005-06-15
The absolute photoabsorption cross section curve of C{sub 60} has been determined by means of mass spectrometry, with monochromatized synchrotron radiation of h{nu} = 24.5-150 eV as the photon source. A high-temperature source of gaseous fullerenes and an efficient time-of-flight mass spectrometer are described. The cross section was estimated by assuming an approximate expression for the number density of C{sub 60} in the ionization region. The resultant values were 762, 241, and 195 Mb at h{nu} = 24.5, 90, and 110 eV, respectively, with errors of about 10%. The cross section curve was then normalized at h{nu} = 25 eV to the absolute photoabsorption cross section reported by Jaensch and Kamke [R. Jaensch, W. Kamke, Mol. Mater. 13 (2000) 143], the most reliable data so far available in the valence excitation region of C{sub 60}. Accordingly, the present cross section data become 407, 144, and 114 Mb at h{nu} = 25, 90, and 110 eV, respectively.
2013-01-01
… ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO … will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR), and Revenue Cycle HIT systems.
Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?
Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.
2007-01-01
This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…
Measurement Error and Equating Error in Power Analysis
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
On the effect of distortion and dispersion in fringe signal of the FG5 absolute gravimeters
Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel
2016-02-01
Knowledge of the absolute gravity acceleration at the level of 1 × 10-9 is needed in geosciences (e.g. for monitoring crustal deformations and mass transports) and in metrology for watt balance experiments related to the new SI definition of the kilogram. The gravity reference, which results from international comparisons held with the participation of numerous absolute gravimeters, is significantly affected by the qualities of the instruments that prevail in those comparisons (at present, FG5 gravimeters). It is therefore necessary to thoroughly investigate all instrumental (particularly systematic) errors. This paper deals with systematic errors of the FG5#215 arising from the distorted fringe signal and from electronic dispersion in several electronic components, including cables. To investigate these effects, we developed a new experimental system for acquiring and analysing the data in parallel with the FG5 built-in system. The new system, based on an analogue-to-digital converter with digital waveform processing using an FFT swept band-pass filter, was developed and tested on the FG5#215 gravimeter equipped with a new fast analogue output. The system is characterized by low timing jitter and digital handling of the distorted swept signal, with determination of zero-crossings for the fundamental frequency sweep and for its harmonics, and it can be used with any gravimeter based on laser interferometry. The original FG5 system and the experimental system are compared in terms of g-values, residuals, and additional measurements/models. Moreover, an advanced approach to solving the free-fall motion is presented, which takes into account a non-linear change of gravity with height.
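The core digital operation described here, timing the zero-crossings of a sampled fringe signal, can be sketched in a few lines. This is an illustrative simplification (uniform sampling, linear interpolation between bracketing samples), not the FG5#215 processing chain:

```python
import numpy as np

def rising_zero_crossings(signal, fs):
    """Times (s) of rising zero-crossings of a uniformly sampled signal,
    refined by linear interpolation between the two bracketing samples."""
    s = np.asarray(signal, float)
    idx = np.flatnonzero((s[:-1] < 0.0) & (s[1:] >= 0.0))  # sign-change brackets
    frac = -s[idx] / (s[idx + 1] - s[idx])                  # sub-sample refinement
    return (idx + frac) / fs
```

For a clean 50 Hz tone sampled at 10 kHz the recovered crossings are spaced 20 ms apart; in a real gravimeter the sweep is a chirp whose crossing times encode the trajectory of the falling object, and harmonic distortion of the fringe signal biases exactly these timings, which is the systematic error the paper investigates.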
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously