Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method is also important, but the percentage error of a method matters most if decision makers are to make the right decisions. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least square method yielded a percentage error of 9.77%, and it was decided that the least square method is suitable for time series and trend data.
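The two error measures named above can be sketched in a few lines. The demand series and the linear-trend (least squares) fit below are invented for illustration; they are not the study's data.

```python
import numpy as np

# Hypothetical monthly demand series (illustrative values only).
actual = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])
t = np.arange(len(actual))

# Least-squares linear trend, standing in for the "least square method".
slope, intercept = np.polyfit(t, actual, 1)
forecast = intercept + slope * t

# Mean Absolute Deviation: average error magnitude, in data units.
mad = np.mean(np.abs(actual - forecast))

# Mean Absolute Percentage Error: the same errors relative to actuals, in %.
mape = 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(f"MAD = {mad:.2f}, MAPE = {mape:.2f}%")
```

MAPE is unit-free, which is why it is the natural quantity to compare against a threshold such as the 9.77% reported above.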
Optimal quantum error correcting codes from absolutely maximally entangled states
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension…
Sub-nanometer periodic nonlinearity error in absolute distance interferometers
Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang
2015-05-01
Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus, the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.
Study of errors in absolute flux density measurements of Cassiopeia A
International Nuclear Information System (INIS)
Kanda, M.
1975-10-01
An error analysis for absolute flux density measurements of Cassiopeia A is discussed. The lower-bound quadrature-accumulation error for state-of-the-art measurements of the absolute flux density of Cas A around 7 GHz is estimated to be 1.71% for 3-sigma limits. The corresponding practicable error for a careful but not state-of-the-art measurement is estimated to be 4.46% for 3-sigma limits.
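"Quadrature accumulation" combines independent error components as a root sum of squares. A minimal sketch; the component names and values are invented for illustration, not Kanda's actual budget:

```python
import math

# Hypothetical 1-sigma fractional error components (in percent) for an
# absolute flux-density measurement; illustrative values only.
components = {
    "antenna gain": 0.40,
    "impedance mismatch": 0.20,
    "receiver nonlinearity": 0.25,
    "atmospheric attenuation": 0.15,
    "pointing": 0.10,
}

# Quadrature accumulation: independent errors add as the root sum of squares.
one_sigma = math.sqrt(sum(v ** 2 for v in components.values()))
three_sigma = 3.0 * one_sigma
print(f"1-sigma = {one_sigma:.2f}%, 3-sigma = {three_sigma:.2f}%")
```

The 3-sigma figure quoted in the abstract is obtained the same way: triple the quadrature sum of the 1-sigma components.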
Gao, J.
2014-12-01
Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially explicit comparison of each error component showed that SQ error overstates all error components relative to ABS error, especially the variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a…
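The additive bias-variance-noise decomposition of squared error can be verified numerically. A sketch using a deliberately biased (shrunken) estimator; all quantities are simulated and have no connection to the imperviousness study:

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 2.0
noise_sd = 0.5            # observation noise level
n_trials = 200_000

# Each trial: draw a small training sample, form a deliberately shrunk
# (hence biased) estimate, then score it against a fresh noisy observation.
samples = true_value + noise_sd * rng.standard_normal((n_trials, 5))
estimates = 0.8 * samples.mean(axis=1)      # shrinkage introduces bias
observations = true_value + noise_sd * rng.standard_normal(n_trials)

expected_sq_error = np.mean((estimates - observations) ** 2)

bias_sq = (np.mean(estimates) - true_value) ** 2   # systematic error, squared
variance = np.var(estimates)                       # model sensitivity
noise = noise_sd ** 2                              # observation instability

# Squared error decomposes additively into bias^2 + variance + noise;
# absolute error does not, which is the paper's point of departure.
print(expected_sq_error, bias_sq + variance + noise)
```

All three terms enter with a plus sign, matching the abstract's statement that under SQ error every component increases the expected error.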
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
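The two advocated statistics can be read directly off the empirical CDF of unsigned errors. A sketch on simulated, skewed, non-zero-centered errors (not a real benchmark dataset); the threshold and confidence level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated benchmark errors: skewed and not zero-centered, as the paper
# argues is typical of computational-chemistry error distributions.
errors = rng.normal(0.5, 1.0, size=1000) + rng.exponential(0.5, size=1000)
abs_errors = np.abs(errors)

# (1) Probability that a new calculation has an absolute error below a
# chosen threshold, read off the empirical CDF of unsigned errors.
threshold = 1.0
p_below = np.mean(abs_errors < threshold)

# (2) Maximal error amplitude expected at a chosen high confidence level:
# here the 95th percentile of the unsigned-error distribution.
q95 = np.quantile(abs_errors, 0.95)

print(f"P(|err| < {threshold}) = {p_below:.3f};  Q95(|err|) = {q95:.2f}")
```

Unlike the mean signed or unsigned error, both quantities remain meaningful when the error distribution is asymmetric or shifted.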
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm…
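A minimal five-point finite-difference solve for Poisson's equation, checked against a known exact solution so that absolute and relative errors can be measured. This single-grid Gauss-Seidel sketch omits the paper's three-grid error-control algorithm; grid size and sweep count are arbitrary:

```python
import numpy as np

# Solve ∇²u = f on the unit square with homogeneous Dirichlet boundary,
# using the five-point stencil, and compare with the exact solution
# u(x,y) = sin(πx)·sin(πy), for which f = -2π²·u.
n = 17                               # grid points per side, incl. boundary
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi ** 2 * u_exact

u = np.zeros((n, n))                 # boundary rows/columns stay at zero
for _ in range(1500):                # plain Gauss-Seidel sweeps
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                              + u[i, j - 1] + u[i, j + 1] - h * h * f[i, j])

abs_err = np.max(np.abs(u - u_exact))        # absolute error
rel_err = abs_err / np.max(np.abs(u_exact))  # relative error (max |u| = 1)
print(abs_err, rel_err)
```

The residual error after convergence is the O(h²) discretization error; comparing it across grids of different resolution is the idea behind the paper's multi-grid error control.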
Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors
International Nuclear Information System (INIS)
Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa
2017-01-01
Highlights: • A new method to match time series is defined to assess energy forecasting accuracy. • This method relies in a new family of step patterns that optimizes the MAE. • A new definition of the Temporal Distortion Index between two series is provided. • A parametric extension controls both the temporal distortion index and the MAE. • Pareto optimal transformations of the forecast series are obtained for both indexes. - Abstract: Recent years have seen a growing trend in wind and solar energy generation globally and it is expected that an important percentage of total energy production comes from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors have a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular, in the timing component of prediction error, which improves previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm to evaluate the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio… The performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations, assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error.
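The flavor of ML cardinality estimation can be sketched for a single idealized framed-slotted ALOHA session without detection errors (the paper's setting, with unreliable detection across multiple sessions, is more involved). All parameters below are invented:

```python
import math
import numpy as np

rng = np.random.default_rng(7)
n_tags_true = 120       # the unknown quantity we try to estimate
n_slots = 64            # frame size of the framed-slotted ALOHA round

# One idealized reader session: each tag picks a slot uniformly at random;
# the reader only observes empty / singleton / collision slots.
counts = np.bincount(rng.integers(0, n_slots, size=n_tags_true),
                     minlength=n_slots)
n_empty = int(np.sum(counts == 0))
n_single = int(np.sum(counts == 1))
n_coll = n_slots - n_empty - n_single

def log_likelihood(n_tags):
    # Per-slot outcome probabilities under Binomial(n_tags, 1/n_slots),
    # treating slots as independent (a standard approximation).
    q = 1.0 / n_slots
    p0 = (1.0 - q) ** n_tags
    p1 = n_tags * q * (1.0 - q) ** (n_tags - 1)
    pc = max(1.0 - p0 - p1, 1e-300)
    return (n_empty * math.log(p0) + n_single * math.log(p1)
            + n_coll * math.log(pc))

# ML estimate: the candidate cardinality maximizing the log-likelihood.
n_hat = max(range(1, 1000), key=log_likelihood)
print("ML estimate:", n_hat, "true:", n_tags_true)
```

Detection errors would enter by mixing these outcome probabilities with miss/false-alarm rates, which is where the paper's analysis goes beyond this sketch.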
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can be transferred successfully to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.
International Nuclear Information System (INIS)
Erdtmann, G.
1993-08-01
A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic-Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic-Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisted.
1985-05-08
…can also be induced by equipment not associated with the system. A systematic bias of 68 μgal was observed by the Istituto di Metrologia "G. Colonnetti" (IMGC, Torino, Italy). […] Measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters; these instruments were operated by the following agencies…
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
International Nuclear Information System (INIS)
Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa
2015-01-01
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach
Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano
2017-01-01
Measurement of serum biomarkers by multiplex assays may be more variable than single-biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or is estimated with replication data, a simple measurement error correction method can be applied to the LASSO. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data, in which a subset of serum biomarkers is re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
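The attenuating effect of biomarker measurement error on LASSO estimates shows up in a small simulation. The coordinate-descent solver below is a generic textbook LASSO, not the paper's corrected estimator, and all data are simulated:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_sweeps=200):
    # Minimal coordinate-descent LASSO: minimizes
    # (1/(2n))·||y - Xb||² + alpha·||b||₁ via soft-thresholding.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                                  # residual for b = 0
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                   # remove j's contribution
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(1)
n, p = 200, 20

# True outcome depends on the first 3 of 20 hypothetical biomarkers.
X_true = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X_true @ beta + rng.standard_normal(n)

# Multiplex assays observe the biomarkers with additive measurement error.
X_obs = X_true + 0.5 * rng.standard_normal((n, p))

coef_clean = lasso_cd(X_true, y, alpha=0.1)
coef_noisy = lasso_cd(X_obs, y, alpha=0.1)

# Measurement error attenuates the estimated effects toward zero.
print(np.abs(coef_clean).sum(), np.abs(coef_noisy).sum())
```

This attenuation is precisely the bias the paper's validation-data correction is designed to undo.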
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when constraints are put on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
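The multiplicity component of this inflation, a naive procedure that ignores the many-to-one comparison, is easy to reproduce by simulation under the global null; the paper's worst-case search over adaptation rules is beyond this sketch. All design parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n, n_sim = 4, 100, 50_000     # 4 treatment arms, n per arm, simulations

# Global null: all treatment arms and the control share the same true mean.
arm_means = rng.standard_normal((n_sim, k)) / np.sqrt(n)
ctrl_mean = rng.standard_normal(n_sim) / np.sqrt(n)

# Naive procedure: z-test of the best-looking arm vs control at one-sided
# 5%, ignoring that k arms were compared (z_crit = Phi^{-1}(0.95) ≈ 1.6449).
z = (arm_means - ctrl_mean[:, None]) / np.sqrt(2.0 / n)
rejection_rate = np.mean(z.max(axis=1) > 1.6449)
print("empirical type 1 error:", rejection_rate)
```

The empirical rate lands well above the nominal 5%; a Dunnett critical value for the shared-control correlation structure would restore control, before any sample size modification is even considered.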
Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders
2010-06-01
Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently, many promising approaches for determining an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the results on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
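The effect of a non-uniform prior on the width of the Bayesian CI for an error rate can be sketched with Beta posteriors. The informative prior below is a hypothetical stand-in for an ME-derived prior (centered near an error rate of 0.2, as if from earlier design/test rounds), not the paper's construction:

```python
import numpy as np

def beta_interval(a, b, mass=0.95, grid=100_001):
    # Central credible interval of a Beta(a, b) density via a numerical CDF.
    p = np.linspace(1e-9, 1.0 - 1e-9, grid)
    logpdf = (a - 1.0) * np.log(p) + (b - 1.0) * np.log(1.0 - p)
    pdf = np.exp(logpdf - logpdf.max())
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    lo = p[np.searchsorted(cdf, (1.0 - mass) / 2.0)]
    hi = p[np.searchsorted(cdf, 1.0 - (1.0 - mass) / 2.0)]
    return lo, hi

errors, n_test = 8, 40     # 8 misclassifications in a 40-example holdout

# Uniform prior Beta(1, 1): complete ignorance about the error rate.
lo_u, hi_u = beta_interval(1 + errors, 1 + n_test - errors)

# Hypothetical informative prior Beta(8, 32), standing in for an ME prior.
lo_m, hi_m = beta_interval(8 + errors, 32 + n_test - errors)

print("uniform-prior CI width:    ", hi_u - lo_u)
print("informative-prior CI width:", hi_m - lo_m)
```

The informative prior acts like extra pseudo-examples and tightens the interval, which is the practical payoff the abstract describes for small test sets.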
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John
2017-08-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α } with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
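The core problem motivating the abstract, that ignoring temporal correlation makes formal rate errors far too low, shows up in a small simulation with random-walk (power-law, α = 2) plus white noise. This illustrates the motivation rather than the MLE machinery itself; all noise amplitudes are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_sim = 500, 400

t = np.arange(n, dtype=float)
A = np.vstack([np.ones(n), t]).T          # design matrix: intercept + rate

# Random-walk (power-law, alpha = 2) plus white noise; true rate is zero.
white = rng.standard_normal((n_sim, n))
series = np.cumsum(0.1 * rng.standard_normal((n_sim, n)), axis=1) + white

rates = np.empty(n_sim)
formal = np.empty(n_sim)
for i in range(n_sim):
    coef, res, *_ = np.linalg.lstsq(A, series[i], rcond=None)
    rates[i] = coef[1]
    # Formal OLS rate error under the (wrong) white-noise-only assumption.
    sigma2 = res[0] / (n - 2)
    formal[i] = np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[1, 1])

# True rate scatter across realizations vs the mean formal error.
print(rates.std(), formal.mean())
```

The empirical scatter of the fitted rates exceeds the formal white-noise error many times over, which is why the covariance-matrix (or equivalent filter-based) MLE treatment of correlated noise is needed.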
Regional compensation for statistical maximum likelihood reconstruction error of PET image pixels
International Nuclear Information System (INIS)
Forma, J; Ruotsalainen, U; Niemi, J A
2013-01-01
In positron emission tomography (PET), there is increasing interest in studying not only the regional mean tracer concentration but also its variation arising from local differences in physiology, i.e., tissue heterogeneity. In reconstructed images, however, this physiological variation is shadowed by a large reconstruction error, which is caused by noisy data and the inversion of the tomographic problem. We present a new procedure that can quantify the error variation in regional reconstructed values for a given PET measurement and reveal the remaining tissue heterogeneity. The error quantification is made by creating and reconstructing noise realizations of virtual sinograms, which are statistically similar to the measured sinogram. Tests with physical phantom data show that characterization of the error variation and the true heterogeneity is possible, despite the existing model error when a real measurement is considered. (paper)
International Nuclear Information System (INIS)
Krisciunas, Kevin; Marion, G. H.; Suntzeff, Nicholas B.
2009-01-01
We obtained optical photometry of SN 2003gs on 49 nights, from 2 to 494 days after T(Bmax). We also obtained near-IR photometry on 21 nights. SN 2003gs was the first fast-declining Type Ia SN that has been well observed since SN 1999by. While it was subluminous in optical bands compared to more slowly declining Type Ia SNe, it was not subluminous at maximum light in the near-IR bands. There appears to be a bimodal distribution in the near-IR absolute magnitudes of Type Ia SNe at maximum light. Those that peak in the near-IR after T(Bmax) are subluminous in all bands. Those that peak in the near-IR prior to T(Bmax), such as SN 2003gs, have effectively the same near-IR absolute magnitudes at maximum light regardless of the decline rate Δm15(B). Near-IR spectral evidence suggests that opacities in the outer layers of SN 2003gs are reduced much earlier than for normal Type Ia SNe. That may allow the γ rays that power the luminosity to escape more rapidly and accelerate the decline rate. This conclusion is consistent with the photometric behavior of SN 2003gs in the IR, which indicates a faster than normal decline from approximately normal peak brightness.
Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.
2014-01-01
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545
Maximum error-bounded Piecewise Linear Representation for online stream approximation
Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke
2014-01-01
Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) constructs a number of consecutive line segments to approximate the stream such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and we aim at designing algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been designed in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms, named OptimalPLR and GreedyPLR, to construct error-bounded PLR for data streams in the time domain. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for high-efficiency and resource-constrained environments. To evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and the transformed space, and found that OptimalPLR is superior in processing efficiency in practice. Extensive empirical evaluation demonstrates the effectiveness and efficiency of our proposed algorithms.
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Karátson, J.
2017-01-01
Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords: finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub
International Nuclear Information System (INIS)
Bilbao, L.; Bruzzone, H.; Grondona, D.
1994-01-01
The reliable determination of a plasma electron structure requires a good knowledge of the errors affecting the employed technique. A technique based on the measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used, but it can be easily modified to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
International Nuclear Information System (INIS)
Beck, S.M.
1975-04-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
Energy Technology Data Exchange (ETDEWEB)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and the allocation rate to the treatment arms can be modified in an interim analysis. It is thereby assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Bind, A.K.; Sunil, Saurav; Singh, R.N.; Chakravartty, J.K.
2016-03-01
Recently it was found that the maximum load toughness (Jmax) for Zr-2.5Nb pressure tube material was practically unaffected by error in Δa. To check the sensitivity of Jmax to error in the Δa measurement, Jmax was calculated assuming no crack growth up to the maximum load (Pmax) for as-received and hydrogen-charged Zr-2.5Nb pressure tube material. For loads up to Pmax, the J values calculated assuming no crack growth (JNC) were slightly higher than those calculated from Δa measured using the DCPD technique (JDCPD). In general, the error in the J calculation was found to increase exponentially with Δa. The error in the Jmax calculation increased with an increase in Δa and a decrease in Jmax. Based on the deformation theory of J, an analytic criterion was developed to check the insensitivity of Jmax to error in Δa. A very good linear relation was found between the Jmax calculated from Δa measured using the DCPD technique and the Jmax calculated assuming no crack growth. This relation will be very useful for calculating Jmax without measuring crack growth during fracture tests, especially for irradiated material. (author)
Energy Technology Data Exchange (ETDEWEB)
Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
J.G.M. van Marrewijk (Charles)
2008-01-01
A country is said to have an absolute advantage over another country in the production of a good or service if it can produce that good or service using fewer real resources. Equivalently, using the same inputs, the country can produce more output. The concept of absolute advantage can
Parnis, J Mark; Mackay, Donald
2017-03-22
A series of 12 oligomeric models for polydimethylsiloxane (PDMS) were evaluated for their effectiveness in estimating the PDMS-water partition ratio, K_PDMS-w. Models ranging in size and complexity from the -Si(CH3)2-O- model previously published by Goss in 2011 to octadecamethyloctasiloxane (CH3-(Si(CH3)2-O-)8CH3) were assessed based on their RMS error against 253 experimental measurements of log K_PDMS-w from six published works. The lowest RMS error for log K_PDMS-w (0.40 in log K) was obtained with the cyclic oligomer decamethylcyclopentasiloxane (D5), (-Si(CH3)2-O-)5, with the mixing-entropy-associated combinatorial term included in the chemical potential calculation. The presence or absence of terminal methyl groups on linear oligomer models is shown to have a significant impact only for oligomers containing 1 or 2 -Si(CH3)2-O- units. Removal of the combinatorial term resulted in a significant increase in the RMS error for most models, with the smallest increase associated with the largest oligomer studied. The importance of including the combinatorial term in the chemical potential for liquid oligomer models is discussed.
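The model-selection criterion here is simply the root-mean-square deviation between predicted and experimental log K values; a minimal sketch (the numbers below are illustrative, not the paper's data):

```python
import math

def rms_log_error(predicted, experimental):
    """RMS error between predicted and experimental log K values."""
    diffs = [p - e for p, e in zip(predicted, experimental)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# pick the oligomer model with the lowest RMS error (illustrative values)
models = {
    "monomer": [2.1, 3.4, 1.2],   # predicted log K for three solutes
    "D5":      [2.4, 3.1, 1.6],
}
measured = [2.5, 3.0, 1.5]
best = min(models, key=lambda m: rms_log_error(models[m], measured))
```

With 253 measurements the same criterion ranks all 12 oligomer models at once.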
Phillips, Alfred, Jr.
Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two-postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six-million-year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.
International Nuclear Information System (INIS)
Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex
2012-01-01
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU >> ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
Absolute transition probabilities in the NeI 3p-3s fine structure by beam-gas-dye laser spectroscopy
International Nuclear Information System (INIS)
Hartmetz, P.; Schmoranzer, H.
1983-01-01
The beam-gas-dye laser two-step excitation technique is further developed and applied to the direct measurement of absolute atomic transition probabilities in the NeI 3p-3s fine-structure transition array with a maximum experimental error of 5%. (orig.)
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute magnitudes by statistical parallaxes
International Nuclear Information System (INIS)
Heck, A.
1978-01-01
The author describes an algorithm for stellar luminosity calibrations (based on the principle of maximum likelihood) which allows the calibration of relations of the type: M_i = Σ_{j=1..N} q_j C_ij, i = 1, ..., n, where n is the size of the sample at hand, M_i are the individual absolute magnitudes, C_ij are observational quantities (j = 1, ..., N), and q_j are the coefficients to be determined. If one puts N = 1 and C_iN = 1, one has q_1 = M(mean), the mean absolute magnitude of the sample. As additional output, the algorithm also provides the dispersion in magnitude of the sample, sigma_M, the mean solar motion (U, V, W) and the corresponding velocity ellipsoid (sigma_u, sigma_v, sigma_w). The use of this algorithm is illustrated. (Auth.)
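Under Gaussian scatter, the maximum-likelihood fit of the coefficients q_j reduces to linear least squares; a minimal numerical sketch on synthetic data (ignoring the solar-motion and velocity-ellipsoid parts of the full algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# observational quantities C_ij: e.g. a colour index plus a constant term
C = np.column_stack([rng.uniform(0.0, 1.5, n), np.ones(n)])
q_true = np.array([3.2, -1.0])                     # coefficients to recover
M = C @ q_true + rng.normal(0.0, 0.3, n)           # absolute magnitudes

# maximum likelihood with Gaussian errors = ordinary least squares
q_hat, *_ = np.linalg.lstsq(C, M, rcond=None)
sigma_M = np.sqrt(np.mean((M - C @ q_hat) ** 2))   # magnitude dispersion
```

Setting the design matrix to a single constant column recovers the special case q_1 = M(mean) mentioned in the abstract.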
Danish Towns during Absolutism
DEFF Research Database (Denmark)
This anthology, No. 4 in the Danish Urban Studies Series, presents in English recent significant research on Denmark's urban development during the Age of Absolutism, 1660-1848, and features 13 articles written by leading Danish urban historians. The years of Absolutism were marked by a general...
DEFF Research Database (Denmark)
Schechter, J.; Shahid, M. N.
2012-01-01
We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.
National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...
Indian Academy of Sciences (India)
more and more difficult to remove heat as one approaches absolute zero. This is the ... A new and active branch of engineering ... This temperature is called the critical temperature, Te' For sulfur dioxide the critical ..... adsorbent charcoal.
Directory of Open Access Journals (Sweden)
Uroš Martinčič
2014-05-01
The paper explores the issue of structure and case in English absolute constructions, whose subjects are deduced by several descriptive grammars as being in the nominative case due to its supposed neutrality in terms of register. This deduction is countered by systematic accounts presented within the framework of the Minimalist Program which relate the case of absolute constructions to specific grammatical factors. Each proposal is shown as an attempt of analysing absolute constructions as basic predication structures, either full clauses or small clauses. I argue in favour of the small clause approach due to its minimal reliance on transformations and unique stipulations. Furthermore, I propose that small clauses project a singular category, and show that the use of two cases in English absolute constructions can be accounted for if they are analysed as depictive phrases, possibly selected by prepositions. The case of the subject in absolutes is shown to be a result of syntactic and non-syntactic factors. I thus argue in accordance with Minimalist goals that syntactic case does not exist, attributing its role in absolutes to other mechanisms.
Rectangular maximum-volume submatrices and their applications
Mikhalev, Aleksandr; Oseledets, I.V.
2017-01-01
We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.
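The rectangular volume in question is the square root of the determinant of the Gram matrix of the smaller dimension, which reduces to |det| for square matrices. A toy greedy row-selection sketch (an illustration only, not the authors' algorithm, which is far more efficient):

```python
import numpy as np

def volume(S):
    """Volume of a rectangular matrix: sqrt(det of the Gram matrix of
    the smaller dimension); equals |det(S)| when S is square."""
    m, n = S.shape
    G = S @ S.T if m <= n else S.T @ S
    return float(np.sqrt(max(np.linalg.det(G), 0.0)))

def greedy_maxvol_rows(A, k):
    """Greedily pick k row indices of A that approximately maximise the
    volume of the selected submatrix (simple exhaustive sweep per step)."""
    chosen = []
    for _ in range(k):
        best = max((i for i in range(A.shape[0]) if i not in chosen),
                   key=lambda i: volume(A[chosen + [i], :]))
        chosen.append(best)
    return chosen
```

Selecting rows of large volume is exactly what makes such submatrices useful for preconditioning and for finding large elements in low-rank matrices.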
International Nuclear Information System (INIS)
Baba, Hiroshi; Baba, Sumiko; Ichikawa, Shinichi; Sekine, Toshiaki; Ishikawa, Isamu
1981-08-01
A new method for the absolute measurement of 152Eu was established, based on the 4πβ-γ spectroscopic anti-coincidence method. It is a coincidence counting method consisting of a 4πβ counter and a Ge(Li) γ-ray detector, in which the effective counting efficiencies of the 4πβ counter for β rays, conversion electrons, and Auger electrons were obtained by taking the intensity ratios of certain γ rays between the singles spectrum and the spectrum coincident with the pulses from the 4πβ counter. First, to verify the method, three different methods of absolute measurement were performed with a prepared 60Co source, and excellent agreement was found among the results they gave. Next, the 4πβ-γ spectroscopic coincidence measurement was applied to 152Eu sources prepared by irradiating an enriched 151Eu target in a reactor. The result was compared with that obtained by γ-ray spectrometry using a 152Eu standard source supplied by LMRI. They agreed with each other within an error of 2%. (author)
Calibration with Absolute Shrinkage
DEFF Research Database (Denmark)
Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul
2001-01-01
In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...
Indian Academy of Sciences (India)
Approach to Absolute Zero Below 10 milli-Kelvin. R Srinivasan. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 10, October 1997, pp. 8-16. Permanent link: https://www.ias.ac.in/article/fulltext/reso/002/10/0008-0016
Effekten af absolut kumulation
DEFF Research Database (Denmark)
Kyvsgaard, Britta; Klement, Christian
2012-01-01
As part of the 2011 Finance Act, the government and the coalition parties agreed to examine the rules on sentencing when several criminal offences are adjudicated together, and in that connection to assess the consequences of changing the current rules for the capacity needs of the Danish Prison and Probation Service... total fine under absolute cumulation compared with the moderated cumulation that currently applies.
Towards absolute neutrino masses
Energy Technology Data Exchange (ETDEWEB)
Vogel, Petr [Kellogg Radiation Laboratory 106-38, Caltech, Pasadena, CA 91125 (United States)
2007-06-15
Various ways of determining the absolute neutrino masses are briefly reviewed and their sensitivities compared. The apparent tension between the announced but unconfirmed observation of the 0νββ decay and the neutrino mass upper limit based on observational cosmology is used as an example of what could happen eventually. The possibility of a 'nonstandard' mechanism of the 0νββ decay is stressed and the ways of deciding which of the possible mechanisms is actually operational are described. The importance of the 0νββ nuclear matrix elements is discussed and their uncertainty estimated.
Thermodynamics of negative absolute pressures
International Nuclear Information System (INIS)
Lukacs, B.; Martinas, K.
1984-03-01
The authors show that the possibility of negative absolute pressure can be incorporated into the axiomatic thermodynamics, analogously to the negative absolute temperature. There are examples for such systems (GUT, QCD) processing negative absolute pressure in such domains where it can be expected from thermodynamical considerations. (author)
Absolute Gravimetry in Fennoscandia
DEFF Research Database (Denmark)
Pettersen, B. R; TImmen, L.; Gitlein, O.
The Fennoscandian postglacial uplift has been mapped geometrically using precise levelling, tide gauges, and networks of permanent GPS stations. The results identify major uplift rates at sites located around the northern part of the Gulf of Bothnia. The vertical motions decay in all directions...... motions) has its major axis in the direction of southwest to northeast and covers a distance of about 2000 km. Absolute gravimetry was made in Finland and Norway in 1976 with a rise-and fall instrument. A decade later the number of gravity stations was expanded by JILAg-5, in Finland from 1988, in Norway...... time series of several years are now available. Along the coast there are nearby tide gauge stations, many of which have time series of several decades. We describe the observing network, procedures, auxiliary observations, and discuss results obtained for selected sites. We compare the gravity results...
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
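The zoning idea can be illustrated with synthetic data: below a glucose threshold the error has constant absolute SD, above it constant relative SD. A minimal sketch (the 75 mg/dL threshold and the SD values are illustrative assumptions, and Gaussian noise stands in for the paper's skew-normal model):

```python
import numpy as np

rng = np.random.default_rng(1)
THRESH = 75.0               # hypothetical zone boundary in mg/dL
ABS_SD, REL_SD = 5.0, 0.06  # zone 1: absolute SD; zone 2: relative SD

ref = rng.uniform(40.0, 400.0, 20000)        # reference blood glucose values
err = np.where(ref < THRESH,
               rng.normal(0.0, ABS_SD, ref.size),        # mg/dL error
               ref * rng.normal(0.0, REL_SD, ref.size))  # proportional error
meter = ref + err                            # simulated SMBG readings

# recover the per-zone constant SDs from the simulated data
z1 = ref < THRESH
sd_abs_zone1 = np.std(meter[z1] - ref[z1])                 # ~ ABS_SD
sd_rel_zone2 = np.std((meter[~z1] - ref[~z1]) / ref[~z1])  # ~ REL_SD
```

In the actual methodology, a skew-normal PDF (plus an exponential model for outliers) would then be fitted by maximum likelihood within each zone.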
Absolute risk, absolute risk reduction and relative risk
Directory of Open Access Journals (Sweden)
Jose Andres Calvache
2012-12-01
This article illustrates the epidemiological concepts of absolute risk, absolute risk reduction and relative risk through a clinical example. In addition, it emphasizes the usefulness of these concepts in clinical practice, clinical research and the health decision-making process.
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
Projective absoluteness for Sacks forcing
Ikegami, D.
2009-01-01
We show that Σ^1_3-absoluteness for Sacks forcing is equivalent to the nonexistence of a Δ^1_2 Bernstein set. We also show that Sacks forcing is the weakest forcing notion among all of the preorders that add a new real with respect to Σ^1_3 forcing absoluteness.
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
Greco, Filippo; Biolcati, Emanuele; Pistorio, Antonio; D'Agostino, Giancarlo; Germak, Alessandro; Origlia, Claudio; Del Negro, Ciro
2015-03-01
The performance of two absolute gravimeters at three different sites in Italy between 2009 and 2011 is presented. The measurements of the gravity acceleration g were performed using the absolute gravimeters Micro-g LaCoste FG5#238 and the INRiM prototype IMGC-02, which represent the state of the art in ballistic gravimeter technology (relative uncertainty of a few parts in 10^9). For the comparison, the measured g values were reported at the same height by means of the vertical gravity gradient estimated at each site with relative gravimeters. The consistency and reliability of the gravity observations, as well as the performance and efficiency of the instruments, were assessed by measurements made in sites characterized by different logistics and environmental conditions. Furthermore, the various factors affecting the measurements and their uncertainty were thoroughly investigated. The measurements showed good agreement, with minimum and maximum differences of 4.0 and 8.3 μGal. The normalized errors are much lower than 1, ranging between 0.06 and 0.45, confirming the compatibility of the results. This excellent agreement can be attributed to several factors, including the good working order of the gravimeters and the correct setup and use of the instruments in different conditions. These results can contribute to the standardization of absolute gravity surveys, largely for applications in geophysics, volcanology and other branches of the geosciences, allowing a good trade-off between uncertainty and efficiency of gravity measurements to be achieved.
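The normalized error quoted here is the difference between the two gravity determinations divided by the root-sum-square of their uncertainties; a value below 1 indicates compatibility. A one-line sketch:

```python
def normalized_error(g1, u1, g2, u2):
    """Normalized error E_n between two gravity determinations.

    g1, g2: measured gravity values (e.g. in microGal); u1, u2: their
    expanded uncertainties. |E_n| < 1 indicates the results agree
    within their stated uncertainties.
    """
    return abs(g1 - g2) / (u1 ** 2 + u2 ** 2) ** 0.5
```

For example, a 4.0 μGal difference with uncertainties of 6.3 μGal on each instrument gives E_n of about 0.45, comfortably within the compatibility threshold.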
Definition of correcting factors for absolute radon content measurement formula
International Nuclear Information System (INIS)
Ji Changsong; Xiao Ziyun; Yang Jianfeng
1992-01-01
The absolute method of radon content measurement is based on the Thomas radon measurement formula. It was found experimentally that a systematic error exists in radon content measurements made by means of the Thomas formula. From an analysis of the behaviour of radon daughters, five factors, including filter efficiency, detector construction factor, self-absorbance, energy spectrum factor, and gravity factor, were introduced into the Thomas formula, so that the systematic error is eliminated. The measuring methods for the five factors are given
Cryogenic, Absolute, High Pressure Sensor
Chapman, John J. (Inventor); Shams, Qamar A. (Inventor); Powers, William T. (Inventor)
2001-01-01
A pressure sensor is provided for cryogenic, high pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate. Aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.
Partial sums of arithmetical functions with absolutely convergent ...
Indian Academy of Sciences (India)
For an arithmetical function f with absolutely convergent Ramanujan expansion, we derive an asymptotic formula for the partial sum ∑_{n≤N} f(n) with an explicit error term. As a corollary we obtain new results about sum-of-divisors functions and Jordan's totient functions.
International Nuclear Information System (INIS)
Knuefer; Lindauer
1980-01-01
At spectacular events, a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)
Absolute GPS Positioning Using Genetic Algorithms
Ramillien, G.
A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-square scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
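The GA inversion described above can be sketched as a simple evolutionary search that minimizes the sum of squared pseudo-range residuals. The satellite coordinates, the synthetic receiver position, and the population/generation settings below are illustrative assumptions (only the crossover and mutation rates echo the abstract; a smaller population than N=1000 is used for speed); this is a minimal sketch, not the author's implementation.

```python
import random
import math

# Hypothetical satellite positions (m, geocentric frame) and a "true"
# receiver location used only to synthesize noise-free pseudo-ranges.
SATS = [(15600e3, 7540e3, 20140e3),
        (18760e3, 2750e3, 18610e3),
        (17610e3, 14630e3, 13480e3)]
TRUE = (-730e3, -5440e3, 3230e3)

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

RANGES = [dist(TRUE, s) for s in SATS]

def cost(x):
    # Non-linear least-squares cost over the three pseudo-ranges
    return sum((dist(x, s) - r) ** 2 for s, r in zip(SATS, RANGES))

def ga(pop_size=200, gens=300, pc=0.65, pm=0.35, span=7000e3):
    random.seed(1)
    pop = [tuple(random.uniform(-span, span) for _ in range(3))
           for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=cost)
        elite = pop[:pop_size // 4]          # keep the fittest quarter
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            if random.random() < pc:          # blend crossover
                w = random.random()
                child = tuple(w * ai + (1 - w) * bi for ai, bi in zip(a, b))
            else:
                child = a
            if random.random() < pm:          # Gaussian mutation, shrinking
                sigma = span * 0.5 ** (g / 30)
                child = tuple(ci + random.gauss(0, sigma) for ci in child)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(cost(best))
```

Because the elite individuals survive unmutated, the best cost never worsens between generations; the shrinking mutation scale trades early exploration for late refinement.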
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate receiver function, with the maximum entropy as the rule to determine auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of error-predicting filter, and receiver function is then estimated. During extrapolation, reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside window increases the resolution of receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver function in time-domain.
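The Toeplitz-system step of the method can be illustrated with the classical Levinson-Durbin recursion, which computes the prediction-error filter from an autocorrelation sequence; note how the reflection coefficient stays below 1 in magnitude, the stability property the abstract mentions. The example autocorrelation values are an assumed AR(1) toy case, not data from the paper.

```python
def levinson_durbin(r):
    """Solve the Toeplitz normal equations for the prediction-error
    filter via the Levinson-Durbin recursion.
    r: autocorrelation sequence r[0..p]. Returns (a, err) where a are
    the filter coefficients (a[0] = 1) and err the prediction error power."""
    p = len(r) - 1
    a = [0.0] * (p + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, p + 1):
        # Reflection coefficient; |k| < 1 keeps the recursion stable,
        # the property noted in the abstract.
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err

# AR(1) toy process with coefficient 0.5: r[m] = 0.5**m
coeffs, e = levinson_durbin([1.0, 0.5, 0.25])
print(coeffs, e)   # expect filter [1, -0.5, 0], error power 0.75
```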
Error Analysis of Determining Airplane Location by Global Positioning System
Hajiyev, Chingiz; Burat, Alper
1999-01-01
This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
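A minimal sketch of the Newton-Raphson positioning step: four pseudo-range equations in four unknowns (receiver coordinates plus a clock bias expressed in metres) are linearized and solved iteratively. The satellite positions, true location, and bias below are hypothetical values used only to synthesize consistent data; this is not the paper's implementation.

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for the 4x4 Newton step
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

SATS = [(15600e3, 7540e3, 20140e3), (18760e3, 2750e3, 18610e3),
        (17610e3, 14630e3, 13480e3), (19170e3, 610e3, 18390e3)]
TRUE = (-730e3, -5440e3, 3230e3)
BIAS = 85.0  # hypothetical receiver clock bias in metres

def dist(a, s):
    return math.sqrt(sum((ai - si) ** 2 for ai, si in zip(a, s)))

RHO = [dist(TRUE, s) + BIAS for s in SATS]  # synthesized pseudo-ranges

def newton(guess, iters=10):
    x = list(guess)  # [x, y, z, clock bias]
    for _ in range(iters):
        F, J = [], []
        for s, rho in zip(SATS, RHO):
            d = dist(x[:3], s)
            F.append(d + x[3] - rho)          # pseudo-range residual
            J.append([(x[i] - s[i]) / d for i in range(3)] + [1.0])
        dx = solve(J, [-f for f in F])        # Newton update step
        x = [xi + di for xi, di in zip(x, dx)]
    return x

pos = newton([0.0, 0.0, 0.0, 0.0])
print(pos)
```

Starting the iteration from the Earth's center is a common choice; with four well-separated satellites the Jacobian is well conditioned and convergence is quadratic near the solution.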
Absolute flux scale for radioastronomy
International Nuclear Information System (INIS)
Ivanov, V.P.; Stankevich, K.S.
1986-01-01
The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies outside this range (above 6 GHz and below 0.96 GHz), the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized
Lyman alpha SMM/UVSP absolute calibration and geocoronal correction
Fontenla, Juan M.; Reichmann, Edwin J.
1987-01-01
Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.
Rational functions with maximal radius of absolute monotonicity
Loczi, Lajos; Ketcheson, David I.
2014-01-01
We study the radius of absolute monotonicity R of rational functions that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems, and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger
Absolute beam current monitoring in endstation c
International Nuclear Information System (INIS)
Bochna, C.
1995-01-01
The first few experiments at CEBAF require approximately 1% absolute measurements of beam currents expected to range from 10-25 μA. This represents errors of 100-250 nA. The initial complement of beam current monitors is of the non-intercepting type. The CEBAF accelerator division has provided a stripline monitor and a cavity monitor, and the authors have installed an Unser monitor (parametric current transformer or PCT). After calibrating the Unser monitor with a precision current reference, the authors plan to transfer this calibration using CW beam to the stripline monitors and cavity monitors. It is important that this be done fairly rapidly because, while the gain of the Unser monitor is quite stable, the offset may drift on the order of 0.5 μA per hour. A summary of what the authors have learned about the linearity, zero drift, and gain drift of each type of current monitor will be presented
International Nuclear Information System (INIS)
Winterflood, A.H.
1980-01-01
In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)
Relativistic Absolutism in Moral Education.
Vogt, W. Paul
1982-01-01
Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Some absolutely effective product methods
Directory of Open Access Journals (Sweden)
H. P. Dikshit
1992-01-01
It is proved that the product method A(C,1), where (C,1) is the Cesàro arithmetic mean matrix, is totally effective under certain conditions concerning the matrix A. This general result is applied to study the absolute Nörlund summability of Fourier series and other related series.
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
Maximum solid concentrations of coal water slurries predicted by neural network models
Energy Technology Data Exchange (ETDEWEB)
Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa
2010-12-15
The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)
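As a hedged illustration of the BP-network idea (not the authors' Levenberg-Marquardt-trained model), the sketch below trains a tiny one-hidden-layer network by plain gradient-descent backpropagation on synthetic data, with three inputs standing in for scaled HGI, moisture and oxygen/carbon ratio and an invented linear target.

```python
import math
import random

# Synthetic stand-in data: 3 scaled inputs per sample and one target
# (an invented linear relation, NOT the coal-slurry data of the paper).
random.seed(0)
DATA = []
for _ in range(40):
    x = (random.random(), random.random(), random.random())
    DATA.append((x, 0.7 * x[0] - 0.5 * x[1] - 0.3 * x[2] + 0.4))

H = 5  # hidden neurons
W1 = [[random.gauss(0, 0.5) for _ in range(3)] for _ in range(H)]
B1 = [0.0] * H
W2 = [random.gauss(0, 0.5) for _ in range(H)]
B2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, B1)]
    return h, sum(w * hi for w, hi in zip(W2, h)) + B2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA) / len(DATA)

lr = 0.05
loss0 = mse()
for _ in range(500):                      # plain SGD backpropagation
    for x, y in DATA:
        h, out = forward(x)
        g = 2 * (out - y)                 # dLoss/dout
        for i in range(H):
            gh = g * W2[i] * (1 - h[i] ** 2)   # back through tanh
            W2[i] -= lr * g * h[i]
            B1[i] -= lr * gh
            for j in range(3):
                W1[i][j] -= lr * gh * x[j]
        B2 -= lr * g
print(loss0, mse())   # training error drops well below the initial loss
```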
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
Absolute negative mobility in the anomalous diffusion
Chen, Ruyin; Chen, Chongyang; Nie, Linru
2017-12-01
Transport of an inertial Brownian particle driven by the multiplicative Lévy noise was investigated here. Numerical results indicate that: (i) The Lévy noise is able to induce absolute negative mobility (ANM) in the system, while disappearing in the deterministic case; (ii) the ANM can occur in the region of superdiffusion while disappearing in the region of normal diffusion, and the appropriate stable index of the Lévy noise makes the particle move along the opposite direction of the bias force to the maximum degree; (iii) symmetry breaking of the Lévy noise also causes the ANM effect. In addition, the intrinsic physical mechanism and conditions for the ANM to occur are discussed in detail. Our results have the implication that the Lévy noise plays an important role in the occurrence of the ANM phenomenon.
Minimum Tracking Error Volatility
Luca RICCETTI
2010-01-01
Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
Moral absolutism and ectopic pregnancy.
Kaczor, C
2001-02-01
If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.
Absolute gravity measurements in California
Zumberge, M. A.; Sasagawa, G.; Kappus, M.
1986-08-01
An absolute gravity meter that determines the local gravitational acceleration by timing a freely falling mass with a laser interferometer has been constructed. The instrument has made measurements at 11 sites in California, four in Nevada, and one in France. The uncertainty in the results is typically 10 microgal. Repeated measurements have been made at several of the sites; only one shows a substantial change in gravity.
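The core data reduction in such an instrument, recovering g from timed positions of the freely falling mass, amounts to a least-squares fit of the parabola z = z0 + v0·t + g·t²/2. The sketch below fits synthetic, noise-free drop data; the sample times and values are illustrative assumptions, not measurements from the instrument.

```python
def gauss(A, b):
    # small Gaussian elimination solver for the 3x3 normal equations
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_free_fall(ts, zs):
    """Least-squares fit of z = z0 + v0*t + 0.5*g*t**2; returns (z0, v0, g)."""
    A = [[1.0, t, 0.5 * t * t] for t in ts]   # design matrix columns
    AtA = [[sum(A[r][i] * A[r][j] for r in range(len(ts))) for j in range(3)]
           for i in range(3)]
    Atz = [sum(A[r][i] * zs[r] for r in range(len(ts))) for i in range(3)]
    return gauss(AtA, Atz)

# synthetic drop: g = 9.81 m/s^2, small release offset and velocity
ts = [i * 0.01 for i in range(20)]
zs = [0.001 + 0.02 * t + 0.5 * 9.81 * t * t for t in ts]
z0, v0, g = fit_free_fall(ts, zs)
print(g)
```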
The Absolute Immanence in Deleuze
Park, Daeseung
2013-01-01
The plane of immanence is not unique. Deleuze and Guattari suppose a multiplicity of planes. Each great philosopher draws new planes in his own way, and these planes constitute the "time of philosophy". We can, therefore, "present the entire history of philosophy from the viewpoint of the institution of a plane of immanence" or present the time of philosophy from the viewpoint of the superposition and of the coexistence of planes. Howev...
Practical application of the theory of errors in measurement
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made
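Selecting the number of measurements typically reduces to the standard normal-theory sample-size formula n ≥ (zσ/E)², where σ is the measurement standard deviation and E the maximum desired error of the mean. The sketch below applies that standard formula; the chapter's exact procedure is not reproduced here, and the numbers are illustrative.

```python
import math

def measurements_needed(sigma, max_error, confidence_z=1.96):
    """Number of independent measurements needed so the error of the
    mean stays below max_error at the given confidence level
    (z = 1.96 for ~95 %, z = 2.576 for ~99 %)."""
    return math.ceil((confidence_z * sigma / max_error) ** 2)

# sigma = 2 units of scatter, desired maximum error of +/- 0.5 units
print(measurements_needed(2.0, 0.5))   # -> 62 measurements at ~95 %
```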
Android Apps for Absolute Beginners
Jackson, Wallace
2011-01-01
Anybody can start building simple apps for the Android platform, and this book will show you how! Android Apps for Absolute Beginners takes you through the process of getting your first Android applications up and running using plain English and practical examples. It cuts through the fog of jargon and mystery that surrounds Android application development, and gives you simple, step-by-step instructions to get you started.* Teaches Android application development in language anyone can understand, giving you the best possible start in Android development * Provides simple, step-by-step exampl
Vernon, P E
1977-11-01
The auditory skill known as 'absolute pitch' is discussed, and it is shown that this differs greatly in accuracy of identification or reproduction of musical tones from ordinary discrimination of 'tonal height' which is to some extent trainable. The present writer possessed absolute pitch for almost any tone or chord over the normal musical range, from about the age of 17 to 52. He then started to hear all music one semitone too high, and now at the age of 71 it is heard a full tone above the true pitch. Tests were carried out under controlled conditions, in which 68 to 95 per cent of notes were identified as one semitone or one tone higher than they should be. Changes with ageing seem more likely to occur in the elasticity of the basilar membrane mechanisms than in the long-term memory which is used for aural analysis of complex sounds. Thus this experience supports the view that some resolution of complex sounds takes place at the peripheral sense organ, and this provides information which can be incorrect, for interpretation by the cortical centres.
Rational functions with maximal radius of absolute monotonicity
Loczi, Lajos
2014-05-19
We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, costs less, and avoids subsequent analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
International Nuclear Information System (INIS)
Miyahara, Hiroshi; Watanabe, Tamaki
1978-01-01
An extension of the 4πe-x-γ coincidence technique is described to measure the absolute disintegration rate of ⁸⁵Sr. This nuclide decays by electron capture followed by gamma emission, and the 514 keV level of ⁸⁵Rb is a metastable state with a half-life of 0.958 μs. Therefore, the conventional 4πe-x-γ coincidence technique, with a resolution time of about 1 μs, cannot be applied to this nuclide. To measure its absolute disintegration rate, the delayed 4πe-x-γ coincidence technique with two different resolution times has been used. The disintegration rate was determined from four counting rates (electron-x ray, gamma ray and two coincidences), and the true disintegration rate could be obtained by extrapolation of the electron-x-ray detection efficiency to 1. The two resolution times appearing in the calculation formulas were determined from the chance coincidences between electron-x-ray and delayed gamma-ray signals. When the coincidence countings with three different resolution times were carried out by one coincidence circuit, the results calculated from all combinations did not agree with one another. However, when two coincidence circuits of the same type were used to fix the resolution times, a good coincidence absorption function was obtained and the disintegration rate was determined with an accuracy of ±0.5%. To evaluate the validity of the results, the disintegration rates were measured by two NaI(Tl) scintillation detectors whose gamma-ray detection efficiency had previously been determined, and both results agreed within an accuracy of ±0.5%. This method can be applied with nearly the same accuracy to beta-gamma decay nuclides possessing a metastable state with a half-life below about 10 μs. (auth.)
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
In this review article, the definition of medication error, the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, prevention of medication errors, and the management of medication errors are explained clearly, with tables that are easy to understand.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of the fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = (1/ln(T))(ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
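The propagation formula can be checked numerically with a short sketch; the signal levels and ρL value below are made-up inputs for illustration, not data from the experiment.

```python
import math

def opacity_and_error(B, dB, B0, dB0, rhoL, drhoL):
    """Opacity k = -ln(T)/(rho*L) with T = B/B0, and its fractional
    error from the worst-case propagation formula in the text."""
    T = B / B0
    k = -math.log(T) / rhoL
    dlnT = dB / B + dB0 / B0                  # Delta ln(T) = DeltaT / T
    frac = dlnT / abs(math.log(T)) + drhoL / rhoL
    return k, frac

# illustrative inputs: T = 0.4, ~2 % signal errors, 2 % rho*L error
k, frac = opacity_and_error(B=40.0, dB=1.0, B0=100.0, dB0=2.0,
                            rhoL=0.002, drhoL=0.00004)
print(k, frac)   # opacity ~458, fractional error ~6.9 %
```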
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
Directory of Open Access Journals (Sweden)
Andrei ACHIMAŞ CADARIU
2004-08-01
Assessments of a controlled clinical trial require interpreting some key parameters, such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between the treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even if it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
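The asymptotic (Wald) interval that the abstract says is commonly reported can be sketched in a few lines; the event counts below are invented for illustration, and the paper's ADAC variants are not reproduced here.

```python
import math

def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    """Absolute risk reduction CER - EER with the asymptotic (Wald)
    confidence interval (the method the text notes may be inadequate
    for small samples or extreme proportions)."""
    cer = events_ctrl / n_ctrl            # control event rate
    eer = events_trt / n_trt              # experimental event rate
    arr = cer - eer
    se = math.sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
    return arr, (arr - z * se, arr + z * se)

# invented trial: 30/100 events in control, 15/100 under treatment
arr, (lo, hi) = arr_wald_ci(30, 100, 15, 100)
print(arr, lo, hi)    # ARR = 0.15, so NNT = 1/ARR is about 7
```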
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Absolute entropy of ions in methanol
International Nuclear Information System (INIS)
Abakshin, V.A.; Kobenin, V.A.; Krestov, G.A.
1978-01-01
By measuring the initial thermoelectromotive forces of chains with bromo-silver electrodes in tetraalkylammonium bromide solutions, the absolute entropy of the bromide ion in methanol is determined in the 298.15-318.15 K range. The value S⁰(Br⁻) = 9.8 entropy units is used for calculation of the absolute partial molar entropies of alkali metal ions and halide ions. The absolute entropies of Cs⁺ and I⁻ were found to be 12.0 and 14.0 entropy units, respectively. The obtained absolute ion entropies in methanol at 298.15 K agree with published data within 1-2 entropy units.
Near threshold absolute TDCS: First results
International Nuclear Information System (INIS)
Roesel, T.; Schlemmer, P.; Roeder, J.; Frost, L.; Jung, K.; Ehrhardt, H.
1992-01-01
A new method, and first results for an impact energy 2 eV above the threshold of ionisation of helium, are presented for the measurement of absolute triple differential cross sections (TDCS) in a crossed beam experiment. The method is based upon measurement of beam/target overlap densities using known absolute total ionisation cross sections and of detection efficiencies using known absolute double differential cross sections (DDCS). For the present work the necessary absolute DDCS for 1 eV electrons had also to be measured. Results are presented for several different coplanar kinematics and are compared with recent DWBA calculations. (orig.)
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of maximum power point tracking, can be more cost effective, have a higher reliability, and improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages for larger temperature variations and larger power-rated systems are much higher. Other advantages include optimal sizing and system monitoring and control
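The hill-climbing idea behind such a tracker can be illustrated with a minimal perturb-and-observe loop; the panel model and all numbers below are invented for illustration, not taken from the paper:

```python
def panel_current(duty):
    """Hypothetical charging current vs. converter duty cycle: a concave
    curve whose maximum power point sits at duty = 0.6 (invented model)."""
    return 5.0 - 40.0 * (duty - 0.6) ** 2

def mppt_hill_climb(current_of, duty=0.3, step=0.02, iterations=100):
    """Perturb-and-observe hill climbing: keep stepping the duty cycle in
    the direction that raises the measured output current; reverse the
    perturbation direction whenever the current drops."""
    direction = 1
    last = current_of(duty)
    for _ in range(iterations):
        duty = min(0.95, max(0.05, duty + direction * step))
        now = current_of(duty)
        if now < last:              # current fell: we overshot the peak
            direction = -direction
        last = now
    return duty

d = mppt_hill_climb(panel_current)  # settles near the maximum power point
```

In steady state the duty cycle oscillates within one step of the optimum, which is the usual trade-off of perturb-and-observe trackers between convergence speed and ripple.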
International Nuclear Information System (INIS)
El-Shanshoury, Gh. I.; El-Hemamy, S.T.
2013-01-01
The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions. An accurate determination of the probability distribution for maximum wind speed data is very important in estimating the extreme value. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic, and Log-Logistic, while eight plotting positions of Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard, and Weibull are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by the statistical test criteria of frequency analysis; therefore, the best plotting position formula to be used with the appropriate probability model for the MAWS data must also be determined. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE), and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a duration of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE, and MAE
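Two of the plotting-position formulas named above (Weibull and Gringorten) and the error metrics used for model selection can be sketched as follows; the formulas are the standard ones, but the helper names are ours:

```python
import numpy as np

def weibull_pp(n):
    """Weibull plotting positions: p_i = i / (n + 1), i = 1..n."""
    i = np.arange(1, n + 1)
    return i / (n + 1)

def gringorten_pp(n):
    """Gringorten plotting positions: p_i = (i - 0.44) / (n + 0.12)."""
    i = np.arange(1, n + 1)
    return (i - 0.44) / (n + 0.12)

def fit_metrics(observed_sorted, predicted):
    """RMSE, maximum absolute error (MAE, in the paper's sense) and the
    probability plot correlation coefficient (PPCC)."""
    err = observed_sorted - predicted
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.max(np.abs(err))
    ppcc = np.corrcoef(observed_sorted, predicted)[0, 1]
    return rmse, mae, ppcc

p = weibull_pp(39)   # 39 years of record, as in the study
```

Each candidate distribution's quantile function is evaluated at the plotting positions and compared against the sorted annual maxima; the combination with the highest PPCC and lowest RMSE/RRMSE/MAE wins, which is the selection logic the abstract describes.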
Error estimation in plant growth analysis
Directory of Open Access Journals (Sweden)
Andrzej Gregorczyk
2014-01-01
Full Text Available A scheme is presented for calculating the errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are then given which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the estimation of the obtained results has been carried out. The purposefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
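As one concrete instance of such error formulae, for a logistic curve W(t) = A/(1 + exp(b − kt)) the growth rate is GR = kW(1 − W/A) and the relative growth rate is RGR = k(1 − W/A), so a first-order absolute error in W propagates as below (a sketch under these assumptions, not the paper's exact formulae):

```python
from math import exp

def logistic_growth(t, A, b, k):
    """Logistic growth curve W(t) = A / (1 + exp(b - k t))."""
    return A / (1 + exp(b - k * t))

def growth_rates_with_errors(W, dW, A, k):
    """GR and RGR at dry matter W, with absolute errors propagated to first
    order from an absolute error dW in W:
        GR  = k W (1 - W/A),  |dGR/dW|  = |k (1 - 2W/A)|
        RGR = k (1 - W/A),    |dRGR/dW| = k / A
    """
    GR = k * W * (1 - W / A)
    RGR = k * (1 - W / A)
    dGR = abs(k * (1 - 2 * W / A)) * dW
    dRGR = (k / A) * dW
    return GR, dGR, RGR, dRGR
```

Note the characteristic feature that the GR error vanishes at the inflection point W = A/2, where GR is stationary with respect to W.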
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Introducing the Mean Absolute Deviation "Effect" Size
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
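A minimal sketch of an effect size built on the mean absolute deviation, assuming (as one plausible reading of the approach) that the difference in group means is scaled by the control group's MAD:

```python
def mean_abs_dev(xs):
    """Mean absolute deviation from the arithmetic mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def mad_effect_size(treatment, control):
    """Difference in group means divided by the control group's mean
    absolute deviation (one plausible scaling choice; pooling both
    groups' MADs is another)."""
    mt = sum(treatment) / len(treatment)
    mc = sum(control) / len(control)
    return (mt - mc) / mean_abs_dev(control)

es = mad_effect_size([2, 3, 4, 5, 6], [1, 2, 3, 4, 5])
```

Compared with Cohen's d, which divides by a pooled standard deviation, the MAD denominator involves no squaring and so is less dominated by extreme observations, which is the tolerance property the abstract emphasizes.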
Investigating Absolute Value: A Real World Application
Kidd, Margaret; Pagni, David
2009-01-01
Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over…
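For the Mean Energy Model mentioned above, the maximum-entropy distribution under a mean-energy constraint is the Gibbs/Boltzmann family p_i ∝ exp(−βE_i); a sketch that finds β by bisection (the energy levels and target mean are illustrative):

```python
import math

def maxent_distribution(energies, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution under a mean-'energy' constraint:
    p_i proportional to exp(-beta * E_i), with beta found by bisection so
    that sum_i p_i * E_i equals target_mean."""
    def mean_energy(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z
    for _ in range(200):
        mid = (lo + hi) / 2
        # mean energy decreases monotonically as beta grows
        if mean_energy(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# Symmetric levels with the target at their midpoint give beta = 0,
# i.e. the uniform (unconstrained maximum-entropy) distribution.
p = maxent_distribution([0.0, 1.0, 2.0], target_mean=1.0)
```

In the Code Length Game reading, this distribution is simultaneously the optimal strategy for "Nature" and the code lengths −log p_i the optimal strategy for the observer.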
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
An absolute distance interferometer with two external cavity diode lasers
International Nuclear Information System (INIS)
Hartmann, L; Meiners-Hagen, K; Abou-Zeid, A
2008-01-01
An absolute interferometer for length measurements in the range of several metres has been developed. The use of two external cavity diode lasers allows the implementation of a two-step procedure which combines the length measurement with a variable synthetic wavelength and its interpolation with a fixed synthetic wavelength. This synthetic wavelength of ≈42 µm is obtained by a modulation-free stabilization of both lasers to Doppler-reduced rubidium absorption lines. A stable reference interferometer is used as the length standard. Different contributions to the total measurement uncertainty are discussed. It is shown that the measurement uncertainty can be considerably reduced by correcting for the influence of vibrations on the measurement result and by applying linear regression to the quadrature signals of the absolute interferometer and the reference interferometer. The comparison of the absolute interferometer with a counting interferometer for distances up to 2 m results in a linearity error of 0.4 µm, in good agreement with an estimate of the measurement uncertainty
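The fixed synthetic wavelength of a two-wavelength interferometer is Λ = λ₁λ₂/|λ₁ − λ₂|; with the rubidium D2 and D1 lines near 780 nm and 795 nm this lands close to the ≈42 µm quoted above (which lines the paper actually used is an assumption here):

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic wavelength Lambda = lam1 * lam2 / |lam1 - lam2| (metres)."""
    return lam1 * lam2 / abs(lam1 - lam2)

# Approximate Rb D2 and D1 wavelengths; their beat gives a synthetic
# wavelength close to the ~42 um quoted in the abstract.
lam_d2 = 780.24e-9
lam_d1 = 794.98e-9
Lambda = synthetic_wavelength(lam_d2, lam_d1)
```

Because Λ is some 50,000 times larger than either optical wavelength, the synthetic phase resolves the absolute distance coarsely and without the 2π ambiguity of a single-wavelength measurement, after which the optical fringes interpolate to the final resolution.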
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), the tuff ramp portal, the waste ramp portal, the men-and-materials shaft, the emplacement exhaust shaft, and the exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between… Conclusions. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially… MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects.
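The MAF transform can be sketched as a generalized eigenproblem between the covariance of the data and the covariance of its spatial/temporal first differences (a numpy sketch of the standard construction, not the authors' spline-based functional version):

```python
import numpy as np

def maf_first_factor(X):
    """First maximum autocorrelation factor of row-ordered data X.
    Solves the generalized eigenproblem Sigma_D w = lambda Sigma w, where
    Sigma is the data covariance and Sigma_D the covariance of first
    differences; the smallest lambda gives the highest autocorrelation."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)
    L = np.linalg.cholesky(S)            # whiten with respect to S
    Li = np.linalg.inv(L)
    M = Li @ Sd @ Li.T
    vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    w = Li.T @ vecs[:, 0]
    return w / np.linalg.norm(w)

# A smooth signal plus pure noise: MAF 1 should load on the smooth column.
np.random.seed(0)
t = np.linspace(0.0, 20.0, 500)
X = np.column_stack([np.sin(t), np.random.randn(500)])
w = maf_first_factor(X)
```

Whereas PCA ranks directions by variance alone, this ranking by autocorrelation is what pushes the smooth, "interesting" variation to one end of the eigenvalue spectrum and the noise to the other.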
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
Practical, Reliable Error Bars in Quantum Tomography
Faist, Philippe; Renner, Renato
2015-01-01
Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
A global algorithm for estimating Absolute Salinity
McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.
2012-12-01
The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
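In TEOS-10 terms, the algorithm supplies the anomaly term that is added to the reference-composition salinity; a minimal sketch (the 35.16504/35 scaling is the standard TEOS-10 reference conversion, while the anomaly value itself must come from the paper's global look-up algorithm):

```python
def absolute_salinity(sp, delta_sa):
    """Absolute Salinity (g/kg) from Practical Salinity sp plus the locally
    estimated Absolute Salinity Anomaly delta_sa (g/kg)."""
    reference_salinity = (35.16504 / 35.0) * sp   # TEOS-10 reference scaling
    return reference_salinity + delta_sa

# e.g. a hypothetical North Pacific sample where the anomaly approaches
# the 0.025 g/kg maximum quoted in the abstract:
sa = absolute_salinity(34.5, 0.025)
```

The point of the paper's silicate-based algorithm is precisely to estimate delta_sa at locations where no laboratory density measurement exists.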
Land Use in LCIA: an absolute scale proposal for Biotic Production Potential
DEFF Research Database (Denmark)
Saez de Bikuna Salinas, Koldo; Ibrom, Andreas; Hauschild, Michael Zwicky
The present study proposes a single absolute scale for the midpoint impact category (MIC) of Biotic Production Potential (BPP). It is hypothesized that an ecosystem in equilibrium (where NPP equals decay) has reached the maximum biotic throughput subject to site-specific conditions and no externally added inputs. The original ecosystem (or Potential Natural Vegetation) of a certain land then gives the maximum BPP with no additional, downstream or upstream, impacts. This natural BPP is proposed as the maximum BPP in a hypothetical Absolute Scale for LCA's Land Use framework. It is argued that this maximum BPP is Nature's optimal solution through evolution-adaptation mechanisms, which provides the maximum matter throughput subject to the rest of the environmental constraints (without further impacts). As a consequence, this scale raises a Land Use Optimality Point that suggests the existence of a limit…
Globular Clusters: Absolute Proper Motions and Galactic Orbits
Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.
2018-04-01
We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and are reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant-branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters, with a typical formal error of about 0.4 mas yr-1, are computed by averaging the proper motions of the selected members. The inferred absolute proper motions of the clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (an axisymmetric model without a bar), and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to affect substantially the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Srimurugan Pratheep, Neeraja; Madeleine, Pascal; Arendt-Nielsen, Lars
2018-04-25
Pressure pain threshold (PPT) and PPT maps are commonly used to quantify and visualize mechanical pain sensitivity. Although PPTs have frequently been reported from patients with knee osteoarthritis (KOA), the absolute and relative reliability of PPT assessments remain to be determined. Thus, the purpose of this study was to evaluate the test-retest relative and absolute reliability of PPT in KOA. For that purpose, the intraclass correlation coefficient (ICC) as well as the standard error of measurement (SEM) and the minimal detectable change (MDC) were measured within eight anatomical locations covering the most painful knee of KOA patients. Twenty KOA patients participated in two sessions spaced 2 weeks ± 3 days apart. PPTs were assessed over eight anatomical locations covering the knee and two remote locations over the tibialis anterior and brachioradialis. The patients rated their maximum pain intensity during the past 24 h and prior to the recordings on a visual analog scale (VAS), and completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and PainDetect surveys. The ICC, SEM and MDC between the sessions were assessed. The individual variability was expressed with the coefficient of variation (CV). Bland-Altman plots were used to assess potential bias in the dataset. The ICC ranged from 0.85 to 0.96 for all the anatomical locations, which is considered "almost perfect". CV was lowest in session 1 and ranged from 44.2 to 57.6%. SEM for comparison ranged between 34 and 71 kPa, and MDC ranged between 93 and 197 kPa, with mean PPTs ranging from 273.5 to 367.7 kPa in session 1 and 268.1-331.3 kPa in session 2. The analysis of the Bland-Altman plots showed no systematic bias. PPT maps showed that the patients had lower thresholds in session 2, but no significant difference was observed between the sessions for PPT or VAS. No correlations were seen between PainDetect and PPT or between PainDetect and WOMAC
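The absolute-reliability quantities reported above follow from the ICC and the between-subject SD via the standard formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A sketch with illustrative numbers in the reported kPa range (the SD value is assumed, not taken from the paper):

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem

# Illustrative values in the reported kPa range; the SD here is assumed.
sem = sem_from_icc(sd=150.0, icc=0.90)
mdc = mdc95(sem)
```

The √2 factor reflects that a detectable change involves the measurement error of both test and retest; a PPT change smaller than the MDC cannot be distinguished from measurement noise.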
The absolute environmental performance of buildings
DEFF Research Database (Denmark)
Brejnrod, Kathrine Nykjær; Kalbar, Pradip; Petersen, Steffen
2017-01-01
Our paper presents a novel approach for the absolute sustainability assessment of a building's environmental performance. It is demonstrated how the absolute sustainable share of the earth's carrying capacity of a specific building type can be estimated using carrying-capacity-based normalization factors. A building is considered absolutely sustainable if its annual environmental burden is less than its share of the earth's environmental carrying capacity. Two case buildings - a standard house and an upcycled single-family house located in Denmark - were assessed according to this approach, and both were found to exceed the target values in three (almost four) of the eleven impact categories included in the study. The worst-case excess was for the case building representing prevalent Danish building practices, which utilized 1563% of the Climate Change carrying capacity. Four paths to reach absolute…
Absolute calibration technique for spontaneous fission sources
International Nuclear Information System (INIS)
Zucker, M.S.; Karpf, E.
1984-01-01
An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, hence of the fission source strength
MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...
African Journals Online (AJOL)
eobe
development of a mean of median absolute derivation technique based on … of the noise mean to estimate the speckle noise variance. Noise mean property … Foraging Optimization," International Journal of Advanced …
Absolute spectrophotometry of Nova Cygni 1975
International Nuclear Information System (INIS)
Kontizas, E.; Kontizas, M.; Smyth, M.J.
1976-01-01
Radiometric photoelectric spectrophotometry of Nova Cygni 1975 was carried out on 1975 August 31 and September 2 and 3. α Lyr was used as the reference star, and its absolute spectral energy distribution was used to reduce the spectrophotometry of the nova to absolute units. Emission strengths of Hα, Hβ and Hγ (in W cm-2) were derived. The Balmer decrement Hα:Hβ:Hγ was compared with theory and found to deviate less than had been reported for an earlier nova. (author)
Learning from prescribing errors
Dean, B
2002-01-01
The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...
A global algorithm for estimating Absolute Salinity
Directory of Open Access Journals (Sweden)
T. J. McDougall
2012-12-01
Full Text Available The International Thermodynamic Equation of Seawater – 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density than does Practical Salinity.
When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic, Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg^{−1} in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p in the world ocean.
To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly, however, are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
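The atlas lookup at the heart of such an algorithm can be sketched as a simple bilinear interpolation of tabulated anomaly values onto the requested location. The grid spacing and anomaly values below are invented for illustration only; the real algorithm uses a global longitude-latitude-pressure atlas with ocean-basin masking and vertical interpolation:

```python
from bisect import bisect_right

def delta_sa_estimate(lon, lat, atlas, lon_grid, lat_grid):
    """Bilinearly interpolate a (toy) Absolute Salinity Anomaly atlas,
    given as atlas[j][i] in g/kg on a lat x lon grid, onto (lon, lat)."""
    i = bisect_right(lon_grid, lon) - 1
    j = bisect_right(lat_grid, lat) - 1
    i = min(i, len(lon_grid) - 2)   # clamp so we stay inside the last cell
    j = min(j, len(lat_grid) - 2)
    # fractional position inside the grid cell
    tx = (lon - lon_grid[i]) / (lon_grid[i + 1] - lon_grid[i])
    ty = (lat - lat_grid[j]) / (lat_grid[j + 1] - lat_grid[j])
    return ((1 - tx) * (1 - ty) * atlas[j][i]
            + tx * (1 - ty) * atlas[j][i + 1]
            + (1 - tx) * ty * atlas[j + 1][i]
            + tx * ty * atlas[j + 1][i + 1])

# hypothetical 2x2 atlas patch in the North Pacific (values in g/kg)
lon_grid = [140.0, 150.0]
lat_grid = [40.0, 50.0]
atlas = [[0.010, 0.014],
         [0.018, 0.022]]
print(delta_sa_estimate(145.0, 45.0, atlas, lon_grid, lat_grid))
```

A production version must additionally handle land boundaries and interpolate in pressure as well as in the horizontal coordinates.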
Ciliates learn to diagnose and correct classical error syndromes in mating strategies.
Clark, Kevin B
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social
Wang, Weijie; Lu, Yanmin
2018-03-01
Most existing Collaborative Filtering (CF) algorithms predict a rating as the preference of an active user toward a given item, which is always a decimal fraction, whereas the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding can bring different influences to these two metrics; we prove that rounding is necessary in post-processing of the predicted ratings, as it eliminates model prediction bias and improves the accuracy of the prediction. In addition, we propose two new rounding approaches based on the predicted rating probability distribution, which can be used to round the predicted rating to an optimal integer rating and obtain better prediction accuracy than the basic rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of the proposed rounding approaches.
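The claimed benefit of basic rounding is easy to illustrate: when ground-truth ratings are integers and the model's decimal predictions are already close to them, snapping predictions to the integer scale reduces the mean absolute error. A toy sketch (ratings and predictions are invented; the paper's probability-distribution-based rounding is not reproduced here):

```python
def mae(preds, truth):
    """Mean absolute error between predictions and ground truth."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(preds)

truth = [4, 3, 5, 2, 4]                 # actual integer ratings
raw   = [3.6, 3.2, 4.7, 2.4, 4.1]       # decimal predictions from a CF model
rounded = [round(p) for p in raw]       # basic rounding toward the scale

print(mae(raw, truth))      # error of the raw decimal predictions
print(mae(rounded, truth))  # error after basic rounding
```

In this toy case rounding removes the residual decimal error entirely; on real data the gain is smaller, and predictions near the half-point boundary are exactly where the paper's probability-based rounding is meant to help.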
Maximum Safety Regenerative Power Tracking for DC Traction Power Systems
Directory of Open Access Journals (Sweden)
Guifu Du
2017-02-01
Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential voltage known as "rail potential" is generated between the rails and ground. Abnormal rises of rail potential currently occur in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee safety while saving energy in DC traction power systems.
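The optimization layer of the proposed scheme is an improved PSO. A minimal generic PSO, minimizing a stand-in convex objective instead of the paper's traction-network rail-potential model, might look like the sketch below; all coefficients, bounds, and the objective are illustrative only:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise `objective` over a box given by `bounds`; return best position."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                # move and clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(g):
                    g = pos[i][:]
    return g

# stand-in objective: a convex bowl whose minimum (2, -1) is known
random.seed(0)
best = pso(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2, [(-5, 5), (-5, 5)])
print(best)
```

In the paper's setting, the decision variables would be dwell times and READ trigger voltages, and the objective would combine maximum absolute rail potential with energy consumption under the network simulation.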
Absolute isotopic abundances of Ti in meteorites
International Nuclear Information System (INIS)
Niederer, F.R.; Papanastassiou, D.A.; Wasserburg, G.J.
1985-01-01
The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass-dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using 46Ti/48Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass-dependent isotope fractionation in nature. We provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components. The absolute Ti and Ca isotopic compositions still support the correlation of 50Ti and 48Ca effects in the FUN inclusions and imply contributions from neutron-rich equilibrium or quasi-equilibrium nucleosynthesis. The present identification of endemic effects at 46Ti, for the absolute composition, implies a shortfall of an explosive-oxygen component or reflects significant isotope fractionation. Additional nucleosynthetic components are required by 47Ti and 49Ti effects. Components are also defined in which 48Ti is enhanced. Results are given and discussed. (author)
Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P
2016-03-21
Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effect of these variables and the degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible, and that if multiple FSRs are used in a system, they must be calibrated independently. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
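The probability ellipse described above follows from the eigendecomposition of the 2-D error covariance matrix: the semi-axes are the square roots of the eigenvalues, and the orientation comes from the eigenvectors. A minimal sketch using the closed form for a symmetric 2x2 matrix (the numeric example is invented):

```python
import math

def error_ellipse(sxx, syy, sxy):
    """Semi-axes and major-axis orientation (radians) of the 1-sigma error
    ellipse for a 2-D error with covariance [[sxx, sxy], [sxy, syy]]."""
    # closed-form eigenvalues of the symmetric 2x2 covariance matrix
    mean = (sxx + syy) / 2.0
    diff = math.hypot((sxx - syy) / 2.0, sxy)
    lam1, lam2 = mean + diff, mean - diff
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.sqrt(lam1), math.sqrt(lam2), theta

# uncorrelated example: variances 4 and 1, no covariance
a, b, theta = error_ellipse(4.0, 1.0, 0.0)
print(a, b, theta)  # axes reduce to the two standard deviations
```

Scaling both semi-axes by a chi-square quantile factor turns the 1-sigma ellipse into an ellipse of any desired containment probability, which is how the probability circle and circular error evaluations are obtained in practice.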
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
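For uncorrelated measured values, the propagation rules discussed in such chapters reduce to adding variances weighted by squared sensitivities; for products and quotients, relative variances add. A sketch of these standard first-order formulas (the chapter's uranium hexafluoride materials-balance example is not reproduced; the numbers below are invented):

```python
import math

def var_sum(variances):
    """Variance of a sum (or difference) of uncorrelated measurements."""
    return sum(variances)

def rel_sd_product(rel_sds):
    """Relative standard deviation of a product/quotient of uncorrelated
    measurements, to first order: relative variances add."""
    return math.sqrt(sum(r * r for r in rel_sds))

# a materials-balance term x + y with standard deviations 0.3 and 0.4:
print(math.sqrt(var_sum([0.09, 0.16])))   # combined standard deviation
# a product u * v with 1% and 2% relative standard deviations:
print(rel_sd_product([0.01, 0.02]))
```

These two rules are the building blocks behind the error propagation for full materials balances, where each inventory and transfer term contributes its variance to the balance total.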
Martínez-Legaz, Juan Enrique; Soubeyran, Antoine
2003-01-01
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, if the agent keeps a memory of his errors, then under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for faster learning.
Absolute calibration of in vivo measurement systems
International Nuclear Information System (INIS)
Kruchten, D.A.; Hickman, D.P.
1991-02-01
Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs
Determining the effect of grain size and maximum induction upon coercive field of electrical steels
Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel
2011-10-01
Although theoretical models have already been proposed, experimental data is still lacking to quantify the influence of grain size upon coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). Coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
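The reported linear dependence of coercive field on reciprocal grain size is exactly the kind of relation an ordinary least-squares fit recovers from measurements. A sketch with synthetic points that follow the reported slope of about 0.9 (A/m)·mm (the data values and intercept are invented, not the paper's measurements):

```python
def linear_fit(x, y):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# grain sizes in mm (spanning the paper's 20-150 um range)
d = [0.020, 0.050, 0.080, 0.150]
inv_d = [1.0 / di for di in d]                # reciprocal grain size, 1/mm
# synthetic coercive fields (A/m) generated from Hc = 40 + 0.9/d
hc = [40.0 + 0.9 * x for x in inv_d]

a, b = linear_fit(inv_d, hc)
print(a, b)  # intercept (A/m) and slope ((A/m)*mm)
```

On real data the fit would be repeated at each maximum induction level, giving the family of slopes from which the paper's general Hc(grain size, Bmax) relation is built.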
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution was the most appropriate in most of the stations of the annual maximum streamflow series in Johor, Malaysia.
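TL-moments generalize L-moments by trimming extreme order statistics before averaging, and reduce to ordinary L-moments when no trimming is applied. A sketch of the first two sample L-moments, the untrimmed building blocks of the estimation described above, computed directly from order statistics (the TL-trimming itself is not implemented here):

```python
from itertools import combinations

def sample_l_moments(data):
    """First two sample L-moments: l1 is the sample mean; l2 is half the
    mean absolute difference over all distinct pairs (direct O(n^2) form)."""
    x = sorted(data)
    n = len(x)
    l1 = sum(x) / n
    n_pairs = n * (n - 1) / 2
    l2 = sum(b - a for a, b in combinations(x, 2)) / n_pairs / 2
    return l1, l2

l1, l2 = sample_l_moments([2.0, 4.0, 6.0, 8.0])
print(l1, l2)
```

Matching these sample quantities (and their higher-order analogues) to the distribution's theoretical L-moments yields the parameter estimates; trimming the t1 smallest order statistics before the same matching step gives the TL-moments (t1, 0) estimators studied in the paper.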
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
International Nuclear Information System (INIS)
Oesterle, S.N.; Norman, J.E. Jr.
1980-01-01
Total peripheral blood lymphocytes were evaluated by age and exposure status in the Adult Health Study population during three examination cycles between 1958 and 1972. No radiation effect was observed, but a significant drop in the absolute lymphocyte counts of those aged 70 years and over, and a corresponding maximum for persons aged 50–59, were observed. (author)
Redetermination and absolute configuration of atalaphylline
Directory of Open Access Journals (Sweden)
Hoong-Kun Fun
2010-02-01
The title acridone alkaloid [systematic name: 1,3,5-trihydroxy-2,4-bis(3-methylbut-2-enyl)acridin-9(10H)-one], C23H25NO4, has previously been reported as crystallizing in the chiral orthorhombic space group P212121 [Chantrapromma et al. (2010). Acta Cryst. E66, o81–o82], but the absolute configuration could not be determined from data collected with Mo radiation. The absolute configuration has now been determined by refinement of the Flack parameter with data collected using Cu radiation. All features of the molecule and its crystal packing are similar to those previously described.
Medication errors: prescribing faults and prescription errors.
Velo, Giampaolo P; Minuz, Pietro
2009-06-01
1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data are a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
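For intuition, the fully observed case is much simpler than the integrated, noisy case treated in the paper: for a discretely observed Ornstein-Uhlenbeck process the MLE is available in closed form through its AR(1) representation. A hedged sketch of that simpler case only; it does not implement the simulated EM-algorithm or diffusion bridges, and the simulation uses a plain Euler scheme:

```python
import math
import random

def ou_mle(x, dt):
    """Closed-form MLE for a discretely observed zero-mean Ornstein-Uhlenbeck
    process dX = -theta*X dt + sigma dW, via its AR(1) representation."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    phi = num / den                        # AR(1) coefficient, e^{-theta*dt}
    theta = -math.log(phi) / dt
    resid = [b - phi * a for a, b in zip(x[:-1], x[1:])]
    s2 = sum(r * r for r in resid) / len(resid)
    sigma2 = 2 * theta * s2 / (1 - phi * phi)  # invert stationary AR(1) variance
    return theta, sigma2

# simulate an OU path with theta = 1, sigma = 1 and recover the parameters
random.seed(1)
dt, theta, sigma = 0.01, 1.0, 1.0
x = [0.0]
for _ in range(200000):
    x.append(x[-1] - theta * x[-1] * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
theta_hat, sigma2_hat = ou_mle(x, dt)
print(theta_hat, sigma2_hat)
```

When only integrals of the path are observed, with added measurement error, this closed form is no longer available, which is what motivates treating the latent path as missing data inside an EM iteration.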
Automated absolute activation analysis with californium-252 sources
International Nuclear Information System (INIS)
MacMurdo, K.W.; Bowman, W.W.
1978-09-01
A 100-mg 252Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma-ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma-ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma-ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a six half-life group model of delayed neutron emission; calculations include corrections for delayed-neutron interference from 17O. Detection sensitivities of 239Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppm level
Absolutely relative or relatively absolute: violations of value invariance in human decision making.
Teodorescu, Andrei R; Moran, Rani; Usher, Marius
2016-02-01
Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of pairs of brightness stimuli while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task-irrelevant absolute values, indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation-dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.
International Nuclear Information System (INIS)
Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi
2017-01-01
Various kinds of fringe order errors may occur in the absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel-by-pixel with repeating searches, which is inefficient for applications. To improve the efficiency of multiple successive fringe order corrections, in this paper we propose a method to simplify the error detection and correction by the stepwise increasing property of fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, repeating the search in detecting multiple successive errors can be avoided for efficient error correction. The effectiveness of our proposed method is validated by experimental results. (paper)
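For context, in multi-frequency fringe projection the fringe order is an integer obtained by rounding a scaled reference phase against the wrapped high-frequency phase; noise near the rounding boundary is what produces the fringe order errors such papers correct. A minimal sketch of the standard temporal unwrapping step (noise-free, with invented numbers):

```python
import math

TWO_PI = 2 * math.pi

def unwrap_with_reference(phi_high, phi_unit, f_high):
    """Temporal phase unwrapping: recover the absolute high-frequency phase
    from its wrapped value phi_high (in [0, 2*pi)) and an absolute
    unit-frequency reference phase phi_unit."""
    # fringe order: full 2*pi cycles predicted by the scaled reference
    k = round((f_high * phi_unit - phi_high) / TWO_PI)
    return phi_high + TWO_PI * k

# a point whose true absolute phase at f_high = 8 is 50.0 rad
f_high = 8
true_abs = 50.0
phi_high = true_abs % TWO_PI           # wrapped high-frequency phase
phi_unit = true_abs / f_high           # noise-free unit-frequency phase
print(unwrap_with_reference(phi_high, phi_unit, f_high))
```

When phase noise pushes the argument of `round` past a half-integer boundary, k jumps by one full fringe, which is precisely the error class that fringe-order detection and correction methods target.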
DI3 - A New Procedure for Absolute Directional Measurements
Directory of Open Access Journals (Sweden)
A Geese
2011-06-01
The standard observatory procedure for determining a geomagnetic field's declination and inclination absolutely is the DI-flux measurement. The instrument consists of a non-magnetic theodolite equipped with a single-axis fluxgate magnetometer. Additionally, a scalar magnetometer is needed to provide all three components of the field. Using only 12 measurement steps, all systematic errors can be accounted for, but if only one of the readings is wrong, the whole measurement has to be rejected. We use a three-component sensor on top of the theodolite's telescope. By performing more measurement steps, we gain much better control of the whole procedure: as the magnetometer can be fully calibrated by rotating about two independent directions, every combined reading of magnetometer output and theodolite angles provides the absolute field vector. We predefined a set of angle positions that the observer has to try to achieve. To further simplify the measurement procedure, the observer is guided by a pocket PC, on which he has only to confirm the theodolite position. The magnetic field is then stored automatically, together with the horizontal and vertical angles. The DI3 measurement is performed periodically at the Niemegk Observatory, allowing for a direct comparison with the traditional measurements.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, which makes the method more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
Absolute chronology and stratigraphy of Lepenski Vir
Directory of Open Access Journals (Sweden)
Borić Dušan
2007-01-01
meaningful and representative of two separate and defined phases of occupation at this locale. This early period would correspond with the phase that the excavator of Lepenski Vir defined as Proto-Lepenski Vir, although his ideas about the spatial distribution of this phase, its interpretation, duration and relation to the later phase of trapezoidal buildings must be revised in the light of new AMS dates and other available data. The phase with trapezoidal buildings most likely starts only around 6200 cal BC, and most of the trapezoidal buildings might have been abandoned by around 5900 cal BC. The absolute span of only two or three hundred years, and likely even less, for the flourishing of building activity related to trapezoidal structures at Lepenski Vir significantly compresses Srejović's phase I. Thus, it is difficult to maintain the excavator's five subphases which, similarly to Ivana Radovanović's more recent re-phasing of Lepenski Vir into I-1-3, remain largely guesswork until more extensive and systematic dating of each building is accomplished, along with statistical modeling in order to narrow the magnitude of error. On the whole, the new dates from these contexts better correspond with Srejović's stratigraphic logic of sequencing buildings to particular phases on the basis of their superimposing and cutting than with Radovanović's stylistic logic, i.e. her typology of hearth forms, ash-places, entrance platforms, and presence/absence of -supports around rectangular hearths used as reliable chronological indicators. The short chronological span for phase I also suggests that phase Lepenski Vir II is not realistic. This has already been shown by overlapping plans of the phase I buildings and stone outlines that the excavator of the site attributed to the Lepenski Vir II phase.
According to Srejović, Lepenski Vir phase II was characterized by buildings with stone walls made in the shape of trapezes, repeating the outline of supposedly earlier limestone floors of his
Det demokratiske argument for absolut ytringsfrihed (The Democratic Argument for Absolute Freedom of Expression)
DEFF Research Database (Denmark)
Lægaard, Sune
2014-01-01
The article discusses the claim that absolute freedom of expression is a necessary precondition for democratic legitimacy, taking as its starting point a reconstruction of an argument advanced by Ronald Dworkin. The question is why freedom of expression should be a precondition for democratic legitimacy, and why...
Musical Activity Tunes Up Absolute Pitch Ability
DEFF Research Database (Denmark)
Dohn, Anders; Garza-Villarreal, Eduardo A.; Ribe, Lars Riisgaard
2014-01-01
Absolute pitch (AP) is the ability to identify or produce pitches of musical tones without an external reference. Active AP (i.e., pitch production or pitch adjustment) and passive AP (i.e., pitch identification) are considered to not necessarily coincide, although no study has properly compared...
On the absolute measure of Beta activities
International Nuclear Information System (INIS)
Sanchez del Rio, C.; Jimenez Reynaldo, O.; Rodriguez Mayquez, E.
1956-01-01
A new method for absolute beta counting of solid samples is given. The measurement is made with an internal Geiger-Müller tube of new construction. The backscattering correction when using an infinitely thick mounting is discussed and results for different materials are given. (Author)
Absolute measurement of a tritium standard
International Nuclear Information System (INIS)
Hadzisehovic, M.; Mocilnik, I.; Buraei, K.; Pongrac, S.; Milojevic, A.
1978-01-01
For the determination of a tritium absolute activity standard, a method of internal gas counting has been used. The procedure involves water reduction by uranium and zinc, followed by the measurement of the absolute disintegration rate of tritium per unit of the effective volume of the counter by a compensation method. Criteria for the choice of methods and procedures concerning the determination and measurement of the gaseous 3H yield, the parameters of gaseous hydrogen, the sample mass of HTO and the absolute disintegration rate of tritium are discussed. In order to obtain gaseous sources of 3H (and 2H), the same reversible chemical reaction was used, namely the water – uranium hydride – hydrogen system. This reaction was proved to be quantitative above 500 deg C by measuring the yield of the gas obtained and the absolute activity of an HTO standard. A brief description of the measuring apparatus is given, as well as a critical discussion of the quality of the brass counter and the possibility of obtaining equal working conditions at the counter ends. (T.G.)
Absolutyzm i pluralizm (ABSOLUTISM AND PLURALISM)
Directory of Open Access Journals (Sweden)
Renata Ziemińska
2005-06-01
Full Text Available Alethic absolutism is the thesis that propositions cannot be more or less true, that they are true or false forever (if true at all), and that their truth is independent of the circumstances of their assertion. In its negative version, which is easier to defend, alethic absolutism claims that the very same proposition cannot be both true and false relative to the circumstances of its assertion. Simple alethic pluralism is the thesis that we have many concepts of truth. It is a very good way to dissolve the controversy between alethic relativism and absolutism. The many philosophical concepts of truth are the best reason for such pluralism. If a concept is the meaning of a name, we have many concepts of truth, because the name 'truth' has been understood in many ways. The variety of meanings, however, can be superficial. Beneath it we can find one idea of truth expressed in the correspondence truism or schema (T). The content of the truism is too poor to be the content of any one concept of truth, so it is usually connected with some picture of the world (an ontology), and we have as many concepts of truth as there are pictures of the world. The author proposes a hierarchical pluralism with a privileged classic (or correspondence, in the weak sense) concept of truth as an absolute property.
Absolute Distance Measurements with Tunable Semiconductor Laser
Czech Academy of Sciences Publication Activity Database
Mikel, Břetislav; Číp, Ondřej; Lazar, Josef
T118, - (2005), s. 41-44 ISSN 0031-8949 R&D Projects: GA AV ČR(CZ) IAB2065001 Keywords : tunable laser * absolute interferometer Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.661, year: 2004
Thin-film magnetoresistive absolute position detector
Groenland, J.P.J.
1990-01-01
The subject of this thesis is the investigation of a digital absolute position-detection system, which is based on a position-information carrier (i.e. a magnetic tape) with one single code track on the one hand, and an array of magnetoresistive sensors for the detection of the information on the
Stimulus Probability Effects in Absolute Identification
Kent, Christopher; Lamberts, Koen
2016-01-01
This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…
Absolute tightness: the chemists hesitate to invest
International Nuclear Information System (INIS)
Anon.
1996-01-01
The safety requirements of industries such as nuclear plants and the strengthening of regulations in the field of the environment (more particularly those related to volatile organic compounds) have led manufacturers to build pumps with absolute tightness. But this equipment does not answer all the problems and represents a high investment cost. In consequence, the chemists hesitate to invest. (O.L.)
Solving Absolute Value Equations Algebraically and Geometrically
Shiyuan, Wei
2005-01-01
The way in which students can improve their comprehension by understanding the geometrical meaning of algebraic equations, or by solving algebraic equations geometrically, is described. Students can experiment with the conditions of the absolute value equation presented, as an interesting way to form an overall understanding of the concept.
Data error effects on net radiation and evapotranspiration estimation
International Nuclear Information System (INIS)
Llasat, M.C.; Snyder, R.L.
1998-01-01
The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5 °C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation (Rn) error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20 to 40 °C. (author)
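The weighting factor quoted in the abstract above can be reproduced directly. A minimal sketch, assuming the standard FAO-56 expression for the slope Δ of the saturation vapour pressure curve and a typical psychrometric constant γ ≈ 0.066 kPa °C⁻¹ (both are assumptions, not values taken from the paper):

```python
import math

def delta_slope(t_c):
    """Slope of the saturation vapour pressure curve, kPa/degC (FAO-56 form)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))  # saturation vapour pressure, kPa
    return 4098.0 * es / (t_c + 237.3) ** 2

def radiation_weight(t_c, gamma=0.066):
    """Weighting factor W = Delta / (Delta + gamma) on the net radiation term."""
    d = delta_slope(t_c)
    return d / (d + gamma)

# The ETo error is roughly W times the net radiation error; W grows with temperature.
for t in (20, 30, 40):
    print(f"T = {t} degC -> W = {radiation_weight(t):.2f}")
```

Evaluating W over 20-40 °C reproduces the roughly 65-85% fraction of the net radiation error that the abstract reports.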
A novel capacitive absolute positioning sensor based on time grating with nanometer resolution
Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng
2018-05-01
The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.
Maximum likelihood convolutional decoding (MCD) performance due to system losses
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
Energy Technology Data Exchange (ETDEWEB)
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free-electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast- and slow-scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, and displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Parameters and error of a theoretical model
International Nuclear Information System (INIS)
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading
Directory of Open Access Journals (Sweden)
Pierluigi Guerriero
2015-01-01
Full Text Available A maximum power tracking algorithm exploiting operating-point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is evidenced by means of circuit-level simulation and experimental results. Experiments evidenced that, in comparison with a standard perturb and observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and we accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
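The distinction between a local and the absolute maximum that such trackers must handle can be illustrated on a toy power-voltage curve. Everything below (the two-bump curve, the voltage grid) is a hypothetical illustration, not the authors' algorithm:

```python
import numpy as np

def pv_power(v):
    """Toy P-V curve of a partially shaded string: two local maxima (made-up data)."""
    return 40 * np.exp(-((v - 12) ** 2) / 8) + 55 * np.exp(-((v - 30) ** 2) / 18)

def global_mppt(v_grid):
    """Coarse scan over operating points, then return the absolute maximum."""
    p = pv_power(v_grid)
    i = int(np.argmax(p))
    return v_grid[i], p[i]

v = np.linspace(0, 40, 801)
v_mpp, p_mpp = global_mppt(v)
# A perturb-and-observe tracker started near v = 12 V would lock onto the
# local maximum (~40 W); the scan finds the absolute maximum near 30 V instead.
```

A practical tracker would refine the scanned point with a local hill-climb afterwards; the scan only guarantees landing on the correct bump.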
Prescription Errors in Psychiatry
African Journals Online (AJOL)
Arun Kumar Agnihotri
clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.
Error studies of Halbach Magnets
Energy Technology Data Exchange (ETDEWEB)
Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2017-03-02
These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]; however, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires are tried. By “1e-4 accuracy”, it is meant that the FOM defined by √(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R = 23 mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires do not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slowly converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
Directory of Open Access Journals (Sweden)
S. Radhika
2016-04-01
Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
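The kernel-weighted update at the heart of MCC adaptive filtering can be sketched as follows. This is a fixed-step MCC-LMS illustration with made-up system coefficients, not the paper's variable step-size rule:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2])          # unknown system to identify (hypothetical)
N, sigma, mu = 5000, 1.0, 0.05

w = np.zeros(3)
x_buf = np.zeros(3)
for n in range(N):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()          # white input
    d = w_true @ x_buf                        # desired signal
    if rng.random() < 0.01:                   # occasional impulsive interference
        d += 50 * rng.standard_normal()
    e = d - w @ x_buf
    # MCC update: the Gaussian kernel exp(-e^2 / 2 sigma^2) shrinks the step for
    # impulsive (large-|e|) samples, giving the robustness that plain LMS lacks.
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * x_buf
```

With plain LMS (kernel factor removed), each impulse would throw the weights far off; here the kernel drives the update toward zero for outlier errors.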
Absolute calibration of TFTR helium proportional counters
International Nuclear Information System (INIS)
Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Loughlin, M.
1995-06-01
The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments.
Absolute-magnitude distributions of supernovae
Energy Technology Data Exchange (ETDEWEB)
Richardson, Dean; Wright, John [Department of Physics, Xavier University of Louisiana, New Orleans, LA 70125 (United States); Jenkins III, Robert L. [Applied Physics Department, Richard Stockton College, Galloway, NJ 08205 (United States); Maddox, Larry, E-mail: drichar7@xula.edu [Department of Chemistry and Physics, Southeastern Louisiana University, Hammond, LA 70402 (United States)
2014-05-01
The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < −21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > −15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of −19.25. The IIP distribution was the dimmest at −16.75.
Absolute and relative dosimetry for ELIMED
Energy Technology Data Exchange (ETDEWEB)
Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Leonora, E.; Randazzo, N. [INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Presti, D. Lo [INFN-Sezione di Catania, Via Santa Sofia 64, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Raffaele, L. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Cirio, R.; Sacchi, R.; Monaco, V. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino, Italy and Università di Torino, Dipartimento di Fisica, Via P.Giuria, 1 10125 Torino (Italy); Marchetto, F.; Giordanengo, S. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy)
2013-07-26
The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aiming to obtain an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to the one required for clinical applications (i.e. of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, Faraday cup, Secondary Emission Monitor (SEM) and transmission ionization chamber detectors will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.
Absolute spectrophotometry of the β Lyr
International Nuclear Information System (INIS)
Burnashev, V.I.; Skul'skij, M.Yu.
1978-01-01
In 1974 absolute spectrophotometry of β Lyr was performed with a scanning spectrophotometer in the 3300-7400 Å range. The energy distribution in the β Lyr spectrum is obtained, and a model of β Lyr is proposed. It is shown that the continuous spectrum of the β Lyr radiation can be represented by the total radiation of the two stars B8 3 and A5 3 and of a gaseous envelope with Te = 20000 K
Absolute photoionization cross sections of atomic oxygen
Samson, J. A. R.; Pareek, P. N.
1985-01-01
The absolute values of the photoionization cross sections of atomic oxygen were measured from the ionization threshold to 120 Å. An autoionizing resonance belonging to the 2s2p⁴(⁴P)3p (³D°, ³S°) transition was observed at 479.43 Å, and another line at 389.97 Å. The experimental data are in excellent agreement with rigorous close-coupling calculations that include electron correlations in both the initial and final states.
Absolute purchasing power parity in industrial countries
Zhang, Zhibai; Bian, Zhicun
2015-01-01
Different from popular studies that focus on relative purchasing power parity, we study absolute purchasing power parity (APPP) in 21 main industrial countries. Three databases are used. Both the whole period and the sub-period are analyzed. The empirical proof shows that the phenomenon that APPP holds is common, and the phenomenon that APPP does not hold is also common. In addition, some country pairs and the pooled country data indicate that the nearer the GDPPs of two countries are, the mo...
Internal descriptions of absolute Borel classes
Czech Academy of Sciences Publication Activity Database
Holický, P.; Pelant, Jan
2004-01-01
Roč. 141, č. 1 (2004), s. 87-104 ISSN 0166-8641 R&D Projects: GA ČR GA201/00/1466; GA ČR GA201/03/0933 Institutional research plan: CEZ:AV0Z1019905 Keywords : absolute Borel class * complete sequence of covers * open map Subject RIV: BA - General Mathematics Impact factor: 0.364, year: 2004
The absolute differential calculus (calculus of tensors)
Levi-Cività, Tullio
1926-01-01
Written by a towering figure of twentieth-century mathematics, this classic examines the mathematical background necessary for a grasp of relativity theory. Tullio Levi-Civita provides a thorough treatment of the introductory theories that form the basis for discussions of fundamental quadratic forms and absolute differential calculus, and he further explores physical applications.Part one opens with considerations of functional determinants and matrices, advancing to systems of total differential equations, linear partial differential equations, algebraic foundations, and a geometrical intro
An absolute deviation approach to assessing correlation.
Gorard, S.
2015-01-01
This paper describes two possible alternatives to the more traditional Pearson’s R correlation coefficient, both based on using the mean absolute deviation, rather than the standard deviation, as a measure of dispersion. Pearson’s R is well-established and has many advantages. However, these newer variants also have several advantages, including greater simplicity and ease of computation, and perhaps greater tolerance of underlying assumptions (such as the need for linearity). The first alter...
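A MAD-based variant of Pearson's R can be sketched as below. This is one plausible reading of such a coefficient (not necessarily Gorard's exact formula), and unlike Pearson's R this naive form is not guaranteed to lie in [−1, 1]:

```python
def mad(xs):
    """Mean absolute deviation about the mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def mad_correlation(xs, ys):
    """Pearson-like coefficient with MAD replacing the standard deviation.

    Illustrative only: the normalization differs from Pearson's R, so the
    result is sign- and monotonicity-meaningful but not bounded by 1.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return cov / (mad(xs) * mad(ys))

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]        # perfectly linear relationship
print(mad_correlation(xs, ys))
```

The computation uses only sums and absolute values, which is the simplicity-and-ease-of-computation advantage the abstract refers to.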
Benzofuranoid and bicyclooctanoid neolignans:absolute configuration
International Nuclear Information System (INIS)
Alvarenga, M.A. de; Giesbrecht, A.M.; Gottlieb, O.R.; Yoshida, M.
1977-01-01
The naturally occurring benzofuranoid and bicyclo[3.2.1]octanoid neolignans have their relative configurations established by ¹H and ¹³C NMR, including with the aid of the solvent shift technique. Interconversion of the benzofuranoid-type compounds, as well as of a benzofuranoid to a bicyclooctanoid derivative, makes ORD correlations possible, ultimately with (2S,3S)- and (2R,3R)-2,3-dihydrobenzofurans, and led to the absolute configurations of both series of neolignans [pt
Least Squares Problems with Absolute Quadratic Constraints
Directory of Open Access Journals (Sweden)
R. Schöne
2012-01-01
Full Text Available This paper analyzes linear least squares problems with absolute quadratic constraints. We develop a generalized theory following Bookstein's conic-fitting and Fitzgibbon's direct ellipse-specific fitting. Under simple preconditions, it can be shown that a minimum always exists and can be determined by a generalized eigenvalue problem. This problem is numerically reduced to an eigenvalue problem by multiplications of Givens' rotations. Finally, four applications of this approach are presented.
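Fitzgibbon's direct ellipse-specific fit, which this paper generalizes, reduces to the generalized eigenvalue problem S a = λ C a with an absolute quadratic constraint on the conic coefficients. A minimal NumPy sketch of that special case (synthetic data; not the authors' generalized algorithm):

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct ellipse-specific least squares fit (Fitzgibbon-style sketch).

    Solves S a = lam C a and keeps the eigenvector with the positive
    eigenvalue, which enforces the ellipse constraint 4ac - b^2 > 0."""
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])  # design matrix
    S = D.T @ D                                  # scatter matrix
    C = np.zeros((6, 6))                         # constraint matrix for 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    evals, evecs = np.linalg.eig(np.linalg.inv(S) @ C)
    i = int(np.argmax(evals.real))               # the single positive eigenvalue
    return evecs[:, i].real                      # conic coefficients (a, b, c, d, e, f)

# Noisy samples of x^2/9 + y^2/4 = 1 (synthetic data for illustration)
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = 3 * np.cos(t) + 1e-3 * rng.standard_normal(t.size)
y = 2 * np.sin(t) + 1e-3 * rng.standard_normal(t.size)
a = fit_ellipse(x, y)
```

Because the constraint picks out an ellipse by construction, the fit never returns a hyperbola or parabola, which is the point of the "absolute quadratic constraint" formulation.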
Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu
2015-10-01
Absolute distance measurement systems are of significant interest in the field of metrology; they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement system: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10³ times bigger than the change in optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser, which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.
Absolute measurement method of environment radon content
International Nuclear Information System (INIS)
Ji Changsong
1989-11-01
A portable environmental radon content device with a 40 liter decay chamber, based on the Thomas double-filter method of absolute radon content measurement, has been developed. The correctness of the Thomas double-filter absolute measurement method has been verified by experiments measuring radon gas samples whose density was known theoretically. In addition, the intrinsic uncertainty of this method was determined in the experiments. The confidence of this device is about 95%, the sensitivity is better than 0.37 Bq m⁻³, and the intrinsic uncertainty is less than 10%. The results show that the selected measurement and structure parameters are reasonable and the experimental methods are acceptable. In this method, the influence on the measured values of the radioactive equilibrium of radon and its daughters, of the ratio of combined daughters to total daughters, and of the fraction of charged particles has been excluded in the theory and the experimental methods. The formula of the Thomas double-filter absolute measurement of radon is applicable to the cylindrical decay chamber, and its applicability is also verified when the diameter of the exit filter is much smaller than the diameter of the inlet filter
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
International Nuclear Information System (INIS)
Bai, Ling; Smuts, Jonathan; Walsh, Phillip; Qiu, Changling; McNair, Harold M.; Schug, Kevin A.
2017-01-01
The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method. - Highlights: • Gas chromatography diagnostics and quantification using VUV detector. • Absorption cross-sections for molecules enable pseudo-absolute quantitation. • Injection diagnostics reveal systematic errors in hardware settings. • Internal
International Nuclear Information System (INIS)
Cormier, T.M.; Pavlinov, A.I.; Rykov, M.V.; Rykov, V.L.; Shestermanov, K.E.
2002-01-01
The procedure for the STAR Barrel Electromagnetic Calorimeter (BEMC) absolute calibrations, using penetrating charged particle hits (MIP hits) from physics events at RHIC, is presented, and its systematic and statistical errors are evaluated. It is shown that, using this technique, the equalization and transfer of the absolute scale from the test beam can be done to percent-level accuracy in a reasonable amount of time for the entire STAR BEMC. MIP hits would also be an effective tool for continuously monitoring the variations of the BEMC towers' gains, virtually without interference with STAR's main physics program. The method does not rely on simulations for anything other than geometric and some other small corrections, and for estimations of the systematic errors. It directly transfers measured test beam responses to operations at RHIC
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Directory of Open Access Journals (Sweden)
Tianzhou Chen
2013-09-01
Distance is one of the basic quantities in manufacturing and control, and ultrasonic distance sensors are widely used as low-cost measuring tools. However, the propagation of ultrasonic waves is strongly affected by environmental factors such as temperature, humidity, and atmospheric pressure. To address the industrially significant problem of inaccurate measurement, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, which focuses on optimizing the topological structure of the sensor array, compensates for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes quickly. Furthermore, we adopt an NEC optimization algorithm that takes full advantage of the correlation coefficients between neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%, and the mean absolute percentage error is reduced to 0.01% after three iterations of the method, indicating that the proposed method performs extremely well. The proposed distance measurement method, with its NEC capability, offers a significant advantage for intelligent industrial automation.
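As a side note on the error metrics quoted in this abstract (and in the forecasting abstract above), the mean absolute deviation (MAD) and mean absolute percentage error (MAPE) can be computed as in the sketch below; the distance readings are invented for illustration.

```python
# Hypothetical readings: true distances vs. sensor measurements (metres).
true = [1.00, 2.00, 3.50, 5.00, 7.25]
measured = [1.02, 1.97, 3.55, 4.93, 7.30]

def mean_absolute_deviation(actual, predicted):
    """MAD: average error magnitude, in the measurement's own units."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mean_absolute_percentage_error(actual, predicted):
    """MAPE: average |error| relative to the true value, as a percentage."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

mad = mean_absolute_deviation(true, measured)
mape = mean_absolute_percentage_error(true, measured)
```

MAD keeps the physical units (here metres), while MAPE is scale-free, which is why both are commonly reported together.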
Absolute measurement of the $\\beta\\alpha$ decay of $^{16}$N
We propose to study the $\beta$-decay of $^{16}$N at ISOLDE with the aim of determining the branching ratio for $\beta\alpha$ decay on an absolute scale. There are indications that the previously measured branching ratio is in error by an amount significantly larger than the quoted uncertainty. This limits the precision with which the S-factor of the astrophysically important $^{12}$C($\alpha,\gamma$)$^{16}$O reaction can be determined.
Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation
DEFF Research Database (Denmark)
Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik
2017-01-01
The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...
Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements
Anthony, Robert E.; Ringler, Adam; Wilson, David
2018-01-01
The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.
The error in total error reduction.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2014-02-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
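The contrast between total and local error reduction can be made concrete with a toy two-cue simulation; the learning rate, trial count, and outcome value below are illustrative, not taken from the paper.

```python
# Two cues (A and B) are always presented together, followed by outcome = 1.
# Learning rate and trial count are made-up illustrative values.
alpha, n_trials = 0.2, 50
outcome = 1.0

# Total error reduction (Rescorla-Wagner style): each cue learns from the
# discrepancy between the outcome and the *summed* prediction of all cues.
v_ter = {"A": 0.0, "B": 0.0}
for _ in range(n_trials):
    total_error = outcome - (v_ter["A"] + v_ter["B"])
    for cue in v_ter:
        v_ter[cue] += alpha * total_error

# Local error reduction: each cue learns from the discrepancy between the
# outcome and *its own* prediction only.
v_ler = {"A": 0.0, "B": 0.0}
for _ in range(n_trials):
    for cue in v_ler:
        v_ler[cue] += alpha * (outcome - v_ler[cue])

# Under TER the cues share the outcome (each converges near 0.5); under LER
# each cue independently approaches the full outcome (near 1.0).
```

The divergence of the two rules on compound trials is exactly what lets behavioral data discriminate between them.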
THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX
International Nuclear Information System (INIS)
Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo
2013-01-01
We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.
Achieving Climate Change Absolute Accuracy in Orbit
Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.;
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.
Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano
2013-01-01
Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...
Absolute measurement of environmental radon content
International Nuclear Information System (INIS)
Ji Changsong
1987-01-01
A transportable meter for environmental radon measurement with a 40-liter decay chamber is designed on the principle of the Thomas two-filter absolute radon measurement. The sensitivity is 0.37 Bq·m⁻³ at the 95% confidence level. This paper describes the experimental method of measurement and its intrinsic uncertainty. The typical intrinsic uncertainty (for radon concentrations of n × 3.7 Bq·m⁻³) is <10%. The exit-filter efficiency parameter is introduced into the formula, and verification is done for the case when the diameter of the exit filter is much smaller than that of the inlet filter.
Fractional order absolute vibration suppression (AVS) controllers
Halevi, Yoram
2017-04-01
Absolute vibration suppression (AVS) is a control method for flexible structures. The first step is an accurate, infinite dimension, transfer function (TF), from actuation to measurement. This leads to the collocated, rate feedback AVS controller that in some cases completely eliminates the vibration. In case of the 1D wave equation, the TF consists of pure time delays and low order rational terms, and the AVS controller is rational. In all other cases, the TF and consequently the controller are fractional order in both the delays and the "rational parts". The paper considers stability, performance and actual implementation in such cases.
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing of large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation-steering control. Conventional robots mainly employ offline calibration to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so offline calibration cannot acquire a robot's actual parameters or control its absolute pose with high accuracy in real time over a large workspace. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degrees-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed; then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, computing the Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is substantially enhanced.
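The Jacobian-based joint correction described in this abstract can be illustrated on a toy planar two-link arm; the real system is a 6-DOF robot with laser-tracker feedback, so the link lengths, target point, and initial angles below are invented for the sketch.

```python
import numpy as np

# Illustrative 2-link planar arm: each correction step maps the measured
# end-point error through the Jacobian (pseudo)inverse to a joint update,
# in the spirit of the differential-movement compensation described above.
L1, L2 = 0.8, 0.6                    # made-up link lengths (m)

def fk(q):
    """End-point position for joint angles q = (q1, q2)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Partial derivatives of the end-point position w.r.t. joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# Online correction loop: the "measured" pose stands in for the tracker
# reading; the loop drives the pose error to zero.
target = np.array([1.0, 0.5])        # desired end-point (made up)
q = np.array([0.1, 0.8])             # initial (miscalibrated) joint angles
for _ in range(20):
    error = target - fk(q)           # pose error seen by the tracker
    q += np.linalg.pinv(jacobian(q)) @ error
```

Using the pseudoinverse rather than a plain inverse keeps the same update rule usable for redundant (more joints than task dimensions) arms.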
National Research Council Canada - National Science Library
Byrne, Michael D
2006-01-01
.... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...
International Nuclear Information System (INIS)
Wahlstroem, B.
1993-01-01
Human errors contribute significantly to the risk of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. Avoiding human error requires adapting systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but they are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic effort. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)
Regional absolute conductivity reconstruction using projected current density in MREIT
International Nuclear Information System (INIS)
Sajib, Saurav Z K; Kim, Hyung Joong; Woo, Eung Je; Kwon, Oh In
2012-01-01
slice and the reconstructed regional projected current density, we propose a direct non-iterative algorithm to reconstruct the absolute conductivity in the ROI. Numerical simulations in the presence of various degrees of noise, as well as a phantom MRI experiment, showed that the proposed method reconstructs the regional absolute conductivity in a ROI within a subject, including the defective regions. In the simulation experiment, the relative L₂ errors of the reconstructed regional and global conductivities were 0.79 and 0.43, respectively, at a noise level of 50 dB in the defective region. (paper)
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue to investigate and design better approaches to increase the success rate of the discovery process. In this article, we provide an overview of current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been amply demonstrated. We give several examples of applications of maximum entropy at different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
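A minimal illustration of the maximum-entropy principle itself (not of any specific drug-discovery pipeline): among all distributions on a finite set of scores with a prescribed mean, the least-biased choice has the Gibbs form, which can be found by bisection on the Lagrange multiplier. The support and target mean below are made up.

```python
import math

# Maximum-entropy distribution on the scores 0..5 subject to a mean
# constraint: the solution has the Gibbs form p_i ∝ exp(-lam * x_i).
xs = list(range(6))
target_mean = 1.5          # made-up constraint

def gibbs(lam):
    """Normalized Gibbs distribution for multiplier lam."""
    w = [math.exp(-lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

def mean(p):
    return sum(x * pi for x, pi in zip(xs, p))

# Bisection on the multiplier: the mean is monotone decreasing in lam.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean(gibbs(mid)) > target_mean:
        lo = mid               # mean too large -> need larger lam
    else:
        hi = mid
p = gibbs(0.5 * (lo + hi))     # maxent distribution with mean 1.5
```

Any other distribution satisfying the same mean constraint has lower entropy, which is the precise sense in which the Gibbs form is "least biased".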
Linear ultrasonic motor for absolute gravimeter.
Jian, Yue; Yao, Zhiyuan; Silberschmidt, Vadim V
2017-05-01
Thanks to their compactness and suitability for vacuum applications, linear ultrasonic motors are considered as substitutes for classical electromagnetic motors as driving elements in absolute gravimeters. Still, their application is hindered by relatively low power output. To overcome this limitation and provide better stability, a V-type linear ultrasonic motor with a new clamping method is proposed for a gravimeter. In this paper, a mechanical model of stators with flexible clamping components is suggested, according to a design criterion for clamps of linear ultrasonic motors. The effect of the tangential and normal rigidity of the clamping components on mechanical output is then studied, followed by discussion of a new clamping method with sufficient tangential rigidity and the capability to facilitate pre-load. Additionally, a prototype of the motor with the proposed clamping method was fabricated, and performance tests in the vertical direction were carried out. Experimental results show that the suggested motor has structural stability and high dynamic performance, with a no-load speed of 1.4 m/s and a maximum thrust of 43 N, meeting the requirements for absolute gravimeters. Copyright © 2017 Elsevier B.V. All rights reserved.
Relational versus absolute representation in categorization.
Edwards, Darren J; Pothos, Emmanuel M; Perlman, Amotz
2012-01-01
This study explores relational-like and absolute-like representations in categorization. Although there is much evidence that categorization processes can involve information about both the particular physical properties of studied instances and abstract (relational) properties, there has been little work on the factors that lead to one kind of representation as opposed to the other. We tested 370 participants in 6 experiments, in which participants had to classify new items into predefined artificial categories. In 4 experiments, we observed a predominantly relational-like mode of classification, and in 2 experiments we observed a shift toward an absolute-like mode of classification. These results suggest 3 factors that promote a relational-like mode of classification: fewer items per group, more training groups, and the presence of a time delay. Overall, we propose that less information about the distributional properties of a category or weaker memory traces for the category exemplars (induced, e.g., by having smaller categories or a time delay) can encourage relational-like categorization.
On the absolute meaning of motion
Directory of Open Access Journals (Sweden)
H. Edwards
The present manuscript aims to clarify why motion causes matter to age more slowly in a comparable sense, and how this relates to relativistic effects caused by motion. A fresh analysis of motion, built on a first axiom, delivers a proof whose result yields significant new understanding and computational power. A review of experimental results demonstrates that unaccelerated motion causes matter to age more slowly in a comparable, observer-independent sense. While focusing on this absolute effect, the present manuscript clarifies its context relative to relativistic effects, detailing their relationship and incorporating both into one consistent picture. The presented theoretical results make new predictions and are testable through a suggested experiment of a novel nature. The manuscript finally arrives at an experimental tool and methodology which, as far as motion in ungravitated space is concerned or gravity is appreciated, enables us to find the absolute, observer-independent picture of reality reflected in the comparable display of atomic clocks. The discussion of the theoretical results derives a physical, causal understanding of gravity, a mathematical formulation of which is presented. Keywords: Kinematics, Gravity, Atomic clocks, Cosmic microwave background
Standardization of the cumulative absolute velocity
International Nuclear Information System (INIS)
O'Hara, T.F.; Jacobson, J.P.
1991-12-01
EPRI NP-5930, "A Criterion for Determining Exceedance of the Operating Basis Earthquake," was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. During review of that report, it was noted that the calculation of CAV could be confounded by long-duration time history records containing low (nondamaging) accelerations. It is therefore necessary to standardize the method of calculating CAV to account for record length. The standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value obtained by applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended standardization is to window the CAV calculation on a second-by-second basis for a given time history, so that a one-second interval contributes only if the absolute acceleration exceeds 0.025g at some time within it. The earthquake records used in EPRI NP-5930 have been reanalyzed, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
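The windowed CAV calculation described in this abstract can be sketched as follows; the sampling rate and the synthetic records are assumptions for illustration, and the 0.025g screening threshold is the one quoted above.

```python
import numpy as np

# Standardized (windowed) CAV: the record is split into one-second windows,
# and a window contributes to the integral of |a(t)| only if its peak
# absolute acceleration exceeds 0.025 g. Sampling rate is assumed.
FS = 100  # samples per second (assumed)

def standardized_cav(accel_g, fs=FS):
    """accel_g: acceleration time history in units of g. Returns CAV in g-sec."""
    accel_g = np.asarray(accel_g, dtype=float)
    dt = 1.0 / fs
    cav = 0.0
    for start in range(0, len(accel_g), fs):      # one-second windows
        window = accel_g[start:start + fs]
        if np.max(np.abs(window)) > 0.025:        # skip nondamaging windows
            cav += np.sum(np.abs(window)) * dt
    return cav

# A long quiet record contributes nothing, however long it is, which is
# exactly the record-length confound the standardization removes:
quiet = 0.001 * np.ones(60 * FS)    # 60 s of 0.001 g background
strong = 0.05 * np.ones(1 * FS)     # 1 s at 0.05 g
```

Without the window screening, the quiet record would accumulate 0.06 g-sec of spurious CAV purely from its duration.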
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and on the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-of-arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been proved in experiments, where it provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which weights the significant frequencies.
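The basic cross-correlation time-delay estimate that the Maximum Likelihood window refines can be sketched as follows; the window itself is not implemented here, and the signals, noise levels, sampling rate, and true delay are synthetic.

```python
import numpy as np

# Time-of-arrival difference via the peak of the cross-correlation between
# two sensors picking up the same broadband leak noise with a relative delay.
rng = np.random.default_rng(0)
fs = 1000                        # sampling rate, Hz (assumed)
true_delay = 0.025               # seconds: signal reaches sensor 2 later
n = 4096

leak = rng.normal(size=n)        # broadband "leak noise" source
shift = int(true_delay * fs)
s1 = leak + 0.1 * rng.normal(size=n)                 # sensor 1
s2 = np.roll(leak, shift) + 0.1 * rng.normal(size=n) # sensor 2 (delayed)

# The peak of the cross-correlation gives the time-of-arrival difference.
corr = np.correlate(s2, s1, mode="full")
lag = np.argmax(corr) - (n - 1)  # lag 0 sits at index n-1 of the full output
estimated_delay = lag / fs
```

Multiplying the leak distance equation by the elastic wave speed then turns `estimated_delay` into a position along the pipe; the ML window sharpens the correlation peak before this argmax step.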
Absolute determination of the deuterium content of heavy water, measurement of absolute density
International Nuclear Information System (INIS)
Ceccaldi, M.; Riedinger, M.; Menache, M.
1975-01-01
The absolute density of two heavy water samples rich in deuterium (grade higher than 99.9%) was determined by the hydrostatic method. The exact isotopic composition of this water (hydrogen and oxygen isotopes) was studied very carefully. A theoretical estimate enabled us to obtain the absolute density of isotopically pure D₂¹⁶O. This value was found to be 1104.750 kg·m⁻³ at t₆₈ = 22.3 °C under a pressure of one atmosphere. (orig.) [de]
Metcalfe, Janet
2017-01-01
Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Invariant and Absolute Invariant Means of Double Sequences
Directory of Open Access Journals (Sweden)
Abdullah Alotaibi
2012-01-01
We examine some properties of the invariant mean, define the concepts of strong σ-convergence and absolute σ-convergence for double sequences, and determine the associated sublinear functionals. We also define the absolute invariant mean, through which the space of absolutely σ-convergent double sequences is characterized.
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F. W. Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]
Energy Technology Data Exchange (ETDEWEB)
Hong, Soon Gi; Son, Sang Joon; Moon, Joon Gi; Kim, Bo Kyum; Lee, Je Hee [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)]
2016-12-15
The aim was to determine, by analyzing MR images, whether a treatment plan for the rectum, bladder, and prostate, organs subject to large interfraction errors, satisfies dosimetric limits without an adaptive plan. This study was based on 5 prostate cancer patients treated with IMRT (total dose: 70 Gy) using the ViewRay MRIdian System (ViewRay Inc., Cleveland, OH, USA). The treatment plans were made on the same CT images to compare plan quality with and without adaptive planning, using Eclipse (Ver. 10.0.42, Varian, USA). After registering the 5 treatment MR images to the planning CT images to analyze the interfraction changes of each organ, we measured the dose-volume histogram and the change in absolute volume of each organ by applying the first treatment plan to each image. Over 5 fractions, the prescription for the PTV was V_36.25Gy ≥ 95%. To confirm that the prescription dose satisfies the SBRT dose limits for prostate, we measured V_100%, V_95%, and V_90% for the CTV and V_100%, V_90%, V_80%, and V_50% for the rectum and bladder. All average dose values for the CTV, rectum, and bladder satisfied the dose limits, but in more than one case a limit was exceeded when the individual treatment images were analyzed. Comparing the MR image of the first treatment plan with those of the subsequent fractions, the absolute volume differed by up to a factor of 1.72 for the rectum and 2.0 for the bladder. For the rectum, the planned values were under the dose limits, on average V_100% = 0.32%, V_90% = 3.33%, V_80% = 7.71%, and V_50% = 23.55% in the first treatment plan. The average absolute rectal volume in the first plan was 117.9 cc, whereas the average volume actually treated was 79.2 cc. For the CTV, the 100% prescription dose coverage was not satisfied, even with a 5 mm PTV margin, because of the variation in rectal and bladder volume. There was no case in which the value from the average
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
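As a loose illustration of positivity-preserving maximum-likelihood unfolding (not the authors' algorithm), the expectation-maximization (Richardson-Lucy) iteration for Poisson-distributed counts keeps every intermediate spectrum positive, one of the properties the abstract highlights. The response matrix and spectrum below are made up and noise-free.

```python
import numpy as np

# EM (Richardson-Lucy) unfolding: multiplicative updates preserve positivity
# and monotonically increase the Poisson likelihood of the measured counts.
R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])          # detector response (made up)
true_spectrum = np.array([100.0, 50.0, 25.0])
counts = R @ true_spectrum               # idealized, noise-free measurement

phi = np.ones_like(true_spectrum)        # any positive initial guess
for _ in range(2000):
    expected = R @ phi                   # counts predicted by current spectrum
    phi *= (R.T @ (counts / expected)) / R.sum(axis=0)
```

Because each update multiplies by a positive factor, no bin can ever go negative, in contrast to direct matrix inversion, which freely produces unphysical negative fluxes.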
How is an absolute democracy possible?
Directory of Open Access Journals (Sweden)
Joanna Bednarek
2011-01-01
Full Text Available In the last part of the Empire trilogy, Commonwealth, Negri and Hardt ask about the possibility of the self-governance of the multitude. In answering, they argue that absolute democracy, understood as the political articulation of the multitude that does not entail its unification (the construction of a people), is possible. As Negri states, this way of thinking about political articulation is rooted in the tradition of democratic materialism and constitutes an alternative to the dominant current of modern political philosophy, which identifies political power with sovereignty. The multitude organizes itself politically by means of constitutive power, identical with the ontological creativity or productivity of the multitude. To state the problem of political organization is to state the problem of class composition: political democracy is at the same time economic democracy.
Absolute partial photoionization cross sections of ethylene
Grimm, F. A.; Whitley, T. A.; Keller, P. R.; Taylor, J. W.
1991-07-01
Absolute partial photoionization cross sections for ionization out of the first four valence orbitals to the X 2B3u, A 2B3g, B 2Ag and C 2B2u states of the C2H4+ ion are presented as a function of photon energy over the range from 12 to 26 eV. The experimental results are compared to previously published relative partial cross sections for the first two bands at 18, 21 and 24 eV. Comparison of the experimental data with continuum multiple-scattering Xα calculations provides evidence for extensive autoionization to the X 2B3u state and confirms the predicted shape resonances in ionization to the A 2B3g and B 2Ag states. Identification of possible transitions for the autoionizing resonances has been made using multiple-scattering transition-state calculations on Rydberg excited states.
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
An absolute calibration system for millimeter-accuracy APOLLO measurements
Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.
2017-12-01
Lunar laser ranging provides a number of leading experimental tests of gravitation, important in our quest to unify general relativity and the standard model of physics. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has for years achieved median range precision at the ∼2 mm level. Yet residuals in model-measurement comparisons are an order of magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or whether the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short (motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.
International Nuclear Information System (INIS)
Hirschfeld, T.; Honigs, D.; Hieftje, G.
1985-01-01
Optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the case of pathlength error >> photometric error (trivial) and various cases in which the pathlength and photometric errors are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.
Preventing Errors in Laterality
Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie
2014-01-01
An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...
International Nuclear Information System (INIS)
Reason, J.
1988-01-01
This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident-pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated.
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
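As a sketch of the comparison the abstract describes, one can check an influence-function (delta-method) variance against a bootstrap variance for the simplest possible "absolute risk", an empirical event probability; all numbers are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def absolute_risk(sample):
    """Toy 'absolute risk': the empirical event probability."""
    return sample.mean()

# simulated cohort: 1 = event within the projection interval
cohort = rng.binomial(1, 0.2, size=2000)

# bootstrap variance of the risk estimate (the comparator the abstract
# benchmarks the influence-function variance against)
boot = np.array([absolute_risk(rng.choice(cohort, size=cohort.size))
                 for _ in range(2000)])
var_boot = boot.var(ddof=1)

# influence-function (delta-method) variance for a mean: p(1-p)/n
p = absolute_risk(cohort)
var_if = p * (1 - p) / cohort.size
```

The two estimates agree closely here; the influence-function version needs no resampling, which is the practical appeal noted in the abstract.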
Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito
2017-10-01
The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.
Genomic DNA-based absolute quantification of gene expression in Vitis.
Gambetta, Gregory A; McElrone, Andrew J; Matthews, Mark A
2013-07-01
Many studies in which gene expression is quantified by polymerase chain reaction represent the expression of a gene of interest (GOI) relative to that of a reference gene (RG). Relative expression is founded on the assumptions that RG expression is stable across samples, treatments, organs, etc., and that reaction efficiencies of the GOI and RG are equal; assumptions which are often faulty. The true variability in RG expression and actual reaction efficiencies are seldom determined experimentally. Here we present a rapid and robust method for absolute quantification of expression in Vitis where varying concentrations of genomic DNA were used to construct GOI standard curves. This methodology was utilized to absolutely quantify and determine the variability of the previously validated RG ubiquitin (VvUbi) across three test studies in three different tissues (roots, leaves and berries). In addition, in each study a GOI was absolutely quantified. Data sets resulting from relative and absolute methods of quantification were compared and the differences were striking. VvUbi expression was significantly different in magnitude between test studies and variable among individual samples. Absolute quantification consistently reduced the coefficients of variation of the GOIs by more than half, often resulting in differences in statistical significance and in some cases even changing the fundamental nature of the result. Utilizing genomic DNA-based absolute quantification is fast and efficient. Through eliminating error introduced by assuming RG stability and equal reaction efficiencies between the RG and GOI this methodology produces less variation, increased accuracy and greater statistical power. © 2012 Scandinavian Plant Physiology Society.
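A minimal sketch of standard-curve-based absolute quantification (hypothetical Cq values and dilution series; not the authors' pipeline or primer data):

```python
import numpy as np

def fit_standard_curve(copies, cq):
    """Fit Cq = slope * log10(copies) + intercept, the usual qPCR
    standard curve; the slope gives the actual reaction efficiency,
    so no equal-efficiency assumption is needed."""
    slope, intercept = np.polyfit(np.log10(copies), cq, 1)
    efficiency = 10 ** (-1 / slope) - 1   # 1.0 means perfect doubling
    return slope, intercept, efficiency

def absolute_copies(cq, slope, intercept):
    """Invert the curve to get absolute copy number from a sample Cq."""
    return 10 ** ((cq - intercept) / slope)

# hypothetical genomic-DNA dilution series with a near-ideal slope
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
cq = 38.0 - 3.32 * np.log10(copies)
slope, intercept, eff = fit_standard_curve(copies, cq)
```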
Reduction of weighing errors caused by tritium decay heating
International Nuclear Information System (INIS)
Shaw, J.F.
1978-01-01
The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%.
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is set not by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.
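As a toy instance of entropy maximization under a constraint (not the paper's spin-model machinery), the maximum-entropy distribution on a finite set with a prescribed mean is the exponential family p_i ∝ exp(lam * x_i), with lam found by bisection; the values below are hypothetical:

```python
import numpy as np

def maxent_given_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution over `values` with a fixed mean.
    The mean of p_i ∝ exp(lam * x_i) increases monotonically with lam,
    so simple bisection on lam solves the constraint."""
    def mean_for(lam):
        w = np.exp(lam * values)
        p = w / w.sum()
        return p @ values
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    w = np.exp(0.5 * (lo + hi) * values)
    return w / w.sum()

vals = np.arange(5.0)             # support {0, 1, 2, 3, 4}
p = maxent_given_mean(vals, 2.0)  # mean at the midpoint -> uniform
```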
Help prevent hospital errors (MedlinePlus patient instructions: //medlineplus.gov/ency/patientinstructions/000618.htm)
2012-03-01
This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...
International Nuclear Information System (INIS)
Jeach, J.L.
1976-01-01
When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
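The coarse-grouping effect is easy to reproduce in a sketch (hypothetical scale values): when the display grid is much coarser than the weighing noise, every reading rounds to the same cell, so the rounding error is perfectly anti-correlated with the weighing error and its mean is far from zero.

```python
import numpy as np

rng = np.random.default_rng(1)

true_mass = 10.1237     # g, hypothetical item
sigma_w = 0.0002        # weighing (scale) error, g
grid = 0.01             # coarse display resolution, g

readings = true_mass + rng.normal(0.0, sigma_w, 100000)
displayed = np.round(readings / grid) * grid      # what the scale shows
rounding = displayed - readings                   # rounding error

# grid >> sigma_w: all readings fall in one rounding cell, so the
# rounding error tracks the weighing error and has a non-zero mean
corr = np.corrcoef(rounding, readings - true_mass)[0, 1]
bias = rounding.mean()                            # ~= 10.12 - 10.1237
```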
Spotting software errors sooner
International Nuclear Information System (INIS)
Munro, D.
1989-01-01
Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)
International Nuclear Information System (INIS)
Kop, L.
2001-01-01
On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many small, but also some large, errors were found in the bills of 42 businesses.
Medical Errors Reduction Initiative
National Research Council Canada - National Science Library
Mutter, Michael L
2005-01-01
The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determination of the maximum water hammer is one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
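The classical first check behind such simulations is the Joukowsky relation for instantaneous valve closure; a sketch with illustrative numbers, not values from the study:

```python
def joukowsky_surge(rho, a, dv):
    """Maximum (instantaneous-closure) water-hammer pressure rise,
    dP = rho * a * dv: fluid density times pressure-wave speed times
    the change in flow velocity. This is the upper bound a designer
    checks before running a detailed transient simulation."""
    return rho * a * dv

# water in a steel pipe: wave speed ~1000 m/s, flow stopped from 2 m/s
dp = joukowsky_surge(rho=1000.0, a=1000.0, dv=2.0)  # Pa
dp_bar = dp / 1e5                                   # convert Pa to bar
```

A 2 m/s velocity change thus produces a surge of roughly 20 bar, which is why sudden closures dominate the design of conveyance pipelines.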
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single-gene trees of seven yeast species as well as single-gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
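The core counting step, picking the most frequent single-gene topology as the maximum gene-support tree, reduces to a simple tally; the toy topologies below are shown as canonical strings and are hypothetical:

```python
from collections import Counter

# each gene yields one tree topology (here a canonical Newick-like
# string); the maximum gene-support (MGS) tree is the most frequent one
gene_trees = [
    "((A,B),(C,D))", "((A,B),(C,D))", "((A,C),(B,D))",
    "((A,B),(C,D))", "((A,D),(B,C))", "((A,B),(C,D))",
]
mgs_tree, support = Counter(gene_trees).most_common(1)[0]
```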
DEFF Research Database (Denmark)
Rasmussen, Jens
1983-01-01
An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
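The arithmetic behind such an estimate is simply charge per pulse times repetition rate times beam energy; the numbers below are illustrative, not the official LCLS safety-envelope values:

```python
def average_beam_power(charge_per_pulse_nC, rep_rate_Hz, energy_GeV):
    """Average beam power P = q * f * E: charge per pulse times
    repetition rate gives the average current; multiplying by the beam
    energy expressed in volts (eV per elementary charge) gives watts."""
    current_A = charge_per_pulse_nC * 1e-9 * rep_rate_Hz
    return current_A * energy_GeV * 1e9

# hypothetical linac parameters: 1 nC per pulse at 120 Hz and 14 GeV
p_watts = average_beam_power(charge_per_pulse_nC=1.0,
                             rep_rate_Hz=120,
                             energy_GeV=14.0)
```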
Brunke, Heinz-Peter; Matzka, Jürgen
2018-01-01
At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements establishes routines in magnetic observatories. The traditional measuring schema uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method allowing for the evaluation of an arbitrary number (a minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and calculated error bars for them. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method also to ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new, faster and less error-prone measuring schema is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).
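In spirit, evaluating more orientations than parameters is an overdetermined least-squares problem; a generic sketch (synthetic linear model, not the actual DI-flux geometry) shows how extra observations yield both parameter estimates and error bars:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy linear model: each telescope orientation i gives a reading
# y_i = A_i @ x + noise, with x the 5 unknown parameters (matching the
# five independent parameters mentioned in the abstract)
n_orient, n_par = 12, 5        # 12 orientations > 5 params: overdetermined
A = rng.normal(size=(n_orient, n_par))
x_true = rng.normal(size=n_par)
sigma = 0.01
y = A @ x_true + rng.normal(0.0, sigma, n_orient)

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ x_hat
s2 = resid @ resid / (n_orient - n_par)   # estimated noise variance
cov = s2 * np.linalg.inv(A.T @ A)         # parameter covariance matrix
err_bars = np.sqrt(np.diag(cov))          # per-parameter error bars
```

A reading with an unusually large residual can then be dropped and the fit repeated, which is how a least-squares formulation lets individual erroneous readings be discarded without invalidating the whole set.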
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability of P- and PD-type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in the P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability, based on Lur'e systems, is also applied to the stability analysis of P-type fuzzy control systems with uncertain plants. The PD-type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant using the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
2008-01-01
One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177
Thermodynamics of Error Correction
Directory of Open Access Journals (Sweden)
Pablo Sartori
2015-12-01
Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.
2015-11-01
An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused-silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge-exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.
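The etalon's role can be sketched with the ideal Airy transmission function, which carves periodic calibration peaks into the broad LED emission; the free spectral range, finesse, and wavelength window below are made-up values, only loosely near the C+5 region:

```python
import numpy as np

def etalon_transmission(wavelength_nm, fsr_nm, finesse):
    """Airy transmission of an ideal Fabry-Perot etalon: a comb of
    peaks spaced by the free spectral range (FSR), with peak width set
    by the finesse. These peaks become wavelength markers on top of
    the LED's broad emission curve."""
    F = (2 * finesse / np.pi) ** 2          # coefficient of finesse
    delta = 2 * np.pi * wavelength_nm / fsr_nm
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

wl = np.linspace(343.0, 343.3, 3001)        # hypothetical 0.3 nm window
T = etalon_transmission(wl, fsr_nm=0.05, finesse=20.0)
```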
Error exponents for entanglement concentration
International Nuclear Information System (INIS)
Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas
2003-01-01
Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases.
On the mean squared error of the ridge estimator of the covariance and precision matrix
van Wieringen, Wessel N.
2017-01-01
For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
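A minimal numerical sketch of the dominance claim (not from the paper: the dimensions, penalty weight, and identity target are arbitrary illustrative choices, and the convex shrinkage form used here is one common ridge-type covariance estimator, not necessarily the paper's exact penalized-likelihood definition):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 10, 20, 200
true_cov = np.eye(p)   # true covariance: identity (illustrative choice)
alpha = 0.3            # arbitrary shrinkage weight toward the identity target

mse_ml = mse_ridge = 0.0
for _ in range(trials):
    x = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    s = (x.T @ x) / n                              # ML estimate (mean known to be zero)
    s_ridge = (1 - alpha) * s + alpha * np.eye(p)  # shrinkage ("ridge") estimate
    mse_ml += np.sum((s - true_cov) ** 2) / trials
    mse_ridge += np.sum((s_ridge - true_cov) ** 2) / trials

print(mse_ridge < mse_ml)  # prints: True
```

With the target equal to the truth, the shrinkage estimator's error is exactly (1 − α)² times the ML error per sample, so the reduction is guaranteed in this toy setting; the paper's result concerns the harder case of a general, unknown covariance.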
Quantum states and their marginals. From multipartite entanglement to quantum error-correcting codes
International Nuclear Information System (INIS)
Huber, Felix Michael
2017-01-01
At the heart of the curious phenomenon of quantum entanglement lies the relation between the whole and its parts. In my thesis, I explore different aspects of this theme in the multipartite setting by drawing connections to concepts from statistics, graph theory, and quantum error-correcting codes: first, I address the case when joint quantum states are determined by their few-body parts and by Jaynes' maximum entropy principle. This can be seen as an extension of the notion of entanglement, with less complex states already being determined by their few-body marginals. Second, I address the conditions for certain highly entangled multipartite states to exist. In particular, I present the solution of a long-standing open problem concerning the existence of an absolutely maximally entangled state on seven qubits. This sheds light on the algebraic properties of pure quantum states, and on the conditions that constrain the sharing of entanglement amongst multiple particles. Third, I investigate Ulam's graph reconstruction problems in the quantum setting, and obtain conditions for a set of states to be legitimate reductions of a joint graph state. Lastly, I apply and extend the weight enumerator machinery from quantum error correction to investigate the existence of codes and highly entangled states in higher dimensions. This clarifies the physical interpretation of the weight enumerators and of the quantum MacWilliams identity, leading to novel applications in multipartite entanglement.
THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX
Energy Technology Data Exchange (ETDEWEB)
Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Szczygieł, Dorota M.; Gould, Andrew [Department of Astronomy, The Ohio State University, 4051 McPherson Laboratory, Columbus, OH 43210 (United States); Sneden, Christopher [Department of Astronomy, University of Texas at Austin, TX 78712 (United States); Dong, Subo [Institute for Advanced Study, 500 Einstein Drive, Princeton, NJ 08540 (United States)
2013-09-20
We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.
Absolute Lower Bound on the Bounce Action
Sato, Ryosuke; Takimoto, Masahiro
2018-03-01
The decay rate of a false vacuum is determined by the minimal action solution of the tunneling field: bounce. In this Letter, we focus on models with scalar fields which have a canonical kinetic term in N (>2 ) dimensional Euclidean space, and derive an absolute lower bound on the bounce action. In the case of four-dimensional space, we show the bounce action is generically larger than 24 /λcr, where λcr≡max [-4 V (ϕ )/|ϕ |4] with the false vacuum being at ϕ =0 and V (0 )=0 . We derive this bound on the bounce action without solving the equation of motion explicitly. Our bound is derived by a quite simple discussion, and it provides useful information even if it is difficult to obtain the explicit form of the bounce solution. Our bound offers a sufficient condition for the stability of a false vacuum, and it is useful as a quick check on the vacuum stability for given models. Our bound can be applied to a broad class of scalar potential with any number of scalar fields. We also discuss a necessary condition for the bounce action taking a value close to this lower bound.
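In equation form, the quoted four-dimensional bound reads (notation as in the abstract, with the false vacuum at the origin):

```latex
S_{\text{bounce}} \;\geq\; \frac{24}{\lambda_{\text{cr}}},
\qquad
\lambda_{\text{cr}} \equiv \max_{\phi}\!\left[-\,\frac{4\,V(\phi)}{|\phi|^{4}}\right],
\qquad
V(0)=0 .
```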
Gyrokinetic statistical absolute equilibrium and turbulence
International Nuclear Information System (INIS)
Zhu Jianzhou; Hammett, Gregory W.
2010-01-01
A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: a finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N+1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.
Measurement of the ²³⁵U absolute activity
International Nuclear Information System (INIS)
Bueno, C.C.; Santos, M.D.S.
1993-01-01
The absolute activity of ²³⁵U contained in a sample was measured utilizing a sum-coincidence circuit which selects only the alpha particles emitted simultaneously with the 143 keV gamma radiations from the ²³¹Th product nucleus. The alpha particles were detected by means of a new type of gas scintillation chamber, in which the light emitted by excitation of the gas atoms, due to the passage of a charged incoming particle, has its intensity increased by the action of an applied electric field. The gamma radiations were detected by means of a 1″ × 1½″ NaI(Tl) scintillation detector. The value obtained for the half-life of ²³⁵U, (7.04 ± 0.01) × 10⁸ y, was compared with the data available from various observers who used different experimental techniques. It is shown that our results are in excellent agreement with the best data available on the subject. (author) 15 refs, 5 figs, 1 tab
Auditory processing in absolute pitch possessors
McKetton, Larissa; Schneider, Keith A.
2018-05-01
Absolute pitch (AP) is a rare ability to classify a musical pitch without a reference standard. It has been of great interest to researchers studying auditory processing and music cognition since it is seldom expressed and sheds light on influences pertaining to neurodevelopmental biological predispositions and the onset of musical training. We investigated the smallest frequency difference that can be detected between two pitches, the just-noticeable difference (JND). Here, we report significant differences in JND thresholds between AP musicians and non-AP musicians compared to non-musician control groups at both 1000 Hz and 987.76 Hz testing frequencies. Although the AP musicians did better than non-AP musicians, the difference was not significant. In addition, we looked at neuro-anatomical correlates of musicianship and AP using structural MRI. We report increased cortical thickness of the left Heschl's Gyrus (HG) and decreased cortical thickness of the inferior frontal opercular gyrus (IFO) and circular insular sulcus volume (CIS) in AP compared to non-AP musicians and controls. These structures may therefore be optimally enhanced and reduced to form the most efficient network for AP to emerge.
[Tobacco and plastic surgery: An absolute contraindication?]
Matusiak, C; De Runz, A; Maschino, H; Brix, M; Simon, E; Claudot, F
2017-08-01
Smoking increases perioperative risk regarding wound healing, infection rate and failure of microsurgical procedures. There is at present no consensus about plastic and aesthetic surgical indications concerning smoking patients. The aim of our study was to analyze French plastic surgeons' practices concerning smokers. A questionnaire was sent by e-mail to French plastic surgeons in order to evaluate their own operative indications: patient information about the dangers of smoking, pre- and postoperative smoking cessation periods, type of intervention carried out, smoking cessation supports, use of screening tests, and the smoking threshold leading to refusal of surgery were studied. Statistical tests were used to compare results according to practitioner activity (private or public), the surgeon's own smoking habits, and years in practice. Of 148 questionnaires, only one surgeon did not explain smoking risks. Of the surgeons, 49.3% proposed smoking-cessation supports, more frequently those in public practice (P=0.019). In total, 85.4% of surgeons did not use screening tests. Years in practice affected operative indications for smoking patients (P=0.02). Pre- and postoperative smoking cessation periods averaged 4 and 3 weeks respectively, in accordance with the literature. Potential improvements could be proposed for the care of smoking patients: smoking cessation assistance, screening tests, absolute contraindication of some procedures, or a consumption threshold to be determined. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based…
Directory of Open Access Journals (Sweden)
Yang Bai
2016-05-01
Full Text Available A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to one translational degree of freedom (DOF) movement, and immune to vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10⁻⁴ pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range.
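With the quoted sensitivity, converting a measured differential-capacitance change into displacement is a one-line calculation. The sketch below is illustrative only: the function name and the assumption of a purely linear response are not the authors' calibration procedure.

```python
SENSITIVITY_PF_PER_UM = 2e-4   # quoted sensitivity: 2e-4 pF per micrometer
RESOLUTION_UM = 0.08           # quoted displacement resolution

def displacement_um(delta_c_pf):
    """Convert a differential-capacitance change (pF) to displacement (um),
    assuming the linear response quoted in the abstract."""
    return delta_c_pf / SENSITIVITY_PF_PER_UM

# The 0.08 um resolution implies the bridge resolves capacitance changes of
# roughly 1.6e-5 pF (tens of attofarads):
c_resolution_pf = RESOLUTION_UM * SENSITIVITY_PF_PER_UM

print(displacement_um(2e-4))  # a 2e-4 pF change corresponds to 1 um
print(c_resolution_pf)
```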
Absolute beam-charge measurement for single-bunch electron beams
International Nuclear Information System (INIS)
Suwada, Tsuyoshi; Ohsawa, Satoshi; Furukawa, Kazuro; Akasaka, Nobumasa
2000-01-01
The absolute beam charge of a single-bunch electron beam with a pulse width of 10 ps and that of a short-pulsed electron beam with a pulse width of 1 ns were measured with a Faraday cup in a beam test for the KEK B-Factory (KEKB) injector linac. It is strongly desired to obtain a precise beam-injection rate to the KEKB rings, and to estimate the amount of beam loss. A wall-current monitor was also recalibrated within an error of ±2%. This report describes the new results for an absolute beam-charge measurement for single-bunch and short-pulsed electron beams, and recalibration of the wall-current monitors in detail. (author)
Directory of Open Access Journals (Sweden)
MA. Lendita Kryeziu
2015-06-01
Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982, p. 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways in which we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.
Compact disk error measurements
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Banks, H T; Holm, Kathleen; Robbins, Danielle
2010-11-01
We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
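As a toy illustration of the comparison (far simpler than the paper's nonlinear dynamical systems: the estimated parameter here is just a sample mean, and all numbers are synthetic), bootstrap and asymptotic-theory standard errors can be computed side by side:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic constant-variance ("absolute error") data around a true value of 5.0
data = rng.normal(loc=5.0, scale=2.0, size=200)

# Asymptotic-theory standard error of the sample mean: s / sqrt(n)
se_asym = data.std(ddof=1) / np.sqrt(len(data))

# Bootstrap standard error: resample with replacement, re-estimate, take the spread
boot_means = [rng.choice(data, size=len(data), replace=True).mean()
              for _ in range(2000)]
se_boot = np.std(boot_means, ddof=1)

print(round(se_asym, 3), round(se_boot, 3))  # the two estimates agree closely
```

For this simple estimator the two approaches should nearly coincide; the interest in the paper's setting is that they can diverge for nonlinear parameter-dependent models and non-constant-variance (relative error) data.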
Determining and monitoring of maximum permissible power for HWRR-3
International Nuclear Information System (INIS)
Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen
1987-01-01
The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of the maximum permissible power for HWRR-3. The calculating method is described, and the results of the calculation and the analysis of errors are also given. On-line calculation and real-time monitoring have been realized at the heavy water reactor, providing the reactor with real-time and reliable supervision. This makes operation convenient and increases reliability.
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
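The radiative limit of such a balance can be checked in a few lines. This sketch is not the paper's full model: it neglects sensible and ground heat flux entirely (plausible only for a dry, insulating soil), and the emissivity value is an assumption made here.

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.95  # assumed soil emissivity (illustrative value)
ABSORBED = 1000.0  # upper-bound absorbed shortwave flux from the abstract, W m^-2

# Balance absorbed flux against longwave emission alone:
#   ABSORBED = EMISSIVITY * SIGMA * T^4
t_surface_k = (ABSORBED / (EMISSIVITY * SIGMA)) ** 0.25

print(round(t_surface_k - 273.15, 1))  # about 96 C, within the quoted 90-100 C range
```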
Directory of Open Access Journals (Sweden)
Antonio Boldrini
2013-06-01
Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy) Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research
An Absolute Valve for the ITER Neutral Beam Injector
International Nuclear Information System (INIS)
Jones, Ch.; Chuilon, B.; Michael, W.
2006-01-01
In the ITER reference design a fast shutter was included to limit tritium migration into the beamline vacuum enclosures. The need was recently identified to extend the functionality of the fast shutter to that of an absolute valve in order to facilitate injector maintenance procedures and to satisfy safety requirements in case of an in-vessel loss of coolant event. Three concepts have been examined satisfying the ITER requirements for speed of actuation, sealing performance over the required lifetime, and pressure differential in fault scenarios, namely: a rectangular closure section; a circular cross section; and a rotary JET-type valve. The rectangular section represents the most efficient usage of the available space envelope and leads to a minimum-mass system, although it requires greater total force for a given load per unit length of seal. However, a metallic seal of the 'hard/hard' type, where the seal relies on the elastic properties of the material and does not utilise any type of spring device, can provide the required seal performance with typical loading of 200 kg/cm. The conceptual design of the proposed absolute valve will be presented. The aperture dimensions are 1.45 m high by 0.6 m wide, with a minimum achievable leak rate of 1 × 10⁻⁹ mbar·l/s and a maximum pressure differential of 3 bar across the valve. Sealing force is provided using two seal plates, linked by a 3 mm thick 'omega' diaphragm, by pressurisation of the interspace to 8 bar; this allows for a relative movement of the plates of 2 mm. Movement of the device perpendicular to the beam direction is carried out using a novel magnetic drive in order to transmit the motive force across the vacuum boundary, similar to that demonstrated on a test-rig in an earlier study. The conceptual design includes provision of all the services such as pneumatics and water cooling to cope with the heat loads from neutral beams in quasi steady-state operation and from the ITER plasma. A future programme
Evaluation of the absolute regional temperature potential
Directory of Open Access Journals (Sweden)
D. T. Shindell
2012-09-01
Full Text Available The Absolute Regional Temperature Potential (ARTP is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90–28° S, 28° S–28° N, 28–60° N and 60–90° N as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within ±20% of the actual responses, though there are some exceptions for 90–28° S and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the ±20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39–45% and 9–39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.
Orion Absolute Navigation System Progress and Challenge
Holt, Greg N.; D'Souza, Christopher
2012-01-01
The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed, as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.
LIBERTARISMO & ERROR CATEGORIAL
Directory of Open Access Journals (Sweden)
Carlos G. Patarroyo G.
2009-01-01
Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of them.
1985-01-01
A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.
Planck absolute entropy of a rotating BTZ black hole
Riaz, S. M. Jawwad
2018-04-01
In this paper, the Planck absolute entropy and the Bekenstein-Smarr formula of the rotating Banados-Teitelboim-Zanelli (BTZ) black hole are presented via a complex thermodynamical system contributed by its inner and outer horizons. The redefined entropy approaches zero as the temperature of the rotating BTZ black hole tends to absolute zero, satisfying the Nernst formulation of the third law for black holes. Hence, it can be regarded as the Planck absolute entropy of the rotating BTZ black hole.
Positioning, alignment and absolute pointing of the ANTARES neutrino telescope
International Nuclear Information System (INIS)
Fehr, F; Distefano, C
2010-01-01
A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep-sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.
Absolute nuclear material assay using count distribution (LAMBDA) space
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-06-05
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Maximum entropy deconvolution of low count nuclear medicine images
International Nuclear Information System (INIS)
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
Conically scanning lidar error in complex terrain
Directory of Open Access Journals (Sweden)
Ferhat Bingöl
2009-05-01
Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
International Nuclear Information System (INIS)
Tateoka, Kunihiko; Hareyama, Masato; Oouchi, Atsushi; Nakata, Kensei; Nagase, Daiki; Saikawa, Tsunehiko; Shimizume, Kazunari; Sugimoto, Harumi; Waka, Masaaki
2003-01-01
Intensity-modulated radiation therapy (IMRT) was developed to irradiate the target area more conformally, sparing organs at risk (OARs). Since the beams are sequentially delivered by many small, irregular, off-center fields in IMRT, dosimetric quality assurance (QA) is an extremely important issue. QA is performed by verifying both the dose distribution and doses at arbitrary points. In this work, we describe the verification of doses at arbitrary points in our hospital for segmental multileaf collimator (SMLC) IMRT. In general, verification of the absolute doses for IMRT is performed by comparing the doses calculated with radiation treatment planning systems (RTP) against the doses measured with a small-volume ionization chamber at arbitrary points in relatively flat regions of the dose gradients. However, no clear definitions of the dose gradients and the flat regions have yet been reported. We carried out verification by comparing the measured doses with the average dose and the central point dose in a virtual Farmer-type ionization chamber (V-F) and a virtual PinPoint ionization chamber (V-P), with volumes equal to those of the Farmer-type and PinPoint ionization chambers, using the RTP. Furthermore, we defined the dose gradient as the deviation of the maximum dose from the minimum dose in the virtual ionization chamber volume. In IMRT, the dose gradients may be as high as 80% or more in the virtual ionization chamber volume. Therefore, the effective center of the ionization chamber is thought to vary by segment for IMRT fields (i.e., the variation of the ionization chamber replacement effect). Additionally, in regions with a higher dose gradient, uncertainty in the measured doses is influenced by variations in the ionization chamber replacement effect and by ionization chamber positioning error. We more objectively examined the verification method for the absolute dose in IMRT using the virtual ionization chamber
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.
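The scheme the patent abstract describes — quantize the sensed signal onto discrete driver lines and permanently record the highest line triggered — can be sketched in software. The function name and parameters below are illustrative assumptions, not taken from the patent:

```python
def memorize_maximum(samples, n_levels, full_scale):
    """Software analogue of the memorizing scheme: quantize each sensed
    value onto n_levels discrete levels (the driver output lines) and
    retain the highest level ever reached (the blown microfuse segment).
    """
    max_level = 0
    for s in samples:
        # Clamp to the valid range, then quantize linearly.
        level = min(n_levels, max(0, int(s / full_scale * n_levels)))
        max_level = max(max_level, level)
    return max_level

# A transient peak of 0.73 (of full scale) on an 8-line driver
# leaves line 5 "blown".
peak = memorize_maximum([0.1, 0.73, 0.42], n_levels=8, full_scale=1.0)
```

Unlike the hardware, this sketch uses a linear quantizer only; the patent also allows a logarithmic driver.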
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Indian Academy of Sciences (India)
Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
Influence of Ephemeris Error on GPS Single Point Positioning Accuracy
Lihua, Ma; Wang, Meng
2013-09-01
The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to achieve its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of this influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error is along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometric relationship between the observer and the spatial constellation at a given time period.
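The abstract's key point — that only the LOS component of an ephemeris error corrupts the pseudorange — is, to first order, a simple vector projection. A minimal sketch (function name and geometry are illustrative assumptions, not from the paper):

```python
import numpy as np

def range_error_from_ephemeris_error(sat_pos, user_pos, ephem_error):
    """First-order pseudorange error contributed by an ephemeris error:
    the component of the error vector along the user-to-satellite LOS.
    Positions and error are 3-vectors in a common frame, in metres.
    """
    los = sat_pos - user_pos
    los_unit = los / np.linalg.norm(los)
    return float(np.dot(ephem_error, los_unit))

# A 5 m ephemeris error entirely along the LOS gives a 5 m range error,
# while the same error perpendicular to the LOS contributes ~0.
sat = np.array([26_560e3, 0.0, 0.0])   # roughly GPS orbital radius
user = np.array([6_371e3, 0.0, 0.0])   # Earth's surface
along = range_error_from_ephemeris_error(sat, user, np.array([5.0, 0.0, 0.0]))
across = range_error_from_ephemeris_error(sat, user, np.array([0.0, 5.0, 0.0]))
```

This is why the simulated maximum positioning error occurs for along-LOS ephemeris errors: the projection is then the full error magnitude.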
Challenge and Error: Critical Events and Attention-Related Errors
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Directory of Open Access Journals (Sweden)
Vladimir Katkovnik
2018-05-01
Full Text Available We study the problem of multiwavelength absolute phase retrieval from noisy diffraction patterns. The system is lensless, with multiwavelength coherent input light beams and random phase masks applied for wavefront modulation. The light beams are formed by light sources radiating all wavelengths simultaneously. A sensor equipped with a Color Filter Array (CFA) is used for spectral measurement registration. The developed algorithm, targeted at optimal phase retrieval from noisy observations, is based on the maximum likelihood technique. The algorithm is specified for Poissonian and Gaussian noise distributions. One of the key elements of the algorithm is an original sparse modeling of the multiwavelength complex-valued wavefronts based on complex-domain block-matching 3D filtering. The presented numerical experiments are restricted to noisy Poissonian observations. They demonstrate that the developed algorithm leads to effective solutions explicitly using the sparsity for noise suppression and enabling accurate reconstruction of high-dynamic-range absolute phase.
Team errors: definition and taxonomy
International Nuclear Information System (INIS)
Sasou, Kunihide; Reason, James
1999-01-01
In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Given this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.
WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice
Energy Technology Data Exchange (ETDEWEB)
Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)
2015-06-15
Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly
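The audit criteria quoted in the abstract (recommendations for absolute dose errors >3% and relative dosimetry errors >2%) reduce to a simple threshold check. A hedged sketch, with illustrative function and parameter names not taken from the IROC Houston procedures:

```python
def needs_recommendation(measured, reference, kind):
    """Flag a dosimetric discrepancy per the thresholds in the abstract:
    absolute dose errors >3%, relative dosimetry errors >2%.
    `kind` is 'absolute' or 'relative'; both doses share the same units.
    """
    threshold = 0.03 if kind == "absolute" else 0.02
    error = abs(measured - reference) / reference
    return error > threshold

# A 2% discrepancy passes an absolute-dose check but a 2.5% discrepancy
# fails a relative-dosimetry check.
flag_abs = needs_recommendation(1.02, 1.00, "absolute")
flag_rel = needs_recommendation(1.025, 1.00, "relative")
```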
Auto-calibration of Systematic Odometry Errors in Mobile Robots
DEFF Research Database (Denmark)
Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel
1999-01-01
This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
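The two systematic parameters the paper calibrates — wheel base and encoder-to-displacement gains — enter every dead-reckoning step of a differential-drive robot, which is why errors in them accumulate systematically. A minimal sketch of one such step (names and the midpoint-integration choice are illustrative assumptions):

```python
import math

def odometry_step(x, y, theta, ticks_left, ticks_right,
                  gain_left, gain_right, wheel_base):
    """One dead-reckoning update for a differential-drive robot.

    gain_left/gain_right convert encoder ticks to wheel displacement
    and wheel_base is the distance between the wheels -- the systematic
    parameters estimated by auto-calibration. A bias in any of them
    biases every step, so the pose error grows with distance travelled.
    """
    d_left = gain_left * ticks_left
    d_right = gain_right * ticks_right
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    # Midpoint integration of the heading over the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Equal ticks and equal gains: the robot drives straight ahead 0.1 m.
x, y, th = odometry_step(0.0, 0.0, 0.0, 100, 100, 0.001, 0.001, 0.3)
```

With a miscalibrated `gain_right`, the same tick counts would produce a spurious rotation `d_theta` every step — the systematic drift the paper's auto-calibration removes.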
Absolute density measurements in the middle atmosphere
Directory of Open Access Journals (Sweden)
M. Rapp
2001-05-01
Full Text Available In the last ten years a total of 25 sounding rockets employing ionization gauges have been launched at high latitudes (~70° N) to measure total atmospheric density and its small scale fluctuations in an altitude range between 70 and 110 km. While the determination of small scale fluctuations is unambiguous, the total density analysis has been complicated in the past by aerodynamical disturbances leading to densities inside the sensor which are enhanced compared to atmospheric values. Here, we present the results of both Monte Carlo simulations and wind tunnel measurements to quantify this aerodynamical effect. The comparison of the resulting ‘ram-factor’ profiles with empirically determined density ratios of ionization gauge measurements and falling sphere measurements provides excellent agreement. This demonstrates both the need, but also the possibility, to correct aerodynamical influences on measurements from sounding rockets. We have determined a total of 20 density profiles of the mesosphere-lower-thermosphere (MLT) region. Grouping these profiles according to season, a listing of mean density profiles is included in the paper. A comparison with density profiles taken from the reference atmospheres CIRA86 and MSIS90 results in differences of up to 40%. This reflects that current reference atmospheres are a significant potential error source for the determination of mixing ratios of, for example, trace gas constituents in the MLT region.
Key words. Middle atmosphere (composition and chemistry; pressure, density, and temperature; instruments and techniques)
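Since the 'ram factor' is the ratio of the (aerodynamically enhanced) in-sensor density to the ambient density, the correction the abstract describes is a division. A minimal sketch under that reading (function and argument names are illustrative assumptions):

```python
def correct_ram_effect(measured_density, ram_factor):
    """Recover ambient atmospheric density from an ionization gauge
    reading, assuming ram_factor = (in-sensor density)/(ambient density)
    as determined from Monte Carlo simulations or wind tunnel tests.
    """
    if ram_factor <= 0:
        raise ValueError("ram factor must be positive")
    return measured_density / ram_factor

# Illustrative values only: a gauge reading enhanced by a factor 1.6.
ambient = correct_ram_effect(measured_density=2.4e-7, ram_factor=1.6)  # kg/m^3
```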
Directory of Open Access Journals (Sweden)
Guochao Wang
2018-02-01
Full Text Available We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
Wang, Guochao; Tan, Lilong; Yan, Shuhua
2018-02-07
We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
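The non-ambiguous-range extension underlying synthetic wavelength interferometry follows from the standard formula Λ = λ₁λ₂/|λ₁ − λ₂|: two close optical wavelengths beat into one much longer "synthetic" wavelength. A short sketch with illustrative wavelengths (not the paper's values):

```python
def synthetic_wavelength(lambda1, lambda2):
    """Synthetic wavelength formed by two optical wavelengths (same
    units). Interferometric phase repeats every half wavelength, so a
    long synthetic wavelength extends the non-ambiguous range far
    beyond the sub-micron range of a single optical wavelength.
    """
    return lambda1 * lambda2 / abs(lambda1 - lambda2)

# Two wavelengths 0.4 nm apart near 1550 nm beat into a ~6 mm
# synthetic wavelength -- a four-orders-of-magnitude range extension.
lam1, lam2 = 1550e-9, 1550.4e-9
big_lambda = synthetic_wavelength(lam1, lam2)
```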
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The workshops named in the title have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.
Does Absolute Synonymy exist in Owere-Igbo? | Omego | AFRREV ...
African Journals Online (AJOL)
Among Igbo linguistic researchers, determining whether absolute synonymy exists in Owere–Igbo, a dialect of the Igbo language predominantly spoken by the people of Owerri, Imo State, Nigeria, has become a thorny issue. While some linguistic scholars strive to establish that absolute synonymy exists in the lexical ...
Absolute tense forms in Tswana | Pretorius | Journal for Language ...
African Journals Online (AJOL)
These views were compared in an attempt to put forth an applicable framework for the classification of the tenses in Tswana and to identify the absolute tenses of Tswana. Keywords: tense; simple tenses; compound tenses; absolute tenses; relative tenses; aspect; auxiliary verbs; auxiliary verbal groups; Tswana Opsomming
Absolute calibration of sniffer probes on Wendelstein 7-X
Moseev, D.; Laqua, H.P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.J.; Oosterbeek, J.W.
Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of
Xiong, Qiufen; Hu, Jianglin
2013-05-01
The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean with a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the decrease in Min/Max temperature with elevation is robust and reliable on a long time-scale. The characteristics of the anomaly field are only weakly related to elevation variation, so the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybridized interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations: the national reference climatological stations, the basic meteorological observing stations and the ordinary meteorological observing stations in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98) but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley with realistic, successive gridded data with 0.1° × 0.1° spatial resolution and
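The cross-validation statistics quoted above (mean bias error, root mean square error, mean absolute error) are standard verification measures. A minimal sketch of how they are computed, on toy values rather than the paper's data:

```python
import math

def verification_stats(predicted, observed):
    """Mean bias error, RMSE and MAE between interpolated (predicted)
    and station (observed) temperatures, as used in cross-validation.
    """
    n = len(predicted)
    diffs = [p - o for p, o in zip(predicted, observed)]
    bias = sum(diffs) / n                              # mean bias error
    rmse = math.sqrt(sum(d * d for d in diffs) / n)    # root mean square error
    mae = sum(abs(d) for d in diffs) / n               # mean absolute error
    return bias, rmse, mae

# Toy example: three held-out station values in degrees Celsius.
bias, rmse, mae = verification_stats([30.1, 29.5, 31.0], [30.0, 30.0, 31.0])
```

Note that RMSE ≥ MAE always holds, consistent with the paper's reported pairs (1.1 vs 0.85 °C and 0.89 vs 0.67 °C).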
Rieger, Martina; Martinez, Fanny; Wenke, Dorit
2011-01-01
Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…
Absolute instrumental neutron activation analysis at Lawrence Livermore Laboratory
International Nuclear Information System (INIS)
Heft, R.E.
1977-01-01
The Environmental Science Division at Lawrence Livermore Laboratory has in use a system of absolute Instrumental Neutron Activation Analysis (INAA). Basically, absolute INAA is dependent upon the absolute measurement of the disintegration rates of the nuclides produced by neutron capture. From such disintegration rate data, the amount of the target element present in the irradiated sample is calculated by dividing the observed disintegration rate for each nuclide by the expected value for the disintegration rate per microgram of the target element that produced the nuclide. In absolute INAA, the expected value for disintegration rate per microgram is calculated from nuclear parameters and from measured values of both thermal and epithermal neutron fluxes which were present during irradiation. Absolute INAA does not depend on the concurrent irradiation of elemental standards but does depend on the values for thermal and epithermal neutron capture cross-sections for the target nuclides. A description of the analytical method is presented
A developmental study of latent absolute pitch memory.
Jakubowski, Kelly; Müllensiefen, Daniel; Stewart, Lauren
2017-03-01
The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline, nor effects of musical training, gender, or familiarity with the stimuli were found in regard to latent AP task performance. These findings suggest that latent AP memory is a stable ability that is developed from as early as age 4 and persists into adulthood.
Advancing Absolute Calibration for JWST and Other Applications
Rieke, George; Bohlin, Ralph; Boyajian, Tabetha; Carey, Sean; Casagrande, Luca; Deustua, Susana; Gordon, Karl; Kraemer, Kathleen; Marengo, Massimo; Schlawin, Everett; Su, Kate; Sloan, Greg; Volk, Kevin
2017-10-01
We propose to exploit the unique optical stability of the Spitzer telescope, along with that of IRAC, to (1) transfer the accurate absolute calibration obtained with MSX on very bright stars directly to two reference stars within the dynamic range of the JWST imagers (and of other modern instrumentation); (2) establish a second accurate absolute calibration based on the absolutely calibrated spectrum of the sun, transferred onto the astronomical system via alpha Cen A; and (3) provide accurate infrared measurements for the 11 (of 15) highest priority stars with no such data but with accurate interferometrically measured diameters, allowing us to optimize determinations of effective temperatures using the infrared flux method and thus to extend the accurate absolute calibration spectrally. This program is integral to plans for an accurate absolute calibration of JWST and will also provide a valuable Spitzer legacy.
Correction of refractive errors
Directory of Open Access Journals (Sweden)
Vladimir Pfeifer
2005-10-01
Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest ways to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has opened new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.
Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego
2017-12-01
Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017. Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some of the studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except ½ Tr (CV > 20%), for which measurement error indexes were high. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Satellite Photometric Error Determination
2015-10-18
Satellite Photometric Error Determination. Tamara E. Payne, Philip J. Castro, Stephen A. Gregory. Applied Optimization, 714 East Monument Ave, Suite... advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly... filter systems will likely be supplanted by the Sloan-based filter systems. The Johnson photometric system is a set of filters in the optical
Video Error Correction Using Steganography
Robie, David L.; Mersereau, Russell M.
2002-12-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Video Error Correction Using Steganography
Directory of Open Access Journals (Sweden)
Robie David L
2002-01-01
Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
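The maximum-entropy trip-distribution idea described above can be sketched in its standard doubly constrained form, T_ij = A_i O_i B_j D_j f(c_ij), balanced by iterative proportional fitting. The zone counts, cost matrix and deterrence parameter below are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical sketch of a doubly constrained maximum-entropy trip-distribution
# model, solved by iterative proportional fitting; all inputs are made up.

def entropy_trip_distribution(origins, destinations, cost, beta=0.1, iters=200):
    f = np.exp(-beta * cost)                    # deterrence function exp(-beta * c_ij)
    A = np.ones(len(origins))
    for _ in range(iters):
        B = 1.0 / (f.T @ (A * origins))         # balance destination totals
        A = 1.0 / (f @ (B * destinations))      # balance origin totals
    return (A * origins)[:, None] * (B * destinations)[None, :] * f

origins = np.array([100.0, 200.0])              # trips produced per zone
destinations = np.array([150.0, 150.0])         # trips attracted per zone
cost = np.array([[1.0, 4.0], [3.0, 2.0]])       # travel cost between zones
T = entropy_trip_distribution(origins, destinations, cost)
# Row sums reproduce the origin totals, column sums the destination totals.
```

The balancing factors A and B play the role of the Lagrange multipliers enforcing the constraints in the entropy-maximization formulation.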
NDE errors and their propagation in sizing and growth estimates
International Nuclear Information System (INIS)
Horn, D.; Obrutsky, L.; Lakhan, R.
2009-01-01
The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
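The point about correlated errors in a difference calculation can be illustrated with the basic variance formula for a growth estimate g = d2 - d1; the standard deviations and correlation coefficient below are hypothetical, not the paper's values:

```python
import math

# Illustrative sketch (not the paper's model): standard deviation of a flaw-growth
# estimate g = d2 - d1 when the two depth measurements share correlated errors.
# Var(g) = s1^2 + s2^2 - 2*rho*s1*s2; positive correlation shrinks the growth error.

def growth_std(s1, s2, rho):
    return math.sqrt(s1 ** 2 + s2 ** 2 - 2.0 * rho * s1 * s2)

uncorrelated = growth_std(0.05, 0.05, 0.0)   # independent measurement errors
correlated = growth_std(0.05, 0.05, 0.8)     # shared (e.g., calibration) error
# correlated < uncorrelated: assuming independence overstates the growth uncertainty
```

This is the mechanism behind the paper's finding that the error on growth is significantly smaller than the value obtained assuming uncorrelated errors.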
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Proton spectroscopic imaging of polyacrylamide gel dosimeters for absolute radiation dosimetry
International Nuclear Information System (INIS)
Murphy, P.S.; Schwarz, A.J.; Leach, M.O.
2000-01-01
Proton spectroscopy has been evaluated as a method for quantifying radiation-induced changes in polyacrylamide gel dosimeters. A calibration was first performed using BANG-type gel samples receiving uniform doses of 6 MV photons from 0 to 9 Gy in 1 Gy intervals. The peak integral of the acrylic protons belonging to acrylamide and methylenebisacrylamide, normalized to the water signal, was plotted against absorbed dose. Response was approximately linear within the range 0-7 Gy. A large gel phantom irradiated with three coplanar 3 × 3 cm square fields to 5.74 Gy at isocentre was then imaged with an echo-filter technique to map the distribution of monomers directly. The image, normalized to the water signal, was converted into an absolute dose map. At the isocentre the measured dose was 5.69 Gy (SD = 0.09), in good agreement with the planned dose. The measured dose distribution elsewhere in the sample shows greater errors. A T2-derived dose map demonstrated a better relative distribution but gave an overestimate of the dose at isocentre of 18%. The data indicate that MR measurements of monomer concentration can complement T2-based measurements and can be used to verify absolute dose. Compared with the more usual T2 measurements for assessing gel polymerization, monomer concentration analysis is less sensitive to parameters such as gel pH and temperature, which can cause ambiguous relaxation time measurements and erroneous absolute dose calculations. (author)
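The calibration step described above (a normalized monomer signal, approximately linear in dose up to about 7 Gy, inverted to convert an imaged signal into an absolute dose) can be sketched as follows; the signal model and all numbers are made up for illustration:

```python
import numpy as np

# Hedged sketch of a dosimeter calibration: fit a straight line to (dose, normalized
# signal) pairs in the linear range, then invert it to map signal back to dose.
# The linear signal model below is hypothetical, not the paper's measured response.

doses = np.arange(0.0, 8.0)                      # Gy, calibration samples in 1 Gy steps
signal = 1.0 - 0.08 * doses                      # monomer signal falls as dose rises
slope, intercept = np.polyfit(doses, signal, 1)  # least-squares calibration line

def signal_to_dose(s):
    return (s - intercept) / slope               # invert the calibration

measured = 1.0 - 0.08 * 5.74                     # signal expected near 5.74 Gy
dose_estimate = signal_to_dose(measured)
```

In practice the fit would use measured peak integrals with their uncertainties; the inversion step is the same.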
Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.
Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu
2017-09-05
The capabilities of polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for the polarizable AMOEBA potential. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energies calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.
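The consistency check between the absolute and relative alchemical routes rests on a thermodynamic cycle: on a closed cycle, the relative binding free energy ΔΔG(A→B) must equal the difference of the two absolute binding free energies. A minimal sketch with hypothetical numbers:

```python
# Illustrative sketch (made-up values, not from the paper) of the thermodynamic-cycle
# consistency between absolute and relative alchemical binding free energies:
# ddG(A->B) = dG_bind(B) - dG_bind(A).

dG_bind = {"ligand_A": -8.2, "ligand_B": -6.9}   # kcal/mol, hypothetical absolute values

def relative_from_absolute(a, b, table):
    return table[b] - table[a]                   # ddG(A->B)

ddG_relative = relative_from_absolute("ligand_A", "ligand_B", dG_bind)
# A positive ddG here means ligand_B binds more weakly than ligand_A.
```

In the paper, agreement between the ΔΔG computed along the relative path and this difference of absolute values is the validation criterion for the relative scheme.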
Error-related brain activity and error awareness in an error classification paradigm.
Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E
2016-10-01
Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing, a prerequisite of error classification in our paradigm, leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between the Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3 × 10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4^2-) and cations (Na+, Mg^2+, Ca^2+, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4^2-/Cl- and Mg^2+/Na+, and 0.4% for Ca^2+/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3^- and CO3^2-. Apparent partial molar densities in seawater were
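The density-to-salinity conversion underlying this method can be roughed out with a single sensitivity coefficient; the haline sensitivity value below is an assumed textbook-level approximation, not the study's full equation of state:

```python
# Back-of-the-envelope sketch: near standard seawater, density increases by roughly
# 7.5e-4 g/mL per unit of salinity (g/kg), so a density precision maps to a salinity
# uncertainty of (density precision) / (d rho / dS). The coefficient is an assumption.

DRHO_DS = 7.5e-4  # g/mL per (g/kg), approximate haline sensitivity

def salinity_uncertainty(density_precision):
    return density_precision / DRHO_DS

u = salinity_uncertainty(2.3e-6)   # the density precision achieved in the abstract
# u is on the order of a few thousandths of a g/kg, the same order of magnitude as
# the ~0.002 g/kg salinity uncertainty quoted above.
```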
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
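The tree-case baseline that the paper extends to networks is the classic Fitch algorithm; a minimal sketch on a toy four-leaf tree follows (states and topology are illustrative, and this is the unit-cost tree case, not the network generalization):

```python
# Sketch of the Fitch parsimony algorithm on a binary tree. A node is either a leaf
# name (string) or a (left, right) tuple; each internal node costs one substitution
# whenever its children's state sets do not intersect.

def fitch(node, states):
    """Return (state_set, cost) for the subtree rooted at node."""
    if isinstance(node, str):
        return {states[node]}, 0
    left_set, left_cost = fitch(node[0], states)
    right_set, right_cost = fitch(node[1], states)
    inter = left_set & right_set
    if inter:                                    # children agree: no substitution
        return inter, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

tree = (("a", "b"), ("c", "d"))                  # toy topology
leaf_states = {"a": "A", "b": "A", "c": "G", "d": "A"}
root_set, score = fitch(tree, leaf_states)       # one substitution explains these data
```

The Sankoff variant mentioned in the abstract replaces the set intersection with a dynamic program over an arbitrary cost matrix; the network extension additionally handles conflicting assignments at reticulate vertices.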
A self-consistent, absolute isochronal age scale for young moving groups in the solar neighbourhood
Bell, Cameron P. M.; Mamajek, Eric E.; Naylor, Tim
2015-01-01
We present a self-consistent, absolute isochronal age scale for young (< 200 Myr), nearby (< 100 pc) moving groups in the solar neighbourhood based on homogeneous fitting of semi-empirical pre-main-sequence model isochrones using the tau^2 maximum-likelihood fitting statistic of Naylor & Jeffries in the M_V, V-J colour-magnitude diagram. The final adopted ages for the groups are: 149 (+51/-19) Myr for the AB Dor moving group, 24 ± 3 Myr for the β Pic moving group (BPMG), 45 (+11/-7) Myr for the...
The existence of negative absolute temperatures in Axelrod’s social influence model
Villegas-Febres, J. C.; Olivares-Rivas, W.
2008-06-01
We introduce the concept of temperature as an order parameter in the standard Axelrod's social influence model. It is defined as the relation between suitably defined entropy and energy functions, T = (∂E/∂S). We show that at the critical point, where the order/disorder transition occurs, this absolute temperature changes sign. At this point, which corresponds to the homogeneous/heterogeneous culture transition, the entropy of the system shows a maximum. We discuss the relationship between the temperature and other properties of the model in terms of cultural traits.
Kukush, Alexander
2011-01-16
With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^-1, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (this is a classical measurement error model) and M_i^tr = M_i^mes V_i^M (this is a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were taken from earlier epidemiological studies, and the binary response was simulated according to the dose-response model.
Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre
2011-02-16
With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^-1, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (this is a classical measurement error model) and M_i^tr = M_i^mes V_i^M (this is a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were taken from earlier epidemiological studies, and the binary response was simulated according to the dose-response model.
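The two error structures in the dose model (a classical multiplicative error on the measured activity Q, a Berkson multiplicative error on the thyroid mass M) can be sketched in a toy simulation; all numbers are illustrative and the lognormal error shape matches the abstract's assumption:

```python
import random

# Toy simulation (made-up values) of the dose D = f * Q / M with a classical
# multiplicative error on Q (Q_mes = Q_tr * V_Q) and a Berkson multiplicative error
# on M (M_tr = M_mes * V_M), both lognormal.

random.seed(1)

def simulate_dose(q_true, m_measured, f, sigma_q, sigma_m):
    v_q = random.lognormvariate(0.0, sigma_q)   # classical error on activity
    q_measured = q_true * v_q
    v_m = random.lognormvariate(0.0, sigma_m)   # Berkson error on mass
    m_true = m_measured * v_m
    dose_calculated = f * q_measured / m_measured   # what the dosimetrist reports
    dose_true = f * q_true / m_true                 # what the person received
    return dose_calculated, dose_true

calc, true = simulate_dose(q_true=50.0, m_measured=10.0, f=2.0, sigma_q=0.3, sigma_m=0.3)
calc0, true0 = simulate_dose(q_true=50.0, m_measured=10.0, f=2.0, sigma_q=0.0, sigma_m=0.0)
# With zero error variances the calculated and true doses coincide exactly.
```

Methods such as regression calibration and SIMEX, as studied in the abstract, aim to undo the bias that these error structures induce in the risk-parameter estimates.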
Directory of Open Access Journals (Sweden)
Farhad Azadi
2014-01-01
Full Text Available Objectives: Relative and absolute reliability are psychometric properties of a test on which many clinical decisions are based. In many cases only relative reliability is taken into consideration, although absolute reliability is also very important. Methods & Materials: Eleven community-dwelling older adults aged 65 years and older (69.64±3.58) and 20 healthy young adults aged 20 to 35 years (28.80±4.15) were evaluated twice, 2 to 5 days apart, using three versions of the Timed Up and Go test. Results: Generally, stratifying the non-homogeneous study population increased the Intra-class Correlation Coefficient (ICC); this coefficient was greater in elderly people than in young people and was reduced with a secondary task. In this study, absolute reliability indices computed from different data sources and equations led to more or less similar results. In general, in test-retest situations, a larger change is required in the elderly than in the young before it can be interpreted as a real change rather than a random one. The contribution of random error was slightly greater in the elderly than in the young and increased with a secondary task. Heterogeneity appears to moderate the absolute reliability indices. Conclusion: In relative reliability studies, researchers and clinicians should pay attention to factors such as the homogeneity of the population. Moreover, absolute reliability, alongside relative reliability, is needed and necessary in clinical decision making.
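The relative/absolute reliability distinction above is commonly operationalized as ICC versus the Standard Error of Measurement (SEM) and the Minimal Detectable Change (MDC95). A simplified sketch, using a Pearson correlation as a stand-in for a full ANOVA-based ICC and made-up test-retest scores:

```python
import math

# Hedged sketch: Pearson r stands in for the ICC (a simplification; a real ICC uses
# an ANOVA model), then SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM,
# the change a retest must exceed to be read as real rather than random.

test_scores = [9.8, 12.1, 10.5, 14.0, 11.2]      # illustrative first-session scores
retest_scores = [10.1, 12.4, 10.2, 13.6, 11.5]   # illustrative second-session scores

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def stdev(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

icc = pearson(test_scores, retest_scores)                         # relative reliability
sem = stdev(test_scores + retest_scores) * math.sqrt(1.0 - icc)   # absolute reliability
mdc95 = 1.96 * math.sqrt(2.0) * sem                               # minimal detectable change
```

A larger MDC95 in the elderly group is exactly the abstract's point: a bigger observed change is needed there before it counts as real.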
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
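The relative/absolute fusion idea can be caricatured with a one-dimensional Kalman filter: propagate attitude with the relative sensor (gyro rate), and correct it with an occasional absolute attitude fix so the integration drift stays bounded. The noise values and the drift rate below are illustrative, not the paper's:

```python
# Minimal 1-D Kalman filter sketch of relative/absolute sensor fusion.
# All parameters are hypothetical.

def kalman_step(angle, p, gyro_rate, dt, q, meas=None, r=None):
    angle += gyro_rate * dt            # predict with the relative sensor
    p += q                             # process noise grows the uncertainty
    if meas is not None:               # absolute attitude fix available this step
        k = p / (p + r)                # Kalman gain
        angle += k * (meas - angle)    # correct toward the absolute measurement
        p *= (1.0 - k)
    return angle, p

angle, p = 0.0, 1.0
for step in range(100):
    biased_rate = 0.01                       # true rate is 0: pure gyro bias/drift
    fix = 0.0 if step % 10 == 9 else None    # absolute attitude fix every 10 steps
    angle, p = kalman_step(angle, p, biased_rate, dt=1.0, q=0.01, meas=fix, r=0.05)
# Without the fixes the angle would drift to ~1.0; with them it stays near 0.
```

The EKF/UKF comparison in the abstract addresses the same structure in its full nonlinear, multi-axis form.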
Diagnostic errors in pediatric radiology
International Nuclear Information System (INIS)
Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.
2011-01-01
Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces, and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
Incorrect Weighting of Absolute Performance in Self-Assessment
Jeffrey, Scott A.; Cozzarin, Brian
Students spend much of their lives attempting to assess their aptitude for numerous tasks. For example, they expend a great deal of effort to determine their academic standing given a distribution of grades. This research finds that students use their absolute performance, or percentage correct, as a yardstick for their self-assessment, even when relative standing is much more informative. An experiment shows that this reliance on absolute performance for self-evaluation causes a misallocation of time and financial resources. Reasons for this inappropriate responsiveness to absolute performance are explored.
Analysis of the "naming game" with learning errors in communications.
Lou, Yang; Chen, Guanrong
2015-07-16
The naming game simulates the process by which a population of agents, organized in a communication network, names an object. Through pair-wise iterative interactions, the population asymptotically reaches consensus. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. Three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that: 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; and 3) without any strategy to eliminate learning errors, there is a threshold of learning error beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
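The pair-wise dynamics described in this abstract can be sketched in a few lines. The complete-graph population, the agent count, and the way a "learning error" mutates the transmitted word below are illustrative assumptions, not the paper's actual setup.

```python
import random

random.seed(1)

def naming_game(n_agents=20, p_error=0.0, max_rounds=20000):
    """Minimal naming game: agents on a complete graph converge on one word.
    A learning error (rate p_error) replaces the heard word with a new one."""
    lexicons = [set() for _ in range(n_agents)]
    words = iter(range(10**6))                     # endless supply of fresh words
    for t in range(max_rounds):
        s, h = random.sample(range(n_agents), 2)   # speaker, hearer
        if not lexicons[s]:
            lexicons[s].add(next(words))           # speaker invents a word
        word = random.choice(sorted(lexicons[s]))
        if random.random() < p_error:
            word = next(words)                     # learning error: word distorted
        if word in lexicons[h]:
            lexicons[s] = {word}                   # success: both collapse to it
            lexicons[h] = {word}
        else:
            lexicons[h].add(word)
        if all(lex == lexicons[0] and len(lex) == 1 for lex in lexicons):
            return t + 1                           # rounds until consensus
    return None

rounds = naming_game()
print(rounds is not None)
```

With a nonzero `p_error` the same sketch reproduces, qualitatively, the growing per-agent memory load that the abstract reports.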
Absolute Paleointensity Techniques: Developments in the Last 10 Years (Invited)
Bowles, J. A.; Brown, M. C.
2009-12-01
The ability to determine variations in absolute intensity of the Earth’s paleomagnetic field has greatly enhanced our understanding of geodynamo processes, including secular variation and field reversals. Igneous rocks and baked clay artifacts that carry a thermal remanence (TRM) have allowed us to study field variations over timescales ranging from decades to billions of years. All absolute paleointensity techniques are fundamentally based on repeating the natural process by which the sample acquired its magnetization, i.e. a laboratory TRM is acquired in a controlled field, and the ratio of the natural TRM to that acquired in the laboratory is directly proportional to the ancient field. Techniques for recovering paleointensity have evolved since the 1930s from relatively unsophisticated (but revolutionary for their time) single step remagnetizations to the various complicated, multi-step procedures in use today. These procedures can be broadly grouped into two categories: 1) “Thellier-type” experiments that step-wise heat samples at a series of temperatures up to the maximum unblocking temperature of the sample, progressively removing the natural remanence (NRM) and acquiring a laboratory-induced TRM; and 2) “Shaw-type” experiments that combine alternating field demagnetization of the NRM and laboratory TRM with a single heating to a temperature above the sample’s Curie temperature, acquiring a total TRM in one step. Many modifications to these techniques have been developed over the years with the goal of identifying and/or accommodating non-ideal behavior, such as alteration and multi-domain (MD) remanence, which may lead to inaccurate paleofield estimates. From a technological standpoint, perhaps the most significant development in the last decade is the use of microwave (de)magnetization in both Thellier-type and Shaw-type experiments. By using microwaves to directly generate spin waves within the magnetic grains (rather than using phonons
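The core relation stated in this abstract, that the ratio of natural to laboratory-acquired TRM is proportional to the ancient field, can be written out directly. The moments and laboratory field below are hypothetical numbers for illustration only.

```python
# Hypothetical values, purely illustrative of the ratio method
B_lab = 30.0       # known laboratory field, microtesla
NRM = 4.2e-6       # natural thermal remanence (arbitrary moment units)
TRM_lab = 3.5e-6   # remanence acquired in the laboratory field

# Linearity of TRM in weak fields: NRM / TRM_lab = B_ancient / B_lab
B_ancient = (NRM / TRM_lab) * B_lab
print(round(B_ancient, 1))  # → 36.0
```

Both the Thellier-type and Shaw-type procedures described above are, at bottom, elaborate safeguards (multi-step heating, demagnetization, alteration checks) around this single ratio.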
International Nuclear Information System (INIS)
Pan Jie; Yang Ning; Zhang Zhongzhong; Hu Libin; Chen Jie; Wu Wei; Jin Zhengyu; Liu Wei
2003-01-01
Objective: To evaluate the safety and efficacy of selective segmental sclerotherapy (SSS) of the liver by transportal absolute ethanol injection in an animal study, and to discuss several technical points of the method. Methods: Thirty dogs received SSS of the liver by transportal absolute ethanol injection at doses of 0.2-1.0 ml/kg. Repeated examinations of blood ethanol level, WBC count and liver function were performed, together with CT and pathological examinations of the liver. Results: All dogs treated with SSS survived the study. The maximum elevation of blood ethanol occurred in group F, with an average value of (1.603 ± 0.083) mg/ml, well below the lethal level. Transient elevations of WBC count and ALT were seen, with average values of (46.36 ± 7.28) × 10⁹ and (827.36 ± 147.25) U/L, respectively. CT and pathological examinations showed that dogs given SSS by transportal absolute ethanol injection at doses of 0.3-1.0 ml/kg had complete wedge-shaped necrosis in the liver. Conclusion: Selective segmental sclerotherapy of the liver by transportal ethanol injection is safe and effective if the proper dose of ethanol is injected. SSS may be useful in the treatment of HCC
Superselective renal artery embolization with lipiodol and absolute alcohol emulsion for renal tumor
International Nuclear Information System (INIS)
Yu Miao; Li Jiakai; Sun Minglu; Wang Huixian
2008-01-01
Objective: To evaluate the efficacy of renal arterial embolization with lipiodol and absolute alcohol emulsion in the treatment of renal tumors. Methods: Superselective renal arterial embolization by coaxial catheterization, with infusion of a lipiodol and absolute alcohol emulsion (in a 2:1 ratio), was performed in twenty patients with malignant or benign kidney tumors. Four weeks later, renal arteriography was performed routinely, with repeated embolization when necessary, and follow-up was carried out periodically. Results: Imaging showed thorough tumor necrosis and interruption of the feeding vessels in 18 cases after one session of treatment. Tumor volume decreased by more than half in 13 of these patients (72.2%, 13/18), with well-distributed lipiodol inside the tumors. A second session of treatment was performed in the other 2 patients, and the clinical symptoms were obviously relieved. Conclusions: Superselective renal artery embolization with lipiodol and absolute alcohol emulsion can permanently embolize all tumor-feeding arteries at the capillary level with maximum preservation of renal function, providing definitive efficacy, and is worth recommending widely. (authors)
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical test problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. A comparison with maximum a posteriori restoration is made. 7 figures
Institute of Scientific and Technical Information of China (English)
Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh
2017-01-01
The present study was carried out to track the maximum power point of a variable-speed wind turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, the rotor speed is first set to an optimal point for different wind speeds. As a result, the tip-speed ratio reaches its optimal value, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. The maximum mechanical torque is then tracked via the electromechanical torque. In this technique, the integral of the tracking error of the maximum mechanical torque, the error itself, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and to minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed by a second-order integral operation on the original sliding mode control input signal. The result of the second-order integral in this model includes control-signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator output power. The simulation results, obtained using MATLAB/m-file software, show the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
Dependence of absolute magnitudes (energies) of flares on the cluster age containing flare stars
International Nuclear Information System (INIS)
Parsamyan, Eh.S.
1976-01-01
Dependences between Δm_U and m_U are given for the Orion, NGC 7000, Pleiades and Praesepe aggregates. Maximum absolute values of flares have been calculated for stars of different luminosities. It is shown that the flare values can be bounded by a straight line, which represents the distribution of maximum amplitude values for stars of different luminosities in an aggregate. The parameters k and m_0 characterizing these lines are presented for the Orion, NGC 7000, Pleiades and Praesepe aggregates, together with their dependence on the age T. From the dependence between k (the angular coefficient of the straight lines) and lg T for aggregates with known T, the age of aggregates containing a large number of flare stars can be found. The age of flare stars in the neighbourhood of the Sun has been determined. The age of UV Ceti is shown to exceed that of the remaining stars by an order of magnitude
Energy Technology Data Exchange (ETDEWEB)
Papoular, R
1997-07-01
The Fourier transform is of central importance to crystallography since it allows the visualization in real space of three-dimensional scattering densities of physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons, electrons or other probes). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by least-squares techniques (e.g., the Rietveld method in the case of powder diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in astronomy, radioastronomy and medical imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; and MaxEnt substantially reduces the truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to crystallography are first presented. The method is then illustrated by a detailed example specific to neutron diffraction: the search for protons in solids. (author). 17 refs.
Efficacy of intrahepatic absolute alcohol in unresectable hepatocellular carcinoma
International Nuclear Information System (INIS)
Farooqi, J.I.; Hameed, K.; Khan, I.U.; Shah, S.
2001-01-01
To determine the efficacy of intrahepatic absolute alcohol injection in unresectable hepatocellular carcinoma. A randomized, controlled, experimental and interventional clinical trial. Gastroenterology Department, PGMI, Hayatabad Medical Complex, Peshawar, from June 1998 to June 2000. Thirty patients were treated with percutaneous intrahepatic absolute alcohol injections in repeated sessions; 33 patients were not treated with alcohol and served as controls. Both groups were comparable in age, sex and other baseline characteristics. Absolute alcohol therapy significantly improved patients' quality of life, reduced tumor size and mortality, and showed significantly better survival (P < 0.05) than the control group. We conclude that absolute alcohol is a beneficial and safe palliative treatment measure in advanced hepatocellular carcinoma (HCC). (author)
DOES ABSOLUTE SYNONYMY EXIST IN OWERE-IGBO?
African Journals Online (AJOL)
USER
The researcher also interviewed native speakers of the dialect. The study ... The word 'synonymy' means sameness of meaning, i.e., a relationship in which more ... whether absolute synonymy exists in Owere–Igbo or not. ..... 'close this book'.
Prognostic Value of Absolute versus Relative Rise of Blood ...
African Journals Online (AJOL)
maternal outcome than a relative rise in the systolic/diastolic blood pressure from mid pregnancy, which did not reach this absolute level. We conclude that in the Nigerian obstetric population, the practice of diagnosing pregnancy hypertension on ...
Absolute calibration of sniffer probes on Wendelstein 7-X
International Nuclear Information System (INIS)
Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.
2016-01-01
Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and an implicit measurement of the quality factor of the empty Wendelstein 7-X vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients obtained by direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels of up to 340 kW/m² per MW of injected beam power. Farthest from the launcher, i.e., half a toroidal turn away, 90 kW/m² per MW of injected beam power is still measured.
Probative value of absolute and relative judgments in eyewitness identification.
Clark, Steven E; Erickson, Michael A; Breneman, Jesse
2011-10-01
It is widely accepted that eyewitness identification decisions based on relative judgments are less accurate than those based on absolute judgments. However, the theoretical foundation for this view has not been established. In this study, relative and absolute judgments were compared through simulations of the WITNESS model (Clark, Appl Cogn Psychol 17:629-654, 2003) to address the question: do suspect identifications based on absolute judgments have higher probative value than those based on relative judgments? Simulations of the WITNESS model showed a consistent advantage for absolute judgments over relative judgments for suspect-matched lineups. However, simulations of same-foils lineups showed a complex interaction based on the accuracy of memory and the similarity relationships among lineup members.
Changes in Absolute Sea Level Along U.S. Coasts
U.S. Environmental Protection Agency — This map shows changes in absolute sea level from 1960 to 2016 based on satellite measurements. Data were adjusted by applying an inverted barometer (air pressure)...
Confirmation of the absolute configuration of (−)-aurantioclavine
Behenna, Douglas C.; Krishnan, Shyam; Stoltz, Brian M.
2011-01-01
We confirm our previous assignment of the absolute configuration of (-)-aurantioclavine as 7R by crystallographically characterizing an advanced 3-bromoindole intermediate reported in our previous synthesis. This analysis also provides additional
Standard Errors for Matrix Correlations.
Ogasawara, Haruhiko
1999-01-01
Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)
A proposal to measure absolute environmental sustainability in lifecycle assessment
DEFF Research Database (Denmark)
Bjørn, Anders; Margni, Manuele; Roy, Pierre-Olivier
2016-01-01
sustainable are therefore increasingly important. Such absolute indicators exist, but suffer from shortcomings such as incomplete coverage of environmental issues, varying data quality and varying or insufficient spatial resolution. The purpose of this article is to demonstrate that life cycle assessment (LCA...... in supporting decisions aimed at simultaneously reducing environmental impacts efficiently and maintaining or achieving environmental sustainability. We have demonstrated that LCA indicators can be modified from being relative to being absolute indicators of environmental sustainability. Further research should...
Overspecification of colour, pattern, and size: Salience, absoluteness, and consistency
Sammie Tarenskeen; Mirjam Broersma; Bart Geurts
2015-01-01
The rates of overspecification of colour, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Colour and pattern are absolute attributes, whereas size is relative and less salient. Additionally, a tendency towards consistent responses is assessed. Using a within-participants design, we find similar rates of colour and pattern overspecification, which are both higher than the rate of size overspecification. Using a bet...
Overspecification of color, pattern, and size: salience, absoluteness, and consistency
Tarenskeen, S.L.; Broersma, M.; Geurts, B.
2015-01-01
The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Usi...
Error forecasting schemes of error correction at receiver
International Nuclear Information System (INIS)
Bhunia, C.T.
2007-08-01
To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining (PC) scheme, in which errors are corrected at the receiver from the erroneous copies. The PC scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes, known respectively as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme. In this letter, two error-forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
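The packet-combining idea the letter builds on can be sketched as follows. This is a toy illustration, not the letter's actual scheme: the weighted checksum stands in for a real CRC, and the packets and error positions are invented.

```python
from itertools import product

def check(bits):
    # toy weighted checksum standing in for a real CRC
    return sum((i + 1) * b for i, b in enumerate(bits)) % 16

def packet_combine(copy_a, copy_b, checksum):
    """Bits where two received copies disagree are the candidate error
    locations; search flip patterns there until the integrity check passes."""
    diff = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
    for pattern in product([0, 1], repeat=len(diff)):
        candidate = list(copy_a)
        for pos, flip in zip(diff, pattern):
            if flip:
                candidate[pos] = 1 - candidate[pos]
        if check(candidate) == checksum:
            return candidate
    return None  # fails when both copies err in the same positions

sent = [1, 0, 1, 1, 0, 1]
a = [1, 0, 0, 1, 0, 1]   # copy with bit 2 corrupted
b = [1, 0, 1, 1, 1, 1]   # copy with bit 4 corrupted
print(packet_combine(a, b, check(sent)))  # → [1, 0, 1, 1, 0, 1]
```

The `return None` branch is exactly failure mode (i) from the abstract: identical error locations leave no disagreement for the receiver to exploit, which is what PRPC and MPC were designed to fix.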
Dirnberger, J; Wiesinger, H P; Stöggl, T; Kösters, A; Müller, E
2012-09-01
Isokinetic devices are highly rated in strength-related performance diagnosis. A few years ago, the broad variety of existing products was extended by the IsoMed 2000 dynamometer. For an isokinetic device to be clinically useful, the reliability of its specific applications must be established. Although single studies on this topic exist for the IsoMed 2000 concerning maximum strength measurements, there has been no study of the assessment of strength-endurance so far. The aim of the present study was to establish the reliability of various methods of quantifying strength-endurance using the IsoMed 2000. A sample of 33 healthy young subjects (age: 23.8 ± 2.6 years) participated in one familiarisation and two testing sessions, 3-4 days apart. Testing consisted of a series of 30 full-effort concentric extension-flexion cycles of the right knee muscles at an angular velocity of 180°/s. Based on the parameters Peak Torque and Work for each repetition, indices of absolute (KADabs) and relative (KADrel) strength-endurance were derived. KADabs was calculated as the mean value of all testing repetitions; KADrel was determined in two ways: on the one hand, as the percentage decrease between the first and the last 5 repetitions (KADrelA) and, on the other, as the negative slope derived from the linear regression equation over all repetitions (KADrelB). Detection of systematic errors was performed using paired-sample t-tests; relative and absolute reliability were examined using the intraclass correlation coefficient (ICC 2.1) and the standard error of measurement (SEM%), respectively. In general, for extension measurements concerning KADabs and, in a weakened form, KADrel, high ICC values of 0.76-0.89 combined with clinically acceptable SEM% values of 1.2-5.9% were found. For flexion measurements this only applies to KADabs, whereas results for KADrel turned out to be clearly weaker, with ICC and SEM% values of 0.42-0.62 and 9
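The two reliability statistics used in this abstract can be computed from first principles. The test-retest scores below are invented, and SEM is taken here as SD·√(1−ICC), one of several common conventions, so this is a sketch of the statistics rather than a reconstruction of the study's analysis.

```python
from math import sqrt

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: one row per subject, one score per session."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ms_r = ss_rows / (n - 1)                              # between subjects
    ms_c = ss_cols / (k - 1)                              # between sessions
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# hypothetical test-retest scores: five subjects, two sessions
scores = [(100, 102), (80, 82), (120, 118), (90, 91), (110, 108)]
icc = icc_2_1(scores)
flat = [x for row in scores for x in row]
mean = sum(flat) / len(flat)
sd = sqrt(sum((x - mean) ** 2 for x in flat) / (len(flat) - 1))
sem_pct = 100 * sd * sqrt(1 - icc) / mean   # SEM expressed as % of the mean
print(round(icc, 3), round(sem_pct, 2))     # → 0.992 1.23
```

High between-subject variance with small session-to-session differences yields ICC near 1 and a small SEM%, the pattern the abstract reports for the extension KADabs index.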
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie
2002-01-01
Healthcare has been slow to apply human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need to reduce medication errors, the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...
The Pragmatics of "Unruly" Dative Absolutes in Early Slavic
Directory of Open Access Journals (Sweden)
Daniel E. Collins
2011-08-01
This chapter examines some uses of the dative absolute in Old Church Slavonic and in early recensional Slavonic texts that depart from notions of how Indo-European absolute constructions should behave, either because they have subjects coreferential with the (putative) main-clause subjects or because they function as if they were main clauses in their own right. Such "noncanonical" absolutes have generally been written off as mechanistic translations or as mistakes by scribes who did not understand the proper uses of the construction. In reality, the problem is not with literalistic translators or incompetent scribes but with the definition of the construction itself; it is quite possible to redefine the Early Slavic dative absolute in a way that accounts for the supposedly deviant cases. While the absolute is generally dependent semantically on an adjacent unit of discourse, it should not always be regarded as subordinated syntactically. There are good grounds for viewing some absolutes not as dependent clauses but as independent sentences whose collateral character is an issue not of syntax but of the pragmatics of discourse.
Approximation for maximum pressure calculation in containment of PWR reactors
International Nuclear Information System (INIS)
Souza, A.L. de
1989-01-01
A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a loss-of-coolant accident (LOCA). The proposed expression is a function of the total energy released to the containment by the primary circuit, the free volume of the containment building, and the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants, with errors on the order of ±12%. (author) [pt
Uncertainty quantification and error analysis
Energy Technology Data Exchange (ETDEWEB)
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Error Patterns in Problem Solving.
Babbitt, Beatrice C.
Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…
Performance, postmodernity and errors
DEFF Research Database (Denmark)
Harder, Peter
2013-01-01
speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system...... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Maximum Power Point Tracking Based on Sliding Mode Control
Directory of Open Access Journals (Sweden)
Nimrod Vázquez
2015-01-01
Solar panels have become a good choice for generating and supplying electricity in commercial and residential applications. Generation starts with the solar cells, whose output power depends in a complex way on solar irradiation and temperature. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering only the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper, the voltage, current, and temperature of the PV system are taken as part of a sliding surface for the proposed maximum power point tracking; that is, a sliding mode controller is applied. The obtained results give a good dynamic response, unlike traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to assure a low steady-state error.
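A stripped-down illustration of the signum control law on a sliding surface can make the idea concrete. The single-diode panel model and every parameter below are assumptions for illustration; the surface here is simply s = dP/dV, with temperature held fixed, whereas the paper's surface also incorporates temperature.

```python
from math import exp

# Assumed single-diode PV string model; numbers are illustrative only
I_L, I_0, V_T = 5.0, 1e-9, 0.93   # photocurrent [A], saturation current [A], thermal voltage [V]

def power(v):
    return v * (I_L - I_0 * (exp(v / V_T) - 1.0))

def surface(v, h=1e-5):
    # sliding surface s = dP/dV (central difference); s = 0 at the MPP
    return (power(v + h) - power(v - h)) / (2 * h)

v, step = 5.0, 0.05
for _ in range(1000):
    v += step if surface(v) > 0 else -step   # signum law drives s toward 0
print(round(v, 1))
```

The run converges to the maximum power point and then chatters in a narrow band around it, which is the characteristic (and here unfiltered) behavior of a first-order sliding mode controller.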
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage of maximum power, current of maximum power, and maximum power) is plotted as a function of the time of day.
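The method described, setting the derivative of the power equation to zero, can be reproduced numerically. The sketch below bisects on dP/dV for a hypothetical diode-model panel; the model and all parameter values are assumptions for illustration, not the project's panel.

```python
import math

def power(v, i_sc=5.0, v_oc=21.0, a=1.5):
    # Illustrative diode-model panel: P(V) = V * I(V), parameters invented.
    return v * i_sc * (1.0 - math.exp((v - v_oc) / a))

def dP_dV(v, h=1e-6):
    # Central-difference derivative of the power curve.
    return (power(v + h) - power(v - h)) / (2.0 * h)

def max_power_voltage(lo=0.0, hi=21.0, tol=1e-9):
    """Bisection on dP/dV = 0: dP/dV > 0 left of the maximum, < 0 right of it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dP_dV(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v_mp = max_power_voltage()   # voltage of maximum power
p_mp = power(v_mp)           # maximum power
```

Repeating this for irradiance/temperature parameters measured at each time of day would reproduce the plotted daily curves of V_mp, I_mp, and P_max.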
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-03-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
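The contrast drawn here, maximum-likelihood decoding returning the single ground state versus maximum-entropy decoding averaging each bit over a Boltzmann distribution, can be illustrated by exact enumeration on a toy Ising chain. The model, fields, and couplings below are invented for illustration; the paper samples a much larger random Ising model with an annealer rather than enumerating states.

```python
import itertools
import math

def boltzmann_bit_decode(h, J, beta):
    """Per-bit maximum-entropy decoding on a small Ising chain with
    energy E(s) = -sum_i h_i*s_i - sum_i J_i*s_i*s_{i+1}: weight every
    spin state by exp(-beta*E) and take the sign of each bit's thermal
    average (a toy stand-in for sampling the annealer's excited states)."""
    n = len(h)
    mags = [0.0] * n
    Z = 0.0
    for state in itertools.product((-1, 1), repeat=n):
        E = -sum(h[i] * state[i] for i in range(n))
        E -= sum(J[i] * state[i] * state[i + 1] for i in range(n - 1))
        w = math.exp(-beta * E)
        Z += w
        for i in range(n):
            mags[i] += w * state[i]
    return [1 if m > 0 else -1 for m in mags], [m / Z for m in mags]

# Ferromagnetic couplings (the "code" structure) with one corrupted
# local field on bit 1, as if that bit's channel output were flipped.
h = [0.8, -0.3, 0.8, 0.8]
J = [1.0, 1.0, 1.0]
bits, m = boltzmann_bit_decode(h, J, beta=1.0)
```

The couplings outvote the corrupted field, so bit 1 is decoded as +1 even though its own field points the other way; its thermal average is positive but reduced in magnitude, reflecting the noise.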
Errors and Correction of Precipitation Measurements in China
Institute of Scientific and Technical Information of China (English)
REN Zhihua; LI Mingqin
2007-01-01
In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a correlation of power function exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
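The reported power-function correlation (r = 0.99) between the horizontal-gauge catch and the operational-vs-pit-gauge difference is the kind of relation y = a·x^b that is fit by ordinary least squares in log-log space. The sketch below does exactly that on synthetic data; the numbers and the exact functional form are illustrative assumptions, not the stations' measurements.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via the linearization
    log y = log a + b * log x; also returns the log-space correlation r."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((u - mx) ** 2 for u in lx)
    syy = sum((v - my) ** 2 for v in ly)
    sxy = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    b = sxy / sxx
    a = math.exp(my - b * mx)
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

# Synthetic gauge data following an exact power law (illustrative only):
catch = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0]   # horizontal-gauge catch
diff = [0.5 * c ** 1.3 for c in catch]        # operational-vs-pit difference
a, b, r = fit_power_law(catch, diff)
```

With the fitted a and b in hand, an operational gauge reading could be corrected toward the pit-gauge reference from a parallel horizontal-gauge observation, as the abstract describes.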
THE DISKMASS SURVEY. II. ERROR BUDGET
International Nuclear Information System (INIS)
Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.
2010-01-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
Controlling errors in unidosis carts
Directory of Open Access Journals (Sweden)
Inmaculada Díaz Fernández
2010-01-01
Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Uncorrected unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors arise when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient did not take the medication (14.36%) or was discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be revised, and a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are revised before being sent to hospitalization units, the error diminishes to 0.3%.
Prioritising interventions against medication errors
DEFF Research Database (Denmark)
Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard
Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary…
Social aspects of clinical errors.
Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave
2009-08-01
Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.
Auditory working memory predicts individual differences in absolute pitch learning.
Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C
2015-07-01
Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the
Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.
2018-03-01
Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap lower than 180° and distance to the first station lower than 15 km). Errors on velocity models and accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phases weighting). Errors in the probabilistic approach are defined to take into account errors on velocity models and on arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
Directory of Open Access Journals (Sweden)
KADEK DWI FARMANI
2012-09-01
Full Text Available Linear regression analysis is one of the parametric statistical methods which utilize the relationship between two or more quantitative variables. In linear regression analysis, there are several assumptions that must be met: the errors are normally distributed, the errors are uncorrelated, and the error variance is constant (homogeneous). Several conditions can prevent these assumptions from being met, for example correlation between the independent variables (multicollinearity) and constraints on the number of observations and independent variables obtained. When the number of samples obtained is less than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to overcome microarray data, overfitting, and multicollinearity. This study therefore compares the LASSO and PLS methods. It uses data on coronary heart disease and stroke patients, which are microarray data containing multicollinearity. For these data, in which most independent variables are only weakly correlated with each other, the LASSO method produces a better model than PLS, as judged by RMSEP.
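LASSO's defining ingredient is the soft-thresholding (shrinkage) operator applied inside a coordinate-descent loop, which is what drives some coefficients exactly to zero and makes the method usable when predictors outnumber samples. Below is a minimal sketch on a toy orthogonal design; the data are invented and this is not the software used in the study.

```python
def soft_threshold(z, t):
    # LASSO's shrinkage operator: pull z toward zero by t, clipping at 0.
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    """Coordinate descent for (1/2n)*||y - X*beta||^2 + lam*||beta||_1,
    assuming each column is scaled so that sum_i X[i][j]**2 == n."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j))
                for i in range(n)
            )
            beta[j] = soft_threshold(rho / n, lam)
    return beta

# Two orthogonal, standardized predictors; only the first drives y.
X = [[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]]
y = [2.0, -2.0, 2.0, -2.0]      # y = 2 * x1 exactly
beta = lasso_cd(X, y, lam=0.5)
```

The relevant coefficient is shrunk from 2.0 to 1.5 by the penalty, while the irrelevant one is set exactly to zero, which is the variable-selection behavior that distinguishes LASSO from PLS's dense latent-component fits.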
Relative and absolute risk in epidemiology and health physics
International Nuclear Information System (INIS)
Goldsmith, R.; Peterson, H.T. Jr.
1983-01-01
The health risk from ionizing radiation commonly is expressed in two forms: (1) the relative risk, which is the percentage increase in the natural disease rate, and (2) the absolute or attributable risk, which represents the difference between the natural rate and the rate associated with the agent in question. Relative risk estimates for ionizing radiation generally are higher than those expressed as the absolute risk. This raises the question of which risk estimator is the most appropriate under different conditions. The absolute risk has generally been used for radiation risk assessment, although mathematical combinations such as the arithmetic or geometric mean of both the absolute and relative risks have also been used. Combinations of the two risk estimators are not valid because the absolute and relative risk are not independent variables. Both human epidemiologic studies and animal experimental data can be found to illustrate the functional relationship between the natural cancer risk and the risk associated with radiation. This implies that the radiation risk estimate derived from one population may not be appropriate for predictions in another population, unless it is adjusted for the difference in the natural disease incidence between the two populations.
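The two risk measures defined here are simple arithmetic on incidence rates. The toy numbers below (invented, expressed per 100,000 person-years) show how the same relative risk can correspond to very different absolute risks in populations with different natural rates, which is the abstract's reason why an estimate from one population may not transfer to another.

```python
def relative_risk(exposed_rate, baseline_rate):
    # ratio form: the multiplicative increase over the natural disease rate
    return exposed_rate / baseline_rate

def absolute_risk(exposed_rate, baseline_rate):
    # difference form: the excess rate attributable to the agent
    return exposed_rate - baseline_rate

# Population A: natural rate 100, exposed rate 150 (per 100,000 person-years)
rr_a = relative_risk(150.0, 100.0)
ar_a = absolute_risk(150.0, 100.0)

# Population B: natural rate 20, exposed rate 30 (same 1.5x multiplier)
rr_b = relative_risk(30.0, 20.0)
ar_b = absolute_risk(30.0, 20.0)
```

Both populations have a relative risk of 1.5, yet the attributable risk is 50 excess cases in A and only 10 in B, so assuming either measure is transportable amounts to assuming a particular dependence on the natural incidence.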
Absolute Navigation Information Estimation for Micro Planetary Rovers
Directory of Open Access Journals (Sweden)
Muhammad Ilyas
2016-03-01
Full Text Available This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, by using low-power, low-weight and low-volume Microelectromechanical Systems-type (MEMS) sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extremely slow speed and rotation but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas the relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body, such as Mars or the Moon. In this paper, an algorithm called the EASI algorithm (Estimation of Attitude using Sun sensor and Inclinometer) is presented to estimate the absolute attitude using a MEMS-type sun sensor and an inclinometer only. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm has also been presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.
The relative and absolute speed of radiographic screen - film systems
International Nuclear Information System (INIS)
Lee, In Ja; Huh, Joon
1993-01-01
Recently, a large number of new screen-film systems have become available for use in diagnostic radiology. These new screens are made of materials generally known as rare-earth phosphors, which have high x-ray absorption and high x-ray to light conversion efficiency compared to calcium tungstate phosphors. The major advantage of these new systems is the reduction of patient exposure due to their high speed or high sensitivity. However, a system with excessively high speed can result in a significant degradation of radiographic image quality. Therefore, the speed is an important parameter for users of these systems. Our aim in this study was to determine accurately and precisely the absolute and relative speeds of both new and conventional screen-film systems. We determined the absolute speed under BRH phantom beam quality conditions, and the relative speeds were measured by a split-screen technique under BRH and ANSI phantom beam quality conditions. The absolute and relative speeds were determined for 8 kinds of screen and 4 kinds of film in the regular system, and 7 kinds of screen and 7 kinds of film in the ortho system. In this study we found that New Rx and T-MAT G have the highest film speeds, and that the green system's standard deviation of relative speed is larger than that of the blue system. We also found that there was no relationship between the absolute speed and the relative speed in either the ortho or the regular system.
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes
Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.
2018-05-01
A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications beyond those possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
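In pulse-echo operation the total round-trip phase is φ = 2πf·(2d/v), so an absolute (unwrapped) phase measurement gives thickness directly. The sketch below inverts that relation; the frequency, the 5 mm specimen, and the nominal borosilicate velocity are illustrative handbook-style assumptions, not the paper's calibration values, and the instrument's job is precisely to track the total φ without 2π ambiguity.

```python
import math

def thickness_from_phase(phase_rad, frequency_hz, velocity_m_s):
    # Pulse-echo round trip: phi = 2*pi*f*(2d/v)  =>  d = phi*v / (4*pi*f)
    return phase_rad * velocity_m_s / (4.0 * math.pi * frequency_hz)

f = 5.0e6        # tone-burst frequency [Hz] (assumed)
v = 5640.0       # nominal longitudinal velocity in borosilicate [m/s] (assumed)
d_true = 5.0e-3  # 5 mm specimen (illustrative)

phi = 4.0 * math.pi * f * d_true / v        # total round-trip phase
d_est = thickness_from_phase(phi, f, v)

# The quoted 0.00038 rad phase resolution maps to a thickness resolution:
d_res = thickness_from_phase(3.8e-4, f, v)
```

At these assumed values the 0.00038 rad resolution corresponds to roughly 34 nm of path length, illustrating why phase tracking resolves far finer changes than time-of-flight edge detection at comparable signal-to-noise ratio.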
Errors in abdominal computed tomography
International Nuclear Information System (INIS)
Stephens, S.; Marting, I.; Dixon, A.K.
1989-01-01
Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CCs used in the design of a SFCL can be determined.
Perceiving pitch absolutely: Comparing absolute and relative pitch possessors in a pitch memory task
Directory of Open Access Journals (Sweden)
Schlaug Gottfried
2009-08-01
Full Text Available Abstract Background The perceptual-cognitive mechanisms and neural correlates of Absolute Pitch (AP) are not fully understood. The aim of this fMRI study was to examine the neural network underlying AP using a pitch memory experiment and contrasting two groups of musicians with each other, those that have AP and those that do not. Results We found a common activation pattern for both groups that included the superior temporal gyrus (STG) extending into the adjacent superior temporal sulcus (STS), the inferior parietal lobule (IPL) extending into the adjacent intraparietal sulcus (IPS), the posterior part of the inferior frontal gyrus (IFG), the pre-supplementary motor area (pre-SMA), and superior lateral cerebellar regions. Significant between-group differences were seen in the left STS during the early encoding phase of the pitch memory task (more activation in AP musicians) and in the right superior parietal lobule (SPL)/intraparietal sulcus (IPS) during the early perceptual phase (ITP 0–3) and later working memory/multimodal encoding phase of the pitch memory task (more activation in non-AP musicians). Non-significant between-group trends were seen in the posterior IFG (more in AP musicians) and the IPL (more anterior activations in the non-AP group and more posterior activations in the AP group). Conclusion Since the increased activation of the left STS in AP musicians was observed during the early perceptual encoding phase and since the STS has been shown to be involved in categorization tasks, its activation might suggest that AP musicians involve categorization regions in tonal tasks. The increased activation of the right SPL/IPS in non-AP musicians indicates either an increased use of regions that are part of a tonal working memory (WM) network, or the use of a multimodal encoding strategy such as the utilization of a visual-spatial mapping scheme (i.e., imagining notes on a staff or using a spatial coding for their relative pitch height for pitch
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as setting specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify part of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards on patient health care and some measures and recommendations to minimize or to eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors have been classified according to the laboratory phases and according to their implication on patient health. Data obtained out of 1,600 testing procedures revealed that the total number of encountered errors is 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication on patients' health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were consistent with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that
Absolute marine gravimetry with matter-wave interferometry.
Bidel, Y; Zahzam, N; Blanchard, C; Bonnin, A; Cadoret, M; Bresson, A; Rouxel, D; Lequentrec-Lalancette, M F
2018-02-12
Measuring gravity from an aircraft or a ship is essential in geodesy, geophysics, mineral and hydrocarbon exploration, and navigation. Today, only relative sensors are available for onboard gravimetry. This is a major drawback because of the calibration and drift estimation procedures, which lead to important operational constraints. Atom interferometry is a promising technology to obtain an onboard absolute gravimeter. But, despite the high performance obtained under static conditions, no precise measurements had been reported under dynamic conditions. Here, we present absolute gravity measurements from a ship with a sensor based on atom interferometry. Despite rough sea conditions, we obtained precision below 10⁻⁵ m s⁻². The atom gravimeter was also compared with a commercial spring gravimeter and showed better performance. This demonstration opens the way to the next generation of inertial sensors (accelerometer, gyroscope) based on atom interferometry, which should provide high-precision absolute measurements from a moving platform.
The systematic error of temperature noise correlation measurement method and self-calibration
International Nuclear Information System (INIS)
Tian Hong; Tong Yunxian
1993-04-01
The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.
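Temperature-noise time-correlation velocimetry estimates the transit time between two sensors as the lag that maximizes the cross-correlation of their noise records; velocity is then sensor spacing divided by that transit time. The sketch below demonstrates the estimator on synthetic noise; the sensor spacing, sample rate, and true delay are all invented, and a real system would also face the systematic errors the abstract analyzes.

```python
import random

def transit_time_by_correlation(upstream, downstream, dt):
    """Return the lag (in seconds) maximizing the cross-correlation of two
    noise records; with sensor spacing L, the flow velocity is L / lag."""
    n = len(upstream)
    best_lag, best_c = 0, float("-inf")
    for lag in range(n // 2):
        c = sum(upstream[i] * downstream[i + lag] for i in range(n - lag))
        if c > best_c:
            best_lag, best_c = lag, c
    return best_lag * dt

random.seed(0)
dt = 1e-3                  # sample interval [s] (illustrative)
delay = 25                 # true transit time: 25 samples
src = [random.gauss(0.0, 1.0) for _ in range(525)]
upstream = src[delay:]     # 500 samples of the advected temperature noise
downstream = src[:500]     # same noise arriving 25 samples later
tau = transit_time_by_correlation(upstream, downstream, dt)
velocity = 0.10 / tau      # assumed sensor spacing L = 0.10 m
```

The method is "absolute" in the sense the abstract emphasizes: the velocity follows from geometry and a time measurement alone, with no sensor-specific gain calibration.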
Measurement Model Specification Error in LISREL Structural Equation Models.
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
Application of Joint Error Maximal Mutual Compensation to hexapod robots
DEFF Research Database (Denmark)
Veryha, Yauheni; Petersen, Henrik Gordon
2008-01-01
A good practice to ensure high-positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC for positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...
Error Immune Logic for Low-Power Probabilistic Computing
Directory of Open Access Journals (Sweden)
Bo Marr
2010-01-01
design for the maximum amount of energy savings per a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.
Directory of Open Access Journals (Sweden)
Junge Zhang
2012-08-01
This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis of the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.
Dopamine reward prediction error coding.
Schultz, Wolfram
2016-03-01
Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
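The core computation described here, a prediction error driving learning, can be sketched with a simple Rescorla-Wagner-style update (our illustration, not code from the paper):

```python
# Rescorla-Wagner-style sketch: the prediction error is the difference between
# received and predicted reward; learning moves the prediction toward the reward.
def update_prediction(predicted, received, learning_rate=0.1):
    error = received - predicted          # positive: more reward than predicted
    return error, predicted + learning_rate * error

prediction = 0.0
for _ in range(100):                      # repeated pairings with a reward of 1.0
    error, prediction = update_prediction(prediction, received=1.0)
# Once the reward is fully predicted, the error settles near zero (baseline).
```

After many pairings the prediction converges on the reward and the error vanishes, mirroring the baseline activity described for fully predicted rewards.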
Directory of Open Access Journals (Sweden)
Ghamar Fadavi
2016-02-01
data (24 days for each year and 2 days for each month) were used for the different interpolation methods. Using difference measures, viz. Root Mean Square Error (RMSE), Mean Bias Error (MBE), Mean Absolute Error (MAE) and Correlation Coefficient (r), the performance and accuracy of each model were tested to select the best method. Results and Discussion: The normality of the data was assessed using the Kolmogorov-Smirnov test at the ninety-five percent (95%) level of significance in Minitab software. The results show that the distribution of daily maximum temperature data had no significant difference from the normal distribution for either year. The inverse distance weighting method was used to estimate daily maximum temperature; for this purpose, the root mean square error (RMSE) for different values of the power (1 to 5) and number of stations (5, 10, 15 and 20) was calculated. According to the minimum RMSE, a power of 2 with 15 stations in 2007, and a power of 1 with 5 stations in 1992, were obtained as the optimum power and number of stations. The results also show that in the regression equation the correlation coefficient was more than 0.8 for most of the days. The regression coefficients of elevation (h) and latitude (y) were almost always negative for all months and the regression coefficient of longitude (x) was positive, showing that temperature decreases with increasing elevation and increases with increasing longitude. The results revealed that for the Kriging method the Gaussian model had the best semivariogram for 2007, with the spherical and exponential models next in order, respectively. In the year 1992, the spherical and Gaussian models had better semivariograms than the others. Elevation was the best variable for improving the Co-kriging method as auxiliary data, such that the correlation coefficient between temperature and elevation was more than 0.5 for all days. The results also show that for the Co-kriging method the spherical model had the best semivariogram and
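The difference measures named above can be written as plain functions (an assumed illustration; the study's own computations are not shown):

```python
import math

# Plain-function versions of the four difference measures (assumed illustration).
def rmse(obs, est):
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def mbe(obs, est):
    return sum(e - o for o, e in zip(obs, est)) / len(obs)   # mean bias (est - obs)

def mae(obs, est):
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def corr(obs, est):
    n = len(obs)
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    se = math.sqrt(sum((e - me) ** 2 for e in est))
    return cov / (so * se)

obs, est = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]   # hypothetical values
```

A method is preferred when RMSE and MAE are small, MBE is near zero, and r is near one.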
Absolute and Relative Socioeconomic Health Inequalities across Age Groups.
van Zon, Sander K R; Bültmann, Ute; Mendes de Leon, Carlos F; Reijneveld, Sijmen A
2015-01-01
The magnitude of socioeconomic health inequalities differs across age groups. It is less clear whether socioeconomic health inequalities differ across age groups by other factors that are known to affect the relation between socioeconomic position and health, like the indicator of socioeconomic position, the health outcome, gender, and whether socioeconomic health inequalities are measured in absolute or in relative terms. The aim is to investigate whether absolute and relative socioeconomic health inequalities differ across age groups by indicator of socioeconomic position, health outcome and gender. The study sample was derived from the baseline measurement of the LifeLines Cohort Study and consisted of 95,432 participants. Socioeconomic position was measured as educational level and household income. Physical and mental health were measured with the RAND-36. Age was divided into eleven 5-year age groups. Absolute inequalities were examined by comparing means. Relative inequalities were examined by comparing Gini-coefficients. Analyses were performed for both health outcomes by both educational level and household income. Analyses were performed for all age groups, and stratified by gender. Absolute and relative socioeconomic health inequalities differed across age groups by indicator of socioeconomic position, health outcome, and gender. Absolute inequalities were most pronounced for mental health by household income. They were larger in younger than older age groups. Relative inequalities were most pronounced for physical health by educational level. Gini-coefficients were largest in young age groups and smallest in older age groups. Absolute and relative socioeconomic health inequalities differed cross-sectionally across age groups by indicator of socioeconomic position, health outcome and gender. Researchers should critically consider the implications of choosing a specific age group, in addition to the indicator of socioeconomic position and health outcome
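As a sketch of the relative-inequality measure used here, a common mean-difference form of the Gini coefficient (our assumed illustration, not the study's code) is:

```python
# Mean-difference form of the Gini coefficient over a list of incomes (or health
# scores); 0 means perfect equality, values near 1 mean extreme concentration.
def gini(values):
    values = sorted(values)
    n, total = len(values), sum(values)
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with sorted x and i = 1..n
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(values)) / (n * total)

assert gini([1, 1, 1, 1]) == 0.0      # perfect equality
```

Comparing Gini coefficients between groups, as the study does, contrasts relative dispersion independent of the mean level compared in the absolute analysis.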
Statistical errors in Monte Carlo estimates of systematic errors
Roe, Byron P.
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
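The unisim/multisim distinction can be illustrated with a linear toy model (assumptions ours: a noise-free observable depending linearly on k systematic parameters, each with unit standard deviation):

```python
import random
random.seed(0)

# Toy model: the observable shifts linearly with each systematic parameter;
# "slopes" are the sensitivities (values ours, for illustration).
slopes = [1.0, 0.5, 2.0, 1.5, 0.8]
true_var = sum(s ** 2 for s in slopes)       # exact total systematic variance

# Unisim: one MC run per parameter, each shifted by one standard deviation.
unisim_var = sum((s * 1.0) ** 2 for s in slopes)

# Multisim: every run draws all parameters from their (normal) distributions;
# the mean squared shift estimates the same total variance.
n_runs = 20000
shifts = (sum(s * random.gauss(0.0, 1.0) for s in slopes) for _ in range(n_runs))
multisim_var = sum(d ** 2 for d in shifts) / n_runs
```

In this noise-free linear toy both estimators recover the same total variance; the note's comparison concerns which converges faster once MC statistical error is added.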
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
Architecture design for soft errors
Mukherjee, Shubu
2008-01-01
This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.
Total Synthesis and Absolute Configuration of the Marine Norditerpenoid Xestenone
Directory of Open Access Journals (Sweden)
Hiroaki Miyaoka
2009-11-01
Xestenone is a marine norditerpenoid found in the northeastern Pacific sponge Xestospongia vanilla. The relative configuration of C-3 and C-7 in xestenone was determined by NOESY spectral analysis; however, the relative configuration of C-12 and the absolute configuration of this compound were not determined. The authors have now achieved the total synthesis of xestenone using their previously developed one-pot synthesis of cyclopentane derivatives, employing allyl phenyl sulfone and an epoxy iodide, as a key step. The relative and absolute configurations of xestenone were thus successfully determined by this synthesis.
Absolute transition probabilities for 559 strong lines of neutral cerium
Energy Technology Data Exchange (ETDEWEB)
Curry, J J, E-mail: jjcurry@nist.go [National Institute of Standards and Technology, Gaithersburg, MD 20899-8422 (United States)
2009-07-07
Absolute radiative transition probabilities are reported for 559 strong lines of neutral cerium covering the wavelength range 340-880 nm. These transition probabilities are obtained by scaling published relative line intensities (Meggers et al 1975 Tables of Spectral Line Intensities (National Bureau of Standards Monograph 145)) with a smaller set of published absolute transition probabilities (Bisson et al 1991 J. Opt. Soc. Am. B 8 1545). All 559 new values are for lines for which transition probabilities have not previously been available. The estimated relative random uncertainty of the new data is ±35% for nearly all lines.
Strongly nonlinear theory of rapid solidification near absolute stability
Kowal, Katarzyna N.; Altieri, Anthony L.; Davis, Stephen H.
2017-10-01
We investigate the nonlinear evolution of the morphological deformation of a solid-liquid interface of a binary melt under rapid solidification conditions near two absolute stability limits. The first of these involves the complete stabilization of the system to cellular instabilities as a result of large enough surface energy. We derive nonlinear evolution equations in several limits in this scenario and investigate the effect of interfacial disequilibrium on the nonlinear deformations that arise. In contrast to the morphological stability problem in equilibrium, in which only cellular instabilities appear and only one absolute stability boundary exists, in disequilibrium the system is prone to oscillatory instabilities and a second absolute stability boundary involving attachment kinetics arises. Large enough attachment kinetics stabilize the oscillatory instabilities. We derive a nonlinear evolution equation to describe the nonlinear development of the solid-liquid interface near this oscillatory absolute stability limit. We find that strong asymmetries develop with time. For uniform oscillations, the evolution equation for the interface reduces to the simple form f'' + (βf')² + f = 0, where β is the disequilibrium parameter. Lastly, we investigate a distinguished limit near both absolute stability limits in which the system is prone to both cellular and oscillatory instabilities and derive a nonlinear evolution equation that captures the nonlinear deformations in this limit. Common to all these scenarios is the emergence of larger asymmetries in the resulting shapes of the solid-liquid interface with greater departures from equilibrium and larger morphological numbers. The disturbances additionally sharpen near the oscillatory absolute stability boundary, where the interface becomes deep-rooted. The oscillations are time-periodic only for small-enough initial amplitudes and their frequency depends on a single combination of physical parameters, including the
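The uniform-oscillation equation f'' + (βf')² + f = 0 can be integrated numerically; the sketch below (ours, not from the paper) uses a basic RK4 stepper with a small initial amplitude, for which the abstract says the oscillations remain time-periodic:

```python
# RK4 integration of f'' + (beta * f')**2 + f = 0 (our numerical sketch).
def step(f, fp, beta, dt):
    def acc(f_, fp_):                     # f'' from the evolution equation
        return -(beta * fp_) ** 2 - f_
    k1f, k1p = fp, acc(f, fp)
    k2f, k2p = fp + 0.5 * dt * k1p, acc(f + 0.5 * dt * k1f, fp + 0.5 * dt * k1p)
    k3f, k3p = fp + 0.5 * dt * k2p, acc(f + 0.5 * dt * k2f, fp + 0.5 * dt * k2p)
    k4f, k4p = fp + dt * k3p, acc(f + dt * k3f, fp + dt * k3p)
    return (f + dt * (k1f + 2 * k2f + 2 * k3f + k4f) / 6,
            fp + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6)

f, fp, beta, dt = 0.1, 0.0, 0.5, 0.01     # small initial amplitude (values ours)
for _ in range(10000):
    f, fp = step(f, fp, beta, dt)
# For this small amplitude the oscillation stays bounded over many periods.
```

The (βf')² term breaks the up-down symmetry of the oscillation, consistent with the asymmetries the abstract describes.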
A note on unique solvability of the absolute value equation
Directory of Open Access Journals (Sweden)
Taher Lotfi
2014-05-01
It is proved that by applying sufficient regularity conditions to the interval matrix $[A-|B|,A+|B|]$, one can obtain a new unique solvability condition for the absolute value equation $Ax+B|x|=b$, since regularity of interval matrices implies unique solvability of their corresponding absolute value equations. This condition is formulated in terms of positive definiteness of a certain point matrix. The special case $B=-I$ is also verified as an application.
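As an illustration of the equation in question (not the paper's method), a simple fixed-point iteration solves a small absolute value equation when A sufficiently dominates B:

```python
# Fixed-point iteration x <- A^{-1}(b - B|x|) for A x + B|x| = b with 2x2
# matrices (our illustration); it converges when ||A^{-1} B|| < 1.
def solve_ave(A, B, b, iters=200):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    x = [0.0, 0.0]
    for _ in range(iters):
        ax = [abs(x[0]), abs(x[1])]
        r = [b[0] - (B[0][0] * ax[0] + B[0][1] * ax[1]),
             b[1] - (B[1][0] * ax[0] + B[1][1] * ax[1])]
        x = [(a22 * r[0] - a12 * r[1]) / det,      # x = A^{-1} r (Cramer's rule)
             (-a21 * r[0] + a11 * r[1]) / det]
    return x

# 4*x1 + |x1| = 3 and 4*x2 - |x2| = 3 have the unique solution (0.6, 1.0).
x = solve_ave(A=[[4.0, 0.0], [0.0, 4.0]], B=[[1.0, 0.0], [0.0, -1.0]], b=[3.0, 3.0])
```

The contraction condition used here is a simpler sufficient criterion than the paper's interval-regularity condition, shown only to make the equation concrete.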
Absolute decay parametric instability of high-temperature plasma
International Nuclear Information System (INIS)
Zozulya, A.A.; Silin, V.P.; Tikhonchuk, V.T.
1986-01-01
A new absolute decay parametric instability having a wide spatial localization region is shown to be possible near the critical plasma density. Its excitation is conditioned by the distributed feedback of counter-propagating Langmuir waves occurring during the parametric decay of the incident and reflected pump wave components. In a hot plasma with a temperature of the order of a kiloelectronvolt, its threshold is lower than that of the known convective decay parametric instability. The minimum absolute instability threshold is shown to be realized under conditions of spatial parametric resonance of higher orders.
Absolute analytical prediction of photonic crystal guided mode resonance wavelengths
DEFF Research Database (Denmark)
Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron
2014-01-01
numerically with methods such as rigorous coupled wave analysis. Here it is demonstrated how the absolute resonance wavelengths of such structures can be predicted by analytically modeling them as slab waveguides in which the propagation constant is determined by a phase matching condition. The model...... is experimentally verified to be capable of predicting the absolute resonance wavelengths to an accuracy of within 0.75 nm, as well as resonance wavelength shifts due to changes in cladding index within an accuracy of 0.45 nm across the visible wavelength regime in the case where material dispersion is taken...
The bolometric, infrared and visual absolute magnitudes of Mira variables
International Nuclear Information System (INIS)
Robertson, B.S.C.; Feast, M.W.
1981-01-01
Statistical parallaxes, as well as stars with individually known distances are used to derive bolometric and infrared absolute magnitudes of Mira (Me) variables. The derived bolometric magnitudes are in the mean about 0.75 mag fainter than recent estimates. The problem of determining the pulsation constant is discussed. Miras with periods greater than 150 days probably pulsate in the first overtone. Those of shorter periods are anomalous and may be fundamental pulsators. It is shown that the absolute visual magnitudes at mean light of Miras with individually determined distances are consistent with values derived by Clayton and Feast from statistical parallaxes. (author)
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
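For concreteness, here is a sketch (ours, with an assumed parameterization) of the classical maximum likelihood estimators for the power function distribution, together with a small Monte Carlo check of bias and mean square error in the style the abstract describes:

```python
import math, random
random.seed(1)

# Assumed parameterization: density f(x) = c * x**(c-1) / t**c on (0, t).
def mle(sample):
    t_hat = max(sample)                               # MLE of the scale t
    c_hat = len(sample) / sum(math.log(t_hat / x) for x in sample)
    return c_hat, t_hat

# Monte Carlo check of bias and mean square error of the shape estimator.
c_true, t_true, n, reps = 2.0, 1.0, 50, 2000
errs = []
for _ in range(reps):
    # Inverse-CDF sampling: F(x) = (x/t)**c, so x = t * u**(1/c).
    sample = [t_true * random.random() ** (1.0 / c_true) for _ in range(n)]
    c_hat, _ = mle(sample)
    errs.append(c_hat - c_true)
bias = sum(errs) / reps
mse = sum(e * e for e in errs) / reps     # a small positive bias is expected
```

Modified estimators of the kind the paper studies aim to reduce exactly this bias and mean square error; the traditional MLE above serves as the baseline.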
Spatial data modelling and maximum entropy theory
Czech Academy of Sciences Publication Activity Database
Klimešová, Dana; Ocelíková, E.
2005-01-01
Roč. 51, č. 2 (2005), s. 80-83 ISSN 0139-570X Institutional research plan: CEZ:AV0Z10750506 Keywords : spatial data classification * distribution function * error distribution Subject RIV: BD - Theory of Information
Identifying Error in AUV Communication
National Research Council Canada - National Science Library
Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B
2006-01-01
Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...
Human Errors in Decision Making
Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan
2005-01-01
The aim of this paper was to identify human errors in the decision making process. The study was focused on a research question: what could be the human errors, as potential causes of decision failure, in the evaluation of alternatives in the process of decision making. Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the evaluation-of-alternatives step. The results o...
Finding beam focus errors automatically
International Nuclear Information System (INIS)
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.
1987-01-01
An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure for finding the correction factors using COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.
Heuristic errors in clinical reasoning.
Rylander, Melanie; Guerrasio, Jeannette
2016-08-01
Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed among third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.
Data Analysis & Statistical Methods for Command File Errors
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
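The core idea, regressing error counts on candidate drivers and asking how much of the variability is explained, can be sketched with ordinary least squares on hypothetical data (names and numbers are ours, not the mission's):

```python
# Ordinary least squares of error counts on a single workload driver, with the
# fraction of variability explained (R^2); all data values are hypothetical.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

workload = [10.0, 20.0, 30.0, 40.0, 50.0]   # hypothetical workload scores
errors = [1.0, 2.0, 2.0, 4.0, 5.0]          # hypothetical error counts per period
slope, intercept, r2 = fit_line(workload, errors)
```

The actual analysis uses multiple predictors (files radiated, commands, blocks, novelty), but the single-driver case shows how R² quantifies explained variability.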
A Hybrid Unequal Error Protection / Unequal Error Resilience ...
African Journals Online (AJOL)
The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Huber, Felix; Eltschka, Christopher; Siewert, Jens; Gühne, Otfried
2018-04-01
A pure multipartite quantum state is called absolutely maximally entangled (AME), if all reductions obtained by tracing out at least half of its parties are maximally mixed. Maximal entanglement is then present across every bipartition. The existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correction and the shadow inequalities, we obtain new bounds on the existence of AME states in dimensions larger than two. To complete the treatment on the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation. Finally, we consider AME states whose subsystems have different local dimensions, and present an example for a 2×3×3×3 system that shows maximal entanglement across every bipartition.
Error studies for SNS Linac. Part 1: Transverse errors
International Nuclear Information System (INIS)
Crandall, K.R.
1998-01-01
The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).
Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.
Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian
2017-05-06
The refractive index of a lens varies for different wavelengths of light, and thus the same incident light with different wavelengths has different outgoing light. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
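The four-step phase-shifting step described above admits a compact sketch. Assuming fringe intensities of the textbook form I_k = A + B·cos(φ + kπ/2), the wrapped phase follows from an arctangent of intensity differences; this is the standard algorithm only, not the authors' full pipeline with optimum fringe number selection and unwrapping:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each.

    With I_k = A + B*cos(phi + k*pi/2):
        I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi)
    so phi = atan2(I3 - I1, I0 - I2).
    """
    return np.arctan2(np.asarray(i3, float) - i1,
                      np.asarray(i0, float) - i2)

# Synthetic fringes: recover a known phase map on a small grid.
phi_true = np.linspace(-1.0, 1.0, 5)   # true phase values (radians)
frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
print(np.allclose(phi_est, phi_true))   # True
```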
Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul
2018-01-01
Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNeIa observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, “Absolute Color Calibration Experiment for Standard Stars”, is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35‑1.7μm bandpass. To achieve this goal ACCESS (1) observes HST/ Calspec stars (2) above the atmosphere to eliminate telluric spectral contaminants (e.g. OH) (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in-flight using a on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.
Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi
2013-07-01
Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. Results should be analyzed and
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research into their causes, design errors remain prevalent in construction projects. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.
Measured and modelled absolute gravity changes in Greenland
DEFF Research Database (Denmark)
Nielsen, Jens Emil; Forsberg, René; Strykowski, Gabriel
2014-01-01
in Greenland. The result is compared with the initial measurements of absolute gravity (AG) change at selected Greenland Network (GNET) sites. We find that observations are highly influenced by the direct attraction from the ice and ocean. This is especially evident in the measurements conducted at the GNET...
Absolute luminosity measurements with the LHCb detector at the LHC
Aaij, R; Adinolfi, M; Adrover, C; Affolder, A; Ajaltouni, Z; Albrecht, J; Alessio, F; Alexander, M; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amhis, Y; Anderson, J; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Arrabito, L; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Bachmann, S; Back, J J; Bailey, D S; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Bates, A; Bauer, C; Bauer, Th; Bay, A; Bediaga, I; Belous, K; Belyaev, I; Ben-Haim, E; Benayoun, M; Bencivenni, G; Benson, S; Benton, J; Bernet, R; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blanks, C; Blouw, J; Blusk, S; Bobrov, A; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Bowcock, T J V; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Brisbane, S; Britsch, M; Britton, T; Brook, N H; Brown, H; Büchler-Germann, A; Burducea, I; Bursche, A; Buytaert, J; Cadeddu, S; Caicedo Carvajal, J M; Callot, O; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carson, L; Carvalho Akiba, K; Casse, G; Cattaneo, M; Charles, M; Charpentier, Ph; Chiapolini, N; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coca, C; Coco, V; Cogan, J; Collins, P; Constantin, F; Conti, G; Contu, A; Cook, A; Coombes, M; Corti, G; Cowan, G A; Currie, R; D'Almagne, B; D'Ambrosio, C; David, P; De Bonis, I; De Capua, S; De Cian, M; De Lorenzi, F; De Miranda, J M; De Paula, L; De Simone, P; Decamp, D; Deckenhoff, M; Degaudenzi, H; Deissenroth, M; Del Buono, L; Deplano, C; Deschamps, O; Dettori, F; Dickens, J; Dijkstra, H; Diniz Batista, P; Donleavy, S; Dordei, F; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dupertuis, F; Dzhelyadin, R; Eames, C; Easo, S; Egede, U; Egorychev, V; Eidelman, S; van Eijk, D; Eisele, F; Eisenhardt, S; Ekelhof, R; Eklund, L; Elsasser, Ch; d'Enterria, D G; Esperante Pereira, D; Estève, L; Falabella, A; 
Fanchini, E; Färber, C; Fardell, G; Farinelli, C; Farry, S; Fave, V; Fernandez Albor, V; Ferro-Luzzi, M; Filippov, S; Fitzpatrick, C; Fontana, M; Fontanelli, F; Forty, R; Frank, M; Frei, C; Frosini, M; Furcas, S; Gallas Torreira, A; Galli, D; Gandelman, M; Gandini, P; Gao, Y; Garnier, J-C; Garofoli, J; Garra Tico, J; Garrido, L; Gaspar, C; Gauvin, N; Gersabeck, M; Gershon, T; Ghez, Ph; Gibson, V; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Gregson, S; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Haefeli, G; Haen, C; Haines, S C; Hampson, T; Hansmann-Menzemer, S; Harji, R; Harnew, N; Harrison, J; Harrison, P F; He, J; Heijne, V; Hennessy, K; Henrard, P; Hernando Morata, J A; van Herwijnen, E; Hicks, E; Hofmann, W; Holubyev, K; Hopchev, P; Hulsbergen, W; Hunt, P; Huse, T; Huston, R S; Hutchcroft, D; Hynds, D; Iakovenko, V; Ilten, P; Imong, J; Jacobsson, R; Jaeger, A; Jahjah Hussein, M; Jans, E; Jansen, F; Jaton, P; Jean-Marie, B; Jing, F; John, M; Johnson, D; Jones, C R; Jost, B; Kandybei, S; Karacson, M; Karbach, T M; Keaveney, J; Kerzel, U; Ketel, T; Keune, A; Khanji, B; Kim, Y M; Knecht, M; Koblitz, S; Koppenburg, P; Kozlinskiy, A; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kruzelecki, K; Kucharczyk, M; Kukulak, S; Kumar, R; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Latham, T; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Leroy, O; Lesiak, T; Li, L; Li Gioi, L; Lieng, M; Liles, M; Lindner, R; Linn, C; Liu, B; Liu, G; Lopes, J H; Lopez Asamar, E; Lopez-March, N; Luisier, J; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Magnin, J; Malde, S; Mamunur, R M D; Manca, G; Mancinelli, G; Mangiafave, N; Marconi, U; Märki, R; Marks, J; Martellotti, G; Martens, A; Martin, L; Martín Sánchez, A; Martinez 
Santos, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Matveev, M; Maurice, E; Maynard, B; Mazurov, A; McGregor, G; McNulty, R; Mclean, C; Meissner, M; Merk, M; Merkel, J; Messi, R; Miglioranzi, S; Milanes, D A; Minard, M-N; Monteil, S; Moran, D; Morawski, P; Mountain, R; Mous, I; Muheim, F; Müller, K; Muresan, R; Muryn, B; Musy, M; Mylroie-Smith, J; Naik, P; Nakada, T; Nandakumar, R; Nardulli, J; Nasteva, I; Nedos, M; Needham, M; Neufeld, N; Nguyen-Mau, C; Nicol, M; Nies, S; Niess, V; Nikitin, N; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Orlandea, M; Otalora Goicochea, J M; Owen, P; Pal, B; Palacios, J; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Paterson, S K; Patrick, G N; Patrignani, C; Pavel-Nicorescu, C; Pazos Alvarez, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perego, D L; Perez Trigo, E; Pérez-Calero Yzquierdo, A; Perret, P; Perrin-Terrin, M; Pessina, G; Petrella, A; Petrolini, A; Pie Valls, B; Pietrzyk, B; Pilar, T; Pinci, D; Plackett, R; Playfer, S; Plo Casasus, M; Polok, G; Poluektov, A; Polycarpo, E; Popov, D; Popovici, B; Potterat, C; Powell, A; du Pree, T; Prisciandaro, J; Pugatch, V; Puig Navarro, A; Qian, W; Rademacker, J H; Rakotomiaramanana, B; Rangel, M S; Raniuk, I; Raven, G; Redford, S; Reid, M M; dos Reis, A C; Ricciardi, S; Rinnert, K; Roa Romero, D A; Robbe, P; Rodrigues, E; Rodrigues, F; Rodriguez Perez, P; Rogers, G J; Roiser, S; Romanovsky, V; Rouvinet, J; Ruf, T; Ruiz, H; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salzmann, C; Sannino, M; Santacesaria, R; Santamarina Rios, C; Santinelli, R; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Savrie, M; Savrina, D; Schaack, P; Schiller, M; Schleich, S; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M -H; Schwemmer, R; Sciubba, A; Seco, M; Semennikov, A; Senderowska, K; Sepp, I; Serra, N; Serrano, J; 
Seyfert, P; Shao, B; Shapkin, M; Shapoval, I; Shatalov, P; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, O; Shevchenko, V; Shires, A; Silva Coutinho, R; Skottowe, H P; Skwarnicki, T; Smith, A C; Smith, N A; Sobczak, K; Soler, F J P; Solomin, A; Soomro, F; Souza De Paula, B; Spaan, B; Sparkes, A; Spradlin, P; Stagni, F; Stahl, S; Steinkamp, O; Stoica, S; Stone, S; Storaci, B; Straticiuc, M; Straumann, U; Styles, N; Subbiah, V K; Swientek, S; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Teodorescu, E; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Topp-Joergensen, S; Tran, M T; Tsaregorodtsev, A; Tuning, N; Ubeda Garcia, M; Ukleja, A; Urquijo, P; Uwer, U; Vagnoni, V; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; Velthuis, J J; Veltri, M; Vervink, K; Viaud, B; Videau, I; Vilasis-Cardona, X; Visniakov, J; Vollhardt, A; Voong, D; Vorobyev, A; Voss, H; Wacker, K; Wandernoth, S; Wang, J; Ward, D R; Webber, A D; Websdale, D; Whitehead, M; Wiedner, D; Wiggers, L; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wishahi, J; Witek, M; Witzeling, W; Wotton, S A; Wyllie, K; Xie, Y; Xing, F; Yang, Z; Young, R; Yushchenko, O; Zavertyaev, M; Zhang, F; Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhong, L; Zverev, E; Zvyagin, A
2012-01-01
Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC with a centre-of-mass energy of 7 TeV. In addition to the classic "van der Meer scan" method a novel technique has been developed which makes use of direct imaging of the individual beams using beam-gas and beam-beam interactions. This beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. The results of the two methods have comparable precision and are in good agreement. Combining the two methods, an overall precision of 3.5% in the absolute lumi...
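For Gaussian beams, the classic van der Meer result reduces to a one-line formula relating the luminosity to the bunch populations and the effective overlap widths extracted from the horizontal and vertical scans. The sketch below uses illustrative, made-up LHC-like numbers, not LHCb's measured values:

```python
import math

def vdm_luminosity(n1, n2, f_rev, sigma_x, sigma_y):
    """Single-bunch-pair luminosity from van der Meer scan results,
    assuming Gaussian beams:
        L = N1 * N2 * f_rev / (2 * pi * Sigma_x * Sigma_y)
    where Sigma_x, Sigma_y are the effective overlap widths obtained
    from the scans (SI units: widths in metres, L in m^-2 s^-1)."""
    return n1 * n2 * f_rev / (2.0 * math.pi * sigma_x * sigma_y)

# Illustrative numbers: 1e11 protons per bunch, 11245 Hz revolution
# frequency, 60 micrometre effective overlap widths.
L = vdm_luminosity(1e11, 1e11, 11245.0, 60e-6, 60e-6)
print(f"{L:.2e} m^-2 s^-1")   # roughly 5e33 m^-2 s^-1
```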
Absolute configurations of zingiberenols isolated from ginger (Zingiber officinale) rhizomes
The sesquiterpene alcohol zingiberenol, or 1,10-bisaboladien-3-ol, was isolated some time ago from ginger, Zingiber officinale, rhizomes, but its absolute configuration had not been determined. With three chiral centers present in the molecule, zingiberenol can exist in eight stereoisomeric forms. ...
Fabricating the absolute fake: America in contemporary pop culture
Kooijman, J.
2008-01-01
Our world is dominated by American pop culture. Fabricating the Absolute Fake examines the dynamics of Americanization through contemporary films, television programs, and pop stars that reflect on the question of what it means to be American in a global pop culture.
Confirmation of the absolute configuration of (−)-aurantioclavine
Behenna, Douglas C.
2011-04-01
We confirm our previous assignment of the absolute configuration of (-)-aurantioclavine as 7R by crystallographically characterizing an advanced 3-bromoindole intermediate reported in our previous synthesis. This analysis also provides additional support for our model of enantioinduction in the palladium(II)-catalyzed oxidative kinetic resolution of secondary alcohols. © 2010 Elsevier Ltd. All rights reserved.
Multipliers for the Absolute Euler Summability of Fourier Series
Indian Academy of Sciences (India)
In this paper, the author has investigated necessary and sufficient conditions for the absolute Euler summability of the Fourier series with multipliers. These conditions are weaker than those obtained earlier by some workers. It is further shown that the multipliers are best possible in certain sense.
The Absolute and the Relative Dimensions of Constitutional Rights
Czech Academy of Sciences Publication Activity Database
Alexy, Robert
2017-01-01
Roč. 37, č. 1 (2017), s. 31-47 ISSN 0143-6503 Keywords : constitutional rights * judicial review * proportionality Subject RIV: AG - Legal Sciences OBOR OECD: Law Impact factor: 1.242, year: 2016 https://academic.oup.com/ojls/article/37/1/31/2669583/The-Absolute-and-the-Relative-Dimensions-of
Europe's Other Poverty Measures: Absolute Thresholds Underlying Social Assistance
Bavier, Richard
2009-01-01
The first thing many learn about international poverty measurement is that European nations apply a "relative" poverty threshold and that they also do a better job of reducing poverty. Unlike the European model, the "absolute" U.S. poverty threshold does not increase in real value when the nation's standard of living rises,…
Absolute configuration of some dinorlabdanes from the copaiba oil
Energy Technology Data Exchange (ETDEWEB)
Romero, Adriano L.; Baptistela, Lucia H.B.; Imamura, Paulo M. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Inst. de Quimica], e-mail: imam@iqm.unicamp.br
2009-07-01
A novel ent-dinorlabdane (−)-13(R)-14,15-dinorlabd-8(17)-ene-3,13-diol was isolated from commercial copaiba oil along with two known dinorlabdanes. The absolute configuration of these dinorditerpenes was established for the first time through synthesis starting from known (−)-3-hydroxycopalic acid, which was also isolated from the same oleoresin. (author)
Absolute Risk Aversion and the Returns to Education.
Brunello, Giorgio
2002-01-01
Uses 1995 Italian household income and wealth survey to measure individual absolute risk aversion of 1,583 married Italian male household heads. Uses this measure as an instrument for attained education in a standard-log earnings equation. Finds that the IV estimate of the marginal return to schooling is much higher than the ordinary least squares…
Absolute differential yield of parametric x-ray radiation
International Nuclear Information System (INIS)
Shchagin, A.V.; Pristupa, V.I.; Khizhnyak, N.A.
1993-01-01
The results of measurements of the absolute differential yield of parametric X-ray radiation (PXR) in a thin single crystal are presented for the first time. It has been established that the experimental results are in good agreement with theoretical calculations based on the kinematical theory. The influence of the density effect on PXR properties is discussed. (author). 19 refs., 7 figs
Absolute measurements of chlorine Cl+ cation single photoionization cross section
Hernandez, E. M.; Juarez, A. M.; Kilcoyne, A. L. D.; Aguilar, A.; Hernandez, L.; Antillon, A.; Macaluso, D.; Morales-Mori, A.; Gonzalez-Magana, O.; Hanstorp, D.; Covington, A. M.; Davis, V.; Calabrese, D.; Hinojosa, G.
The photoionization of Cl+ leading to Cl2+ was measured in the photon energy range of 19.5-28.0 eV. A spectrum with a photon energy resolution of 15 meV normalized to absolute cross-section measurements is presented. The measurements were carried out by merging a Cl+ ion beam with a photon beam of
Comments on the theory of absolute and convective instabilities
International Nuclear Information System (INIS)
Oscarsson, T.E.; Roennmark, K.
1986-10-01
The theory of absolute and convective instabilities is discussed and we argue that the basis of the theory is questionable, since it describes the linear development of instabilities by their behaviour in the time asymptotic limit. In order to make sensible predictions on the linear development of instabilities, the problem should be studied on the finite time scale implied by the linear approximation. (authors)
Global Absolute Poverty: Behind the Veil of Dollars
Moatsos, M.
2017-01-01
The widely applied “dollar-a-day” methodology identifies global absolute poverty as declining precipitously since the early 80’s throughout the developing world. The methodological underpinnings of the “dollar-a-day” approach have been questioned in terms of adequately representing equivalent
Global Absolute Poverty: Behind the Veil of Dollars
Moatsos, M.
2015-01-01
The global absolute poverty rates of the World Bank demonstrate a continued decline of poverty in developing countries between 1983 and 2012. However, the methodology applied to derive these results has received extensive criticism by scholars for requiring the application of PPP exchange rates and
Population-based absolute risk estimation with survey data
Kovalchik, Stephanie A.; Pfeiffer, Ruth M.
2013-01-01
Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
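The cause-specific absolute risk has a closed form in the simplest instance of the piecewise exponential model: a single piece (constant hazards) and one competing event. The hazard rates below are hypothetical, chosen only for illustration:

```python
import math

def absolute_risk(lam_event, lam_competing, t):
    """Absolute risk of the event of interest by time t in the presence
    of one competing risk, assuming constant cause-specific hazards
    (a one-piece exponential model):
        AR(t) = lam1 / (lam1 + lam2) * (1 - exp(-(lam1 + lam2) * t))
    """
    total = lam_event + lam_competing
    return lam_event / total * (1.0 - math.exp(-total * t))

# Hypothetical rates: cardiovascular-death hazard 0.01/yr, competing
# cancer-death hazard 0.02/yr; 10-year absolute risk of CV death.
ar = absolute_risk(0.01, 0.02, 10.0)
print(round(ar, 4))   # 0.0864
```

Note how the competing hazard enters the survival term: raising it lowers the absolute risk of the event of interest even though its own cause-specific hazard is unchanged.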
Absolute parametric instability in a nonuniform plane plasma
Indian Academy of Sciences (India)
The paper reports an analysis of the effect of spatial plasma nonuniformity on absolute parametric instability (API) of electrostatic waves in magnetized plane waveguides subjected to an intense high-frequency (HF) electric field using the separation method. In this case the effect of strong static magnetic field is considered.
Absolute configuration and antiprotozoal activity of minquartynoic acid
DEFF Research Database (Denmark)
Rasmussen, H B; Christensen, Søren Brøgger; Kvist, L P
2000-01-01
Minquartynoic acid (1) was isolated as an antimalarial and antileishmanial constituent of the Peruvian tree Minquartia guianensis and its absolute configuration at C-17 established to be (+)-S through conversion to the known (+)-(S)-17-hydroxystearic acid (2) and confirmed using Mosher's method....
Relative versus Absolute Stimulus Control in the Temporal Bisection Task
de Carvalho, Marilia Pinhiero; Machado, Armando
2012-01-01
When subjects learn to associate two sample durations with two comparison keys, do they learn to associate the keys with the short and long samples (relational hypothesis), or with the specific sample durations (absolute hypothesis)? We exposed 16 pigeons to an ABA design in which phases A and B corresponded to tasks using samples of 1 s and 4 s,…
Absolute intensity calibration for ECE measurements on EAST
International Nuclear Information System (INIS)
Liu Yong; Liu Xiang; Zhao Hailin
2014-01-01
In this proceeding, the results of the in-situ absolute intensity calibration for ECE measurements on EAST are presented. A 32-channel heterodyne radiometer system and a Michelson interferometer on EAST have been calibrated independently, and preliminary results from plasma operation indicate a good agreement between the electron temperature profiles obtained with different systems. (author)
Absolute dissipative drift-wave instabilities in tokamaks
International Nuclear Information System (INIS)
Chen, L.; Chance, M.S.; Cheng, C.Z.
1979-07-01
Contrary to previous theoretical predictions, it is shown that the dissipative drift-wave instabilities are absolute in tokamak plasmas. The existence of unstable eigenmodes is shown to be associated with a new eigenmode branch induced by the finite toroidal couplings
On quantum harmonic oscillator being subjected to absolute ...
Indian Academy of Sciences (India)
On quantum harmonic oscillator being subjected to absolute potential state. SWAMI NITYAYOGANANDA. Ramakrishna Mission Ashrama, R.K. Beach, Visakhapatnam 530 003, India. E-mail: nityayogananda@gmail.com. MS received 1 May 2015; accepted 6 May 2016; published online 3 December 2016.
Energy Technology Data Exchange (ETDEWEB)
Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-01
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
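The point can be made concrete with a small numerical sketch: a factor-of-two over-prediction and a factor-of-two under-prediction give different MAPE values but identical median symmetric accuracy. Natural logarithms are assumed in the accuracy-ratio definitions below:

```python
import numpy as np

def mape(obs, pred):
    """Mean absolute percentage error (%)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((pred - obs) / obs))

def median_log_accuracy_ratio(obs, pred):
    """Median of ln(pred/obs): a robust, symmetric measure of bias
    (0 = unbiased, > 0 = over-prediction)."""
    return np.median(np.log(np.asarray(pred, float) / obs))

def median_symmetric_accuracy(obs, pred):
    """100 * (exp(median|ln(pred/obs)|) - 1): a percentage-like accuracy
    measure that treats over- and under-prediction symmetrically."""
    q = np.abs(np.log(np.asarray(pred, float) / obs))
    return 100.0 * (np.exp(np.median(q)) - 1.0)

obs   = [10.0, 10.0]
over  = [20.0, 20.0]   # factor-2 over-prediction
under = [5.0, 5.0]     # factor-2 under-prediction

# MAPE penalizes the two errors differently (100% vs 50%) ...
print(mape(obs, over), mape(obs, under))
# ... while median symmetric accuracy gives 100% for both.
print(median_symmetric_accuracy(obs, over),
      median_symmetric_accuracy(obs, under))
```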
Dual Processing and Diagnostic Errors
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
Barriers to medical error reporting
Directory of Open Access Journals (Sweden)
Jalal Poorolajal
2015-01-01
Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc degree (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
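The quantity being corrected is the nonparametric AUC, i.e., the probability that a randomly chosen case has a higher biomarker value than a randomly chosen control (the Mann-Whitney estimator). The sketch below computes only this naive, error-uncorrected AUC; the paper's measurement-error correction is not reproduced here:

```python
import itertools

def empirical_auc(cases, controls):
    """Nonparametric AUC: P(case marker > control marker),
    counting ties as 1/2 (the Mann-Whitney estimator)."""
    score = 0.0
    for x, y in itertools.product(cases, controls):
        if x > y:
            score += 1.0
        elif x == y:
            score += 0.5
    return score / (len(cases) * len(controls))

# Toy biomarker values: cases tend to exceed controls, with one tie.
print(empirical_auc([3, 4, 5], [1, 2, 3]))   # 8.5/9 ~ 0.944
```

Measurement error in the biomarker attenuates this quantity toward 0.5, which is why the uncorrected AUC understates a marker's true discriminating ability.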
Cognitive aspect of diagnostic errors.
Phua, Dong Haur; Tan, Nigel C K
2013-01-01
Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.
Absolute Gravity Datum in the Age of Cold Atom Gravimeters
Childers, V. A.; Eckl, M. C.
2014-12-01
The international gravity datum is defined today by the International Gravity Standardization Net of 1971 (IGSN-71). The data supporting this network was measured in the 1950s and 60s using pendulum and spring-based gravimeter ties (plus some new ballistic absolute meters) to replace the prior protocol of referencing all gravity values to the earlier Potsdam value. Since this time, gravimeter technology has advanced significantly with the development and refinement of the FG-5 (the current standard of the industry) and again with the soon-to-be-available cold atom interferometric absolute gravimeters. This latest development is anticipated to provide improvement in the range of two orders of magnitude as compared to the measurement accuracy of technology utilized to develop IGSN-71. In this presentation, we will explore how the IGSN-71 might best be "modernized" given today's requirements and available instruments and resources. The National Geodetic Survey (NGS), along with other relevant US Government agencies, is concerned about establishing gravity control to establish and maintain high order geodetic networks as part of the nation's essential infrastructure. The need to modernize the nation's geodetic infrastructure was highlighted in "Precise Geodetic Infrastructure, National Requirements for a Shared Resource" National Academy of Science, 2010. The NGS mission, as dictated by Congress, is to establish and maintain the National Spatial Reference System, which includes gravity measurements. Absolute gravimeters measure the total gravity field directly and do not involve ties to other measurements. Periodic "intercomparisons" of multiple absolute gravimeters at reference gravity sites are used to constrain the behavior of the instruments to ensure that each would yield reasonably similar measurements of the same location (i.e. yield a sufficiently consistent datum when measured in disparate locales). New atomic interferometric gravimeters promise a significant
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
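MXLKID itself is LRLTRAN code for LLNL hardware, but the underlying idea of maximizing a likelihood function over unknown parameters of a dynamic system can be sketched briefly. The decay model, noise level, and grid search below are illustrative assumptions, not details of the program:

```python
import numpy as np

# Hypothetical example: identify the decay rate `a` of x(t) = exp(-a*t)
# from noisy samples by maximizing the Gaussian log-likelihood.
rng = np.random.default_rng(0)
a_true, sigma = 0.7, 0.05
t = np.linspace(0.0, 5.0, 50)
y = np.exp(-a_true * t) + rng.normal(0.0, sigma, t.size)

def log_likelihood(a):
    # With Gaussian noise, log L is (up to a constant) the negative
    # sum of squared residuals scaled by the noise variance.
    residuals = y - np.exp(-a * t)
    return -0.5 * np.sum(residuals**2) / sigma**2

# Crude grid search over the parameter; a practical identifier would use
# gradient-based numerical optimization instead.
grid = np.linspace(0.1, 2.0, 2000)
a_hat = grid[np.argmax([log_likelihood(a) for a in grid])]
```

For Gaussian measurement noise, maximizing the LF reduces to nonlinear least squares, which is why the sketch only needs the residuals.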
Performance of Different Light Sources for the Absolute Calibration of Radiation Thermometers
Martín, M. J.; Mantilla, J. M.; del Campo, D.; Hernanz, M. L.; Pons, A.; Campos, J.
2017-09-01
The evolving mise en pratique for the definition of the kelvin (MeP-K) [1, 2] will, in its forthcoming edition, encourage the realization and dissemination of the thermodynamic temperature either directly (primary thermometry) or indirectly (relative primary thermometry) via fixed points with assigned reference thermodynamic temperatures. In recent years, the Centro Español de Metrología (CEM), in collaboration with the Instituto de Óptica of Consejo Superior de Investigaciones Científicas (IO-CSIC), has developed several setups for absolute calibration of standard radiation thermometers using the radiance method, to allow CEM the direct dissemination of the thermodynamic temperature and the assignment of thermodynamic temperatures to several fixed points. Different calibration facilities based on a monochromator and/or a laser and an integrating sphere have been developed to calibrate CEM's standard radiation thermometers (KE-LP2 and KE-LP4) and filter radiometer (FIRA2). This system is based on the one described in [3] placed at IO-CSIC. Different light sources have been tried and tested for measuring absolute spectral radiance responsivity: a Xe-Hg 500 W lamp, an NKT SuperK-EXR20 supercontinuum laser, and a diode laser emitting at 647.3 nm with a typical maximum power of 120 mW. Their advantages and disadvantages, such as sensitivity to interference generated by the laser inside the filter and the flux stability of the radiant sources, have been studied. This paper describes the setups used, the uncertainty budgets and the results obtained for the absolute temperatures of Cu, Co-C, Pt-C and Re-C fixed points, measured with the three thermometers with central wavelengths around 650 nm.
DEFF Research Database (Denmark)
Yang, Yongheng; Koutroulis, Eftichios; Sangwongwanich, Ariya
2015-01-01
The Absolute Active Power Control (AAPC) strategy can effectively solve overloading issues by limiting the maximum possible PV power to a certain level (i.e., the power limitation), and also benefits inverter reliability. However, its feasibility is challenged by the energy loss compared to a conventional PV inverter operating only in the maximum power point tracking mode. An increase of the inverter lifetime and a reduction of the energy yield can alter the cost of energy, demanding an optimization of the power limitation. Therefore, aiming at minimizing the Levelized Cost of Energy (LCOE), the power limit is optimized for the AAPC strategy in this paper. In the presented case study, the minimum LCOE is achieved for the system when the power limit is optimized to a certain level of the designed maximum feed-in power (i.e., 3 kW). In addition, the proposed...
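The trade-off described above can be made concrete with a toy model: energy capture rises with the power limit while inverter lifetime falls, so their ratio (a stand-in for LCOE) has an interior minimum. All functional forms and constants here are hypothetical, chosen only to illustrate the optimization, and are not the paper's models:

```python
import numpy as np

# Toy AAPC trade-off: raising the power limit captures more PV energy but
# shortens inverter lifetime (thermal stress), so
# LCOE = lifetime cost / lifetime energy has an interior minimum.
limits = np.linspace(1.0, 5.0, 401)           # candidate power limits [kW]
p_rated = 5.0                                 # PV array rating [kW]

def annual_energy(limit):
    # Energy grows with the limit but saturates: clipping affects
    # only the rare hours near peak production.  [kWh/yr]
    return 8000.0 * (1.0 - np.exp(-1.5 * limit / p_rated))

def lifetime_years(limit):
    # Higher loading -> hotter inverter -> shorter life (illustrative).
    return 20.0 * np.exp(-1.2 * limit / p_rated)

cost = 2000.0                                 # inverter + install cost
lcoe = cost / (annual_energy(limits) * lifetime_years(limits))
best = limits[np.argmin(lcoe)]
```

With these made-up curves the optimum sits strictly inside the search range, mirroring the paper's finding that neither full curtailment nor pure MPPT operation minimizes the cost of energy.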
Lawless, Mary K.; Mathies, Richard A.
1992-06-01
Absolute resonance Raman cross sections are measured for Nile blue 690 perchlorate dissolved in ethylene glycol with excitation at 514, 531, and 568 nm. These values and the absorption spectrum are modeled using a time-dependent wave packet formalism. The excited-state equilibrium geometry changes are quantitated for 40 resonance Raman active modes, seven of which (590, 1141, 1351, 1429, 1492, 1544, and 1640 cm-1 ) carry 70% of the total resonance Raman intensity. This demonstrates that in addition to the prominent 590 and 1640 cm-1 modes, a large number of vibrational degrees of freedom are Franck-Condon coupled to the electronic transition. After exposure of the explicit vibrational progressions, the residual absorption linewidth is separated into its homogeneous [350 cm-1 half-width at half-maximum (HWHM)] and inhomogeneous (313 cm-1 HWHM) components through an analysis of the absolute Raman cross sections. The value of the electronic dephasing time derived from this study (25 fs) compares well to previously published results. These data should be valuable in multimode modeling of femtosecond experiments on Nile blue.
Theoretical Calculation of Absolute Radii of Atoms and Ions. Part 1. The Atomic Radii
Directory of Open Access Journals (Sweden)
Raka Biswas
2002-02-01
Full Text Available Abstract. A set of theoretical atomic radii corresponding to the principal maximum in the radial distribution function, 4πr²R², for the outermost orbital has been calculated for the ground state of 103 elements of the periodic table using Slater orbitals. The set of theoretical radii are found to reproduce the periodic law and Lothar Meyer's atomic volume curve, and to reproduce the expected vertical and horizontal trends of variation in atomic size in the periodic table. The d-block and f-block contractions are distinct in the calculated sizes. The computed sizes qualitatively correlate with the absolute size-dependent properties like ionization potentials and electronegativity of elements. The radii are used to calculate a number of size-dependent periodic physical properties of isolated atoms, viz., the diamagnetic part of the atomic susceptibility, atomic polarizability and the chemical hardness. The calculated global hardness and atomic polarizability of a number of atoms are found to be close to the available experimental values, and the profiles of the physical properties computed in terms of the theoretical atomic radii exhibit their inherent periodicity. A simple method of computing the absolute size of atoms has been explored and a large body of known material has been brought together to reveal how many different properties correlate with atomic size.
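The stated construction, locating the principal maximum of the radial distribution function 4πr²R² for a Slater orbital, is easy to check numerically. For R(r) ∝ r^(n-1) exp(-ζr) the maximum falls at r = n/ζ; the n and ζ values below are hypothetical, not fitted screening constants:

```python
import numpy as np

# For a Slater-type orbital R(r) = N * r**(n-1) * exp(-zeta*r), the radial
# distribution function 4*pi*r**2*R**2 is proportional to
# r**(2n) * exp(-2*zeta*r), which peaks where d/dr log = 2n/r - 2*zeta = 0,
# i.e. at r = n/zeta.  Verify numerically for illustrative n, zeta.
n, zeta = 3, 1.6                      # hypothetical outermost-orbital values
r = np.linspace(1e-6, 20.0, 200001)
radial_density = r**(2 * n) * np.exp(-2.0 * zeta * r)
r_max = r[np.argmax(radial_density)]
analytic = n / zeta                   # closed-form position of the maximum
```

The normalization constant N drops out of the argmax, which is why the sketch omits it.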
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Error and objectivity: cognitive illusions and qualitative research.
Paley, John
2005-07-01
Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which depends on N.
The maximum entropy method of moments and Bayesian probability theory
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
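A minimal sketch of the maximum entropy method of moments discussed above: on a grid, the maxent density matching a set of moment constraints has the exponential-family form p(x) ∝ exp(λ·T(x)), and the Lagrange multipliers λ can be found by gradient descent on the convex dual. With targets E[x] = 0 and E[x²] = 1 the solution is the standard normal; this toy setup is mine, not the paper's:

```python
import numpy as np

# Maximum entropy method of moments on a grid: find p(x) proportional to
# exp(l1*x + l2*x**2) matching E[x]=0, E[x**2]=1 by descending the dual
# log Z(lam) - lam . mu, whose gradient is E[T(x)] - mu.
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
mu = np.array([0.0, 1.0])             # target moments
lam = np.array([0.3, -0.2])           # starting Lagrange multipliers

for _ in range(5000):
    w = np.exp(lam[0] * x + lam[1] * x**2)
    p = w / (w.sum() * dx)            # normalized density on the grid
    moments = np.array([np.sum(p * x) * dx, np.sum(p * x**2) * dx])
    lam -= 0.05 * (moments - mu)      # dual gradient step
```

The converged multipliers recover λ₁ ≈ 0 and λ₂ ≈ -1/2, i.e. the standard normal, illustrating the exponential-family form that the paper then embeds in Bayesian probability theory.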
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
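The cancellation of systematic errors in time-difference imaging noted above can be demonstrated in a few lines: a fixed multiplicative channel error divides out of the raw/reference ratio. The gains and voltages below are hypothetical:

```python
import numpy as np

# Why time-difference EIT suppresses systematic errors: an unknown
# per-channel gain (a systematic, multiplicative error) cancels when raw
# data are divided by reference data.  Values are hypothetical.
rng = np.random.default_rng(1)
true_ref = np.full(16, 1.00)                 # true voltages, reference state
true_raw = np.linspace(0.90, 1.10, 16)       # true voltages after a change
gain = 1.0 + 0.05 * rng.standard_normal(16)  # systematic channel errors

measured_ref = gain * true_ref
measured_raw = gain * true_raw

relative = measured_raw / measured_ref       # the gains cancel
error = np.max(np.abs(relative - true_raw / true_ref))
```

No such cancellation is available in absolute EIT, which is why the authors single out a-EIT as the case demanding better systematic accuracy by design.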
International Nuclear Information System (INIS)
Vancura, J.; Kostroun, V.O.
1992-01-01
The absolute total and one and two electron transfer cross sections for Ar 8+ on Ar were measured as a function of projectile laboratory energy from 0.090 to 0.550 keV/amu. The effective one electron transfer cross section dominates above 0.32 keV/amu, while below this energy, the effective two electron transfer starts to become appreciable. The total cross section varies by a factor over the energy range explored. The overall error in the cross section measurement is estimated to be ± 15%
Logical error rate scaling of the toric code
International Nuclear Information System (INIS)
Watson, Fern H E; Barrett, Sean D
2014-01-01
To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
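The small-p counting argument described above is easiest to see in a much simpler code than the toric code: for a distance-d repetition code under majority voting, a logical error needs at least ⌈d/2⌉ physical flips, giving a p^⌈d/2⌉ power law. This is an analogue for intuition only, not the paper's toric-code analysis:

```python
from math import comb

# Low-error-rate scaling analogue: for a distance-d repetition code with
# majority-vote decoding, a logical error needs (d+1)//2 or more bit flips,
# so P_log ~ comb(d, (d+1)//2) * p**((d+1)//2) for small p.
def logical_error_rate(d, p):
    t = (d + 1) // 2                  # fewest flips that fool the decoder
    return sum(comb(d, k) * p**k * (1 - p)**(d - k) for k in range(t, d + 1))

p = 1e-3
rates = {d: logical_error_rate(d, p) for d in (3, 5, 7)}
```

Below threshold, increasing the distance suppresses the logical rate by further powers of p, which is the same qualitative behaviour the paper's small-p regime exhibits for the toric code with a matching decoder.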
Measuring worst-case errors in a robot workcell
International Nuclear Information System (INIS)
Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.
1997-10-01
Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
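As a sketch of how mp(T) can be extracted from a tapered Gutenberg-Richter model, one operational choice (an assumption here, not necessarily the authors' exact definition) is the magnitude whose expected number of exceedances in T years equals one. All parameter values are illustrative, not the fitted Cascadia values:

```python
import math

# Probable-maximum-magnitude sketch from a tapered Gutenberg-Richter (TGR)
# rate model.  Parameters are illustrative, not those fitted in the study.
def moment(m):                        # seismic moment [N*m] from magnitude
    return 10.0**(1.5 * m + 9.05)

def annual_rate(m, rate0=5.0, m0=5.0, beta=0.65, mc=9.0):
    # Expected events/yr with magnitude >= m (TGR survival function):
    # power law in moment, tapered by an exponential corner at mc.
    M, M0, Mc = moment(m), moment(m0), moment(mc)
    return rate0 * (M / M0)**(-beta) * math.exp((M0 - M) / Mc)

def mp(T, lo=5.0, hi=10.5):
    # Bisection for the magnitude with annual_rate(m) * T == 1.
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if annual_rate(mid) * T > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the TGR rate is monotonically decreasing in magnitude, mp(T) grows with the window T, which is the sense in which a "probable maximum" is well defined even though the absolute mx is not.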
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.
Human errors in NPP operations
International Nuclear Information System (INIS)
Sheng Jufang
1993-01-01
Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, working place factors, communication, and training practices are the primary root causes, while omission, transposition, and quantitative mistakes are the most frequent error modes. Recommendations for domestic research on human performance problems in NPPs are suggested
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an
The effects of a pilates-aerobic program on maximum exercise capacity of adult women
Directory of Open Access Journals (Sweden)
Milena Mikalački
Full Text Available ABSTRACT Introduction: Physical exercise such as the Pilates method offers clinical benefits on the aging process. Likewise, physiologic parameters may be improved through aerobic exercise. Methods: In order to compare the differences of a Pilates-aerobic intervention program on physiologic parameters such as the maximum heart rate (HRmax), relative maximal oxygen consumption (relative VO2max) and absolute (absolute VO2max), maximum heart rate during maximal oxygen consumption (VO2max-HRmax), maximum minute volume (VE) and forced vital capacity (FVC), a total of 64 adult women (active group = 48.1 ± 6.7 years; control group = 47.2 ± 7.4 years) participated in the study. The physiological parameters, the maximal speed and total duration of the test were measured by maximum exercise capacity testing through the Bruce protocol. The HRmax was calculated by cardio-ergometric software. Pulmonary function tests, maximal speed and total time during the physical test were performed on a treadmill (Medisoft, model 870c). Likewise, the spirometry analyzed the impact on oxygen uptake parameters, including FVC and VE. Results: The VO2max (relative and absolute), VE (all, P<0.001), VO2max-HRmax (P<0.05) and maximal speed of the treadmill test (P<0.001) showed significant differences in the active group after the physical exercise intervention program. Conclusion: The present study indicates that Pilates exercises through a continuous training program might significantly improve the cardiovascular system. Hence, mixing strength and aerobic exercises into a training program is considered the optimal mechanism for healthy aging.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
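A minimal particle swarm optimizer of the kind used to tune SVM parameters can be sketched as follows. The quadratic "validation error surface" is a hypothetical stand-in for cross-validation error, and the NAPSO additions (natural selection and simulated annealing) are omitted; everything here is an illustrative assumption:

```python
import numpy as np

# Minimal PSO sketch of the kind used to tune SVM hyperparameters.  The
# hypothetical error surface over (log C, log gamma) has its minimum at
# (1.0, -2.0); a real tuner would evaluate cross-validation error instead.
rng = np.random.default_rng(2)

def validation_error(pos):            # stand-in for cross-validation error
    return np.sum((pos - np.array([1.0, -2.0]))**2, axis=-1)

n, dim = 20, 2
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), validation_error(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Inertia plus cognitive (pbest) and social (gbest) attraction terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = validation_error(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]
```

The paper's NAPSO variant perturbs this basic loop with natural selection and simulated-annealing acceptance to escape local optima, which matters on real, multimodal validation surfaces.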
Directory of Open Access Journals (Sweden)
Junguo Hu
Full Text Available Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.
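As a reference point for the comparison above, plain Ordinary Kriging reduces to solving one small linear system per prediction location. The covariance model, data values, and parameters below are hypothetical, and BME's soft-data machinery is deliberately absent:

```python
import numpy as np

# Minimal 1-D Ordinary Kriging (OK) sketch with an assumed exponential
# covariance model, the hard-data-only baseline BME is compared against.
def cov(h, sill=1.0, range_par=3.0):
    return sill * np.exp(-np.abs(h) / range_par)

xs = np.array([0.0, 2.0, 5.0, 9.0])          # sampled locations
zs = np.array([1.2, 0.8, 1.5, 0.9])          # hypothetical observations

def krige(x0):
    n = xs.size
    # OK system with unbiasedness constraint:
    # [[C, 1], [1^T, 0]] @ [w, mu] = [c0, 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xs[:, None] - xs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(x0 - xs)
    w = np.linalg.solve(A, b)[:n]
    return w @ zs
```

BME generalizes this by folding soft data such as soil temperature into the estimate; OK, as here, uses the hard measurements alone, which is why it needed more sampling points for comparable RMSE.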
Proposal for an absolute, atomic definition of mass
International Nuclear Information System (INIS)
Wignall, J.W.G.
1991-11-01
It is proposed that the mass of a particle be defined absolutely as its de Broglie frequency, measured via the mean de Broglie wavelength of the particle when it has a mean speed ⟨v⟩ and Lorentz factor γ; the masses of systems too large to have a measurable mean de Broglie wavelength are then to be derived by specifying the usual inertial and additive properties of mass. This definition avoids the use of an arbitrary macroscopic standard such as the prototype kilogram and, if present theory is correct, does not even require the choice of a specific particle as a mass standard. Suggestions are made as to how this absolute mass can be realized and measured at the macroscopic level and, finally, some comments are made on the effect of the new definition on the form of the equations of physics. 19 refs
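The definition rests on the de Broglie relation; with the standard relativistic momentum p = γmv, a measured mean wavelength fixes the mass (a sketch of the underlying algebra, not a quotation from the paper):

```latex
\lambda \;=\; \frac{h}{p} \;=\; \frac{h}{\gamma m v}
\qquad\Longrightarrow\qquad
m \;=\; \frac{h}{\gamma v \lambda}
```

Measuring λ, v, and γ thus determines m in terms of Planck's constant alone, with no macroscopic artifact.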
Remote ultrasound palpation for robotic interventions using absolute elastography.
Schneider, Caitlin; Baghani, Ali; Rohling, Robert; Salcudean, Septimiu
2012-01-01
Although robotic surgery has addressed many of the challenges presented by minimally invasive surgery, haptic feedback and the lack of knowledge of tissue stiffness is an unsolved problem. This paper presents a system for finding the absolute elastic properties of tissue using a freehand ultrasound scanning technique, which utilizes the da Vinci Surgical robot and a custom 2D ultrasound transducer for intraoperative use. An external exciter creates shear waves in the tissue, and a local frequency estimation method computes the shear modulus. Results are reported for both phantom and in vivo models. This system can be extended to any 6 degree-of-freedom tracking method and any 2D transducer to provide real-time absolute elastic properties of tissue.
Absolute limit on rotation of gravitationally bound stars
Glendenning, N. K.
1994-03-01
The authors seek an absolute limit on the rotational period for a neutron star as a function of its mass, based on the minimal constraints imposed by Einstein's theory of relativity, Le Chatelier's principle, causality, and a low-density equation of state; the effect of the uncertainties in these constraints on the result can be evaluated. This establishes a limiting curve in the mass-period plane below which no pulsar that is a neutron star can lie. For example, the minimum possible Kepler period, which is an absolute limit on rotation below which mass-shedding would occur, is 0.33 ms for a M = 1.442 solar mass neutron star (the mass of PSR1913+16). If the limit were found to be broken by any pulsar, it would signal that the confined hadronic phase of ordinary nucleons and nuclei is only metastable.
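For context, a widely quoted empirical estimate of the mass-shedding (Kepler) period from the neutron-star literature, not derived in this paper, has the form:

```latex
P_{\mathrm{K}} \;\approx\; 0.96\,\mathrm{ms}\,
\left(\frac{M}{M_{\odot}}\right)^{-1/2}
\left(\frac{R}{10\,\mathrm{km}}\right)^{3/2}
```

Here M and R are the mass and radius of the nonrotating configuration; the coefficient varies slightly between fits in the literature.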
Determination of absolute internal conversion coefficients using the SAGE spectrometer
Energy Technology Data Exchange (ETDEWEB)
Sorri, J., E-mail: juha.m.t.sorri@jyu.fi [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Greenlees, P.T.; Papadakis, P.; Konki, J. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Cox, D.M. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Department of Physics, University of Liverpool, Oxford Street, Liverpool L69 7ZE (United Kingdom); Auranen, K.; Partanen, J.; Sandzelius, M.; Pakarinen, J.; Rahkila, P.; Uusitalo, J. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Herzberg, R.-D. [Department of Physics, University of Liverpool, Oxford Street, Liverpool L69 7ZE (United Kingdom); Smallcombe, J.; Davies, P.J.; Barton, C.J.; Jenkins, D.G. [Department of Physics, University of York, Heslington, York YO10 5DD (United Kingdom)
2016-03-11
A non-reference-based method to determine internal conversion coefficients using the SAGE spectrometer is carried out for transitions in the nuclei ¹⁵⁴Sm, ¹⁵²Sm and ¹⁶⁶Yb. The Normalised-Peak-to-Gamma method is in general an efficient tool to extract internal conversion coefficients. However, in many cases the required well-known reference transitions are not available. The data analysis steps required to determine absolute internal conversion coefficients with the SAGE spectrometer are presented. In addition, several background suppression methods are introduced and an example of how ancillary detectors can be used to select specific reaction products is given. The results obtained for ground-state band E2 transitions show that the absolute internal conversion coefficients can be extracted using the methods described with a reasonable accuracy. In some cases of less intense transitions only an upper limit for the internal conversion coefficient could be given.
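An internal conversion coefficient is the ratio of conversion-electron to gamma-ray intensities for a transition; a minimal efficiency-corrected sketch (the counts and efficiencies are invented for illustration, and this is the generic definition rather than the SAGE analysis chain):

```python
def conversion_coefficient(n_e, eff_e, n_gamma, eff_gamma):
    """Internal conversion coefficient alpha = I_e / I_gamma,
    with raw peak counts corrected for detection efficiencies."""
    return (n_e / eff_e) / (n_gamma / eff_gamma)

# Hypothetical peak areas and detector efficiencies for one E2 transition
alpha = conversion_coefficient(n_e=1200, eff_e=0.05, n_gamma=90000, eff_gamma=0.30)
print(alpha)
```

In a non-reference-based measurement, the absolute efficiencies of the electron and gamma detectors must be known independently, which is exactly what the calibration steps in the paper provide.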
Absolute elastic cross sections for electron scattering from SF6
International Nuclear Information System (INIS)
Gulley, R.J.; Uhlmann, L.J.; Dedman, C.J.; Buckman, S.J.; Cho, H.; Trantham, K.W.
2000-01-01
Absolute differential cross sections for vibrationally elastic scattering of electrons from sulphur hexafluoride (SF6) have been measured at fixed angles of 60°, 90° and 120° over the energy range of 5 to 15 eV, and also at 11 fixed energies between 2.7 and 75 eV for scattering angles between 10° and 180°. These measurements employ the magnetic angle-changing technique of Read and Channing in combination with the relative flow technique to obtain absolute elastic scattering cross sections at backward angles (135° to 180°) for incident energies below 15 eV. The results reveal some substantial differences with several previous determinations and a reasonably good level of agreement with a recent close-coupling calculation.
ABSOLUTE AND COMPARATIVE SUSTAINABILITY OF FARMING ENTERPRISES IN BULGARIA
Directory of Open Access Journals (Sweden)
H. Bachev
2017-04-01
Evaluating the absolute and comparative sustainability of farming enterprises is among the most topical issues for researchers, farmers, investors, administrators, politicians, interest groups and the public at large. Nevertheless, in Bulgaria and most East European countries there are no comprehensive assessments of the sustainability level of farms of different juridical types. This article applies a holistic framework and assesses the absolute and comparative sustainability of major farming structures in Bulgaria - unregistered farms of Natural Persons, Sole Traders, Cooperatives, and Companies. First, the method of the study is outlined, and the overall characteristics of the surveyed farming enterprises are presented. After that, an assessment is made of the integral, governance, economic, social, and environmental sustainability of farming structures of different juridical types. Next, the structure of farming enterprises with different sustainability levels is analyzed. Finally, conclusions from the study and directions for further research and for improving sustainability assessments are suggested.
Absolute dating of the Aegean Late Bronze Age
International Nuclear Information System (INIS)
Warren, P.M.
1987-01-01
A recent argument for raising the absolute date of the beginning of the Aegean Late Bronze (LB) Age to about 1700 B.C. is critically examined. It is argued here that: (1) the alabaster lid from Knossos did have the stratigraphical context assigned to it by Evans, in all probability Middle Minoan IIIA, c. 1650 B.C.; (2) the attempt to date the alabastron found in an early Eighteenth Dynasty context at Aniba to Late Minoan IIIA:1 is open to objections; (3) radiocarbon dates from Aegean LB I contexts are too wide in their calibrated ranges and too inconsistent both within and between site sets to offer any reliable grounds at present for raising Aegean LB I absolute chronology to 1700 B.C. Other evidence, however, suggests this period began about 1600 B.C., i.e. some fifty years earlier than the conventional date of 1550 B.C. (author)
Limitations of absolute activity determination of I-125 sources
Energy Technology Data Exchange (ETDEWEB)
Pelled, O; German, U; Kol, R; Levinson, S; Weinstein, M; Laichter, Y [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev; Alphasy, Z [Ben-Gurion Univ. of the Negev, Beersheba (Israel)
1996-12-01
A method for absolute determination of the activity of an I-125 source, based on the counting rate values of the 27 keV photons and the coincidence photon peak, is given in the literature. It is based on the principle that if a radionuclide emits two photons in coincidence, a measurement of its disintegration rate in the photopeak and in the sum peak can determine its absolute activity. When using this method, the system calibration is simplified and parameters such as source geometry or source position relative to the detector have no significant influence. However, when the coincidence rate is very low, the application of this method is limited because of the statistics of the coincidence peak (authors).
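The sum-peak relation described above has a compact closed form for two photons emitted in coincidence; a minimal sketch (the rates and efficiencies are invented for illustration, and the formula is the standard sum-peak expression rather than anything specific to this paper):

```python
def sum_peak_activity(n1, n2, n12):
    """Absolute disintegration rate from the two photopeak rates (n1, n2)
    and the coincidence (sum-peak) rate n12, for two photons emitted in
    coincidence:  A = n1*n2/n12 + n1 + n2 + n12.
    The detection efficiencies cancel, so source geometry drops out."""
    return n1 * n2 / n12 + n1 + n2 + n12

# Consistency check with synthetic rates: A = 1e5 dis/s, efficiencies 0.2 and 0.3
A, e1, e2 = 1e5, 0.2, 0.3
n1 = A * e1 * (1 - e2)   # photon 1 alone in its photopeak
n2 = A * e2 * (1 - e1)   # photon 2 alone in its photopeak
n12 = A * e1 * e2        # both detected together: the sum peak
print(sum_peak_activity(n1, n2, n12))  # recovers A = 100000.0
```

The limitation noted in the abstract is visible in the formula: when n12 is very small, its relative statistical uncertainty dominates the n1*n2/n12 term and hence the activity estimate.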
Synesthesia and rhythm. The road to absolute cinema
Directory of Open Access Journals (Sweden)
Ricardo Roncero Palomar
2017-05-01
Absolute cinema, developed during the historical avant-garde, continued a long artistic tradition that linked musical with visual experience. With cinema as their medium of expression, these filmmakers were able to work with the moving image to develop concepts such as rhythm, using figures more complex than the colored spots that other devices of the time could create. This study starts with the texts on color published by Newton in 1704, and provides an overview of the artistic milestones linking image and sound that form the origins of absolute cinema. The connections and equivalences between visual and sound experience used by these filmmakers are also studied, in order to establish whether there was a continuous line from the origins of these studies or whether there was a rupture, with other, later investigations having a greater repercussion on their works.
Error field considerations for BPX
International Nuclear Information System (INIS)
LaHaye, R.J.
1992-01-01
Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed, along with possible correcting coils for reducing such field errors.
The uncorrected refractive error challenge
Directory of Open Access Journals (Sweden)
Kovin Naidoo
2016-11-01
Refractive error affects people of all ages, socio-economic statuses and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. Research has estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
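Regression quantiles are defined by minimizing the asymmetric "check" (pinball) loss; a minimal sketch of that loss function, shown here for intuition rather than as the paper's EM-type measurement-error correction:

```python
import numpy as np

def pinball_loss(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

# The tau-th sample quantile minimizes the mean pinball loss over a constant fit
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
losses = [pinball_loss(y - q, 0.9).mean() for q in grid]
best = grid[int(np.argmin(losses))]
# best is close to the 0.9 quantile of a standard normal (about 1.28)
```

When covariates enter with measurement error, the minimizer of this loss is biased, which is the problem the paper's joint estimating equations are designed to correct.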
Comprehensive Error Rate Testing (CERT)
U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...
Numerical optimization with computational errors
Zaslavski, Alexander J
2016-01-01
This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.
Error analysis of the phase-shifting technique when applied to shadow moiré
International Nuclear Information System (INIS)
Han, Changwoon; Han, Bongtae
2006-01-01
An exact solution for the intensity distribution of shadow moiré fringes produced by a broad spectrum light is presented. A mathematical study quantifies errors in fractional fringe orders determined by the phase-shifting technique, and its validity is corroborated experimentally. The errors vary cyclically as the distance between the reference grating and the specimen increases. The amplitude of the maximum error is approximately 0.017 fringe, which defines the theoretical limit of resolution enhancement offered by the phase-shifting technique.
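The phase-shifting technique analyzed in the paper recovers fractional fringe orders from a few intensity frames taken at known phase shifts; a minimal four-step sketch (the standard algorithm, not the paper's exact optical setup):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the fringe phase from four intensity samples taken at
    phase shifts 0, pi/2, pi, 3*pi/2 of the model
        I_k = A + B * cos(phi + delta_k),
    which gives phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: build the four frames from a known phase and recover it
A, B, phi = 1.0, 0.5, 0.7
frames = [A + B * np.cos(phi + d) for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
print(four_step_phase(*frames))  # approximately 0.7
```

The cyclic errors quantified in the paper arise because real shadow moiré intensity is not a pure cosine of the phase, so this idealized recovery is only approximately correct.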
Purely absolutely continuous spectrum for almost Mathieu operators
International Nuclear Information System (INIS)
Chulaevsky, V.; Delyon, F.
1989-01-01
Using a recent result of Sinai, the authors prove that the almost Mathieu operators acting on ℓ²(Z), (H_{α,λ}Ψ)(n) = Ψ(n + 1) + Ψ(n − 1) + λ cos(ωn + α)Ψ(n), have a purely absolutely continuous spectrum for almost all α provided that ω is a good irrational and λ is sufficiently small. Furthermore, the generalized eigenfunctions are quasiperiodic.
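The operator can be explored numerically by truncating it to a finite matrix; a sketch with illustrative parameters (a finite truncation only approximates the spectrum on ℓ²(Z)):

```python
import numpy as np

def almost_mathieu_matrix(n, lam, omega, alpha):
    """n-site truncation of the almost Mathieu operator on l^2(Z):
    (H psi)(k) = psi(k+1) + psi(k-1) + lam * cos(omega*k + alpha) * psi(k)."""
    k = np.arange(n)
    H = np.diag(lam * np.cos(omega * k + alpha))          # quasiperiodic potential
    H += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # hopping terms
    return H

# Spectrum of a 500-site truncation at small coupling, with a "good" irrational
golden = (np.sqrt(5) - 1) / 2
H = almost_mathieu_matrix(500, lam=0.1, omega=2 * np.pi * golden, alpha=0.0)
eigs = np.linalg.eigvalsh(H)
# by the Gershgorin bound, all eigenvalues lie in [-2 - lam, 2 + lam]
```

At small λ the eigenvalues fill the interval [-2, 2] almost uniformly, consistent with the absolutely continuous spectrum proved in the paper.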
Absolute calibration and beam background of the Squid Polarimeter
International Nuclear Information System (INIS)
Blaskiewicz, M.M.; Cameron, P.R.; Shea, T.J.
1996-01-01
The problem of beam background in Squid Polarimetry is not without residual benefits. The authors may deliberately generate beam background by gently kicking the beam at the spin tune frequency. This signal may be used to accomplish a simple and accurate absolute calibration of the polarimeter. The authors present details of beam background calculations and their application to polarimeter calibration, and suggest a simple proof-of-principle accelerator experiment
Results from a U.S. Absolute Gravity Survey,
1982-01-01
National Bureau of Standards. ... 1. Introduction. We have recently completed an absolute gravity survey at twelve sites in the ... Air Force Geophysics Laboratory (AFGL) and the Istituto di Metrologia "G. Colonnetti" (IMGC) [Marson and Alasia, 1978, 1980]. ... for absolute measurements of the earth's gravity, Metrologia, in press, 1982. ... Table 1. Gravity values transferred to the floor in gal (cm
Blastic plasmacytoid dendritic cell neoplasm with absolute monocytosis at presentation
Directory of Open Access Journals (Sweden)
Jaworski JM
2015-02-01
Joseph M Jaworski,1,2 Vanlila K Swami,1 Rebecca C Heintzelman,1 Carrie A Cusack,3 Christina L Chung,3 Jeremy Peck,3 Matthew Fanelli,3 Micheal Styler,4 Sanaa Rizk,4 J Steve Hou1 1Department of Pathology and Laboratory Medicine, Hahnemann University Hospital/Drexel University College of Medicine, Philadelphia, PA, USA; 2Department of Pathology, Mercy Fitzgerald Hospital, Darby, PA, USA; 3Department of Dermatology, Hahnemann University Hospital/Drexel University College of Medicine, Philadelphia, PA, USA; 4Department of Hematology/Oncology, Hahnemann University Hospital/Drexel University College of Medicine, Philadelphia, PA, USA Abstract: Blastic plasmacytoid dendritic cell neoplasm is an uncommon malignancy derived from precursors of plasmacytoid dendritic cells. Nearly all patients present initially with cutaneous manifestations, with many additionally having extracutaneous disease. While the initial response to chemotherapy is effective, relapse occurs in most, with a leukemic phase ultimately developing. The prognosis is dismal. While most of the clinical and pathologic features are well described, the association and possible prognostic significance between peripheral blood absolute monocytosis (>1.0 K/µL) and blastic plasmacytoid dendritic cell neoplasm have not been reported. We report a case of a 68-year-old man who presented with a rash for 4–5 months. On physical examination, there were multiple, dull-pink, indurated plaques on the trunk and extremities. Complete blood count revealed thrombocytopenia, absolute monocytosis of 1.7 K/µL, and a negative flow cytometry study. Biopsy of an abdominal lesion revealed typical features of blastic plasmacytoid dendritic cell neoplasm. Patients having both hematologic and nonhematologic malignancies have an increased incidence of absolute monocytosis. Recent studies examining Hodgkin and non-Hodgkin lymphoma patients have suggested that this is a negative prognostic factor. The association between
THE ABSOLUTE MAGNITUDES OF TYPE Ia SUPERNOVAE IN THE ULTRAVIOLET
International Nuclear Information System (INIS)
Brown, Peter J.; Roming, Peter W. A.; Ciardullo, Robin; Gronwall, Caryl; Hoversten, Erik A.; Pritchard, Tyler; Milne, Peter; Bufano, Filomena; Mazzali, Paolo; Elias-Rosa, Nancy; Filippenko, Alexei V.; Li Weidong; Foley, Ryan J.; Hicken, Malcolm; Kirshner, Robert P.; Gehrels, Neil; Holland, Stephen T.; Immler, Stefan; Phillips, Mark M.; Still, Martin
2010-01-01
We examine the absolute magnitudes and light-curve shapes of 14 nearby (redshift z = 0.004-0.027) Type Ia supernovae (SNe Ia) observed in the ultraviolet (UV) with the Swift Ultraviolet/Optical Telescope. Colors and absolute magnitudes are calculated using both a standard Milky Way extinction law and one for the Large Magellanic Cloud that has been modified by circumstellar scattering. We find very different behavior in the near-UV filters (uvw1_rc, covering ∼2600-3300 Å after removing optical light, and u, ∼3000-4000 Å) compared to a mid-UV filter (uvm2, ∼2000-2400 Å). The uvw1_rc - b colors show a scatter of ∼0.3 mag while uvm2 - b scatters by nearly 0.9 mag. Similarly, while the scatter in colors between neighboring filters is small in the optical and somewhat larger in the near-UV, the large scatter in the uvm2 - uvw1_rc colors implies significantly larger spectral variability below 2600 Å. We find that in the near-UV the absolute magnitudes at peak brightness of normal SNe Ia in our sample are correlated with the optical decay rate with a scatter of 0.4 mag, comparable to that found for the optical in our sample. However, in the mid-UV the scatter is larger, ∼1 mag, possibly indicating differences in metallicity. We find no strong correlation between either the UV light-curve shapes or the UV colors and the UV absolute magnitudes. With larger samples, the UV luminosity might be useful as an additional constraint to help determine distance, extinction, and metallicity in order to improve the utility of SNe Ia as standardized candles.